
Commit 45b2110

Updated Readme

1 parent c4cbab6 commit 45b2110

File tree

3 files changed: +131 -3 lines changed


.gitignore

Lines changed: 1 addition & 0 deletions
@@ -1 +1,2 @@
 .vagrant
+.yum/x86_64

README.md

Lines changed: 130 additions & 0 deletions
@@ -1 +1,131 @@
# mysql-auto-failover

This is a Vagrant environment that starts a highly available three-node MySQL cluster.

### Prerequisites

* VirtualBox
* Vagrant

### Start the environment

This will bring up four virtual servers:

* Node A (Initial Master)
* Node B (Slave)
* Node C (Slave)
* Controller (Manages Failover)

To start the servers, run:

```
vagrant up
```
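
To confirm that all four machines came up, you can check their state with Vagrant (the machine names are the same ones used with `vagrant ssh` throughout this tutorial):

```
vagrant status
```

Each of A, B, C and controller should be reported as `running (virtualbox)`.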

### Failover Tutorial

Connect to the Controller and check that the nodes are clustered:

```
vagrant ssh controller
sudo bash
cat log.txt
```

This should then report the following:

```
2015-08-12 07:39:27 AM INFO Getting health for master: 192.168.33.10:3306.
2015-08-12 07:39:27 AM INFO Health Status:
2015-08-12 07:39:27 AM INFO host: 192.168.33.10, port: 3306, role: MASTER, state: UP, gtid_mode: ON, health: OK
2015-08-12 07:39:27 AM INFO host: 192.168.33.20, port: 3306, role: SLAVE, state: UP, gtid_mode: ON, health: OK
2015-08-12 07:39:27 AM INFO host: 192.168.33.30, port: 3306, role: SLAVE, state: UP, gtid_mode: ON, health: OK
```

This shows that 192.168.33.10 is the current master, and that .20 and .30 are the slaves.
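
If you would rather watch cluster events as they happen (for example during the failover later in this tutorial), you can follow the same log instead of printing it once; press Ctrl-C to stop:

```
tail -f log.txt
```
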
Exit out of the controller virtual machine.

To prove that replication is working, let's add a database to node A (the current master):

```
vagrant ssh A
mysql -uroot -ppassword
CREATE DATABASE failovertest;
SHOW DATABASES;
```

This will return a result of:

```
+--------------------+
| Database           |
+--------------------+
| information_schema |
| failovertest       |
| mysql              |
| performance_schema |
+--------------------+
```

Now exit out of node A's SSH session ('exit' will close the mysql prompt) and connect to node B to check that it has automatically replicated the data:

```
vagrant ssh B
mysql -uroot -ppassword
SHOW DATABASES;
```

This will then return an identical query result:

```
+--------------------+
| Database           |
+--------------------+
| information_schema |
| failovertest       |
| mysql              |
| performance_schema |
+--------------------+
```

Now exit out of node B's SSH session.

(If you want, you can repeat this with node C to prove that the data has replicated to all slaves.)
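
For example, a quick non-interactive way to run the same check against node C (using the same credentials as above) is:

```
vagrant ssh C -c "mysql -uroot -ppassword -e 'SHOW DATABASES;'"
```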

Now, to trigger a failover, we will stop the mysql service on node A:

```
vagrant ssh A
sudo service mysql stop
```

This stops the mysql service on the master. If you exit node A and connect to the controller again, you will see the updated status:

```
vagrant ssh controller
sudo bash
cat log.txt
```

This should then report:

```
2015-08-12 07:45:54 AM INFO Master may be down. Waiting for 3 seconds.
2015-08-12 07:46:09 AM INFO Failed to reconnect to the master after 3 attemps.
2015-08-12 07:46:09 AM CRITICAL Master is confirmed to be down or unreachable.
2015-08-12 07:46:09 AM INFO Failover starting in 'auto' mode
2015-08-12 07:46:09 AM INFO Candidate slave 192.168.33.20:3306 will become the new master.
...
2015-08-12 07:46:32 AM INFO Getting health for master: 192.168.33.20:3306.
2015-08-12 07:46:32 AM INFO Health Status:
2015-08-12 07:46:32 AM INFO host: 192.168.33.20, port: 3306, role: MASTER, state: UP, gtid_mode: ON, health: OK
2015-08-12 07:46:32 AM INFO host: 192.168.33.30, port: 3306, role: SLAVE, state: UP, gtid_mode: ON, health: OK
```

The failover has promoted node B (192.168.33.20) to master. You can now repeat the data replication tasks by connecting to node B, making a change, and checking that it appears on node C.
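
For example (a minimal sketch using the same credentials as above; the database name `failovertest2` is arbitrary):

```
vagrant ssh B -c "mysql -uroot -ppassword -e 'CREATE DATABASE failovertest2;'"
vagrant ssh C -c "mysql -uroot -ppassword -e 'SHOW DATABASES;'"
```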

### Issues

* If the master goes down, it does not rejoin the cluster when it comes back up. I haven't explored this enough to understand why, or how to resolve it (one untested idea is sketched below).
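
Since the log shows `gtid_mode: ON`, one direction that might be worth exploring (untested here, and only a sketch) is to reintroduce the old master as a slave of the new master once its mysql service is back up. The replication account below is an assumption based on the credentials used elsewhere in this tutorial:

```
# Run on node A after `sudo service mysql start`, assuming node B (192.168.33.20)
# is now the master and that root/password is a valid replication account.
mysql -uroot -ppassword -e "
  CHANGE MASTER TO
    MASTER_HOST='192.168.33.20',
    MASTER_USER='root',
    MASTER_PASSWORD='password',
    MASTER_AUTO_POSITION=1;
  START SLAVE;"
```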

Vagrantfile

Lines changed: 0 additions & 3 deletions
@@ -4,9 +4,6 @@
 Vagrant.configure(2) do | global |
   # global.vm.box = "landregistry/centos"
   global.vm.box = "landregistry/centos"
-  if Vagrant.has_plugin?("vagrant-cachier")
-    global.cache.scope = :box
-  end
 
   global.vm.provision "shell", inline: "sed -i -e 's,keepcache=0,keepcache=1,g' /etc/yum.conf"
   #Reenable the line below if the .yum folder is empty
