The official MySQL documentation contains this sentence:
MySQL Proxy is currently an Alpha release and should not be used within production environments.
So...
We use HAProxy to do this job instead; the following setup is for reference only.
Environment
master 192.168.1.106 master1
slave1 192.168.1.107 master2 <---> master1 (master-master replication with master1)
slave2 192.168.1.110 slave2 ----> master1 (slave of master1)
slave3 192.168.1.111 slave3 ----> master1 (slave of master1)
slave4 192.168.1.112 slave4 ----> master2 (slave of master2)
monitor 192.168.1.200
192.168.1.105 eth1 write IP
192.168.1.113 eth2 read IP
Notes:
When master stops replicating, slave1 becomes the primary; HAProxy stops sending requests to master, slave2 and slave3, but slave1, slave2 and slave3 can still receive binlogs from master.
When slave1 stops replicating, master becomes the primary; HAProxy stops sending requests to slave1 and slave4, but master and slave4 can still receive binlogs from slave1.
When master and slave1 both stop replicating, both primaries switch to read-only mode and the database can no longer accept writes; HAProxy stops sending requests to slave2, slave3 and slave4 (split-brain).
When slave1 goes offline, master enters backup mode; HAProxy stops sending requests to slave1 and slave4.
When master goes offline, slave1 enters backup mode; HAProxy stops sending requests to master, slave2 and slave3.
When master and slave1 are both offline, the whole DB cluster stops working.
1. Master-slave replication setup (omitted)
2. Install xinetd and configure the mysqlchk services
vi /etc/xinetd.d/mysqlchk
- -- Configuration on the two masters:
- service mysqlchk-write
- {
- flags = REUSE
- socket_type = stream
- port = 9201
- wait = no
- user = root
- server = /opt/script/mysqlchk_status.sh
- log_on_failure += USERID
- disable = no
- only_from = 192.168.1.0/24 # restrict to the hosts that need to connect (security)
- per_source = UNLIMITED
- }
- service mysqlchk-read
- {
- flags = REUSE
- socket_type = stream
- port = 9202
- wait = no
- user = root
- server = /opt/script/mysqlchk_replication.sh
- log_on_failure += USERID
- disable = no
- only_from = 192.168.1.0/24 # restrict to the hosts that need to connect (security)
- per_source = UNLIMITED
- }
- -- On all slaves, only the replication check service is needed:
- service mysqlchk-read
- {
- flags = REUSE
- socket_type = stream
- port = 9202
- wait = no
- user = root
- server = /opt/script/mysqlchk_replication.sh
- log_on_failure += USERID
- disable = no
- only_from = 192.168.1.0/24 # restrict to the hosts that need to connect (security)
- per_source = UNLIMITED
- }
vi /etc/services
- -- Add on the two masters:
- mysqlchk-write 9201/tcp # MySQL status check
- mysqlchk-read 9202/tcp # MySQL replication check
- -- Add on all slaves:
- mysqlchk-read 9202/tcp # MySQL replication check
Restart xinetd:
- # /etc/init.d/xinetd stop
- # /etc/init.d/xinetd start
Check the listening ports to confirm:
- [root@master xinetd.d]# netstat -antup|grep xinetd
- tcp 0 0 0.0.0.0:9201 0.0.0.0:* LISTEN 3077/xinetd
- tcp 0 0 0.0.0.0:9202 0.0.0.0:* LISTEN 3077/xinetd
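The xinetd services above reference two check scripts that are not shown. Below is a minimal sketch of what `/opt/script/mysqlchk_replication.sh` might look like; the check account, password, and message texts are assumptions, not part of the original setup.

```shell
#!/bin/sh
# Hypothetical /opt/script/mysqlchk_replication.sh -- a sketch only.
MYSQL_USER="haproxy_check"   # assumed monitoring account
MYSQL_PASS="secret"

# xinetd connects our stdout to the HAProxy health check, which expects a
# plain HTTP response: 200 for healthy, 503 for broken.
http_reply() {
    body="$2"
    printf 'HTTP/1.1 %s\r\n' "$1"
    printf 'Content-Type: text/plain\r\n'
    printf 'Content-Length: %s\r\n' "${#body}"
    printf 'Connection: close\r\n'
    printf '\r\n%s\r\n' "$body"
}

# Replication is healthy only when both the IO and SQL threads are running.
status=$(mysql -u"$MYSQL_USER" -p"$MYSQL_PASS" -e 'SHOW SLAVE STATUS\G' 2>/dev/null)
io=$(printf '%s\n' "$status" | awk '/Slave_IO_Running:/ {print $2}')
sql=$(printf '%s\n' "$status" | awk '/Slave_SQL_Running:/ {print $2}')

if [ "$io" = "Yes" ] && [ "$sql" = "Yes" ]; then
    http_reply '200 OK' 'MySQL replication is running.'
else
    http_reply '503 Service Unavailable' 'MySQL replication is broken.'
fi
```

The status check (`mysqlchk_status.sh`, port 9201) would follow the same pattern but only run a trivial query such as `SELECT 1`, returning 200 whenever the server answers at all.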
3. Install HAProxy on the monitor host
- tar zxvf haproxy-1.4.23.tar.gz
- cd haproxy-1.4.23
- make TARGET=linux26 ARCH=x86_64
- make install
4. Create the HAProxy configuration file
vi /usr/local/haproxy-1.4.23/conf/haproxy-db.cfg
- # HAProxy configuration - haproxy-db.cfg
- global
- maxconn 4096
- daemon
- pidfile /usr/local/haproxy-1.4.23/haproxy.pid
- #debug
- #quiet
- #chroot /usr/share/haproxy
- defaults
- log global
- mode http
- #option httplog
- option dontlognull
- log 127.0.0.1 local0
- retries 3
- option redispatch
- maxconn 4096
- timeout connect 1000ms
- timeout client 50000ms
- timeout server 50000ms
- listen stats :8011
- balance
- mode http
- stats enable
- stats auth root:monitor
- ##
- ## FRONTEND ##
- ##
- # Load-balanced IPs for DB writes and reads
- #
- frontend db_write
- mode tcp
- bind 192.168.1.105:3306
- default_backend cluster_db_write
- frontend db_read
- mode tcp
- bind 192.168.1.113:3306
- default_backend cluster_db_read
- # Monitor DB server availability
- #
- frontend monitor_master
- #
- # set master_backup to 'up' or 'down'
- #
- bind 127.0.0.1:9301
- mode http
- #option nolinger
- acl no_repl_master nbsrv(master_replication) eq 0
- acl no_repl_slave1 nbsrv(slave1_replication) eq 0
- acl no_master nbsrv(master_status) eq 0
- acl no_slave1 nbsrv(slave1_status) eq 0
- monitor-uri /monitor
- monitor fail unless no_repl_master no_repl_slave1 no_slave1
- monitor fail if no_master no_slave1
- frontend monitor_slave1
- #
- # set slave1_backup to 'up' or 'down'
- #
- bind 127.0.0.1:9302
- mode http
- #option nolinger
- acl no_repl_master nbsrv(master_replication) eq 0
- acl no_repl_slave1 nbsrv(slave1_replication) eq 0
- acl no_master nbsrv(master_status) eq 0
- acl no_slave1 nbsrv(slave1_status) eq 0
- monitor-uri /monitor
- monitor fail unless no_repl_master no_repl_slave1 no_master
- monitor fail if no_master no_slave1
- frontend monitor_slave2
- #
- # set slave2 read-only slave to 'down'
- #
- bind 127.0.0.1:9303
- mode http
- #option nolinger
- acl no_repl_slave2 nbsrv(slave2_replication) eq 0
- acl no_repl_master nbsrv(master_replication) eq 0
- acl slave1 nbsrv(slave1_status) eq 1
- monitor-uri /monitor
- monitor fail if no_repl_slave2
- monitor fail if no_repl_master slave1
- frontend monitor_slave3
- #
- # set slave3 read-only slave to 'down'
- #
- bind 127.0.0.1:9304
- mode http
- #option nolinger
- acl no_repl_slave3 nbsrv(slave3_replication) eq 0
- acl no_repl_master nbsrv(master_replication) eq 0
- acl slave1 nbsrv(slave1_status) eq 1
- monitor-uri /monitor
- monitor fail if no_repl_slave3
- monitor fail if no_repl_master slave1
- frontend monitor_slave4
- #
- # set slave4 read-only slave to 'down'
- #
- bind 127.0.0.1:9305
- mode http
- #option nolinger
- acl no_repl_slave4 nbsrv(slave4_replication) eq 0
- acl no_repl_slave1 nbsrv(slave1_replication) eq 0
- acl master nbsrv(master_status) eq 1
- monitor-uri /monitor
- monitor fail if no_repl_slave4
- monitor fail if no_repl_slave1 master
- # Monitor for split-brain syndrome
- #
- frontend monitor_splitbrain
- #
- # set master_splitbrain and slave1_splitbrain to 'up'
- #
- bind 127.0.0.1:9300
- mode http
- #option nolinger
- acl no_repl01 nbsrv(master_replication) eq 0
- acl no_repl02 nbsrv(slave1_replication) eq 0
- acl master nbsrv(master_status) eq 1
- acl slave1 nbsrv(slave1_status) eq 1
- monitor-uri /monitor
- monitor fail unless no_repl01 no_repl02 master slave1
- ##
- ## BACKEND ##
- ##
- # Check every DB server replication status
- # - perform an http check on port 9202 (replication status)
- # - set to 'down' if response is '503 Service Unavailable'
- # - set to 'up' if response is '200 OK'
- #
- backend master_replication
- mode tcp
- balance roundrobin
- option tcpka
- option httpchk
- server master 192.168.1.106:3306 check port 9202 inter 5s rise 1 fall 1
- backend slave1_replication
- mode tcp
- balance roundrobin
- option tcpka
- option httpchk
- server slave1 192.168.1.107:3306 check port 9202 inter 5s rise 1 fall 1
- backend slave2_replication
- mode tcp
- balance roundrobin
- option tcpka
- option httpchk
- server slave2 192.168.1.110:3306 check port 9202 inter 5s rise 1 fall 1
- backend slave3_replication
- mode tcp
- balance roundrobin
- option tcpka
- option httpchk
- server slave3 192.168.1.111:3306 check port 9202 inter 5s rise 1 fall 1
- backend slave4_replication
- mode tcp
- balance roundrobin
- option tcpka
- option httpchk
- server slave4 192.168.1.112:3306 check port 9202 inter 5s rise 1 fall 1
- # Check Master DB server mysql status
- # - perform an http check on port 9201 (mysql status)
- # - set to 'down' if response is '503 Service Unavailable'
- # - set to 'up' if response is '200 OK'
- #
- backend master_status
- mode tcp
- balance roundrobin
- option tcpka
- option httpchk
- server master 192.168.1.106:3306 check port 9201 inter 5s rise 2 fall 2
- backend slave1_status
- mode tcp
- balance roundrobin
- option tcpka
- option httpchk
- server slave1 192.168.1.107:3306 check port 9201 inter 5s rise 2 fall 2
- # DB write cluster
- # Failure scenarios:
- # - replication 'up' on master & slave1 = writes to master
- # - replication 'down' on slave1 = writes to master
- # - replication 'down' on master = writes to slave1
- # - replication 'down' on master & slave1 = go nowhere, split-brain, cluster FAIL!
- # - mysql 'down' on slave1 = writes to master_backup
- # - mysql 'down' on master = writes to slave1_backup
- # - mysql 'down' on master & slave1 = go nowhere, cluster FAIL!
- #
- backend cluster_db_write
- #
- # - max 1 db server available at all times
- # - master is preferred (top of list)
- # - db_backups set their 'up' or 'down' based on results from monitor_master/monitor_slave1
- #
- mode tcp
- option tcpka
- balance roundrobin
- option httpchk GET /monitor
- server master 192.168.1.106:3306 weight 1 check port 9202 inter 5s rise 2 fall 1
- server slave1 192.168.1.107:3306 weight 1 check port 9202 inter 5s rise 2 fall 1 backup
- server master_backup 192.168.1.106:3306 weight 1 check port 9301 inter 5s rise 2 fall 2 addr 127.0.0.1 backup
- server slave1_backup 192.168.1.107:3306 weight 1 check port 9302 inter 5s rise 2 fall 2 addr 127.0.0.1 backup
- # DB read cluster
- # Failure scenarios
- # - replication 'up' on master & slave1 = reads on master, slave1, all db_slaves
- # - replication 'down' on slave1 = reads on master, slaves of master
- # - replication 'down' on master = reads on slave1, slaves of slave1
- # - replication 'down' on master & slave1 = reads on master_splitbrain and slave1_splitbrain only
- # - mysql 'down' on slave1 = reads on master_backup, slaves of master
- # - mysql 'down' on master = reads on slave1_backup, slaves of slave1
- # - mysql 'down' on master & slave1 = go nowhere, cluster FAIL!
- #
- backend cluster_db_read
- #
- # - max 2 master db servers available at all times
- # - max N slave db servers available at all times except during split-brain
- # - master/slave1 track the 'up' and 'down' of master/slave1 in cluster_db_write
- # - db_backups track 'up' and 'down' of db_backups in the cluster_db_write
- # - db_splitbrains set their 'up' or 'down' based on results from monitor_splitbrain
- #
- mode tcp
- option tcpka
- balance roundrobin
- option httpchk GET /monitor
- server master 192.168.1.106:3306 weight 1 track cluster_db_write/master
- server slave1 192.168.1.107:3306 weight 1 track cluster_db_write/slave1
- server master_backup 192.168.1.106:3306 weight 1 track cluster_db_write/master_backup
- server slave1_backup 192.168.1.107:3306 weight 1 track cluster_db_write/slave1_backup
- server master_splitbrain 192.168.1.106:3306 weight 1 check port 9300 inter 5s rise 1 fall 2 addr 127.0.0.1
- server slave1_splitbrain 192.168.1.107:3306 weight 1 check port 9300 inter 5s rise 1 fall 2 addr 127.0.0.1
- #
- # Scaling & redundancy options
- # - db_slaves set their 'up' or 'down' based on results from monitor_slave2/3/4
- # - db_slaves should take longer to rise
- #
- server slave2_slave 192.168.1.110:3306 weight 1 check port 9303 inter 5s rise 5 fall 1 addr 127.0.0.1
- server slave3_slave 192.168.1.111:3306 weight 1 check port 9304 inter 5s rise 5 fall 1 addr 127.0.0.1
- server slave4_slave 192.168.1.112:3306 weight 1 check port 9305 inter 5s rise 5 fall 1 addr 127.0.0.1
5. Start HAProxy
- haproxy -f /usr/local/haproxy-1.4.23/conf/haproxy-db.cfg
Stats page: http://192.168.1.200:8011/haproxy?stats
user: root, password: monitor
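The setup can be sanity-checked from the monitor host. The sketch below assumes HAProxy and curl are installed and the ports are as configured above; with HAProxy stopped, the probes simply print 000.

```shell
#!/bin/sh
# Validate the configuration file without starting the proxy
# ("-c" is HAProxy's check-only mode).
haproxy -c -f /usr/local/haproxy-1.4.23/conf/haproxy-db.cfg

# Probe an internal monitor frontend: prints the HTTP status code,
# or 000 when nothing is listening on the port.
probe() {
    curl -s -o /dev/null -w '%{http_code}\n' --max-time 2 "http://$1/monitor"
}

probe 127.0.0.1:9300    # split-brain monitor: 200 only during split-brain
probe 127.0.0.1:9301    # master_backup monitor
probe 127.0.0.1:9302    # slave1_backup monitor
```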
Notes on some of the parameters used (quoted from the HAProxy documentation):
maxconn <number>
Sets the maximum per-process number of concurrent connections to <number>. It is equivalent to the command-line argument "-n". Proxies will stop accepting connections when this limit is reached.

daemon
Makes the process fork into background. This is the recommended mode of operation. It is equivalent to the command-line "-D" argument. It can be disabled by the command-line "-db" argument.

pidfile <pidfile>
Writes the pids of all daemons into file <pidfile>. This option is equivalent to the "-p" command-line argument. The file must be accessible to the user starting the process.

retries <value>
Set the number of retries to perform on a server after a connection failure. May be used in sections: defaults (yes), frontend (no), listen (yes), backend (yes). <value> is the number of times a connection attempt should be retried on a server when a connection either is refused or times out. The default value is 3. It is important to understand that this value applies to the number of connection attempts, not full requests. Once a connection has effectively been established to a server, there will be no more retries. In order to avoid immediate reconnections to a server which is restarting, a turn-around timer of 1 second is applied before a retry occurs. When "option redispatch" is set, the last retry may be performed on another server even if a cookie references a different server. See also: "option redispatch".

option redispatch / no option redispatch
Enable or disable session redistribution in case of connection failure. May be used in sections: defaults (yes), frontend (no), listen (yes), backend (yes). In HTTP mode, if a server designated by a cookie is down, clients may definitely stick to it because they cannot flush the cookie, so they will not be able to access the service anymore. Specifying "option redispatch" allows the proxy to break their persistence and redistribute them to a working server. It also allows the last connection attempt to be retried on another server in case of multiple connection failures. Of course, this requires having "retries" set to a nonzero value. This is the preferred form, which replaces both the "redispatch" and "redisp" keywords. If this option has been enabled in a "defaults" section, it can be disabled in a specific instance by prepending the "no" keyword before it.

option dontlognull / no option dontlognull
Enable or disable logging of null connections. May be used in sections: defaults (yes), frontend (yes), listen (yes), backend (no). In certain environments, there are components which will regularly connect to various systems to ensure that they are still alive; this can come from another load balancer as well as from monitoring systems. By default, even a simple port probe or scan will produce a log entry. If those connections pollute the logs too much, it is possible to enable "option dontlognull" so that a connection on which no data has been transferred will not be logged, which typically corresponds to those probes. It is generally recommended not to use this option in uncontrolled environments (e.g. the Internet), otherwise scans and other malicious activities would not be logged. If this option has been enabled in a "defaults" section, it can be disabled in a specific instance by prepending the "no" keyword before it.

In addition, keepalived can be used to provide HA for the proxy layer itself.
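Keepalived is only mentioned by name above; below is a minimal sketch of what such a configuration for the active HAProxy node might look like. Every value (interface, virtual_router_id, priorities, password) is an assumption to be adapted to the actual environment.

```conf
# Hypothetical /etc/keepalived/keepalived.conf for the active proxy node.
# All values below are assumptions; the original only names keepalived.

vrrp_script chk_haproxy {
    # exits 0 while an haproxy process exists
    script "killall -0 haproxy"
    interval 2
}

vrrp_instance VI_DB {
    # use "state BACKUP" and a lower priority (e.g. 100) on the standby node
    state MASTER
    interface eth1
    virtual_router_id 51
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass db_vip
    }
    # the write VIP and read VIP from the environment table above
    virtual_ipaddress {
        192.168.1.105
        192.168.1.113
    }
    track_script {
        chk_haproxy
    }
}
```

Since the environment table binds the read IP to eth2, a second vrrp_instance on eth2 carrying 192.168.1.113 could mirror this one instead of placing both VIPs on eth1.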