The previous post covered installing and configuring MySQL Cluster. This post describes how to build a highly available MySQL cluster. There are several ways to achieve this; the approach described here is MySQL Cluster + HAProxy + Keepalived.
The next post will cover a high-availability solution based on a standard MySQL build plus HAProxy and Keepalived, with master-slave replication and read/write splitting.
1. Solution Overview
This solution introduces two tools, HAProxy and Keepalived:
- HAProxy mainly provides load balancing
- Keepalived mainly checks MySQL availability and assists with failover
The topology of the solution is shown in the figure below.
In this solution:
1) MySQL Cluster portion
Management nodes: 192.168.136.215, 192.168.136.115
Data nodes and SQL (mysqld) nodes: 192.168.136.216, 192.168.136.217, ...
The parts marked in red in the figure can be scaled out in a real environment (data nodes and SQL nodes may be deployed separately).
2) Load balancing and high availability
Deployment nodes: 192.168.136.215, 192.168.136.115
Virtual router node: 192.168.136.200
These could also be deployed on two separate servers; the virtual node is the address exposed to users (web callers).
2. Deploying MySQL Cluster
See the previous post for the MySQL Cluster deployment. On top of that, add the 192.168.136.115 node; the details are the same as for server 215 (omitted here).
The configuration file /var/lib/mysql-cluster/config.ini on both 115 and 215 needs to be modified; its contents are as follows:
[ndbd default]
NoOfReplicas= 2
DataMemory= 512M
IndexMemory= 18M
[ndb_mgmd]
NodeId= 1
HostName= 192.168.136.215 # hostname must be a valid network address
DataDir= /var/lib/mysql-cluster
[ndb_mgmd]
NodeId= 2
HostName= 192.168.136.115 # hostname must be a valid network address
DataDir= /var/lib/mysql-cluster
[ndbd]
NodeId= 3
HostName= 192.168.136.216 # hostname must be a valid network address
DataDir= /var/lib/mysql-cluster
[ndbd]
NodeId= 5
HostName= 192.168.136.217 # hostname must be a valid network address
DataDir= /var/lib/mysql-cluster
[mysqld]
HostName= 192.168.136.216
[mysqld]
HostName= 192.168.136.217
# Two empty [mysqld] slots: API clients may connect from any host
[mysqld]
[mysqld]
According to the topology, server 115 has been added as a management node (ndb_mgmd), so the configuration file on servers 216 and 217 must be modified to add the extra management node. The contents are as follows:
vim /etc/my.cnf
[mysqld]
#skip-grant-tables
ndbcluster
ndb-connectstring=192.168.136.215,192.168.136.115
[mysql_cluster]
ndb-connectstring=192.168.136.215,192.168.136.115
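After restarting the ndbd and mysqld services, connectivity to both management nodes can be verified with the ndb_mgm client (a minimal check; the binary path assumes the install layout from the previous post):
/usr/local/mysql/bin/ndb_mgm -c "192.168.136.215,192.168.136.115" -e show
Both [ndb_mgmd] entries should appear, and the data and API nodes should show as connected.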
3. Deploying HAProxy
HAProxy is deployed on 192.168.136.215 and 192.168.136.115 to provide load balancing. Together with the monitoring script and monitoring service deployed on the MySQL node servers (216, 217), it automatically decides which node an external request is sent to.
3.1 Download, build, and install HAProxy
First download HAProxy. The official site www.haproxy.org is blocked in some regions, so two download links are given below:
wget -P /home/soft http://www.haproxy.org/download/1.8/src/haproxy-1.8.10.tar.gz
wget -P /home/soft http://download.openpkg.org/components/cache/haproxy/haproxy-1.8.10.tar.gz
After downloading, extract the archive:
tar -vxf haproxy-1.8.10.tar.gz
The install directory will be /usr/local/haproxy, so enter the haproxy-1.8.10 directory and run the following build and install commands:
make TARGET=linux2628 ARCH=x86_64 PREFIX=/usr/local/haproxy
make install PREFIX=/usr/local/haproxy
# TARGET selects the target kernel (linux2628 for kernels 2.6.28 and later); ARCH selects the CPU architecture
[root@mysql-1 haproxy-1.8.10]# make install PREFIX=/usr/local/haproxy
install -d "/usr/local/haproxy/sbin"
install haproxy "/usr/local/haproxy/sbin"
install -d "/usr/local/haproxy/share/man"/man1
install -m 644 doc/haproxy.1 "/usr/local/haproxy/share/man"/man1
install -d "/usr/local/haproxy/doc/haproxy"
for x in 51Degrees-device-detection architecture close-options configuration cookie-options DeviceAtlas-device-detection intro linux-syn-cookies lua management netscaler-client-ip-insertion-protocol network-namespaces peers peers-v2.0 proxy-protocol SPOE WURFL-device-detection; do \
install -m 644 doc/$x.txt "/usr/local/haproxy/doc/haproxy" ; \
done
[root@mysql-1 haproxy-1.8.10]#
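The build can be verified before continuing (prints the compiled-in version):
/usr/local/haproxy/sbin/haproxy -v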
Create the related directories:
mkdir -pv /usr/local/haproxy/{conf,run,log}
3.2 Create the HAProxy init script
Create the startup script with vim /etc/init.d/haproxy; its contents are as follows:
#!/bin/bash
# chkconfig: - 85 15
# description: HAProxy is a TCP/HTTP load balancer.
PROGDIR=/usr/local/haproxy
PROGNAME=haproxy
DAEMON=$PROGDIR/sbin/$PROGNAME
CONFIG=$PROGDIR/conf/$PROGNAME.conf
PIDFILE=$PROGDIR/run/$PROGNAME.pid
DESC="HAProxy daemon"
SCRIPTNAME=/etc/init.d/$PROGNAME

# Gracefully exit if the package has been removed.
test -x $DAEMON || exit 0

start()
{
    if [ $(pidof haproxy | wc -l) -gt 0 ]; then
        echo "$PROGNAME is running"
    else
        echo -n "Starting $DESC: $PROGNAME"
        $DAEMON -f $CONFIG -p $PIDFILE
        echo "."
    fi
}

stop()
{
    if [ $(pidof haproxy | wc -l) -le 0 ]; then
        echo "$PROGNAME is not running."
    else
        echo -n "Stopping $DESC: $PROGNAME"
        kill $(cat $PIDFILE)
        echo "."
    fi
}

reload()
{
    echo -n "Reloading $DESC: $PROGNAME"
    $DAEMON -f $CONFIG -p $PIDFILE -sf $(cat $PIDFILE)
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        stop
        start
        ;;
    reload)
        reload
        ;;
    *)
        echo "Usage: $SCRIPTNAME {start|stop|restart|reload}" >&2
        exit 1
        ;;
esac

exit 0
Once saved, make the script executable:
chmod +x /etc/init.d/haproxy
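If chkconfig will manage the service (as in section 3.5 below), the script also needs to be registered once:
chkconfig --add haproxy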
3.3 Configure HAProxy
Note that frontend admin_stat binds the local HAProxy stats port, while the mysql-cluster backend points at the servers and ports of the cluster's MySQL API nodes (i.e., the nodes configured under [mysqld] in config.ini):
vim /usr/local/haproxy/conf/haproxy.conf
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    #log loghost local0 info
    maxconn 50000
    chroot /usr/local/haproxy
    uid 99
    gid 99
    daemon
    nbproc 2
    pidfile /usr/local/haproxy/run/haproxy.pid
    #debug
    #quiet

defaults
    log global
    mode tcp
    option tcplog
    option dontlognull
    option redispatch
    retries 3
    timeout connect 3000
    timeout client 50000
    timeout server 50000

frontend admin_stat
    bind *:9166
    mode http
    default_backend stats-back

frontend cluster-front
    bind *:3306
    mode tcp
    default_backend mysql-cluster

backend mysql-cluster
    mode tcp
    balance roundrobin
    option httpchk
    server mysql1 192.168.136.216:3306 check port 9200 inter 12000 rise 3 fall 3
    server mysql2 192.168.136.217:3306 check port 9200 inter 12000 rise 3 fall 3

backend stats-back
    mode http
    balance roundrobin
    stats uri /admin?stats
    stats auth admin:admin
    stats realm Haproxy\ Statistics
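Before starting, the file can be sanity-checked with HAProxy's configuration check mode:
/usr/local/haproxy/sbin/haproxy -c -f /usr/local/haproxy/conf/haproxy.conf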
3.4 Configure rsyslog logging for HAProxy
vim /etc/rsyslog.conf
Make sure the following options are set.
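A typical configuration that matches the log lines in haproxy.conf enables UDP reception and routes the local0/local1 facilities to a file (the log file path is an assumption):
$ModLoad imudp
$UDPServerRun 514
local0.* /var/log/haproxy.log
local1.* /var/log/haproxy.log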
Restart the rsyslog service:
service rsyslog restart
[root@mysql-11 haproxy]# service rsyslog restart
Shutting down system logger: [ OK ]
Starting system logger: [ OK ]
[root@mysql-11 haproxy]#
3.5 Start HAProxy
Start haproxy:
service haproxy start
Enable it at boot:
chkconfig haproxy on
Check whether haproxy is running:
ps -ef | grep haproxy
[root@mysql-11 haproxy]# ps -ef | grep haproxy
nobody 4013 1 0 17:26 ? 00:00:00 /usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/conf/haproxy.conf
nobody 4014 1 0 17:26 ? 00:00:00 /usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/conf/haproxy.conf
root 4017 2734 0 17:27 pts/1 00:00:00 grep haproxy
[root@mysql-11 haproxy]#
The HAProxy stats page listens on port 9166 with username/password admin:admin; the URLs are as follows:
http://192.168.136.115:9166/admin?stats
http://192.168.136.215:9166/admin?stats
3.6 Add the xinetd service (monitoring service) on the MySQL nodes
xinetd is a daemon provided on Linux that can map a service onto a port, so that the service's status can be queried through that port. We use this xinetd feature to implement the MySQL health check; the deployment and call relationships are shown in the figure below.
Install xinetd, either via yum or from an rpm package:
yum -y install xinetd
Once installed, start it with:
service xinetd start
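It is also worth enabling xinetd at boot so the health check survives a reboot (assuming the usual SysV setup):
chkconfig xinetd on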
3.7 Add the monitoring endpoint on the MySQL hosts
1. Register a port for the MySQL status check:
# vi /etc/services
mysqlchk 9200/tcp # MySQL status check
2. Run the MySQL status check under the xinetd daemon:
# vim /etc/xinetd.d/mysqlchk
# default: on
# description: mysqlchk
service mysqlchk
{
    # this is a config for xinetd, place it in /etc/xinetd.d/
    disable        = no
    flags          = REUSE
    socket_type    = stream
    type           = UNLISTED
    port           = 9200
    wait           = no
    user           = root
    server         = /usr/local/sbin/mysqlchk_status.sh
    log_on_failure += USERID
    only_from      = 0.0.0.0/0
    per_source     = UNLIMITED
}
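After saving this file, xinetd must re-read its configuration before the new service listens on port 9200:
service xinetd restart
netstat -lntp | grep 9200 # confirm xinetd is listening on the check port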
3. The status check script:
# vi /usr/local/sbin/mysqlchk_status.sh
#!/bin/bash
MYSQL_HOST="localhost"
MYSQL_PORT="3306"
MYSQL_USERNAME="root"
MYSQL_PASSWORD="nmc123"
ERR_FILE="${4:-/dev/null}"

# If the query succeeds its output is non-empty; stderr goes to ERR_FILE
# so that error text never leaks into the HTTP response.
ERROR_MSG=$(/usr/local/mysql/bin/mysql -u${MYSQL_USERNAME} -p${MYSQL_PASSWORD} -h ${MYSQL_HOST} -P ${MYSQL_PORT} -e "show databases;" 2>$ERR_FILE)

if [ "$ERROR_MSG" != "" ]
then
    # mysql Cluster node local state is 'running' => return HTTP 200
    # Shell return-code is 0
    echo -en "HTTP/1.1 200 OK\r\n"
    echo -en "Content-Type: text/plain\r\n"
    echo -en "Connection: close\r\n"
    echo -en "Content-Length: 32\r\n"
    echo -en "\r\n"
    echo -en "MYSQL Cluster Node is Running.\r\n"
    sleep 1
else
    # mysql Cluster node local state is not 'running' => return HTTP 503
    # Shell return-code is 1
    echo -en "HTTP/1.1 503 Service Unavailable\r\n"
    echo -en "Content-Type: text/plain\r\n"
    echo -en "Connection: close\r\n"
    echo -en "Content-Length: 31\r\n"
    echo -en "\r\n"
    echo -en "MYSQL Cluster is not Running.\r\n"
    sleep 1
fi
Make it executable: chmod +x /usr/local/sbin/mysqlchk_status.sh
Check that the script works.
From a load-balancer host, check the monitoring port of a MySQL host:
[root@mysql-1 ~]# telnet 192.168.136.216 9200
Trying 192.168.136.216...
Connected to 192.168.136.216.
Escape character is '^]'.
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 32

MYSQL Cluster Node is Running.
Connection closed by foreign host.
[root@mysql-1 ~]#
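curl can be used instead of telnet for the same check, since the script speaks plain HTTP:
curl -i http://192.168.136.216:9200/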
Also check HAProxy's status page.
At this point, connections can be made through either of the two HAProxy servers (215, 115) to the actual MySQL API servers (216, 217).
3.8 Verify load balancing
Shut down the ndbd and mysqld (API) services on server 217 and observe from the management node:
HAProxy shows that one MySQL node (mysql2, i.e. 217) can no longer be reached.
Connecting out through the 115 load balancer still works; at this point the MySQL access goes through 115 to host 216.
4. Deploying Keepalived
HAProxy above provides the load balancing; Keepalived is still needed to make the setup highly available. Install it via yum:
yum -y install keepalived
Start keepalived:
/etc/init.d/keepalived start
After it starts there are three keepalived processes:
[root@mysql-1 keepalived-1.3.2]# ps -ef | grep keepalived
root 6125 1 0 12:04 ? 00:00:00 /usr/sbin/keepalived -D
root 6126 6125 0 12:04 ? 00:00:00 /usr/sbin/keepalived -D
root 6127 6125 0 12:04 ? 00:00:00 /usr/sbin/keepalived -D
root 6133 2601 0 12:05 pts/0 00:00:00 grep keepalived
[root@mysql-1 keepalived-1.3.2]#
Stop it with:
/etc/init.d/keepalived stop
Enable it at boot:
chkconfig keepalived on
With a yum install, this is the default configuration path:
[root@kep1 ~]# ls -l /etc/keepalived/keepalived.conf
-rw-r--r--. 1 root root 3562 Mar 19 2015 /etc/keepalived/keepalived.conf
4.1 Modify the configuration
Edit the keepalived configuration:
[root@mysql-1 keepalived-1.3.2]# vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.136.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}

vrrp_script chk_haproxy {
    script "/etc/keepalived/chk_haproxy.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.136.200
    }
    track_script {
        chk_haproxy
    }
    notify_backup "/etc/init.d/haproxy restart"
    notify_fault "/etc/init.d/haproxy stop"
}
In the configuration above, the priority 100 setting becomes priority 99 in the second server's configuration file; the priority decides which host the virtual address binds to.
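For example, on 192.168.136.115 the vrrp_instance block would be identical apart from the priority (a sketch; everything elided stays the same):
vrrp_instance VI_1 {
    ...
    priority 99 # lower than 215, so 115 only takes the VIP when 215 fails
    ...
}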
Other properties:
virtual_ipaddress {
    192.168.136.200
}
192.168.136.200 is the floating address that is dynamically bound to one of the HAProxy hosts (215 or 115). It presents a single .200 endpoint to the outside, so that when a failure occurs, the failover is transparent to calling applications.
Create /etc/keepalived/chk_haproxy.sh with the following content:
#!/bin/bash
# If haproxy has died, try to restart it; if it is still not running two
# seconds later, stop keepalived so the VIP fails over to the peer node.
if [ $(ps -C haproxy --no-header | wc -l) -eq 0 ]; then
    /etc/init.d/haproxy start
fi
sleep 2
if [ $(ps -C haproxy --no-header | wc -l) -eq 0 ]; then
    /etc/init.d/keepalived stop
fi
chmod +x /etc/keepalived/chk_haproxy.sh
The high-availability behavior can now be verified.
4.2 Normal startup
- Start keepalived on 215 and 115.
Check the IPs of host 215: note (marked in red in the figure) that 215 currently holds the virtual address 192.168.136.200 (which is also the external-facing address).
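The binding can also be confirmed from the command line (eth0 is the interface configured in keepalived.conf):
ip addr show eth0 | grep 192.168.136.200 # the VIP appears only on the current MASTER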
Open a MySQL connection; Navicat is used here to simulate a client.
Everything works normally at this point.
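An equivalent command-line test through the VIP (using the credentials from the health-check script, and assuming the root account is allowed to connect remotely):
mysql -h 192.168.136.200 -P 3306 -u root -p -e "show databases;"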
4.3 Simulate an outage of server 215
Shut down server 215; the virtual address 192.168.136.200 floats over to server 115.
Check the MySQL service by closing and reopening the connection; it still works.
Check the cluster state on server 115:
Among the ndb_mgmd nodes, only 115 is still healthy and the other management node shows "not connected"; the HAProxy on 115 also reports as healthy.
These results match expectations.
4.4 Restart server 215
Restart server 215 and start the management node service. Since the haproxy watchdog script is already deployed, check whether haproxy comes up automatically.
Once keepalived starts, it re-evaluates the priorities and takes the virtual address 192.168.136.200 back over.
As shown, HAProxy + Keepalived plus a few helper scripts together deliver a highly available MySQL cluster.