Keepalived + Nginx + Tomcat: Building a Highly Available Web Cluster
2018-01-08
[Figure: cluster planning diagram]
I. Installing Nginx
1. Download the Nginx source package and install the build dependencies
(1) Install the compiler
yum -y install gcc    # C compiler (nginx is written in C)
(2) Install the PCRE development headers
yum -y install pcre-devel
(3) Install the zlib development headers
yum -y install zlib-devel
(4) Build and install Nginx
Change into the extracted nginx source directory and run the build:
[root@localhost nginx-1.12.2]# pwd
/usr/local/nginx/nginx-1.12.2
[root@localhost nginx-1.12.2]# ./configure && make && make install
(5) Start Nginx
After installation, locate the install directory:
[root@localhost nginx-1.12.2]# whereis nginx
nginx: /usr/local/nginx
Enter the sbin subdirectory and start Nginx:
[root@localhost sbin]# ls
nginx
[root@localhost sbin]# ./nginx &
[1] 5768
Verify that Nginx is up by opening the server's address in a browser:
[Screenshot: Nginx welcome page]
Or check the process list:
[root@localhost sbin]# ps -aux|grep nginx
root    5769  0.0  0.0  20484  608 ?     Ss  14:03  0:00 nginx: master process ./nginx
nobody  5770  0.0  0.0  23012 1620 ?     S   14:03  0:00 nginx: worker process
root    5796  0.0  0.0 112668  972 pts/0 R+  14:07  0:00 grep --color=auto nginx
[1]+  Done    ./nginx
Nginx is now installed and running.
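As a quick scripted sanity check (a sketch, assuming Nginx is listening on this host's port 80; `http_ok` is a hypothetical helper, not part of the original setup), you can probe the default page and parse the status line:

```shell
#!/bin/sh
# http_ok: succeed when an HTTP status line reports a 2xx/3xx response.
http_ok() {
    case "$1" in
        HTTP/*" 2"[0-9][0-9]*|HTTP/*" 3"[0-9][0-9]*) return 0 ;;
        *) return 1 ;;
    esac
}

# Fetch only the response headers and keep the first line.
status_line=$(curl -sI --max-time 2 http://127.0.0.1/ 2>/dev/null | head -n 1 | tr -d '\r')
if http_ok "$status_line"; then
    echo "nginx is serving: $status_line"
else
    echo "nginx is not answering on port 80"
fi
```

The same check works against the VIP later in this article by swapping in its address.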
(6) Init script and start-on-boot configuration
Create an init script for Nginx (note: adjust the paths below to match your own Nginx install location):
[root@localhost init.d]# vim /etc/rc.d/init.d/nginx
#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig:   - 85 15
# description: Nginx is an HTTP(S) server, HTTP(S) reverse \
#              proxy and IMAP/POP3 proxy server
# processname: nginx
# config:      /etc/nginx/nginx.conf
# config:      /usr/local/nginx/conf/nginx.conf
# pidfile:     /usr/local/nginx/logs/nginx.pid

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

nginx="/usr/local/nginx/sbin/nginx"
prog=$(basename $nginx)

NGINX_CONF_FILE="/usr/local/nginx/conf/nginx.conf"

[ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx

lockfile=/var/lock/subsys/nginx

make_dirs() {
    # make required directories
    user=`$nginx -V 2>&1 | grep "configure arguments:" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -`
    if [ -z "`grep $user /etc/passwd`" ]; then
        useradd -M -s /bin/nologin $user
    fi
    options=`$nginx -V 2>&1 | grep 'configure arguments:'`
    for opt in $options; do
        if [ `echo $opt | grep '.*-temp-path'` ]; then
            value=`echo $opt | cut -d "=" -f 2`
            if [ ! -d "$value" ]; then
                # echo "creating" $value
                mkdir -p $value && chown -R $user $value
            fi
        fi
    done
}

start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    make_dirs
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $prog -QUIT
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    #configtest || return $?
    stop
    sleep 1
    start
}

reload() {
    #configtest || return $?
    echo -n $"Reloading $prog: "
    killproc $nginx -HUP
    RETVAL=$?
    echo
}

force_reload() {
    restart
}

configtest() {
    $nginx -t -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac
Make the script executable and register it to start at boot:
[root@localhost init.d]# chmod 755 /etc/rc.d/init.d/nginx
[root@localhost init.d]# chkconfig --add nginx
[root@localhost init.d]# chkconfig nginx on
Start Nginx through the script:
[root@localhost init.d]# ./nginx start
Add Nginx to the system PATH:
[root@localhost init.d]# echo 'export PATH=$PATH:/usr/local/nginx/sbin'>>/etc/profile && source /etc/profile
Nginx can now be managed with service nginx (start|stop|restart):
[root@localhost init.d]# service nginx start
Starting nginx (via systemctl):                            [  OK  ]
Tip: quick commands
service nginx (start|stop|restart)
II. Installing and Configuring Keepalived
1. Install the Keepalived build dependencies
yum install -y popt-devel
yum install -y ipvsadm
yum install -y libnl*
yum install -y libnf*
yum install -y openssl-devel
2. Build and install Keepalived
[root@localhost keepalived-1.3.9]# ./configure
[root@localhost keepalived-1.3.9]# make && make install
3. Install Keepalived as a system service
Manually copy the default configuration files into the standard locations:
[root@localhost etc]# mkdir /etc/keepalived
[root@localhost etc]# cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@localhost etc]# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
Create a symlink to the keepalived binary:
[root@localhost sysconfig]# ln -s /usr/local/keepalived/sbin/keepalived /usr/sbin/
Enable Keepalived at boot:
[root@localhost sysconfig]# chkconfig keepalived on
Note: Forwarding request to 'systemctl enable keepalived.service'.
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service
Start the Keepalived service:
[root@localhost keepalived]# keepalived -D -f /etc/keepalived/keepalived.conf
Stop the Keepalived service:
[root@localhost keepalived]# killall keepalived
III. Cluster Planning and Setup
[Figure: cluster planning diagram]
Environment:
CentOS 7.2
Keepalived 1.4.0 (December 29, 2017)
Nginx 1.12.2
Tomcat 8
Cluster plan:

| VM | IP | Notes |
| --- | --- | --- |
| Keepalived + Nginx 1 (Master) | 192.168.43.101 | Nginx server 01 |
| Keepalived + Nginx 2 (Backup) | 192.168.43.102 | Nginx server 02 |
| Tomcat01 | 192.168.43.103 | Tomcat web server 01 |
| Tomcat02 | 192.168.43.104 | Tomcat web server 02 |
| VIP | 192.168.43.150 | Floating virtual IP |
1. Change the default Tomcat welcome page to identify which node serves each request
On Tomcat Server01 (192.168.43.103), edit webapps/ROOT/index.jsp to show the Tomcat IP and echo the X-NGINX request header:
<div id="asf-box">
  <h1>${pageContext.servletContext.serverInfo} (192.168.43.103) <%=request.getHeader("X-NGINX")%></h1>
</div>
On Tomcat Server02 (192.168.43.104), make the same change:
<div id="asf-box">
  <h1>${pageContext.servletContext.serverInfo} (192.168.43.104) <%=request.getHeader("X-NGINX")%></h1>
</div>
2. Start both Tomcat services and load each page. Since Nginx is not running yet, the X-NGINX request header is absent.
[Screenshot: Tomcat startup]
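This can also be confirmed from the command line (a sketch, assuming both Tomcat connectors are reachable on port 8080). While Nginx is down, request.getHeader("X-NGINX") returns null, so the heading prints "null":

```shell
#!/bin/sh
# Fetch each Tomcat's front page directly, bypassing Nginx, and
# show only the <h1> heading that index.jsp was edited to produce.
for host in 192.168.43.103 192.168.43.104; do
    curl -s --max-time 2 "http://$host:8080/" 2>/dev/null | grep -o '<h1>.*</h1>'
done
```

Once Nginx is running, the same command through the VIP shows the X-NGINX value instead of null.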
3. Configure the Nginx proxy
1. Master node (192.168.43.101):
upstream tomcat {
    server 192.168.43.103:8080 weight=1;
    server 192.168.43.104:8080 weight=1;
}
server {
    location / {
        proxy_pass http://tomcat;
        proxy_set_header X-NGINX "NGINX-1";
    }
    # ... rest omitted
}
2. Backup node (192.168.43.102):
upstream tomcat {
    server 192.168.43.103:8080 weight=1;
    server 192.168.43.104:8080 weight=1;
}
server {
    location / {
        proxy_pass http://tomcat;
        proxy_set_header X-NGINX "NGINX-2";
    }
    # ... rest omitted
}
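The upstream block uses equal weights and Nginx's default passive health checking. A sketch of tuning those defaults with the standard ngx_http_upstream_module server parameters (the values here are illustrative, not from the original setup):

```nginx
upstream tomcat {
    # Take a backend out of rotation after 3 failed attempts,
    # then retry it after 30 seconds.
    server 192.168.43.103:8080 weight=1 max_fails=3 fail_timeout=30s;
    server 192.168.43.104:8080 weight=1 max_fails=3 fail_timeout=30s;
}
```

With only the defaults (max_fails=1, fail_timeout=10s), a single failed request already marks a backend down for a short period, which is usually fine for a two-node demo but worth knowing about when tuning.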
3. Start Nginx on the Master node:
[root@localhost init.d]# service nginx start
Starting nginx (via systemctl):                            [  OK  ]
Visiting 192.168.43.101 now alternates between the 103 and 104 Tomcat pages, confirming that Nginx is balancing requests across the two Tomcats.
[Screenshot: Nginx load balancing]
4. Configure the Backup node (192.168.43.102) the same way; after starting its Nginx, visiting 192.168.43.102 shows the same load-balancing behaviour.
[Screenshot: Backup node load balancing]
4. Configure the Keepalived check script
1. On both the Master and the Backup node, create check_nginx.sh under /etc/keepalived to detect whether Nginx is alive, alongside the keepalived.conf file.
check_nginx.sh:
#!/bin/bash
# timestamp for log entries
d=`date --date today +%Y%m%d_%H:%M:%S`
# count the nginx processes
n=`ps -C nginx --no-heading|wc -l`
# if the count is 0, start nginx and count again;
# if it is still 0, nginx cannot start, so stop keepalived
if [ $n -eq "0" ]; then
    /etc/rc.d/init.d/nginx start
    n2=`ps -C nginx --no-heading|wc -l`
    if [ $n2 -eq "0" ]; then
        echo "$d nginx down, keepalived will stop" >> /var/log/check_ng.log
        systemctl stop keepalived
    fi
fi
Then make check_nginx.sh executable so Keepalived can run it:
[root@localhost keepalived]# chmod 755 /etc/keepalived/check_nginx.sh
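The core of check_nginx.sh is the process count from `ps -C nginx`. A minimal dry-run sketch of that branch logic, with the probe factored into a function (`count_nginx` and `check_action` are hypothetical helpers, introduced here so the decision path can be exercised without a real nginx install):

```shell
#!/bin/sh
# Count nginx master/worker processes; 0 means nginx is down.
count_nginx() {
    ps -C nginx --no-heading 2>/dev/null | wc -l
}

# Decide what the keepalived check should do for a given process count.
check_action() {
    n=$1
    if [ "${n:-0}" -eq 0 ]; then
        echo "restart-nginx"      # first try to bring nginx back
    else
        echo "healthy"            # nothing to do
    fi
}

check_action "$(count_nginx)"
```

In the real script, a failed restart leads to `systemctl stop keepalived`, which makes the node drop out of the VRRP election so the VIP can move.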
2. On the Master node, create /etc/keepalived/keepalived.conf as follows:
vrrp_script chk_nginx {
    script "/etc/keepalived/check_nginx.sh"   # script that checks the nginx process
    interval 2
    weight -20
}
global_defs {
    notification_email {
        # email notification addresses can be added here
    }
}
vrrp_instance VI_1 {
    state MASTER                 # MASTER here; the backup node uses BACKUP
    interface ens33              # NIC the instance binds to (check with ip addr)
    virtual_router_id 51         # must be identical within one instance
    mcast_src_ip 192.168.43.101
    priority 250                 # MASTER must be higher than BACKUP (e.g. 240)
    advert_int 1                 # interval in seconds between MASTER/BACKUP sync checks
    nopreempt                    # non-preemptive mode
    authentication {             # authentication between master and backup
        auth_type PASS
        auth_pass 123456
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {          # the VIP; more can be added, one per line
        192.168.43.150
    }
}
3. On the Backup node, create /etc/keepalived/keepalived.conf as follows:
vrrp_script chk_nginx {
    script "/etc/keepalived/check_nginx.sh"   # script that checks the nginx process
    interval 2
    weight -20
}
global_defs {
    notification_email {
        # email notification addresses can be added here
    }
}
vrrp_instance VI_1 {
    state BACKUP                 # BACKUP on this node
    interface ens33              # NIC the instance binds to (check with ip addr)
    virtual_router_id 51         # must be identical within one instance
    mcast_src_ip 192.168.43.102
    priority 240                 # lower than the MASTER's 250
    advert_int 1                 # interval in seconds between MASTER/BACKUP sync checks
    nopreempt                    # non-preemptive mode
    authentication {             # authentication between master and backup
        auth_type PASS
        auth_pass 123456
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {          # the VIP; more can be added, one per line
        192.168.43.150
    }
}
Tips: a few notes on the configuration
- state - MASTER on the primary server, BACKUP on the standby
- interface - the NIC name; on this VMware 12.0 VM it is ens33
- mcast_src_ip - each node's own real IP address
- priority - the master's priority must be higher than the backup's; here the master uses 250 and the backup 240
- virtual_ipaddress - the virtual IP (192.168.43.150)
- authentication - auth_pass must be identical on master and backup; keepalived peers authenticate with it
- virtual_router_id - must be identical on master and backup
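One way to sanity-check the chosen numbers: when a tracked vrrp_script with a negative weight fails (exits non-zero), keepalived lowers that node's effective priority by |weight|, and the VIP moves only if the result drops below the peer's priority. (The check_nginx.sh above never actually exits non-zero; it stops keepalived outright, so this arithmetic describes the weight mechanism in general rather than that exact script.) A small sketch with the values used here:

```shell
#!/bin/sh
# Values from the two keepalived.conf files above.
MASTER_PRIO=250
BACKUP_PRIO=240
WEIGHT=-20      # chk_nginx weight

# Effective master priority while the tracked script is failing.
EFFECTIVE=$((MASTER_PRIO + WEIGHT))
echo "effective master priority on failure: $EFFECTIVE"

if [ "$EFFECTIVE" -lt "$BACKUP_PRIO" ]; then
    echo "VIP will move to the backup ($EFFECTIVE < $BACKUP_PRIO)"
else
    echo "WARNING: |weight| too small, the VIP will not move"
fi
```

Here 250 - 20 = 230 < 240, so a failing check on the master would indeed hand the VIP to the backup.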
5. Verifying high availability (HA)
- Step 1: start the Keepalived and Nginx services on the Master machine
[root@localhost keepalived]# keepalived -D -f /etc/keepalived/keepalived.conf
[root@localhost keepalived]# service nginx start
Check the Nginx processes:
[root@localhost keepalived]# ps -aux|grep nginx
root 6390 0.0 0.0 20484 612 ? Ss 19:13 0:00 nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
nobody 6392 0.0 0.0 23008 1628 ? S 19:13 0:00 nginx: worker process
root 6978 0.0 0.0 112672 968 pts/0 S+ 20:08 0:00 grep --color=auto nginx
Check the Keepalived processes:
[root@localhost keepalived]# ps -aux|grep keepalived
root  6402  0.0  0.0  45920 1016 ?     Ss  19:13  0:00 keepalived -D -f /etc/keepalived/keepalived.conf
root  6403  0.0  0.0  48044 1468 ?     S   19:13  0:00 keepalived -D -f /etc/keepalived/keepalived.conf
root  6404  0.0  0.0  50128 1780 ?     S   19:13  0:00 keepalived -D -f /etc/keepalived/keepalived.conf
root  7004  0.0  0.0 112672  976 pts/0 S+  20:10  0:00 grep --color=auto keepalived
Use ip addr to check the VIP binding; if 192.168.43.150 appears on the interface, the VIP is bound to the Master node:
[root@localhost keepalived]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:91:bf:59 brd ff:ff:ff:ff:ff:ff
    inet 192.168.43.101/24 brd 192.168.43.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.43.150/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::9abb:4544:f6db:8255/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::b0b3:d0ca:7382:2779/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::314f:5fe7:4e4b:64ed/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:2b:74:aa brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:2b:74:aa brd ff:ff:ff:ff:ff:ff
- Step 2: start the Nginx and Keepalived services on the Backup node and check them. If the VIP also appears on the Backup node, the Keepalived configuration is wrong; that condition is called split-brain.
[root@localhost keepalived]# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:14:df:79 brd ff:ff:ff:ff:ff:ff
    inet 192.168.43.102/24 brd 192.168.43.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::314f:5fe7:4e4b:64ed/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether 52:54:00:2b:74:aa brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000
    link/ether 52:54:00:2b:74:aa brd ff:ff:ff:ff:ff:ff
No 192.168.43.150 is bound here, as expected.
- Step 3: verify the service
Open http://192.168.43.150 and force-refresh several times: the page alternates between 103 and 104 and shows NGINX-1, so the Master node is forwarding the web traffic.
- Step 4: stop the Keepalived and Nginx services on the Master and watch the service fail over
[root@localhost keepalived]# killall keepalived
[root@localhost keepalived]# service nginx stop
Force-refreshing 192.168.43.150 now still alternates between 103 and 104 but shows NGINX-2: the VIP has moved to 192.168.43.102, proving that service failed over to the backup node automatically.
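The failover can also be watched from a client shell instead of the browser (a sketch, assuming curl is available and the X-NGINX value is echoed into the page as configured above; `forwarder_of` is a hypothetical helper):

```shell
#!/bin/sh
# Extract which Nginx ("NGINX-1" or "NGINX-2") served a page, reading it from stdin.
forwarder_of() {
    grep -o 'NGINX-[12]' | head -n 1
}

# Poll the VIP a few times; during failover the tag flips from NGINX-1 to NGINX-2.
for i in 1 2 3; do
    tag=$(curl -s --max-time 2 http://192.168.43.150/ 2>/dev/null | forwarder_of)
    echo "request $i served via: ${tag:-no response}"
    sleep 1
done
```

Running this in a loop while stopping the Master's services makes the switchover point easy to spot.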
- Step 5: start the Keepalived and Nginx services on the Master again
Verifying once more shows that the Master has taken the VIP back: the page still alternates between 103 and 104, but shows NGINX-1 again.
IV. Keepalived Preemptive and Non-Preemptive Modes
Keepalived HA runs in either preemptive or non-preemptive mode. In preemptive mode, a MASTER that recovers from a failure takes the VIP back from the BACKUP node. In non-preemptive mode, the recovered MASTER leaves the VIP with the BACKUP that was promoted to MASTER.
Non-preemptive configuration:
- 1> add the nopreempt directive to the vrrp_instance block on both nodes, so that neither contends for the VIP
- 2> set state to BACKUP on both nodes
After both keepalived nodes start, both begin in the BACKUP state; once they exchange multicast advertisements, they elect a MASTER by priority. Because both are configured with nopreempt, a MASTER recovering from a failure will not take the VIP back, which avoids the service delay a VIP switch can cause.
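Note that nopreempt is only honoured when the instance's initial state is BACKUP; with state MASTER, as in the configuration used earlier, keepalived ignores it, which is consistent with the Master reclaiming the VIP in Step 5. A minimal non-preemptive sketch for both nodes (only priority and mcast_src_ip differ per node; values are the ones used in this article):

```
vrrp_instance VI_1 {
    state BACKUP            # BACKUP on *both* nodes for non-preemptive mode
    interface ens33
    virtual_router_id 51
    priority 250            # 240 on the other node; the election uses this
    advert_int 1
    nopreempt               # on both nodes: a recovered node does not reclaim the VIP
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.43.150
    }
}
```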