Reposted from: http://laoguang.blog.51cto.com/6013350/1099103
I used to build high-availability clusters with heartbeat or corosync+pacemaker; keepalived turns out to be much simpler to set up.
The keepalived master periodically sends VRRP advertisements to the backup. If the backup stops receiving them for a while, it holds an election, becomes the new master, and takes over the resources (the VIP). For the underlying theory see http://bbs.ywlm.net/thread-790-1-1.html
Goal: build a highly available web cluster from two Nginx+Keepalived nodes and two LAMP nodes.
Hosts:
- ng1.laoguang.me 192.168.1.22 ng1
- ng2.laoguang.me 192.168.1.23 ng2
- lamp1.laoguang.me 192.168.1.24 lamp1
- lamp2.laoguang.me 192.168.1.25 lamp2
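It helps if all four nodes can resolve each other by name. A minimal /etc/hosts sketch that simply mirrors the table above (append on every node, assuming no conflicting entries already exist):
- cat >> /etc/hosts <<EOF
- 192.168.1.22 ng1.laoguang.me ng1
- 192.168.1.23 ng2.laoguang.me ng2
- 192.168.1.24 lamp1.laoguang.me lamp1
- 192.168.1.25 lamp2.laoguang.me lamp2
- EOF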
Topology: two Nginx/Keepalived front ends sharing VIP 192.168.1.18, proxying to lamp1 and lamp2.
1. Preparing the basic environment
Install nginx on ng1 and ng2.
Build a LAMP stack on lamp1 and lamp2, or just install httpd. I only installed httpd and won't demonstrate that here; see my other posts if you need the details. Change index.html on lamp1 and lamp2 to read "lamp1" and "lamp2" respectively so the two back ends are easy to tell apart. In a real cluster the content would be identical and served from shared storage.
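A minimal sketch of that back-end setup, assuming a yum-based system and the default httpd document root:
- yum -y install httpd ##on lamp1 and lamp2
- echo "lamp1" > /var/www/html/index.html ##on lamp1; write "lamp2" on lamp2 instead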
2. Installing and configuring keepalived on ng1 and ng2
Download: http://www.keepalived.org/download.html
2.1 Install keepalived
- tar xvf keepalived-1.2.7.tar.gz
- cd keepalived-1.2.7
- ./configure --prefix=/usr/local/keepalived
- ##configure may complain about missing packages such as popt-devel; install them with yum (a sketch follows these commands)
- make && make install
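If configure stops on missing dependencies, something like this usually covers them on a CentOS-style system (the package list is an assumption; install whatever configure actually asks for), and the last line is a quick check that the build produced a working binary:
- yum -y install gcc popt-devel openssl-devel
- /usr/local/keepalived/sbin/keepalived -v ##prints the keepalived version if the build succeeded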
2.2 Put the configuration files and scripts in place
- mkdir /etc/keepalived
- ##keepalived reads its configuration from /etc/keepalived by default
- cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
- ##it is a single binary, so copying it is enough; with more files you would adjust PATH instead
- cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
- ##extra options read by the init script
- cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
- ##the init script; registering it for boot is sketched right after this list
- cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
- ##the keepalived configuration file we will edit
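With the init script in place you can also register keepalived to start at boot (SysV-style commands, matching the init.d script copied above):
- chkconfig --add keepalived
- chkconfig keepalived on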
2.3 Edit /etc/keepalived/keepalived.conf on ng1
- ! Configuration File for keepalived
- global_defs {
- notification_email {
- ibuler@qq.com ##where failure notifications are sent
- }
- notification_email_from keepalived@localhost ##address the notifications are sent from
- smtp_server 127.0.0.1 ##SMTP server IP
- smtp_connect_timeout 30 ##SMTP connect timeout
- router_id LVS_DEVEL ##identifier for this server
- }
- vrrp_instance VI_1 {
- state BACKUP
- ##state: set both nodes to BACKUP and let them elect the master.
- ##If you set MASTER here instead, this node preempts: the backup takes over when it fails, and it takes the VIP back as soon as it recovers.
- interface eth0 ##interface VRRP advertisements are sent on; check whether yours really is eth0
- virtual_router_id 51 ##virtual router ID; must be the same on master and backup, i.e. throughout one group
- priority 100 ##the priority, which decides the election
- nopreempt ##no preemption: when a failed node comes back up it does not take the VIP back
- advert_int 1 ##advertisement interval in seconds
- authentication { ##authentication
- auth_type PASS ##authentication type
- auth_pass www.laoguang.me ##shared secret (only the first 8 characters are used)
- }
- virtual_ipaddress {
- 192.168.1.18 ##VIP
- }
- }
- ##delete everything below this point; it is only useful on LVS
Copy the file to ng2 and change only the priority to 90 (a non-interactive one-liner is sketched after these commands):
- scp /etc/keepalived/keepalived.conf 192.168.1.23:/etc/keepalived/
- ##on ng2
- vi /etc/keepalived/keepalived.conf ##set priority 90; everything else stays the same
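The same edit as a non-interactive one-liner (run on ng2):
- sed -i 's/priority 100/priority 90/' /etc/keepalived/keepalived.conf ##drop ng2's priority below ng1's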
2.4 Start keepalived on ng1 and ng2
- service keepalived start
Check the logs:
- tail /var/log/messages
- Nov 27 08:07:54 localhost Keepalived_vrrp[41871]: VRRP_Instance(VI_1) Entering BACKUP STATE
- Nov 27 08:07:54 localhost Keepalived_vrrp[41871]: VRRP sockpool: [ifindex(2), proto(112), fd(11,12)]
- Nov 27 08:07:54 localhost Keepalived_healthcheckers[41870]: Using LinkWatch kernel netlink reflector...
- Nov 27 08:07:54 localhost Keepalived_vrrp[41871]: VRRP_Instance(VI_1) forcing a new MASTER election
- Nov 27 08:07:55 localhost Keepalived_vrrp[41871]: VRRP_Instance(VI_1) Transition to MASTER STATE
- Nov 27 08:07:56 localhost Keepalived_vrrp[41871]: VRRP_Instance(VI_1) Entering MASTER STATE
- Nov 27 08:07:56 localhost Keepalived_vrrp[41871]: VRRP_Instance(VI_1) setting protocol VIPs.
- Nov 27 08:07:56 localhost Keepalived_healthcheckers[41870]: Netlink reflector reports IP 192.168.1.18 added
- Nov 27 08:07:56 localhost Keepalived_vrrp[41871]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.18
- Nov 27 08:08:01 localhost Keepalived_vrrp[41871]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.18
Check which machine the VIP is bound to:
- ip addr ##on ng1
- ....(output trimmed)
- 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
- link/ether 00:0c:29:e8:90:0b brd ff:ff:ff:ff:ff:ff
- inet 192.168.1.22/24 brd 192.168.1.255 scope global eth0
- inet 192.168.1.18/32 scope global eth0
- inet6 fe80::20c:29ff:fee8:900b/64 scope link
- valid_lft forever preferred_lft forever
This shows the VIP is bound to ng1.
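You can also watch the VRRP advertisements themselves (IP protocol 112) to see which node is currently announcing; this assumes tcpdump is installed:
- tcpdump -i eth0 -n ip proto 112 ##the current master advertises roughly once per advert_int second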
3. Testing keepalived
3.1 Stop keepalived on ng1 (or shut ng1 down entirely) and watch where the VIP goes:
- service keepalived stop
- ip addr
- 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
- link/ether 00:0c:29:e8:90:0b brd ff:ff:ff:ff:ff:ff
- inet 192.168.1.22/24 brd 192.168.1.255 scope global eth0
- inet6 fe80::20c:29ff:fee8:900b/64 scope link
- valid_lft forever preferred_lft forever
3.2 Check whether ng2 has picked up the VIP:
- ip addr
- 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
- link/ether 00:0c:29:dd:00:77 brd ff:ff:ff:ff:ff:ff
- inet 192.168.1.23/24 brd 192.168.1.255 scope global eth0
- inet 192.168.1.18/32 scope global eth0
- inet6 fe80::20c:29ff:fedd:77/64 scope link
- valid_lft forever preferred_lft forever
The VIP has moved as expected, so the keepalived setup works.
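From any other host on the LAN, checking which MAC address answers for the VIP is another quick way to confirm the failover (assumes the iputils arping and an eth0 interface on that host):
- arping -I eth0 -c 3 192.168.1.18 ##the replying MAC should now be ng2's (00:0c:29:dd:00:77)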
4. Configuring Nginx as a reverse proxy
4.1 Edit the nginx configuration
- vi /etc/nginx/nginx.conf
- user nginx nginx; ##user and group nginx runs as
- worker_processes 2; ##number of worker processes
- error_log /var/log/nginx/error.log notice; ##error log
- pid /tmp/nginx.pid; ##pid file location
- worker_rlimit_nofile 65535; ##max open file descriptors per worker; pair it with ulimit -SHn
- events {
- use epoll; ##event model
- worker_connections 65536; ##max connections per worker process
- }
- http { ##http block
- include mime.types; ##pull in the MIME type map
- default_type application/octet-stream; ##default MIME type
- log_format main '$remote_addr - $remote_user [$time_local] "$request" '
- '$status $body_bytes_sent "$http_referer" '
- '"$http_user_agent" "$http_x_forwarded_for"';
- ##log format
- access_log /var/log/nginx/http.access.log main; ##access log
- client_max_body_size 20m; ##max request body size
- client_header_buffer_size 16k; ##buffer for client request headers
- large_client_header_buffers 4 16k; ##number and size of buffers for large request headers
- sendfile on; ##send files straight from kernel space to the TCP queue
- tcp_nopush on;
- tcp_nodelay on;
- keepalive_timeout 65; ##keep-alive timeout
- gzip on; ##enable compression
- gzip_min_length 1k; ##minimum response size to compress
- gzip_buffers 4 16k; ##compression buffers
- gzip_http_version 1.1; ##protocol version compression applies to
- gzip_comp_level 2; ##compression level
- gzip_types text/plain application/x-javascript text/css application/xml; ##types to compress
- gzip_vary on; ##let front-end caches store compressed pages
- upstream laoguang.me { ##define the back-end pool with the upstream module
- server 192.168.1.24:80 max_fails=3 fail_timeout=10s; ##back-end address; after max_fails errors within fail_timeout the server is taken out of rotation
- server 192.168.1.25:80 max_fails=3 fail_timeout=10s;
- }
- server {
- listen 80; ##listen port
- server_name 192.168.1.18; ##server name (the VIP)
- root html; ##document root
- index index.html index.htm; ##default index files
- #charset koi8-r;
- access_log logs/192.168.1.18.access.log main;
- ##access log for this server block
- location / {
- proxy_pass http://laoguang.me; ##proxy to the upstream pool defined above
- proxy_redirect off;
- proxy_set_header X-Real-IP $remote_addr;
- ##pass the real client IP to the back ends
- proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
- }
- location /nginx {
- access_log off;
- stub_status on; ##status page
- }
- error_page 500 502 503 504 /50x.html;
- location = /50x.html {
- root html;
- }
- }
- }
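Before restarting, it is worth letting nginx check the file; the config-test switch exists for exactly this:
- nginx -t ##reports whether the configuration parses cleanly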
4.2 Copy the file to ng2
- scp /etc/nginx/nginx.conf 192.168.1.23:/etc/nginx/
4.3 Test that the reverse proxy balances the load
Start httpd on lamp1 and lamp2:
- service httpd start
Restart nginx on ng1:
- service nginx restart
Browse to ng1's real IP and check that requests alternate between lamp1 and lamp2:
http://192.168.1.22
Test ng2 the same way. If both nodes balance the load correctly, continue; a quick loop like the sketch below also works.
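A minimal command-line version of the same check (assumes curl is installed; the lamp1/lamp2 bodies are the ones written in section 1):
- for i in 1 2 3 4 5 6; do curl -s http://192.168.1.22/; done ##should alternate lamp1 / lamp2
- for i in 1 2 3 4 5 6; do curl -s http://192.168.1.23/; done ##same test against ng2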
5. Testing keepalived and nginx together
The VIP 192.168.1.18 is currently on ng2 (it moved there in section 3; start keepalived on ng1 again if you left it stopped, and because of nopreempt the VIP stays on ng2). Browse to http://192.168.1.18 and check that requests still round-robin.
Then run service keepalived stop on ng2 and check whether http://192.168.1.18 still round-robins once the VIP has moved back to ng1.
Finally stop httpd on lamp1 (service httpd stop) and check whether http://192.168.1.18 returns errors; after max_fails failures nginx should drop lamp1 from the pool and keep serving from lamp2.
That completes the highly available web setup: there is no single point of failure, and no single failure takes the service down. One refinement worth considering is sketched below.
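One caveat: out of the box keepalived moves the VIP only when keepalived itself (or the whole node) goes down, so a crash of nginx alone would leave the VIP on a node that cannot serve. keepalived's vrrp_script/track_script can close that gap. A minimal sketch to add on both nodes (the check command and interval are assumptions, not part of the original config):
- vrrp_script chk_nginx {
- script "killall -0 nginx" ##exit code 0 while an nginx process is running
- interval 2 ##run the check every 2 seconds
- }
- vrrp_instance VI_1 {
- ... ##existing settings from section 2.3
- track_script {
- chk_nginx
- }
- }
With no weight set, a failing check drives the instance into FAULT state, which releases the VIP and lets the other node take over.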
This article comes from the blog "Free Linux, Share Linux"; please keep the original source link: http://laoguang.blog.51cto.com/6013350/1099103