HAProxy, like LVS, is in itself nothing more than a piece of load-balancing software.

HAProxy features:

  • HAProxy supports virtual hosts and can work at layer 4 and layer 7 (multiple network segments are supported).
  • Purely in terms of efficiency, HAProxy delivers better load-balancing speed than Nginx and also beats it at handling concurrent connections.
  • HAProxy is particularly suitable for very heavily loaded web sites, which usually also need session persistence or layer-7 processing.
  • Backend servers can be health-checked by URL.
  • Its operating mode lets it be integrated into your existing architecture simply and safely, while keeping your web servers from being exposed directly to the network.

HAProxy configuration file basics:
A haproxy configuration is divided into five sections (a minimal skeleton showing all five follows this list):

  1. global: process-wide parameters, usually tied to the operating system.
  2. defaults: default parameters that are inherited by the frontend, backend and listen sections.
  3. frontend: the virtual node that accepts incoming requests; a frontend can pick a specific backend directly, according to rules (ACLs).
  4. backend: the configuration of the real-server pool; one backend maps to one or more physical servers.
  5. listen: a combination of a frontend and a backend in a single section.
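
As a minimal sketch of how the five sections fit together (only the section keywords are HAProxy's own; the names and addresses below are placeholders, not taken from the lab):

global
    log 127.0.0.1 local2        # send logs to the local syslog daemon, facility local2
    maxconn 4000
    daemon

defaults
    mode http
    log global
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend main
    bind *:80
    default_backend app

backend app
    balance roundrobin
    server app1 192.0.2.11:80 check
    server app2 192.0.2.12:80 check

listen stats                    # "listen" combines frontend and backend in one section
    bind *:8080
    stats enable
    stats uri /status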

Installing HAProxy and doing the basic configuration:
LB server (HAProxy)

  • yum install -y haproxy
  • systemctl enable --now haproxy
  • systemctl start haproxy
  • systemctl status haproxy
  • vim /etc/haproxy/haproxy.cfg      # a lab-specific sketch follows this list
  • systemctl reload haproxy
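
The article does not reproduce the haproxy.cfg used here; a plausible minimal version for this lab, assuming the two backend servers at 172.25.0.3 and 172.25.0.4 that appear later in the article, would be:

frontend main
    bind *:80
    default_backend app

backend app
    balance roundrobin
    server app1 172.25.0.3:80 check
    server app2 172.25.0.4:80 check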

server3 and server4 (backend servers)

  • systemctl enable --now httpd      # see the sketch below for distinguishable test pages
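
To make the round-robin visible during the test, each backend should return a page that identifies it; a simple way (the page contents are illustrative only):

# on server3
echo "server3 - 172.25.0.3" > /var/www/html/index.html

# on server4
echo "server4 - 172.25.0.4" > /var/www/html/index.html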

Test: from a client, request the load balancer's address repeatedly; with roundrobin balancing the responses should alternate between server3 and server4.



HAProxy log configuration:

  • vim /etc/rsyslog.conf         # enable log reception for haproxy (see the sketch below)
  • systemctl stop rsyslog
  • systemctl start rsyslog
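
The exact rsyslog changes are not shown in the article; assuming the stock haproxy.cfg logs to 127.0.0.1 with facility local2, a typical /etc/rsyslog.conf fragment would be:

$ModLoad imudp                          # load the UDP input module
$UDPServerRun 514                       # accept syslog messages on udp/514
local2.*    /var/log/haproxy.log        # write haproxy's local2 facility to its own file
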
Making the load balancer answer with an error page itself when the whole backend pool is unreachable: configuration and test

  • vim /etc/haproxy/haproxy.cfg
    backend app
    balance roundrobin
    server app2 172.25.0.4:80 check
    server backup 127.0.0.1:8000 backup      # the backup server is the load balancer itself, reached on the loopback address (127.0.0.1), port 8000
  • echo "sorry, try again later"            # the text of the error page served by the local httpd
  • vim /etc/httpd/conf/httpd.conf           # run the local httpd on port 8000
    Listen 8000
    Test:
  • systemctl stop httpd                     # stop httpd on ser3 and ser4 so that no backend can be reached
  • Browse to 172.25.1.2
    The error page "sorry, try again later" is returned.
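
The echo above does not say where the page is written; a hedged sketch of the complete steps on the load balancer itself (the paths assume a stock httpd layout and are illustrative only):

echo "sorry, try again later" > /var/www/html/index.html          # page served by the local backup httpd
sed -i 's/^Listen 80$/Listen 8000/' /etc/httpd/conf/httpd.conf    # move the local httpd off port 80 so it does not clash with haproxy
systemctl restart httpd
curl http://127.0.0.1:8000/                                       # should return the sorry page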

Routing static and dynamic content to different backends: configuration and test
server4 (dynamic content host)

  • vim /var/www/html/index.php        # create a dynamic .php test page
    <?php
    phpinfo();
    ?>

server3 (static content host)
  • mkdir /var/www/html/images                  # create an images directory under the web document root
  • cp iso7.jpg /var/www/html/images/           # copy a picture into the images directory

LB server 2 (HAProxy)

  • vim /etc/haproxy/haproxy.cfg                    # a sketch of the enclosing frontend follows this list
    acl url_static path_beg -i /images              # matches request paths that begin with /images
    acl url_static path_end -i .jpg .gif .png       # matches request paths that end in .jpg, .gif or .png
    use_backend static if url_static                # requests matching url_static are answered by backend static
    default_backend app                             # everything else is answered by backend app
  • systemctl reload haproxy
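
For context, a sketch of how these lines sit inside a frontend, together with the static backend (the frontend name "main" is assumed; the backend static definition mirrors the one shown later in this article):

frontend main
    bind *:80
    acl url_static path_beg -i /images
    acl url_static path_end -i .jpg .gif .png
    use_backend static if url_static
    default_backend app

backend static
    balance roundrobin
    server static1 172.25.0.3:80 check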

Test: from the server1 client host, use Firefox to access the HAProxy load balancer, LB server2 (172.25.1.2)

  • 172.25.1.2/images/vs.jpg
  • 172.25.1.2/index.php

Routing read and write requests to different backends: configuration and test
server3 and server4 (backend servers)

  • mkdir /var/www/html/upload
  • chmod 777 /var/www/html/upload        # the upload directory stores the files that clients upload (write)
  • cp index.php /var/www/html/           # index.php is a ready-made dynamic page with a graphical upload/save form
  • ls
    index.php  upload

LB server 2 (HAProxy)

  • vim /etc/haproxy/haproxy.cfg
    acl read_request method GET             # read requests use the methods GET / HEAD
    acl read_request method HEAD
    acl write_request method PUT            # write requests use the methods PUT / POST
    acl write_request method POST
    use_backend static if read_request      # read requests hitting the load balancer go to backend static
    use_backend app if write_request        # write requests hitting the load balancer go to backend app
    default_backend static
    backend static
    balance roundrobin
    server static1 172.25.0.3:80 check

backend app
balance roundrobin
server app2 172.25.0.4:80 check
server backup 127.0.0.1:8000 backup
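
A quick way to see the method-based routing from a client (the exact form fields of the upload page are not shown in the article, so the POST body below is only a placeholder; what matters is the HTTP method):

curl -s  http://172.25.1.2/                             # GET  -> backend static (server3)
curl -sI http://172.25.1.2/                             # HEAD -> backend static (server3)
curl -s -X POST -d 'x=1' http://172.25.1.2/index.php    # POST -> backend app (server4)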

Test: upload a file through the index.php page; read requests (GET/HEAD) are answered by server3, while the POST carrying the upload is handled by server4.


Using Keepalived to make the load-balancing scheduler (LB server, HAProxy) highly available

High availability: two systems run the same service, and if one fails the other takes over automatically; this is what we call high availability.
How Keepalived works: it is built on the VRRP protocol.

  • It monitors other business processes on the server where keepalived runs.
  • Based on the state of those processes it decides whether a master/backup failover is needed; for this we can write a script that checks the business process (the check script is shown below).

LB server1 mirrors the configuration of LB server2 (HAProxy)

LB server1

  • systemctl start haproxy               # start HAProxy on ser1
  • systemctl start keepalived            # start keepalived on ser1
  • chmod +x /root/check_haproxy.sh       # make the check script executable
  • vim /etc/keepalived/keepalived.conf

LB server2

  • scp /etc/haproxy/haproxy.cfg ser1:/etc/haproxy/     # copy ser2's finished haproxy.cfg over to ser1
  • vim /root/check_haproxy.sh                          # shell script that checks whether the haproxy process is alive
    #!/bin/bash
    # query the haproxy service; if the status command fails (non-zero return), restart haproxy
    systemctl status haproxy &> /dev/null || systemctl restart haproxy &> /dev/null

    # killall -0 sends only "signal 0": it returns 0 if a haproxy process exists, non-zero if none is running
    killall -0 haproxy

    if [ $? -ne 0 ]; then      # haproxy is still not running, so stop keepalived and let the VIP fail over
        systemctl stop keepalived
    fi

  • chmod +x /root/check_haproxy.sh           # make the check script executable
  • scp /root/check_haproxy.sh ser1:/root/    # copy the check script to ser1
  • vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
 root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_script check_haproxy {             # vrrp_script runs a script that monitors the haproxy process
    script "/root/check_haproxy.sh"     # path of the check script installed above
    interval 2                          # check interval, in seconds
    weight 0
}

vrrp_instance VI_1 {
    state MASTER                        # role of this node
    interface eth0                      # interface the VIP is bound to
    virtual_router_id 51                # master and backup must share the same virtual router id
    priority 100
    advert_int 1
    authentication {
        auth_type PASS                  # authentication type
        auth_pass 1111                  # password
    }
    track_script {
        check_haproxy
    }
    virtual_ipaddress {
        172.25.0.100
    }
}
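
The article only shows one node's keepalived.conf; on the peer node the vrrp_instance would normally differ only in role and priority (a hedged sketch, the priority value is assumed):

vrrp_instance VI_1 {
    state BACKUP                # the peer node starts as backup
    interface eth0
    virtual_router_id 51        # must match the master
    priority 50                 # lower than the master's 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        check_haproxy
    }
    virtual_ipaddress {
        172.25.0.100
    }
}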

Test:
ser1 (MASTER)

  • systemctl stop haproxy
    ser2 takes over the VIP (BACKUP --> MASTER)

Using Pacemaker to make the load-balancing scheduler (LB server, HAProxy) highly available

  • systemctl stop keepalived          # stop the keepalived HA service on ser1 and ser2
    systemctl start pcsd               # start the pcsd service
    pcs cluster start --all            # start the cluster on all nodes
    pcs status
    pcs resource create vip ocf:heartbeat:IPaddr2 ip=172.25.0.100 op monitor interval=30s
    pcs resource create haproxy systemd:haproxy op monitor interval=60s      # check the haproxy service state every 60s
    pcs resource group add hagroup vip haproxy                               # group the resources; they start in the listed order, vip first, then haproxy
    pcs status

Test

  • pcs cluster stop server1
    pcs cluster start server1
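
When the cluster is stopped on server1, the hagroup (vip + haproxy) should fail over to server2; a quick way to confirm (the interface name eth0 is assumed):

pcs status                                    # hagroup should now show Started on server2
ip addr show eth0 | grep 172.25.0.100         # on server2: the VIP should be present
curl -s http://172.25.0.100/                  # the service keeps answering across the failover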

fence

How the FENCE tool works and what it is for

  • How FENCE works: when an unexpected problem causes the primary host to misbehave or crash, the standby first calls the FENCE device, which reboots the faulty host or isolates it from the network. Once the FENCE operation has succeeded, it reports back to the standby, which then takes over the primary's services and resources. Through the FENCE device, the resources held by the faulty node are released, guaranteeing that resources and services always run on exactly one node.
  • What FENCE is for: a FENCE device prevents the "split-brain" condition that unforeseeable failures can otherwise cause.

Below, a fence device is applied to handle the split-brain problem caused by a host misbehaving or crashing for unexpected reasons.

Configuring the fence device
On the ser1 and ser2 virtual machines:

  • yum install -y fence-virtd
  • systemctl enable --now fence_virtd.service
  • mkdir -p /etc/cluster        # directory that will hold the fence authentication key file

On the physical host

  • yum search fence        # search for fence-related packages
    fence-virtd-0.4.0-7.el8.x86_64
    fence-virtd-libvirt-0.4.0-7.el8.x86_64
    fence-virtd-multicast-0.4.0-7.el8.x86_64
  • yum install -y fence-virtd fence-virtd-libvirt fence-virtd-multicast
  • systemctl enable --now fence_virtd.service
  • rpm -qa | grep fence        # list the installed fence-related packages
    fence-virtd-0.4.0-7.el8.x86_64
    fence-virtd-libvirt-0.4.0-7.el8.x86_64
    fence-virtd-multicast-0.4.0-7.el8.x86_64
  • systemctl disable --now firewalld        # turn off the firewall
  • fence_virtd -c
Module search path [/usr/lib64/fence-virt]:              # Enter

Available backends:
    libvirt 0.3
Available listeners:
    multicast 1.2

Listener modules are responsible for accepting requests
from fencing clients.

Listener module [multicast]:                             # Enter

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]:                        # Enter

Using ipv4 as family.

Multicast IP Port [1229]:                                 # Enter

Setting a preferred interface causes fence_virtd to listen only
on that interface.  Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this *must* be set (typically to virbr0).
Set to 'none' for no interface.

Interface [virbr0]:br0                                    # the interface depends on your setup; here the physical host and the VMs are bridged via br0

The key file is the shared key information which is used to
authenticate fencing requests.  The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

Key File [/etc/cluster/fence_xvm.key]:                    # Enter; the key file name and path must be identical on every host and guest in the cluster

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

Backend module [libvirt]:                                  # Enter

The libvirt backend module is designed for single desktops or
servers.  Do not use in environments where virtual machines
may be migrated between hosts.

Libvirt URI [qemu:///system]:                             # Enter

Configuration complete.

=== Begin Configuration ===
backends {
 libvirt {
  uri = "qemu:///system";
 }

}

listeners {
 multicast {
  port = "1229";
  family = "ipv4";
  interface = "br0";
  address = "225.0.0.12";
  key_file = "/etc/cluster/fence_xvm.key";
 }

}

fence_virtd {
 module_path = "/usr/lib64/fence-virt";
 backend = "libvirt";
 listener = "multicast";
}

=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y

Creating the fence authentication key file

  • dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1        # write the key directly to the path chosen in fence_virtd -c
1+0 records in
1+0 records out
128 bytes copied, 6.2171e-05 s, 2.1 MB/s
  • scp /etc/cluster/fence_xvm.key ser1:/etc/cluster/ # copy the key created on the physical host to both cluster nodes (ser1 and ser2)
  • scp /etc/cluster/fence_xvm.key ser2:/etc/cluster/
  • systemctl restart fence_virtd.service # restart the fence_virtd service
  • netstat -anlpu|grep 1229
    udp 0 0 0.0.0.0:1229 0.0.0.0:* 7105/fence_virtd
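
From either cluster node, fence_xvm can be used to check that the daemon on the physical host is reachable and that the keys match (the domain names vm1/vm2 follow the pcmk_host_map used below):

fence_xvm -o list              # lists the libvirt domains known to fence_virtd (should include vm1 and vm2)
fence_xvm -o status -H vm1     # queries the power status of one domain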



Cluster settings

  • stonith_admin -a fence_xvm -M        # confirm that the fence_xvm agent is available (prints its metadata)
  • pcs property set stonith-enabled=true
  • pcs stonith create vmfence fence_xvm pcmk_host_map="server1:vm1;server2:vm2" op monitor interval=60s        # add the vmfence stonith resource; pcmk_host_map maps cluster node names to libvirt domain names
    Test:
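
A common way to exercise the fence device is to crash one node and watch it be power-fenced (a sketch; the expectation is that server1 reboots and its resources move to server2):

# on server1: force an immediate kernel crash
echo c > /proc/sysrq-trigger

# on server2: watch the cluster fence server1 and take over the resources
pcs status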

Nginx

Advantages of Nginx:
1. It works at OSI layer 7 and can apply traffic-splitting policies to HTTP applications, for example by domain name or directory structure. Its regular expressions are more powerful and flexible than HAProxy's.
2. Nginx depends very little on the network; in theory, as long as the backends can be pinged, load balancing works. This is one of its strengths.
3. Nginx is fairly simple to install and configure, and convenient to test.
4. It handles heavy load reliably and can generally sustain tens of thousands of concurrent connections.
5. Nginx can detect failures inside a server through its port, for example via the status codes or timeouts returned when a page is processed, and it resubmits failed requests to another node.
6. Nginx is not only an excellent load balancer/reverse proxy, it is also a powerful web application server. The LNMP stack is now very popular and rivals LAMP; for static pages, and especially under high concurrency, Nginx has the edge over Apache.
7. Nginx has become increasingly mature as a reverse-proxy cache and is faster than the traditional Squid server; if you need one, it is worth considering as a reverse-proxy accelerator.

nginx download site: http://nginx.org/en/download.html


Installing nginx-1.18.0

  • tar zxf nginx-1.18.0.tar.gz # unpack the source archive
  • cd nginx-1.18.0
  • ./configure --prefix=/usr/local/nginx --with-http_ssl_module
checking for OS
 + Linux 3.10.0-957.el7.x86_64 x86_64
checking for C compiler ... not found

./configure: error: C compiler cc is not found                 # error: gcc needs to be installed
  • yum install -y gcc
  • ./configure --prefix=/usr/local/nginx --with-http_ssl_module
./configure: error: the HTTP rewrite module requires the PCRE library.
You can either disable the module by using --without-http_rewrite_module
option, or install the PCRE library into the system, or build the PCRE library
statically from the source with nginx by using --with-pcre=<path> option.            # error: pcre-devel needs to be installed
  • yum install -y pcre-devel
  • ./configure --prefix=/usr/local/nginx --with-http_ssl_module
./configure: error: SSL modules require the OpenSSL library.
You can either do not enable the modules, or install the OpenSSL library
into the system, or build the OpenSSL library statically from the source
with nginx by using --with-openssl=<path> option.                                   # error: openssl-devel needs to be installed
  • yum install -y openssl-devel
  • ./configure --prefix=/usr/local/nginx --with-http_ssl_module
  • make && make install
    # installation complete
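
A quick sanity check after the build (paths follow the --prefix used above; the symlink is just an optional convenience):

/usr/local/nginx/sbin/nginx -V                        # confirm the version and the compiled-in http_ssl_module
ln -s /usr/local/nginx/sbin/nginx /usr/local/sbin/    # optional: put nginx on the PATH
nginx -t                                              # test the default configuration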


Configuring nginx

  • vim /usr/lib/systemd/system/nginx.service
[Unit]
Description=The NGINX HTTP and reverse proxy server
After=syslog.target network-online.target remote-fs.target nss-lookup.target
Wants=network-online.target

[Service]
Type=forking
PIDFile=/usr/local/nginx/logs/nginx.pid
ExecStartPre=/usr/local/nginx/sbin/nginx -t
ExecStart=/usr/local/nginx/sbin/nginx
ExecReload=/usr/local/nginx/sbin/nginx -s reload
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true

[Install]
WantedBy=multi-user.target
  • vim /etc/security/limits.conf
nginx - nofile 65535        # raise the open-file limit (soft and hard) for the nginx user
  • useradd -M -d /usr/local/nginx -s /sbin/nologin nginx # create a dedicated nginx user to run nginx (no home directory, no login shell)
  • vim /usr/local/nginx/conf/nginx.conf
http {

 upstream westos {
        server 172.25.0.3;
        server 172.25.0.4;
     }
server {
  listen 80;
  server_name www.westos.org;

  location / {
              proxy_pass http://westos;
         }
 }
}

A second variant of the same configuration, adding ip_hash for source-IP session persistence and name-based virtual hosts:

upstream westos {
        ip_hash;
        server 172.25.0.3;
        server 172.25.0.4;
        }

server {
        listen 80;
        server_name www1.westos.org;

        location / {
        #       root /www1;
        #       index index.html;
                proxy_pass http://westos;
        }
}

server {
        listen 80;
        server_name www2.westos.org;

        location / {
        #        root /www2;
        #        index index.html;
                proxy_pass http://redhat;        # assumes a second upstream block named 'redhat', not shown here
        }
}

Starting nginx

  • systemctl daemon-reload
  • systemctl start nginx
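
Once nginx is running, the name-based virtual hosts can be checked without touching DNS by resolving the names on the curl command line (the nginx server's IP is left as a placeholder here, since the article does not state it):

curl -s --resolve www.westos.org:80:<nginx_server_ip>  http://www.westos.org/
curl -s --resolve www1.westos.org:80:<nginx_server_ip> http://www1.westos.org/   # with ip_hash, repeated requests from one client stick to one backend
curl -s --resolve www2.westos.org:80:<nginx_server_ip> http://www2.westos.org/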