To meet a company requirement, a Nacos cluster needs to be deployed with docker swarm. This exercise uses three Tencent Cloud ECS instances.

1. Machine preparation

Hosts: 192.168.3.75, 192.168.3.94, 192.168.3.142
OS version: CentOS Linux release 7.9.2009 (Core)
Resources: 4 vCPUs / 8 GB RAM

2. Install docker and docker-compose on all three machines

1) Run the installation script I wrote (the script itself is listed in section 6)
[root@docker-swarm-3 ~]# sh /data/script/init_docker_env.sh /data/docker

2) Verify the docker and docker-compose environment
[root@docker-swarm-3 ~]# docker info|grep Ver
 Server Version: 19.03.9
 Kernel Version: 3.10.0-1160.31.1.el7.x86_64
[root@docker-swarm-3 ~]# docker-compose -v
docker-compose version 1.29.2, build 5becea4c

3. Create the docker swarm

1) Create a custom overlay network
[root@docker-swarm-1 nacos]# docker network create --scope=swarm --driver=overlay --subnet 172.20.1.0/24 --gateway 172.20.1.1 srm

2) Inspect the network
[root@docker-swarm-1 ~]# docker network inspect srm

3) Initialize the swarm (run on 192.168.3.75, which becomes the manager node)
[root@docker-swarm-1 ~]# docker  swarm init --advertise-addr 192.168.3.75
Swarm initialized: current node (bavklj3tv6leadb0kz31qmkpd) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-16knypccbz8p05pvo37jerehf9oaft9n7pb892bf0qahjkev0l-e4m336cxpl9npw19lqtijjlv1 192.168.3.75:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

4) Join the worker nodes to the cluster (192.168.3.94 and 192.168.3.142)
[root@docker-swarm-2 ~]# docker swarm join --token SWMTKN-1-16knypccbz8p05pvo37jerehf9oaft9n7pb892bf0qahjkev0l-e4m336cxpl9npw19lqtijjlv1 192.168.3.75:2377
This node joined a swarm as a worker.
[root@docker-swarm-2 ~]# 

[root@docker-swarm-3 ~]# docker swarm join --token SWMTKN-1-16knypccbz8p05pvo37jerehf9oaft9n7pb892bf0qahjkev0l-e4m336cxpl9npw19lqtijjlv1 192.168.3.75:2377
This node joined a swarm as a worker.
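
If the join token is needed again later (for example to add a fourth node), it can be reprinted on the manager at any time:

docker swarm join-token worker     #prints the full worker join command again
docker swarm join-token manager    #the manager variant, should another manager ever be added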

All subsequent operations are performed on the manager node (192.168.3.75).

5) Check the cluster node status
[root@docker-swarm-1 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
bavklj3tv6leadb0kz31qmkpd *   docker-swarm-1      Ready               Active              Leader              19.03.9
ru7mn4epe59ndch095b0gn2ds     docker-swarm-2      Ready               Active                                  19.03.9
ra3zz0qr935tki0d3o9vncsub     docker-swarm-3      Ready               Active                                  19.03.9

4. Deploy the Nacos cluster

1) Label the three cluster nodes so that swarm's service scheduling can be steered manually through placement constraints
192.168.3.75:
[root@docker-swarm-1 ~]# docker node update --label-add env=docker-server-1  docker-swarm-1

192.168.3.94
[root@docker-swarm-2 ~]# docker node update --label-add env=docker-server-2  docker-swarm-2

192.168.3.142
[root@docker-swarm-3 ~]# docker node update --label-add env=docker-server-3  docker-swarm-3
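
Before deploying anything, the labels can be read back from the manager to confirm they took effect; for example:

docker node inspect -f '{{ .Spec.Labels }}' docker-swarm-1    #expect map[env:docker-server-1]
docker node inspect -f '{{ .Spec.Labels }}' docker-swarm-2    #expect map[env:docker-server-2]
docker node inspect -f '{{ .Spec.Labels }}' docker-swarm-3    #expect map[env:docker-server-3]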

2) Deploy the Nacos stack
[root@docker-swarm-1 nacos]# docker stack deploy -c docker_nacos.yml nacos
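
Right after deploying, it is worth confirming that each replica landed on the node its placement constraint points at; for example:

docker stack ps nacos    #lists every task together with the node it was scheduled onto and its current state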

3) Check the stack status
[root@docker-swarm-1 nacos]# docker stack services nacos
ID                  NAME                MODE                REPLICAS            IMAGE                       PORTS
pjrlejnh6t4a        nacos_nacos2        replicated          1/1                 nacos/nacos-server:latest   
riy2xue9r0sj        nacos_nacos3        replicated          1/1                 nacos/nacos-server:latest   
uf2itchbfmzs        nacos_mysql         replicated          1/1                 mysql:5.7.33                *:3306->3306/tcp
vetcfb8bkpli        nacos_nacos1        replicated          1/1                 nacos/nacos-server:latest   
[root@docker-swarm-1 nacos]# 

4) To follow the logs of a specific service, run
[root@docker-swarm-1 nacos]# docker service logs -f nacos_nacos1
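
If a service keeps restarting, the untruncated task history usually shows why; for example:

docker service ps --no-trunc nacos_nacos1      #failed task attempts with their full error messages
docker service logs --tail 200 nacos_nacos1    #only the most recent log lines instead of the whole stream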

5. The docker_nacos.yml configuration file

[root@docker-swarm-1 nacos]# cat docker_nacos.yml 
version: '3.8'

services:



  nacos1:
    container_name: nacos1
    image: nacos/nacos-server:latest
    hostname: nacos1
    restart: always
    ports:
      - target: 8848
        published: 8848
        protocol: tcp
        mode: host  #use host mode (the default is ingress, which is more flexible and can be kept if it fits your needs; here Nacos is kept off swarm's ingress load balancing, the nodes reach each other over the private network on the Nacos port, and external access goes through nginx)
    volumes:
      - cluster1_logs:/home/nacos/logs  #named docker volume for the Nacos logs
    environment:
      MODE: cluster
      PREFER_HOST_MODE: hostname
      NACOS_SERVERS: 192.168.3.75:8848 192.168.3.94:8848 192.168.3.142:8848
      NACOS_SERVER_IP: 192.168.3.75
      NACOS_SERVER_PORT: 8848
      NACOS_AUTH_ENABLE: 'true'     #the console login is disabled by default since 1.2.0, so enable auth explicitly
      MYSQL_SERVICE_HOST: mysql
      MYSQL_SERVICE_DB_NAME: nacos_devtest
      MYSQL_SERVICE_PORT: 3306
      MYSQL_SERVICE_USER: nacos
      MYSQL_SERVICE_PASSWORD: 123456
    deploy:
      replicas: 1       #run exactly one replica of this service
      placement:
        constraints:
          - node.labels.env==docker-server-1
      restart_policy:
        condition: on-failure
    depends_on:
    - mysql
    networks:
       - srm



  nacos2:
    container_name: nacos2
    image: nacos/nacos-server:latest
    restart: always
    hostname: nacos2
    ports:
      - target: 8848
        published: 8848
        protocol: tcp
        mode: host
    volumes:
        - cluster2_logs:/home/nacos/logs
    environment:
      MODE: cluster
      PREFER_HOST_MODE: hostname
      NACOS_SERVERS: 192.168.3.75:8848 192.168.3.94:8848 192.168.3.142:8848
      NACOS_SERVER_IP: 192.168.3.94
      NACOS_SERVER_PORT: 8848
      NACOS_AUTH_ENABLE: 'true'
      MYSQL_SERVICE_HOST: mysql
      MYSQL_SERVICE_DB_NAME: nacos_devtest
      MYSQL_SERVICE_PORT: 3306
      MYSQL_SERVICE_USER: nacos
      MYSQL_SERVICE_PASSWORD: 123456
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.labels.env==docker-server-2
      restart_policy:
        condition: on-failure
    depends_on:
    - mysql
    networks:
       - srm



  nacos3:
    container_name: nacos3
    image: nacos/nacos-server:latest
    restart: always
    hostname: nacos3
    ports:
      - target: 8848
        published: 8848
        protocol: tcp
        mode: host
    volumes:
        - cluster3_logs:/home/nacos/logs
    environment:
      MODE: cluster
      PREFER_HOST_MODE: hostname
      NACOS_SERVERS: 192.168.3.75:8848 192.168.3.94:8848 192.168.3.142:8848 
      NACOS_SERVER_IP: 192.168.3.142
      NACOS_SERVER_PORT: 8848
      NACOS_AUTH_ENABLE: 'true'
      MYSQL_SERVICE_HOST: mysql
      MYSQL_SERVICE_DB_NAME: nacos_devtest
      MYSQL_SERVICE_PORT: 3306
      MYSQL_SERVICE_USER: nacos
      MYSQL_SERVICE_PASSWORD: 123456
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.labels.env==docker-server-3
      restart_policy:
        condition: on-failure
    depends_on:
    - mysql
    networks:
       - srm


  mysql:
    image: mysql:5.7.33
    restart: always
    container_name: mysql
    hostname: mysql
    ports:
      - 3306:3306
    volumes:
      - /data/software/nacos/mysql/data:/var/lib/mysql
      - /etc/localtime:/etc/localtime:ro
      - /etc/my.cnf:/etc/mysql/mysql.conf.d/my.cnf
    environment:
      TZ: Asia/Shanghai
      MYSQL_ROOT_PASSWORD: sonar
      MYSQL_DATABASE: nacos_devtest
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.labels.env==docker-server-1
      restart_policy:
        condition: on-failure
    networks:
       - srm

volumes:
  cluster1_logs:
  cluster2_logs:
  cluster3_logs:


networks:
  srm:
    external: true
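
One thing the compose file relies on but does not create: the nacos database account referenced by MYSQL_SERVICE_USER/MYSQL_SERVICE_PASSWORD and the Nacos table schema inside nacos_devtest. A rough sketch of how they could be initialized on docker-swarm-1 once the mysql task is running (the schema file name and path /data/software/nacos/nacos-mysql.sql are assumptions; use the SQL file shipped with your Nacos version):

MYSQL_CID=$(docker ps -qf name=nacos_mysql)    #the mysql container running on this node

#create the account used by MYSQL_SERVICE_USER / MYSQL_SERVICE_PASSWORD above
docker exec -i "$MYSQL_CID" mysql -uroot -psonar <<'SQL'
CREATE USER IF NOT EXISTS 'nacos'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON nacos_devtest.* TO 'nacos'@'%';
FLUSH PRIVILEGES;
SQL

#import the Nacos table schema (the path below is an assumption; take the file from your Nacos release)
docker exec -i "$MYSQL_CID" mysql -uroot -psonar nacos_devtest < /data/software/nacos/nacos-mysql.sql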

6. The docker & docker-compose installation script

#!/bin/bash
#2021-09-01 14:50:59
#qqt-ssl

INSTALL_DOCKER(){

DOCKER_DIR=$1

#install docker

        if ! which docker &>/dev/null;then
                install -d /data/{script,software}
                cd /data/software
                wget -c https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz
                tar xf docker-19.03.9.tgz
                mv docker/* /usr/bin/

#write the systemd unit file
                cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF

#write the docker daemon config
                install -d /etc/docker
                tee /etc/docker/daemon.json<<EOF
{
"graph": "$DOCKER_DIR",
"bip": "172.31.0.1/24",
"registry-mirrors":["http://harbor.com","https://registry.docker-cn.com","http://hub-mirror.c.163.com","https://docker.mirrors.ustc.edu.cn"],
"insecure-registries": ["registry.access.rehat.com","quay.io","harbor.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

#start docker
                systemctl daemon-reload
                systemctl start docker
        fi


}


INSTALL_DOCKER_COMPOSE(){

#install docker-compose

        if ! which docker-compose &>/dev/null;then
                curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
                chmod u+x /usr/local/bin/docker-compose
        fi

}


FUNC_MAIN(){

#check that the yum repo works and that the download hosts are reachable

        [ $# -ne 1 ] && {
                echo "请输入docker数据目录"
                return 1
        }

        #make sure nc is available for the connectivity checks below
        if ! rpm -q nmap-ncat >/dev/null 2>&1;then
                if ! yum install nmap-ncat -y -q;then
                        printf "The yum repo is not usable. Please check the repo configuration.\n"
                        return 1
                fi
        fi

        #docker binaries come from download.docker.com, docker-compose from github.com
        nc -w 2 -z -v download.docker.com 443
        FLAG=$?
        [ $FLAG -eq 0 ] && {
                INSTALL_DOCKER $1
        }
        nc -w 2 -z -v github.com 443
        FLAG=$?
        [ $FLAG -eq 0 ] && {
                INSTALL_DOCKER_COMPOSE
        }
}

FUNC_MAIN "$@"

7. Access from a browser

(screenshot: the Nacos console in the browser)
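
The console should now be reachable on any of the three nodes at http://<node IP>:8848/nacos; unless it has been changed, the default Nacos console account is nacos / nacos. A quick reachability check from the shell:

curl -sI http://192.168.3.75:8848/nacos/ | head -n 1    #a 200/302 status line means the console is answering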

8. Key notes

1) Create the database data directory on the host beforehand (see the example after the snippet below)

volumes:
  - /data/software/nacos/mysql/data:/var/lib/mysql
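
For example, on docker-swarm-1 (the node the mysql service is constrained to):

install -d /data/software/nacos/mysql/data    #host path backing the bind mount above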

2) Node labels plus placement constraints let you steer scheduling by hand, similar in spirit to Kubernetes scheduling controls such as nodeSelector/taints (a verification example follows the snippet)

placement:
  constraints:
    - node.labels.env==docker-server-3
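
To read back which constraint a running service actually carries, the service spec can be queried; a small sketch:

docker service inspect -f '{{ .Spec.TaskTemplate.Placement.Constraints }}' nacos_nacos3
#expected output: [node.labels.env==docker-server-3]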

3) The mode field below supports two modes

ports:
  - target: 8848
    published: 8848
    protocol: tcp
    mode: host

Explanation: the default is ingress, i.e. swarm's routing-mesh load balancing; a request to the published port on any cluster node reaches a service task, much like a Kubernetes NodePort Service. With host mode, the port is only published on the node where the task actually runs, so a given container is reached via that node's address plus the published port (the sketch below illustrates the difference).
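
The practical difference is easy to see with curl; a sketch, given the host-mode config used above:

#host mode (this deployment): each node's 8848 is answered by the Nacos instance running on that very node
curl -sI http://192.168.3.75:8848/nacos/  | head -n 1    #served by nacos1
curl -sI http://192.168.3.142:8848/nacos/ | head -n 1    #served by nacos3

#ingress mode (the hypothetical alternative): the same published port on any node would be
#load-balanced by the routing mesh across all replicas, no matter which node actually runs them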