1. Standalone Consul installation
1.1 Installation
(1) Download and unzip Consul
cd /opt/
mkdir consul
chmod 777 consul
cd consul
wget https://releases.hashicorp.com/consul/1.3.0/consul_1.3.0_linux_amd64.zip
unzip consul_1.3.0_linux_amd64.zip
cp consul /usr/local/bin/
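Optionally, you can also verify the downloaded zip against HashiCorp's published checksum file for this release (the grep filter below is just an assumed way of picking out the linux_amd64 entry):
wget https://releases.hashicorp.com/consul/1.3.0/consul_1.3.0_SHA256SUMS
grep linux_amd64 consul_1.3.0_SHA256SUMS | sha256sum -c -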
(2) Verify that the installation succeeded
consul
consul version
# start in the foreground
consul agent -dev -ui -node=consul-dev -client=192.168.1.210
(3) Open http://IP:8500/ in a browser; if the Consul web UI appears, the installation succeeded.
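The same check can be done from the command line through Consul's HTTP API; /v1/status/leader is a standard Consul endpoint, and the address below assumes the -client=192.168.1.210 setting used above:
curl http://192.168.1.210:8500/v1/status/leader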
1.2 Configure Consul to start on boot
(1) Under /usr/lib/systemd/system/, create a new service file named consul.service:
[Unit]
Description=consul
After=network.target
[Service]
ExecStart=/usr/local/consul/start.sh
KillSignal=SIGTERM
[Install]
WantedBy=multi-user.target
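As written, systemd will not restart Consul if the process exits unexpectedly. If that behaviour is wanted, two standard systemd directives can optionally be added to the [Service] section (not part of the original setup):
Restart=on-failure
RestartSec=5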
(2) Create /usr/local/consul/start.sh with the following content:
#!/bin/bash
consul agent -dev -ui -node=consul-dev -client=192.168.1.210
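The script must be executable for systemd to run it; it may also be safer to call the binary by its full path (/usr/local/bin/consul) inside the script so it does not depend on systemd's PATH:
chmod +x /usr/local/consul/start.sh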
(3) Run the following commands:
systemctl start consul
systemctl enable consul
systemctl status consul
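If the service fails to start, the agent's output can be followed with the standard systemd log tool:
journalctl -u consul -f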
2. Consul cluster installation
2.1 Preparation
Three machines:
vm-a 192.168.1.211 centos7
vm-b 192.168.1.212 centos7
vm-c 192.168.1.213 centos7
Download Consul from the official site (https://www.consul.io/downloads.html), then on each machine:
unzip consul_1.3.0_linux_amd64.zip
mv consul /usr/local/bin/
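The machines must be able to reach each other on Consul's default ports (8300 server RPC, 8301/8302 Serf LAN/WAN, 8500 HTTP, 8600 DNS). If firewalld is enabled on CentOS 7, one possible way to open them (adapt to your own policy) is:
firewall-cmd --permanent --add-port=8300-8302/tcp --add-port=8301-8302/udp --add-port=8500/tcp --add-port=8600/tcp --add-port=8600/udp
firewall-cmd --reload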
2.2 Starting the cluster
Run the following commands on 192.168.1.211, 192.168.1.212, and 192.168.1.213 respectively:
consul agent -server -bootstrap-expect=3 -data-dir=/tmp/consul -node=192.168.1.211 -bind=192.168.1.211 -client=0.0.0.0 -datacenter=dc1 -ui
consul agent -server -bootstrap-expect=3 -data-dir=/tmp/consul -node=192.168.1.212 -bind=192.168.1.212 -client=0.0.0.0 -datacenter=dc1 -ui
consul agent -server -bootstrap-expect=3 -data-dir=/tmp/consul -node=192.168.1.213 -bind=192.168.1.213 -client=0.0.0.0 -datacenter=dc1 -ui
At this point all three machines will print:
2019/03/20 10:57:36 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
2019/03/20 10:57:36 [INFO] agent: started state syncer
2019/03/20 10:57:44 [ERR] agent: failed to sync remote state: No cluster leader
At this point the three machines have not yet joined each other, so they do not form a cluster; Consul cannot work properly on any of them because no leader has been elected.
Forming a Consul cluster from the three machines
How joining works: when a Consul agent starts, it knows nothing about any other node. To learn about the other nodes in a cluster, the agent must join an existing cluster, and to do so it only needs to know a single node in that cluster. Once it has joined, it gossips with that member and quickly discovers the other nodes. A Consul agent can join any other agent, not just agents running in server mode.
Log in to the second and third machines and run the following commands to join them to the cluster:
On 192.168.1.212, join 192.168.1.211:
[root@localhost consul-cluster]# consul join 192.168.1.211
Successfully joined cluster by contacting 1 nodes.
[root@localhost consul-cluster]#
On 192.168.1.213, join 192.168.1.211:
[root@localhost consul-cluster]# consul join 192.168.1.211
Successfully joined cluster by contacting 1 nodes.
[root@localhost consul-cluster]#
Shortly afterwards, all three machines will print:
2019/03/20 10:59:12 [INFO] raft: Added peer d89335fd-cfb8-1fc0-3902-b847e125fa2c, starting replication
2019/03/20 10:59:12 [INFO] consul: cluster leadership acquired
2019/03/20 10:59:12 [INFO] consul: New leader elected: 192.168.1.211
This shows that a leader has been elected and the cluster is working normally. You can now visit http://192.168.1.211:8500/ to open the web UI.
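As an alternative to running consul join by hand, each agent can be told to join automatically at startup with Consul's -retry-join flag; for example, the start command on 192.168.1.212 would become (a sketch, not part of the original steps):
consul agent -server -bootstrap-expect=3 -data-dir=/tmp/consul -node=192.168.1.212 -bind=192.168.1.212 -client=0.0.0.0 -datacenter=dc1 -ui -retry-join=192.168.1.211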
Checking cluster status
List the cluster members:
[root@localhost ~]# consul members
Node           Address             Status  Type    Build  Protocol  DC   Segment
192.168.1.211  192.168.1.211:8301  alive   server  1.3.0  2         dc1  <all>
192.168.1.212  192.168.1.212:8301  alive   server  1.3.0  2         dc1  <all>
192.168.1.213  192.168.1.213:8301  alive   server  1.3.0  2         dc1  <all>
Check the Raft peers:
[root@localhost ~]# consul operator raft list-peers
Node           ID                                    Address             State     Voter  RaftProtocol
192.168.1.211  9caa4a7b-5d40-a754-a6d2-e993ebd54e1e  192.168.1.211:8300  follower  true   3
192.168.1.212  6227c834-9e91-d756-5335-5e7587496235  192.168.1.212:8300  leader    true   3
192.168.1.213  98ba9fdc-f94f-7df2-f7db-70f234937526  192.168.1.213:8300  follower  true   3
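Consul also serves DNS on port 8600, which gives another quick way to confirm the servers are registered (dig comes from the bind-utils package on CentOS 7; consul.service.consul is Consul's built-in name for its own servers):
dig @127.0.0.1 -p 8600 consul.service.consul SRV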
Note: when restarting the cluster or after killing consul, you need to delete all files under /tmp/consul first:
cd /tmp/consul
rm -rf *
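Instead of killing the process, a node can also leave the cluster gracefully with the standard consul leave subcommand, which lets the remaining servers update their membership cleanly:
consul leave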