Installing ZooKeeper
Introduction
ZK can be deployed in three main ways: standalone, pseudo-cluster, and full cluster.
- Standalone mode: if the machine goes down, the service becomes unavailable (single point of failure).
- Pseudo-cluster mode: several instances run on the same machine on different ports to imitate a multi-server deployment; it is mostly used for demos and in production still suffers from the same single-point problem as standalone mode.
- Cluster mode: provides high availability for the ZK service; the service keeps running as long as fewer than half of the nodes are down. For example, a 3-node ensemble tolerates 1 failure and a 5-node ensemble tolerates 2. Note: the node count is best kept at an odd number of at least 3, and should not be too large, because with too many nodes leader election and data synchronization drag down the performance of the whole service.
For the reasoning behind this, see my previous article on the basic principles of ZK.
Installation and Deployment
I recommend installing a ZK version below 3.5 (3.4.14 is fine, for example). Cluster deployment with 3.5 has some pitfalls and kept failing for me; if anyone can explain why, please let me know.
Below I cover the two most common ways to install and deploy ZK: I. conventional deployment and II. Docker deployment.
I. Conventional Deployment
(1) Standalone mode
ZK download address: http://mirrors.hust.edu.cn/apache/zookeeper/
Upload the archive to the server
sftp:/floatuse> put D:\download\chrome\zookeeper-3.4.14.tar.gz /floatuse
Uploading zookeeper-3.4.14.tar.gz to remote:/floatuse/zookeeper-3.4.14.tar.gz
[root@localhost floatuse]# ls
test01 test02 zookeeper-3.4.14.tar.gz
// Extract the archive
[root@localhost floatuse]# tar -zxvf zookeeper-3.4.14.tar.gz
[root@localhost floatuse]# ls
test01 test02 zookeeper-3.4.14 zookeeper-3.4.14.tar.gz
// Go into the conf directory and rename zoo_sample.cfg to zoo.cfg
[root@localhost zookeeper-3.4.14]# cd conf/
[root@localhost conf]# ls
configuration.xsl log4j.properties zoo_sample.cfg
[root@localhost conf]# mv zoo_sample.cfg zoo.cfg
[root@localhost conf]# ls
configuration.xsl log4j.properties zoo.cfg
zoo.cfg configuration parameters
Changes made:
- dataDir=/floatuse/zookeeper/data
- dataLogDir=/floatuse/zookeeper/dataLog
- clientPort=2191
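Before editing the config, it does no harm to create the data and log directories referenced in it up front. A small preparatory step, using the paths chosen above:
[root@localhost conf]# mkdir -p /floatuse/zookeeper/data /floatuse/zookeeper/dataLog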
[root@localhost conf]# vi zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/floatuse/zookeeper/data
dataLogDir=/floatuse/zookeeper/dataLog
# the port at which the clients will connect
clientPort=2191
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
Parameter descriptions
- tickTime: the basic time unit, in milliseconds, used for heartbeats between ZooKeeper servers and between clients and servers; one heartbeat is sent every tickTime.
- initLimit: the number of ticks a follower may take to connect to and sync with the leader during the initial synchronization phase (here 10 × 2000 ms = 20 s).
- syncLimit: the number of ticks that may pass between sending a request and receiving an acknowledgement (here 5 × 2000 ms = 10 s).
- dataDir: the directory where ZooKeeper stores its data snapshots; by default the transaction log files are written here as well. (I changed the path here.)
- dataLogDir: the directory where ZooKeeper writes its transaction log files. (I changed the path here.)
- clientPort: the port clients connect to; ZooKeeper listens on it and accepts client requests. (The default is 2181; since a ZK container with that port already mapped is running on this VM, I changed it to 2191.)
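Over time snapshots and transaction logs accumulate; the autopurge options that are commented out in the sample config above can be enabled to clean them up automatically. A minimal sketch of what that would look like in zoo.cfg:
# keep the 3 most recent snapshots and purge older ones every hour
autopurge.snapRetainCount=3
autopurge.purgeInterval=1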
// Start the server
[root@localhost zookeeper-3.4.14]# ./bin/zkServer.sh start
// Connect with the CLI client (note: zkCli.sh connects to localhost:2181 by default; use ./bin/zkCli.sh -server localhost:2191 to reach the clientPort configured above)
[root@localhost zookeeper-3.4.14]# ./bin/zkCli.sh
...
// Create a znode
[zk: localhost:2181(CONNECTED) 3] create /tusan 'tusanhonghong'
Created /tusan
[zk: localhost:2181(CONNECTED) 4] ls /
[zookeeper, tusan]
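To confirm the server itself is healthy, zkServer.sh also has a status subcommand; in standalone mode it should report Mode: standalone. A quick check, assuming the same install path as above:
[root@localhost zookeeper-3.4.14]# ./bin/zkServer.sh status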
(2) Cluster mode
Three VMs are available: 192.168.32.134, 192.168.32.135 and 192.168.32.136; here we deploy a three-node ZK cluster across them.
The differences from the standalone deployment are:
- Each server needs a server ID, i.e. the value written to the myid file under the dataDir path configured in zoo.cfg. (This is the server id, or sid, from the leader election described in my previous article; when the Zxids are equal, the larger sid decides which server is elected Leader. A one-liner for writing it on each node is sketched after the transcript below.)
[root@localhost data]# touch myid
[root@localhost data]# ls
myid
[root@localhost data]# vi myid
2
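Each node must get a different value here, and it has to match its server.N entry in the zoo.cfg shown below. A quicker way to write the files, assuming the same dataDir path on every node:
# on 192.168.32.134
echo 1 > /floatuse/zookeeper/data/myid
# on 192.168.32.135
echo 2 > /floatuse/zookeeper/data/myid
# on 192.168.32.136
echo 3 > /floatuse/zookeeper/data/myid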
- All nodes need the same configuration file; zoo.cfg must list every server in the ensemble (2888 is the port followers use to talk to the leader, 3888 is the leader-election port):
tickTime=2000
dataDir=/floatuse/zookeeper/data
clientPort=2181
initLimit=5
syncLimit=2
server.1=192.168.32.134:2888:3888
server.2=192.168.32.135:2888:3888
server.3=192.168.32.136:2888:3888
- Finally, start the ZK service on each node and check each node's state (Leader or Follower), as sketched below.
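A minimal sketch of that last step, run on every node (assuming the same install path as in the standalone section):
[root@localhost zookeeper-3.4.14]# ./bin/zkServer.sh start
[root@localhost zookeeper-3.4.14]# ./bin/zkServer.sh status
// status reports Mode: leader on one node and Mode: follower on the other two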
II. Docker Deployment
(1) Standalone mode
The deployment steps are as follows:
// Pull the image
[root@localhost ~]# docker pull zookeeper:3.4.14
// Check that the image was pulled successfully
[root@localhost ~]# docker images -a
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/zookeeper 3.4.14 0ef24c507074 9 days ago 257 MB
docker.io/zookeeper latest f0f71453dc64 13 days ago 252 MB
docker.io/rabbitmq 3-management 5537c2a8f7c5 4 weeks ago 184 MB
docker.io/mysql 5.7 273c7fcf9499 5 weeks ago 455 MB
docker.io/mysql 8 8e8c6f8dc9df 5 weeks ago 546 MB
docker.io/tomcat 8.5.54-jdk8 31a47677561a 5 weeks ago 529 MB
docker.io/tomcat jdk8 31a47677561a 5 weeks ago 529 MB
// Run the container (I will explain what these options mean in a later article on Docker)
[root@localhost ~]# docker run -d -p 2181:2181 --name some-zookeeper --restart always 0ef24c507074
// List containers and their status
[root@localhost ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d3e7c63ec3c4 0ef24c507074 "/docker-entrypoin..." About a minute ago Up About a minute 2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp some-zookeeper
// Enter the container
[root@localhost ~]# docker exec -it d3e7c63ec3c4 bash
// Open the CLI client
root@d3e7c63ec3c4:/zookeeper-3.4.14# ./bin/zkCli.sh
...
...
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
// List the znodes under the root
ls /
[zookeeper]
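The server role can also be checked from the host without opening a shell in the container. A quick check, assuming (as in the official image) that the ZooKeeper bin directory is on the PATH inside the container; otherwise use the full path to zkServer.sh:
[root@localhost ~]# docker exec -it some-zookeeper zkServer.sh status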
(2) Cluster mode
Three VMs are available: 192.168.32.134, 192.168.32.135 and 192.168.32.136; here we deploy a three-node ZK cluster.
There are two ways to do this: first, create and run the ZK containers one by one; second, deploy the cluster with docker-compose.
Run the containers one by one, attaching the cluster nodes to a user-defined Docker network (the commands below put each container on a network named zoonet with a fixed IP)
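These commands assume the zoonet network already exists. A minimal sketch of creating it; the subnet is my assumption, chosen only so that it contains the fixed IPs used below, so adjust it to your environment:
[root@localhost ~]# docker network create --driver bridge --subnet 192.168.32.0/24 --gateway 192.168.32.1 zoonet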
[root@localhost ~]# docker run -d -p 2181:2181 --name zookeeper_node1 --privileged --restart always --network zoonet --ip 192.168.32.134 \
> -v /floatuse/zookeeper/node1/volumes/data:/data \
> -v /floatuse/zookeeper/node1/volumes/datalog:/datalog \
> -v /floatuse/zookeeper/node1/volumes/logs:/logs \
> -e ZOO_MY_ID=1 \
> -e "ZOO_SERVERS=server.1=192.168.32.134:2888:3888;2181 server.2=192.168.32.135:2888:3888;2181 server.3=192.168.32.136:2888:3888;2181" 3487af26dee9
7cf11fab2d3b4da6d8ce48d8ed4a7beaab7d51dd542b8309f781e9920c36
[root@localhost ~]# docker run -d -p 2182:2181 --name zookeeper_node2 --privileged --restart always --network zoonet --ip 192.168.32.135 \
> -v /floatuse/zookeeper/node2/volumes/data:/data \
> -v /floatuse/zookeeper/node2/volumes/datalog:/datalog \
> -v /floatuse/zookeeper/node2/volumes/logs:/logs \
> -e ZOO_MY_ID=2 \
> -e "ZOO_SERVERS=server.1=192.168.32.134:2888:3888;2181 server.2=192.168.32.135:2888:3888;2181 server.3=192.168.32.136:2888:3888;2181" 3487af26dee9
a4dbfb694504acfe4b8e11b990877964477bb41f8a230bd191cba7d20996f
[root@localhost ~]# docker run -d -p 2183:2181 --name zookeeper_node3 --privileged --restart always --network zoonet --ip 192.168.32.136 \
> -v /floatuse/zookeeper/node3/volumes/data:/data \
> -v /floatuse/zookeeper/node3/volumes/datalog:/datalog \
> -v /floatuse/zookeeper/node3/volumes/logs:/logs \
> -e ZOO_MY_ID=3 \
> -e "ZOO_SERVERS=server.1=192.168.32.134:2888:3888;2181 server.2=192.168.32.135:2888:3888;2181 server.3=192.168.32.136:2888:3888;2181" 3487af26dee9
b9ae9adf86e9c7f6a3264f883206c6d0e4f6093db3200de80ef39f57160
// List the running containers
[root@localhost ~]# docker ps
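Once all three containers are up, each node's role can be checked the same way as before (again assuming the ZooKeeper bin directory is on the PATH inside the image):
[root@localhost ~]# docker exec -it zookeeper_node1 zkServer.sh status
[root@localhost ~]# docker exec -it zookeeper_node2 zkServer.sh status
[root@localhost ~]# docker exec -it zookeeper_node3 zkServer.sh status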
Deploying with docker-compose
First install docker-compose
[root@localhost ~]# curl -L https://github.com/docker/compose/releases/download/1.16.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
Note: check that the docker-compose version is compatible with your Docker version.
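The downloaded binary also needs execute permission before it can be run; a quick follow-up (the version check just confirms the installation):
[root@localhost ~]# chmod +x /usr/local/bin/docker-compose
[root@localhost ~]# docker-compose --version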
Create the network
[root@localhost ~]# docker network create --driver bridge --subnet 192.168.0.0/25 --gateway 192.168.0.1 zoo_network
Edit the docker-compose.yml file
[root@localhost zookeeper]# vi docker-compose.yml
version: '3.1'
services:
  zoo1:
    image: zookeeper:3.4.14
    restart: always
    privileged: true
    hostname: zoo1
    ports:
      - 2181:2181
    volumes: # mount the data directories
      - /usr/local/zookeeper-cluster/node1/data:/data
      - /usr/local/zookeeper-cluster/node1/datalog:/datalog
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
    networks:
      default:
        ipv4_address: 192.168.0.2 # must lie inside the zoo_network subnet created above (192.168.0.0/25)
  zoo2:
    image: zookeeper:3.4.14
    restart: always
    privileged: true
    hostname: zoo2
    ports:
      - 2182:2181
    volumes: # mount the data directories
      - /usr/local/zookeeper-cluster/node2/data:/data
      - /usr/local/zookeeper-cluster/node2/datalog:/datalog
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=zoo3:2888:3888;2181
    networks:
      default:
        ipv4_address: 192.168.0.3
  zoo3:
    image: zookeeper:3.4.14
    restart: always
    privileged: true
    hostname: zoo3
    ports:
      - 2183:2181
    volumes: # mount the data directories
      - /usr/local/zookeeper-cluster/node3/data:/data
      - /usr/local/zookeeper-cluster/node3/datalog:/datalog
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=0.0.0.0:2888:3888;2181
    networks:
      default:
        ipv4_address: 192.168.0.4
networks: # attach the default network to the user-defined zoo_network created above
  default:
    external:
      name: zoo_network
Start the cluster with docker-compose
[root@localhost ~]# docker-compose -f docker-compose.yml up -d
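Once the stack is up, the services and their mapped ports can be listed and each node's role verified. A quick check, assuming the commands are run next to docker-compose.yml and that the ZooKeeper bin directory is on the PATH inside the image:
[root@localhost zookeeper]# docker-compose -f docker-compose.yml ps
[root@localhost zookeeper]# docker-compose -f docker-compose.yml exec zoo1 zkServer.sh status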