Redis Cluster Core Technology
Redis Cluster is Redis's own distributed solution, officially released in version 3.0. When a single instance runs into memory, concurrency, or traffic bottlenecks, the Cluster architecture can be used to spread the load. Before Redis Cluster there were two common distributed approaches:
1) Client-side partitioning: the partitioning logic is under your control, but the client has to handle data routing, high availability, and failover itself.
2) Proxy-based partitioning: it simplifies the client's distributed logic and makes upgrades and maintenance easier, but it adds deployment complexity and a performance cost.
一、What is Redis Cluster
1) A Redis Cluster is an installation in which data is shared across multiple Redis nodes.
2) Redis Cluster does not support multi-key commands whose keys hash to different slots, because executing them would require moving data between nodes; under heavy load this would hurt performance and lead to unpredictable behavior (see the example after this list).
3) Redis Cluster provides a degree of availability through partitioning: even if some nodes fail or become unreachable, the cluster can keep processing requests.
4) Redis Cluster automatically splits the data across multiple nodes.
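As a quick illustration of the multi-key limitation in point 2, the sketch below (run against one of the masters built later in this document; the key names are made up) shows that keys hashing to different slots cannot be combined in one command, while hash tags force keys into the same slot:
# A multi-key write on keys that (almost certainly) live in different slots is rejected,
# typically with: (error) CROSSSLOT Keys in request don't hash to the same slot
redis-cli -c -h 10.0.0.51 -p 6380 MSET k1 v1 k2 v2
# Only the part inside {} is hashed, so these two keys share a slot and the MSET succeeds
redis-cli -c -h 10.0.0.51 -p 6380 MSET {user:1}:name Bob {user:1}:age 30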
二、Features of Redis Cluster
1. High performance
1. The 16384 hash slots are distributed evenly across the shard (master) nodes
2. When a key is stored, the cluster computes crc16(key) mod 16384, giving a slot number between 0 and 16383
3. The slot number determines which shard's master node the key is stored on
4. If the client is currently connected to a node that does not own that slot, the cluster redirects the client to the node that really stores the data (see the slot lookup example below)
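To check which slot (and therefore which master) a key maps to, CLUSTER KEYSLOT can be used once the cluster described below is up; a small sketch:
# For the key k1 this prints 12706, matching the MOVED 12706 error shown later in this document
redis-cli -h 10.0.0.51 -p 6380 CLUSTER KEYSLOT k1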
2. High availability
When the cluster is built, each shard's master is paired with a replica (the slaveof mechanism); when a master goes down, the cluster performs an automatic failover similar to what Sentinel provides.
3. A Redis Cluster client can connect to any node
As shown in the figure, if a client connects to shard A but the key it asks for hashes to a slot that is not on shard A, the cluster redirects the request to the node that owns it.
三、Cluster topology
A less reasonable topology (figure):
A reasonable topology (figure):
四、Directory layout
# redis installation directory
/opt/redis_{PORT}/{conf,logs,pid}
# redis data directory
/data/redis_{PORT}/redis_{PORT}.rdb
# redis ops script
/root/scripts/redis_shell.sh
Master nodes: port 6380
Replica nodes: port 6381
五、Manually building the cluster
1. Build
# Commands on db01
pkill redis
mkdir -p /opt/redis_{6380,6381}/{conf,logs,pid}
mkdir -p /data/redis_{6380,6381}
cat >/opt/redis_6380/conf/redis_6380.conf<<EOF
bind 10.0.0.51
port 6380
daemonize yes
pidfile "/opt/redis_6380/pid/redis_6380.pid"
logfile "/opt/redis_6380/logs/redis_6380.log"
dbfilename "redis_6380.rdb"
dir "/data/redis_6380/"
cluster-enabled yes
cluster-config-file nodes_6380.conf
cluster-node-timeout 15000
EOF
cd /opt/
cp redis_6380/conf/redis_6380.conf redis_6381/conf/redis_6381.conf
sed -i 's#6380#6381#g' redis_6381/conf/redis_6381.conf
rsync -avz /opt/redis_638* 10.0.0.52:/opt/
rsync -avz /opt/redis_638* 10.0.0.53:/opt/
redis-server /opt/redis_6380/conf/redis_6380.conf
redis-server /opt/redis_6381/conf/redis_6381.conf
ps -ef|grep redis
# Commands on db02
pkill redis
find /opt/redis_638* -type f -name "*.conf"|xargs sed -i "/bind/s#51#52#g"
mkdir -p /data/redis_{6380,6381}
redis-server /opt/redis_6380/conf/redis_6380.conf
redis-server /opt/redis_6381/conf/redis_6381.conf
ps -ef|grep redis
# Commands on db03
pkill redis
find /opt/redis_638* -type f -name "*.conf"|xargs sed -i "/bind/s#51#53#g"
mkdir -p /data/redis_{6380,6381}
redis-server /opt/redis_6380/conf/redis_6380.conf
redis-server /opt/redis_6381/conf/redis_6381.conf
ps -ef|grep redis
# Processes and ports
ps -ef|grep redis
root 3358 1 0 17:18 ? 00:00:00 redis-server 10.0.0.51:6380 [cluster]
root 3360 1 0 17:18 ? 00:00:00 redis-server 10.0.0.51:6381 [cluster]
root 3362 2832 0 17:18 pts/0 00:00:00 grep --color=auto redis
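At this point each instance runs in cluster mode but knows nothing about the others; a quick sanity check on any node should report cluster_state:fail and cluster_known_nodes:1 (the same output appears in section 七 below):
redis-cli -h 10.0.0.51 -p 6380 CLUSTER INFO | grep -E 'cluster_state|cluster_known_nodes'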
2. Node discovery
# Make the nodes meet each other
[root@db01 ~]# redis-cli -h 10.0.0.51 -p 6380 CLUSTER MEET 10.0.0.51 6381
OK
[root@db01 ~]# redis-cli -h 10.0.0.51 -p 6380 CLUSTER MEET 10.0.0.52 6380
OK
[root@db01 ~]# redis-cli -h 10.0.0.51 -p 6380 CLUSTER MEET 10.0.0.52 6381
OK
[root@db01 ~]# redis-cli -h 10.0.0.51 -p 6380 CLUSTER MEET 10.0.0.53 6380
OK
[root@db01 ~]# redis-cli -h 10.0.0.51 -p 6380 CLUSTER MEET 10.0.0.53 6381
OK
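The five MEET commands above can also be issued in a small loop; the sketch below is equivalent (it simply skips the node the command is sent from):
for host in 10.0.0.51 10.0.0.52 10.0.0.53; do
  for port in 6380 6381; do
    [ "$host:$port" = "10.0.0.51:6380" ] && continue   # no need to MEET ourselves
    redis-cli -h 10.0.0.51 -p 6380 CLUSTER MEET $host $port
  done
done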
# Inspect the cluster
[root@db01 ~]# redis-cli -h 10.0.0.51 -p 6381 CLUSTER NODES
ec1fdae8ba605179ff162d4a91a1dc2f2441e64b 10.0.0.53:6380 master - 0 1577439109553 0 connected
7652f63eab644791cde8c9ee40b64047d90cb16e 10.0.0.53:6381 master - 0 1577439109049 5 connected
c152587c142a06b9bf47e70f538246975abf02d2 10.0.0.51:6380 master - 0 1577439102501 1 connected
83ad841167b88be38d4592540fef329a45714382 10.0.0.51:6381 myself,master - 0 0 3 connected
918f3aebf5b12cc73240d598b57d7d038a431eca 10.0.0.52:6381 master - 0 1577439108545 4 connected
7dd08158d20884785ec8a166455ada8643707752 10.0.0.52:6380 master - 0 1577439107538 2 connected
[root@db01 ~]# redis-cli -h 10.0.0.51 -p 6380 CLUSTER NODES
918f3aebf5b12cc73240d598b57d7d038a431eca 10.0.0.52:6381 master - 0 1577439110562 4 connected
83ad841167b88be38d4592540fef329a45714382 10.0.0.51:6381 master - 0 1577439113584 3 connected
ec1fdae8ba605179ff162d4a91a1dc2f2441e64b 10.0.0.53:6380 master - 0 1577439114593 0 connected
7652f63eab644791cde8c9ee40b64047d90cb16e 10.0.0.53:6381 master - 0 1577439111568 5 connected
c152587c142a06b9bf47e70f538246975abf02d2 10.0.0.51:6380 myself,master - 0 0 1 connected
7dd08158d20884785ec8a166455ada8643707752 10.0.0.52:6380 master - 0 1577439112577 2 connected
[root@db02 ~]# redis-cli -h 10.0.0.52 -p 6380 CLUSTER NODES
ec1fdae8ba605179ff162d4a91a1dc2f2441e64b 10.0.0.53:6380 master - 0 1577439198877 0 connected
918f3aebf5b12cc73240d598b57d7d038a431eca 10.0.0.52:6381 master - 0 1577439197870 4 connected
c152587c142a06b9bf47e70f538246975abf02d2 10.0.0.51:6380 master - 0 1577439195857 1 connected
7652f63eab644791cde8c9ee40b64047d90cb16e 10.0.0.53:6381 master - 0 1577439196864 5 connected
83ad841167b88be38d4592540fef329a45714382 10.0.0.51:6381 master - 0 1577439199885 3 connected
7dd08158d20884785ec8a166455ada8643707752 10.0.0.52:6380 myself,master - 0 0 2 connected
[root@db02 ~]# redis-cli -h 10.0.0.52 -p 6381 CLUSTER NODES
ec1fdae8ba605179ff162d4a91a1dc2f2441e64b 10.0.0.53:6380 master - 0 1577439202909 0 connected
918f3aebf5b12cc73240d598b57d7d038a431eca 10.0.0.52:6381 myself,master - 0 0 4 connected
c152587c142a06b9bf47e70f538246975abf02d2 10.0.0.51:6380 master - 0 1577439200893 1 connected
83ad841167b88be38d4592540fef329a45714382 10.0.0.51:6381 master - 0 1577439201901 3 connected
7dd08158d20884785ec8a166455ada8643707752 10.0.0.52:6380 master - 0 1577439197870 2 connected
7652f63eab644791cde8c9ee40b64047d90cb16e 10.0.0.53:6381 master - 0 1577439199885 5 connected
...
3. Manually assigning slots
# Slot plan
db01:6380 0-5460
db02:6380 5461-10921
db03:6380 10922-16383
# Assign the slots
redis-cli -h 10.0.0.51 -p 6380 CLUSTER ADDSLOTS {0..5460}
redis-cli -h 10.0.0.52 -p 6380 CLUSTER ADDSLOTS {5461..10921}
redis-cli -h 10.0.0.53 -p 6380 CLUSTER ADDSLOTS {10922..16383}
# Assign the slots one command at a time; the boundary around slots 10921/10922 produced an error and the ranges had to be adjusted (shown below)
[A slot-count imbalance of up to 2% between the masters is acceptable]
[root@db01 ~]# redis-cli -h 10.0.0.51 -p 6380 CLUSTER ADDSLOTS {0..5460}
OK
[root@db01 ~]# redis-cli -h 10.0.0.52 -p 6380 CLUSTER ADDSLOTS {5461..10921}
(error) ERR Invalid or out of range slot
[root@db01 ~]# redis-cli -h 10.0.0.52 -p 6380 CLUSTER ADDSLOTS {5461..10922}
OK
[root@db01 ~]# redis-cli -h 10.0.0.53 -p 6380 CLUSTER ADDSLOTS {10923..16383}
OK
# Check the cluster state
[root@db01 ~]# redis-cli -h db01 -p 6380 CLUSTER info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:5
cluster_my_epoch:1
cluster_stats_messages_sent:1890
cluster_stats_messages_received:1890
[root@db02 ~]# redis-cli -h db02 -p 6380 CLUSTER info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:5
cluster_my_epoch:2
cluster_stats_messages_sent:2036
cluster_stats_messages_received:2036
[root@db03 ~]# redis-cli -h db03 -p 6380 CLUSTER info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:5
cluster_my_epoch:0
cluster_stats_messages_sent:1976
cluster_stats_messages_received:1976
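CLUSTER INFO only reports cluster-wide totals. To confirm how many slots each master actually holds, the CLUSTER NODES output can be parsed; a rough sketch, assuming the Redis 3.x line format (id, ip:port, flags, master-id, ping, pong, epoch, link-state, then slot ranges):
redis-cli -h 10.0.0.51 -p 6380 CLUSTER NODES | awk '$3 ~ /master/ {
  count = 0
  for (i = 9; i <= NF; i++) {               # fields 9+ are slot ranges such as 0-5460, or single slots
    split($i, r, "-")
    count += (r[2] == "" ? 1 : r[2] - r[1] + 1)
  }
  print $2, count, "slots"
}'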
4. Manually setting up replication
redis-cli -h 10.0.0.51 -p 6381 CLUSTER REPLICATE <ID of db02's 6380>
redis-cli -h 10.0.0.52 -p 6381 CLUSTER REPLICATE <ID of db03's 6380>
redis-cli -h 10.0.0.53 -p 6381 CLUSTER REPLICATE <ID of db01's 6380>
[root@db01 ~]# redis-cli -h 10.0.0.51 -p 6380 CLUSTER NODES
918f3aebf5b12cc73240d598b57d7d038a431eca 10.0.0.52:6381 master - 0 1577440106584 4 connected
83ad841167b88be38d4592540fef329a45714382 10.0.0.51:6381 master - 0 1577440102556 3 connected
ec1fdae8ba605179ff162d4a91a1dc2f2441e64b 10.0.0.53:6380 master - 0 1577440105576 0 connected 10923-16383
7652f63eab644791cde8c9ee40b64047d90cb16e 10.0.0.53:6381 master - 0 1577440107590 5 connected
c152587c142a06b9bf47e70f538246975abf02d2 10.0.0.51:6380 myself,master - 0 0 1 connected 0-5460
7dd08158d20884785ec8a166455ada8643707752 10.0.0.52:6380 master - 0 1577440104569 2 connected 5461-10922
[root@db01 ~]# redis-cli -h 10.0.0.51 -p 6381 CLUSTER REPLICATE 7dd08158d20884785ec8a166455ada8643707752
OK
[root@db01 ~]# redis-cli -h 10.0.0.52 -p 6381 CLUSTER REPLICATE ec1fdae8ba605179ff162d4a91a1dc2f2441e64b
OK
[root@db01 ~]# redis-cli -h 10.0.0.53 -p 6381 CLUSTER REPLICATE c152587c142a06b9bf47e70f538246975abf02d2
OK
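Looking up each master's 40-character node ID by hand is error-prone. The same three REPLICATE calls can be scripted by resolving the ID from CLUSTER NODES; a minimal sketch, assuming the ip:port address format shown above (the helper name get_id is only for this example):
get_id() {   # print the node ID whose address field matches $1, e.g. 10.0.0.52:6380
  redis-cli -h 10.0.0.51 -p 6380 CLUSTER NODES | awk -v addr="$1" '$2 == addr {print $1}'
}
redis-cli -h 10.0.0.51 -p 6381 CLUSTER REPLICATE "$(get_id 10.0.0.52:6380)"
redis-cli -h 10.0.0.52 -p 6381 CLUSTER REPLICATE "$(get_id 10.0.0.53:6380)"
redis-cli -h 10.0.0.53 -p 6381 CLUSTER REPLICATE "$(get_id 10.0.0.51:6380)"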
5. Failover
At this point every machine runs two instances, and every master has a replica of its own.
Test: shut down one master; the cluster keeps serving requests.
Shut down a second master (after the previous failover has completed); still no errors.
Shut down a third master the same way; still no errors.
When a stopped node comes back it rejoins as a replica. To promote it back to master, run CLUSTER FAILOVER on that node (SLAVEOF is not allowed in cluster mode).
Note: when switching master/replica roles, check that the data on both sides is in sync first; switching while they differ can lose data. A small failover test sketch follows.
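A minimal failover test against one shard might look like the sketch below (the waits are arbitrary, just longer than cluster-node-timeout; SHUTDOWN NOSAVE is used only because this is test data):
redis-cli -h 10.0.0.51 -p 6380 SHUTDOWN NOSAVE                # kill one master
sleep 20                                                      # wait past cluster-node-timeout (15000 ms)
redis-cli -h 10.0.0.52 -p 6380 CLUSTER NODES | grep master    # its replica (10.0.0.53:6381) should now be a master
redis-server /opt/redis_6380/conf/redis_6380.conf             # bring the old master back; it rejoins as a replica
sleep 10
redis-cli -h 10.0.0.51 -p 6380 CLUSTER FAILOVER               # promote it back to master (run on the replica itself)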
六、Testing the cluster
# 1. Inserting a key returns an error
10.0.0.51:6380> set k1 v1
(error) MOVED 12706 10.0.0.53:6380 [the slots are partitioned, so the redirect error is expected]
# 2. What we observe
- Writing on db01's 6380 node returns an error
- The error says the key should go to db03's 6380
- Running the same SET on db03's 6380 succeeds
- On db01's 6380 some keys can be written and some cannot, depending on which slot the key hashes to
- With the -c option the write succeeds and redis-cli switches the connection to the node named in the redirect
# 3. Why
In cluster mode the server replies with a MOVED redirection (or ASK while a slot is being migrated); with -c, redis-cli
automatically follows the redirect to the target node, and the target node returns the result
# 4. Testing the -c option
[root@db01 ~]# redis-cli -h 10.0.0.51 -p 6380
10.0.0.51:6380> set k1 v1
(error) MOVED 12706 10.0.0.53:6380
10.0.0.51:6380> exit
[root@db01 ~]# redis-cli -c -h 10.0.0.51 -p 6380
10.0.0.51:6380> set k1 v1
-> Redirected to slot [12706] located at 10.0.0.53:6380
OK
10.0.0.53:6380> get k1 [the prompt has switched to .53]
"v1"
10.0.0.53:6380>
# 5. Using the helper script
[root@db01 ~]# bash redis_shell.sh login 6380 [the script from the appendix already passes -c, so writes just work]
10.0.0.51:6380> set k3 v3
OK
10.0.0.51:6380> set k5 v5
-> Redirected to slot [12582] located at 10.0.0.53:6380
OK
10.0.0.53:6380>
# Check that the keys spread out evenly across the masters
[root@db01 ~]#for i in {1..10000};do redis-cli -c -h db01 -p 6380 set k_${i} v_${i} && echo "set k_${i} is ok";done
[the loop was interrupted at around i=1554]
[root@db01 ~]# bash redis_shell.sh login 6380
10.0.0.51:6380> DBSIZE
(integer) 522
[root@db02 ~]# bash redis_shell.sh login 6380
10.0.0.52:6380> DBSIZE
(integer) 510
[root@db03 ~]# bash redis_shell.sh login 6380
10.0.0.53:6380> DBSIZE
(integer) 525
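Summing the three DBSIZE values confirms that all the keys written before the loop was interrupted are accounted for; a small sketch:
total=0
for host in 10.0.0.51 10.0.0.52 10.0.0.53; do
  n=$(redis-cli -h $host -p 6380 DBSIZE)
  echo "$host:6380 -> $n keys"
  total=$((total + n))
done
echo "total: $total keys"   # should roughly match the number of SETs that completed, plus the few keys set earlier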
七、Deploying Redis Cluster with the redis-trib tool
# 1. Install dependencies (db01 only)
[root@db01 ~]# yum makecache fast
[root@db01 ~]# yum install rubygems -y
[root@db01 ~]# gem sources --remove https://rubygems.org/
[root@db01 ~]# gem sources -a http://mirrors.aliyun.com/rubygems/
[root@db01 ~]# gem update --system [optional; it only updates RubyGems itself]
[root@db01 ~]# gem install redis -v 3.3.5
# 2. Reset the environment (run on ALL nodes!)
[root@db01 ~]# pkill redis
[root@db01 ~]# rm -rf /data/redis_6380/*
[root@db01 ~]# rm -rf /data/redis_6381/*
# 3. Start the cluster nodes (run on all nodes)
[root@db01 ~]# redis-server /opt/redis_6380/conf/redis_6380.conf
[root@db01 ~]# redis-server /opt/redis_6381/conf/redis_6381.conf
[root@db01 ~]# ps -ef|grep redis
## No replication yet, and the nodes have not discovered each other
[root@db01 ~]# bash redis_shell.sh login 6380
10.0.0.51:6380> CLUSTER INFO
cluster_state:fail
cluster_slots_assigned:0
cluster_slots_ok:0
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:1
cluster_size:0
cluster_current_epoch:0
cluster_my_epoch:0
cluster_stats_messages_sent:0
cluster_stats_messages_received:0
10.0.0.51:6380> CLUSTER NODES
d9465fcf1c448960a56af5b0dec3581d0daf9735 :6380 myself,master - 0 0 0 connected
# 4. Build the cluster with the tool
[root@db01 /opt/redis/src]# ./redis-trib.rb create --replicas 1 10.0.0.51:6380 10.0.0.52:6380 10.0.0.53:6380 10.0.0.51:6381 10.0.0.52:6381 10.0.0.53:6381
## Masters are listed first, replicas after
# 5. Check cluster integrity
[root@db01 /opt/redis/src]# ./redis-trib.rb check 10.0.0.51:6380
# 6. Check cluster info
[root@db01 /opt/redis/src]# ./redis-trib.rb info 10.0.0.51:6380
10.0.0.51:6380 (d9465fcf...) -> 0 keys | 5461 slots | 1 slaves.
10.0.0.53:6380 (c4b50976...) -> 0 keys | 5461 slots | 1 slaves.
10.0.0.52:6380 (05815032...) -> 0 keys | 5462 slots | 1 slaves.
# 7. Check whether the slot distribution is balanced
[root@db01 /opt/redis/src]# ./redis-trib.rb rebalance 10.0.0.51:6380
*** No rebalancing needed! All nodes are within the 2.0% threshold.
八、Scaling out with the tool
1. Adding capacity
Here the new shard is simply added as two more instances on db01.
Divide 16384 by the number of masters: with 4 masters, 16384 / 4 = 4096 slots each.
# 1. Create the new nodes (on db01)
[root@db01 ~]# mkdir -p /opt/redis_{6390,6391}/{conf,logs,pid}
[root@db01 ~]# mkdir -p /data/redis_{6390,6391}
[root@db01 ~]# cd /opt/
[root@db01 ~]# cp redis_6380/conf/redis_6380.conf redis_6390/conf/redis_6390.conf
[root@db01 ~]# cp redis_6380/conf/redis_6380.conf redis_6391/conf/redis_6391.conf
[root@db01 ~]# sed -i 's#6380#6390#g' redis_6390/conf/redis_6390.conf
[root@db01 ~]# sed -i 's#6380#6391#g' redis_6391/conf/redis_6391.conf
[root@db01 ~]# redis-server /opt/redis_6390/conf/redis_6390.conf
[root@db01 ~]# redis-server /opt/redis_6391/conf/redis_6391.conf
[root@db01 ~]# ps -ef|grep redis
[root@db01 ~]# redis-cli -c -h db01 -p 6380 cluster meet 10.0.0.51 6390
[root@db01 ~]# redis-cli -c -h db01 -p 6380 cluster meet 10.0.0.51 6391
[root@db01 ~]# redis-cli -c -h db01 -p 6380 cluster nodes
# 2. Reshard with the tool
[root@db01 ~]# cd /opt/redis/src/
[root@db01 /opt/redis/src]# ./redis-trib.rb reshard 10.0.0.51:6380
First prompt: how many slots to move to the new node
How many slots do you want to move (from 1 to 16384)? 4096
[4096, so that each of the 4 masters ends up with about 4096 slots]
Second prompt: the ID of the receiving node
What is the receiving node ID? <ID of 10.0.0.51:6390>
Third prompt: which nodes to take the slots from
Source node #1: all
Fourth prompt: confirm the proposed reshard plan
Do you want to proceed with the proposed reshard plan (yes/no)? yes
# 3. Check cluster integrity
[root@db01 /opt/redis/src]# ./redis-trib.rb check 10.0.0.51:6380
# 4. Check whether the slot distribution is balanced
[root@db01 /opt/redis/src]# ./redis-trib.rb rebalance 10.0.0.51:6380
Cluster topology before the expansion (figure):
Cluster topology after the expansion (figure):
# 5. Adjust the replication layout
[root@db01 ~]# redis-cli -h 10.0.0.53 -p 6381 CLUSTER REPLICATE <ID of 10.0.0.51:6390>
[root@db01 ~]# redis-cli -h 10.0.0.51 -p 6391 CLUSTER REPLICATE <ID of 10.0.0.51:6380>
# 6. Write-test script
[root@db01 ~]# cat input.sh
#!/bin/bash
# Write 10,000 keys through the cluster, one every 0.1 s
for i in $(seq 1 10000)
do
redis-cli -c -h db01 -p 6380 set k_${i} v_${i}
sleep 0.1
echo "set k_${i} is ok"
done
# 7. Read-test script
[root@db03 ~]# cat du.sh
#!/bin/bash
for i in $(seq 1 10000)
do
redis-cli -c -h db01 -p 6380 get k_${i}
sleep 0.1
done
Run the migration while data is being written and read.
The reshard does not interrupt reads or writes; see the sketch below.
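One way to verify that is to keep the two test scripts above running in the background while a reshard is in progress; a sketch (the log paths are arbitrary):
bash input.sh > /tmp/input.log 2>&1 &          # keep writing keys
bash du.sh > /tmp/du.log 2>&1 &                # keep reading keys
cd /opt/redis/src && ./redis-trib.rb reshard 10.0.0.51:6380    # move slots while the load is running
# Afterwards check the logs; redirected requests should still have succeeded thanks to -c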
2. How to repair a broken cluster
Background:
A slot migration was interrupted halfway through.
Symptoms:
The check and reshard commands no longer work.
CLUSTER NODES shows slots stuck in an abnormal (importing/migrating) state.
Fix:
# First try the fix command
[root@db01 /opt/redis/src]# ./redis-trib.rb fix 10.0.0.51:6380 [the data in the affected slots was lost]
# If fix cannot resolve it, repair by hand [data in the affected slots will also be lost]
1. Find the problematic slots
10.0.0.51:6380> CLUSTER NODES
2. Delete the problematic slots (on every master that still claims them)
10.0.0.51:6380> CLUSTER DELSLOTS xxxx
3. Re-add the slots
10.0.0.51:6380> CLUSTER ADDSLOTS xxxx
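A hedged sketch of the manual repair for a single broken slot (slot 5461 here is only an illustration; in a real incident take the slot number from CLUSTER NODES, and expect the data in it to be lost):
BAD_SLOT=5461                                               # hypothetical example slot
for host in 10.0.0.51 10.0.0.52 10.0.0.53; do
  redis-cli -h $host -p 6380 CLUSTER DELSLOTS $BAD_SLOT     # errors on masters that never owned it are harmless
done
redis-cli -h 10.0.0.52 -p 6380 CLUSTER ADDSLOTS $BAD_SLOT   # re-assign it to the master that should own it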
九、Scaling in with the tool
# 1. Move the node's slots away with the reshard tool
[root@db01 ~]# cd /opt/redis/src/
[root@db01 /opt/redis/src]# ./redis-trib.rb reshard 10.0.0.51:6380
# 2. First prompt: how many slots to move
How many slots do you want to move (from 1 to 16384)? 1365
[16384 / 3 ≈ 5461.3, and 5461.3 - 4096 ≈ 1365, so each remaining master takes back about 1365 slots]
# 3. Second prompt: the ID of the receiving node
What is the receiving node ID? <ID of db01's 6380>
# 4. Third prompt: the IDs of the source nodes
Please enter all the source node IDs.
Type 'all' to use all the nodes as source nodes for the hash slots.
Type 'done' once you entered all the source nodes IDs.
Source node #1: <ID of db01's 6390>
Source node #2: done
# 5. Fourth prompt: confirm
Do you want to proceed with the proposed reshard plan (yes/no)? yes
# 6. Repeat the reshard until all of 6390's slots have been moved back to the other masters
Receiver: <ID of db01's 6380>   Source: <ID of db01's 6390>
Receiver: <ID of db02's 6380>   Source: <ID of db01's 6390>
Receiver: <ID of db03's 6380>   Source: <ID of db01's 6390>
[root@db01 /opt/redis/src]# ./redis-trib.rb info 10.0.0.51:6380
10.0.0.51:6380 (d9465fcf...) -> 0 keys | 5461 slots | 2 slaves.
10.0.0.53:6380 (c4b50976...) -> 0 keys | 5462 slots | 1 slaves.
10.0.0.51:6390 (2d94dfbf...) -> 0 keys | 0 slots | 0 slaves.
10.0.0.52:6380 (05815032...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
# 7. Forget and decommission the removed nodes
[root@db01 /opt/redis/src]# ./redis-trib.rb del-node 10.0.0.51:6390 ID
[root@db01 /opt/redis/src]# ./redis-trib.rb del-node 10.0.0.51:6391 ID
Appendix: management script
# Management script
[root@db01 ~]# cat redis_shell.sh
#!/bin/bash
USAG(){
echo "sh $0 {start|stop|restart|login|ps|tail} PORT"
}
if [ "$#" = 1 ]
then
REDIS_PORT='6379'
elif [ "$#" = 2 -a -z "$(echo "$2"|sed 's#[0-9]##g')" ]
then
REDIS_PORT="$2"
else
USAG
exit 0
fi
REDIS_IP=$(hostname -I|awk '{print $1}')
PATH_DIR=/opt/redis_${REDIS_PORT}/
PATH_CONF=/opt/redis_${REDIS_PORT}/conf/redis_${REDIS_PORT}.conf
PATH_LOG=/opt/redis_${REDIS_PORT}/logs/redis_${REDIS_PORT}.log
CMD_START(){
redis-server ${PATH_CONF}
}
CMD_SHUTDOWN(){
redis-cli -c -h ${REDIS_IP} -p ${REDIS_PORT} shutdown
}
CMD_LOGIN(){
redis-cli -c -h ${REDIS_IP} -p ${REDIS_PORT}
}
CMD_PS(){
ps -ef|grep redis
}
CMD_TAIL(){
tail -f ${PATH_LOG}
}
case $1 in
start)
CMD_START
CMD_PS
;;
stop)
CMD_SHUTDOWN
CMD_PS
;;
restart)
CMD_SHUTDOWN
CMD_START
CMD_PS
;;
login)
CMD_LOGIN
;;
ps)
CMD_PS
;;
tail)
CMD_TAIL
;;
*)
USAG
esac
[root@db01 ~]# bash redis_shell.sh
sh redis_shell.sh {start|stop|restart|login|ps|tail} PORT