1. Let's talk about what SaltStack Syndic is and which ops pain points it can solve for us.
salt syndic
In a basic Salt deployment, one master commands a group of minions. To avoid assuming any single topology, Salt also supports multi-level layouts: a top-level master can control a group of subordinate masters. Through the syndic, the top-level master relays commands to each subordinate master, which runs them against its own minions and passes the results back up. The top-level master thus manages all minions indirectly.
Note: each syndic must provide its own file_roots directory; files are not automatically distributed between masters.
Minions under a syndic apply the states from that syndic's own top file.
The syndic must run on a master and connect to another, higher-level master (the classic "hold the emperor hostage to command the lords" arrangement).
In effect it plays much the same role as a zabbix-proxy does for zabbix.
Once we have salt syndic we can run Salt through a proxy layer, taking load off the top-level master and managing each part of the business more cleanly.
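The layout described above can be sketched like this (the IPs and hostnames are the ones used in the walkthrough below):

```
      salt '*' ...  (commands issued here)
             |
   top-level master: 10.0.0.7   (order_masters: True)
             |
   salt-master + salt-syndic: 10.0.0.8   (syndic_master: 10.0.0.7)
        /             \
  linux-node1      linux-node2
   (minion)          (minion)
```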
2. Now let's walk through how to set it up.
First, our top-level master's IP address is 10.0.0.7
and the minion host's IP address is 10.0.0.8.
Install a local master plus the syndic on the minion host (10.0.0.8):
yum -y install salt-master salt-syndic
Configure on 10.0.0.8:
vim /etc/salt/master
syndic_master: 10.0.0.7
/etc/init.d/salt-master restart
/etc/init.d/salt-syndic start
That's the master and syndic side on 10.0.0.8 done. Simple, right?
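Putting it together, the syndic host's /etc/salt/master ends up looking something like the fragment below. The file_roots block is an assumption added here because, as noted above, each syndic must serve its own files; adjust the path to your layout (later in this post 10.0.0.8 does serve states out of /srv/salt):

```yaml
# /etc/salt/master on 10.0.0.8 (the syndic host)
syndic_master: 10.0.0.7   # the higher-level master to report to

# Assumed file_roots -- a syndic serves its own files,
# they are not synced down from 10.0.0.7:
file_roots:
  base:
    - /srv/salt
```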
Configure on 10.0.0.7 ## tell this master that lower-level (syndic) masters will connect to it
vim /etc/salt/master
order_masters: True
/etc/init.d/salt-master restart
Clean up the old keys
## on both machines, 10.0.0.7 and 10.0.0.8
/etc/init.d/salt-minion stop
salt-key -D
cd /etc/salt/pki/minion/
rm -rf *
Point the minion at its new master's IP address:
vim /etc/salt/minion
master: 10.0.0.8
Restart the minion on both machines
/etc/init.d/salt-minion restart
After that you should see:
On 10.0.0.7:
[root@linux-node1 pki]# salt-key
Accepted Keys:
linux-node2.example.com # node2's hostname
Denied Keys:
Unaccepted Keys:
Rejected Keys:
On 10.0.0.8 (salt-syndic):
[root@linux-node2 minion]# salt-key
Accepted Keys:
linux-node1.example.com
linux-node2.example.com
Denied Keys:
Unaccepted Keys:
Rejected Keys:
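The outputs above show the keys already accepted. If yours still sit under Unaccepted Keys after the restarts, the standard salt-key CLI can accept them on the respective master, for example:

```
salt-key -L                          # list all keys and their state
salt-key -a linux-node2.example.com  # accept a single minion key (prompts y/n)
salt-key -A -y                       # or accept all pending keys without prompting
```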
Have a look at the state trees on each master.
On 10.0.0.7:
[root@linux-node1 base]# pwd
/srv/salt/base
[root@linux-node1 base]# cat top.sls
base:
  '*':
    - init.dns
[root@linux-node1 base]# cat init/dns.sls
/etc/resolv.conf:
  file.managed:
    - source: salt://init/files/resolv.conf
    - user: root
    - group: root
    - mode: 644
On 10.0.0.8:
[root@linux-node2 salt]# pwd
/srv/salt
[root@linux-node2 salt]# cat top.sls
base:
  '*':
    - sysctl
[root@linux-node2 salt]# cat sysctl.sls
vm.swappiness:
  sysctl.present:
    - value: 0
net.ipv4.ip_local_port_range:
  sysctl.present:
    - value: 10000 65000
fs.file-max:
  sysctl.present:
    - value: 100000
Now let's run a highstate and see which top file actually gets used:
[root@linux-node1 init]# salt '*' state.highstate test=true
linux-node1.example.com:
----------
ID: vm.swappiness
Function: sysctl.present
Result: True
Comment: Sysctl value vm.swappiness = 0 is already set
Started: 23:36:17.331995
Duration: 30.952 ms
Changes:
----------
ID: net.ipv4.ip_local_port_range
Function: sysctl.present
Result: True
Comment: Sysctl value net.ipv4.ip_local_port_range = 10000 65000 is already set
Started: 23:36:17.363175
Duration: 26.248 ms
Changes:
----------
ID: fs.file-max
Function: sysctl.present
Result: True
Comment: Sysctl value fs.file-max = 100000 is already set
Started: 23:36:17.389637
Duration: 25.932 ms
Changes:
Summary
------------
Succeeded: 3
Failed: 0
------------
Total states run: 3
linux-node2.example.com:
----------
ID: vm.swappiness
Function: sysctl.present
Result: True
Comment: Sysctl value vm.swappiness = 0 is already set
Started: 23:36:17.300688
Duration: 26.638 ms
Changes:
----------
ID: net.ipv4.ip_local_port_range
Function: sysctl.present
Result: True
Comment: Sysctl value net.ipv4.ip_local_port_range = 10000 65000 is already set
Started: 23:36:17.327543
Duration: 24.356 ms
Changes:
----------
ID: fs.file-max
Function: sysctl.present
Result: True
Comment: Sysctl value fs.file-max = 100000 is already set
Started: 23:36:17.352098
Duration: 24.526 ms
Changes:
Summary
------------
Succeeded: 3
Failed: 0
------------
Total states run: 3
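So there's the answer: both minions applied only the sysctl states, i.e. the top.sls from the syndic (10.0.0.8), which matches the note earlier that minions under a syndic use the syndic's own top file. The dns state from 10.0.0.7's tree never ran. To double-check that the whole chain is reachable from the top-level master, standard Salt commands such as these can be run on 10.0.0.7:

```
salt '*' test.ping        # should reach both minions through the syndic
salt-run manage.status    # shows which minions are up or down
```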