Enabling IPVS mode in Kubernetes (for a cluster installed with kubeadm)



 Why use IPVS instead of iptables

  • IPVS redirects traffic faster and performs better when synchronizing proxy rules. In addition, IPVS offers more options for the load-balancing algorithm, for example:
  1. rr: round-robin
  2. lc: least connections
  3. dh: destination hashing
  4. sh: source hashing
  5. sed: shortest expected delay
  6. nq: never queue
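To get a feel for these schedulers at the ipvsadm level, here is a hand-built virtual server (a sketch only: the VIP 10.0.0.100 and the backend addresses are made up, and inside the cluster kube-proxy manages these entries for you):

```shell
# Create a TCP virtual server on a made-up VIP; -s picks the scheduler
ipvsadm -A -t 10.0.0.100:80 -s sed
# Add two hypothetical real servers in masquerade (NAT) mode
ipvsadm -a -t 10.0.0.100:80 -r 10.0.0.11:80 -m
ipvsadm -a -t 10.0.0.100:80 -r 10.0.0.12:80 -m
# Inspect the table, then remove the test service again
ipvsadm -L -n
ipvsadm -D -t 10.0.0.100:80
```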

 Perform the following steps on every node:

Install the IPVS-related packages

yum -y install ipvsadm ipset

# Load the ip_vs kernel module and confirm it is loaded
modprobe ip_vs

lsmod | grep ip_vs
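Besides ip_vs itself, kube-proxy uses per-scheduler modules and connection tracking. A loop like the following loads them up front (a sketch; on kernels older than 4.19 the last module is named nf_conntrack_ipv4 instead of nf_conntrack):

```shell
# Load ip_vs plus the scheduler and conntrack modules
for mod in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do
    modprobe "$mod"
done
lsmod | grep -e ip_vs -e nf_conntrack
```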

Adjust kernel parameters

Edit /etc/sysctl.conf and add:

net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1


# Apply the changes (if the net.bridge.* keys are rejected, run modprobe br_netfilter first)
sysctl -p
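A quick sanity check that the three values actually took effect:

```shell
# Each line should print "... = 1"
sysctl net.ipv4.ip_forward
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables
```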

Switch kube-proxy to IPVS mode

kubectl edit configmap -n kube-system kube-proxy

In the editor, find the mode field under the config.conf key and set:

mode: "ipvs"
nodePortAddresses: null
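The relevant part of the ConfigMap then looks roughly like this (an excerpt only; the remaining fields of config.conf stay unchanged):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-proxy
  namespace: kube-system
data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: "ipvs"
    ipvs:
      scheduler: ""
    nodePortAddresses: null
    # ... remaining fields unchanged
```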


To change the scheduling algorithm, set the scheduler field (it lives under the ipvs section of the same config):
scheduler: ""

It defaults to empty, which falls back to rr. Before kube-proxy is restarted, the IPVS table is still empty:

[root@k8s-master ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

Restart the kube-proxy pods by deleting them one at a time


kubeadm does more than simplify cluster setup: it also deploys the Kubernetes components themselves as containers. The one exception is the kubelet, which is managed the traditional way, by systemd on the host. Every other component runs as a container, i.e. it is brought up by starting a container.

[root@k8s-master ~]# kubectl get pod -n kube-system -o wide
NAME               READY   STATUS    RESTARTS   AGE     IP                NODE         NOMINATED NODE   READINESS GATES
kube-proxy-7wgls   1/1     Running   2          2d11h   192.168.179.103   k8s-node1    <none>           <none>
kube-proxy-vkt7g   1/1     Running   3          2d11h   192.168.179.104   k8s-node2    <none>           <none>
kube-proxy-xth6p   1/1     Running   2          2d12h   192.168.179.102   k8s-master   <none>           <none>

After each pod is deleted, a new one is started in its place (kube-proxy runs as a DaemonSet):

[root@k8s-master ~]# kubectl delete pod kube-proxy-7wgls -n kube-system 
pod "kube-proxy-7wgls" deleted
[root@k8s-master ~]# kubectl delete pod kube-proxy-vkt7g -n kube-system
pod "kube-proxy-vkt7g" deleted
[root@k8s-master ~]# kubectl delete pod kube-proxy-xth6p -n kube-system
pod "kube-proxy-xth6p" deleted
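Deleting the pods one at a time works, but the whole set can also be bounced in one go (a sketch assuming the kubeadm-default k8s-app=kube-proxy label):

```shell
# Delete every kube-proxy pod at once; the DaemonSet recreates them
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
# Or, on kubectl 1.15+, trigger a rolling restart instead
kubectl -n kube-system rollout restart daemonset kube-proxy
```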

Once the replacement pods are running, check again:

[root@k8s-master ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 127.0.0.1:32567 rr
-> 10.244.169.137:80 Masq 1 0 0
TCP 172.17.0.1:31332 rr
-> 10.244.169.139:80 Masq 1 0 0
TCP 172.17.0.1:31383 rr
-> 10.244.169.138:8443 Masq 1 0 0
TCP 172.17.0.1:32567 rr
-> 10.244.169.137:80 Masq 1 0 0
TCP 192.168.179.102:31332 rr
-> 10.244.169.139:80 Masq 1 0 0
TCP 192.168.179.102:31383 rr
-> 10.244.169.138:8443 Masq 1 0 0
TCP 192.168.179.102:32567 rr
-> 10.244.169.137:80 Masq 1 0 0
TCP 10.96.0.1:443 rr
-> 192.168.179.102:6443 Masq 1 0 0
TCP 10.96.0.10:53 rr
-> 10.244.36.73:53 Masq 1 0 0
-> 10.244.36.75:53 Masq 1 0 0
TCP 10.96.0.10:9153 rr
-> 10.244.36.73:9153 Masq 1 0 0
-> 10.244.36.75:9153 Masq 1 0 0
TCP 10.98.137.105:80 rr
-> 10.244.169.137:80 Masq 1 0 0
TCP 10.99.50.2:80 rr
-> 10.244.169.139:80 Masq 1 0 0
TCP 10.99.68.0:443 rr
-> 192.168.179.103:4443 Masq 1 0 0
TCP 10.106.187.194:443 rr
-> 10.244.169.138:8443 Masq 1 0 0
TCP 10.108.138.194:8000 rr
-> 10.244.36.74:8000 Masq 1 0 0
TCP 127.0.0.1:31332 rr
-> 10.244.169.139:80 Masq 1 0 0
TCP 127.0.0.1:31383 rr
-> 10.244.169.138:8443 Masq 1 0 0
UDP 10.96.0.10:53 rr
-> 10.244.36.73:53 Masq 1 0 0
-> 10.244.36.75:53 Masq 1 0 0


For example, the nginx NodePort service listed below maps to this IPVS entry (node port 31332, round-robin, masquerading to the pod IP):

TCP 172.17.0.1:31332 rr
-> 10.244.169.139:80 Masq 1 0 0

[root@k8s-master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP        2d12h
nginx        NodePort    10.99.50.2   <none>        80:31332/TCP   2d11h

[root@k8s-master ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
nginx-6799fc88d8-mkq8g   1/1     Running   0          10m   10.244.169.139   k8s-node2   <none>           <none>
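As a final check, kube-proxy reports its active mode on its metrics port (10249 by default), so from any node:

```shell
# Should print "ipvs" once the restarted kube-proxy is up
curl -s http://127.0.0.1:10249/proxyMode
```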