Environment Preparation
This deployment uses three CentOS 7 machines:
Node name | Node IP
k8s-master01 | 192.168.0.150
k8s-node01 | 192.168.0.151
k8s-node02 | 192.168.0.152
Set the hostname (run the matching command on each node):
hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02
Verify the setting took effect:
hostname
Configure /etc/hosts:
vi /etc/hosts
192.168.0.150 k8s-master01
192.168.0.151 k8s-node01
192.168.0.152 k8s-node02
Save the file; every node needs these three entries.
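To avoid editing each node by hand, you can append the entries once on the master and push the file out (a convenience sketch, assuming root SSH access between the machines):
cat >> /etc/hosts <<EOF
192.168.0.150 k8s-master01
192.168.0.151 k8s-node01
192.168.0.152 k8s-node02
EOF
# copy the hosts file to the worker nodes
for node in k8s-node01 k8s-node02; do scp /etc/hosts root@$node:/etc/hosts; done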
Install dependency packages (on all nodes)
yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git
Switch the firewall to iptables and flush its rules
systemctl stop firewalld && systemctl disable firewalld
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
Disable swap and SELinux
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
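Quick check that both changes took effect:
getenforce   # prints Permissive now (Disabled after a reboot)
free -m      # the Swap line should show 0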
Tune kernel parameters for Kubernetes
cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0 # avoid swap; allow it only when the system is about to OOM
vm.overcommit_memory=1 # do not check whether physical memory is sufficient
vm.panic_on_oom=0 # do not panic on OOM; let the OOM killer handle it
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
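If sysctl complains that the net.bridge.* keys do not exist, the br_netfilter module is not loaded yet (it is loaded again in the IPVS step below); load it and re-apply:
modprobe br_netfilter
sysctl -p /etc/sysctl.d/kubernetes.conf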
Set the system time zone
# Set the time zone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# Keep the hardware clock in UTC
timedatectl set-local-rtc 0
# Restart services that depend on the system time
systemctl restart rsyslog
systemctl restart crond
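Verify the result:
timedatectl   # should report "Time zone: Asia/Shanghai" and "RTC in local TZ: no"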
Stop services the cluster does not need
systemctl stop postfix && systemctl disable postfix
Configure rsyslogd and systemd-journald
mkdir /var/log/journal # directory for persistent logs
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# Persist logs to disk
Storage=persistent
# Compress historical logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# Cap total disk usage at 10G
SystemMaxUse=10G
# Cap a single journal file at 200M
SystemMaxFileSize=200M
# Retain logs for two weeks
MaxRetentionSec=2week
# Do not forward logs to syslog
ForwardToSyslog=no
EOF
systemctl restart systemd-journald
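Confirm journald is now persisting to disk:
journalctl --disk-usage   # reports usage under /var/log/journal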
Upgrade the system kernel to 4.4 or later
The stock 3.10.x kernel in CentOS 7.x has known bugs that make Docker and Kubernetes unstable, so install a newer long-term-support kernel from ELRepo:
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# After installing, check that the new kernel's menuentry in /boot/grub2/grub.cfg contains an initrd16 line; if it does not, install again!
yum --enablerepo=elrepo-kernel install -y kernel-lt
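The exact menuentry title depends on the kernel version kernel-lt pulled in; list the entries first to confirm the name:
awk -F\' '/^menuentry / {print $2}' /boot/grub2/grub.cfg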
# Boot from the new kernel by default
grub2-set-default 'CentOS Linux (4.4.189-1.el7.elrepo.x86_64) 7 (Core)'
reboot
uname -r
The environment is ready; now deploy the cluster.
Prerequisites for enabling IPVS in kube-proxy
modprobe br_netfilter
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
Install Docker
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum update -y && yum install -y docker-ce
## Create the /etc/docker directory
mkdir /etc/docker
# Configure the daemon
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
# Reload systemd and restart the Docker service
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
docker -v
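Verify that Docker picked up the systemd cgroup driver from daemon.json (the kubelet expects them to match):
docker info | grep -i cgroup   # should print Cgroup Driver: systemd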
Install kubeadm (master and worker nodes)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum -y install kubeadm kubectl kubelet
systemctl enable kubelet.service
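The repo installs the latest release by default. Since the init step below targets v1.18.2, you may prefer to pin the packages to match (assuming the mirror carries these version strings):
yum -y install kubelet-1.18.2 kubeadm-1.18.2 kubectl-1.18.2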
If the packages fail to install, clean the yum cache and retry (a reboot can also help):
yum clean all
Then repeat the install and verify:
kubeadm version
Initialize the master node
kubeadm init \
--apiserver-advertise-address=192.168.0.150 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.18.2 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16
At least 2 CPUs and 2 GB of RAM are recommended. This is not a hard requirement; a 1 CPU / 1 GB machine can also bring up the cluster, but:
With 1 CPU, initializing the master reports [WARNING NumCPU]: the number of available CPUs 1 is less than the required 2
Deploying add-ons or pods may then report warnings such as FailedScheduling: Insufficient cpu, Insufficient memory
If you see these, your master VM has only 1 core assigned; give it more.
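If kubeadm init stalls while pulling images, you can pre-pull them with the same mirror settings before retrying:
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.2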
Check the pulled images
docker images
Set up the kubectl tool
Copy and run the following commands as-is:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl commands now work directly:
kubectl get node
kubectl get pod --all-namespaces
The nodes show NotReady because the CoreDNS pods cannot start until a pod network add-on is installed.
Install the Calico network
[root@k8s-master01 ~]# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Check the pods and nodes
The Calico pods may stay in a pending/creating state for a while.
List the pods in the kube-system namespace:
kubectl get pods -n kube-system
Inspect a pod in detail:
kubectl describe pod calico-node-t4r8w -n kube-system
Give it some more time; slow image pulls over the network are the usual cause.
kubectl get pods -n kube-system
kubectl get node
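Instead of polling, you can watch the pods come up:
kubectl get pods -n kube-system -w   # Ctrl-C to stop watching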
Join the worker nodes
kubeadm join 192.168.0.150:6443 --token pbi1bu.as6kf2w36bdjnw9r --discovery-token-ca-cert-hash sha256:020160ae69ae980d6bd942e2bcfdd47330476317ccb09c161e3853db51dd97e0
Run the join command on both node1 and node2.
Verify on the master that the nodes joined:
kubectl get node
If you forget the token, recover it as follows (a one-step shortcut is shown after this list):
1) List the tokens; if the token has expired, generate a new one
kubeadm token list
kubeadm token create
2) Compute the SHA-256 hash of the CA certificate
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
3) Join the node to the cluster
kubeadm join 192.168.0.150:6443 --token pbi1bu.as6kf2w36bdjnw9r --discovery-token-ca-cert-hash sha256:020160ae69ae980d6bd942e2bcfdd47330476317ccb09c161e3853db51dd97e0
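As the promised shortcut, kubeadm can generate a fresh token and print the complete join command in one step:
kubeadm token create --print-join-command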
Test the Kubernetes cluster
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods
kubectl get pod,svc
Wait for the rollout to finish, then check the result:
kubectl get pod,svc -o wide
Check on node1:
docker ps -a | grep nginx
Other commands: show more information about a pod
kubectl get pod nginx-f89759699-dgd5x -o wide
Other commands: show a pod's full details in YAML
kubectl get pod nginx-f89759699-dgd5x -o yaml
The page is reachable from a browser via any node's IP on the assigned NodePort (31844 in this run):
http://192.168.0.150:31844/ http://192.168.0.151:31844/ http://192.168.0.152:31844/
Delete a pod
kubectl delete pod nginx-f89759699-dgd5x
Scale out
kubectl scale deploy/nginx --replicas=3
kubectl get pod
kubectl get pod nginx-f89759699-dgd5x -o wide
Exec into a pod
kubectl exec -ti nginx-f89759699-8c62t -- /bin/sh
cd /usr/share/nginx/html
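From inside the pod you can, for example, replace the default page and then fetch it through the NodePort from any machine (31844 is the port assigned above):
echo "hello from nginx" > index.html
exit
# back on the host:
curl http://192.168.0.150:31844/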
Install kubernetes-dashboard
The official dashboard manifest does not expose the Service via NodePort, so download the YAML locally and add a nodePort to the Service:
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc7/aio/deploy/recommended.yaml
kubectl create namespace kubernetes-dashboard
vi recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
Create and run it:
kubectl create -f recommended.yaml
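Logging in to the dashboard requires a bearer token. A common approach for a lab setup (this grants cluster-admin; the dashboard-admin name is just an example) is to create a service account and read its token:
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
# print the token from the auto-created secret
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin-token | awk '{print $1}')
Then open https://<node-ip>:30000/ and paste the token.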
If the installation fails and you want to wipe the environment and start over, a single command resets the node (use with caution):
kubeadm reset
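kubeadm reset does not clean up everything; you usually also want to clear the CNI config, the kubectl config, and leftover iptables/IPVS rules (adjust to your setup):
rm -rf /etc/cni/net.d $HOME/.kube
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear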