The official Kubernetes setup docs: https://kubernetes.io/docs/setup/
The Kubernetes GitHub repository: https://github.com/kubernetes/kubernetes
This deployment uses Kubernetes v1.24.5 on three VMs. The lab environment is as follows:

Node         IP               Spec
k8s-master   192.168.179.133  CentOS 7.9 Minimal, 2 vCPU, 2 GB RAM, 30 GB disk
k8s-node1    192.168.179.134  CentOS 7.9 Minimal, 2 vCPU, 2 GB RAM, 30 GB disk
k8s-node2    192.168.179.135  CentOS 7.9 Minimal, 2 vCPU, 2 GB RAM, 30 GB disk

Deployment overview
Steps 1-10 must be run on all three nodes:
1. Upgrade the Linux kernel
2. Set the hostname
3. Add hosts entries
4. Disable swap
5. Check the product UUID
6. Disable the firewall
7. Adjust kernel parameters
8. Install containerd.io
9. Install kubeadm
10. Pull the required images
11. Initialize the cluster with kubeadm init on the master node
12. Install a CNI plugin on the master node (here I use Flannel)
13. Run kubeadm join on the worker nodes to join the cluster
14. Check the cluster and node status on the master; if everything looks good, the cluster was created successfully
One more thing to watch: time synchronization. Cluster nodes need their system clocks kept in sync, and in some environments it is even worth setting up a dedicated time server.
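A minimal sketch of that, assuming the stock CentOS 7 repositories and the chrony package (NTP setup is not covered in the numbered steps above):

#Install and enable chrony on all three nodes for NTP time synchronization
yum install -y chrony
systemctl start chronyd
systemctl enable chronyd
#Verify that the system clock is being synchronized
chronyc tracking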

Now let's go through the steps one by one.
1. Upgrade the Linux kernel
Run on all three nodes.
CentOS 7.9 ships with kernel 3.10 by default, and running a Kubernetes cluster on that kernel can trigger bugs.
Related bug report: https://cloud.tencent.com/developer/article/1645618

Kernel RPM packages can be downloaded from https://elrepo.org/linux/kernel/el7/x86_64/RPMS/. Here I upgrade to kernel 5.19, the latest version at the time of writing, which requires the matching kernel-ml and kernel-ml-devel packages. The upgrade steps are as follows:

[root@localhost ~]# uname -r
3.10.0-1160.el7.x86_64
[root@localhost ~]# ls
anaconda-ks.cfg kernel-ml-5.19.9-1.el7.elrepo.x86_64.rpm kernel-ml-devel-5.19.9-1.el7.elrepo.x86_64.rpm
#Install the local RPM packages (output omitted)
[root@localhost ~]# yum localinstall kernel-ml-5.19.9-1.el7.elrepo.x86_64.rpm kernel-ml-devel-5.19.9-1.el7.elrepo.x86_64.rpm -y
#List the kernel menu entries; the index 0 shown next to 5.19 is the number used in the next command
[root@localhost ~]# awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
0 : CentOS Linux (5.19.9-1.el7.elrepo.x86_64) 7 (Core)
1 : CentOS Linux (3.10.0-1160.el7.x86_64) 7 (Core)
2 : CentOS Linux (0-rescue-62bc487eae0940738df69ed71054726c) 7 (Core)
#Set the default boot kernel. It takes effect after a reboot; you can also hold off and reboot after step 10
[root@localhost ~]# grub2-set-default 0
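As an optional sanity check (not part of the original write-up), confirm which entry GRUB will boot and, after rebooting, which kernel is actually running:

#The saved default entry should be 0, matching the 5.19 kernel above
grub2-editenv list
#After the reboot, the running kernel should report 5.19.9
uname -r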

2. Set the hostname

#Run the matching command on k8s-master, k8s-node1, and k8s-node2 respectively to set each node's hostname
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2

3. Add hosts entries
Run on all three nodes.

cat >>/etc/hosts <<EOF
192.168.179.133 k8s-master
192.168.179.134 k8s-node1
192.168.179.135 k8s-node2
EOF

4. Disable swap
Run on all three nodes.
Kubernetes does not support swap by default; kubelet will refuse to start if swap is left enabled.

#Turn off swap immediately (lasts until reboot)
swapoff -a
#Turn off swap permanently by commenting out the swap entries in /etc/fstab
sed -i 's/.*swap.*/#&/g' /etc/fstab

5. Check the product UUID
Kubernetes uses the product UUID (among other identifiers) to tell nodes apart.

#Run the following on all three nodes and compare the output; if any two nodes share a UUID, the duplicate VM has to be recreated
cat /sys/class/dmi/id/product_uuid

6. Disable the firewall
Run on all three nodes.

#Stop the firewall immediately
systemctl stop firewalld
#Disable the firewall permanently; takes effect across reboots
systemctl disable firewalld
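Optionally, confirm the firewall state:

#Should report inactive
systemctl is-active firewalld
#Should report disabled
systemctl is-enabled firewalld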

7. Adjust kernel parameters
Run on all three nodes.

#Keep the Kubernetes-related kernel parameters in their own file so they persist across reboots
cat >/etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

#Register the kernel modules containerd needs so they are loaded automatically at every boot
cat > /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF

#Load the required kernel modules right now (for the current boot)
modprobe overlay
modprobe br_netfilter

#Apply the parameters from kubernetes.conf (and every other sysctl config file) immediately
sysctl --system
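As an optional check, verify that the modules are loaded and the parameters took effect:

#Both overlay and br_netfilter should appear
lsmod | grep -E 'overlay|br_netfilter'
#All three values should print as 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward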

8. Install containerd.io
Run on all three nodes.

#Install the prerequisite packages first
yum install -y yum-utils device-mapper-persistent-data lvm2
#Add the Docker CE yum repository (it also provides the containerd.io package)
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
#Install containerd.io
yum install containerd.io-1.6.6-3.1.el7.x86_64 -y
mkdir -p /etc/containerd
#containerd config default prints the default configuration; redirect it into the config file
containerd config default>/etc/containerd/config.toml
systemctl restart containerd
systemctl enable containerd
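Optionally, confirm that containerd is running and at the expected version:

#The service should be active (running)
systemctl status containerd --no-pager
#Client and server should both report version 1.6.6
ctr version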

Note: Kubernetes v1.24 removed dockershim, the built-in Docker support, which is why the container runtime installed here is containerd rather than Docker. See the screenshot below (my translation of it is rough).

[Screenshot: Kubernetes documentation note on the removal of dockershim in v1.24]


Why install containerd 1.6?

Check the CHANGELOG in the Kubernetes GitHub repository for the containerd version it lists; at the time of writing the newest entry required v1.4.12. containerd is generally backward compatible, so installing the newer 1.6 release should be fine, or you can install exactly 1.4.12 as listed.

[Screenshot: containerd version requirement from the Kubernetes CHANGELOG]


9. Install kubeadm

Run on all three nodes.

#Add the Kubernetes yum repository. gpgcheck and repo_gpgcheck must be set to 0, otherwise the signature check fails against this mirror
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

#Refresh the yum metadata cache
yum makecache

#Install kubelet, kubeadm, kubectl, and git
yum install kubelet-1.24.5-0.x86_64 kubeadm-1.24.5-0.x86_64 kubectl-1.24.5-0.x86_64 git -y

#Start and enable the kubelet service
systemctl start kubelet
systemctl enable kubelet
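Optionally, confirm the installed versions. Note that kubelet will keep restarting at this point because it has no configuration yet; that is expected until kubeadm init (or kubeadm join) is run.

#Each should report v1.24.5
kubeadm version -o short
kubelet --version
kubectl version --client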

10. Pull the required images
Run on all three nodes.

#This pulls from the default upstream registry, which requires access to sites blocked in mainland China. In my setup the physical host has such access and the VMs use NAT networking, so they can reach it too
kubeadm config images pull

PS: the alternative is to pull from a domestic mirror registry and retag the images; I'll document it properly when I actually need it, but see the hedged sketch below.
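A minimal sketch of that alternative, assuming the registry.aliyuncs.com/google_containers mirror (I have not used this in the deployment described here):

#Pull the control-plane images from a domestic mirror instead of the default registry
kubeadm config images pull \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.24.5
#kubeadm init accepts the same --image-repository flag, which avoids having to retag the images afterwards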
I usually reboot the servers after this step, and while they are shut down I take a VM snapshot so there is something to roll back to if a later step goes wrong.
11. Initialize the cluster with kubeadm init on the master node
Run on the master only.

kubeadm init \
--pod-network-cidr=192.168.0.0/16 \
--cri-socket unix:///run/containerd/containerd.sock \
--upload-certs \
--control-plane-endpoint=k8s-master

Full kubeadm init output:

[root@k8s-master ~]# kubeadm init \
> --pod-network-cidr=192.168.0.0/16 \
> --cri-socket unix:///run/containerd/containerd.sock \
> --upload-certs \
> --control-plane-endpoint=k8s-master
I0921 17:24:09.259608 1510 version.go:255] remote version is much newer: v1.25.1; falling back to: stable-1.24
[init] Using Kubernetes version: v1.24.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.179.133]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.179.133 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.179.133 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.006069 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
c1432cf1fd134b5743b4d4641f2250bc821cac09718150bf6992f64908420794
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: p3ju4k.0nuw2sn9y3cl89zv
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

kubeadm join k8s-master:6443 --token p3ju4k.0nuw2sn9y3cl89zv \
--discovery-token-ca-cert-hash sha256:024ef28efe30861492f3eb4f5c886de0930563e6358b2c1d1616626daa07d50b \
--control-plane --certificate-key c1432cf1fd134b5743b4d4641f2250bc821cac09718150bf6992f64908420794

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-master:6443 --token p3ju4k.0nuw2sn9y3cl89zv \
--discovery-token-ca-cert-hash sha256:024ef28efe30861492f3eb4f5c886de0930563e6358b2c1d1616626daa07d50b
[root@k8s-master ~]#

--pod-network-cidr sets the pod subnet (we will reuse this value when deploying Flannel), and --control-plane-endpoint is set to the master's hostname. If kubeadm init goes wrong, it can be undone with kubeadm reset, after which kubeadm init can be run again (a minimal sketch follows after the screenshots below). --cri-socket points at the CRI socket; the path is fixed for a given runtime but differs between runtimes, as shown below.

[Screenshot: CRI socket paths from the official Kubernetes documentation]


The values above are from the official Kubernetes documentation and differ slightly from what I actually used. I used the value shown below, taken from another blog; deployment worked fine with it, and I haven't tried the official value yet. Something to dig into when I have time.

[Screenshot: the --cri-socket value used in the referenced blog post]
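If kubeadm init does fail partway, a minimal reset-and-retry sketch (with the same --cri-socket used for init) looks like this:

#Undo the partial initialization; kubeadm will ask for confirmation
kubeadm reset --cri-socket unix:///run/containerd/containerd.sock
#Fix the underlying problem, then rerun the kubeadm init command above with the same flags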

Three parts of the kubeadm init output matter to us: the commands for setting up the kubeconfig, the kubeadm join command for adding additional control-plane (master) nodes, and the kubeadm join command for adding worker nodes, as shown below.

[Screenshot: the three relevant parts of the kubeadm init output]

So, on the master, we still need to run the three kubeconfig commands from that output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

We can also check the cluster status:

[root@k8s-master ~]# kubectl cluster-info    #a healthy cluster prints output similar to the following
Kubernetes control plane is running at https://k8s-master:6443
CoreDNS is running at https://k8s-master:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@k8s-master ~]#

12. Install a CNI plugin on the master node (here I use Flannel)
Run on the master only.
At this point, running kubectl get pods --all-namespaces on the master will likely show the coredns pods stuck in a non-Running state. They only become Running once a CNI plugin is installed; the CNI plugin I use here is Flannel.
1. On the master, create a kube-flannel.yml file containing the manifest from https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml (a download-and-edit sketch follows after this list)
2. In kube-flannel.yml, change the Network value to 192.168.0.0/16, i.e. "Network": "192.168.0.0/16". It must match the --pod-network-cidr passed to kubeadm init, otherwise Flannel will not start properly
3. On the master, run:

#Deploy Flannel from the manifest
kubectl apply -f kube-flannel.yml

4. Run kubectl get pods --all-namespaces on the master again; once everything is healthy, all pods should be in the Running state
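A minimal sketch of steps 1 and 2, assuming curl is available and that the upstream manifest's default pod network is 10.244.0.0/16 (check the downloaded file before editing):

#Download the Flannel manifest
curl -fsSLo kube-flannel.yml https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
#Replace the default pod network with the --pod-network-cidr passed to kubeadm init
sed -i 's#10.244.0.0/16#192.168.0.0/16#g' kube-flannel.yml
#Double-check the result
grep -n '"Network"' kube-flannel.yml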
13. Run kubeadm join on the worker nodes to join the cluster
Run on node1 and node2 respectively.

[root@k8s-node2 ~]# kubeadm join k8s-master:6443 --token p3ju4k.0nuw2sn9y3cl89zv \
> --discovery-token-ca-cert-hash sha256:024ef28efe30861492f3eb4f5c886de0930563e6358b2c1d1616626daa07d50b
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s-node2 ~]#

Note: the kubeadm join command used here is copied straight from the second kubeadm join in the master's kubeadm init output (the worker-node one, not the control-plane one); don't mix them up.
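An extra tip not covered in the original output: if the join command has been lost, or the bootstrap token (valid for 24 hours by default) has expired, a fresh worker join command can be generated on the master:

#Prints a ready-to-use "kubeadm join ..." command with a new token
kubeadm token create --print-join-command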
14. Check the cluster and node status on the master; if everything looks good, the cluster was created successfully
Run on the master.

#List all nodes and their status
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane 18h v1.24.5
k8s-node1 Ready <none> 10h v1.24.5
k8s-node2 Ready <none> 10h v1.24.5
[root@k8s-master ~]#

Running kubectl get nodes on the master now shows the newly joined worker nodes. A freshly joined node starts out as NotReady and switches to Ready on its own after a few minutes.
If all three servers are rebooted, the Kubernetes cluster comes back up automatically once the machines are up again.

This article draws on: https://computingforgeeks.com/deploy-kubernetes-cluster-on-ubuntu-with-kubeadm/#comments