• Note

This article was written in March 2024. At that time, the Calico images used in section 4.4 could be pulled directly from Docker Hub, but since June 2024 direct pulls over the public network have been blocked for policy reasons (some readers report they can still pull; if you have your own way around it, use it). The workaround is simply to import the images offline by hand; for the detailed procedure, see the 【G050】 article below and the sketch that follows.
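A minimal sketch of the offline import, assuming a containerd runtime and an image archive named calico-v3.27.2-images.tar (a hypothetical filename; use whatever archive you exported or obtained), run on every node:

[root@kmaster ~]# ctr -n k8s.io images import calico-v3.27.2-images.tar    # load the archive into containerd's k8s.io namespace
[root@kmaster ~]# crictl images | grep calico                              # confirm the kubelet can now see the Calico images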

1 What You Need to Know

  • Operating system: CentOS Stream release 8, minimal installation, configured with a static IP
  • Installation sources can be downloaded from the Alibaba open-source mirror site or another mirror of your choice
  • The environment uses 3 virtual machines, each with at least 2 vCPUs and a single NIC configured in NAT mode, with access to the external network
  • The Kubernetes repository configuration method has changed slightly; the script has already been updated for this, so just download and use it (see the sketch after this list)
  • Matches the latest Calico network plugin, v3.27.2; for detailed requirements and the configuration workflow, see the official Install Calico documentation
  • For a better reading experience, click the table-of-contents button at the top left of the article to display its overall structure
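For reference, a minimal sketch of the community-hosted repository definition in /etc/yum.repos.d/kubernetes.repo is shown below; the baseurl follows the upstream pkgs.k8s.io layout, and the script in this article may substitute a mirror:

[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni

With the exclude line present, the packages are later installed with --disableexcludes=kubernetes.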

2 Environment Planning

Hostname    IP                Gateway/DNS       CPU/Memory    Disk    Role             Notes
kmaster     192.168.100.138   192.168.100.2     2C / 8G       100G    control plane
knode1      192.168.100.139   192.168.100.2     2C / 8G       100G    worker node 1
knode2      192.168.100.140   192.168.100.2     2C / 8G       100G    worker node 2

3 System Environment Configuration

On the three nodes, only the hostname and IP address need to be set by hand; the rest of the system configuration is done on each node by the script. Click here to get the script and the configuration files.
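A minimal sketch of the per-node hostname and static IP setup, shown for kmaster; the interface/connection name ens160 is an assumption, so adjust it to your environment:

[root@kmaster ~]# hostnamectl set-hostname kmaster
[root@kmaster ~]# nmcli connection modify ens160 ipv4.method manual \
                    ipv4.addresses 192.168.100.138/24 ipv4.gateway 192.168.100.2 ipv4.dns 192.168.100.2
[root@kmaster ~]# nmcli connection up ens160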

[root@kmaster ~]# sh Stream8-k8s-v1.29.2.sh
[root@knode1 ~]# sh Stream8-k8s-v1.29.2.sh
[root@knode2 ~]# sh Stream8-k8s-v1.29.2.sh

***kmaster output***
==============================================================================================================================
 Package                               Architecture          Version                          Repository                 Size
==============================================================================================================================
Installing:
 kubeadm                               x86_64                1.29.2-150500.1.1                kubernetes                9.7 M
 kubectl                               x86_64                1.29.2-150500.1.1                kubernetes                 10 M
 kubelet                               x86_64                1.29.2-150500.1.1                kubernetes                 19 M

......output omitted......

Verifying        : kubeadm-1.29.2-150500.1.1.x86_64                                                                    7/10 
Verifying        : kubectl-1.29.2-150500.1.1.x86_64                                                                    8/10 
Verifying        : kubelet-1.29.2-150500.1.1.x86_64                                                                    9/10 
Verifying        : kubernetes-cni-1.3.0-150500.1.1.x86_64                                                             10/10

Installed:
  conntrack-tools-1.4.4-11.el8.x86_64       cri-tools-1.29.0-150500.1.1.x86_64         kubeadm-1.29.2-150500.1.1.x86_64      
  kubectl-1.29.2-150500.1.1.x86_64          kubelet-1.29.2-150500.1.1.x86_64           kubernetes-cni-1.3.0-150500.1.1.x86_64
  libnetfilter_cthelper-1.0.0-15.el8.x86_64 libnetfilter_cttimeout-1.0.0-11.el8.x86_64 libnetfilter_queue-1.0.4-3.el8.x86_64 
  socat-1.7.4.1-1.el8.x86_64               

Complete!
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
10 configuration successful ^_^
Congratulations ! The basic configuration has been completed

***The output on knode1 and knode2 is identical to that on kmaster***
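The script itself is obtained separately (see the link at the start of section 3); as a hedged sketch based on the standard kubeadm prerequisites rather than on the script's actual contents, the host preparation it automates looks roughly like this:

# disable swap and set SELinux to permissive
swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab
setenforce 0 && sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# kernel modules and sysctl settings required for pod networking
cat > /etc/modules-load.d/k8s.conf << EOF
overlay
br_netfilter
EOF
modprobe overlay && modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system
# install a container runtime (e.g. containerd), then the Kubernetes packages seen in the output above
dnf install -y kubeadm-1.29.2 kubelet-1.29.2 kubectl-1.29.2 --disableexcludes=kubernetes
systemctl enable --now kubelet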

4 Cluster Setup

4.1 Initialize the Cluster (master node only)

The last command in the script, command No. 11, is copied out separately and run on the kmaster node only:

[root@kmaster ~]# kubeadm init --image-repository /google_containers --kubernetes-version=v1.29.2 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.29.2
[preflight] Running pre-flight checks
	[WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0312 12:13:25.451708   13656 checks.go:835] detected that the sandbox image "/google_containers/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "/google_containers/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kmaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.100.138]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kmaster localhost] and IPs [192.168.100.138 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kmaster localhost] and IPs [192.168.100.138 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 10.002081 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kmaster as control-plane by adding the labels: [/control-plane /exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kmaster as control-plane by adding the taints [/control-plane:NoSchedule]
[bootstrap-token] Using token: 9wo4e7.663zzjqsxfoo8rol
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.138:6443 --token 9wo4e7.663zzjqsxfoo8rol \
	--discovery-token-ca-cert-hash sha256:3b2314552f1c32e08d2f6947c00e295ad308175b17a04a8cd9a496a0419dc275 

4.2 Configure Environment Variables (master node only)

[root@kmaster ~]# mkdir -p $HOME/.kube
[root@kmaster ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@kmaster ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@kmaster ~]# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile
[root@kmaster ~]# source /etc/profile

[root@kmaster ~]# kubectl get node
NAME      STATUS     ROLES           AGE     VERSION
kmaster   NotReady   control-plane   5m37s   v1.29.2

4.3 Join the Worker Nodes to the Cluster (knode1 and knode2)

Copy the kubeadm join command generated at the end of section 4.1 and run it on each of the two worker nodes.
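If the bootstrap token from section 4.1 has expired by the time you get here (kubeadm tokens are valid for 24 hours by default), generate a fresh join command on the master first:

[root@kmaster ~]# kubeadm token create --print-join-command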

[root@knode1 ~]# kubeadm join 192.168.100.138:6443 --token 9wo4e7.663zzjqsxfoo8rol \
> --discovery-token-ca-cert-hash sha256:3b2314552f1c32e08d2f6947c00e295ad308175b17a04a8cd9a496a0419dc275
[preflight] Running pre-flight checks
	[WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@knode2 ~]# kubeadm join 192.168.100.138:6443 --token 9wo4e7.663zzjqsxfoo8rol \
> --discovery-token-ca-cert-hash sha256:3b2314552f1c32e08d2f6947c00e295ad308175b17a04a8cd9a496a0419dc275
[preflight] Running pre-flight checks
	[WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@kmaster ~]# kubectl get node
NAME      STATUS     ROLES           AGE     VERSION
kmaster   NotReady   control-plane   6m59s   v1.29.2
knode1    NotReady   <none>          50s     v1.29.2
knode2    NotReady   <none>          46s     v1.29.2

4.4 Install the Calico Network (master node only)

Before the network component is installed, the cluster nodes show NotReady; after installation, wait a moment and the cluster status changes to Ready.
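If the cluster nodes cannot reach raw.githubusercontent.com directly, both manifests used in this section can be fetched in advance from a machine that does have access; the local filenames below match the ones used later on:

curl -L -o tigera-operator-3-27-2.yaml https://raw.githubusercontent.com/projectcalico/calico/v3.27.2/manifests/tigera-operator.yaml
curl -L -o custom-resources-3-27-2.yaml https://raw.githubusercontent.com/projectcalico/calico/v3.27.2/manifests/custom-resources.yaml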

  • Check the cluster status
[root@kmaster ~]# kubectl get node
NAME      STATUS     ROLES           AGE     VERSION
kmaster   NotReady   control-plane   6m59s   v1.29.2
knode1    NotReady   <none>          50s     v1.29.2
knode2    NotReady   <none>          46s     v1.29.2
  • Install the Tigera Calico operator
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.2/manifests/tigera-operator.yaml
***If the remote manifest cannot be applied because of network problems, upload the file to the node in advance (it can be obtained from the netdisk share)***

[root@kmaster ~]# kubectl create -f tigera-operator-3-27-2.yaml 
namespace/tigera-operator created
......output omitted......
serviceaccount/tigera-operator created
/tigera-operator created
/tigera-operator created
deployment.apps/tigera-operator created
  • Configure custom-resources.yaml
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.27.2/manifests/custom-resources.yaml
***If the remote manifest cannot be applied because of network problems, upload the file to the node in advance (it can be obtained from the netdisk share)***

[root@kmaster ~]# vim custom-resources-3-27-2.yaml 
***Change the CIDR of the IP address pool so that it matches the --pod-network-cidr parameter used during kubeadm cluster initialization (the provided configuration file has already been changed)***
cidr: 10.244.0.0/16
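For context, the edited section of custom-resources-3-27-2.yaml should look roughly like this (field layout taken from the upstream v3.27 manifest; only the cidr value differs from the default):

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 10.244.0.0/16              # must match --pod-network-cidr from kubeadm init
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()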

[root@kmaster ~]# kubectl create -f custom-resources-3-27-2.yaml 
/default created
/default created

***Watch the Calico pod status; once all pods are Running, the cluster status returns to normal***
[root@kmaster ~]# watch kubectl get pods -n calico-system

NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-f88f77676-crjd9   1/1     Running   0          27m
calico-node-2w85m                         1/1     Running   0          27m
calico-node-f7wqs                         1/1     Running   0          27m
calico-node-j4mpk                         1/1     Running   0          27m
calico-typha-c5b7dff9d-9glpc              1/1     Running   0          27m
calico-typha-c5b7dff9d-n5r4c              1/1     Running   0          27m
csi-node-driver-bbf2r                     2/2     Running   0          27m
csi-node-driver-q4ctf                     2/2     Running   0          27m
csi-node-driver-zwz5p                     2/2     Running   0          27m
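As an extra check, the Tigera operator also publishes per-component status objects; once every component reports AVAILABLE as True, the installation is healthy:

[root@kmaster ~]# kubectl get tigerastatus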
  • Check the cluster status again
[root@kmaster ~]# kubectl get node
NAME      STATUS   ROLES           AGE   VERSION
kmaster   Ready    control-plane   38m   v1.29.2
knode1    Ready    <none>          32m   v1.29.2
knode2    Ready    <none>          32m   v1.29.2

[root@kmaster ~]# kubectl get pod -A
NAMESPACE          NAME                                      READY   STATUS    RESTARTS   AGE
calico-apiserver   calico-apiserver-7f468f98b5-ncns4         1/1     Running   0          24m
calico-apiserver   calico-apiserver-7f468f98b5-xx6dd         1/1     Running   0          24m
calico-system      calico-kube-controllers-f88f77676-crjd9   1/1     Running   0          28m
calico-system      calico-node-2w85m                         1/1     Running   0          28m
calico-system      calico-node-f7wqs                         1/1     Running   0          28m
calico-system      calico-node-j4mpk                         1/1     Running   0          28m
calico-system      calico-typha-c5b7dff9d-9glpc              1/1     Running   0          28m
calico-system      calico-typha-c5b7dff9d-n5r4c              1/1     Running   0          28m
calico-system      csi-node-driver-bbf2r                     2/2     Running   0          28m
calico-system      csi-node-driver-q4ctf                     2/2     Running   0          28m
calico-system      csi-node-driver-zwz5p                     2/2     Running   0          28m
kube-system        coredns-857d9ff4c9-h8qbw                  1/1     Running   0          39m
kube-system        coredns-857d9ff4c9-x9hx2                  1/1     Running   0          39m
kube-system        etcd-kmaster                              1/1     Running   0          39m
kube-system        kube-apiserver-kmaster                    1/1     Running   0          39m
kube-system        kube-controller-manager-kmaster           1/1     Running   0          39m
kube-system        kube-proxy-5ddsn                          1/1     Running   0          39m
kube-system        kube-proxy-rgbhv                          1/1     Running   0          33m
kube-system        kube-proxy-rs29b                          1/1     Running   0          33m
kube-system        kube-scheduler-kmaster                    1/1     Running   0          39m
tigera-operator    tigera-operator-748c69cf45-48bpc          1/1     Running   0          29m

[root@kmaster ~]# crictl images
IMAGE                                                             TAG                 IMAGE ID            SIZE
/calico/cni                                              v3.27.2             bbf4b051c5078       88.2MB
/calico/csi                                              v3.27.2             b2c0fe47b0708       8.74MB
/calico/kube-controllers                                 v3.27.2             849ce09815546       33.4MB
/calico/node-driver-registrar                            v3.27.2             73ddb59b21918       11.2MB
/calico/node                                             v3.27.2             50df0b2eb8ffe       117MB
/calico/pod2daemon-flexvol                               v3.27.2             ea79f2d96a361       7.6MB
/google_containers/coredns                   v1.11.1             cbb01a7bd410d       18.2MB
/google_containers/etcd                      3.5.10-0            a0eed15eed449       56.6MB
/google_containers/kube-apiserver            v1.29.2             8a9000f98a528       35.1MB
/google_containers/kube-controller-manager   v1.29.2             138fb5a3a2e34       33.4MB
/google_containers/kube-proxy                v1.29.2             9344fce2372f8       28.4MB
/google_containers/kube-scheduler            v1.29.2             6fc5e6b7218c7       18.5MB
/google_containers/pause                     3.6                 6270bb605e12e       302kB
/google_containers/pause                     3.9                 e6f1816883972       322kB

  • END