I. Key Concepts
1. What is k8s and what is it for?
Kubernetes, k8s for short, was announced and open-sourced by Google in June 2014. It is a container cluster management system (a container orchestration tool): an open-source platform that automates the deployment, scaling, and maintenance of container clusters.
2. What is Docker?
Docker is a complete container management system. It provides a set of commands that let users work with container technology directly and conveniently, without having to care much about the underlying kernel technology.
3. What is Calico?
Calico is a routing-based (BGP) SDN that gives containers cross-host connectivity through routed forwarding. Calico turns each node into a virtual "router" and assigns it its own virtual subnet; that router then provides routing for the containers running on the node.
II. Environment and Configuration Files
(Note: all three nodes can reach the Internet.)
Hostname | IP address | Role | Notes |
node-3 | 192.168.200.153 | master, harbor | management node |
node-4 | 192.168.200.154 | node | worker node |
node-5 | 192.168.200.155 | node | worker node |
1. Akun's calico.yaml 【r7q1】
Official manifest:
wget https://docs.projectcalico.org/v3.22/manifests/calico.yaml --no-check-certificate
2. Akun's kubeadm-init.yaml 【r7q1】
3.docker-compose-linux-x86_64【r7q1】
4.harbor-offline-installer-v2.4.2【r7q1】
III. System Environment Configuration
This build requires the following VM environment:
1. Prepare the master VM environment
2. Minimum spec: 2 CPUs, 2 GB of RAM
3. Remove the firewalld firewall
4. Disable SELinux and swap
5. Configure the yum repositories and install kubeadm, kubelet, kubectl, and docker-ce
6. Configure the private Docker image registry and the cgroup driver (daemon.json)
7. Configure kernel parameters (/etc/sysctl.d/k8s.conf)
1.1 Steps (identical on the other two nodes)
1) Configure the hosts file
[root@node-3 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.200.153 node-3
192.168.200.154 node-4
192.168.200.155 node-5
[root@node-3 ~]#
2) Set the hostname
hostnamectl set-hostname node-3
3) Stop firewalld (or remove it)
systemctl stop firewalld && systemctl disable firewalld
4) Reset iptables (stale rules left behind will directly break k8s, Calico, and Docker cluster networking)
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
sysctl net.bridge.bridge-nf-call-iptables=1
5) Clean up leftover virtual NICs (skip on a brand-new system)
# Hosts that previously ran a cluster may still have leftover cni, flannel, or calico interfaces
ip link del cni0
ip link del flannel.1
ip link del tunl0
6) Disable SELinux and swap
[root@node-3 ~]# vim /etc/selinux/config
...
SELINUX=disabled
...
:wq
[root@node-3 ~]# swapoff -a
# Comment out the swap mount entry
[root@node-3 ~]# vim /etc/fstab
#UUID=67ab21b6-7963-41ad-857e-8d5d9b707d14 swap swap defaults 0 0
:wq
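# If you prefer a one-liner to hand-editing, a sed sketch that comments out any swap
# entry in /etc/fstab (writes a .bak backup; double-check the result afterwards):
[root@node-3 ~]# sed -ri.bak 's/^[^#].*\sswap\s.*/#&/' /etc/fstab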
7) Configure the yum repositories and install kubeadm, kubelet, kubectl, and docker-ce
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum repolist
yum install -y docker-ce docker-ce-cli containerd.io
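# Note: the stock CentOS repos do not carry kubeadm/kubelet/kubectl, so a Kubernetes
# repo is needed as well. A sketch using the Aliyun mirror (use any mirror you trust;
# gpgcheck is disabled here purely for brevity):
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF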
yum install -y kubeadm kubelet kubectl
yum install -y ipvsadm ipset
yum install -y wget
yum remove -y NetworkManager
# With NetworkManager removed, enable the network service (it is enabled by default)
systemctl restart network
systemctl enable network
# Per the official Calico docs, you can also keep NetworkManager and just add a config file telling it not to manage Calico's interfaces:
vim /etc/NetworkManager/conf.d/calico.conf
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*
8) Configure daemon.json (must be created by hand)
# I use Harbor for the registry
[root@node-3 ~]# vim /etc/docker/daemon.json
{
"registry-mirrors": ["https://3laho3y3.mirror.aliyuncs.com"],
"insecure-registries":["192.168.200.153:8080", "registry:8080"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
:wq
# Note: stop all running containers before restarting the docker service
# After any config change, the daemon must be reloaded
[root@node-3 ~]# systemctl daemon-reload
[root@node-3 ~]# systemctl restart docker
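# Also make docker start on boot, and confirm the cgroup driver change took effect:
[root@node-3 ~]# systemctl enable docker
[root@node-3 ~]# docker info | grep -i 'cgroup driver' # should print: systemd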
9) Configure kernel parameters (/etc/sysctl.d/k8s.conf)
[root@node-3 ~]# vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
:wq
# Load the kernel module
[root@node-3 ~]# modprobe br_netfilter
[root@node-3 ~]# sysctl --system
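# modprobe does not survive a reboot; one way to auto-load br_netfilter at boot:
[root@node-3 ~]# echo br_netfilter > /etc/modules-load.d/k8s.conf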
10) Reboot (optional)
# The SELinux change needs a reboot (or a temporary override); after changing the hostname, log in again
# Reboot
[root@node-3 ~]# reboot
# Without a reboot, set SELinux permissive instead. Mine shows Disabled because I already rebooted; without a reboot you should see Permissive
[root@node-3 ~]# setenforce 0
[root@node-3 ~]# getenforce
Disabled
IV. Deploying the Harbor Private Registry
(Note: I use an external PostgreSQL database; the bundled database also works. PostgreSQL installation is out of scope here and may get its own post later if there is interest.)
1. Install docker-compose
[root@node-3 ~]# mv ../docker-compose-linux-x86_64 /usr/local/bin/docker-compose
[root@node-3 ~]# chmod +x /usr/local/bin/docker-compose
[root@node-3 ~]# docker-compose --version
2. Install Harbor
[root@node-3 ~]# tar -zxf harbor-offline-installer-v2.4.2.tgz
[root@node-3 ~]# cd harbor
[root@node-3 harbor]# cp harbor.yml.tmpl harbor.yml
[root@node-3 harbor]# vim harbor.yml
hostname: 192.168.200.153
port: 8080
# Comment out the https-related settings
# https related config
#https:
# https port for harbor, default is 443
#port: 443
# The path of cert and key files for nginx
#certificate: /your/certificate/path
#private_key: /your/private/key/path
# The matching external URL, if used; note it sits flush-left (no indentation)
#external_url: https://devharbor.ak.com
# Admin login password
harbor_admin_password: Csdn@123
# Data storage directory; in the full file the log location below sits under the log.local section
data_volume: /srv/docker/harbor/data
location: /srv/docker/harbor/log
# External database settings (skip this block if you use the bundled database)
external_database:
harbor:
host: 192.168.200.151
port: 5432
db_name: harbor
username: harbor
password: Csdn@123
ssl_mode: disable
max_idle_conns: 2
max_open_conns: 0
notary_signer:
host: 192.168.200.151
port: 5432
db_name: harbor_signer
username: harbor
password: Csdn@123
ssl_mode: disable
notary_server:
host: 192.168.200.151
port: 5432
db_name: harbor_server
username: harbor
password: Csdn@123
ssl_mode: disable
:wq
# Database setup, performed on 200.151 (skip if you are not using an external database)
[root@localhost ~]# cd /srv/program/pgsql-10.20/
[root@localhost pgsql-10.20]# cd pgsql/
[root@localhost pgsql]# su postgres
[postgres@localhost pgsql]$ ./bin/psql
psql.bin (10.20)
Type "help" for help.
postgres=# create database harbor with encoding='utf8' owner=harbor;
postgres=# create database harbor_server with encoding='utf8' owner=harbor;
postgres=# create database harbor_signer with encoding='utf8' owner=harbor;
postgres=# \q
[postgres@localhost pgsql]$
# Back on node-3
# Install Harbor
[root@node-3 harbor]# ./install.sh
# The images Harbor depends on and the services it starts:
[root@node-3 harbor]# docker-compose ps
NAME COMMAND SERVICE STATUS PORTS
harbor-core "/harbor/entrypoint.…" core running (healthy)
harbor-jobservice "/harbor/entrypoint.…" jobservice running (healthy)
harbor-log "/bin/sh -c /usr/loc…" log running (healthy) 127.0.0.1:1514->10514/tcp
harbor-portal "nginx -g 'daemon of…" portal running (healthy)
nginx "nginx -g 'daemon of…" proxy running (healthy) 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp
redis "redis-server /etc/r…" redis running (healthy)
registry "/home/harbor/entryp…" registry running (healthy)
registryctl "/home/harbor/start.…" registryctl running (healthy)
Web UI: http://192.168.200.153:8080/harbor
Username: admin
Password: Csdn@123
Since I wrote this post after finishing the build, the k8s images are already in place.
As the screenshot shows, the private repository path is 192.168.200.153:8080/library.
V. Building the k8s Cluster
Getting to know the kubeadm subcommands:
config: configuration management
help: show help
init: initialize a control-plane node
join: join a node to the cluster
reset: undo the changes made by init or join
token: manage bootstrap tokens
version: print the version
1. List the images needed to deploy Kubernetes (run on the master)
[root@node-3 ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.23.5
k8s.gcr.io/kube-controller-manager:v1.23.5
k8s.gcr.io/kube-scheduler:v1.23.5
k8s.gcr.io/kube-proxy:v1.23.5
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
[root@node-3 ~]#
2. Push all the required images into the private registry (run on the master)
# Pull the images from the Aliyun mirror
[root@node-3 ~]# docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.5
[root@node-3 ~]# docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.5
[root@node-3 ~]# docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.5
[root@node-3 ~]# docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.23.5
[root@node-3 ~]# docker pull registry.aliyuncs.com/google_containers/pause:3.6
[root@node-3 ~]# docker pull registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[root@node-3 ~]# docker pull registry.aliyuncs.com/google_containers/coredns:v1.8.6
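# The push below only works after logging in to Harbor and retagging each image so its
# name points at the private registry. A loop sketch (assumes the default "library"
# project, as used above):
[root@node-3 ~]# docker login 192.168.200.153:8080 -u admin
[root@node-3 ~]# for img in kube-apiserver:v1.23.5 kube-controller-manager:v1.23.5 \
  kube-scheduler:v1.23.5 kube-proxy:v1.23.5 pause:3.6 etcd:3.5.1-0 coredns:v1.8.6; do
  docker tag registry.aliyuncs.com/google_containers/$img 192.168.200.153:8080/library/$img
done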
# Push them into the private Harbor registry
[root@node-3 ~]# docker push 192.168.200.153:8080/library/kube-apiserver:v1.23.5
[root@node-3 ~]# docker push 192.168.200.153:8080/library/kube-proxy:v1.23.5
[root@node-3 ~]# docker push 192.168.200.153:8080/library/kube-controller-manager:v1.23.5
[root@node-3 ~]# docker push 192.168.200.153:8080/library/kube-scheduler:v1.23.5
[root@node-3 ~]# docker push 192.168.200.153:8080/library/etcd:3.5.1-0
[root@node-3 ~]# docker push 192.168.200.153:8080/library/coredns:v1.8.6
[root@node-3 ~]# docker push 192.168.200.153:8080/library/pause:3.6
# Remove the now-redundant Aliyun-tagged images
[root@node-3 ~]# docker rmi registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.5
[root@node-3 ~]# docker rmi registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.5
[root@node-3 ~]# docker rmi registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.5
[root@node-3 ~]# docker rmi registry.aliyuncs.com/google_containers/kube-proxy:v1.23.5
[root@node-3 ~]# docker rmi registry.aliyuncs.com/google_containers/pause:3.6
[root@node-3 ~]# docker rmi registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[root@node-3 ~]# docker rmi registry.aliyuncs.com/google_containers/coredns:v1.8.6
3. Install the master
# I use Harbor as my private registry; a detailed guide may follow later if people want one
# The general idea: pull from a public registry, then push into the local private Harbor registry
# If you have not set up a private registry, use imageRepository: registry.aliyuncs.com/google_containers instead
# kubectl and kubeadm support tab completion, which saves a lot of typing. The completion scripts are generated by kubectl and kubeadm themselves and only need to be sourced from your shell profile.
# Enable Tab completion
[root@node-3 ~]# kubectl completion bash >/etc/bash_completion.d/kubectl
[root@node-3 ~]# kubeadm completion bash >/etc/bash_completion.d/kubeadm
# Note: log out and back in for the completion to take effect
[root@node-3 ~]# exit
[root@node-3 ~]# kubeadm init --dry-run # preflight check; fix anything it reports
# Generate the stock kubeadm config file
[root@node-3 ~]# kubeadm config print init-defaults >kubeadm-init.yaml
[root@node-3 ~]# vim kubeadm-init.yaml
06: ttl: 24h0m0s # token lifetime
12: advertiseAddress: 192.168.200.153 # apiserver IP address
32: imageRepository: 192.168.200.153:8080/library # image registry address
34: kubernetesVersion: 1.23.5 # the k8s version being installed
36: dnsDomain: cluster.local # default cluster domain
37: podSubnet: 10.244.0.0/16 # pod CIDR (a new line you add yourself)
38: serviceSubnet: 10.254.0.0/16 # service CIDR
# Manually append the following 4 lines at the end of the file
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
:wq
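# mode: ipvs relies on the IPVS kernel modules. Loading them up front avoids kube-proxy
# quietly falling back to iptables; a sketch (module names can vary by kernel version):
[root@node-3 ~]# for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh; do modprobe $m; done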
[root@node-3 ~]# kubeadm init --config=kubeadm-init.yaml | tee master-init.log
# Output like the following means everything is OK
... ...
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
... ...
# Copy and save the command below; the nodes need it to join the cluster
kubeadm join 192.168.200.153:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:57ff39dfc6a30d52b95550ad2bc4986f24060e66ec96a00b2ae937da683c9213
[root@node-3 ~]# mkdir -p /root/.kube
[root@node-3 ~]# cp -i /etc/kubernetes/admin.conf /root/.kube/config
# Verify the control plane
[root@node-3 ~]# kubectl get componentstatuses # or: kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
[root@node-3 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5", GitCommit:"c285e781331a3785a7f436042c65c5641ce8a9e9", GitTreeState:"clean", BuildDate:"2022-03-16T15:58:47Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5", GitCommit:"c285e781331a3785a7f436042c65c5641ce8a9e9", GitTreeState:"clean", BuildDate:"2022-03-16T15:52:18Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
[root@node-3 ~]#
4. Install the nodes and join them to the master
# node-4 shown as the example
# Start the services
[root@node-4 ~]# systemctl enable --now docker kubelet
# If you forgot to save the join command and token, print them again with (run on the master):
[root@node-3 ~]# kubeadm token create --ttl=0 --print-join-command
kubeadm join 192.168.200.153:6443 --token oqlqjr.joy5xjancokpg2co --discovery-token-ca-cert-hash sha256:77c4ed2bea586d49184d002ecb2d9476202d2890db5c1ec86f2dadfce2884879
# You can also manage tokens yourself (run on the master)
[root@node-3 ~]# kubeadm token list # list tokens
[root@node-3 ~]# kubeadm token delete <token> # delete a token
[root@node-3 ~]# kubeadm token create # create a token
[root@node-3 ~]# kubeadm token create --ttl=0 --print-join-command # create a never-expiring token and print its join command
# Back on node-4: join the master
[root@node-4 ~]# kubeadm join 192.168.200.153:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:57ff39dfc6a30d52b95550ad2bc4986f24060e66ec96a00b2ae937da683c9213
# Once node-4 and node-5 have joined, verify from the master
# My STATUS shows Ready because the network plugin is already installed; at this point yours should read NotReady
[root@node-3 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
node-3 Ready control-plane,master 2d v1.23.5
node-4 Ready <none> 2d v1.23.5
node-5 Ready <none> 2d v1.23.5
VI. Configuring the Calico Network Plugin
# Fetch the official manifest (I use Calico 3.22.1, the latest at the time of writing, which supports the latest k8s, 1.23.5)
[root@node-3 ~]# wget https://docs.projectcalico.org/v3.22/manifests/calico.yaml --no-check-certificate
[root@node-3 ~]# vim calico.yaml
# Make sure the PodDisruptionBudget objects use the policy/v1 API
apiVersion: policy/v1
kind: PodDisruptionBudget
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
value: "10.244.0.0/16" #要和kubeadm-init.yaml中podSubnet一致
# Cluster type to identify the deployment type
- name: CLUSTER_TYPE
value: "k8s,bgp"
# IP automatic detection
- name: IP_AUTODETECTION_METHOD
value: "interface=ens.*" #根据自己的网卡填写,可以绑定具体的例如ens192
# Auto-detect the BGP IP address.
- name: IP
value: "autodetect"
# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
value: "Never"
:wq
[root@node-3 ~]# kubectl apply -f calico.yaml
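# Optionally watch the calico pods come up before checking the nodes (Ctrl-C to stop):
[root@node-3 ~]# kubectl get pods -n kube-system -w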
[root@node-3 ~]# kubectl get nodes # STATUS Ready means everything is OK
NAME STATUS ROLES AGE VERSION
node-3 Ready control-plane,master 2d v1.23.5
node-4 Ready <none> 2d v1.23.5
node-5 Ready <none> 2d v1.23.5
# Check the overall cluster state
[root@node-3 ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-56fcbf9d6b-8kjjc 1/1 Running 1 (2d ago) 2d
kube-system calico-node-24zd6 1/1 Running 1 (2d ago) 2d
kube-system calico-node-t78sj 1/1 Running 0 2d
kube-system calico-node-vtdtp 1/1 Running 0 2d
kube-system coredns-5546454b5c-8mzhz 1/1 Running 0 2d
kube-system coredns-5546454b5c-fmqn6 1/1 Running 0 2d
kube-system etcd-node-3 1/1 Running 7 (2d ago) 2d
kube-system kube-apiserver-node-3 1/1 Running 4 (2d ago) 2d
kube-system kube-controller-manager-node-3 1/1 Running 9 (23h ago) 2d
kube-system kube-proxy-747w8 1/1 Running 0 2d
kube-system kube-proxy-qxf6d 1/1 Running 1 (2d ago) 2d
kube-system kube-proxy-v2n9j 1/1 Running 0 2d
kube-system kube-scheduler-node-3 1/1 Running 17 (45h ago) 2d
With everything READY and online, the cluster build is complete!
The last two namespaces in the screenshot are Kuboard, a component I deployed in place of the Kubernetes Dashboard to manage the cluster; here is a quick look.
VII. Closing
Thanks to everyone for the continued support and encouragement. If you're interested, consider following; more good content is on the way!