k8s Cluster Installation Manual
- Overview
- Goals
- Environment preparation
- Install docker/kubelet/kubeadm/kubectl on all nodes
- Deploy the k8s master
- Install the Calico network component
- Join nodes to the k8s master
- Test that the cluster works
- Deploy the default k8s Dashboard
Overview
Hello! This guide walks you step by step through installing a k8s cluster. It is aimed at k8s beginners and is intended for testing only; it is not recommended for building production environments.
Goals
- Install Docker and the k8s packages on all hosts
- Deploy the k8s master
- Deploy a container network plugin
- Join Nodes to the k8s cluster
- Deploy the Dashboard
Environment preparation
Prepare 3 hosts for the cluster setup test:
172.16.19.96 k8s-master
172.16.19.171 k8s-node1
172.16.19.172 k8s-node2
- Disable the host firewall
systemctl stop firewalld
systemctl disable firewalld
systemctl stop iptables
systemctl disable iptables
- Disable SELinux
sed -i 's/enforcing/disabled/g' /etc/selinux/config
setenforce 0
- Disable swap
swapoff -a    # temporary
vim /etc/fstab    # permanent: comment out the swap line
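If you prefer not to edit /etc/fstab by hand, a minimal sketch that comments out any swap entry (assuming a standard fstab layout):
# comment out every swap line so swap stays off after reboot
sed -ri 's/^([^#].*\bswap\b.*)$/#\1/' /etc/fstab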
- Configure hostnames and hosts entries
hostnamectl set-hostname k8s-master    # on the master
hostnamectl set-hostname k8s-node1     # on node1
hostnamectl set-hostname k8s-node2     # on node2
vim /etc/hosts
172.16.19.96 k8s-master
172.16.19.171 k8s-node1
172.16.19.172 k8s-node2
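Equivalently, you can append the entries non-interactively (run on all three nodes):
cat >> /etc/hosts << EOF
172.16.19.96 k8s-master
172.16.19.171 k8s-node1
172.16.19.172 k8s-node2
EOF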
- Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
modprobe br_netfilter    # the bridge-nf sysctls require this kernel module
sysctl --system          # apply the settings from /etc/sysctl.d/k8s.conf
echo '1' > /proc/sys/net/ipv4/ip_forward
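To verify the settings took effect:
# both values should print as 1
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables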
Install docker/kubelet/kubeadm/kubectl on all nodes
- Install Docker
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.1.ce-3.el7
systemctl enable docker && systemctl start docker
docker version
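Optional: kubeadm's preflight check (visible in the init output later in this guide) warns that Docker's default "cgroupfs" cgroup driver is not the recommended one. A minimal sketch that switches Docker to the systemd driver before initializing the cluster:
# switch Docker to the systemd cgroup driver, then restart
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
docker info | grep -i cgroup    # should now report: systemd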
- Install the k8s components
Aliyun yum repo for k8s:
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# you can list the available versions first
yum --enablerepo=kubernetes list kubelet kubeadm kubectl --showduplicates | sort -r
# install a specific version
yum -y --enablerepo=kubernetes install kubelet-1.16.9-0 kubeadm-1.16.9-0 kubectl-1.16.9-0
systemctl enable kubelet    # kubeadm manages kubelet's config; it just needs to be enabled
Deploy the k8s master
- kubeadm init configuration file
Export the default configuration:
kubeadm config print init-defaults > kubeadm-v3.conf
Change the default image repository: vi kubeadm-v3.conf
# point it at the Aliyun mirror; no change needed if you have unrestricted internet access
imageRepository: registry.aliyuncs.com/google_containers
# pre-pull the images using the config file
kubeadm config images pull --config kubeadm-v3.conf
A sample config file is available at: https://pan.baidu.com/s/1zcRmtss3EjTDuRnFzA4bJA password: h8pg
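For reference, the fields you typically adjust in kubeadm-v3.conf look roughly like this (a sketch; the addresses and CIDRs below are assumptions based on this guide's environment, adapt them to yours):
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.19.96    # the master's IP
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.9
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 192.168.0.0/16     # must match CALICO_IPV4POOL_CIDR in calico-3.9.2.yaml
  serviceSubnet: 10.96.0.0/12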
- Initialize the cluster
# initialize the cluster with the config file (the images were already pulled in the previous step)
kubeadm init --config kubeadm-v3.conf
- Output on success:
[root@c2-master1 kubeadm]# kubeadm init --config kubeadm-v3.conf
[init] Using Kubernetes version: v1.16.9
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.9. Latest validated version: 18.09
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [c2-master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local k8s-api-c2.ops.geekplus.cc c2-master1 c2-master2 c2-master3] and IPs [10.96.0.1 172.16.19.105 172.16.19.239 127.0.0.1 172.16.19.105 172.16.19.106 172.16.19.108 172.16.19.239]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.522694 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node c2-master1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node c2-master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 172.16.19.96:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:3610643b232a0c536ef2cac74f4ca7b836dd5d8500d52045934f5be92c447535 --control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.16.19.239:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:9aa544afc7007ee55f51a8d531bcd83960096ed28ba9f9ea360f68385bf4ff90
- Copy the admin kubeconfig for authorization
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
- Other nodes join the cluster
master: kubeadm join xxxx --control-plane (the join command carries the --control-plane flag)
node: the same join command without the --control-plane flag
- Check the current cluster state
kubectl get nodes
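Until the network plugin is installed in the next step, the master typically reports NotReady, roughly like this (illustrative output, not captured from a real run):
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   2m    v1.16.9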
Install the Calico network component
wget https://kuboard.cn/install-script/calico/calico-3.9.2.yaml
kubectl apply -f calico-3.9.2.yaml
# make sure CALICO_IPV4POOL_CIDR in calico-3.9.2.yaml matches the podSubnet configured in kubeadm-v3.conf
A sample config file is available at: https://pan.baidu.com/s/10_qigXeUshq8NtLXOG-ETQ password: 34py
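A quick sanity check that the two CIDRs agree (a sketch; the kubeadm-config ConfigMap is created by kubeadm init):
# pool CIDR in the Calico manifest
grep -A1 CALICO_IPV4POOL_CIDR calico-3.9.2.yaml
# podSubnet the cluster was initialized with
kubectl -n kube-system get cm kubeadm-config -o yaml | grep podSubnet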
Join nodes to the k8s master
- Add master hosts
Sync the certificates under /etc/kubernetes/pki/ to the same path on each node that will become a master:
scp -r /etc/kubernetes/pki/ root@172.16.19.108:/etc/kubernetes/
scp -r /etc/kubernetes/pki/ root@172.16.19.106:/etc/kubernetes/
Run the join command:
kubeadm join 172.16.19.96:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:3610643b232a0c536ef2cac74f4ca7b836dd5d8500d52045934f5be92c447535 --control-plane
Install the network component (re-applying the manifest is a no-op if Calico is already installed):
kubectl apply -f calico-3.9.2.yaml
# make sure CALICO_IPV4POOL_CIDR in calico-3.9.2.yaml matches the podSubnet configured in kubeadm-v3.conf
Confirm the network component pods are running:
kubectl get po -n kube-system |grep calico
Confirm the node joined successfully:
kubectl get nodes
- Add worker hosts
kubeadm join 172.16.19.96:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:3610643b232a0c536ef2cac74f4ca7b836dd5d8500d52045934f5be92c447535
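If the bootstrap token from kubeadm init has expired (tokens are valid for 24 hours by default), generate a fresh join command on the master:
# prints a new 'kubeadm join ...' line with a fresh token and CA hash
kubeadm token create --print-join-command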
On the master node, confirm the join succeeded:
kubectl get nodes
Test that the cluster works
Deploy an nginx to check that the cluster is functioning.
Create a deployment in the k8s cluster and expose it as a NodePort service:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc
Access http://NodeIP:Port in a browser.
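To find the assigned NodePort and test from the command line (a minimal sketch; the node IP is one of this guide's workers):
# extract the NodePort assigned to the nginx service and probe it
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl -I http://172.16.19.171:${NODE_PORT}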
Deploy the default k8s Dashboard
Official docs: https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
GitHub project: https://github.com/kubernetes/dashboard
The dashboard version deployed here is v2.0.0. Check the dashboard/kubernetes version compatibility matrix at:
https://github.com/kubernetes/dashboard/releases
- Deploy directly by applying the yaml file:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
- Check the dashboard's running state; it is deployed as Deployments, with 2 pods and 2 services:
kubectl -n kubernetes-dashboard get pods
kubectl -n kubernetes-dashboard get svc
- Access the dashboard
As a demo, expose the dashboard service outside the cluster via NodePort, here on port 30443 (you can pick another port):
kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard -p '{"spec":{"type":"NodePort","ports":[{"port":443,"targetPort":8443,"nodePort":30443}]}}'
- Check the exposed service; its type has been changed to NodePort:
kubectl -n kubernetes-dashboard get svc
- Or download the yaml file and edit the Service section by hand
$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
# edit the Service section
$ more recommended.yaml
...
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443
  selector:
    k8s-app: kubernetes-dashboard
---
...
- Apply the updated configuration
kubectl apply -f recommended.yaml
- Log in to the dashboard
Access the dashboard in a browser:
https://<any_node_ip>:30443
The Dashboard supports two authentication methods, Kubeconfig and Token; here we log in with a Token.
Official reference: https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md
Create dashboard-adminuser.yaml:
cat > dashboard-adminuser.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
EOF
- Create the login user
kubectl apply -f dashboard-adminuser.yaml
Note: the above creates a ServiceAccount named admin-user in the kubernetes-dashboard namespace and binds the cluster-admin ClusterRole to it, giving the admin-user account administrator privileges. kubeadm creates the cluster-admin role by default when it sets up the cluster, so we simply bind to it.
- Get the admin-user account's token
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
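Alternatively, a sketch that prints only the token string for easier copying:
# decode just the token field of the admin-user secret
kubectl -n kubernetes-dashboard get secret \
  $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}') \
  -o jsonpath='{.data.token}' | base64 --decode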
Copy the token you obtained into the Token input box on the login page to log in to the dashboard.