DevOps on a K8s Cluster

DevOps

DevOps is a collective term for a set of processes, methods, and systems. It uses tooling to automate deployment, making deployments repeatable and reducing the chance of deployment errors. With the rise of microservices and middle-platform architectures, DevOps has become increasingly important. This article uses K8s as the foundation and deploys Gitlab + Jenkins + K8s to build out a complete DevOps CI/CD pipeline.

K8s Deployment

Deploy the cluster with kubeadm.
This article uses a layout of 1 master and 2 nodes; all three machines need a recent version of Docker-CE installed.

1. Before you begin

Host     OS Version                              Docker Version
master   CentOS Linux release 7.6.1810 (Core)    20.10.5
node1    CentOS Linux release 7.6.1810 (Core)    20.10.5
node2    CentOS Linux release 7.6.1810 (Core)    20.10.5

  • A compatible Linux host. The Kubernetes project provides generic instructions for Debian- and Red Hat-based Linux distributions, as well as for distributions without a package manager
  • 2 GB or more of RAM per machine (any less leaves little room for your applications)
  • 2 or more CPU cores
  • Full network connectivity among all machines in the cluster (a public or a private network both work)
  • Unique hostname, MAC address, and product_uuid on every node (check commands follow this list)
  • Swap disabled, so that the kubelet functions properly
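
These uniqueness constraints are easy to verify; a minimal set of commands for CentOS 7 (standard paths, nothing cluster-specific assumed):

# Values that must be unique across all nodes
cat /sys/class/dmi/id/product_uuid   # product_uuid
ip link                              # MAC addresses of the network interfaces
hostname                             # hostname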

Check the required ports

Control-plane node

Protocol   Direction   Port Range    Purpose                   Used By
TCP        Inbound     6443          Kubernetes API server     All
TCP        Inbound     2379~2380     etcd server client API    kube-apiserver, etcd
TCP        Inbound     10250         Kubelet API               kubelet itself, control plane components
TCP        Inbound     10251         kube-scheduler            kube-scheduler itself
TCP        Inbound     10252         kube-controller-manager   kube-controller-manager itself

Worker nodes

Protocol   Direction   Port Range    Purpose             Used By
TCP        Inbound     10250         Kubelet API         kubelet itself, control plane components
TCP        Inbound     30000~32767   NodePort Services   All
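
To verify that a required port is actually reachable, you can probe it with netcat (assuming nc is installed); for example, checking the API server port on the master IP used later in this article:

nc -v 10.12.1.100 6443   # a "Connected" message means the port is reachable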

Change the hostname
Changing the hostname takes three steps (a combined sketch follows the list):

  1. Run hostnamectl set-hostname ${name}.domainname
  2. Edit /etc/sysconfig/network and set HOSTNAME=${name}.domainname
  3. Edit the local resolver file /etc/hosts so that applications on this machine can resolve the new hostname:
    add the line xxx.xxx.xxx.xxx ${name}.domainname ${name}
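
Putting the three steps together as a minimal sketch; the name node1, the domain cluster.local, and the IP below are all illustrative placeholders:

hostnamectl set-hostname node1.cluster.local
echo "HOSTNAME=node1.cluster.local" >> /etc/sysconfig/network
echo "10.12.1.101 node1.cluster.local node1" >> /etc/hosts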

2. Install the master and worker nodes with kubeadm (1.18.20)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

# Set SELinux in permissive mode (effectively disabling it)
setenforce 0 
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
## The master node needs kubectl; worker nodes do not
yum install -y kubelet-1.18.20 kubeadm-1.18.20 kubectl-1.18.20 --disableexcludes=kubernetes
systemctl enable --now kubelet
## Commands for the worker nodes
yum install -y kubelet-1.18.20 kubeadm-1.18.20 --disableexcludes=kubernetes
systemctl enable --now kubelet
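
To confirm the pinned version actually got installed before proceeding:

kubeadm version -o short   # should print v1.18.20
kubelet --version          # should print Kubernetes v1.18.20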

3. Set up iptables bridge parameters

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
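
On CentOS 7 the br_netfilter kernel module may not be loaded, and without it the two bridge sysctl keys above do not exist; a quick check-and-load sketch:

modprobe br_netfilter                        # load the bridge netfilter module
lsmod | grep br_netfilter                    # confirm it is loaded
sysctl net.bridge.bridge-nf-call-iptables    # should now report "= 1"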

4. Disable the firewall

systemctl stop firewalld
systemctl disable firewalld
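
Disabling firewalld entirely is the simplest option for a lab cluster. If the firewall has to stay on, an alternative sketch is to open only the ports from the tables above:

firewall-cmd --permanent --add-port=6443/tcp          # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp     # etcd
firewall-cmd --permanent --add-port=10250-10252/tcp   # kubelet, scheduler, controller-manager
firewall-cmd --permanent --add-port=30000-32767/tcp   # NodePort services
firewall-cmd --reload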

5. Set the cgroup driver to systemd

cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl enable docker
systemctl restart docker
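
After the restart, confirm Docker picked up the new driver:

docker info | grep -i 'cgroup driver'   # should report: Cgroup Driver: systemd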

6. Disable the swap partition

swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
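
kubeadm's preflight checks fail if any swap is still active, so it is worth verifying:

swapon --show   # should print nothing
free -m         # the Swap line should read all zeros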

7. Set up name resolution

vim /etc/hosts
## Add the cluster's hosts entries
xxx.xxx.xxx.xxx master
xxx.xxx.xxx.xxx node1
xxx.xxx.xxx.xxx node2
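
A quick check that the new entries resolve (getent uses the same lookup path as ordinary applications):

getent hosts node1   # should print the IP configured above
ping -c 1 node2      # should reach the node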

8. Initialize the master

[root@m-100 ~]# kubeadm init \
        --cert-dir /etc/kubernetes/pki \
        --image-repository registry.aliyuncs.com/google_containers \
        --pod-network-cidr 10.11.0.0/16 \
        --service-cidr 10.20.0.0/16
W0217 10:01:19.304156   10476 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0217 10:01:19.304321   10476 version.go:102] falling back to the local client version: v1.17.3
W0217 10:01:19.304755   10476 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0217 10:01:19.304772   10476 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.3
[preflight] Running pre-flight checks
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [m-100 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.12.1.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [m-100 localhost] and IPs [10.12.1.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [m-100 localhost] and IPs [10.12.1.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0217 10:01:24.604930   10476 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0217 10:01:24.606403   10476 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 35.003335 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node m-100 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node m-100 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 0aizeu.ji5y0dooy8g658n1
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
 
Your Kubernetes control-plane has initialized successfully!
 
To start using your cluster, you need to run the following as a regular user:
 
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
 
Then you can join any number of worker nodes by running the following on each as root:
 
kubeadm join 10.12.1.100:6443 --token 0aizeu.ji5y0dooy8g658n1 \
    --discovery-token-ca-cert-hash sha256:1bc30ca4b0c09582bf0537ca2f516ae2c510becd5bdefe4ec866f9201f3519a5
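
The bootstrap token in the join command above expires after 24 hours by default; if it is lost or expired, a fresh join command can be generated on the master:

kubeadm token create --print-join-command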

9. Run kubeadm join on each worker node

kubeadm join 10.12.1.100:6443 --token 0aizeu.ji5y0dooy8g658n1 \
    --discovery-token-ca-cert-hash sha256:1bc30ca4b0c09582bf0537ca2f516ae2c510becd5bdefe4ec866f9201f3519a5
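
Once both nodes have joined, confirm their registration from the master:

kubectl get nodes   # node1 and node2 appear; they turn Ready after the network plugin is installed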

10. Configure kubectl

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
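
Alternatively, when working as root, pointing KUBECONFIG at the admin config avoids the copy:

export KUBECONFIG=/etc/kubernetes/admin.conf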

11. Verify the cluster on the master

kubectl get po -A


The coredns pods are stuck in Pending because no network plugin has been installed yet.
Install the Calico plugin into the cluster:

kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml

Note the CALICO_IPV4POOL_CIDR parameter in calico.yaml: it must not conflict with your company network, and it should match the --pod-network-cidr passed to kubeadm init.
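
A sketch of adjusting the pool before applying the manifest; this assumes the default pool in the v3.8 manifest is 192.168.0.0/16, which is replaced with the --pod-network-cidr used during kubeadm init (10.11.0.0/16 here):

curl -sO https://docs.projectcalico.org/v3.8/manifests/calico.yaml
# Point the Calico IP pool at this cluster's pod CIDR
sed -i 's#192.168.0.0/16#10.11.0.0/16#' calico.yaml
kubectl apply -f calico.yaml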

Once the installation finishes, all the CoreDNS pods come up as Running.
