Reference: the Chinese official site (see also the [English official site]) plugin docs

Prerequisites

All of my nodes run CentOS 7.6 / 7.7 or 7.8
All of my nodes have at least 2 CPU cores and at least 4 GB of memory
No node's hostname is localhost, and none contains underscores, dots, or uppercase letters
Every node has a fixed internal IP address
Every node has only one network interface; if needed for a special purpose, extra NICs can be added after the K8S installation is complete
The IP addresses used by kubelet on all nodes can reach each other directly (no NAT required), with no firewall or security-group isolation between them
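
A quick sanity check against the list above (a sketch, not exhaustive; run it on every node):

# Expect >= 2 cores and >= 4G total memory
nproc
free -h
# Must not be localhost, and must not contain '_', '.', or uppercase letters
hostname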

1、Edit the local /etc/hosts file
#Append (>>) the following to /etc/hosts

cat <<EOF >> /etc/hosts
172.26.48.4    k8s-master
172.26.48.5    k8s-node1
172.26.135.94  k8s-node2
EOF
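
To confirm the new entries resolve, you can ping each alias once:

ping -c 1 k8s-master
ping -c 1 k8s-node1
ping -c 1 k8s-node2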

2、Configure the Aliyun mirror repo on CentOS 7
#Write (>) the following to /etc/yum.repos.d/kubernetes.repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Note: if the repo above fails (for example on GPG key verification), fall back to the same repo with GPG checking disabled:

cat <<LEO > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
LEO

3、Disable SELinux, so that containers are allowed to access the host filesystem.

setenforce 0

setenforce: SELinux is disabled

systemctl daemon-reload
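
Note that setenforce 0 only lasts until the next reboot. To persist the change, you can also switch SELinux to permissive in its config file (a sketch assuming the default SELINUX=enforcing line):

# Make SELinux permissive permanently
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config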

4、Enable bridged network traffic filtering; this applies to RHEL/CentOS 7 systems only

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
sysctl --system
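
If sysctl reports "No such file or directory" for the bridge keys, the br_netfilter kernel module is probably not loaded yet; loading it first and re-applying should fix that:

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf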

5、Disable swap: leaving it on causes problems when configuring either worker nodes or the master

swapoff -a
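
swapoff -a only disables swap until the next reboot. To keep it off permanently, comment out the swap entry in /etc/fstab (the same command appears again in the appendix at the end of this article):

sed -ri 's/.*swap.*/#&/' /etc/fstab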

6、Install ebtables and ethtool; otherwise kubeadm init will fail later

yum install ebtables ethtool -y

#Then update the current kernel state. Note: this file only exists after Docker has been installed successfully

echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables

7、Install kubelet, kubeadm and kubectl

yum install -y kubelet kubeadm kubectl
  • 7.1、To install a specific version, run:
    yum install -y kubelet-1.14.2 kubeadm-1.14.2 kubectl-1.14.2
  • 7.2、If the versioned install fails, run the following instead (for 1.13 and earlier, flannel v0.10 is the recommended network plugin):
    yum install -y kubelet-1.13.0 kubeadm-1.13.0 kubectl-1.13.0 kubernetes-cni-0.6.0
# View kubelet logs on a problem node
[root@k8s-master deploy]# journalctl -f -u kubelet

Enable and start kubelet

systemctl enable kubelet && systemctl start kubelet
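
To confirm the installation succeeded and check which versions were actually pulled in:

kubeadm version
kubectl version --client
kubelet --version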

8、Image preparation
The kubernetes services depend on many images hosted on k8s.gcr.io, which cannot be downloaded from mainland China without a proxy. Instead, we can pull equivalent images of the required versions from Docker Hub and then rename them to the expected names with docker tag.

kubeadm config images list
I0524 22:03:10.774681   19610 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0524 22:03:10.774766   19610 version.go:97] falling back to the local client version: v1.14.2
k8s.gcr.io/kube-apiserver:v1.14.2
k8s.gcr.io/kube-controller-manager:v1.14.2
k8s.gcr.io/kube-scheduler:v1.14.2
k8s.gcr.io/kube-proxy:v1.14.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

9、Create a file setup_image.sh containing the script below to pull the images in bulk and retag them to match Google's k8s image names

#!/bin/bash
# Array of the images to fetch
images=(
    kube-apiserver:v1.14.2
    kube-controller-manager:v1.14.2
    kube-scheduler:v1.14.2
    kube-proxy:v1.14.2
    pause:3.1
    etcd:3.3.10
)
# Loop over the list, pulling each image from Docker Hub (https://hub.docker.com)
for img in "${images[@]}";
do
    # Pull the image from the mirror account
    docker pull mirrorgooglecontainers/$img
    # Retag it with the name kubeadm expects
    docker tag  mirrorgooglecontainers/$img k8s.gcr.io/$img
    # Remove the original tag
    docker rmi  mirrorgooglecontainers/$img
    # Print a separator between images
    echo '================'
done

# One image is not available under that Docker Hub account; pull it from a different repository
docker pull coredns/coredns:1.3.1
docker tag  coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker rmi  coredns/coredns:1.3.1
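
A usage sketch: run the script, then verify the retagged images are present locally under the k8s.gcr.io names:

bash setup_image.sh
docker images | grep k8s.gcr.io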

10、Common kubeadm commands

# Initialize a Kubernetes control-plane (master) node
[root@k8s-master deploy]# kubeadm init
# Start a Kubernetes worker node and join it to the cluster
[root@k8s-master deploy]# kubeadm join
# Upgrade a Kubernetes cluster to a newer version
[root@k8s-master deploy]# kubeadm upgrade
# If the cluster was initialized with kubeadm v1.7.x or earlier, some configuration is needed before kubeadm upgrade can be used
[root@k8s-master deploy]# kubeadm config
# Manage the tokens used by kubeadm join
[root@k8s-master deploy]# kubeadm token
# Regenerate a join token and print the full join command
[root@k8s-master deploy]# kubeadm token create --print-join-command
# List the tokens that have not yet expired
[root@k8s-master deploy]# kubeadm token list
# Revert any changes kubeadm init or kubeadm join made to the host
[root@k8s-master deploy]# kubeadm reset
# List all pods
[root@k8s-master deploy]# kubectl get pod -A -o wide
# List all nodes
[root@k8s-master deploy]# kubectl get nodes -o wide
# View kubelet logs on a problem node
[root@k8s-master deploy]# journalctl -f -u kubelet
# List namespaces
[root@k8s-master deploy]# kubectl get namespace
# Check cluster status (here: a TiDB PD cluster, via pd-ctl)
/home/tidb/tidb-ansible/resources/bin/pd-ctl -u "http://192.168.1.34:2379" -d store
/home/tidb/tidb-ansible/resources/bin/pd-ctl -u "http://192.168.1.48:2379" -d member

11、Initialize the master (the normal case)
kubeadm init --pod-network-cidr=<pod network CIDR> --kubernetes-version=<k8s version>

kubeadm init  --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.14.2
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.26.48.4:6443 --token yx9yza.rcb08m1giup70y63 \
    --discovery-token-ca-cert-hash sha256:f6548aa3508014ac5dab129231b54f5085f37fe8e6fc5d362f787be70a1a8a6e
[root@k8s-master deploy]#

Note: the commands below give the current user the credentials needed to run kubectl; if another user needs access, run the same commands again as that user

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

A user without these credentials will see the following error when running kubectl:

Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
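
Alternatively, the root user can skip the copy and point kubectl directly at the admin kubeconfig (a shortcut also printed by newer kubeadm versions; it only lasts for the current shell):

export KUBECONFIG=/etc/kubernetes/admin.conf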

11.1、Errors that can occur while initializing the master

[root@alimaster k8s]# kubeadm init  --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.14.2
[init] Using Kubernetes version: v1.14.2
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.0-beta5. Latest validated version: 18.09
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

The fatal error above means the machine has too few CPU cores; for a test cluster you can skip the check:

kubeadm init  --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.14.2 --ignore-preflight-errors=NumCPU

12、With the master initialized, run kubectl get node to view the cluster's nodes. You may find that no node information appears and you instead get this error:

[root@k8s-master deploy]# kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?

13、The error above occurs because the step shown in the kubeadm init output was skipped:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master deploy]# kubectl get node
NAME         STATUS     ROLES    AGE    VERSION
k8s-master   NotReady   master   9m8s   v1.14.2
[root@k8s-master deploy]#
[root@k8s-master deploy]# kubectl get pod -A
NAMESPACE     NAME                      READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-8xbcf   0/1     Pending   0          10s
kube-system   coredns-fb8b8dccf-ztxxg   0/1     Pending   0          10s
kube-system   kube-proxy-kcvph          1/1     Running   0          9s
[root@k8s-master deploy]#

14、Install a pod network add-on
kubernetes offers a choice of network add-ons: Calico, Canal, Flannel, Kube-router, Romana and Weave Net; see (3/4) Installing a pod network for the details. Here we pick Flannel.
Note: for Flannel to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 to kubeadm init. Flannel works on amd64, arm, arm64 and ppc64le, but on any platform other than amd64 you must download the manifest manually and replace amd64 with your platform.
[Official docs] The Calico network plugin can also be used
flannel-rbac

14.1、*Install the Calico network plugin (recommended)
calico-3.13.1.yaml (the full file is attached at the end of this article)

wget https://kuboard.cn/install-script/calico/calico-3.13.1.yaml
kubectl apply -f calico-3.13.1.yaml
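
To watch the Calico pods come up (Ctrl-C to stop watching) and confirm the master switches to Ready once the network is up:

kubectl get pods -n kube-system -w
kubectl get nodes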

Or use the Flannel network instead (flannel repository)
You can specify whichever network add-on you prefer
1、If your k8s is version 1.13, install flannel with the following commands
The docker pull below pins image version v0.10.0 while kubectl apply uses the .yml manifests from the flannel v0.11.0 tag, because those manifests reference the flannel v0.10.0 image. Note: on k8s 1.13, applying the manifests from flannel/v0.10.0 does not take effect

docker pull quay.io/coreos/flannel:v0.10.0-amd64
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.11.0/Documentation/k8s-manifests/kube-flannel-rbac.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.11.0/Documentation/kube-flannel.yml

2、For k8s 1.14, the following commands work

# Show the current distribution release info
[root@k8s-master deploy]# lsb_release -a
LSB Version:    :core-4.1-amd64:core-4.1-noarch
Distributor ID: CentOS
Description:    CentOS Linux release 7.5.1804 (Core)
Release:        7.5.1804
Codename:       Core
[root@k8s-master deploy]#
[root@k8s-master deploy]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
[root@k8s-master deploy]#
[root@k8s-master deploy]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@k8s-master deploy]#
# Wait a little while; when you check again, everything shows Running
[root@k8s-master deploy]# kubectl get pod -A
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-8xbcf              1/1     Running   0          2m8s
kube-system   coredns-fb8b8dccf-ztxxg              1/1     Running   0          2m8s
kube-system   etcd-k8s-master                      1/1     Running   0          81s
kube-system   kube-apiserver-k8s-master            1/1     Running   0          81s
kube-system   kube-controller-manager-k8s-master   1/1     Running   0          74s
kube-system   kube-flannel-ds-amd64-hk4wt          1/1     Running   0          51s
kube-system   kube-proxy-kcvph                     1/1     Running   0          2m7s
kube-system   kube-scheduler-k8s-master            1/1     Running   0          69s
[root@k8s-master deploy]#

15、As a final step, disable the kubernetes repo
Change enabled=1 to 0 (0 disables the repo, 1 enables it)

/etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=0
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
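
Equivalently, as a one-liner (assuming the file still matches the layout above):

sed -i 's/^enabled=1/enabled=0/' /etc/yum.repos.d/kubernetes.repo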

Tear-down

To undo what kubeadm did, you should first drain the node and make sure it is empty before shutting it down.
Communicating with the control-plane node using the appropriate credentials, run:

Drain the node
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
Uncordon (restore) the node
kubectl uncordon  <node name>
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

Then, on the node being removed, reset all the state kubeadm installed:

kubeadm reset

After reset, the following information is printed:

[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

The reset process does not reset or clean up iptables rules or IPVS tables. If you wish to reset iptables, you must do it manually:

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

If you want to reset the IPVS tables, run the following command:

ipvsadm -C

If you want to start over, just run kubeadm init or kubeadm join again with the appropriate arguments.

Removing k8s completely

systemctl stop kubelet
yum remove -y kubelet kubeadm kubectl kubernetes-cni

modprobe -r ipip
#The Linux lsmod command lists the modules currently loaded into the kernel.
lsmod

Module: the module's name. This is usually the module file name minus its extension (.o or .ko), but it may be a custom name specified as an option to insmod when the module was inserted.
Size: the amount of memory the resident module uses, in bytes.
Used by: a number indicating how many instances of the module are in use; if it is zero, the module is currently unused. The text after the number gives whatever information is available about what is using the module: usually a device name, a filesystem identifier, or the name of another module.

rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*
rm -rf /etc/cni
rm -rf /opt/cni
rm -rf /var/lib/etcd
rm -rf /var/etcd

Installing k8s 1.13 again

yum install -y kubelet-1.13.0 kubeadm-1.13.0 kubectl-1.13.0 kubernetes-cni-0.6.0

Installing k8s 1.13 requires rebooting the server



Disable swap (swapoff -a turns swap off; swapon -a turns it back on)

swapoff -a
swapon -a

Permanently disable swap

sed -ri 's/.*swap.*/#&/' /etc/fstab
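
To verify swap stays off after a reboot:

# Swap should report 0B, and the fstab entry should be commented out
free -h
grep swap /etc/fstab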

cat calico-3.13.1.yaml

---
# Source: calico/templates/calico-config.yaml
# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Typha is disabled.
  typha_service_name: "none"
  # Configure the backend to use.
  calico_backend: "bird"

  # Configure the MTU to use
  veth_mtu: "1440"

  # The CNI network configuration to install on each node.  The special
  # values in this config will be automatically populated.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "calico",
          "log_level": "info",
          "datastore_type": "kubernetes",
          "nodename": "__KUBERNETES_NODE_NAME__",
          "mtu": __CNI_MTU__,
          "ipam": {
              "type": "calico-ipam"
          },
          "policy": {
              "type": "k8s"
          },
          "kubernetes": {
              "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        },
        {
          "type": "bandwidth",
          "capabilities": {"bandwidth": true}
        }
      ]
    }

---
# Source: calico/templates/kdd-crds.yaml

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgpconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPConfiguration
    plural: bgpconfigurations
    singular: bgpconfiguration

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgppeers.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPPeer
    plural: bgppeers
    singular: bgppeer

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: blockaffinities.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BlockAffinity
    plural: blockaffinities
    singular: blockaffinity

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterinformations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: ClusterInformation
    plural: clusterinformations
    singular: clusterinformation

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: felixconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: FelixConfiguration
    plural: felixconfigurations
    singular: felixconfiguration

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworkpolicies.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkPolicy
    plural: globalnetworkpolicies
    singular: globalnetworkpolicy

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworksets.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkSet
    plural: globalnetworksets
    singular: globalnetworkset

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: hostendpoints.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: HostEndpoint
    plural: hostendpoints
    singular: hostendpoint

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ipamblocks.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPAMBlock
    plural: ipamblocks
    singular: ipamblock

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ipamconfigs.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPAMConfig
    plural: ipamconfigs
    singular: ipamconfig

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ipamhandles.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPAMHandle
    plural: ipamhandles
    singular: ipamhandle

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ippools.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPPool
    plural: ippools
    singular: ippool

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: networkpolicies.crd.projectcalico.org
spec:
  scope: Namespaced
  group: crd.projectcalico.org
  version: v1
  names:
    kind: NetworkPolicy
    plural: networkpolicies
    singular: networkpolicy

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: networksets.crd.projectcalico.org
spec:
  scope: Namespaced
  group: crd.projectcalico.org
  version: v1
  names:
    kind: NetworkSet
    plural: networksets
    singular: networkset

---
---
# Source: calico/templates/rbac.yaml

# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
rules:
  # Nodes are watched to monitor for deletions.
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - watch
      - list
      - get
  # Pods are queried to check for existence.
  - apiGroups: [""]
    resources:
      - pods
    verbs:
      - get
  # IPAM resources are manipulated when nodes are deleted.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ippools
    verbs:
      - list
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
      - ipamblocks
      - ipamhandles
    verbs:
      - get
      - list
      - create
      - update
      - delete
  # Needs access to update clusterinformations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - clusterinformations
    verbs:
      - get
      - create
      - update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-kube-controllers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-kube-controllers
subjects:
- kind: ServiceAccount
  name: calico-kube-controllers
  namespace: kube-system
---
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: calico-node
rules:
  # The CNI plugin needs to get pods, nodes, and namespaces.
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - endpoints
      - services
    verbs:
      # Used to discover service IPs for advertisement.
      - watch
      - list
      # Used to discover Typhas.
      - get
  # Pod CIDR auto-detection on kubeadm needs access to config maps.
  - apiGroups: [""]
    resources:
      - configmaps
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - nodes/status
    verbs:
      # Needed for clearing NodeNetworkUnavailable flag.
      - patch
      # Calico stores some configuration information in node annotations.
      - update
  # Watch for changes to Kubernetes NetworkPolicies.
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
  # Used by Calico for policy information.
  - apiGroups: [""]
    resources:
      - pods
      - namespaces
      - serviceaccounts
    verbs:
      - list
      - watch
  # The CNI plugin patches pods/status.
  - apiGroups: [""]
    resources:
      - pods/status
    verbs:
      - patch
  # Calico monitors various CRDs for config.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - globalfelixconfigs
      - felixconfigurations
      - bgppeers
      - globalbgpconfigs
      - bgpconfigurations
      - ippools
      - ipamblocks
      - globalnetworkpolicies
      - globalnetworksets
      - networkpolicies
      - networksets
      - clusterinformations
      - hostendpoints
      - blockaffinities
    verbs:
      - get
      - list
      - watch
  # Calico must create and update some CRDs on startup.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ippools
      - felixconfigurations
      - clusterinformations
    verbs:
      - create
      - update
  # Calico stores some configuration information on the node.
  - apiGroups: [""]
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
  # These permissions are only required for upgrade from v2.6, and can
  # be removed after upgrade or on fresh installations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - bgpconfigurations
      - bgppeers
    verbs:
      - create
      - update
  # These permissions are required for Calico CNI to perform IPAM allocations.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
      - ipamblocks
      - ipamhandles
    verbs:
      - get
      - list
      - create
      - update
      - delete
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - ipamconfigs
    verbs:
      - get
  # Block affinities must also be watchable by confd for route aggregation.
  - apiGroups: ["crd.projectcalico.org"]
    resources:
      - blockaffinities
    verbs:
      - watch
  # The Calico IPAM migration needs to get daemonsets. These permissions can be
  # removed if not upgrading from an installation using host-local IPAM.
  - apiGroups: ["apps"]
    resources:
      - daemonsets
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system

---
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        # This, along with the CriticalAddonsOnly toleration below,
        # marks the pod as a critical add-on, ensuring it gets
        # priority scheduling and that its resources are reserved
        # if it ever gets evicted.
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
        # Make sure calico-node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      priorityClassName: system-node-critical
      initContainers:
        # This container performs upgrade from host-local IPAM to calico-ipam.
        # It can be deleted if this is a fresh installation, or if you have already
        # upgraded to use calico-ipam.
        - name: upgrade-ipam
          image: calico/cni:v3.13.1
          command: ["/opt/cni/bin/calico-ipam", "-upgrade"]
          env:
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
          volumeMounts:
            - mountPath: /var/lib/cni/networks
              name: host-local-net-dir
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
          securityContext:
            privileged: true
        # This container installs the CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: calico/cni:v3.13.1
          command: ["/install-cni.sh"]
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
            # Set the hostname based on the k8s node name.
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # CNI MTU Config variable
            - name: CNI_MTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Prevents the container from sleeping forever.
            - name: SLEEP
              value: "false"
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
          securityContext:
            privileged: true
        # Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
        # to communicate with Felix over the Policy Sync API.
        - name: flexvol-driver
          image: calico/pod2daemon-flexvol:v3.13.1
          volumeMounts:
          - name: flexvol-driver-host
            mountPath: /host/driver
          securityContext:
            privileged: true
      containers:
        # Runs calico-node container on each Kubernetes node.  This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: calico/node:v3.13.1
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            # Wait for the datastore.
            - name: WAIT_FOR_DATASTORE
              value: "true"
            # Set based on the k8s node name.
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            exec:
              command:
              - /bin/calico-node
              - -felix-live
              - -bird-live
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
              - /bin/calico-node
              - -felix-ready
              - -bird-ready
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
            - name: policysync
              mountPath: /var/run/nodeagent
      volumes:
        # Used by calico-node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Mount in the directory for host-local IPAM allocations. This is
        # used when upgrading from host-local to calico-ipam, and can be removed
        # if not using the upgrade-ipam init container.
        - name: host-local-net-dir
          hostPath:
            path: /var/lib/cni/networks
        # Used to create per-pod Unix Domain Sockets
        - name: policysync
          hostPath:
            type: DirectoryOrCreate
            path: /var/run/nodeagent
        # Used to install Flex Volume Driver
        - name: flexvol-driver-host
          hostPath:
            type: DirectoryOrCreate
            path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system

---
# Source: calico/templates/calico-kube-controllers.yaml

# See https://github.com/projectcalico/kube-controllers
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
spec:
  # The controllers can only have a single active instance.
  replicas: 1
  selector:
    matchLabels:
      k8s-app: calico-kube-controllers
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: calico-kube-controllers
      priorityClassName: system-cluster-critical
      containers:
        - name: calico-kube-controllers
          image: calico/kube-controllers:v3.13.1
          env:
            # Choose which controllers to run.
            - name: ENABLED_CONTROLLERS
              value: node
            - name: DATASTORE_TYPE
              value: kubernetes
          readinessProbe:
            exec:
              command:
              - /usr/bin/check-status
              - -r

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-kube-controllers
  namespace: kube-system
---
# Source: calico/templates/calico-etcd-secrets.yaml

---
# Source: calico/templates/calico-typha.yaml

---
# Source: calico/templates/configure-canal.yaml