I. Basic Environment Preparation

1. Lab environment setup

The setup uses 1 master, 2 nodes, and 1 harbor host, plus 1 machine running koolshare so the official images can be pulled.

Host        IP
master      192.168.17.7
harbor      192.168.17.8
node1       192.168.17.9
node2       192.168.17.10
koolshare   192.168.17.11

2. Prerequisites for enabling IPVS in kube-proxy

# Load the bridge netfilter module
modprobe br_netfilter

# Register the IPVS-related modules so they are loaded on boot
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
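Note (not from the original write-up, but a known kernel change): on kernels 4.19 and newer, nf_conntrack_ipv4 was merged into nf_conntrack, so if the modprobe above fails, load the renamed module instead:

# On kernel >= 4.19 the IPv4 conntrack module no longer exists under its old name
modprobe -- nf_conntrack
lsmod | grep -e ip_vs -e nf_conntrack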

3. Install Docker (the Kubernetes version used below is v1.15.1, whose newest supported Docker version is 18.09, so a specific version must be installed)

Aliyun mirror site: https://developer.aliyun.com/mirror/docker-ce?spm=a2c6h.13651102.0.0.3e221b11DdOf5g

# Install the required system tools
yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the package repository
yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install a specific Docker version
# Note:
# The official repo enables only the latest stable packages by default; other channels
# can be obtained by editing the repo file. For example, the test channel is disabled
# by default and can be enabled as follows (the same works for the other channels):
# vim /etc/yum.repos.d/docker-ce.repo
#   Under [docker-ce-test], change enabled=0 to enabled=1
#
# Install a specific version of Docker-CE:
# Step 1: list the available Docker-CE versions:
yum list docker-ce.x86_64 --showduplicates | sort -r
  Loaded plugins: branch, fastestmirror, langpacks
  Loading mirror speeds from cached hostfile
  docker-ce.x86_64            17.03.1.ce-1.el7.centos            docker-ce-stable
  docker-ce.x86_64            17.03.1.ce-1.el7.centos            @docker-ce-stable
  docker-ce.x86_64            17.03.0.ce-1.el7.centos            docker-ce-stable
  Available Packages

# Step 2: install the chosen version (VERSION looks like 17.03.0.ce-1.el7.centos above):
yum -y install docker-ce-18.09.9-3.el7
systemctl start docker
systemctl enable docker


## Create the /etc/docker directory
mkdir /etc/docker

# Configure the daemon (this file holds Docker's configuration).
# The insecure-registries entry declares our planned Harbor domain as a trusted
# registry address when Harbor is created; replace it with your own domain.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "insecure-registries": ["https://hub.hly.top"]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d

Addendum:
Aliyun Docker registry mirror configuration:
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": [
    "https://1nj0zren.mirror.aliyuncs.com",
    "https://docker.mirrors.ustc.edu.cn",
    "http://f1361db2.m.daocloud.io",
    "https://registry.docker-cn.com"
  ]
}
EOF

systemctl daemon-reload
sudo systemctl restart docker
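Note that the tee above overwrites /etc/docker/daemon.json, wiping out the cgroup/log/registry settings written earlier. If both are wanted, write a single merged file instead; a sketch combining the options from this section:

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "insecure-registries": ["https://hub.hly.top"],
  "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"]
}
EOF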


# Restart the docker service
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
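To confirm the pinned version and the systemd cgroup driver actually took effect, a quick check:

docker --version                       # expect 18.09.9
docker info | grep -i "cgroup driver"  # expect: Cgroup Driver: systemd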

4. Install the koolshare soft router (skip this step if you can pull the official images directly)

CentOS installation is not covered again; below are the steps for the koolshare VM:

4.1 Choose custom installation


4.2 Select Windows as the guest operating system


4.3 Installation location


4.4 Select BIOS as the boot firmware


4.5 Memory and CPU

2 CPU cores and 4 GB of memory are sufficient; adjust to what is actually available.


4.6 The disk must use IDE mode; other modes cause errors


4.7 Boot the 老毛桃 (LaoMaoTao) PE environment to perform the installation


4.8 Swap the CD-ROM source for the koolshare installation source and install koolshare


Disconnect the installation image, shut down, and reduce the VM resources:


4.9 Add a host-only adapter to the VM, remove DHCP from the host-only virtual network, and on the physical host configure the two host-only network connections to use the gateway.

  


4.10 Connect from a browser; the password is koolshare


4.11 koolshare settings


After the setup completes, log in at 192.168.66.1 (password koolshare) within 30 seconds.

4.12 koolshare network design mode


4.13 Install the offline SSR package in koolshare and fill in the relay server address to get internet access


4.14 Add the SSR node information (configure this part yourself)


II. kubeadm Cluster Installation

1. Note: if koolshare is set up, point the CentOS VMs' network gateway at the koolshare address.

2. Install kubeadm, kubectl and kubelet

2.1 Configure the Kubernetes yum repository

Aliyun mirror site: https://developer.aliyun.com/mirror/kubernetes?spm=a2c6h.13651102.0.0.3e221b11DdOf5g

yum repository:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install on the master and on both nodes:
yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1
systemctl enable kubelet && systemctl start kubelet
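A quick sanity check, on every machine, that the pinned 1.15.1 packages landed:

kubeadm version -o short    # expect v1.15.1
kubelet --version           # expect Kubernetes v1.15.1
kubectl version --client --short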

3. Initialize the master node

Because of network restrictions, Google's base images cannot be downloaded during initialization, so pre-downloaded archives are used instead. Copy the package onto the master host (the required images are shown below):

[screenshot: list of the kubeadm dependency image archives]

Batch-import the images with a script:

# cat install.sh
#!/bin/bash
# Load every image archive under /usr/local/kubeadm-basic.images into Docker
ls /usr/local/kubeadm-basic.images > /tmp/image-list.txt
cd /usr/local/kubeadm-basic.images
for i in $(cat /tmp/image-list.txt)
do
    docker load -i $i
done

echo "images imported successfully..."
rm -rf /tmp/image-list.txt
bash install.sh

Copy the files to the node machines and import the images there as well:

# scp -r kubeadm-basic.images/ install.sh 192.168.17.9:/usr/local
# scp -r kubeadm-basic.images/ install.sh 192.168.17.10:/usr/local
bash install.sh
# Dump kubeadm's default init configuration into kubeadm-config.yaml
kubeadm config print init-defaults > kubeadm-config.yaml 
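If the machine can reach a registry (for example through koolshare), the images this config needs can also be listed and pre-pulled instead of being imported from archives; a sketch:

# List the images kubeadm init would use for this configuration
kubeadm config images list --config kubeadm-config.yaml
# Optionally pre-pull them (requires registry access)
kubeadm config images pull --config kubeadm-config.yaml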

The edited file is shown below (comments mark the changes):
# cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.17.7 # change to this machine's address
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.1 # change to the version being installed
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16" # must match the subnet in the flannel manifest
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
# Append the following so kube-proxy uses the IPVS scheduling mode
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
# Initialize the master node; certificates are issued automatically and the log is written to kubeadm-init.log
kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log

# Output like the following:
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.17.7:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:8c222e7e0f8664ccb4f171ca2559f75e0d24f2612361dda1432425f6486766a1
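To confirm kube-proxy really came up in IPVS mode, check its logs or the kernel's virtual-server table (the ipvsadm tool is installed separately); a sketch:

# The kube-proxy log should mention an ipvs proxier
kubectl logs -n kube-system -l k8s-app=kube-proxy | grep -i ipvs
# Or inspect the IPVS rules directly
yum -y install ipvsadm && ipvsadm -Ln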

4. Set up the master node and join the remaining worker nodes

Check the nodes:

[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   NotReady   master   6m12s   v1.15.1

Run the commands printed in the installation log:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Create a directory and move the important files into it; the resulting layout:

[root@k8s-master01 ~]# tree
.
├── install-k8s
│   ├── flannel
│   │   └── kube-flannel.yml
│   ├── kubeadm-config.yaml
│   └── kubeadm-init.log
└── kubernetes.conf
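A sketch of the commands that produce this layout (kubernetes.conf is assumed to be a pre-existing file from system preparation not shown in this section; the flannel manifest is downloaded in the next step):

mkdir -p install-k8s/flannel
mv kubeadm-config.yaml kubeadm-init.log install-k8s/
mv kube-flannel.yml install-k8s/flannel/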

5. Deploy the network

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

kubectl apply -f kube-flannel.yml

The flannel manifest may fail to download for network reasons, so the full file is reproduced below:

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

Check the flannel status:

# kubectl get pods -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-2b4qm               1/1     Running   0          13h
coredns-5c98db65d4-x85s5               1/1     Running   0          13h
etcd-k8s-master01                      1/1     Running   0          13h
kube-apiserver-k8s-master01            1/1     Running   0          13h
kube-controller-manager-k8s-master01   1/1     Running   0          13h
kube-flannel-ds-amd64-68hn9            1/1     Running   0          17m
kube-flannel-ds-amd64-7g6hs            1/1     Running   0          17m
kube-flannel-ds-amd64-wt4r7            1/1     Running   0          17m
kube-proxy-c9pkl                       1/1     Running   0          11h
kube-proxy-m9d2b                       1/1     Running   1          11h
kube-proxy-xmssj                       1/1     Running   0          13h
kube-scheduler-k8s-master01            1/1     Running   0          13h

6. Add the worker nodes

Run on the other two node machines:
kubeadm join 192.168.17.7:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:8c222e7e0f8664ccb4f171ca2559f75e0d24f2612361dda1432425f6486766a1
[root@k8s-master01 flannel]# kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   13h   v1.15.1
k8s-node01     Ready    <none>   11h   v1.15.1
k8s-node02     Ready    <none>   11h   v1.15.1
[root@k8s-master01 flannel]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}

Continuously watch the pod status:

[root@k8s-master01 flannel]#  kubectl get pod -n kube-system -o wide -w

7. Build the Harbor private registry

Work on the 192.168.17.8 host. Domain: https://hub.hly.top

[root@k8s-harbor ~]# mkdir /usr/local/harbor
[root@k8s-harbor ~]# tar xf harbor-offline-installer-v1.9.4.tgz -C /usr/local/harbor/
  • Create the HTTPS certificate files (generated with mkcert)

mkcert usage guide link

[root@k8s-harbor bin]# mkcert hub.hly.top 192.168.17.8
Using the local CA at "/root/.local/share/mkcert" ✨

Created a new certificate valid for the following names 
 - "hub.hly.top"
 - "192.168.17.8"

The certificate is at "./hub.hly.top+1.pem" and the key at "./hub.hly.top+1-key.pem" ✅
[root@k8s-harbor bin]# mv hub.hly.top+1.pem hub.hly.top+1-key.pem /usr/local/harbor/harbor/ssl
  • Add a hosts-file entry on the local (Windows) machine
C:\Windows\System32\drivers\etc\hosts
192.168.17.8  hub.hly.top
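The cluster machines also need to resolve the Harbor domain; on each master and node an entry can be appended to /etc/hosts, for example:

echo "192.168.17.8  hub.hly.top" >> /etc/hosts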
  • Modify the Harbor configuration file (the v1.2.0 format is used here; adjust for other versions):
[root@k8s-harbor harbor]# cat harbor.cfg |grep -v "#"|grep -v "^$"
hostname = hub.hly.top
ui_url_protocol = https
db_password = root123
max_job_workers = 3 
customize_crt = on
ssl_cert = /usr/local/harbor/ssl/hub.hly.top+1.pem
ssl_cert_key = /usr/local/harbor/ssl/hub.hly.top+1-key.pem
secretkey_path = /data
admiral_url = NA
clair_db_password = root321
email_identity = 
email_server = smtp.mydomain.com
email_server_port = 25
email_username = sample_admin@mydomain.com
email_password = abc
email_from = admin <sample_admin@mydomain.com>
email_ssl = false
harbor_admin_password = Harbor12345
auth_mode = db_auth
ldap_url = ldaps://ldap.mydomain.com
ldap_basedn = ou=people,dc=mydomain,dc=com
ldap_uid = uid 
ldap_scope = 3 
ldap_timeout = 5
self_registration = on
token_expiration = 30
project_creation_restriction = everyone
verify_remote_cert = on
Install docker-compose:
curl -L https://github.com/docker/compose/releases/download/1.25.4/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
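Verify the binary works:

docker-compose --version    # e.g. docker-compose version 1.25.4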

Put the Harbor installation package under /usr/local/harbor, unpack it, and run the installer:
[root@k8s-harbor harbor]# pwd
/usr/local/harbor/
[root@k8s-harbor harbor]# ./install.sh

Output like the following indicates a successful installation:


✔ ----Harbor has been installed and started successfully.----

Now you should be able to visit the admin portal at https://hub.hly.top. 
For more details, please visit https://github.com/vmware/harbor .
  • The browser login page looks like this:

Username: admin  Password: Harbor12345

[screenshot: Harbor login page]

  • Add a repository:

[screenshot: adding a repository in Harbor]

  • Image push format, as sketched below:
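A sketch of the push workflow (library is Harbor's default public project; substitute your own project name):

# Log in to the private registry (admin / Harbor12345)
docker login hub.hly.top
# Tag a local image with the registry/project prefix, then push it
docker tag nginx:latest hub.hly.top/library/nginx:v1
docker push hub.hly.top/library/nginx:v1
# Pull it back on any node that trusts the registry
docker pull hub.hly.top/library/nginx:v1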

8. Harbor features

a. Role-based access control: users and repositories are organized by project, and a user can hold different permissions per project;
b. Policy-based image replication: images can be replicated between multiple Harbor instances;
c. LDAP support: Harbor can authorize against existing LDAP users;
d. Image deletion & garbage collection: images can be deleted and the space they occupy reclaimed; most user operations are exposed via an API, which makes the system easy to extend;
e. Graphical UI: image repositories can be browsed and searched easily, and projects managed;
f. Easy deployment: Harbor provides online and offline installers, as well as a virtual-appliance installation;
g. Relationship between Harbor and docker registry: Harbor essentially wraps docker registry and extends it with its own business modules.


9. Harbor authentication process

a. The docker daemon pulls an image from the docker registry;
b. If the docker registry requires authorization, it returns a 401 Unauthorized response, which also tells the docker client how to authenticate;
c. Based on the information returned by the registry, the docker client sends a request to the auth server to obtain a token;
d. The auth server validates the submitted information according to its own business rules;
e. The user data store returns the relevant user information;
f. The auth server generates a token from the queried user information. After this completes, the user can perform pull/push operations, with the credential carried in every request.
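This exchange can be observed with curl; a sketch against our Harbor instance (the realm and service values in step 2 are illustrative; read the real ones from the Www-Authenticate header returned in step 1):

# 1) Unauthenticated probe: the registry answers 401 plus a Www-Authenticate header
curl -ik https://hub.hly.top/v2/
# 2) Ask the auth service for a token using basic auth
curl -sk -u admin:Harbor12345 \
  "https://hub.hly.top/service/token?service=harbor-registry&scope=repository:library/nginx:pull"
# 3) Repeat the registry request carrying the returned token
curl -ik -H "Authorization: Bearer <token>" https://hub.hly.top/v2/library/nginx/tags/list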
  • The overall Harbor architecture:

[diagram: Harbor overall architecture]

10. Harbor authentication flow

a. A request is first intercepted by the listening proxy container and redirected to the configured authentication service;
b. If the authentication service has authorization configured, it returns 401, telling the docker client to carry a valid token on the relevant requests; the authentication endpoint points at the core services in the architecture;
c. When the docker client receives the error code, it sends an authentication request (carrying username and password) to core services for basic auth;
d. When that request reaches nginx, nginx forwards the request with the username and password to core services according to the configured authentication address;
e. Core services validates the username and password (against its own database or an LDAP backend) and, on success, returns the authentication information.

[diagram: Harbor authentication flow]