CentOS 7 kubernetes/k8s 1.10 Offline Installation
The test environment uses a single master and two worker nodes. All images are imported manually from offline archives.

Required files (Baidu Pan link)

Link: https://pan.baidu.com/s/12tLNBpmdINkmegqw1eERuA   Extraction code: 5vd4

1. Environment Preparation

Hostname        OS        IP               Spec
k8s-master-1    CentOS7   192.168.1.170    2 cores / 4 GB
k8s-node-1      CentOS7   192.168.1.171    2 cores / 6 GB
k8s-node-2      CentOS7   192.168.1.172    2 cores / 6 GB
1.1 Set a unified time zone and hostnames
timedatectl set-timezone Asia/Shanghai   # run on all nodes
hostnamectl set-hostname k8s-master-1    # run on the master
hostnamectl set-hostname k8s-node-1      # run on node 1
hostnamectl set-hostname k8s-node-2      # run on node 2
1.2 Add hosts entries
Add the following resolution entries to /etc/hosts on all nodes:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
 ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
 192.168.1.170 k8s-master-1 
 192.168.1.171 k8s-node-1 
 192.168.1.172 k8s-node-2 
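If you prefer to append the cluster entries non-interactively, a minimal sketch (assuming the default localhost lines already exist in /etc/hosts):
cat <<EOF >> /etc/hosts
192.168.1.170 k8s-master-1
192.168.1.171 k8s-node-1
192.168.1.172 k8s-node-2
EOF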
1.3 Disable SELinux and firewalld
Disable firewalld and SELinux on all nodes:
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
 setenforce 0
 systemctl disable firewalld
 systemctl stop firewalld
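To verify on each node (a quick check, not in the original steps): getenforce should report Permissive (or Disabled after a reboot), and firewalld should be inactive.
getenforce
systemctl is-active firewalld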
2. Install Docker (optional)
Note: Kubernetes 1.10 officially supports Docker 17.x. In my tests, installation failed with Docker 18.05 and succeeded with 18.04. Use the file docker-packages.tar and install it on every node.
tar -xvf docker-packages.tar
 cd docker-packages
rpm -Uvh *.rpm    # or: yum localinstall *.rpm
Start Docker and enable it at boot:
systemctl start docker && systemctl enable docker
Run docker info and note the Cgroup Driver (here: Cgroup Driver: cgroupfs).
Docker and the kubelet must use the same cgroup driver. If Docker is not using cgroupfs, run:
cat << EOF > /etc/docker/daemon.json
 {
   "exec-opts": ["native.cgroupdriver=cgroupfs"]
 }
 EOF
 systemctl daemon-reload && systemctl restart docker
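After the restart, the driver can be checked again; this is just a convenience filter over the same docker info output as above:
docker info | grep -i "cgroup driver"    # should show: Cgroup Driver: cgroupfs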
3. Install kubeadm, kubectl, and kubelet
Use the file kube-packages-1.10.1.tar and install it on every node.
kubeadm is the cluster bootstrapping tool.
kubectl is the cluster management CLI; the cluster is managed through its commands.
kubelet is the agent on every k8s node that manages the node's containers via Docker.
tar -xvf kube-packages-1.10.1.tar
 cd kube-packages-1.10.1
rpm -Uvh *.rpm    # or: yum localinstall *.rpm
On all Kubernetes nodes, configure the kubelet to use cgroupfs so it matches dockerd; otherwise the kubelet will fail to start. By default the kubelet uses cgroup-driver=systemd; change it to cgroup-driver=cgroupfs:
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Reload systemd and restart the kubelet:
 systemctl daemon-reload && systemctl restart kubelet
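Optionally (not part of the original steps), also enable the kubelet at boot so it comes back after a reboot:
systemctl enable kubelet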
Disable swap and adjust the iptables bridge settings, otherwise kubeadm will complain later:
swapoff -a
vi /etc/fstab   # comment out the swap line (or use the one-liner below)
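As a non-interactive alternative to editing the file by hand, a sketch that comments out any fstab line containing swap (check the result before rebooting):
sed -i '/\sswap\s/ s/^/#/' /etc/fstab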
 cat <<EOF >  /etc/sysctl.d/k8s.conf
 net.bridge.bridge-nf-call-ip6tables = 1
 net.bridge.bridge-nf-call-iptables = 1
 EOF
 sysctl --system
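To confirm the new values are active (assuming the bridge module is loaded, both should print 1):
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables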
4. Import the images
Use the file k8s-images-1.10.tar.gz and run the import on every node. With only a few nodes, there is no need to set up an image registry; any application images needed later are also loaded onto each node manually.
docker load -i k8s-images-1.10.tar.gz
There are 11 images in total:
 k8s.gcr.io/etcd-amd64:3.1.12 
 k8s.gcr.io/kube-apiserver-amd64:v1.10.1 
 k8s.gcr.io/kube-controller-manager-amd64:v1.10.1 
 k8s.gcr.io/kube-proxy-amd64:v1.10.1 
 k8s.gcr.io/kube-scheduler-amd64:v1.10.1 
 k8s.gcr.io/pause-amd64:3.1 
 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8  
 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8 
 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8 
 k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3 
 quay.io/coreos/flannel:v0.9.1-amd64
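A quick way to confirm the load succeeded on a node (a sketch; the pattern simply filters for the repositories listed above):
docker images | grep -E 'k8s.gcr.io|coreos/flannel'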
5. Deploy the master node with kubeadm init
Run this on the master only. The simplest and quickest deployment option is used here: etcd, the API server, the controller-manager, and the scheduler all run as containers on the master. etcd is a single instance without certificates, and its data is mounted at /var/lib/etcd on the master node. Note that the init command must specify the version and the pod network range:
kubeadm init --kubernetes-version=v1.10.1 --pod-network-cidr=10.244.0.0/16
This takes a few minutes. On success it prints:
Your Kubernetes master has initialized successfully!
 To start using your cluster, you need to run the following as a regular user:
   mkdir -p $HOME/.kube
   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
   sudo chown $(id -u):$(id -g) $HOME/.kube/config
 You should now deploy a pod network to the cluster.
 Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
 as root:
   kubeadm join 192.168.1.170:6443 --token wct45y.tq23fogetd7rp3ck --discovery-token-ca-cert-hash sha256:c267e2423dba21fdf6fc9c07e3b3fa17884c4f24f0c03f2283a230c70b07772f
Write down the join command; it is needed later when the nodes join. Run the commands from the output to save the kubeconfig:
  mkdir -p $HOME/.kube
   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
   sudo chown $(id -u):$(id -g) $HOME/.kube/config
At this point kubectl get node already shows the master node; it is NotReady because the network plugin has not been deployed yet.
[root@k8s-master-1 kubernetes1.10]# kubectl get node
 NAME      STATUS     ROLES     AGE       VERSION
 k8s-master-1   NotReady   master    3m        v1.10.1
Check all pods with kubectl get pod --all-namespaces. kube-dns also depends on the container network, so Pending is normal at this stage.
[root@k8s-master-1 kubernetes1.10]# kubectl get pod --all-namespaces
 NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE
 kube-system   etcd-master1                      1/1       Running   0          3m
 kube-system   kube-apiserver-master1            1/1       Running   0          3m
 kube-system   kube-controller-manager-master1   1/1       Running   0          3m
 kube-system   kube-dns-86f4d74b45-5nrb5         0/3       Pending   0          4m
 kube-system   kube-proxy-ktxmb                  1/1       Running   0          4m
 kube-system   kube-scheduler-master1            1/1       Running   0          3m
Configure the KUBECONFIG variable:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
echo $KUBECONFIG    # should print /etc/kubernetes/admin.conf
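As an extra sanity check (not part of the original steps), the API server should now answer at https://192.168.1.170:6443:
kubectl cluster-info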
6. Deploy the flannel network
Kubernetes supports multiple network options (flannel, Calico, Open vSwitch); flannel is chosen here. Once you are familiar with deploying k8s, you can try the other options. Apply the kube-flannel.yml file with kubectl:
kubectl apply -f kube-flannel.yml
Once the network is ready, the node status changes to Ready:
 [root@k8s-master-1 kubernetes1.10]# kubectl get node
 NAME      STATUS    ROLES     AGE       VERSION
 master1   Ready     master    18m       v1.10.1
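To watch the flannel pods come up across the cluster (a hedged check, assuming the DaemonSet pods carry flannel in their names as in the stock kube-flannel.yml):
kubectl get pods --all-namespaces -o wide | grep flannel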
7. Join the nodes with kubeadm join
On each node, run the join command generated earlier by kubeadm init (it targets the master's API endpoint); after a node has joined, go back to the master to check that it shows up.
kubeadm join 192.168.1.170:6443 --token wct45y.tq23fogetd7rp3ck --discovery-token-ca-cert-hash sha256:c267e2423dba21fdf6fc9c07e3b3fa17884c4f24f0c03f2283a230c70b07772f
 [root@k8s-master-1 ~]# kubectl get nodes
 NAME           STATUS    ROLES     AGE       VERSION
 k8s-master-1   Ready     master    10d       v1.10.1
 k8s-node-1     Ready     <none>    10d       v1.10.1
 k8s-node-2     Ready     <none>    10d       v1.10.1
If you have lost the join command, rejoin a node as follows. First get a token on the master: if kubeadm token list shows nothing, create one with kubeadm token create, and note the token value.
[root@k8s-master-1 kubernetes1.10]# kubeadm token list
 TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
 wct45y.tq23fogetd7rp3ck   22h       2018-07-26T21:38:57+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
On the node, run the following, replacing the token with your own:
kubeadm join --token wct45y.tq23fogetd7rp3ck 192.168.1.170:6443 --discovery-token-unsafe-skip-ca-verification
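If you would rather keep CA verification instead of skipping it, the discovery-token-ca-cert-hash can be recomputed on the master with the standard kubeadm recipe (not shown in the original post):
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'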
8. Deploy the k8s UI (dashboard)
The dashboard is the official k8s management UI; it shows application information and can be used to deploy applications. Its display language follows the browser's language automatically. Three YAML files need to be applied:
kubectl apply -f kubernetes-dashboard-http.yaml
 kubectl apply -f admin-role.yaml
 kubectl apply -f kubernetes-dashboard-admin.rbac.yaml
 [root@k8s-master-1 kubernetes1.10]# kubectl apply -f kubernetes-dashboard-http.yaml 
 serviceaccount "kubernetes-dashboard" created
 role.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created
 rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created
 deployment.apps "kubernetes-dashboard" created
 service "kubernetes-dashboard" created
 [root@k8s-master-1 kubernetes1.10]# kubectl apply -f admin-role.yaml 
 clusterrolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" created
 [root@k8s-master-1 kubernetes1.10]# kubectl apply -f kubernetes-dashboard-admin.rbac.yaml 
 clusterrolebinding.rbac.authorization.k8s.io "dashboard-admin" created
Once everything is created, the UI can be reached at http://<any node IP>:31000.
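To confirm the NodePort (a hedged check; the exact namespace depends on kubernetes-dashboard-http.yaml, so the filter searches all namespaces):
kubectl get svc --all-namespaces | grep kubernetes-dashboard    # the PORT(S) column should include 31000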

Summary
1. Using the latest Docker version, 18.05, caused kubeadm init to fail.

2. All the images are hosted abroad, so pulling them directly is not practical here. The heapster images were obtained by running docker pull and docker save on an overseas server, downloading the archive, and then loading it with docker load.

3. Running kubeadm init with certificates added did not succeed.

4. If an error occurs partway through the installation, you need to reinstall: reset the Kubernetes services, reset the network, and delete the network configuration and links:

kubeadm reset
 systemctl stop kubelet
 systemctl stop docker
 rm -rf /var/lib/cni/
 rm -rf /var/lib/kubelet/*
 rm -rf /etc/cni/
 ifconfig cni0 down
 ifconfig flannel.1 down
 ifconfig docker0 down
 ip link delete cni0
 ip link delete flannel.1
 systemctl start docker