Three machines were prepared for the k8s runtime environment. Details:

Role      IP
master    192.168.217.136
node      192.168.217.137

1. Set the hostnames of the machines.

On the master:

[root@localhost ~]# hostnamectl --static set-hostname master

On the node:

[root@localhost ~]# hostnamectl --static set-hostname node

2. Set up /etc/hosts on all three machines by running:

echo '192.168.217.136 master
192.168.217.137 node' >> /etc/hosts

3. Disable the firewall on all three machines:

systemctl disable firewalld.service
systemctl stop firewalld.service
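As a quick sanity check (a suggested verification, not part of the original walkthrough), confirm on each machine that the hostname, name resolution and firewall state are what the later steps expect:

hostnamectl status                  # the static hostname should now be master / node
ping -c 1 master && ping -c 1 node  # both names should resolve via /etc/hosts
systemctl is-active firewalld       # should not report "active"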
Deploy etcd

Kubernetes depends on etcd, so etcd has to be deployed first. This article installs it with yum (install etcd on both the master and the node):

[root@localhost ~]# yum install etcd -y

The default configuration file of the yum-installed etcd is /etc/etcd/etcd.conf:

[root@localhost ~]# vi /etc/etcd/etcd.conf
#[member]
ETCD_NAME=master                     (on the node, set ETCD_NAME=backup)
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
#if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""

Start etcd and verify its status (start it on all three machines):

[root@localhost ~]# systemctl start etcd
[root@localhost ~]# etcdctl set testdir/testkey0 0
0
[root@localhost ~]# etcdctl get testdir/testkey0
0
[root@localhost ~]# etcdctl -C http://etcd:4001 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy
[root@localhost ~]# etcdctl -C http://etcd:2379 cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy
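A couple of additional checks (suggested here, not in the original text) can confirm that etcd is reachable on the expected ports and that the test key landed where expected:

etcdctl member list          # list cluster members and their client URLs
etcdctl ls / --recursive     # browse the keyspace; testdir/testkey0 should appear
netstat -tlnp | grep etcd    # 2379 and 4001 should be listening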
Deploy the master

1. Install Docker

[root@k8s-master ~]# yum install docker

Build a private Docker registry:

# yum install docker docker-registry -y
systemctl start docker-distribution
netstat -antp | grep 5000
systemctl enable docker-distribution

The private registry is now built (a quick verification of docker and the registry is sketched after the component list below). Next, modify the docker configuration file on both the master and the node:

[root@k8s-master ~]# vim /etc/sysconfig/docker
# /etc/sysconfig/docker
# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi
OPTIONS='--insecure-registry 192.168.217.136:5000'

(Note that this second OPTIONS line overrides the first; alternatively, append --insecure-registry 192.168.217.136:5000 to the existing OPTIONS line.)

Set docker to start on boot and start the service:

[root@k8s-master ~]# chkconfig docker on
[root@k8s-master ~]# service docker start

2. Install Kubernetes

[root@k8s-master ~]# yum install kubernetes

3. Configure and start Kubernetes

The following components need to run on the Kubernetes master:

· Kubernetes API Server
· Kubernetes Controller Manager
· Kubernetes Scheduler
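Before configuring these components, it may be worth verifying (a suggested check, not in the original) that docker is running and that the private registry answers on port 5000:

systemctl status docker
curl http://192.168.217.136:5000/v2/_catalog   # registry v2 API; should return a JSON list of repositories (empty at this point)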
3.1 /etc/kubernetes/apiserver

[root@k8s-master ~]# vim /etc/kubernetes/apiserver
# kubernetes system config
# The following values are used to configure the kube-apiserver

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
#KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""

3.2 /etc/kubernetes/config

[root@k8s-master ~]# vim /etc/kubernetes/config
# kubernetes system config
# The following values are used to configure various aspects of all
# kubernetes services, including
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://k8s-master:8080"

Start the services and enable them at boot:

[root@k8s-master ~]# systemctl enable kube-apiserver.service
[root@k8s-master ~]# systemctl start kube-apiserver.service
[root@k8s-master ~]# systemctl enable kube-controller-manager.service
[root@k8s-master ~]# systemctl start kube-controller-manager.service
[root@k8s-master ~]# systemctl enable kube-scheduler.service
[root@k8s-master ~]# systemctl start kube-scheduler.service
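Once the three services are up, a quick way (suggested here, not in the original) to confirm the master is healthy is to query it through the API server:

kubectl -s http://127.0.0.1:8080 get componentstatuses   # scheduler, controller-manager and etcd should all report Healthy
kubectl -s http://127.0.0.1:8080 cluster-info            # prints the API server address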
4. Deploy the node

1. Install Docker
Same as on the master; see "1. Install Docker" in the master deployment above.
2. Install Kubernetes
Same as on the master; see "2. Install Kubernetes" in the master deployment above.
3. Configure and start Kubernetes
The following components need to run on the Kubernetes node:
· Kubelet
· Kubernetes Proxy
3.1 /etc/kubernetes/config

[root@K8s-node-1 ~]# vim /etc/kubernetes/config
# kubernetes system config
# The following values are used to configure various aspects of all
# kubernetes services, including
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://master:8080"

3.2 /etc/kubernetes/kubelet

[root@K8s-node-1 ~]# vim /etc/kubernetes/kubelet
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
#KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=node"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://master:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""

Start the services on the node and enable them at boot:

[root@K8s-node-1 ~]# systemctl enable kubelet.service
[root@K8s-node-1 ~]# systemctl start kubelet.service
[root@K8s-node-1 ~]# systemctl enable kube-proxy.service
[root@K8s-node-1 ~]# systemctl start kube-proxy.service

4. Check the status

On the master, list the nodes in the cluster and their status:

[root@k8s-master ~]# kubectl -s http://master:8080 get node
NAME STATUS AGE
k8s-node-1 Ready 3m
k8s-node-2 Ready 16s
[root@k8s-master ~]# kubectl get nodes
NAME STATUS AGE
k8s-node-1 Ready 3m
k8s-node-2   Ready     43s

At this point a Kubernetes cluster has been set up, but it cannot work properly yet; please continue with the following steps.
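If a node does not show up as Ready here, the usual places to look (a suggested troubleshooting sketch, not part of the original) are the node object itself and the kubelet/kube-proxy state on that node:

kubectl describe node k8s-node-1    # on the master: conditions and recent events for the node
journalctl -u kubelet -e            # on the node: most recent kubelet log entries
systemctl status kube-proxy         # on the node: confirm kube-proxy is running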
5. Create the overlay network: Flannel

1. Install Flannel

Run the following on both the master and the node:

[root@k8s-master ~]# yum install flannel
The installed version is 0.0.5.
2. Configure Flannel

Edit /etc/sysconfig/flanneld on both the master and the node:

[root@k8s-master ~]# vi /etc/sysconfig/flanneld
# Flanneld configuration options

# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379,http://192.168.8.72:2379,http://192.168.8.74:2379"

# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/coreos.com/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS="--logtostderr=false --log_dir=/var/log/k8s/flannel/ --etcd-endpoints=http://etcd:2379,http://192.168.8.72:2379,http://192.168.8.74:2379 --iface=ens37"
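Before moving on, it can save time (a suggested check, not in the original) to make sure each host can actually reach the etcd endpoints configured above (repeat for every endpoint you listed):

curl -s http://etcd:2379/version            # should return the etcd version
etcdctl -C http://etcd:2379 cluster-health  # same health check used earlier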
3. Configure the flannel key in etcd

Flannel uses etcd for its configuration to keep multiple flannel instances consistent, so the following has to be set in etcd. (The key '/coreos.com/network/config' corresponds to the FLANNEL_ETCD_PREFIX option in /etc/sysconfig/flanneld above; if they do not match, flanneld will fail to start.) This only needs to be done on the master:

[root@k8s-master ~]# etcdctl set /coreos.com/network/config '{ "Network": "10.1.0.0/16" }'
{ "Network": "10.1.0.0/16" }

4. Start everything

After starting Flannel, docker and the kubernetes services have to be restarted in turn, and the firewall has to be shut off again: starting flannel re-enables the firewall.

On the master:

systemctl enable flanneld.service
systemctl start flanneld.service
service docker restart
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service

On the node:

systemctl enable flanneld.service
systemctl start flanneld.service
service docker restart
systemctl restart kubelet.service
systemctl restart kube-proxy.service
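To confirm the overlay is in place (a suggested verification, not in the original; paths assume the yum-packaged flanneld), check the subnet flannel was assigned and that docker picked it up after the restart:

cat /run/flannel/subnet.env   # FLANNEL_SUBNET should be a /24 inside 10.1.0.0/16
ip addr show flannel0         # flannel tunnel interface (the name may differ depending on the backend)
ip addr show docker0          # docker0 should now sit inside the flannel subnet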
6. Configure nginx

Write the nginx ReplicationController creation file (yaml):

[root@k8s-master ~]# cat nginx-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-controller
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
2. Create the NodePort service file, nginx-server-nodeport.yaml
[root@k8s-master ~]# cat nginx-server-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-nodeport
spec:
  ports:
  - port: 8000
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    name: nginx

3. Create the pods (make sure the pod status reaches Running)
[root@k8s-master ~]# kubectl create -f nginx-rc.yaml
replicationcontroller "nginx-controller" created
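Besides listing the pods below, it can also be checked (a suggested extra, not in the original) that the ReplicationController itself exists and is keeping the desired replica count:

kubectl get rc
kubectl describe rc nginx-controller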
[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-controller-72rfp   0/1       ContainerCreating   0          6s
nginx-controller-lsbrt 0/1 ContainerCreating 0 6s
[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-controller-72rfp 1/1 Running 0 39s
nginx-controller-lsbrt   1/1       Running   0          39s

Note: if a pod stays in the ContainerCreating state, run kubectl describe pod <your-pod-name> to see the details. The most common problem is:

details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)

Solution:
/etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt (the file referenced in the error above) is a symlink into /etc/rhsm, but the target it points to does not actually exist, so install it with yum. (Note: downloading it may require a proxy, so use one just in case.)
yum install rhsm
After the installation completes, try pulling the image:

docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
If it still fails, the following workaround can be used:

wget http://mirror.centos.org/centos/7/os/x86_64/Packages/python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm
rpm2cpio python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm | cpio -iv --to-stdout ./etc/rhsm/ca/redhat-uep.pem | tee /etc/rhsm/ca/redhat-uep.pem
These two commands generate the file /etc/rhsm/ca/redhat-uep.pem.
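To double-check (a suggested verification, not in the original) that the certificate is in place and that the docker symlink now resolves:

ls -l /etc/rhsm/ca/redhat-uep.pem
ls -l /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt   # the symlink should now point at an existing file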
If everything goes smoothly, you should get the result below.
[root@k8s-master]# docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
Trying to pull repository registry.access.redhat.com/rhel7/pod-infrastructure ...
latest: Pulling from registry.access.redhat.com/rhel7/pod-infrastructure
26e5ed6899db: Pull complete
66dbe984a319: Pull complete
9138e7863e08: Pull complete
Digest: sha256:92d43c37297da3ab187fc2b9e9ebfb243c1110d446c783ae1b989088495db931
Status: Downloaded newer image for registry.access.redhat.com/rhel7/pod-infrastructure:latest
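Since pods actually run on the node(s), the pod-infrastructure image (and the certificate fix above) has to be in place there as well; a quick way to confirm (a suggested check, not in the original):

docker images | grep pod-infrastructure   # run on each node that will schedule pods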
Delete the previously created rc:

[root@k8s-master]# kubectl delete -f nginx-rc.yaml
Re-create it:

[root@k8s-master /]# kubectl create -f nginx-rc.yaml
replicationcontroller "nginx-controller" created
Check the status again:

[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-controller-72rfp 1/1 Running 0 39s
nginx-controller-lsbrt   1/1       Running   0
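One step the original sequence never shows explicitly is creating the NodePort service itself; before describing it in the next step, it has to be created from the yaml written earlier (assuming the same file name):

[root@k8s-master ~]# kubectl create -f nginx-server-nodeport.yaml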
4. View the service information (if the configuration above is correct, the output looks like the following)
[root@k8s-master ~]# kubectl describe service nginx-service-nodeport
Name: nginx-service-nodeport
Namespace: default
Labels: <none>
Selector: name=nginx
Type: NodePort
IP: 10.254.112.105
Port: <unset> 8000/TCP
NodePort: <unset> 30519/TCP
Endpoints: 10.1.30.2:80,10.1.68.2:80
Session Affinity: None
No events.

In the output above, the Endpoints (10.1.30.2:80, 10.1.68.2:80) are the docker container IPs, the NodePort (30519) is the port mapped onto the real physical hosts, and the IP (10.254.112.105) is the virtual (cluster) IP.
5. Test
[root@k8s-master ~]# ping 10.1.30.2
PING 10.1.30.2 (10.1.30.2) 56(84) bytes of data.
64 bytes from 10.1.30.2: icmp_seq=1 ttl=61 time=1.02 ms
^C
--- 10.1.30.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.024/1.024/1.024/0.000 ms
[root@k8s-master ~]# ping 10.1.68.2
PING 10.1.68.2 (10.1.68.2) 56(84) bytes of data.
64 bytes from 10.1.68.2: icmp_seq=1 ttl=61 time=0.793 ms
64 bytes from 10.1.68.2: icmp_seq=2 ttl=61 time=1.37 ms
^C
--- 10.1.68.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.793/1.083/1.373/0.290 ms
[root@k8s-master ~]# curl -i 10.1.30.2
HTTP/1.1 200 OK
Server: nginx/1.15.0
Date: Sun, 17 Jun 2018 16:48:22 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 05 Jun 2018 12:00:18 GMT
Connection: keep-alive
ETag: "5b167b52-264"
Accept-Ranges: bytes
[root@k8s-master ~]# curl -i 10.1.68.2
HTTP/1.1 200 OK
Server: nginx/1.15.0
Date: Sun, 17 Jun 2018 12:24:48 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 05 Jun 2018 12:00:18 GMT
Connection: keep-alive
ETag: "5b167b52-264"
Accept-Ranges: bytes
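Finally, the service should also be reachable from outside the pod network through the cluster IP and the NodePort shown by kubectl describe above (a suggested extra test, not in the original; 30519 is whatever port was allocated in this particular run):

curl -i http://10.254.112.105:8000      # cluster (virtual) IP and service port, from the master
curl -i http://192.168.217.137:30519    # node's physical IP and NodePort, from any machine that can reach the node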