Objective:
To better understand ConfigMap, Endpoints, Service (svc), and clusterIP.
Procedure:
Access an nginx cluster through a proxy nginx.
1. So that curl against the cluster returns something other than the default nginx homepage (which makes the load balancing easy to observe), use a ConfigMap to change the default page served by each web nginx.
2. So that the proxy server can reach the web nginx cluster, use a ConfigMap to configure the proxy nginx as a reverse proxy, with the proxied address set to the cluster svc address and the port set to the svc port.
1. Prepare the svc
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  clusterIP: 10.96.173.64
  ports:
  - protocol: TCP
    port: 8000
    targetPort: 80
  selector:
    app: nginx
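Assuming the manifest above is saved as nginx-svc.yaml (a hypothetical filename), it can be applied and inspected like this:

```shell
# Create the Service and confirm its clusterIP and port
kubectl apply -f nginx-svc.yaml
kubectl get svc nginx-svc

# At this point the Endpoints list is still empty, because no pods
# with the label app=nginx exist yet
kubectl get endpoints nginx-svc
```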
2. Prepare the ConfigMap for the proxy nginx
[root@k8s-master1 test]# cat proxy-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: conf
  namespace: default
data:
  default.conf: |-
    server {
        listen       80;
        server_name  localhost;
        location / {
            proxy_pass http://10.96.173.64:8000/;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
3. Prepare the proxy nginx pod
[root@k8s-master1 test]# cat nginx-proxy.yaml
apiVersion: v1
kind: Pod
metadata:
  name: proxy-nginx
spec:
  containers:
  - name: proxy-nginx
    image: nginx:1.14.2
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
    volumeMounts:
    - name: config-volume
      mountPath: /etc/nginx/conf.d/
  volumes:
  - name: config-volume
    configMap:
      name: conf
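The ConfigMap and pod above can then be created and checked; a sketch, assuming the filenames shown in the prompts:

```shell
# Create the ConfigMap first, then the pod that mounts it
kubectl apply -f proxy-config.yaml
kubectl apply -f nginx-proxy.yaml
kubectl get pod proxy-nginx -o wide

# Verify that default.conf was mounted into the container
kubectl exec proxy-nginx -- cat /etc/nginx/conf.d/default.conf
```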
4. Prepare the ConfigMaps for the cluster pods nginx1 and nginx2
[root@k8s-master1 test]# cat config1.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx1
  namespace: default
data:
  index.html: |
    nginx1
[root@k8s-master1 test]# cat config2.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx2
  namespace: default
data:
  index.html: |
    nginx2
5. Prepare the cluster pods nginx1 and nginx2
[root@k8s-master1 test]# cat nginx1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx1
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
    volumeMounts:
    - name: config-volume
      mountPath: /usr/share/nginx/html
  volumes:
  - name: config-volume
    configMap:
      name: nginx1
[root@k8s-master1 test]# cat nginx2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx2
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
    volumeMounts:
    - name: config-volume
      mountPath: /usr/share/nginx/html
  volumes:
  - name: config-volume
    configMap:
      name: nginx2
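Applying the backend ConfigMaps and pods should populate the svc's Endpoints; a sketch, assuming the filenames shown in the prompts:

```shell
# Create the ConfigMaps, then the backend pods
kubectl apply -f config1.yaml -f config2.yaml
kubectl apply -f nginx1.yaml -f nginx2.yaml

# Both pods carry the label app=nginx, so the svc
# should now list two endpoint addresses
kubectl get endpoints nginx-svc
```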
Test:
curl the proxy nginx; requests now reach the pods inside the cluster.

Because the clusterIP in the svc is hard-coded, the proxy nginx can still reach it even if the svc is deleted and recreated. And since the svc uses a label selector, pods added when the cluster scales out join the svc automatically, and the corresponding Endpoints entries are generated automatically.
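The load balancing can be observed by hitting the proxy repeatedly; a sketch, where the proxy pod's IP is a placeholder you look up yourself:

```shell
# Find the proxy pod's IP, then hit it several times; responses should
# alternate between "nginx1" and "nginx2" as kube-proxy balances across
# the svc endpoints (replace <proxy-pod-ip> with the IP shown below)
kubectl get pod proxy-nginx -o wide
for i in $(seq 1 6); do curl -s http://<proxy-pod-ip>/; done
```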

Verifying scale-out:
Create pod nginx3 and its corresponding ConfigMap.
[root@k8s-master1 test]# cat config3.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx3
  namespace: default
data:
  index.html: |
    nginx3
[root@k8s-master1 test]# cat nginx3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx3
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
    volumeMounts:
    - name: config-volume
      mountPath: /usr/share/nginx/html
  volumes:
  - name: config-volume
    configMap:
      name: nginx3
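Since nginx3 also carries the label app=nginx, it should join the svc automatically once created; a sketch, assuming the filenames shown in the prompts:

```shell
kubectl apply -f config3.yaml -f nginx3.yaml

# A third address should now appear in the Endpoints list,
# with no change needed on the svc or the proxy
kubectl get endpoints nginx-svc
```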


Note: if the pods are deployed with a deploy (Deployment), then
the label selector in the svc must also match the one in the deploy, otherwise no Endpoints are generated.
This refers specifically to the labels defined under the selector, not the container name, and not the name or labels in the deploy's own metadata.
The Deployment "nginx-deployment" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"name":"nginx"}: `selector` does not match template `labels`
This error means the selector labels in the deploy do not match the template labels; they must match for the deploy to be created.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80