StatefulSet
Common examples of stateful services (StatefulSet, or sts for short) are MongoDB replica sets, Redis Cluster, RabbitMQ cluster, and so on. These services almost always run in StatefulSet mode; in addition, the membership relations inside such clusters need a set of scripts or controllers to maintain state between members, which will be covered in a later advanced course. For now, to help you understand StatefulSet itself, we will again use an nginx service for the hands-on walkthrough.
1. Prepare the NFS Server
Make sure NFS works correctly and create the directory needed for persistence. (The NFS server was already deployed during the earlier verification steps; we reference it directly here.)
path: /opt/data/nginx
server: 10.0.19.127
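For reference, the export on the NFS server would look roughly like the following line in /etc/exports (the client subnet and mount options here are assumptions; adapt them to your own network and run `exportfs -r` after editing):

```
/opt/data/nginx 10.0.0.0/16(rw,sync,no_root_squash)
```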
2. Enable RBAC Permissions
RBAC stands for Role-Based Access Control.
Create the ServiceAccount and roles from the rbac.yaml file below:
apiVersion: v1
kind: ServiceAccount            # account used to grant the NFS provisioner its permissions in the cluster
metadata:
  name: nfs-client-provisioner
  namespace: kube-system
---
kind: ClusterRole               # cluster-wide role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner   # role name
rules:                          # role permissions
  - apiGroups: [""]
    resources: ["persistentvolumes"]    # resource it may act on
    verbs: ["get", "list", "watch", "create", "delete"]   # allowed operations on that resource
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding        # bind the cluster role to the account
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:                       # who the role is granted to
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole             # the cluster role defined above
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
This file creates the authorized account. Why is authorization needed? The NFS provisioner runs as an ordinary Pod, and by default the ServiceAccount it runs under is not allowed to call the API server to watch PersistentVolumeClaims, create and delete PersistentVolumes, or read StorageClasses. The ClusterRole above grants exactly those verbs, and the ClusterRoleBinding ties them to the nfs-client-provisioner account.
Run this on the master node:
[root@k8s-master1 ~]# kubectl apply -f rbac.yaml
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
3. Create the StorageClass and Specify the provisioner
Download link for the nfs-client-provisioner image: https://pan.baidu.com/s/1jQc8pPj2piQjtaVG4od6UA?pwd=uu82 (extraction code: uu82)
Create the nfs-client-provisioner.yaml file:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-provisioner-01
  namespace: kube-system        # must match the namespace in the RBAC file
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-provisioner-01
  template:
    metadata:
      labels:
        app: nfs-provisioner-01
    spec:
      serviceAccountName: nfs-client-provisioner   # use the ServiceAccount created above
      containers:
        - name: nfs-client-provisioner
          image: jmgao1983/nfs-client-provisioner:latest   # image address
          imagePullPolicy: IfNotPresent
          volumeMounts:               # mount the volume into the container at this path
            - name: nfs-client-root
              mountPath: /persistentvolumes   # do not change
          env:
            - name: PROVISIONER_NAME
              value: nfs-provisioner-01       # provisioner name, referenced by the StorageClass
            - name: NFS_SERVER
              value: 10.0.19.127              # NFS server address
            - name: NFS_PATH
              value: /opt/data/nginx          # exported NFS directory
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.19.127     # NFS server address
            path: /opt/data/nginx   # exported NFS directory
---
apiVersion: storage.k8s.io/v1
kind: StorageClass                  # create the StorageClass
metadata:
  name: nfs-boge
provisioner: nfs-provisioner-01     # must match the PROVISIONER_NAME env var in the provisioner config above
# Supported policies: Delete, Retain; default is Delete
reclaimPolicy: Retain               # reclaim policy
PS: what the nfs-client-provisioner image does: it mounts the remote NFS server to a local directory through the cluster's built-in NFS driver, registers itself as a storage provisioner, and is then associated with the StorageClass resource.
Create it on the master:
[root@k8s-master1 ~]# kubectl apply -f nfs-client-provisioner.yaml
deployment.apps/nfs-provisioner-01 created
storageclass.storage.k8s.io/nfs-boge created
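Before moving on, you can confirm dynamic provisioning works by creating a throwaway PVC against the new StorageClass (the claim name here is a made-up example); it should reach the Bound state and a matching PV should appear under /opt/data/nginx on the NFS server:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim              # hypothetical name; delete the claim after testing
  namespace: default
spec:
  accessModes: [ "ReadWriteMany" ]
  storageClassName: nfs-boge    # the StorageClass created above
  resources:
    requests:
      storage: 1Mi
```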
4. Create the Service, StatefulSet, and PVC
The Service name can be anything you like, but in this example the StatefulSet name is fixed as web,
so the first Pod the StatefulSet creates is named web-0 and the second web-1. (The manifests below also assume the nginx namespace already exists; create it with kubectl create ns nginx if needed.)
A StatefulSet Pod is bound to its PVC purely by name, i.e.:
PVC_name = volumeClaimTemplates_name + "-" + pod_name
www-web-0 = www + "-" + web-0
www-web-1 = www + "-" + web-1
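The naming rule above can be sketched with plain shell string concatenation (the names come straight from the manifests in this section):

```shell
#!/bin/sh
# PVC name = <volumeClaimTemplates name> + "-" + <pod name>
tmpl="www"      # volumeClaimTemplates name from the manifest
sts="web"       # StatefulSet name
replicas=2
i=0
while [ "$i" -lt "$replicas" ]; do
  pod="${sts}-${i}"
  echo "pod ${pod} -> pvc ${tmpl}-${pod}"
  i=$((i + 1))
done
# prints:
#   pod web-0 -> pvc www-web-0
#   pod web-1 -> pvc www-web-1
```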
root@node1:~# cat web.yaml
apiVersion: v1
kind: Service
metadata:
  name: web-headless            # name of the headless Service
  namespace: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
  namespace: nginx
spec:
  selector:
    matchLabels:
      app: nginx                # has to match .spec.template.metadata.labels
  serviceName: "web-headless"   # fill in the headless Service name
  replicas: 2                   # by default is 1
  template:
    metadata:
      labels:
        app: nginx              # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: nginx
          image: nginx:1.8
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:         # PVCs are created automatically from this template
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: nfs-boge
        resources:
          requests:
            storage: 5Gi
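For reference, the PVC that the controller generates for web-0 from this template is equivalent to something like the following (a sketch; the labels and ownership metadata the controller adds are omitted):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: www-web-0               # volumeClaimTemplates name + "-" + pod name
  namespace: nginx
spec:
  accessModes: [ "ReadWriteOnce" ]
  storageClassName: nfs-boge
  resources:
    requests:
      storage: 5Gi
```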
5. Test
[root@k8s-master1 ~]# kubectl exec -it web-0 -n nginx -- sh
# ping web-0.web-headless.nginx
PING web-0.web-headless.nginx.svc.cluster.local (10.244.0.102): 56 data bytes
64 bytes from 10.244.0.102: icmp_seq=0 ttl=64 time=0.025 ms
64 bytes from 10.244.0.102: icmp_seq=1 ttl=64 time=0.053 ms
64 bytes from 10.244.0.102: icmp_seq=2 ttl=64 time=0.042 ms
^C--- web-0.web-headless.nginx.svc.cluster.local ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.025/0.040/0.053/0.000 ms
# ping web-1.web-headless.nginx
PING web-1.web-headless.nginx.svc.cluster.local (10.244.1.80): 56 data bytes
64 bytes from 10.244.1.80: icmp_seq=0 ttl=62 time=0.401 ms
64 bytes from 10.244.1.80: icmp_seq=1 ttl=62 time=0.313 ms
64 bytes from 10.244.1.80: icmp_seq=2 ttl=62 time=0.369 ms
^C--- web-1.web-headless.nginx.svc.cluster.local ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.313/0.361/0.401/0.036 ms
Benefits of a stateful service:
1. Pod names are stable and do not change.
2. Pods can reach each other directly by DNS name in the form <pod-name>.<headless-service-name>.<namespace>.
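The stable DNS name in point 2 expands to the full FQDN seen in the ping output above; assembled with plain shell it looks like this (cluster.local is the default cluster domain and is an assumption, as some clusters use a different domain):

```shell
#!/bin/sh
# Stable per-pod DNS name: <pod>.<headless-service>.<namespace>.svc.<cluster-domain>
pod="web-0"
svc="web-headless"
ns="nginx"
domain="cluster.local"   # default cluster domain; may differ in your cluster
echo "${pod}.${svc}.${ns}.svc.${domain}"
# prints: web-0.web-headless.nginx.svc.cluster.local
```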