Creating the StorageClass and storage pool

Block storage cannot be shared between different Pods. Before block storage can be provisioned, a StorageClass and a storage pool must be created first. Kubernetes needs these two resources in order to interact with Rook and allocate PersistentVolumes (PVs).
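In the Rook examples both resources live in a single manifest. A minimal sketch of what that storageclass.yaml typically contains (field values assumed from Rook's stock example; the CSI secret names and parameters can vary between Rook versions, but the pool name, provisioner, reclaim policy, and expansion flag are consistent with the output shown later in this section):

```yaml
# Sketch of Rook's stock storageclass.yaml -- values assumed, verify against your Rook release.
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 3          # three replicas of each object
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph       # namespace the Rook cluster runs in
  pool: replicapool          # the CephBlockPool defined above
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true
```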
Prepare the YAML file

[root@K8S-PROD-M1 rook]# cd cluster/examples/kubernetes/ceph/csi/rbd
[root@K8S-PROD-M1 rbd]# vi storageclass.yaml

Create the StorageClass resource

[root@K8S-PROD-M1 rbd]# kubectl apply -f storageclass.yaml

Check the result

List the StorageClass resources in the Kubernetes cluster:
[root@K8S-PROD-M1 rbd]# kubectl get sc
NAME              PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   95s

Check the operator logs:
[root@K8S-PROD-M1 rbd]# kubectl logs -f -n rook-ceph pod/rook-ceph-operator-f45d597d7-r2bkq
...
2020-11-03 06:54:36.096825 I | op-mon: parsing mon endpoints: b=10.110.45.116:6789,d=10.96.157.194:6789,a=10.97.84.218:6789
2020-11-03 06:54:36.097112 I | ceph-spec: adding finalizer "cephblockpool.ceph.rook.io" on "replicapool"
2020-11-03 06:54:36.113891 E | ceph-block-pool-controller: failed to reconcile failed to add finalizer: failed to add finalizer "cephblockpool.ceph.rook.io" on "replicapool": Operation cannot be fulfilled on cephblockpools.ceph.rook.io "replicapool": the object has been modified; please apply your changes to the latest version and try again
2020-11-03 06:54:37.137400 I | op-mon: parsing mon endpoints: b=10.110.45.116:6789,d=10.96.157.194:6789,a=10.97.84.218:6789
2020-11-03 06:54:37.137554 I | ceph-spec: adding finalizer "cephblockpool.ceph.rook.io" on "replicapool"
2020-11-03 06:54:40.924952 I | ceph-block-pool-controller: creating pool "replicapool" in namespace "rook-ceph"
2020-11-03 06:54:59.767825 I | cephclient: creating replicated pool replicapool succeeded

Create the PVC

[root@K8S-PROD-M1 rbd]# kubectl apply -f pvc.yaml
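The pvc.yaml applied here is Rook's stock example, which is not reproduced in the original text. A minimal sketch, with the size and access mode assumed to match the 1Gi ReadWriteOnce claim shown in the output that follows:

```yaml
# Sketch of the stock pvc.yaml -- values assumed from the kubectl output below.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce        # RBD block volumes are mounted by one node at a time
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block
```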
[root@K8S-PROD-M1 rbd]# kubectl get pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
rbd-pvc   Bound    pvc-0d6574d8-00b7-4964-95e0-60fa256c1659   1Gi        RWO            rook-ceph-block   82s
[root@K8S-PROD-M1 rbd]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS      REASON   AGE
pvc-0d6574d8-00b7-4964-95e0-60fa256c1659   1Gi        RWO            Delete           Bound    default/rbd-pvc   rook-ceph-block            81s

Use the block device

[root@K8S-PROD-M1 ceph]# vi csirbd-demo-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: csirbd-demo-pod
spec:
  restartPolicy: OnFailure
  containers:
    - name: csirbd-demo-container
      image: busybox
      volumeMounts:
        - name: rbd-pvc
          mountPath: /var/test
      command: ['sh', '-c', 'echo "Hello World" > /var/test/data; exit 0']
  volumes:
    - name: rbd-pvc
      persistentVolumeClaim:
        claimName: rbd-pvc
[root@K8S-PROD-M1 ceph]# kubectl apply -f csirbd-demo-pod.yaml

Test persistence

[root@K8S-PROD-M1 ceph]# vi csirbd-demo-pod-2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: csirbd-demo-pod-2
spec:
  restartPolicy: OnFailure
  containers:
    - name: csirbd-demo-container
      image: busybox
      volumeMounts:
        - name: rbd-pvc
          mountPath: /var/test
      command: ['sh', '-c', 'cat /var/test/data; exit 0']
  volumes:
    - name: rbd-pvc
      persistentVolumeClaim:
        claimName: rbd-pvc
[root@K8S-PROD-M1 ceph]# kubectl apply -f csirbd-demo-pod-2.yaml
Check the result: the data reappears. Because the second Pod mounts the same PVC, it can read the data written by the first Pod, confirming that the RBD volume persists independently of any single Pod's lifecycle.
[root@K8S-PROD-M1 ceph]# kubectl logs -f csirbd-demo-pod-2 csirbd-demo-container
Hello World