Using dynamic PVs for data storage in k8s (backed by NFS)
Environment:
OS: CentOS 7 x86_64 minimal
A running k8s cluster
An NFS storage server
Application: a redis-cluster deployment
1. Set up the NFS storage service
# Install
yum install nfs-utils -y
# Pin the helper-daemon ports
# vim /etc/sysconfig/nfs
# Append the port settings
MOUNTD_PORT=4001
STATD_PORT=4002
LOCKD_TCPPORT=4003
LOCKD_UDPPORT=4003
RQUOTAD_PORT=4004
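The pinned ports only take effect once the NFS services restart. A quick way to confirm them after rpcbind and nfs-server are running (they are enabled and started at the end of this step):
systemctl restart rpcbind nfs-server
rpcinfo -p | grep -E 'mountd|status|nlockmgr|rquotad'   # should list ports 4001-4004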
# How user squashing works
1. Regular users
With all_squash: every client user is mapped to the anonymous user (nfsnobody).
With no_all_squash: client users are mapped to the server-side user with the same uid, so the client should create users whose uids match the server's; otherwise they also fall back to nfsnobody. root is the exception, because root_squash is the default unless no_root_squash is specified.
2. The root user
With root_squash: a client accessing the NFS server as root is mapped to nfsnobody.
With no_root_squash: a client accessing the NFS server as root stays root. Other users are still mapped to the user with the matching uid, because no_all_squash is the default.
Export options (a combined example follows this list)
ro: the share is read-only
rw: the share is readable and writable
all_squash: map all client users to the anonymous user/group
no_all_squash (default): match client users against local users first, and map to the anonymous user/group only when no match is found
root_squash (default): map the client root user to the anonymous user/group
no_root_squash: let the client root user keep root privileges
anonuid=<UID>: local UID used for anonymous access, defaults to nfsnobody (65534)
anongid=<GID>: local GID used for anonymous access, defaults to nfsnobody (65534)
secure (default): only accept client connections from TCP/IP ports below 1024
insecure: also accept client connections from ports above 1024
sync: write data to the memory buffer and to disk at the same time; slower, but guarantees consistency
async: keep data in the memory buffer and flush it to disk only when necessary
wdelay (default): wait for related writes and commit them together, which improves throughput
no_wdelay: commit each write immediately; should be combined with sync
subtree_check (default): if the export is a subdirectory, the NFS server also checks the permissions of its parent directory
no_subtree_check: skip the parent-directory check even for subdirectory exports, which improves performance
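As an illustration of how these options combine, a hypothetical export line that forces every client onto a single local account (the uid/gid 1000 values are placeholders for whatever account owns the share):
/var/nfs 192.168.1.0/24(rw,sync,all_squash,anonuid=1000,anongid=1000)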
Create the shared directory owned by nfsuser (uid=1000) and export it with rw plus the default options:
# mkdir /var/nfs
# chown -R nfsuser. /var/nfs
# vim /etc/exports
/var/nfs 192.168.1.0/24(rw)
# exportfs -r   # reload the exports configuration
# exportfs -v   # show the effective export options
/var/nfs 192.168.1.0/24(rw,sync,wdelay,hide,no_subtree_check,sec=sys,secure,root_squash,no_all_squash)
exportfs options
-a: export or unexport everything listed in /etc/exports
-r: re-read /etc/exports and synchronize it with /var/lib/nfs/xtab
-u: unexport a single directory (combined with -a, unexports everything in /etc/exports)
-v: print the export options in detail
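For example, to take every share offline and publish them again after editing /etc/exports:
exportfs -au   # unexport all shares
exportfs -r    # re-read /etc/exports and re-export
exportfs -v    # confirm the result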
# Open the firewall ports (111, 2049, and 4001-4004, both TCP and UDP)
firewall-cmd --zone=public --add-port=111/tcp --add-port=111/udp --permanent
firewall-cmd --zone=public --add-port=2049/tcp --add-port=2049/udp --permanent
firewall-cmd --zone=public --add-port=4001-4004/tcp --add-port=4001-4004/udp --permanent
firewall-cmd --reload
Or open the predefined services instead:
firewall-cmd --zone=public --add-service=nfs --permanent
firewall-cmd --zone=public --add-service=rpc-bind --permanent
firewall-cmd --zone=public --add-service=mountd --permanent
firewall-cmd --reload
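To confirm the rules are active:
firewall-cmd --zone=public --list-ports
firewall-cmd --zone=public --list-services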
# Enable and start the services
systemctl enable rpcbind nfs-server
systemctl start rpcbind nfs-server
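A quick smoke test of the export; 192.168.1.10 below is a placeholder for the NFS server's address (any client inside the allowed 192.168.1.0/24 range will do):
showmount -e localhost                      # run on the server; should list /var/nfs
mount -t nfs 192.168.1.10:/var/nfs /mnt     # run on a client
touch /mnt/testfile && ls -l /mnt           # file shows up as nfsnobody: client root is squashed by default
umount /mnt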
2. Deploy the nfs-client-provisioner in k8s
#nfs-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name; must match the deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
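The class can be applied before the provisioner exists; a quick check that it registered:
kubectl apply -f nfs-class.yaml
kubectl get storageclass managed-nfs-storage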
#nfs-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: temp
---
kind: ClusterRole
apiVersion: /v1
metadata:
name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: /v1
metadata:
name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: temp
roleRef:
kind: ClusterRole
name: nfs-client-provisioner-runner
apiGroup:
---
kind: Role
apiVersion: /v1
metadata:
name: leader-locking-nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: temp
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: /v1
metadata:
name: leader-locking-nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: temp
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: temp
roleRef:
kind: Role
name: leader-locking-nfs-client-provisioner
apiGroup:
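The manifests use a temp namespace throughout, so it has to exist before the ServiceAccount can be created:
kubectl create namespace temp
kubectl apply -f nfs-rbac.yaml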
#nfs-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-client-provisioner
labels:
app: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: temp
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: nfs-client-provisioner
template:
metadata:
labels:
app: nfs-client-provisioner
spec:
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-client-provisioner
image: quay.io/external_storage/nfs-client-provisioner:latest
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: fuseim.pri/ifs
- name: NFS_SERVER
value: 192.168.47.132
- name: NFS_PATH
value: /data
volumes:
- name: nfs-client-root
nfs:
server: 192.168.47.132
path: /data
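Apply the deployment and confirm the provisioner comes up. Note that NFS_SERVER/NFS_PATH (192.168.47.132:/data here) must point at a real export on your NFS server, e.g. the /var/nfs share from step 1:
kubectl apply -f nfs-deploy.yaml
kubectl -n temp get pods -l app=nfs-client-provisioner
kubectl -n temp logs deploy/nfs-client-provisioner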
#nfs-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
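If everything is wired up correctly, the claim should bind within a few seconds and a dynamically provisioned PV appears alongside it (plus a new subdirectory under the NFS export):
kubectl apply -f nfs-claim.yaml
kubectl get pvc nfs-claim
kubectl get pv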
3. Mounting application data on a dynamic PV
To be added later.
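Until the redis-cluster manifests are filled in, here is a minimal sketch of how an application consumes the storage class: a StatefulSet whose volumeClaimTemplates request PVCs from managed-nfs-storage, so each replica gets its own dynamically provisioned PV. The redis image, replica count, and sizes below are illustrative assumptions, not this article's final configuration:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster
  namespace: temp
spec:
  clusterIP: None            # headless service for the StatefulSet
  selector:
    app: redis-cluster
  ports:
    - name: redis
      port: 6379
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
  namespace: temp
spec:
  serviceName: redis-cluster
  replicas: 3
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
        - name: redis
          image: redis:5.0
          command: ["redis-server", "--appendonly", "yes"]
          volumeMounts:
            - name: data
              mountPath: /data          # redis persists its AOF here
  volumeClaimTemplates:                 # one PVC (and thus one PV) per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: managed-nfs-storage
        resources:
          requests:
            storage: 1Gi
EOF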