Contents
- 1. The three deployment methods officially provided by Kubernetes
- 2. Building a k8s cluster with kubeadm
- 2.1 Basic environment setup
- 2.2 Installing Docker
- 2.3 Adding the Kubernetes yum repository
- 2.4 Installing kubeadm, kubelet, and kubectl
- 2.5 Deploying the Kubernetes Master
- 2.6 Joining Kubernetes Nodes
- 2.7 Installing the CNI network plugin
- 2.8 Testing the Kubernetes cluster
- 2.9 Deploying the Dashboard
1. The three deployment methods officially provided by Kubernetes
- minikube
- Minikube is a tool that quickly runs a single-node Kubernetes locally; it is intended only for trying out Kubernetes or for day-to-day development. Deployment docs: https://kubernetes.io/docs/setup/minikube/
- kubeadm
- Kubeadm is also a tool; it provides kubeadm init and kubeadm join for quickly deploying a Kubernetes cluster. Deployment docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
# Create a Master node
[root@localhost ~]$ kubeadm init
# Join a Node to the current cluster
[root@localhost ~]$ kubeadm join <Master IP and port>
- Binary packages
- Recommended for production: download the release binaries from the official site and deploy each component by hand to assemble the Kubernetes cluster. Download: https://github.com/kubernetes/kubernetes/releases
2. Building a k8s cluster with kubeadm
Installation requirements
- One or more machines running CentOS 7.x x86_64
- Hardware: 2 GB RAM or more, 2 CPUs or more, 30 GB of disk or more (note: the master needs at least two cores)
- Internet access to pull images; if the servers are offline, download the images in advance and import them on each node
- Swap disabled
Environment roles

| IP | Role | Installed software |
| --- | --- | --- |
| 192.168.88.10 | k8s-master | kube-apiserver kube-scheduler kube-controller-manager kubelet docker flannel |
| 192.168.88.11 | k8s-node01 | kubelet kube-proxy docker flannel |
| 192.168.88.12 | k8s-node02 | kubelet kube-proxy docker flannel |
2.1 Basic environment setup
Run the following commands on every machine.
Disable the firewall and SELinux
# Disable the firewall
[root@localhost ~]$ systemctl stop firewalld && systemctl disable firewalld
# Disable SELinux
[root@localhost ~]$ sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config && setenforce 0
Disable the swap partition
# swapoff -a disables swap immediately (temporary); the sed comments out the swap line in /etc/fstab (permanent)
[root@localhost ~]$ swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab
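To double-check that swap really is off, the Swap row of free should be all zeros:
[root@localhost ~]$ free -m | grep -i swap
Swap:             0           0           0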
Set the hostnames
[root@localhost ~]$ hostnamectl set-hostname k8s-master
# (run on 192.168.88.10)
[root@localhost ~]$ hostnamectl set-hostname k8s-node01
# (run on 192.168.88.11)
[root@localhost ~]$ hostnamectl set-hostname k8s-node02
# (run on 192.168.88.12)
Add hosts entries on the master
# Replace the IPs and hostnames with your own
[root@localhost ~]$ cat >> /etc/hosts << EOF
192.168.88.10 k8s-master
192.168.88.11 k8s-node01
192.168.88.12 k8s-node02
EOF
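A quick sanity check that the new entries resolve (any of the three names will do):
[root@k8s-master ~]$ ping -c 1 k8s-node01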
Kernel tuning: pass bridged IPv4 traffic to iptables chains. Set this on all three machines.
# Kubernetes services support two proxy modes, one based on iptables and one based on ipvs. ipvs performs noticeably better, but using it requires loading the ipvs kernel modules by hand (see the sketch after this block).
# Choose either iptables or ipvs below.
############### iptables configuration ###############
# Pass bridged IPv4 traffic to iptables chains
[root@localhost ~]$ cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Apply (note: sysctl -p only reads /etc/sysctl.conf; --system also loads /etc/sysctl.d/*.conf)
[root@localhost ~]$ sysctl --system
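If you prefer ipvs instead, here is a minimal sketch for loading the ipvs modules, assuming the stock CentOS 7 kernel (on kernels 4.19+, nf_conntrack_ipv4 has been renamed to nf_conntrack):
############### ipvs configuration ###############
[root@localhost ~]$ yum -y install ipset ipvsadm
[root@localhost ~]$ cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
[root@localhost ~]$ chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
# Verify the modules are loaded
[root@localhost ~]$ lsmod | grep -e ip_vs -e nf_conntrack_ipv4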
Set the system time zone and sync with a time server
[root@localhost ~]$ yum install -y ntpdate
[root@localhost ~]$ ntpdate time.windows.com
# If the time is still wrong, set the time zone explicitly
[root@localhost ~]$ timedatectl set-timezone Asia/Shanghai
Upgrade the system kernel
This step is optional.
The stock 3.10.x kernel shipped with CentOS 7.x has bugs that make Docker unstable.
[root@localhost ~]$ rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# After installing, check that the new kernel's menuentry in /boot/grub2/grub.cfg contains an initrd16 line; if not, install again!
[root@localhost ~]$ yum --enablerepo=elrepo-kernel install -y kernel-lt
# List all kernels available on the system
[root@localhost ~]$ awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
# Boot from the new kernel by default (use the menuentry title listed above)
[root@localhost ~]$ grub2-set-default 'CentOS Linux (4.4.189-1.el7.elrepo.x86_64) 7 (Core)' && reboot
# Check the running kernel after the reboot
[root@localhost ~]$ uname -r
2.2 Installing Docker
Install Docker, kubeadm, and kubelet on all nodes.
Kubernetes' default CRI (container runtime) is Docker, so install Docker first.
Install Docker
# Download the Docker repo file from the Aliyun mirror
[root@localhost ~]$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
# Install a pinned Docker version
[root@localhost ~]$ yum -y install docker-ce-18.06.1.ce-3.el7
# Start Docker
[root@localhost ~]$ systemctl enable docker && systemctl start docker
[root@localhost ~]$ docker --version
Docker version 18.06.1-ce, build e68fc7a
配置docker的镜像源
[root@localhost ~]$ cat >> /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
#重启docker
[root@localhost ~]$ systemctl restart docker
[root@localhost ~]$ docker info
........
Experimental: false
Insecure Registries:
127.0.0.0/8
Registry Mirrors:
https://b9pmyelo.mirror.aliyuncs.com/ # the mirror change took effect
Live Restore Enabled: false
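Optional: during kubeadm join later in this guide, Docker's default cgroupfs cgroup driver triggers a warning that systemd is recommended. A hedged sketch that switches the driver (merged with the mirror setting above):
[root@localhost ~]$ cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@localhost ~]$ systemctl restart docker
# Verify the active driver
[root@localhost ~]$ docker info | grep -i cgroup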
2.3 Adding the Kubernetes yum repository
Run on all hosts.
Configure the Kubernetes yum repository
[root@localhost ~]$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
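To see which versions the repo actually carries before pinning one in the next step:
[root@localhost ~]$ yum list kubeadm --showduplicates | sort -r | head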
2.4 Installing kubeadm, kubelet, and kubectl
Run on all hosts. Versions change frequently, so pin the version number here.
# Install kubelet, kubeadm, and kubectl at a pinned version
[root@localhost ~]$ yum -y install kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
# Enable kubelet at boot (it will crash-loop until kubeadm init/join runs; that is expected)
[root@localhost ~]$ systemctl start kubelet && systemctl enable kubelet
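A quick check that the pinned version landed on every host:
[root@localhost ~]$ kubeadm version -o short
v1.18.0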
2.5 Deploying the Kubernetes Master
Run this only on the Master node; change --apiserver-advertise-address to your own master's address.
The default image registry k8s.gcr.io is unreachable from inside China, so point kubeadm at the Aliyun registry instead. (The command takes a while because it is pulling images in the background.)
# One-liner for easy copy-paste
[root@k8s-master ~]$ kubeadm init --apiserver-advertise-address=192.168.88.10 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.0 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
# Remember: --apiserver-advertise-address must be your own master's address
# If init fails, reset the node before running init again
[root@k8s-master ~]$ kubeadm reset
========================================================================
--apiserver-advertise-address  # IP of the current node
--kubernetes-version           # Kubernetes version to deploy
--image-repository             # kubeadm pulls images from k8s.gcr.io by default, which is unreachable from China, so point it at the Aliyun registry
--pod-network-cidr             # pod network CIDR
--service-cidr                 # service network CIDR
--ignore-preflight-errors=Swap # ignore the swap preflight error
=========================================================================
# Flag reference:
--apiserver-advertise-address string   IP address the apiserver binds to and advertises.
--apiserver-bind-port int32            Port the apiserver listens on (default 6443).
--apiserver-cert-extra-sans strings    Extra Subject Alternative Names (SANs) for the apiserver certificate; either IPs or DNS names. The certificate is bound to its SANs.
--cert-dir string                      Directory to store certificates in (default "/etc/kubernetes/pki").
--certificate-key string               Key used to encrypt the control-plane certificates in the kubeadm-certs Secret.
--config string                        Path to a kubeadm configuration file.
--cri-socket string                    Path to the CRI socket; if empty, kubeadm autodetects it. Only set this when the machine has multiple or non-standard CRI sockets.
--dry-run                              Do not apply any changes; just print what would be done.
--feature-gates string                 Extra features to enable, given as key=value pairs.
-h, --help                             Help.
--ignore-preflight-errors strings      Preflight errors to downgrade to warnings, e.g. 'IsPrivilegedUser,Swap'. Value 'all' ignores errors from all checks.
--image-repository string              Registry to pull the control-plane images from (default "k8s.gcr.io").
--kubernetes-version string            Kubernetes version to deploy (default "stable-1").
--node-name string                     Node name; defaults to the node's hostname.
--pod-network-cidr string              Pod network CIDR; the control plane carves per-node subnets out of it for the containers started on each node.
--service-cidr string                  Service IP range (default "10.96.0.0/12").
--service-dns-domain string            DNS domain suffix for services, e.g. "myorg.internal" (default "cluster.local").
--skip-certificate-key-print           Do not print the key used to encrypt the control-plane certificates.
--skip-phases strings                  Phases to skip.
--skip-token-print                     Do not print the default bootstrap token generated by kubeadm init.
--token string                         Token used to establish mutual trust between nodes and the control plane; format [a-z0-9]{6}\.[a-z0-9]{16}, e.g. abcdef.0123456789abcdef.
--token-ttl duration                   How long before the token is automatically deleted (e.g. 1s, 2m, 3h). '0' means it never expires (default 24h0m0s).
--upload-certs                         Upload the control-plane certificates to the kubeadm-certs Secret.
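As an alternative to this long flag list, kubeadm can read every setting from a config file; a minimal sketch:
# Dump the defaults, then edit advertiseAddress, imageRepository,
# kubernetesVersion, and networking.podSubnet to match the flags above
[root@k8s-master ~]$ kubeadm config print init-defaults > kubeadm-config.yaml
[root@k8s-master ~]$ kubeadm init --config kubeadm-config.yaml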
Output of a successful kubeadm init:
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.4.34]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.4.34 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.4.34 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
......(output omitted)
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
# The line below indicates success
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.88.10:6443 --token vrc4kd.x0yhcsvvxnlcv1ld \
--discovery-token-ca-cert-hash sha256:d25803ca8a3b71fd569a17df87fa3dc5afc7b5b8f1f4dcd9393ab570b3a5da0f
# List the Docker images kubeadm pulled
[root@k8s-master ~]$ docker images
To make kubectl work as a non-root user, follow the steps from the init output:
[root@k8s-master ~]$ mkdir -p $HOME/.kube
[root@k8s-master ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
# List all nodes
[root@k8s-master ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 4m2s v1.18.0
The default token is valid for 24 hours; once it expires it can no longer be used. If more nodes need to join later, generate a new token as follows.
Generate a new token
# Generate a token on the master
[root@k8s-master ~]$ kubeadm token create
1xlw8v.0iv91yae7c4yw3t0
[root@k8s-master ~]$ kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
1xlw8v.0iv91yae7c4yw3t0 23h 2019-07-05T18:13:18+08:00 authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
## discovery-token-ca-cert-hash
# Compute the sha256 hash of the CA certificate's public key
[root@k8s-master ~]$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
896ead0cc384e0e41139544e01049948c7b878732216476c2d5608c94c919ed6
# Join the node to the cluster
[root@k8s-node1 ~]$ kubeadm join 192.168.88.10:6443 --token 39p4bn.spf1yvgt3n7qc933 --discovery-token-ca-cert-hash sha256:896ead0cc384e0e41139544e01049948c7b878732216476c2d5608c94c919ed6
# Pattern: kubeadm join <master IP:port> --token <token created on the master> --discovery-token-ca-cert-hash sha256:<CA hash computed above>
========================================================
# Alternative: skip CA pinning entirely (less secure, but no hash needed)
kubeadm join 192.168.100.101:6443 --token add3mn.tnorrntsgfo64tku --discovery-token-unsafe-skip-ca-verification
# If the token has expired, create a new one first
kubeadm token create
# On the master, list tokens to check whether one is still valid
kubeadm token list
# Or generate a new token together with the complete join command, then run that on the node
kubeadm token create --print-join-command
kubeadm join 192.168.100.101:6443 --token ijluj4.zkmo9aneom8yw3tz --discovery-token-unsafe-skip-ca-verification
2.6 Joining Kubernetes Nodes
Run on the Node machines.
Register each Node with the Master using kubeadm join; the exact join command was printed by kubeadm init above.
# Everyone's values are different!!! Copy the join command produced by your own master's kubeadm init
[root@k8s-node1 ~]$ kubeadm join 192.168.88.10:6443 --token bgoz5x.qift7w4fue97pccu \
--discovery-token-ca-cert-hash sha256:b9216c0d46e4a86cddc2800ba062d0f8c2cbbe1894eb2822840161efe826f5f0
# If the join fails, reset the node and try again
#[root@k8s-node1 ~]$ kubeadm reset
Output:
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
# This confirms the node joined the cluster successfully
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8s-master ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 20m v1.18.0
k8s-node01 NotReady <none> 90s v1.18.0
k8s-node02 NotReady <none> 113s v1.18.0
# NotReady: the nodes are not ready yet because no network plugin is installed (next section)
2.7 Installing the CNI network plugin
Run only on the Master node.
The main job of the network plugin is to let pods reach each other across hosts. kube-flannel.yaml mirror: https://wwz.lanzoul.com/iO7Zxrnaumh password: e7wx
[root@k8s-master ~]$ wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# The download sometimes fails from inside China; you can also create the file by hand and paste the contents in,
# or fetch the mirror copy directly:
#[root@k8s-master ~]$ wget https://www.chenleilei.net/soft/k8s/kube-flannel.yaml
# quay.io is unreachable from China; replace it with the quay-mirror.qiniu.com mirror if needed
#[root@k8s-master ~]$ sed -i 's#quay.io#quay-mirror.qiniu.com#g' kube-flannel.yaml
[root@k8s-master ~]$ kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
Check the node status. After the network plugin is installed the nodes should look like the output below; continue only once every node is Ready.
[root@k8s-master ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 37m v1.18.0
k8s-node01 Ready <none> 5m22s v1.18.0
k8s-node02 Ready <none> 5m18s v1.18.0
[root@k8s-master ~]$ kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-bccdc95cf-h2ngj 1/1 Running 0 14m
coredns-bccdc95cf-m78lt 1/1 Running 0 14m
etcd-k8s-master 1/1 Running 0 13m
kube-apiserver-k8s-master 1/1 Running 0 13m
kube-controller-manager-k8s-master 1/1 Running 0 13m
kube-flannel-ds-amd64-j774f 1/1 Running 0 9m48s
kube-flannel-ds-amd64-t8785 1/1 Running 0 9m48s
kube-flannel-ds-amd64-wgbtz 1/1 Running 0 9m48s
kube-proxy-ddzdx 1/1 Running 0 14m
kube-proxy-nwhzt 1/1 Running 0 14m
kube-proxy-p64rw 1/1 Running 0 13m
kube-scheduler-k8s-master 1/1 Running 0 13m
If a node is still NotReady after all of the above, delete it on the Master, reset it, and re-join it
# Delete the node on the master
[root@k8s-master ~]$ kubectl delete node k8s-node01
# Then reset the affected node (run on k8s-node01 itself, not on the master)
[root@k8s-node01 ~]$ kubeadm reset
# After the reset, join again
[root@k8s-node01 ~]$ kubeadm join 192.168.88.10:6443 --token bgoz5x.qift7w4fue97pccu \
--discovery-token-ca-cert-hash sha256:b9216c0d46e4a86cddc2800ba062d0f8c2cbbe1894eb2822840161efe826f5f0
Check the control-plane component status
[root@k8s-master ~]$ kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Unhealthy Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager Unhealthy Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
# Some components report Unhealthy
# Check whether the kube-scheduler and kube-controller-manager configs disable the insecure port:
# in /etc/kubernetes/manifests, comment out the --port=0 line in both kube-scheduler.yaml and
# kube-controller-manager.yaml (a sketch follows), then restart kubelet
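A hedged sketch of that edit, assuming the manifests contain a literal `- --port=0` line:
[root@k8s-master ~]$ sed -i 's/- --port=0/#- --port=0/' /etc/kubernetes/manifests/kube-scheduler.yaml
[root@k8s-master ~]$ sed -i 's/- --port=0/#- --port=0/' /etc/kubernetes/manifests/kube-controller-manager.yaml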
# Restart kubelet on both the master and the nodes
[root@k8s-master ~]$ systemctl restart kubelet
2.8 Testing the Kubernetes cluster
Create a pod in the cluster, expose a port, and verify it is reachable.
# Create an nginx deployment (this pulls the nginx image from the internet and can be slow)
[root@k8s-master ~]$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
# This creates a Deployment controller named nginx
# Alternatively, start the container directly:
#[root@k8s-master ~]$ kubectl run nginx --image=nginx --port=80
# Expose the port
[root@k8s-master ~]$ kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
# List the pods
#[root@k8s-master ~]$ kubectl get pod
#NAME READY STATUS RESTARTS AGE
#nginx 1/1 Running 0 9m51s
#nginx-f89759699-ljq5c 1/1 Running 0 2m39s
# Delete a pod
#[root@k8s-master ~]$ kubectl delete pod nginx-f89759699-ljq5c
# Check the exposed port: container port 80 is mapped to NodePort 31745
[root@k8s-master ~]$ kubectl get pods,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-554b9c67f9-wf5lm 1/1 Running 0 24s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.1.0.1 <none> 443/TCP 39m
service/nginx NodePort 10.1.224.251 <none> 80:31745/TCP 9
Access URL: http://<Node IP>:<NodePort>, in this example http://192.168.88.11:31745
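A quick reachability check from the command line (substitute your own node IP and NodePort):
# nginx should answer with HTTP/1.1 200 OK
[root@k8s-master ~]$ curl -I http://192.168.88.11:31745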
2.9 Deploying the Dashboard
[root@k8s-master ~]$ wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
[root@k8s-master ~]$ vim kubernetes-dashboard.yaml
# Changes to make:
109 spec:
110 containers:
111 - name: kubernetes-dashboard
112 image: lizhenliang/kubernetes-dashboard-amd64:v1.10.1 # change this line
......
157 spec:
158 type: NodePort # add this line
159 ports:
160 - port: 443
161 targetPort: 8443
162 nodePort: 30001 # add this line
163 selector:
164 k8s-app: kubernetes-dashboard
[root@k8s-master ~]$ kubectl apply -f kubernetes-dashboard.yaml
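Before opening the browser, you can confirm the dashboard pod and its NodePort service exist (the v1.10.1 manifest deploys into kube-system):
[root@k8s-master ~]$ kubectl get pods,svc -n kube-system | grep dashboard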
In a browser, open: https://NodeIP:30001
Create a service account and bind it to the built-in cluster-admin cluster role:
[root@k8s-master ~]$ kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@k8s-master ~]$ kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin \
--serviceaccount=kube-system:dashboard-admin
[root@k8s-master ~]$ kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name: dashboard-admin-token-d9jh2
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: 4aa1906e-17aa-4880-b848-8b3959483323
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tZDlqaDIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNGFhMTkwNmUtMTdhYS00ODgwLWI4NDgtOGIzOTU5NDgzMzIzIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.OkF6h7tVQqmNJniCHJhY02G6u6dRg0V8PTiF8xvMuJJUphLyWlWctgmplM4kjKVZo0fZkAthL7WAV5p_AwAuj4LMfo1X5IpxUomp4YZyhqgsBM0A2ksWoKoLDjbizFwOty8TylWlsX1xcJXZjmP9OvNgjjSq5J90N5PnxYIIgwAMP3fawTP7kUXxz5WhJo-ogCijJCFyYBHoqHrgAbk9pusI8DpGTNIZxBMxkwPPwFwzNCOfKhD0c8HjhNeliKsOYLryZObRdmTQXmxsDfxynTKsRxv_EPQb99yW9GXJPQL0OwpYb4b164CFv857ENitvvKEOU6y55P9hFkuQuAJdQ