First of all, many thanks to blackpiglet and his post 《CKA考试心得》 (CKA exam notes), which helped me quickly get up to speed on what to watch out for when preparing for the CKA. For the exam guide and logistics, go read his article.

I passed the CKA exam on October 23, 2020, and the whole process went fairly smoothly. I never got around to writing it up until now, so today I'd like to share the problems I hit and how I solved them.

During the exam, my Chrome browser stopped responding at one point. Whatever happens, do not refresh the page; just keep waiting. I waited for more than ten minutes…
1. Create a ClusterRole covering the resources deployments, statefulsets, and daemonsets with the verb create; create a ServiceAccount named sa in the namespace cka; bind the ClusterRole to that ServiceAccount
Related official documentation
# Approach:
# copy the rules straight out of the cluster's built-in ClusterRole admin YAML.
# Note that resources such as deployments have a non-empty apiGroups entry: apps.
# A ClusterRole has no namespace attribute.
kubectl get clusterrole admin -o yaml > clusterrole.yaml
cat clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: clusterrole
rules:
- apiGroups:
  - apps
  resources:
  - daemonsets
  - deployments
  - statefulsets
  verbs:
  - create
kubectl create -f clusterrole.yaml
kubectl create serviceaccount sa -n cka
kubectl create clusterrolebinding bind --clusterrole=clusterrole --serviceaccount=cka:sa
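A quick self-check of the binding is cheap. This is a minimal sketch: kubectl impersonates a ServiceAccount via the subject name `system:serviceaccount:<namespace>:<name>`, which can be built like this (the `kubectl auth can-i` calls only make sense on the exam cluster, so they are left commented):

```shell
# build the impersonation subject for the ServiceAccount created above
ns=cka
sa=sa
subject="system:serviceaccount:${ns}:${sa}"
echo "$subject"   # -> system:serviceaccount:cka:sa

# on the exam cluster these should answer "yes":
# kubectl auth can-i create deployments --as="$subject"
# kubectl auth can-i create statefulsets --as="$subject"
```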
2. Create a multi-container pod with the containers nginx, redis, and tomcat
# Approach: dry-run a pod with -o yaml into a file, then edit that file directly
kubectl run multi-pod --image=nginx --dry-run=client -o yaml > multi-pod.yaml
cat multi-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-pod
  labels:
    app: run
spec:
  containers:
  - name: nginx
    image: nginx
  - name: redis
    image: redis
  - name: tomcat
    image: tomcat
kubectl create -f multi-pod.yaml
kubectl get pods
3. Scale a deployment to replicas=5
kubectl scale deployment nginx-deployment --replicas=5
4. Create a NetworkPolicy that allows pods in the namespace cka to connect to port 80 of all pods in the namespace cka. Official doc: network-policies
# Approach: work out whether the task wants ingress or egress, then copy from the official docs.
# Note that spec.podSelector is a required field.
cat networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: networkpolicy
  namespace: cka
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
    ports:
    - protocol: TCP
      port: 80
  policyTypes:
  - Ingress
5. Find the nodes carrying a NoSchedule taint and write their number to the specified file; the file should contain only the number
# the task wants a count, not the matching lines, so pipe through wc -l
kubectl describe nodes | grep -i noschedule | wc -l > /<path>/<file>
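The counting step can be rehearsed offline. The node output below is a made-up excerpt, but the behavior of `grep -c` (printing the number of matching lines rather than the lines themselves) is exactly what the task file should end up containing:

```shell
# hypothetical excerpt of `kubectl describe nodes` output
sample='Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             <none>
Taints:             dedicated=gpu:NoSchedule'
count=$(printf '%s\n' "$sample" | grep -ci noschedule)
echo "$count"   # -> 2
```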
6. Kubernetes upgrade to 1.19.0. First switch to the given cluster context as instructed, then switch to the master node to run the upgrade. Note: the task only asks for the master to be upgraded; the worker nodes do not need it
# [kubeadm-upgrade](https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
apt update
apt-cache madison kubeadm
# upgrade kubeadm first -- either unhold/hold:
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm=1.19.x-00 && \
apt-mark hold kubeadm
# or, equivalently, in one step:
apt-get update && \
apt-get install -y --allow-change-held-packages kubeadm=1.19.x-00
kubeadm version
kubectl drain <cp-node-name> --ignore-daemonsets
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.19.x
kubectl uncordon <cp-node-name>
# then upgrade kubelet and kubectl the same way
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet=1.19.x-00 kubectl=1.19.x-00 && \
apt-mark hold kubelet kubectl
# or:
apt-get update && \
apt-get install -y --allow-change-held-packages kubelet=1.19.x-00 kubectl=1.19.x-00
sudo systemctl daemon-reload
sudo systemctl restart kubelet
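One error-prone step is copying the exact Debian package version out of `apt-cache madison` for the install commands above. The excerpt below is hypothetical, but the awk pipeline shows one way to pull out the newest 1.19 version string:

```shell
# hypothetical `apt-cache madison kubeadm` output (columns separated by '|')
sample='kubeadm |  1.19.0-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages
kubeadm |  1.18.10-00 | http://apt.kubernetes.io kubernetes-xenial/main amd64 Packages'
# take the first row matching 1.19 and strip spaces from the version column
ver=$(printf '%s\n' "$sample" | awk -F'|' '/1\.19\./ {gsub(/ /, "", $2); print $2; exit}')
echo "$ver"   # -> 1.19.0-00
```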
7. Find the pod consuming the most CPU among pods matching the label app=myapp, and write that pod's name to the specified file
kubectl top pods -l app=myapp --sort-by=cpu
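With `--sort-by=cpu` the heaviest consumer is listed first, so the answer is the first column of the first data row. The output below is invented, but the awk call (skip the header with NR==2, print column 1) is the part worth remembering; on the exam, redirect the result to the file the task names:

```shell
# hypothetical `kubectl top pods -l app=myapp --sort-by=cpu` output
sample='NAME      CPU(cores)   MEMORY(bytes)
myapp-1   250m         64Mi
myapp-2   100m         32Mi'
top_pod=$(printf '%s\n' "$sample" | awk 'NR==2 {print $1}')
echo "$top_pod"   # -> myapp-1
```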
8. Create an Ingress with servicename hello and path /hello, such that curl internal_ip/hello succeeds
ingress
# Approach: check that the ingress controller pod is Running, otherwise curl cannot succeed; use kubectl get ingresses myingress to obtain the internal_ip and verify with curl
cat ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myingress
spec:
  rules:
  - http:
      paths:
      - pathType: Prefix
        path: /hello
        backend:
          service:
            name: hello
            port:
              number: 80
9. Add a port to the nginx container of an existing deployment, with name http and containerPort 80; create a svc exposing the container port http, with type NodePort
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - name: http
      containerPort: 80
kubectl expose deployment nginx --port=80 --target-port=http --type=NodePort
10.kubectl drain
kubectl drain node1 --ignore-daemonsets
11. etcd backup: save a snapshot to the specified file, with the crt/key locations and the endpoint given; afterwards, restore etcd from the specified backup file
etcd
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --cacert=/etc/kubernetes/pki/etcd/ca.crt snapshot save /<path>/<file>
# note: restore is an offline operation that writes a new data directory (see --data-dir),
# which etcd must then be pointed at
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key --cacert=/etc/kubernetes/pki/etcd/ca.crt snapshot restore /<path>/<file>
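The save and restore commands above differ only in their last two words, so under time pressure it can help to keep the shared connection flags in one variable (the paths are the ones given in the task; the `etcdctl` calls themselves only make sense on the exam host and are left commented):

```shell
# shared etcdctl connection flags (backslash-newlines inside the quotes are joined)
flags="--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key"
echo "ETCDCTL_API=3 etcdctl $flags snapshot save <file>"

# on the exam host:
# ETCDCTL_API=3 etcdctl $flags snapshot save <file>
# ETCDCTL_API=3 etcdctl $flags snapshot restore <file>
```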
12. Create a single-container pod that uses the nodeSelector disk=ssd
nodePod
cat nodeselector.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disk: ssd
13. Use a busybox sidecar to output the container's logs to the specified file
logging
cat logging.yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/1.log;
        echo "$(date) INFO $i" >> /var/log/2.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  # the sidecar itself, tailing the log file from the shared volume
  # (follows the official Logging Architecture example; use the file the task names)
  - name: count-log-1
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/1.log']
    volumeMounts:
    - name: varlog
      mountPath: /var/log
  volumes:
  - name: varlog
    emptyDir: {}
14. Create a PVC with capacity 10Mi and access mode RWO; then adjust the capacity to 70Mi (with NFS as the backing storage, dynamic expansion is not supported)
PVC
cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
kubectl create -f pvc.yaml
vim pvc.yaml   # change spec.resources.requests.storage from 10Mi to 70Mi
kubectl apply -f pvc.yaml
15. Create a PV using hostPath with access mode RWX; create a PVC bound to that PV; create a pod that uses the PVC
pvc
cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  # storageClassName: manual on both PV and PVC keeps them bound to each other,
  # as in the official example
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: "/mnt/data"
cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: task-pv-claim
  containers:
  - name: task-pv-container
    image: nginx
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: task-pv-storage
16. A node is NotReady
# ssh to the node and check the kubelet; it is usually simply stopped
ssh <node-name>
systemctl status kubelet
systemctl start kubelet
systemctl enable kubelet
17. Extract the log lines containing 'error' from a pod and write them to the specified file
kubectl logs <podname> | grep "error" > /<path>/<file>
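The same pipeline can be exercised offline; the log content below is invented, and /tmp/errors.txt stands in for the target file from the task:

```shell
# fake pod log; grep keeps only the lines containing "error",
# and the redirect writes them to the target file
printf 'ok: started\nerror: connection refused\nok: retrying\n' \
  | grep error > /tmp/errors.txt
cat /tmp/errors.txt   # -> error: connection refused
```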