Installing a Kubernetes 1.21.5 Cluster with KubeKey 1.2.1
Tags (space-separated): kubernetes series
1. Introduction to KubeKey
KubeKey is a Kubernetes cluster deployment tool developed in Go by the KubeSphere team. With KubeKey, you can install Kubernetes and KubeSphere together or separately, easily, efficiently, and flexibly.
KubeKey covers three installation scenarios:
Install only a Kubernetes cluster
Install Kubernetes and KubeSphere with a single command
Deploy KubeSphere on an existing Kubernetes cluster with ks-installer (see the example after the project link below)
Project address:
https://github.com/kubesphere/kubekey
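For the third scenario, a minimal sketch following KubeSphere's documented ks-installer flow (assuming KubeSphere v3.2.1 and an existing cluster that meets its prerequisites):
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.2.1/cluster-configuration.yaml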
2. Installing a single-node Kubernetes cluster with KubeKey
2.1 Install KubeKey
Install Docker on the node first; a minimal install sketch follows.
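Assuming CentOS 7 (the yum commands used elsewhere in this walkthrough suggest it), Docker can be installed from Docker's upstream repository:
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
systemctl enable --now docker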
Prepare KubeKey:
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.2.1 sh -
chmod +x kk
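Optionally confirm the binary works before bootstrapping:
./kk version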
2.2 Bootstrap the cluster with KubeKey
Install the required dependencies first:
yum install -y socat conntrack
./kk create cluster --with-kubernetes v1.21.5 --with-kubesphere v3.2.0
Watch the KubeSphere 3.2 installation progress:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
2.3 Enable a pluggable component
To enable the DevOps component, edit the ks-installer ClusterConfiguration and set devops.enabled to true; a sketch follows.
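A minimal sketch, following KubeSphere's pluggable-components procedure: open the ClusterConfiguration in an editor and flip the flag; ks-installer picks up the change and reconciles it.
kubectl -n kubesphere-system edit clusterconfiguration ks-installer
# in the editor, change:
# devops:
#   enabled: true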
Check the logs:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
Reset a KubeSphere web console password:
kubectl patch users <username> -p '{"spec":{"password":"<password>"}}' --type='merge' && kubectl annotate users <username> iam.kubesphere.io/password-encrypted-
Reset the admin login password:
kubectl patch users admin -p '{"spec":{"password":"admin"}}' --type='merge' && kubectl annotate users admin iam.kubesphere.io/password-encrypted-
3. Installing a multi-node cluster with KubeKey
3.1 Prepare the environment
Prepare three machines:
cat /etc/hosts
-----
172.16.10.11 flyfishsrvs01
172.16.10.12 flyfishsrvs02
172.16.10.13 flyfishsrvs03
-----
flyfishsrvs01 serves as the master; the other nodes are workers.
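If the hostnames are not yet set, make them match /etc/hosts on each node (the names here are this example's; substitute your own):
hostnamectl set-hostname flyfishsrvs01   # on 172.16.10.11
hostnamectl set-hostname flyfishsrvs02   # on 172.16.10.12
hostnamectl set-hostname flyfishsrvs03   # on 172.16.10.13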
3.2 Prepare KubeKey
Download KubeKey:
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.2.1 sh -
chmod +x kk
3.3 Prepare the cluster configuration file
./kk create config --with-kubernetes v1.21.5 --with-kubesphere v3.2.1
This generates the cluster configuration file config-sample.yaml; edit it to match the environment:
vim config-sample.yaml
---------------------
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: flyfishsrvs01, address: 172.16.10.11, internalAddress: 172.16.10.11, user: root, password: flyfish225}
  - {name: flyfishsrvs02, address: 172.16.10.12, internalAddress: 172.16.10.12, user: root, password: flyfish225}
  - {name: flyfishsrvs03, address: 172.16.10.13, internalAddress: 172.16.10.13, user: root, password: flyfish225}
  roleGroups:
    etcd:
    - flyfishsrvs01
    master:
    - flyfishsrvs01
    worker:
    - flyfishsrvs02
    - flyfishsrvs03
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.21.5
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  addons: []
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.2.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  local_registry: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #   resources: {}
    # controllerManager:
    #   resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: false
    containerruntime: docker
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    # operator:
    #   resources: {}
    # adapter:
    #   resources: {}
    # node_exporter:
    #   resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
  kubeedge:
    enabled: false
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress:
        - ""
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []
------------------------
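The hosts entries above log in with root passwords. KubeKey also supports key-based SSH: drop the password field and point privateKeyPath at a key already distributed to the nodes, e.g. (a hypothetical variant of the first host line):
- {name: flyfishsrvs01, address: 172.16.10.11, internalAddress: 172.16.10.11, user: root, privateKeyPath: "~/.ssh/id_rsa"}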
3.4 Create the cluster
Install the missing dependency packages on every node first:
yum install -y socat conntrack
./kk create cluster -f config-sample.yaml
3.5 Check the Kubernetes cluster status
kubectl get node
kubectl get pod -n kube-system
3.6 KubeSphere 3.2.1 installation progress
Watch the KubeSphere 3.2.1 installation progress:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
kubectl get pod -A
Log in to the web console at:
http://172.16.10.11:30880/
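When ks-installer completes, the tail of its log prints the console address along with the default account admin and initial password P@88w0rd; you are prompted to change it on first login. The password-reset commands from section 2.3 also apply here.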