gRPC is mostly used for internal service-to-service calls. In a Kubernetes cluster, I know of three ways to do service discovery and load balancing for it:
1. Expose the Service directly as a NodePort. The downside is that it consumes host ports; once there are many services and a large team, port usage gets messy, conflicts creep in, and when a service becomes unreachable the cause can be hard to track down.
2. Use nginx-ingress for service discovery and load balancing. The downside is that it requires a TLS certificate and only supports HTTPS access for gRPC.
3. Use traefik-ingress for service discovery and load balancing (strongly recommended; we already run it in production).
This post walks through gRPC load balancing with traefik-ingress.
Install the traefik-ingress environment
traefik-rbac.yaml:
## ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: kube-system
  name: traefik-ingress-controller
---
## ClusterRole
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups: [""]
    resources: ["services","endpoints","secrets"]
    verbs: ["get","list","watch"]
  - apiGroups: ["extensions","networking.k8s.io"]
    resources: ["ingresses","ingressclasses"]
    verbs: ["get","list","watch"]
  - apiGroups: ["extensions"]
    resources: ["ingresses/status"]
    verbs: ["update"]
  - apiGroups: ["traefik.containo.us"]
    resources: ["middlewares"]
    verbs: ["get","list","watch"]
  - apiGroups: ["traefik.containo.us"]
    resources: ["ingressroutes"]
    verbs: ["get","list","watch"]
  - apiGroups: ["traefik.containo.us"]
    resources: ["ingressroutetcps"]
    verbs: ["get","list","watch"]
  - apiGroups: ["traefik.containo.us"]
    resources: ["tlsoptions"]
    verbs: ["get","list","watch"]
  - apiGroups: ["traefik.containo.us"]
    resources: ["traefikservices"]
    verbs: ["get","list","watch"]
---
## ClusterRoleBinding
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: kube-system
traefik-config.yaml:
kind: ConfigMap
apiVersion: v1
metadata:
  name: traefik-config
  namespace: kube-system
data:
  traefik.yaml: |-
    ping:                        ## Enable the ping endpoint
      entryPoint: "web"
    serversTransport:
      insecureSkipVerify: true   ## Skip TLS verification of the proxied backend services
    api:
      insecure: true             ## Allow accessing the API over HTTP
      dashboard: true            ## Enable the dashboard
      debug: false               ## Debug mode disabled
    metrics:
      prometheus: ""             ## Expose Prometheus metrics with default settings
    entryPoints:
      web:
        address: ":18080"        ## HTTP entry point on port 18080, named web
      websecure:
        address: ":18443"        ## HTTPS entry point on port 18443, named websecure
      traefik:
        address: ":28080"        ## Dashboard port
    providers:
      kubernetesIngress: ""      ## Use the Kubernetes Ingress provider for routing rules
    log:
      level: INFO                ## Log level
      ##format: json             ## Log format
    accessLog:
      ##format: json             ## Access log format
      bufferingSize: 100         ## Number of access log lines to buffer
traefik-deploy.yaml:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    app: traefik
spec:
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      name: traefik
      labels:
        app: traefik
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      hostNetwork: true
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 1
      containers:
        - image: traefik:v2.3.7
          name: traefik-ingress-lb
          ports:
            - name: web
              containerPort: 18080
              hostPort: 18080
              protocol: TCP
            - name: websecure
              containerPort: 18443
              hostPort: 18443
              protocol: TCP
            - name: admin
              containerPort: 28080      ## Traefik dashboard port
              hostPort: 28080
              protocol: TCP
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              add:
                - NET_BIND_SERVICE
              drop:
                - ALL
            privileged: true
            readOnlyRootFilesystem: false
            runAsNonRoot: false
            runAsUser: 0
          args:
            - --configfile=/config/traefik.yaml
          volumeMounts:
            - mountPath: "/config"
              name: "config"
          readinessProbe:
            httpGet:
              path: /ping
              port: 18080
            initialDelaySeconds: 10
            periodSeconds: 5
      volumes:
        - name: config
          configMap:
            name: traefik-config
      tolerations:                ## Tolerate all taints so tainted nodes are still covered
        - operator: "Exists"
      nodeSelector:
        node-role.kubernetes.io/worker: "true"
        kubernetes.io/os: "linux"
That is three manifests in total; apply them to the cluster in order: traefik-rbac.yaml, traefik-config.yaml, traefik-deploy.yaml.
They can be applied either from the Rancher UI or with kubectl (a kubectl example follows).
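For the kubectl route, applying the three files looks roughly like this (assuming they sit in the current working directory):

kubectl apply -f traefik-rbac.yaml
kubectl apply -f traefik-config.yaml
kubectl apply -f traefik-deploy.yaml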
Once the installation is done, the Traefik dashboard is available at http://<any worker IP>:28080/
Prepare a test application:
Open VS2022 and create a new project.
As before, add a health check controller (a sketch follows the dependency list below).
Install the gRPC dependencies Grpc.Core and Grpc.Tools.
Add the Google.Protobuf dependency.
Add the Grpc.AspNetCore dependency.
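A minimal sketch of the health check controller (the class name is an assumption; the route only has to match the /health/status path that the readiness probe in deployment.yaml uses later):

// HealthController.cs - minimal health endpoint for the Kubernetes readiness probe
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("health")]
public class HealthController : ControllerBase
{
    // GET /health/status
    [HttpGet("status")]
    public IActionResult Status() => Ok("healthy");
}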
Define the gRPC contract in a .proto file:

syntax = "proto3";

package K8sGrpcDemo.Grpc;

import "google/protobuf/empty.proto";

service GrpcDemoService {
  rpc GetLocalIP (google.protobuf.Empty) returns (GetLocalIPResponse);
}

message GetLocalIPResponse {
  string localIP = 1; // IP of the instance that served the call
}
Set the .proto file's Build Action to "Protobuf compiler".
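Setting that Build Action adds a Protobuf item to the project file; in K8sGrpcDemo.csproj it would look roughly like this (the proto path and file name are assumptions):

<ItemGroup>
  <Protobuf Include="Protos\grpcdemo.proto" GrpcServices="Server" />
</ItemGroup>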
Implement the gRPC service: it returns the IP of the server handling the call.
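A sketch of the implementation (the class name GrpcDemoServiceImpl is an assumption; it derives from the base class generated from the proto above and returns the pod's IP so that load-balanced calls can be told apart):

// GrpcDemoServiceImpl.cs
using System.Linq;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;
using Google.Protobuf.WellKnownTypes;
using Grpc.Core;
using K8sGrpcDemo.Grpc;

public class GrpcDemoServiceImpl : GrpcDemoService.GrpcDemoServiceBase
{
    public override Task<GetLocalIPResponse> GetLocalIP(Empty request, ServerCallContext context)
    {
        // Pick the first non-loopback IPv4 address of the container, i.e. the pod IP.
        var ip = Dns.GetHostEntry(Dns.GetHostName()).AddressList
            .FirstOrDefault(a => a.AddressFamily == AddressFamily.InterNetwork)?
            .ToString() ?? "unknown";

        return Task.FromResult(new GetLocalIPResponse { LocalIP = ip });
    }
}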
Dockerfile
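A sketch of what the Dockerfile referenced by the pipeline can look like (base image tags and project layout are assumptions; here .NET 6 and a K8sGrpcDemo project folder):

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish K8sGrpcDemo/K8sGrpcDemo.csproj -c Release -o /app/publish

FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=build /app/publish .
# Port 80 serves the HTTP health check, port 8080 serves gRPC (h2c).
EXPOSE 80 8080
ENTRYPOINT ["dotnet", "K8sGrpcDemo.dll"]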
.rancher-pipeline.yml
stages:
  - name: build
    steps:
      - publishImageConfig:
          dockerfilePath: ./K8sGrpcDemo/Dockerfile
          buildContext: .
          tag: 192.168.21.4:8081/test/k8sgrpcdemo:${CICD_GIT_BRANCH}-${CICD_GIT_COMMIT}
          pushRemote: true
          registry: 192.168.21.4:8081
        env:
          PLUGIN_DEBUG: "true"
          PLUGIN_INSECURE: "true"
  - name: deploy
    steps:
      - applyYamlConfig:
          path: ./deployment.yaml
timeout: 60
notification: {}
deployment.yaml (I commented the parts that differ from the nginx-ingress setup in the previous post):
apiVersion: v1
kind: Service
metadata:
  name: k8sgrpcdemo
  namespace: default
  labels:
    app: k8sgrpcdemo
    service: k8sgrpcdemo
  annotations:
    traefik.ingress.kubernetes.io/service.serversscheme: h2c  # tell Traefik to talk h2c (cleartext HTTP/2) to the backend; this is what makes gRPC load balancing at the ingress work
spec:
  ports:
    - port: 80
      name: http
    - port: 18080          # Service port the ingress routes gRPC traffic to
      targetPort: 8080     # gRPC port exposed by the container
      protocol: TCP
      name: grpc
  selector:
    app: k8sgrpcdemo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8sgrpcdemo
  namespace: default
  labels:
    app: k8sgrpcdemo
    version: v1
spec:
  replicas: 3              # three replicas
  selector:
    matchLabels:
      app: k8sgrpcdemo
      version: v1
  template:
    metadata:
      labels:
        app: k8sgrpcdemo
        version: v1
    spec:
      containers:
        - name: k8sgrpcdemo
          image: 192.168.21.4:8081/test/k8sgrpcdemo:${CICD_GIT_BRANCH}-${CICD_GIT_COMMIT}
          readinessProbe:
            httpGet:
              path: /health/status
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 5
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: mydockerhub
The health check is served over HTTP on port 80, while gRPC is exposed on port 8080, so the application has to listen on both ports. That requires a change in Program.cs:
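A minimal sketch assuming .NET 6 minimal hosting (GrpcDemoServiceImpl is the service class sketched earlier): Kestrel listens on port 80 with HTTP/1.1 for the health check and on port 8080 with cleartext HTTP/2 (h2c) for gRPC.

// Program.cs
using Microsoft.AspNetCore.Server.Kestrel.Core;

var builder = WebApplication.CreateBuilder(args);

// Port 80: HTTP/1.1 for the /health/status readiness probe.
// Port 8080: h2c for gRPC, which Traefik proxies to.
builder.WebHost.ConfigureKestrel(options =>
{
    options.ListenAnyIP(80, o => o.Protocols = HttpProtocols.Http1);
    options.ListenAnyIP(8080, o => o.Protocols = HttpProtocols.Http2);
});

builder.Services.AddGrpc();
builder.Services.AddControllers();

var app = builder.Build();

app.MapControllers();                        // health check controller
app.MapGrpcService<GrpcDemoServiceImpl>();   // gRPC service

app.Run();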
At this point the project is ready. Commit the code, then deploy it through the Rancher pipeline (the project has already been created in the private registry).
Prepare the ingress file k8sgrpcdemo_ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: k8sgrpcdemo
  namespace: default
spec:
  rules:
    - host: www.k8sgrpcdemo.cn
      http:
        paths:
          - path: /                    # every HTTP request to www.k8sgrpcdemo.cn:80 goes to port 80 of the k8sgrpcdemo Service (this route is actually served by ingress-nginx)
            backend:
              serviceName: k8sgrpcdemo
              servicePort: 80
          - path: /K8sGrpcDemoProto    # the gRPC proto package name: gRPC request paths have the form /<package>.<Service>/<Method>, so this prefix must match the package declared in the .proto; requests on www.k8sgrpcdemo.cn:18080 end up on port 8080 of k8sgrpcdemo
            backend:
              serviceName: k8sgrpcdemo
              servicePort: 18080       # port proxied by ingress-traefik
Apply the ingress (via the Rancher UI or kubectl):
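With kubectl, assuming the file is in the current directory:

kubectl apply -f k8sgrpcdemo_ingress.yaml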
On my machine the hosts file (C:\Windows\System32\drivers\etc\hosts) has the entry: 192.168.21.233 www.k8sgrpcdemo.cn. Either of the other two worker IPs would work just as well, since ingress-traefik runs on all three workers.
Test gRPC access with a gRPC testing tool such as BloomRPC.
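Besides BloomRPC, a small console client can exercise the load balancing. A sketch, assuming the Grpc.Net.Client package and the client generated from the proto above (the hosts entry makes www.k8sgrpcdemo.cn resolve to a worker node):

// GrpcClientTest/Program.cs
using System;
using Google.Protobuf.WellKnownTypes;
using Grpc.Net.Client;
using K8sGrpcDemo.Grpc;

// Traefik's 18080 entry point speaks cleartext HTTP/2 (h2c), so plain http:// is used here.
using var channel = GrpcChannel.ForAddress("http://www.k8sgrpcdemo.cn:18080");
var client = new GrpcDemoService.GrpcDemoServiceClient(channel);

// Call a few times; the returned IPs should rotate across the three replicas,
// because Traefik balances each HTTP/2 request rather than each connection.
for (var i = 0; i < 6; i++)
{
    var reply = await client.GetLocalIPAsync(new Empty());
    Console.WriteLine(reply.LocalIP);
}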