Table of Contents

  • Ingress && Ingress Controller
  • Ingress
  • Ingress: born to make up for NodePort's shortcomings
  • The relationship between Pods and Ingress
  • The Pod drift problem
  • The port management problem
  • Domain assignment and dynamic update problem
  • Ingress resource manifest walkthrough
  • Ingress Controller
  • Deploying the Ingress Controller
  • Example 1 (HTTP access)
  • Example 2 (HTTP access)
  • Summary
  • Deployment workflow
  • How it works
  • Building a TLS site (HTTPS access)
  • Routing to multiple services by URL
  • Customizing Ingress with annotations
  • Ingress Controller high availability

Ingress && Ingress Controller

Ingress

Ingress: born to make up for NodePort's shortcomings

Shortcomings of NodePort:

  • One port can only be used by a single Service, so ports have to be planned in advance
  • Only layer-4 load balancing is supported

The relationship between Pods and Ingress

  • They are associated through a Service
  • The Ingress Controller implements load balancing across the Pods
  • Supports layer-4 TCP/UDP and layer-7 HTTP


From what we have learned so far, Kubernetes currently offers three ways to expose a service: LoadBalancer Service, ExternalName, and NodePort Service. When we want to make in-cluster services accessible from outside, the following problems arise:

The Pod drift problem

Kubernetes has powerful replica control: whenever a replica (Pod) dies, a new one is automatically started on another machine, and replicas can be scaled dynamically. In plain terms, a Pod may appear on any node at any moment and may die on any node at any moment, so Pod IPs inevitably change as Pods are created and destroyed. How, then, do we expose these dynamic Pod IPs? This is where the Kubernetes Service mechanism helps: a Service selects a group of Pods by label, tracks their Pod IPs and load balances across them, so we only need to expose the Service IP. That is the NodePort mode: a port is opened on every node and traffic is forwarded to the internal Pod IPs, as shown in the figure below.
At this point the access URL is: http://nodeip:nodeport/

(figure: NodePort opens a port on every node and forwards to the Pod IPs)
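
For reference, a minimal NodePort Service sketch (names and port numbers are illustrative, not taken from the examples later in this article):

apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport        # illustrative name
spec:
  type: NodePort
  selector:
    app: myapp                # must match the Pods' labels
  ports:
  - name: http
    port: 80                  # Service (cluster) port
    targetPort: 80            # container port
    nodePort: 30080           # port opened on every node; has to be planned per service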

The port management problem

The problem with exposing services via NodePort is that once services multiply, the number of ports NodePort opens on every node becomes enormous and hard to maintain. Could we instead use a single Nginx to forward traffic internally? As we know, Pods can communicate with each other, and a Pod can share the host's network namespace, which means that when it does, whatever the Pod listens on is a port on the Node itself. How do we implement this? A simple approach is to use a DaemonSet to run Nginx listening on port 80 on every Node and write the forwarding rules: since Nginx is bound to the host's port 80 (just like NodePort) and is itself inside the cluster, it can forward directly to the corresponding Service IPs, as shown in the figure below.

(figure: an Nginx Pod on every node forwarding to Service IPs)
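
A minimal sketch of that idea, assuming a plain nginx image; hostNetwork makes the container listen directly on the node's port 80 (the Ingress Controller deployed later does essentially this, but regenerates its configuration automatically):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: edge-nginx            # illustrative name
spec:
  selector:
    matchLabels:
      app: edge-nginx
  template:
    metadata:
      labels:
        app: edge-nginx
    spec:
      hostNetwork: true       # share the node's network namespace, so nginx binds the node's port 80
      containers:
      - name: nginx
        image: nginx:1.17-alpine
        ports:
        - containerPort: 80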

Domain assignment and dynamic update problem

With the approach above, the Nginx Pod seems to solve the problem, but there is a big flaw: how do we modify the Nginx configuration every time a new service is added? We know that with Nginx we can distinguish services by virtual-host domain name, define a separate load-balancing pool for each service with upstream, and reverse proxy with location; in day-to-day use you only need to edit nginx.conf. So how can this kind of routing be achieved in Kubernetes?

Suppose the backend initially has only one service, ecshop, and later the bbs and member services are added. How do we get these two services into the Nginx Pod's routing? This is where Ingress comes in. Leaving the Nginx above aside, Ingress consists of two main components: the Ingress Controller and the Ingress.


  • Ingress: simply put, where you previously edited the Nginx configuration to map each domain to its Service, that action is now abstracted into an Ingress object. You create it with YAML, and instead of touching Nginx you just edit the YAML and create/update it. Which raises the question: who takes care of Nginx?
  • Ingress Controller: this is what takes care of Nginx. The Ingress Controller talks to the Kubernetes API, dynamically watches for changes to Ingress rules in the cluster, reads them, renders an Nginx configuration from its own template, writes it into the Nginx Pod, and finally reloads Nginx. The workflow is shown below.

(figure: Ingress Controller workflow)

In fact, Ingress is itself one of the standard Kubernetes API resource types: it is simply a set of rules that forward requests to specified Service resources based on DNS names (hosts) or URL paths, and it is used to publish services by forwarding request traffic from outside the cluster to the inside. What we need to understand is that an Ingress resource cannot move traffic by itself; it is merely a collection of rules. Those rules need another component to do the work, one that listens on a socket and routes traffic according to the matching rules. The component that listens on a socket on behalf of Ingress resources and forwards the traffic is the Ingress Controller.

PS: Unlike the Deployment controller, the Ingress controller does not run as part of kube-controller-manager. It is just an add-on to the Kubernetes cluster, similar to CoreDNS, and has to be deployed on the cluster separately.

Ingress resource manifest walkthrough

~]# kubectl explain ingress
KIND:     Ingress
VERSION:  extensions/v1beta1

DESCRIPTION:
     Ingress is a collection of rules that allow inbound connections to reach
     the endpoints defined by a backend. An Ingress can be configured to give
     services externally-reachable urls, load balance traffic, terminate SSL,
     offer name based virtual hosting etc. DEPRECATED - This group version of
     Ingress is deprecated by networking.k8s.io/v1beta1 Ingress. See the release
     notes for more information.

FIELDS:
   apiVersion	<string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind	<string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata	<Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   spec	<Object>
     Spec is the desired state of the Ingress. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

   status	<Object>
     Status is the current state of the Ingress. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

.spec.rules: defines the list of forwarding rules for the current Ingress resource. Rules are defined under rules; when no rule matches, all traffic is forwarded to the default backend defined by backend.

.spec.backend: the default backend, used to serve requests that do not match any rule. When defining an Ingress, at least one of backend or rules must be defined; this field gives the load balancer a global default backend.

.spec.tls: TLS configuration. Currently TLS can only be served on the default port 443; if the list members need to point to different hosts, the SNI TLS extension is required to support that.

.backend.serviceName and .backend.servicePort: specify, respectively, the name and port of the backend target Service resource that traffic is forwarded to.

The spec.rules object is a list of host rules configured for the Ingress resource; these host rules map a URL on a given host to the related backend Service object. The format is as follows:

spec:
  rules:
  - host: <string>
    http:
      paths:
      - path:
        backend:
          serviceName: <string>
          servicePort: <string>

Note that the .spec.rules.host field currently does not support IP addresses, nor the IP:Port format; leaving the field empty matches all host names.
The tls object consists of two nested fields and is only used when defining forwarding rules for TLS hosts.

  • .tls.hosts: a list of host name strings covered by the TLS certificate being used; the host names used here must therefore match the names in the tlsSecret.
  • .tls.secretName: the name of the Secret object referenced for the TLS session; in scenarios where multi-host routing is done via SNI this field is optional. (Both fields are shown together in the fragment below.)
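
Put together, a minimal .spec.tls fragment looks like this sketch (host and secret names are placeholders; the complete, working example appears in the TLS section later in this article):

spec:
  tls:
  - hosts:
    - tomcat.gms.com               # must match a name covered by the certificate stored in the secret
    secretName: tomcat-ingress-secret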

Ingress Controller

For Ingress resources to work, the cluster must run an Ingress Controller (the load-balancing implementation).

So, to expose your application through Ingress, the process roughly breaks down into:

  1. Deploy an Ingress Controller
  2. Create the Ingress rules

Deploying the Ingress Controller


There are many Ingress Controller implementations; here we use the community-maintained Nginx controller.

GitHub: https://github.com/kubernetes/ingress-nginx

Deployment docs: https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml

Notes:

  • Prepare the image in advance, or change the image address to a domestic mirror
  • Use the host network: hostNetwork: true (see the excerpt after the commands below)
# kubectl apply -f ingress-controller.yaml
# kubectl get pods -n ingress-nginx
NAME                             READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-5r4wg   1/1     Running   0          13s
nginx-ingress-controller-x7xdf   1/1     Running   0          13s
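
For the hostNetwork note above, a sketch of the change inside mandatory.yaml; only the relevant part of the controller's Pod template is shown, and the mirror address is a placeholder to replace with a registry reachable from your nodes:

spec:
  template:
    spec:
      hostNetwork: true       # bind the controller's 80/443 directly on each node
      containers:
      - name: nginx-ingress-controller
        image: <your-mirror>/nginx-ingress-controller:0.30.0   # replace with a reachable mirror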

Ports 80 and 443 receive the traffic coming from outside the cluster for the applications and forward it to the corresponding Pods.

During installation, Kubernetes needs to pull the required images from the k8s.gcr.io registry, but network restrictions in China often prevent the pull from succeeding.
The docker.io registry mirrors Google's containers, and the related images can be pulled with the following commands:

[root@k8s-node01 ~]# docker pull mirrorgooglecontainers/defaultbackend-amd64:1.5
1.5: Pulling from mirrorgooglecontainers/defaultbackend-amd64
9ecb1e82bb4a: Pull complete 
Digest: sha256:d08e129315e2dd093abfc16283cee19eabc18ae6b7cb8c2e26cc26888c6fc56a
Status: Downloaded newer image for mirrorgooglecontainers/defaultbackend-amd64:1.5

[root@k8s-node01 ~]# docker tag mirrorgooglecontainers/defaultbackend-amd64:1.5 k8s.gcr.io/defaultbackend-amd64:1.5
[root@k8s-node01 ~]# docker image ls
REPOSITORY                                    TAG                 IMAGE ID            CREATED             SIZE
mirrorgooglecontainers/defaultbackend-amd64   1.5                 b5af743e5984        34 hours ago        5.13MB
k8s.gcr.io/defaultbackend-amd64               1.5                 b5af743e5984        34 hours ago        5.13MB

Other mainstream controllers:

Traefik: an HTTP reverse proxy and load-balancing tool

Istio: service governance, controls inbound traffic

Example 1 (HTTP access)

Deploy the backend service

~]# vim ingress-myapp.yaml 
#Create a Service named myapp
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  selector:
    app: myapp
    release: canary
  ports:
  - name: http
    targetPort: 80
    port: 80
---
#Create the Pods for the backend service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-backend-pod
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      release: canary
  template:
    metadata:
      labels:
        app: myapp
        release: canary
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        ports:
        - name: http
          containerPort: 80
~]# kubectl apply -f ingress-myapp.yaml
service/myapp configured
deployment.apps/myapp-backend-pod created

Check the newly created backend Pods

~]# kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
myapp-backend-pod-798dc9b584-8d25s   1/1     Running   0          92s
myapp-backend-pod-798dc9b584-pxcsz   1/1     Running   0          92s
myapp-backend-pod-798dc9b584-w9g24   1/1     Running   0          93s

The ingress-controller is what provides the service to the outside world, so we still need to manually create a Service for the ingress-controller to receive traffic from outside the cluster, as follows.
If the Ingress controller was created from mandatory.yaml, this does not need to be done by hand.

[root@k8s-master ingress]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml
[root@k8s-master ingress]# vim service-nodeport.yaml 
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
      nodePort: 30080
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
      nodePort: 30443
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx


Create the ingress-controller Service and test it

~]# kubectl apply -f service-nodeport.yaml 
service/ingress-nginx configured

~]# kubectl get svc -n ingress-nginx
NAME            TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   NodePort   10.0.0.68    <none>        80:30080/TCP,443:30443/TCP   14d

Access http://192.168.121.82:30080
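
A quick check from outside the cluster; since no Ingress rule has been created yet, the request should be answered by the default backend with a 404:

~]# curl -i http://192.168.121.82:30080/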


Deploy the Ingress

~]# vim ingress-myapp2.yaml
apiVersion: extensions/v1beta1      #API version
kind: Ingress       #manifest kind
metadata:           #metadata
  name: ingress-myapp    #name of the Ingress
  namespace: default     #namespace it belongs to
  annotations:           #annotations
    kubernetes.io/ingress.class: "nginx"
spec:      #spec
  rules:   #backend forwarding rules
  - host: myapp.gms.com  #forward by domain name
    http:
      paths:
      - path:       #access path; change it to route by URL; empty defaults to "/"
        backend:    #backend Service
          serviceName: myapp
          servicePort: 80
          

~]# kubectl apply -f ingress-myapp2.yaml 
ingress.extensions/ingress-myapp created

~]# kubectl get ingress
NAME            HOSTS           ADDRESS   PORTS   AGE
ingress-myapp   myapp.gms.com             80      19s

View the details of ingress-myapp

~]# kubectl describe ingress ingress-myapp
Name:             ingress-myapp
Namespace:        default
Address:          
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host           Path  Backends
  ----           ----  --------
  myapp.gms.com  
                    myapp:80 (10.244.0.16:80,10.244.1.11:80,10.244.2.16:80)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx"},"name":"ingress-myapp","namespace":"default"},"spec":{"rules":[{"host":"myapp.gms.com","http":{"paths":[{"backend":{"serviceName":"myapp","servicePort":80},"path":null}]}}]}}

  kubernetes.io/ingress.class:  nginx
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  CREATE  53s   nginx-ingress-controller  Ingress default/ingress-myapp
  Normal  CREATE  53s   nginx-ingress-controller  Ingress default/ingress-myapp
  Normal  CREATE  53s   nginx-ingress-controller  Ingress default/ingress-myapp
  
  
~]# kubectl get pods -n ingress-nginx
NAME                             READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-84hw6   1/1     Running   0          14d
nginx-ingress-controller-nhm4p   1/1     Running   0          14d
nginx-ingress-controller-tdc69   1/1     Running   0          14d

Enter the nginx-ingress-controller and check whether the nginx configuration was injected

~]# kubectl exec -n ingress-nginx -it nginx-ingress-controller-84hw6 -- /bin/bash
www-data@k8s-node1:/etc/nginx$ cat nginx.conf
......
## start server myapp.gms.com
	server {
		server_name myapp.gms.com ;
		
		listen 80;
		
		set $proxy_upstream_name "-";
		
		location / {
			
			set $namespace      "default";
			set $ingress_name   "ingress-myapp";
			set $service_name   "myapp";
			set $service_port   "80";
			set $location_path  "/";
			
			rewrite_by_lua_block {
				
				balancer.rewrite()
				
			}
			
			log_by_lua_block {
				
				balancer.log()
				
				monitor.call()
			}
			
			port_in_redirect off;
......

Modify the local hosts file, then access the site.

Production: the myapp.gms.com domain is resolved by the registrar where you bought the domain, with an A record pointing to the public IP of a K8S Node (that Node must be running the Ingress controller).

Test environment: you can simulate DNS resolution with a hosts entry ("C:\Windows\System32\drivers\etc\hosts") pointing to the internal IP of a K8S Node. For example:

192.168.121.82 myapp.gms.com
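
Alternatively, instead of editing the hosts file, you can pass the Host header directly to the controller's NodePort (IP and port taken from the NodePort Service created above):

~]# curl -H "Host: myapp.gms.com" http://192.168.121.82:30080/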


Example 2 (HTTP access)

Write the Tomcat resource manifest

]# vim tomcat-demo.yaml 
apiVersion: v1
kind: Service
metadata:
  name: tomcat
  namespace: default
spec:
  selector:
    app: tomcat
    release: canary
  ports:
  - name: http
    targetPort: 8080
    port: 8080
  - name: ajp
    targetPort: 8009
    port: 8009
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat
      release: canary
  template:
    metadata:
      labels:
        app: tomcat
        release: canary
    spec:
      containers:
      - name: tomcat
        image: tomcat:8.5.34-jre8-alpine
        #this image is pulled from Docker Hub; check hub.docker.com in case the tag has changed
        ports:
        - name: http
          containerPort: 8080
        - name: ajp
          containerPort: 8009

~]# kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
tomcat-deploy-56c494fc77-27rgm       1/1     Running   0          2m49s
tomcat-deploy-56c494fc77-7drpr       1/1     Running   0          2m49s
tomcat-deploy-56c494fc77-mdzhx       1/1     Running   0          2m49s

Enter a Tomcat Pod to check that ports 8080 and 8009 are listening, then check the Tomcat Service

~]# kubectl exec tomcat-deploy-56c494fc77-27rgm -- netstat -tnl
netstat: /proc/net/tcp6: No such file or directory
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       
tcp        0      0 0.0.0.0:8080            0.0.0.0:*               LISTEN      
tcp        0      0 127.0.0.1:8005          0.0.0.0:*               LISTEN      
tcp        0      0 0.0.0.0:8009            0.0.0.0:*               LISTEN      

~]# kubectl get svc 
NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
kubernetes       ClusterIP   10.0.0.1     <none>        443/TCP             15d
myapp            ClusterIP   10.0.0.229   <none>        80/TCP              2d
myapp-headless   ClusterIP   None         <none>        80/TCP              47h
redis            ClusterIP   10.0.0.137   <none>        6379/TCP            2d
tomcat           ClusterIP   10.0.0.77    <none>        8080/TCP,8009/TCP   4m7s

Write the Ingress rule for Tomcat and create the Ingress resource

~]# vim ingress-tomcat.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tomcat    #name of the Ingress
  namespace: default     #namespace it belongs to
  annotations:           #annotations
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:   #backend forwarding rules
  - host: tomcat.gms.com  #forward by domain name
    http:
      paths:
      - path:       #access path; change it to route by URL; empty defaults to "/"
        backend:    #backend Service
          serviceName: tomcat
          servicePort: 8080
~]# kubectl apply -f ingress-tomcat.yaml 
ingress.extensions/tomcat created

Check the Ingress details

]# kubectl get ingress
NAME            HOSTS            ADDRESS   PORTS   AGE
ingress-myapp   myapp.gms.com              80      45m
tomcat          tomcat.gms.com             80      8s

~]# kubectl describe ingress
......
Name:             tomcat
Namespace:        default
Address:          
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host            Path  Backends
  ----            ----  --------
  tomcat.gms.com  
                     tomcat:8080 (10.244.0.17:8080,10.244.1.12:8080,10.244.2.17:8080)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx"},"name":"tomcat","namespace":"default"},"spec":{"rules":[{"host":"tomcat.gms.com","http":{"paths":[{"backend":{"serviceName":"tomcat","servicePort":8080},"path":null}]}}]}}

  kubernetes.io/ingress.class:  nginx
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  CREATE  37s   nginx-ingress-controller  Ingress default/tomcat
  Normal  CREATE  37s   nginx-ingress-controller  Ingress default/tomcat
  Normal  CREATE  37s   nginx-ingress-controller  Ingress default/tomcat

Test the access


Summary

Deployment workflow

From the deployment above, the process can be summarized as follows:

  1. Download the Ingress-controller YAML files and create a dedicated namespace for the Ingress-controller;
  2. Deploy the backend service, e.g. myapp, and expose it with a Service;
  3. Deploy the Ingress-controller's Service so that traffic from outside the cluster can reach it;
  4. Deploy the Ingress and define the rules that associate the Ingress-controller with the backend service's Pods.

A diagram of the resulting deployment:

(figure: overview of the finished deployment)

How it works

  1. The user creates an Ingress YAML and submits it to the apiserver (master)
  2. The ingress controller obtains the list of Pod information
  3. The nginx ingress controller is updated dynamically with the help of Lua
  4. Requests are proxied to the backend Pods via the upstream (bypassing iptables/ipvs)

Building a TLS site (HTTPS access)

Prepare a self-signed certificate

~]# openssl genrsa -out tls.key 2048
Generating RSA private key, 2048 bit long modulus
......+++
..........+++
e is 65537 (0x10001)

~]# openssl req -new -x509 -key tls.key -out tls.crt -subj /C=CN/ST=Beijing/L=Beijing/O=DevOps/CN=tomcat.gms.com

Store the certificate in a Secret

~]# kubectl create secret tls tomcat-ingress-secret --cert=tls.crt --key=tls.key
secret/tomcat-ingress-secret created
~]# kubectl get secret
NAME                    TYPE                                  DATA   AGE
default-token-pftqr     kubernetes.io/service-account-token   3      15d
tomcat-ingress-secret   kubernetes.io/tls                     2      7s
[root@k8s-master2 ~]# kubectl describe secret tomcat-ingress-secret
Name:         tomcat-ingress-secret
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/tls

Data
====
tls.crt:  1285 bytes
tls.key:  1675 bytes

Create the Ingress

~]# vim ingress-tomcat-tls.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-tomcat-tls
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - tomcat.gms.com
    secretName: tomcat-ingress-secret
  rules:
  - host: tomcat.gms.com  #forward by domain name
    http:
      paths:
      - path:       #access path; change it to route by URL; empty defaults to "/"
        backend:    #backend Service
          serviceName: tomcat
          servicePort: 8080

~]# kubectl apply -f ingress-tomcat-tls.yaml
ingress.extensions/ingress-tomcat-tls created

Verify

]# kubectl get ingress
NAME                 HOSTS            ADDRESS   PORTS     AGE
ingress-myapp        myapp.gms.com              80        6h3m
ingress-tomcat-tls   tomcat.gms.com             80, 443   8s
tomcat               tomcat.gms.com             80        5h18m

~]# kubectl describe ingress ingress-tomcat-tls
Name:             ingress-tomcat-tls
Namespace:        default
Address:          
Default backend:  default-http-backend:80 (<none>)
TLS:
  tomcat-ingress-secret terminates tomcat.gms.com
Rules:
  Host            Path  Backends
  ----            ----  --------
  tomcat.gms.com  
                     tomcat:8080 (10.244.0.17:8080,10.244.1.12:8080,10.244.2.17:8080)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration:  {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx"},"name":"ingress-tomcat-tls","namespace":"default"},"spec":{"rules":[{"host":"tomcat.gms.com","http":{"paths":[{"backend":{"serviceName":"tomcat","servicePort":8080},"path":null}]}}],"tls":[{"hosts":["tomcat.gms.com"],"secretName":"tomcat-ingress-secret"}]}}

  kubernetes.io/ingress.class:  nginx
Events:
  Type    Reason  Age   From                      Message
  ----    ------  ----  ----                      -------
  Normal  CREATE  20s   nginx-ingress-controller  Ingress default/ingress-tomcat-tls
  Normal  CREATE  20s   nginx-ingress-controller  Ingress default/ingress-tomcat-tls
  Normal  CREATE  20s   nginx-ingress-controller  Ingress default/ingress-tomcat-tls

Access test: https://tomcat.gms.com:30443
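
The same test from the command line; -k skips verification of the self-signed certificate and --resolve maps the domain to the node IP without touching the hosts file (the IP is the one used in the earlier examples):

~]# curl -k --resolve tomcat.gms.com:30443:192.168.121.82 https://tomcat.gms.com:30443/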


Routing to multiple services by URL

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: url-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foobar.ctnrs.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: service1
          servicePort: 80
  - host: foobar.ctnrs.com
    http:
      paths:
      - path: /bar
        backend:
          serviceName: service2
          servicePort: 80

Routing:

foobar.ctnrs.com -> 178.91.123.132 -> /foo    service1:80
                                      /bar    service2:80
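
Assuming the NodePort Service from example 1 is still in front of the controller, the two paths can be exercised with curl (node IP and port are illustrative):

~]# curl -H "Host: foobar.ctnrs.com" http://192.168.121.82:30080/foo   # routed to service1
~]# curl -H "Host: foobar.ctnrs.com" http://192.168.121.82:30080/bar   # routed to service2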

Name-based virtual hosting (same as examples 1 & 2)

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  rules:
  - host: foo.ctnrs.com
    http:
      paths:
      - backend:
          serviceName: service1
          servicePort: 80
  - host: bar.ctnrs.com
    http:
      paths:
      - backend:
          serviceName: service2
          servicePort: 80

Routing:

foo.ctnrs.com --|                 |-> service1:80
                | 178.91.123.132  |
bar.ctnrs.com --|                 |-> service2:80

Customizing Ingress with annotations

Reference: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md

HTTP: common Nginx parameters

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
     kubernetes.io/ingress.class: "nginx"
     nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
     nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
     nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
     nginx.ingress.kubernetes.io/proxy-body-size: "10m"
spec:
  rules:
  - host: example.ctnrs.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web
          servicePort: 80

HTTPS: by default, HTTP access is forced to redirect to HTTPS; setting the ssl-redirect annotation to 'false', as below, disables that redirect

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tls-example-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: 'false'
spec:
  tls:
  - hosts:
    - sslexample.ctnrs.com
    secretName: secret-tls
  rules:
    - host: sslexample.ctnrs.com
      http:
        paths:
        - path: /
          backend:
            serviceName: web
            servicePort: 80

Ingress Controller high availability

If the domain only resolves to a single Ingress controller, that is a single point of failure: once it goes down the service is unavailable. High availability is needed, and there are two common schemes:

(figure: the two high-availability schemes, hot standby on the left, HA cluster on the right)

Left: dual-node hot standby. Two Nodes are dedicated to running the Ingress controller, and keepalived runs them as master/backup. Users access the service through the VIP.

Right: HA cluster (recommended). A DaemonSet deploys the controller on every node, and a load balancer in front forwards requests to the multiple Ingress controllers behind it.
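
For the hot-standby option, a minimal keepalived.conf sketch for the MASTER node; the interface name, VIP and priority are assumptions to adapt, and the backup node uses state BACKUP with a lower priority:

vrrp_instance VI_1 {
    state MASTER
    interface eth0                  # NIC that carries the VIP (assumed name)
    virtual_router_id 51
    priority 100                    # backup node: a lower value, e.g. 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s-ingress
    }
    virtual_ipaddress {
        192.168.121.100             # assumed VIP that the domain's A record points to
    }
}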