Reference documents:

Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223 – 2 – Installing MAAS
Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223 – 3 – Installing Juju

Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223 – 5 – Installing OpenStack with a bundle, and creating networks and instances in OpenStack

I deployed Charmed Kubernetes with Juju on both a MAAS cloud and an OpenStack cloud; below is a brief comparison.

As the backing cloud, here in China, Juju mainly supports the following backends (a quick way to check them is sketched after the list):

maas
lxd
openstack
aws
azure
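
As a minimal check, assuming Juju 2.9, the client itself can list the clouds it knows about (`localhost` is Juju's built-in LXD cloud):

juju clouds --all           # list built-in, public, and user-added clouds
juju show-cloud localhost   # details of the built-in LXD cloud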

1 Usage scenarios:

1.1 Juju can deploy directly onto KVM virtual machines managed by MAAS. The KVM hosts can be provided in two ways, either LXD or virsh; LXD is probably more convenient. In this case the VMs can be treated as bare-metal hosts managed by MAAS.

1.2 Juju can also directly manage the bare-metal machines that MAAS manages; in this case LXD is not manageable within MAAS.

1.3 Juju first deploys OpenStack inside MAAS, then manages the deployed OpenStack itself; once it is configured, Juju can directly manage the VMs inside OpenStack.

2 Comparison:

2.1 Ease of use: 1.2 > 1.1 > 1.3

Note: 1.2 needs essentially no extra configuration; once MAAS and Juju are set up it is ready to use (see the sketch after the reference list).

See documents:

Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223 – 2 – Installing MAAS
Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223 – 3 – Installing Juju

Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223 – 5 – Installing OpenStack with a bundle, and creating networks and instances in OpenStack
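
A minimal sketch of that setup (the MAAS endpoint and the names below are assumptions, not values from this deployment):

# maas-cloud.yaml -- declare the MAAS region as a Juju cloud; endpoint is an assumption
clouds:
  maas-cloud:
    type: maas
    auth-types: [oauth1]
    endpoint: http://192.168.0.3:5240/MAAS

# Register the cloud and its credential (the MAAS API key), then bootstrap:
juju add-cloud --client maas-cloud maas-cloud.yaml
juju add-credential maas-cloud
juju bootstrap maas-cloud maas-controller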

1.1 requires configuring automatic VM provisioning, so it is slightly more involved than 1.2.

See documents:

Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223 – 2 – Installing MAAS
Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223 – 3 – Installing Juju

The following four chapters cover setting up automatic VM provisioning (a CLI sketch follows the list):

Multi-node OpenStack Charms Deployment Guide 0.0.1 – 31 – vm hosting-1

Multi-node OpenStack Charms Deployment Guide 0.0.1 – 32 – vm hosting-2 – VM host networking (snap/2.9/UI)

Multi-node OpenStack Charms Deployment Guide 0.0.1 – 33 – vm hosting-3 – Adding a VM host (snap/2.9/UI)

Multi-node OpenStack Charms Deployment Guide 0.0.1 – 34 – vm hosting-4 – VM host storage pools, and creating and deleting VMs (snap/2.9/UI)
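
In rough outline, those chapters come down to registering a machine as a MAAS VM host and composing VMs on it; a sketch with assumed addresses and IDs, using the `admin` CLI profile (on some MAAS versions the CLI endpoints are `pods`/`pod` instead of `vm-hosts`/`vm-host`):

# Register an LXD server as a VM host:
maas admin vm-hosts create type=lxd power_address=https://192.168.0.10:8443 project=maas
# Or register a libvirt/virsh host instead:
maas admin vm-hosts create type=virsh power_address=qemu+ssh://ubuntu@192.168.0.10/system
# Compose a VM on VM host 1 (id is hypothetical); it then appears as a MAAS machine:
maas admin vm-host compose 1 cores=4 memory=8192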

Then openstack-base or Charmed Kubernetes can be deployed on the VMs (see the sketch after these references):

Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223 – 5 – Installing OpenStack with a bundle, and creating networks and instances in OpenStack

Deploying k8s in a juju+maas environment on Ubuntu 20.04 – 2 – Deploying Charmed Kubernetes #679
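
The deployment step itself is a single bundle deploy once a controller is bootstrapped; a minimal sketch:

juju add-model k8s
juju deploy charmed-kubernetes   # reference bundle from the charm store
# or, for the OpenStack comparison case:
juju add-model openstack
juju deploy openstack-base
watch -n 5 juju status           # follow progress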

1.3 is the most involved: first deploy openstack-base with juju+maas, then do the preparatory configuration on the deployed OpenStack, and then set up Charmed Kubernetes on top of openstack-base (a sketch of the cloud-registration step follows the reference list).

Reference documents:

Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223 – 2 – Installing MAAS
Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223 – 3 – Installing Juju
Multi-node OpenStack Charms Deployment Guide 0.0.1.dev223 – 5 – Installing OpenStack with a bundle, and creating networks and instances in OpenStack
Multi-node OpenStack Charms Deployment Guide 0.0.1.dev – 41 – Configuring openstack-base-73 as a Juju-managed OpenStack cloud

Multi-node OpenStack Charms Deployment Guide 0.0.1.dev – 42 – Deploying the bundle openstack-base-78, single-NIC OpenStack networking, and a warning: never start a hostname with a digit

Multi-node OpenStack Charms Deployment Guide 0.0.1.dev – 43 – Deploying Charmed k8s on OpenStack with Juju

Multi-node OpenStack Charms Deployment Guide 0.0.1.dev – 44 – Configuration needed to access VMs in OpenStack, and accessing Kubernetes deployed on OpenStack
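
The key step in 1.3 is teaching Juju about the freshly deployed OpenStack. A sketch, assuming a Keystone endpoint of http://192.168.0.20:5000/v3 and a hypothetical Glance image id; the cloud and controller names match the juju status output further below:

# openstack-cloud.yaml -- endpoint is an assumption; region name is from this deployment
clouds:
  openstack-cloud:
    type: openstack
    auth-types: [userpass]
    regions:
      RegionOne:
        endpoint: http://192.168.0.20:5000/v3

juju add-cloud --client openstack-cloud openstack-cloud.yaml
juju add-credential openstack-cloud   # username/password/project from openstack-base
# Juju needs simplestreams metadata for the focal image uploaded to Glance:
juju metadata generate-image -d ~/simplestreams -i <glance-image-id> -s focal \
    -r RegionOne -u http://192.168.0.20:5000/v3
juju bootstrap openstack-cloud openstack-cloud-regionone --metadata-source ~/simplestreams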

2.2 Deployment flexibility

1.2 > 1.1 > 1.3

1.2 can conveniently spin up many VMs, is quite flexible to use, and, combined with constraints, makes it easy to compose VMs that match a given requirement.
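
For example, a constraints string is enough to have MAAS allocate or compose a matching machine (the values are illustrative):

juju deploy etcd --constraints "cores=2 mem=4G root-disk=20G"
juju add-machine --constraints "cores=4 mem=8G"   # or request a machine first
juju machines                                     # inspect what MAAS allocated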

1.1 can also spin up many LXD containers, but it is not as easy to compose VMs that match requirements; a VM with special needs, such as the elastic charm, wants something like a dedicated bare-metal machine, which is not easy to provide.

At least three servers are needed as nodes. Any additional LXD containers that are needed have to be created on the bare-metal machines.

1.3 needs at least three physical machines as nodes to build on. After that, VMs can be built automatically on OpenStack very conveniently, using flavors as constraints.
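
On the OpenStack cloud an existing flavor can be requested directly through the instance-type constraint (the flavor name is illustrative):

juju deploy kubernetes-worker --constraints "instance-type=m1.large"
# or let Juju pick a flavor that satisfies resource requests:
juju deploy kubernetes-worker --constraints "cores=4 mem=8G"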

2.3 Availability

1.2 = 1.1 > 1.3

The VM availability of 1.1 and 1.2 is about the same, but in 1.3 the Kubernetes deployed on OpenStack has too much system overhead and its availability is somewhat worse: nodes often sit in pending state and frequently have to be rebuilt by hand; presumably higher-spec hardware is needed.

juju status
Model  Controller                 Cloud/Region               Version  SLA          Timestamp
k8s    openstack-cloud-regionone  openstack-cloud/RegionOne  2.9.18   unsupported  09:56:55+08:00

App                    Version   Status   Scale  Charm                  Store       Channel   Rev  OS      Message
containerd             go1.13.8  active       4  containerd             charmstore  stable    178  ubuntu  Container runtime available
easyrsa                3.0.1     active       1  easyrsa                charmstore  stable    420  ubuntu  Certificate Authority connected.
etcd                   3.4.5     active       3  etcd                   charmstore  stable    634  ubuntu  Healthy with 3 known peers
flannel                0.11.0    active       4  flannel                charmstore  stable    597  ubuntu  Flannel subnet 10.1.19.1/24
kubeapi-load-balancer  1.18.0    active       1  kubeapi-load-balancer  charmstore  stable    844  ubuntu  Loadbalancer ready.
kubernetes-master      1.22.4    active       2  kubernetes-master      charmstore  stable   1078  ubuntu  Kubernetes master running.
kubernetes-worker                waiting    2/3  kubernetes-worker      charmstore  stable    816  ubuntu  waiting for machine
openstack-integrator   xena      active       1  openstack-integrator   charmstore  stable    182  ubuntu  Ready

Unit                      Workload  Agent       Machine  Public address  Ports             Message
easyrsa/0*                active    idle        0        192.168.0.135                     Certificate Authority connected.
etcd/0*                   active    idle        1        192.168.0.107   2379/tcp          Healthy with 3 known peers
etcd/1                    active    idle        2        192.168.0.177   2379/tcp          Healthy with 3 known peers
etcd/3                    active    idle        12       192.168.0.122   2379/tcp          Healthy with 3 known peers
kubeapi-load-balancer/0*  active    idle        4        192.168.0.11    443/tcp,6443/tcp  Loadbalancer ready.
kubernetes-master/0*      active    idle        5        192.168.0.54    6443/tcp          Kubernetes master running.
  containerd/1            active    idle                 192.168.0.54                      Container runtime available
  flannel/1               active    idle                 192.168.0.54                      Flannel subnet 10.1.71.1/24
kubernetes-master/1       active    idle        6        192.168.0.178   6443/tcp          Kubernetes master running.
  containerd/3            active    idle                 192.168.0.178                     Container runtime available
  flannel/3               active    idle                 192.168.0.178                     Flannel subnet 10.1.31.1/24
kubernetes-worker/0       active    idle        7        192.168.0.90    80/tcp,443/tcp    Kubernetes worker running.
  containerd/2            active    idle                 192.168.0.90                      Container runtime available
  flannel/2               active    idle                 192.168.0.90                      Flannel subnet 10.1.66.1/24
kubernetes-worker/1*      active    idle        8        192.168.0.91    80/tcp,443/tcp    Kubernetes worker running.
  containerd/0*           active    idle                 192.168.0.91                      Container runtime available
  flannel/0*              active    idle                 192.168.0.91                      Flannel subnet 10.1.19.1/24
kubernetes-worker/5       waiting   allocating  14                                         waiting for machine
openstack-integrator/0*   active    idle        10       192.168.0.128                     Ready

Machine  State    DNS            Inst id                               Series  AZ    Message
0        started  192.168.0.135  4eff153f-2099-433c-b56b-2f4187bc62ac  focal   nova  ACTIVE
1        started  192.168.0.107  d0cd12e6-c4b5-44dd-85ea-158840197bb6  focal   nova  ACTIVE
2        started  192.168.0.177  a3d9c148-bfbd-4476-89a0-264bd12fbf2e  focal   nova  ACTIVE
4        started  192.168.0.11   66762751-e9d9-4c52-a5e0-7b41b9873302  focal   nova  ACTIVE
5        started  192.168.0.54   fc488c25-ae44-4edd-aa1a-5676081df950  focal   nova  ACTIVE
6        started  192.168.0.178  8bbc21a0-2967-4b55-a4ec-c68116943640  focal   nova  ACTIVE
7        started  192.168.0.90   b68734e3-2dcf-4008-9ec5-e957014b4a63  focal   nova  ACTIVE
8        started  192.168.0.91   ef3898a4-2524-4967-b023-2a6754556aec  focal   nova  ACTIVE
10       started  192.168.0.128  94857d4b-2513-411d-b39b-06d35bdfe63a  focal   nova  ACTIVE
11       pending  192.168.0.64   0ada8965-1b72-4c6c-8476-83f9d7f649f4  focal   nova  ACTIVE
12       started  192.168.0.122  a16ddcae-e929-49b8-9574-ab499a4be459  focal   nova  ACTIVE
13       pending  192.168.0.41   f282e8af-f571-4792-9ef4-f4388cb3c059  focal   nova  ACTIVE
14       pending                 178d5ced-fd46-4249-b1f8-32ebdea7ad99  focal   nova  instance "178d5ced-fd46-4249-b1f8-32ebdea7ad99" has status BUILD, wait 10 seconds before retry, attempt 2
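
When a worker is stuck like machine 14 above, I rebuild it by hand; roughly:

juju remove-unit kubernetes-worker/5   # drop the unit stuck in allocating
juju remove-machine 14 --force         # release the never-started instance
juju add-unit kubernetes-worker        # schedule a fresh worker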

2.4 Ease of acquisition

It is hard to say whether it is more convenient to deploy your own MAAS+Juju system or to buy AWS or Azure. It depends on the specific situation.

2.5 Comfort of use

The bare-metal cloud is definitely the most pleasant to use: once the network cabling and power are done in the server room and the IPMI IP address, username, and password are configured, you basically never have to go back to the server room.

You must have a power management module (IPMI/BMC).
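
The power module is what MAAS drives for enlisting, commissioning, and deploying; a sketch of registering a machine with its IPMI BMC (all values are hypothetical):

maas admin machines create \
    architecture=amd64/generic \
    mac_addresses=52:54:00:aa:bb:cc \
    power_type=ipmi \
    power_parameters_power_address=192.168.0.201 \
    power_parameters_power_user=admin \
    power_parameters_power_pass=secret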