Preface
This OpenStack deployment is a bit unusual: it runs on two VMs on a ZStack private cloud. Before deploying, I confirmed that ZStack supports nested virtualization (via the CPU flag), but ZStack's support engineers advised against building OpenStack in that environment, for two reasons: nested virtualization can cause problems when launching instances, and the performance penalty is high. Since this is only a lab environment for practicing an OpenStack install, those concerns can be set aside for now. All of the commands below were run in the private cloud environment. The final step of creating an instance failed with an error about being unable to bind a network interface, which I suspect is a networking problem caused by nested virtualization. The instance-creation commands at the end were therefore executed in an OpenStack environment built on VMware, and are shown as an example.
Minimal deployment
Identity service | keystone
Image service | glance
Placement service | placement
Compute service | nova
Networking service | neutron
After the minimal deployment is complete, it is also recommended to install the Dashboard service (horizon) and the Block Storage service (cinder).
Environment
1. Prepare two VMs
Node | CPU / RAM | Disk | Management NIC | Provider NIC
controller | 2 cores / 4 GB or 2 cores / 8 GB | 20 GB | ens:10.0.0.11 | ens:192.168.142.14
compute | 2 cores / 8 GB or 4 cores / 8 GB | 20 GB | ens:10.0.0.21 | ens:192.168.142.15
Check the OS release:
[root@openstack0 ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
Check the CPU, memory, and disk, and whether nested virtualization is supported:
[root@openstack0 ~]# cat /proc/meminfo
[root@openstack0 ~]# lsblk
[root@openstack0 ~]# cat /proc/cpuinfo | grep vmx
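The vmx flag is Intel-specific; on AMD hosts the equivalent flag is svm. A quick check that covers both (a minimal sketch):
[root@openstack0 ~]# grep -Ec '(vmx|svm)' /proc/cpuinfo    // a non-zero count means hardware virtualization is exposed to the VM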
Check the network interfaces:
[root@controller ~]# ip a
[root@compute01 ~]# ip a
2. Configure name resolution and set the hostnames
On the controller node
[root@openstack0 ~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
# controller
10.0.0.11 controller
# compute01
10.0.0.21 compute01
[root@openstack0 ~]# hostnamectl set-hostname controller
[root@openstack0 ~]# exec bash
On the compute node
[root@openstack1 ~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
# controller
10.0.0.11 controller
# compute01
10.0.0.21 compute01
[root@openstack1 ~]# hostnamectl set-hostname compute01
[root@openstack1 ~]# exec bash
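To confirm that name resolution works in both directions, a quick check (a sketch):
[root@controller ~]# ping -c 2 compute01
[root@compute01 ~]# ping -c 2 controller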
3. Turn off the firewalld service
On both the controller and compute nodes
Stop the service and also disable it at boot; otherwise firewalld will come back after the reboot in step 6.
# systemctl stop firewalld
# systemctl disable firewalld
# systemctl status firewalld
4. Set SELinux to permissive mode
On both the controller and compute nodes
# vim /etc/selinux/config
...
SELINUX=permissive
...
# setenforce 0
# sestatus
SELinux status: enabled
SELinuxfs mount: /sys/fs/selinux
SELinux root directory: /etc/selinux
Loaded policy name: targeted
Current mode: permissive
Mode from config file: permissive
Policy MLS status: enabled
Policy deny_unknown status: allowed
Max kernel policy version: 31
5. Configure the NTP service
On the controller node
- Install Chrony
[root@controller ~]# yum install chrony -y
- Edit chrony.conf and allow the other nodes to sync with the controller
[root@controller ~]# vim /etc/chrony.conf
...
# Allow NTP client access from local network.
#allow 192.168.0.0/16
allow 10.0.0.0/24
...
- Enable and start chronyd
[root@controller ~]# systemctl enable --now chronyd.service
[root@controller ~]# systemctl status chronyd.service
Notes on the configuration directives:
server: may be used multiple times to add time servers, in the form "server <address>". Any number of servers can be listed.
driftfile: records the rate at which the system clock gains or loses time in a file, so that after a restart chronyd can compensate the system clock and, where possible, refine the estimate using the time servers.
allow/deny: specifies a host, subnet, or network that is allowed or denied NTP access to the machine acting as the time server. Example: allow 192.168.0.10 / deny 192.168.2.10
rtcsync: enables a kernel mode in which the system time is copied to the real-time clock (RTC) every 11 minutes.
makestep: normally chronyd corrects any time offset gradually by slowing down or speeding up the clock, but in some cases the clock can drift so far that this adjustment would take a very long time. This directive forces chronyd to step the system clock when the adjustment is larger than a given threshold, but only if there have been no more clock updates since chronyd started than the specified limit (a negative value disables the limit).
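For reference, the stock CentOS chrony.conf ships with the makestep line below: step the clock if the offset is larger than 1 second, but only during the first 3 clock updates after startup.
makestep 1.0 3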
On the compute node
- Install Chrony
[root@compute01 ~]# yum install chrony -y
- Edit chrony.conf to sync with the controller
[root@compute01 ~]# vim /etc/chrony.conf
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server controller iburst
...
- Enable and start chronyd
[root@compute01 ~]# systemctl enable --now chronyd.service
[root@compute01 ~]# systemctl status chronyd.service
Verify
# chronyc sources
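On compute01, the controller entry should be prefixed with ^* once it has been selected as the synchronization source. chronyc tracking shows the current offset in more detail (a sketch):
[root@compute01 ~]# chronyc tracking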
6. YUM repositories
On the controller node
- Install the repository packages
[root@controller ~]# yum install centos-release-openstack-train
[root@controller ~]# yum install https://rdoproject.org/repos/rdo-release.rpm
- Upgrade packages (a reboot is recommended)
[root@controller ~]# yum upgrade -y
[root@controller ~]# reboot
yum upgrade and yum update do the same job, except that upgrade also processes obsoletes: packages that have been made obsolete by a newer replacement are removed, while plain update can keep them. For production environments, yum update is recommended, to avoid dependency problems caused by packages being replaced.
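On CentOS 7, obsoletes processing is enabled by default, which makes the two commands behave the same in practice; this can be checked in yum.conf (a sketch):
[root@controller ~]# grep -i obsoletes /etc/yum.conf    // obsoletes=1 means update also removes obsoleted packages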
- Install the OpenStack client (be careful not to install python3-openstackclient)
[root@controller ~]# yum install -y python-openstackclient
- Install the openstack-selinux package
[root@controller ~]# yum install -y openstack-selinux
On the compute node
[root@compute01 ~]# yum install centos-release-openstack-train -y
[root@compute01 ~]# yum install https://rdoproject.org/repos/rdo-release.rpm -y
[root@compute01 ~]# yum -y upgrade
[root@compute01 ~]# reboot
[root@compute01 ~]# yum install -y python-openstackclient
[root@compute01 ~]# yum install -y openstack-selinux
7. Install and configure the MariaDB database
On the controller node
- Install the database packages
[root@controller ~]# yum install -y mariadb mariadb-server python2-PyMySQL
- Edit the configuration file
Set bind-address to the controller node's management IP
[root@controller ~]# vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 10.0.0.11
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
- Start the database service and enable it at boot
[root@controller ~]# systemctl enable --now mariadb.service
[root@controller ~]# systemctl status mariadb.service
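A quick check that MariaDB is now listening on the management IP (a sketch):
[root@controller ~]# ss -tnlp | grep 3306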
- Initialize the database
This step sets the database root password; be sure to save it.
[root@controller ~]# mysql_secure_installation
Enter current password for root (enter for none): // just press Enter
OK, successfully used password, moving on...
Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.
Set root password? [Y/n] Y // set the root password
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. You should remove them before moving into a production environment.
Remove anonymous users? [Y/n] Y
Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.
Disallow root login remotely? [Y/n] Y
By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.
Remove test database and access to it? [Y/n] Y
- Dropping test database...
- Removing privileges on test database...
Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.
Reload privilege tables now? [Y/n] Y
Cleaning up...
All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.
Thanks for using MariaDB!
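To confirm that the new root password works (a sketch; enter the password you just set when prompted):
[root@controller ~]# mysql -u root -p -e "SHOW DATABASES;"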
8. Install the RabbitMQ message queue
On the controller node
- Install the package
[root@controller ~]# yum install -y rabbitmq-server
- Start the service and enable it at boot
[root@controller ~]# systemctl enable --now rabbitmq-server.service
[root@controller ~]# systemctl status rabbitmq-server.service
- Add the openstack user
This step sets the message-queue password for the openstack user; be sure to save it. Replace RABBIT_PASS with your password.
[root@controller ~]# rabbitmqctl add_user openstack RABBIT_PASS
Creating user "openstack"
- Grant configure, write, and read permissions
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/"
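To verify the user and its permissions on the default vhost (a sketch):
[root@controller ~]# rabbitmqctl list_users
[root@controller ~]# rabbitmqctl list_permissions -p /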
9. Install and configure the memcached service
This is used to cache tokens for the Identity service.
On the controller node
- Install the packages
[root@controller ~]# yum install memcached python-memcached -y
- Edit the configuration file so memcached also listens on the controller address
[root@controller ~]# vim /etc/sysconfig/memcached
...
OPTIONS="-l 127.0.0.1,::1,controller"
- Start the service and enable it at boot
[root@controller ~]# systemctl enable --now memcached.service
[root@controller ~]# systemctl status memcached.service
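memcached-tool, which ships with the memcached package, can confirm the service is answering (a sketch):
[root@controller ~]# memcached-tool controller:11211 stats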
10. Install and configure etcd
On the controller node
- Install the package
[root@controller ~]# yum install etcd -y
- Edit the configuration file for the controller node
[root@controller ~]# vim /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://10.0.0.11:2380"
ETCD_LISTEN_CLIENT_URLS="http://10.0.0.11:2379"
ETCD_NAME="controller"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://10.0.0.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://10.0.0.11:2379"
ETCD_INITIAL_CLUSTER="controller=http://10.0.0.11:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
- Start the service and enable it at boot
[root@controller ~]# systemctl enable --now etcd
[root@controller ~]# systemctl status etcd
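To verify that etcd is up, list the cluster members using the v3 API (a sketch):
[root@controller ~]# ETCDCTL_API=3 etcdctl --endpoints=http://10.0.0.11:2379 member list    // the single controller member should be listed as started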