[vnc]
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
[glance]
api_servers = http://Controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
Populate the databases:
# su -s /bin/sh -c "nova-manage api_db sync" nova
# su -s /bin/sh -c "nova-manage db sync" nova
Enable the Nova services at boot and start them:
# systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
Installation and configuration on the compute nodes
Before installing a compute node, set up the OpenStack package repositories as described in the "OpenStack packages" part of the system deployment environment section.
Install the nova-compute package:
# yum install openstack-nova-compute
Edit the configuration file:
# vim /etc/nova/nova.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.100.82
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[oslo_messaging_rabbit]
rabbit_host = Controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[keystone_authtoken]
auth_uri = http://Controller:5000
auth_url = http://Controller:35357
memcached_servers = Controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://Controller:6080/vnc_auto.html
[glance]
api_servers = http://Controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
Check whether the host supports hardware acceleration for virtual machines:
$ egrep -c '(vmx|svm)' /proc/cpuinfo
If the command returns 0, hardware acceleration is not supported and the virt_type option in the [libvirt] section must be set to qemu; if it returns 1 or more, set it to kvm (a scripted sketch follows below).
[libvirt]
virt_type = kvm
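A minimal sketch that applies the rule above automatically, assuming the crudini utility (from the crudini package) is available on the compute node:
count=$(egrep -c '(vmx|svm)' /proc/cpuinfo)
if [ "$count" -eq 0 ]; then
    crudini --set /etc/nova/nova.conf libvirt virt_type qemu    # no VT-x/AMD-V: fall back to qemu
else
    crudini --set /etc/nova/nova.conf libvirt virt_type kvm     # hardware acceleration present: use kvm
fi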
Start the services:
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
Problem 1: starting openstack-nova-compute.service never finishes; the command hangs.
Cause: the compute node cannot reach the Controller node. Stop the firewall:
# systemctl stop firewalld.service
# systemctl disable firewalld.service    # keep firewalld from starting at boot
Or, if iptables is used instead:
# systemctl stop iptables.service        # deactivate the firewall rules
# systemctl disable iptables.service     # keep iptables from starting at boot
Problem 2: after copying the configuration file to the other compute nodes with scp and changing the IP, the service refuses to start with the error "Failed to open some config files: /etc/nova/nova.conf". The main cause is wrong file ownership; fix it with chown root:nova nova.conf (a sketch of the copy-and-fix sequence follows below).
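A minimal sketch of the copy-and-fix sequence, assuming the target compute node is reachable as Computer02:
# scp /etc/nova/nova.conf root@Computer02:/etc/nova/nova.conf
# ssh Computer02 "chown root:nova /etc/nova/nova.conf"
# ssh Computer02 "systemctl restart openstack-nova-compute.service"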
Verification
$ . admin-openrc
List the service components to verify that each process launched and registered successfully:
$ openstack compute service list
Networking service
Installation and configuration on the management node
Basic configuration
$ mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
Source the admin credentials script:
$ . admin-openrc
Create the neutron user and add the admin role.
$ openstack user create --domain default --password-prompt neutron
User Password: (123456)
Repeat User Password:
Add the role:
$ openstack role add --project service --user neutron admin
Create the neutron service entity:
$ openstack service create --name neutron \
  --description "OpenStack Networking" network
Create the Networking service API endpoints:
$ openstack endpoint create --region RegionOne \
  network public http://Controller:9696
$ openstack endpoint create --region RegionOne \
  network internal http://Controller:9696
$ openstack endpoint create --region RegionOne \
  network admin http://Controller:9696
Configure networking options
Two network types are available: provider networks and self-service networks. Self-service networks are chosen here.
Install the components:
# yum install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables
Configure the server component (replace the highlighted values with your own):
# vim /etc/neutron/neutron.conf
[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@Controller/neutron
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
rpc_backend = rabbit
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
[oslo_messaging_rabbit]
rabbit_host = Controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[keystone_authtoken]
auth_uri = http://Controller:5000
auth_url = http://Controller:35357
memcached_servers = Controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
[nova]
auth_url = http://Controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123456
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
OpenStack Mitaka Integration with Ceph Jewel: Installation and Deployment Guide

Table of Contents

System deployment environment
    System configuration
    Network configuration
    Basic configuration
        Time synchronization
        OpenStack packages
        Other installation and configuration
        Memcached installation and configuration
Identity service
    Basic configuration
    Create the service entity and API endpoints
    Create domains, projects, users, and roles
    Verification
    Create OpenStack client environment scripts
Image service
    Basic configuration
    Installation and configuration
    Verification
Compute service
    Installation and configuration on the management node
        Basic configuration
        Installation and configuration
    Installation and configuration on the compute nodes
    Verification
Networking service
    Installation and configuration on the management node
        Basic configuration
        Configure networking options
        Configure the metadata agent
        Finalize installation
    Installation and configuration on the compute nodes
        Install the components
        Configure the common component
        Configure networking options
        Finalize installation
    Verification
Install ceph-deploy
    Grant sudo privileges to a regular user
    Disable requiretty
    Password-less SSH login
    Disable the firewall
    Disable SELinux
    Install ceph-deploy with pip
Ceph storage cluster quick installation
    Create the cluster
    Install Ceph
    Configure the initial monitor(s)
    Add OSDs
Block Storage service
    Installation and configuration on the management node
        Basic configuration
        Install and configure the components
    Installation and configuration on the storage nodes
    Verification
Dashboard service
    Installation and configuration
    Finalize installation
    Verification
Ceph integration configuration
    Create pools
    Copy the Ceph configuration file
    Install the Ceph client packages
    Set up Ceph client authentication
    OpenStack configuration
    Volume allocation and mounting
Launch an instance
    Create virtual networks
        Create the provider network
        Create the self-service network
        Verification
    Create a virtual machine
Follow-up operations
    Create a CentOS virtual machine
    Create a Windows virtual machine
    Build a CentOS 7 image
    Launch an instance
System deployment environment
System configuration
The OpenStack deployment platform consists of 4 nodes, configured as shown below.

No.  Hostname     IP addresses                      CPU                                           Memory  Disk    Operating system
1    Controller   192.168.100.81 / 101.101.101.81   2 x 8-core 2.4 GHz Intel(R) Xeon(R) E5-2630   64 GB   1.7 TB  CentOS Linux release 7.2.1511
2    Computer01   192.168.100.82 / 101.101.101.82   2 x 8-core 2.4 GHz Intel(R) Xeon(R) E5-2630   64 GB   1.7 TB  CentOS Linux release 7.2.1511
3    Computer02   192.168.100.83 / 101.101.101.83   2 x 8-core 2.4 GHz Intel(R) Xeon(R) E5-2630   64 GB   1.7 TB  CentOS Linux release 7.2.1511
4    Computer03   192.168.100.84 / 101.101.101.84   2 x 8-core 2.4 GHz Intel(R) Xeon(R) E5-2630   64 GB   1.7 TB  CentOS Linux release 7.2.1511

The Controller node serves as the OpenStack management node and the Ceph deployment node; Computer01 through Computer03 serve as compute nodes. Ceph OSDs are deployed on all nodes.
The OpenStack release installed is Mitaka.
Installation references: http://docs.openstack.org/mitaka/install-guide-rdo/ and
http://docs.openstack.org/mitaka/zh_CN/install-guide-rdo/index.html.
The Ceph release installed is Jewel.
Installation reference: http://docs.ceph.com/docs/master/start/
Integration reference: http://docs.ceph.com/docs/master/rbd/rbd-openstack/.
Network configuration
This deployment uses the self-service network architecture.
Basic configuration
Time synchronization
Install Chrony, an implementation of the Network Time Protocol (NTP), to keep time synchronized between the management node and the compute nodes.
A. Perform the following operations on the Controller (management) node.
Install the chrony package:
# yum install chrony
Edit the configuration file:
# vim /etc/chrony.conf
Set the time servers:
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
Add the subnet so that the other nodes can connect to the chrony daemon on Controller:
allow 192.168.100.0/24
Enable the NTP service at boot and start it:
# systemctl enable chronyd.service
# systemctl restart chronyd.service
B. Perform the following operations on the other nodes.
Install the chrony package:
# yum install chrony
Edit the configuration file:
# vim /etc/chrony.conf
Point the time server at Controller:
server Controller iburst
Enable the NTP service at boot and start it:
# systemctl enable chronyd.service
# systemctl start chronyd.service
C. Verify the time synchronization service.
On the Controller node:
# chronyc sources
On the other nodes:
# chronyc sources
Note: if a node does not list the Controller server as its time source, try restarting chronyd (systemctl restart chronyd.service) and check again.
Make sure all nodes use the same time zone. Check the time zone:
# date +%z
If it differs, change it to UTC+8:
# cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
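On systemd hosts the same result can be achieved with timedatectl, an assumed alternative to copying the zoneinfo file:
# timedatectl set-timezone Asia/Shanghai
# timedatectl status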
OpenStack packages
Perform the following on all nodes to download and update the OpenStack packages.
Add the OpenStack RPM repository:
# yum install centos-release-openstack-mitaka
Update the packages:
# yum upgrade
Install the OpenStack Python client:
# yum install python-openstackclient
Install the openstack-selinux package to automatically manage SELinux policies for the OpenStack services:
# yum install openstack-selinux
Other installation and configuration
The following installs and configures the SQL database, the NoSQL database, the message queue, and the cache. These components are normally installed on the Controller node and are the basic building blocks for the Identity, Image, Compute, and other services.
A. SQL database installation and configuration
Install MariaDB (MySQL):
# yum install mariadb mariadb-server python2-PyMySQL
Create and edit the MySQL configuration file:
# vim /etc/my.cnf.d/openstack.cnf
Bind to the Controller node's IP address:
[mysqld]
bind-address = 192.168.100.81
Set the remaining options:
[mysqld]
...
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
character-set-server = utf8
Start the MariaDB service:
# systemctl enable mariadb.service
# systemctl start mariadb.service
Secure the MariaDB installation:
# mysql_secure_installation
Set root password? [Y/n] Y
Set the database password to 123456.
B. NoSQL database installation and configuration
The Telemetry service stores its data in a NoSQL database, so MongoDB has to be installed on the Controller node:
# yum install mongodb-server mongodb
Edit the configuration file:
# vim /etc/mongod.conf
Bind to the Controller node's IP:
bind_ip = 192.168.100.81
Use small log files:
smallfiles = true
Start the MongoDB service:
# systemctl enable mongod.service
# systemctl start mongod.service
C. Message queue installation and configuration
Install the RabbitMQ message queue service:
# yum install rabbitmq-server
Enable and start the service:
# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service
Create the message queue user openstack:
# rabbitmqctl add_user openstack RABBIT_PASS
Grant the openstack user configure, write, and read permissions:
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
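An optional check that the user and its permissions were created as expected:
# rabbitmqctl list_users
# rabbitmqctl list_permissions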
Memcached installation and configuration
The Identity service uses Memcached for caching. Install it:
# yum install memcached python-memcached
Start the service:
# systemctl enable memcached.service
# systemctl start memcached.service
Identity service
The Identity service centrally manages authentication, authorization, and the service catalog; the other services work with Identity and use it as a common, unified API.
Identity consists of the Server, Drivers, and Modules components. The Server is a centralized service that provides authentication and authorization through a RESTful interface. Drivers, also called backends, are integrated into the Server and are used to obtain identity information for OpenStack. Modules run in the address space of the other OpenStack components; they intercept service requests, extract user credentials, and send them to the central server for authentication, and they are integrated with OpenStack through the Python Web Server Gateway Interface middleware.
Basic configuration
A. Database configuration
Create the database and the database user:
$ mysql -u root -p
In the database client, run the following.
Create the database:
CREATE DATABASE keystone;
Grant privileges to the database user:
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
Generate a random value with openssl to use as the initial configuration token:
$ openssl rand -hex 10
db8a90c712a682517585
B. Install and configure the components
The Identity service needs ports 5000 and 35357 opened on the Apache server.
Install the packages:
# yum install openstack-keystone httpd mod_wsgi
Edit the configuration file:
# vim /etc/keystone/keystone.conf
Edit the following options:
[DEFAULT]
...
admin_token = db8a90c712a682517585
Note: the token is the random value generated by openssl above.
[database]
...
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@Controller/keystone
[token]
...
provider = fernet
Populate the Identity service database:
# su -s /bin/sh -c "keystone-manage db_sync" keystone
Initialize the Fernet keys:
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
Configure the Apache HTTP server. Set the server name:
# vim /etc/httpd/conf/httpd.conf
ServerName Controller
Create the WSGI configuration file:
# vim /etc/httpd/conf.d/wsgi-keystone.conf
Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined
    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>
Start the service:
# systemctl enable httpd.service
# systemctl start httpd.service
A browser on the controller can now reach http://localhost:5000 and http://localhost:35357.
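The same check can be done from the command line; each endpoint should return a JSON version document:
$ curl http://Controller:5000/v3
$ curl http://Controller:35357/v3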
Create the service entity and API endpoints
Set temporary environment variables:
$ export OS_TOKEN=db8a90c712a682517585
Note: OS_TOKEN is the random value generated by openssl earlier.
$ export OS_URL=http://Controller:35357/v3
$ export OS_IDENTITY_API_VERSION=3
Create the service entity and API endpoints:
$ openstack service create \
  --name keystone --description "OpenStack Identity" identity
$ openstack endpoint create --region RegionOne \
  identity public http://Controller:5000/v3
$ openstack endpoint create --region RegionOne \
  identity internal http://Controller:5000/v3
$ openstack endpoint create --region RegionOne \
  identity admin http://Controller:35357/v3
Create domains, projects, users, and roles
Create the default domain:
$ openstack domain create --description "Default Domain" default
Create the admin project:
$ openstack project create --domain default \
  --description "Admin Project" admin
Create the admin user:
$ openstack user create --domain default \
  --password-prompt admin
User Password: (123456)
Repeat User Password: (123456)
Create the admin role:
$ openstack role create admin
Add the admin role to the admin project and user:
$ openstack role add --project admin --user admin admin
In the same way, create a service project, and a demo project containing one ordinary user:
$ openstack project create --domain default \
  --description "Service Project" service
$ openstack project create --domain default \
  --description "Demo Project" demo
$ openstack user create --domain default \
  --password-prompt demo
(password: 123456)
$ openstack role create user
$ openstack role add --project demo --user demo user
Verification
Perform the following operations on the management node.
A. For security reasons, disable the temporary token authentication mechanism:
# vim /etc/keystone/keystone-paste.ini
Remove admin_token_auth from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections. Note that only the admin_token_auth entry is removed; do not delete or comment out the whole line. (A scripted sketch of the edit follows below.)
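A minimal scripted sketch of the same edit (back up the file first; the sed pattern simply drops the admin_token_auth token from every pipeline line):
# cp /etc/keystone/keystone-paste.ini /etc/keystone/keystone-paste.ini.bak
# sed -i 's/ admin_token_auth//g' /etc/keystone/keystone-paste.ini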
B. Unset the temporary environment variables:
$ unset OS_TOKEN OS_URL
C. Request authentication tokens.
Request a token as the admin user:
$ openstack --os-auth-url http://Controller:35357/v3 \
  --os-project-domain-name default --os-user-domain-name default \
  --os-project-name admin --os-username admin token issue
Request a token as the demo user:
$ openstack --os-auth-url http://Controller:5000/v3 \
  --os-project-domain-name default --os-user-domain-name default \
  --os-project-name demo --os-username demo token issue
Create OpenStack client environment scripts
The steps above used environment variables and OpenStack client command-line options to interact with the Identity service. OpenStack supports OpenRC script files to make client authentication more convenient.
$ vim admin-openrc
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_AUTH_URL=http://Controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

$ vim demo-openrc
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=123456
export OS_AUTH_URL=http://Controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Make the scripts executable:
$ chmod +x admin-openrc
$ chmod +x demo-openrc
Projects and users can now be switched quickly by sourcing the corresponding script, for example:
$ . admin-openrc
Request an authentication token:
$ openstack token issue
Image service
Basic configuration
Log in to the MySQL client, create the database and user, and grant the corresponding privileges:
$ mysql -u root -p
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
(Note: the default password is used.)
User authentication
Get credentials with the client script:
$ . admin-openrc
Create the glance user:
$ openstack user create --domain default --password-prompt glance
User Password: (123456)
Repeat User Password:
Add the admin role and create the glance service entity:
$ openstack role add --project service --user glance admin
$ openstack service create --name glance \
  --description "OpenStack Image" image
Create the Image service API endpoints:
$ openstack endpoint create --region RegionOne \
  image public http://Controller:9292
$ openstack endpoint create --region RegionOne \
  image internal http://Controller:9292
$ openstack endpoint create --region RegionOne \
  image admin http://Controller:9292
Installation and configuration
Install the glance package:
# yum install openstack-glance
Configure /etc/glance/glance-api.conf:
# vim /etc/glance/glance-api.conf
Edit the following options (replace the highlighted values with your own):
[database] ...
connection = mysql+pymysql://glance:GLANCE_DBPASS@Controller/glance
[keystone_authtoken]
...
auth_uri = http://Controller:5000
auth_url = http://Controller:35357
memcached_servers = Controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456
[paste_deploy] ...
flavor = keystone
[glance_store]
...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
Configure /etc/glance/glance-registry.conf:
# vim /etc/glance/glance-registry.conf
Edit the following options (replace the highlighted values with your own):
[database]
...
connection = mysql+pymysql://glance:GLANCE_DBPASS@Controller/glance
[keystone_authtoken]
...
auth_uri = http://Controller:5000
auth_url = http://Controller:35357
memcached_servers = Controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456
[paste_deploy] ...
flavor = keystone
Populate the Image service database:
# su -s /bin/sh -c "glance-manage db_sync" glance
Finalize the installation:
# systemctl enable openstack-glance-api.service \
  openstack-glance-registry.service
# systemctl start openstack-glance-api.service \
  openstack-glance-registry.service
Verification
Switch to the admin user:
$ . admin-openrc
Download the source image:
$ wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
Upload the image and set its properties (the image is registered under the name cirros):
$ openstack image create "cirros" \
  --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 \
  --container-format bare --public
Verify the upload:
$ openstack image list
Compute service
The OpenStack Compute service consists mainly of the following components: the nova-api service, nova-api-metadata service, nova-compute service, nova-scheduler service, nova-conductor module, nova-cert module, nova-network worker module, nova-consoleauth module, the nova-novncproxy, nova-spicehtml5proxy, nova-xvpvncproxy, and nova-cert daemons, the nova client, the message queue, and the SQL database.
Installation and configuration on the management node
Basic configuration
Create the database tables and user:
$ mysql -u root -p
Run the following SQL commands:
CREATE DATABASE nova_api;
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
Switch user to get credentials:
$ . admin-openrc
Create the nova user:
$ openstack user create --domain default \
  --password-prompt nova
User Password: (123456)
Repeat User Password:
Add the admin role:
$ openstack role add --project service --user nova admin
Create the nova service entity:
$ openstack service create --name nova \
  --description "OpenStack Compute" compute
Create the Compute service API endpoints:
$ openstack endpoint create --region RegionOne \
  compute public http://Controller:8774/v2.1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  compute internal http://Controller:8774/v2.1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  compute admin http://Controller:8774/v2.1/%\(tenant_id\)s
Installation and configuration
Install the packages:
# yum install openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler
Configure nova. Edit the configuration file:
# vim /etc/nova/nova.conf
Note: replace the highlighted values with your own.
[DEFAULT]
enabled_apis = osapi_compute,metadata
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.100.81
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@Controller/nova_api
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@Controller/nova
[oslo_messaging_rabbit]
rabbit_host = Controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[keystone_authtoken]
auth_uri = http://Controller:5000
auth_url = http://Controller:35357
memcached_servers = Controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456
[securitygroup]
enable_ipset = True
# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:enp2s0f1
Note: enp2s0f1 is the name of the second network interface.
[vxlan]
enable_vxlan = True
local_ip = 101.101.101.81
l2_population = True
Note: local_ip is this node's address on the second NIC's subnet.
[securitygroup] ...
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
# vim /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
external_network_bridge =
# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
Configure the metadata agent
# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_ip = Controller
metadata_proxy_shared_secret = METADATA_SECRET
Note: METADATA_SECRET is a password string of your choosing; it must match the metadata_proxy_shared_secret setting in nova.conf below.
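Any sufficiently random string will do for METADATA_SECRET; one way to generate it, mirroring the token generation used earlier:
$ openssl rand -hex 10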
# vim /etc/nova/nova.conf
[neutron]
url = http://Controller:9696
auth_url = http://Controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET
Finalize installation
Create a symbolic link to the plug-in configuration file:
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Populate the database:
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the Compute API service:
# systemctl restart openstack-nova-api.service
Start the services:
# systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
# systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
# systemctl enable neutron-l3-agent.service
# systemctl start neutron-l3-agent.service
Check whether all of the services started correctly, for example with systemctl status (an explicit per-service check is sketched below).
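A sketch of an explicit per-service check:
# systemctl status neutron-server.service neutron-linuxbridge-agent.service \
  neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service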
Installation and configuration on the compute nodes
Install the components:
# yum install openstack-neutron-linuxbridge ebtables ipset
Configure the common component:
# vim /etc/neutron/neutron.conf
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
[oslo_messaging_rabbit]
rabbit_host = Controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[keystone_authtoken]
auth_uri = http://Controller:5000
auth_url = http://Controller:35357
memcached_servers = Controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Configure the other compute nodes in the same way; when the configuration file is copied with scp, its ownership has to be corrected.
Copy the configuration file to the other compute nodes and fix the file owner on each of them:
# scp /etc/neutron/neutron.conf root@computer02:/etc/neutron/
Then, on the other node:
# chown root:neutron /etc/neutron/neutron.conf
Configure networking options
# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:enp2s0f1
Note: the highlighted value is PROVIDER_INTERFACE_NAME and must be the name of this compute node's physical network interface.
[vxlan]
enable_vxlan = True
local_ip = 101.101.101.82
l2_population = True
Note: local_ip is this compute node's IP address.
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Configure the compute node to use neutron:
# vim /etc/nova/nova.conf
[neutron]
url = http://Controller:9696
auth_url = http://Controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
Finalize installation
Restart the Compute service:
# systemctl restart openstack-nova-compute.service
Start the Linux bridge agent:
# systemctl enable neutron-linuxbridge-agent.service
# systemctl start neutron-linuxbridge-agent.service
Verification
Perform the following on the management node:
$ . admin-openrc
$ neutron ext-list
$ neutron agent-list
Install ceph-deploy
Grant sudo privileges to a regular user
# echo "inspur ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/inspur
$ sudo chmod 0440 /etc/sudoers.d/inspur
Disable requiretty
On some distributions (such as CentOS), ceph-deploy fails if requiretty is set by default on the Ceph nodes. Disable it as follows:
$ sudo visudo
Find the Defaults requiretty option and change it to Defaults:ceph !requiretty, so that ceph-deploy can log in as the ceph user and use sudo.
Password-less SSH login
The management node must be able to reach every Ceph node over SSH without a password. A key pair already exists on the management node (created when setting up the virtual machine instances), so ssh-keygen is not needed; just copy the public key to the other nodes:
$ ssh-copy-id inspur@Computer01
$ ssh-copy-id inspur@Computer02
$ ssh-copy-id inspur@Computer03
$ vim ~/.ssh/config
Add the following:
Host Computer01
    Hostname Computer01
    User inspur
Host Computer02
    Hostname Computer02
    User inspur
Host Computer03
    Hostname Computer03
    User inspur
Fix the permissions:
$ chmod 600 ~/.ssh/config
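A quick check that password-less login and the config file work:
$ ssh Computer01 hostname     # should print Computer01 without asking for a password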
Disable the firewall
# systemctl stop firewalld.service
# systemctl disable firewalld.service
Disable SELinux
Check the current SELinux state:
$ sestatus -v
If SELinux is not yet disabled, it must be permanently disabled on all nodes:
in /etc/selinux/config, find the SELINUX line and change it to SELINUX=disabled.
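The change in /etc/selinux/config takes effect after a reboot; to switch SELinux to permissive mode immediately (an optional, assumed step):
# setenforce 0
# sestatus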
Install ceph-deploy with pip
$ sudo yum update && sudo yum install ceph-deploy
A package conflict may appear:
--> Processing Dependency: python-distribute for package: ceph-deploy-1.5.34-0.noarch
Package python-setuptools-0.9.8-4.el7.noarch is obsoleted by python2-setuptools-22.0.5-1.el7.noarch which is already installed
--> Finished Dependency Resolution
Error: Package: ceph-deploy-1.5.34-0.noarch (ceph-noarch)
       Requires: python-distribute
       Available: python-setuptools-0.9.8-4.el7.noarch (base)
           python-distribute = 0.9.8-4.el7
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
The conflicting package:
$ rpm -qa | grep setuptools
python2-setuptools-22.0.5-1.el7.noarch
Either uninstall the conflicting package, or resolve the issue by installing ceph-deploy with pip:
# yum install python-pip
# pip install ceph-deploy
Ceph storage cluster quick installation
Create the cluster
On the management node, change into the directory created for holding configuration files and run ceph-deploy:
$ ceph-deploy new Controller
The current directory now contains a Ceph configuration file, a monitor keyring, and a log file:
ceph.conf ceph-deploy-ceph.log ceph.mon.keyring
Edit the configuration:
$ vim ceph.conf
Change the default number of replicas in the Ceph configuration file from 3 to 2:
osd pool default size = 2
If there are multiple NICs, the public network setting can be added under the [global] section; only one NIC is used here, so this was not configured:
public network = {ip-address}/{netmask}
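For reference, an illustrative sketch of what the [global] section of ceph.conf might look like after the edit (the fsid is generated by ceph-deploy new, and the monitor address follows the Controller entry in the node table; both will differ in practice):
[global]
fsid = <generated by ceph-deploy new>
mon_initial_members = Controller
mon_host = 192.168.100.81
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2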
Install Ceph
$ ceph-deploy install Controller Controller01 Controller02 Controller03
Error seen: [ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: 'ceph'
Fix it by removing the stale repository configuration:
# yum remove ceph-release
# rm /etc/yum.repos.d/ceph.repo.rpmsave
Configure the initial monitor(s)
$ ceph-deploy mon create-initial
Afterwards the current directory contains additional keyrings: ceph.bootstrap-mds.keyring, ceph.bootstrap-osd.keyring, ceph.bootstrap-rgw.keyring, and ceph.client.admin.keyring.
Add OSDs
Create the OSD directories on the three nodes:
$ ssh Controller01
sudo mkdir /var/local/osd0
sudo chown -R ceph:ceph /var/local/osd0/
exit
$ ssh Controller02
sudo mkdir /var/local/osd1
sudo chown -R ceph:ceph /var/local/osd1/
exit
$ ssh Controller03
sudo mkdir /var/local/osd2
sudo chown -R ceph:ceph /var/local/osd2/
exit
From the management node, run ceph-deploy to prepare the OSDs:
$ ceph-deploy osd prepare Controller01:/var/local/osd0 Controller02:/var/local/osd1 Controller03:/var/local/osd2
Problem: when running the command above, if SELinux has not been disabled on all nodes, the command waits far too long:
[Controller01][WARNIN] command: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise f18c6ce8-3b03-4ab2-876b-aa70d53b45f3
[Controller01][WARNIN] No data was received after 300 seconds, disconnecting...
Solution: permanently disable SELinux on all nodes.
# vim /etc/sysconfig/selinux
#SELINUX=enforcing        # comment out
#SELINUXTYPE=targeted     # comment out
SELINUX=disabled          # add
:wq                       # save and quit
# shutdown -r now         # reboot
Activate the OSDs:
$ ceph-deploy osd activate Controller01:/var/local/osd0 Controller02:/var/local/osd1 Controller03:/var/local/osd2
Use ceph-deploy to copy the configuration file and admin key to the management node and the Ceph nodes, so that the monitor address and ceph.client.admin.keyring do not have to be specified every time a Ceph command is run:
$ ceph-deploy admin Controller Controller01 Controller02 Controller03
Make sure ceph.client.admin.keyring has the correct permissions:
$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
Check the cluster's health:
$ ceph health
A common cause of errors here is that the firewall was not disabled; after disabling the firewall and restarting the cluster, the Ceph cluster returns to a healthy state.
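For more detail than ceph health, the following optional checks show the overall status and the OSD tree:
$ ceph -s
$ ceph osd tree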
Block Storage service
Installation and configuration on the management node
Basic configuration
$ mysql -u root -p
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';
$ . admin-openrc
$ openstack user create --domain default --password-prompt cinder
User Password: (123456)
Repeat User Password:
$ openstack role add --project service --user cinder admin
Create the cinder service entities:
$ openstack service create --name cinder \
  --description "OpenStack Block Storage" volume
$ openstack service create --name cinderv2 \
  --description "OpenStack Block Storage" volumev2
Create the cinder service API endpoints:
$ openstack endpoint create --region RegionOne \
  volume public http://Controller:8776/v1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  volume internal http://Controller:8776/v1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  volume admin http://Controller:8776/v1/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  volumev2 public http://Controller:8776/v2/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  volumev2 internal http://Controller:8776/v2/%\(tenant_id\)s
$ openstack endpoint create --region RegionOne \
  volumev2 admin http://Controller:8776/v2/%\(tenant_id\)s
Install and configure the components
# yum install openstack-cinder
# vim /etc/cinder/cinder.conf
[database] ...
connection = mysql+pymysql://cinder:CINDER_DBPASS@Controller/cinder
[DEFAULT]
...
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 192.168.100.81
[oslo_messaging_rabbit]
...
rabbit_host = Controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[keystone_authtoken]
...
auth_uri = http://Controller:5000
auth_url = http://Controller:35357
memcached_servers = Controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123456
[oslo_concurrency] ...
lock_path = /var/lib/cinder/tmp
Populate the database:
# su -s /bin/sh -c "cinder-manage db sync" cinder
Configure Compute to use Block Storage:
# vim /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
Restart the Compute API service:
# systemctl restart openstack-nova-api.service
Start the cinder services:
# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
Installation and configuration on the storage nodes
Since Ceph is integrated in place of LVM, the LVM-related preparation steps are not performed.
Install the cinder packages:
# yum install openstack-cinder targetcli
Or, as a script from the management node:
$ ssh Computer01 sudo yum install -y openstack-cinder targetcli
$ ssh Computer02 sudo yum install -y openstack-cinder targetcli
$ ssh Computer03 sudo yum install -y openstack-cinder targetcli
Edit the configuration file:
# vim /etc/cinder/cinder.conf
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@Controller/cinder
[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
enabled_backends = lvm
glance_api_servers = http://Controller:9292
my_ip = 192.168.100.82
Note: the IP is the current storage node's management-network IP.
[oslo_messaging_rabbit]
...
rabbit_host = Controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS
[keystone_authtoken]
...
auth_uri = http://Controller:5000
auth_url = http://Controller:35357
memcached_servers = Controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 123456
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
Start the services:
# systemctl enable openstack-cinder-volume.service target.service
# systemctl start openstack-cinder-volume.service target.service
Or, as a script from the management node:
$ ssh Computer01 sudo systemctl enable openstack-cinder-volume.service target.service
$ ssh Computer02 sudo systemctl enable openstack-cinder-volume.service target.service
$ ssh Computer03 sudo systemctl enable openstack-cinder-volume.service target.service
$ ssh Computer01 sudo systemctl start openstack-cinder-volume.service target.service
$ ssh Computer02 sudo systemctl start openstack-cinder-volume.service target.service
$ ssh Computer03 sudo systemctl start openstack-cinder-volume.service target.service
Verification
$ . admin-openrc
$ cinder service-list
If a storage node service shows as not started, the main cause is that the LVM-related steps were not performed (Ceph is used instead).
Dashboard service
Installation and configuration
Install the package:
# yum install openstack-dashboard
Edit the configuration:
# vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "Controller"
ALLOWED_HOSTS = ['*', ]
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'Controller:11211',
    },
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
If networking option 1 (provider) had been chosen, the layer-3 services would be disabled as below; since self-service networking is used here, the defaults are kept:
OPENSTACK_NEUTRON_NETWORK = {
    ...
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
TIME_ZONE = "Asia/Shanghai"
Finalize installation
# systemctl restart httpd.service memcached.service
Verification
Open http://Controller/dashboard.
Enter default for the domain and log in as the admin or demo user.
Ceph integration configuration
Create pools
# ceph osd pool create volumes 128
# ceph osd pool create images 128
# ceph osd pool create backups 128
# ceph osd pool create vms 128
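A quick check that the pools were created:
# ceph osd lspools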
Copy the Ceph configuration file
ssh {your-openstack-server} sudo tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf
Replace {your-openstack-server} with the hostnames of the nodes running glance-api, cinder-volume, nova-compute, and cinder-backup.
In this deployment:
# ssh Controller sudo tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf
Install the Ceph client packages
On the glance-api node:
$ sudo yum install python-rbd
On the nova-compute, cinder-backup, and cinder-volume nodes:
$ sudo yum install ceph-common
Inside the virtual machine, the volume is mounted as follows. Detect the new partition:
# partprobe /dev/vdc
# cat /proc/partitions
Format the partition:
# mkfs -t ext4 /dev/vdc
Mount it at a directory:
# mkdir /data
# mount /dev/vdc /data
Verify that the mount succeeded:
# df -k
Configure mounting at boot:
# vi /etc/fstab
Append:
/dev/vdc /data ext4 defaults 0 0
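A way to validate the new fstab entry without rebooting:
# umount /data
# mount -a          # re-reads /etc/fstab; an error here indicates a bad entry
# df -h /data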
Launch an instance
Create virtual networks
Create the provider network first, then the self-service network.
Create the provider network
$ . admin-openrc
Create the network:
$ neutron net-create --shared --provider:physical_network provider \
  --provider:network_type flat provider
Create the subnet:
$ neutron subnet-create --name provider \
  --allocation-pool start=192.168.100.85,end=192.168.100.100 \
  --dns-nameserver 124.16.136.254 --gateway 192.168.100.1 \
  provider 192.168.100.0/24
(Use the gateway, DNS server, and subnet that match your environment.)
Create the self-service network
$ . demo-openrc
Create the network:
$ neutron net-create selfservice
Create a subnet on selfservice:
$ neutron subnet-create --name selfservice \
  --dns-nameserver 124.16.136.254 --gateway 155.100.1.1 \
  selfservice 155.100.1.0/24
Note: this subnet range can be chosen freely.
Create a router
Self-service networks connect to provider networks through a virtual router that performs NAT. Each virtual router has an interface on at least one self-service network and a gateway on a provider network, and the provider network must carry the router:external option so that self-service routers can reach external networks.
Perform the following on the management node:
$ . admin-openrc
Add the router:external option to the provider network:
$ neutron net-update provider --router:external
Switch credentials:
$ . demo-openrc
Create the router:
$ neutron router-create router
Add the self-service subnet as an interface on the router:
$ neutron router-interface-add router selfservice
Set a gateway on the provider network for the router:
$ neutron router-gateway-set router provider
Verification
$ . admin-openrc
$ ip netns
List the router ports and the gateway IP on the provider network:
$ neutron router-port-list router
Try to ping the router's address on the provider network; if the ping succeeds, the router works:
$ ping -c 4 192.168.100.86
$ . demo-openrc
List the available flavors:
$ openstack flavor list
If no suitable flavor exists, create one:
$ openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
Create a key pair:
$ . demo-openrc
$ ssh-keygen -q -N ""
$ openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey
Verify that it was created:
$ openstack keypair list
Add security group rules. Allow ICMP (ping):
$ openstack security group rule create --proto icmp default
Allow SSH:
$ openstack security group rule create --proto tcp --dst-port 22 default
Create a virtual machine
An instance can be launched on either the provider network or the self-service network.
A. Launch on the provider network.
List the flavors, security groups, networks, and images; these must be specified when launching the instance:
$ openstack flavor list
$ openstack security group list
$ openstack network list
$ openstack image list
Launch the instance:
$ openstack server create --flavor m1.medium --image CentOS-7-x86_64-Generic \
  --nic net-id=786f5072-4db4-47a6-9e47-5cbaa34af961 --security-group default \
  --key-name mykey provider-instance
Check the instance:
$ openstack server list
Get the VNC console URL:
$ openstack console url show provider-instance
Verify the gateway:
$ ping -c 4 192.168.100.1
Verify connectivity to the outside network:
$ ping -c 4 openstack.org
Check whether the instance itself can be pinged:
$ ping -c 4 192.168.100.87
The instance cannot be pinged even though the security group rules have been added; this remained unresolved.
Verify login.