OK, let's get started here: http://docs.openstack.org/liberty/install-guide-ubuntu/overview.html
We need two core nodes: Control and Compute.
The controller node runs the Identity service, Image service, management portions of Compute, management portion of Networking, various Networking agents, and the dashboard. It also includes supporting services such as an SQL database, message queue, and NTP. Optionally, the controller node runs portions of Block Storage, Object Storage, Orchestration, and Telemetry services. The controller node requires a minimum of two network interfaces.
The compute node runs the hypervisor portion of Compute that operates instances. By default, Compute uses the KVM hypervisor. The compute node also runs a Networking service agent that connects instances to virtual networks and provides firewalling services to instances via security groups.
The dashboard requires at least the Image service, Compute, and Networking.
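For reference, these notes end up using the following LXC containers on the default lxcbr0 (10.0.3.0/24) bridge; the IPs match the container configs further down:
10.0.3.5   CloudManagment   # client/management box (runs python-openstackclient)
10.0.3.11  CloudController  # Identity, Image, Compute control plane, SQL, MongoDB, RabbitMQ
10.0.3.31  CloudCompute1    # nova-compute (QEMU)
10.0.3.41  CloudBlock1      # reserved for Block Storage
10.0.3.51  CloudObject1     # reserved for Object Storage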
Password | Description |
---|---|
DB_PASS | Root password for the database |
ADMIN_PASS | Password of user admin |
CEILOMETER_DBPASS | Database password for the Telemetry service |
CEILOMETER_PASS | Password of Telemetry service user ceilometer |
CINDER_DBPASS | Database password for the Block Storage service |
CINDER_PASS | Password of Block Storage service user cinder |
DASH_DBPASS | Database password for the dashboard |
DEMO_PASS | Password of user demo |
GLANCE_DBPASS | Database password for Image service |
GLANCE_PASS | Password of Image service user glance |
HEAT_DBPASS | Database password for the Orchestration service |
HEAT_DOMAIN_PASS | Password of Orchestration domain |
HEAT_PASS | Password of Orchestration service user heat |
KEYSTONE_DBPASS | Database password of Identity service |
NEUTRON_DBPASS | Database password for the Networking service |
NEUTRON_PASS | Password of Networking service user neutron |
NOVA_DBPASS | Database password for Compute service |
NOVA_PASS | Password of Compute service user nova |
RABBIT_PASS | Password of user guest of RabbitMQ |
SWIFT_PASS | Password of Object Storage service user swift |
ADMIN_TOKEN | Admin token used to bootstrap the Identity service
Use this script to generate the passwords (and keep the output in a safe place):
for n in DB_PASS ADMIN_PASS ADMIN_TOKEN CEILOMETER_DBPASS CEILOMETER_PASS CINDER_DBPASS \
CINDER_PASS DASH_DBPASS DEMO_PASS GLANCE_PASS GLANCE_DBPASS HEAT_DBPASS \
HEAT_DOMAIN_PASS HEAT_PASS KEYSTONE_DBPASS NEUTRON_DBPASS NEUTRON_PASS \
NOVA_DBPASS NOVA_PASS RABBIT_PASS SWIFT_PASS; do
echo "$n=$(openssl rand -hex 15)"
done
for c in CloudManagment CloudController CloudCompute1 CloudBlock1 CloudObject1; do
echo "Creating $c"
lxc-create -B btrfs -t ubuntu-cloud --name $c
echo "lxc.cgroup.memory.limit_in_bytes = 4096M" >> /var/lib/lxc/$c/config
echo "10.0.3.11 CloudController" >> /var/lib/lxc/$c/rootfs/etc/hosts
echo "10.0.3.31 CloudCompute1" >> /var/lib/lxc/$c/rootfs/etc/hosts
echo "10.0.3.41 CloudBlock1" >> /var/lib/lxc/$c/rootfs/etc/hosts
echo "10.0.3.51 CloudObject1" >> /var/lib/lxc/$c/rootfs/etc/hosts
done
# Configure networking
echo "lxc.network.ipv4 = 10.0.3.05" >> /var/lib/lxc/CloudManagment/config
echo "lxc.network.ipv4 = 10.0.3.11" >> /var/lib/lxc/CloudController/config
echo "lxc.network.ipv4 = 10.0.3.31" >> /var/lib/lxc/CloudCompute1/config
echo "lxc.network.ipv4 = 10.0.3.41" >> /var/lib/lxc/CloudBlock1/config
echo "lxc.network.ipv4 = 10.0.3.51" >> /var/lib/lxc/CloudObject1/config
for c in CloudManagment CloudController CloudCompute1 CloudBlock1 CloudObject1; do
echo "Starting $c"
lxc-start --name $c
done;
lxc-attach --name CloudManagment
adduser cloud
adduser cloud sudo
deluser ubuntu
# edit /etc/ssh/sshd_config and comment out `PasswordAuthentication`
systemctl restart ssh
exit
ssh [email protected]
sudo apt install python-openstackclient
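Quick sanity check that the client is installed (no credentials are needed just to print the version):
openstack --version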
lxc-attach --name CloudController
adduser cloud
adduser cloud sudo
deluser ubuntu
# edit /etc/ssh/sshd_config and comment out `PasswordAuthentication`
systemctl restart ssh
In CloudController:
apt install chrony mariadb-server python-pymysql mongodb-server mongodb-clients python-pymongo rabbitmq-server
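chrony gets installed here but is never configured in these notes. A minimal sketch, following the Liberty NTP page and assuming the lxcbr0 subnet from above, edits /etc/chrony/chrony.conf on CloudController:
# /etc/chrony/chrony.conf (sketch) -- serve time to the other nodes
server 0.ubuntu.pool.ntp.org iburst   # any reachable NTP server; this one is just an example
allow 10.0.3.0/24
# on the other nodes, point at the controller instead:
#   server CloudController iburst
systemctl restart chrony
chronyc sources   # verify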
http://docs.openstack.org/liberty/install-guide-ubuntu/environment-sql-database.html
vi /etc/mysql/mariadb.conf.d/mysqld.cnf
[mysqld]
...
bind-address = 10.0.3.11
...
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
systemctl restart mysql
mysql_secure_installation
http://docs.openstack.org/liberty/install-guide-ubuntu/environment-nosql-database.html
vi /etc/mongodb.conf
bind_ip = 10.0.3.11
smallfiles = true
systemctl stop mongodb
rm /var/lib/mongodb/journal/prealloc.*
systemctl start mongodb
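Confirm MongoDB is reachable on the controller address:
mongo --host 10.0.3.11 --eval 'db.version()'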
http://docs.openstack.org/liberty/install-guide-ubuntu/environment-messaging.html
rabbitmqctl add_user openstack **RABBIT_PASS**
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
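Verify the RabbitMQ user and its permissions:
rabbitmqctl list_users
rabbitmqctl list_permissions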
http://docs.openstack.org/liberty/install-guide-ubuntu/common/get_started_identity.html
http://docs.openstack.org/liberty/install-guide-ubuntu/keystone-install.html
The Identity service contains these components:
- Server: A centralized server provides authentication and authorization services using a RESTful interface.
- Drivers: Drivers or a service back end are integrated to the centralized server. They are used for accessing identity information in repositories external to OpenStack, and may already exist in the infrastructure where OpenStack is deployed (for example, SQL databases or LDAP servers).
- Modules: Middleware modules run in the address space of the OpenStack component that is using the Identity service. These modules intercept service requests, extract user credentials, and send them to the centralized server for authorization. The integration between the middleware modules and OpenStack components uses the Python Web Server Gateway Interface.
When installing the Identity service, you must register each service in your OpenStack installation. The Identity service can then track which OpenStack services are installed, and where they are located on the network.
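As a quick illustration of that RESTful interface (not a required install step; the JSON shape is the standard Keystone v3 password-auth body, usable once the admin user is created below), a token request looks roughly like this, with the token returned in the X-Subject-Token response header:
curl -s -i -H "Content-Type: application/json" \
  -d '{"auth": {"identity": {"methods": ["password"], "password": {"user": {"name": "admin", "domain": {"id": "default"}, "password": "***ADMIN_PASS***"}}}}}' \
  http://CloudController:5000/v3/auth/tokens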
mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '***KEYSTONE_DBPASS***';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '***KEYSTONE_DBPASS***';
exit;
echo "manual" > /etc/init/keystone.override
apt install keystone apache2 libapache2-mod-wsgi memcached python-memcache
vi /etc/keystone/keystone.conf
[DEFAULT]
...
admin_token = ***ADMIN_TOKEN***
verbose = True
[database]
...
connection = mysql+pymysql://keystone:***KEYSTONE_DBPASS***@CloudController/keystone
[memcache]
...
servers = localhost:11211
[token]
...
provider = uuid
driver = memcache
[revoke]
...
driver = sql
su -s /bin/sh -c "keystone-manage db_sync" keystone
vi /etc/apache2/apache2.conf
ServerName CloudController
vi /etc/apache2/sites-available/wsgi-keystone.conf
Listen 5000
Listen 35357
<VirtualHost *:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/apache2/keystone.log
CustomLog /var/log/apache2/keystone_access.log combined
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>
<VirtualHost *:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/apache2/keystone.log
CustomLog /var/log/apache2/keystone_access.log combined
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>
ln -s /etc/apache2/sites-available/wsgi-keystone.conf /etc/apache2/sites-enabled
# keystone itself stays disabled (the "manual" override above) and is served by Apache
systemctl restart apache2
rm -f /var/lib/keystone/keystone.db
export OS_TOKEN=***ADMIN_TOKEN***
export OS_URL=http://CloudController:35357/v3
export OS_IDENTITY_API_VERSION=3
# Create the Identity service entry
openstack service create --name keystone --description "OpenStack Identity" identity
# Create the 3 API endpoints: public, internal, and admin
openstack endpoint create --region RegionOne identity public http://CloudController:5000/v2.0
openstack endpoint create --region RegionOne identity internal http://CloudController:5000/v2.0
openstack endpoint create --region RegionOne identity admin http://CloudController:35357/v2.0
# Create projects, users, and roles
openstack project create --domain default --description "Admin Project" admin
openstack user create --domain default --password-prompt admin
openstack role create admin
openstack role add --project admin --user admin admin
openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project" demo
openstack user create --domain default --password-prompt demo
openstack role create user
openstack role add --project demo --user demo user
- Verify
unset OS_TOKEN OS_URL
#Test admin
openstack --os-auth-url http://CloudController:35357/v3 \
--os-project-domain-id default --os-user-domain-id default \
--os-project-name admin --os-username admin --os-auth-type password token issue
#Test demo
openstack --os-auth-url http://CloudController:5000/v3 \
--os-project-domain-id default --os-user-domain-id default \
--os-project-name demo --os-username demo --os-auth-type password token issue
vim ~/.openrc
function openrc_admin() {
echo "Switch to user admin"
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=***ADMIN_PASS***
export OS_AUTH_URL=http://CloudController:35357/v3
export OS_IDENTITY_API_VERSION=3
}
echo 'source ~/.openrc' >> ~/.bashrc
source ~/.openrc
Usage:
openrc_admin
#test
openstack token issue
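A matching function for the demo user follows the same pattern (assuming it lives in the same ~/.openrc; note it uses the public endpoint on port 5000 rather than the admin one):
function openrc_demo() {
    echo "Switch to user demo"
    export OS_PROJECT_DOMAIN_ID=default
    export OS_USER_DOMAIN_ID=default
    export OS_PROJECT_NAME=demo
    export OS_TENANT_NAME=demo
    export OS_USERNAME=demo
    export OS_PASSWORD=***DEMO_PASS***
    export OS_AUTH_URL=http://CloudController:5000/v3
    export OS_IDENTITY_API_VERSION=3
}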
The OpenStack Image service includes the following components:
- glance-api: Accepts Image API calls for image discovery, retrieval, and storage.
- glance-registry: Stores, processes, and retrieves metadata about images. Metadata includes items such as size and type. *** WARN: The registry is a private internal service meant for use by the OpenStack Image service. Do not expose this service to users. ***
- Database: Stores image metadata; you can choose your database depending on your preference. Most deployments use MySQL or SQLite.
- Storage repository for image files: Various repository types are supported including normal file systems, Object Storage, RADOS block devices, HTTP, and Amazon S3. Note that some repositories will only support read-only usage.
mysql -u root -p
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '***GLANCE_DBPASS***';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '***GLANCE_DBPASS***';
- On the Management node:
openrc_admin
openstack user create --domain default --password-prompt glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image service" image
openstack endpoint create --region RegionOne image public http://CloudController:9292
openstack endpoint create --region RegionOne image internal http://CloudController:9292
openstack endpoint create --region RegionOne image admin http://CloudController:9292
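Confirm the Image service and its endpoints were registered:
openstack service list
openstack endpoint list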
On CloudController, configure Glance:
apt-get install glance python-glanceclient
vi /etc/glance/glance-api.conf
[database]
...
connection = mysql+pymysql://glance:***GLANCE_DBPASS***@CloudController/glance
[keystone_authtoken]
...
auth_uri = http://CloudController:5000
auth_url = http://CloudController:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = ***GLANCE_PASS***
[paste_deploy]
...
flavor = keystone
[glance_store]
...
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
[DEFAULT]
...
notification_driver = noop
verbose = True
vi /etc/glance/glance-registry.conf
[DEFAULT]
...
notification_driver = noop
verbose = True
[database]
...
connection = mysql+pymysql://glance:***GLANCE_DBPASS***@CloudController/glance
[keystone_authtoken]
...
auth_uri = http://CloudController:5000
auth_url = http://CloudController:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = ***GLANCE_PASS***
[paste_deploy]
...
flavor = keystone
su -s /bin/sh -c "glance-manage db_sync" glance
service glance-registry restart
service glance-api restart
rm -f /var/lib/glance/glance.sqlite
On the Management node, test the image import:
openrc_admin
echo "export OS_IMAGE_API_VERSION=2" | tee -a admin-openrc.sh demo-openrc.sh
wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
glance image-create --name "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress
glance image-list
http://docs.openstack.org/liberty/install-guide-ubuntu/common/get_started_compute.html
http://docs.openstack.org/liberty/install-guide-ubuntu/nova-controller-install.html
On Controller Node:
mysql -u root -p
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '***NOVA_DBPASS***';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '***NOVA_DBPASS***';
On the Management node:
openrc_admin
openstack user create --domain default --password-prompt nova
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://CloudController:8774/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute internal http://CloudController:8774/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute admin http://CloudController:8774/v2/%\(tenant_id\)s
On Controller Node:
sudo apt install nova-api nova-cert nova-conductor nova-consoleauth nova-novncproxy nova-scheduler python-novaclient
vi /etc/nova/nova.conf
[database]
...
connection = mysql+pymysql://nova:***NOVA_DBPASS***@CloudController/nova
[DEFAULT]
...
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.3.11
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
verbose = True
[oslo_messaging_rabbit]
...
rabbit_host = CloudController
rabbit_userid = openstack
rabbit_password = ***RABBIT_PASS***
[keystone_authtoken]
...
auth_uri = http://CloudController:5000
auth_url = http://CloudController:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = ***NOVA_PASS***
[vnc]
...
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
[glance]
...
host = CloudController
[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp
su -s /bin/sh -c "nova-manage db sync" nova
systemctl restart nova-api
systemctl restart nova-cert
systemctl restart nova-consoleauth
systemctl restart nova-scheduler
systemctl restart nova-conductor
systemctl restart nova-novncproxy
rm -f /var/lib/nova/nova.sqlite
lxc-attach --name CloudCompute1
adduser cloud
adduser cloud sudo
deluser ubuntu
# edit /etc/ssh/sshd_config and comment out `PasswordAuthentication`
systemctl restart ssh
exit
ssh [email protected]
sudo apt install nova-compute sysfsutils
vi /etc/nova/nova.conf
[DEFAULT]
...
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.3.31
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
verbose = True
[oslo_messaging_rabbit]
...
rabbit_host = CloudController
rabbit_userid = openstack
rabbit_password = ***RABBIT_PASS***
[keystone_authtoken]
...
auth_uri = http://CloudController:5000
auth_url = http://CloudController:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = ***NOVA_PASS***
[vnc]
...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://CloudController:6080/vnc_auto.html
[glance]
...
host = CloudController
[oslo_concurrency]
...
lock_path = /var/lib/nova/tmp
- To check virtualization support (KVM is typically not usable from inside these LXC containers, which is why `virt_type = qemu` is set below):
egrep -c '(vmx|svm)' /proc/cpuinfo
vi /etc/nova/nova-compute.conf
[libvirt]
...
virt_type = qemu
sudo systemctl restart nova-compute
rm -f /var/lib/nova/nova.sqlite
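Finally, a sketch of how to confirm the compute node registered with the controller, run from the management node with the admin credentials loaded:
openrc_admin
openstack compute service list
# nova-compute on CloudCompute1 should appear with state "up"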