Reference:
We are using two nodes:
- Controller
- Compute
Operating System: Ubuntu (latest version)
Change Hostname
sudo hostnamectl set-hostname cloud3
Configure Static IPs
nano /etc/netplan/50
network:
version: 2
renderer: networkd
ethernets:
enp0s3:
dhcp4: false
addresses:
- 192.168.0.87/24
routes:
- to: default
via: 192.168.0.1
nameservers:
addresses:
- 8.8.8.8
- 8.8.4.4
enp0s8:
dhcp4: false
addresses:
- 192.168.106.15/24
netplan apply
(On netplan-based Ubuntu there is no legacy networking service to restart; netplan apply is sufficient.)
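Netplan YAML is indentation-sensitive, so a quick sanity check before applying can catch typos. A minimal sketch, writing the config to a temporary file first; the temp-file workflow and grep checks are illustrative, not part of the original guide:

```shell
# Write the netplan config to a temp file and sanity-check key fields
# before copying it into /etc/netplan/ and running netplan apply.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s3:
      dhcp4: false
      addresses:
        - 192.168.0.87/24
EOF
# Basic sanity checks: the static address and renderer must be present.
grep -q '192.168.0.87/24' "$cfg" && echo "address ok"
grep -q 'renderer: networkd' "$cfg" && echo "renderer ok"
```

On a real node, `netplan try` additionally applies the config with an automatic rollback timer.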
Update /etc/hosts file
vi /etc/hosts
192.168.106.87 controller
#10.0.2.50 compute
Configure passwordless SSH authentication on both VMs
ssh-keygen
# On the compute node, change the root password
passwd root
vi /etc/ssh/sshd_config
#uncomment
PermitRootLogin yes
service sshd restart
#copy ssh-key from controller to compute node
ssh-copy-id -i root@compute
#Install NTP
apt install chrony -y
vi /etc/chrony/chrony.conf
Comment out the default NTP servers:
#server 0.asia.pool.ntp.org
#server 1.asia.pool.ntp.org
#server 2.asia.pool.ntp.org
#server 3.asia.pool.ntp.org
Add your own upstream NTP server (its host IP) and allow the compute subnet:
server 192.168.0.89 iburst
allow 192.168.106.15/24
service chrony restart
# On the compute node, configure NTP
vi /etc/chrony/chrony.conf
# Comment out the default NTP pools:
#pool 0.ubuntu.pool.ntp.org
#pool 1
#pool 2
#add controller server ip as ntp server
server controller iburst
service chrony restart
Verify ntp server
chronyc sources
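A synchronized source shows up with a `^*` marker in the first column of `chronyc sources`. A small sketch that checks for it; it runs here against captured sample output (the sample line is illustrative) rather than a live chronyd:

```shell
# Check chronyc output for a selected (synchronized) source, marked '^*'.
# Sample output embedded here; on a real node use: out=$(chronyc sources)
out='210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller                    3   6   377    35    +12us[+15us] +/-  42ms'
if echo "$out" | grep -q '^\^\*'; then
  echo "NTP synchronized"
else
  echo "NTP NOT synchronized"
fi
```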
#Now install OpenStack
From docs.openstack.org, open the Installation Guide for your chosen version and follow the [Environment] section step by step:
Add the OpenStack package repository on both nodes
add-apt-repository cloud-archive:epoxy
Install the OpenStack client on both nodes
apt install python3-openstackclient -y
Install and configure SQL Database
apt install mariadb-server python3-pymysql -y
vi /etc/mysql/mariadb.conf.d/99-openstack.cnf
[mysqld]
bind-address = 192.168.106.15
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
service mysql restart
choose a suitable password [****] for the database root account
mysql_secure_installation
Type:
- Enter
- y
- y
- type password
- y
- y
- y
- y
Now verify MySQL:
mysql -u root -p
Type the MySQL password; you should then see the MySQL prompt:
MariaDB [(none)]>
show databases;
exit;
Install and configure rabbitmq-server
apt install rabbitmq-server -y
service rabbitmq-server restart
service rabbitmq-server status
rabbitmqctl add_user openstack ubuntu
Permit configuration, write, and read access for the openstack user
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Install and configure memcached
apt install memcached python3-memcache -y
edit /etc/memcached.conf file
vi /etc/memcached.conf
-l 192.168.106.15
service memcached restart
- Go to the "Minimal deployment" section for your version (Zed in these notes)
Install and configure keystone
Create Database for keystone
access mysql
mysql
Log in; the MySQL console shows: MariaDB [(none)]>
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'ubuntu';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'ubuntu';
EXIT;
exit from sql
Install keystone
apt install keystone -y
Edit the /etc/keystone/keystone.conf
vi /etc/keystone/keystone.conf
Add a new connection under the [database] section and comment out the default connection.
[database]
connection = mysql+pymysql://keystone:ubuntu@controller/keystone
In the [token] section, configure the Fernet token provider.
[token]
provider = fernet
Populate the Identity service database.
su -s /bin/sh -c "keystone-manage db_sync" keystone
Verifying in databases
mysql
show databases;
use keystone;
show tables;
exit;
Initialize Fernet key repositories.
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
Bootstrap the Identity service.
keystone-manage bootstrap --bootstrap-password ubuntu \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
Configure the Apache HTTP server
Edit the /etc/apache2/apache2.conf
vi /etc/apache2/apache2.conf
Add your ServerName under the "Global configuration" section.
ServerName controller
service apache2 restart
Configure the administrative account by setting the proper environmental variables:
export OS_USERNAME=admin
export OS_PASSWORD=ubuntu
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
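After exporting these variables, a quick sketch to confirm none of the required ones are missing before running any `openstack` command. The example export values mirror the ones above and are assumptions for the sketch:

```shell
# Example values; on the controller these come from the exports above.
export OS_USERNAME=admin OS_PASSWORD=ubuntu OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3 OS_IDENTITY_API_VERSION=3

# Fail fast if any required OpenStack client variable is unset or empty.
missing=0
for v in OS_USERNAME OS_PASSWORD OS_PROJECT_NAME OS_USER_DOMAIN_NAME \
         OS_PROJECT_DOMAIN_NAME OS_AUTH_URL OS_IDENTITY_API_VERSION; do
  [ -z "$(eval echo "\$$v")" ] && { echo "missing: $v"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all credentials set"
```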
Create a domain, projects, users, and roles. The Identity service provides authentication for each OpenStack service, using a combination of domains, projects, users, and roles.
To create a new domain:
openstack domain create --description "An Example Domain" example
Create the service project:
openstack project create --domain default \
--description "Service Project" service
Create the myproject project:
openstack project create --domain default \
--description "Demo Project" myproject
Create the myuser user:
openstack user create --domain default \
--password-prompt myuser
Create the myrole role:
openstack role create myrole
Add the myrole role to the myproject project and myuser user:
openstack role add --project myproject --user myuser myrole
Verify operation
Verify operation of the Identity service before installing other services.
Note
Perform these commands on the controller node.
Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variable:
unset OS_AUTH_URL OS_PASSWORD
As the admin user, request an authentication token:
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name admin --os-username admin token issue
Note
This command uses the password for the admin user.
As the myuser user created in the previous step, request an authentication token:
openstack --os-auth-url http://controller:5000/v3 \
--os-project-domain-name Default --os-user-domain-name Default \
--os-project-name myproject --os-username myuser token issue
Create OpenStack client environment scripts
Create and edit the admin-openrc file and add the following content:
Note
The OpenStack client also supports using a clouds.yaml file. For more information, see the os-client-config.
vi admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Replace ADMIN_PASS with the password you chose for the admin user in the Identity service.
Create and edit the demo-openrc file and add the following content:
vi demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Replace DEMO_PASS with the password you chose for the demo user in the Identity service.
Using the scripts
To run clients as a specific project and user, you can simply load the associated client environment script prior to running them. For example:
Load the admin-openrc file to populate environment variables with the location of the Identity service and the admin project and user credentials:
. admin-openrc
Request an authentication token:
openstack token issue
Step-by-step instructions to install and configure the OpenStack Image Service (Glance) on Ubuntu
This guide walks you through installing and configuring the Glance service on the controller node in an OpenStack environment. Images are stored using the local file system for simplicity.
Supported Version: OpenStack 2025.1 (Ubuntu)
Before installing Glance, you must set up the database, create service credentials, and register API endpoints.
Log in to your MariaDB/MySQL server as root:
mysql
Run the following SQL commands:
CREATE DATABASE glance;
Grant privileges to the glance database user (replace ubuntu with a secure password):
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'ubuntu';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'ubuntu';
Example: use secretpassword123 in place of ubuntu.
Exit the database client:
EXIT;
Load the admin credentials to gain access to administrative OpenStack commands:
. admin-openrc
Tip: ensure the admin-openrc file exists and contains correct OS credentials.
openstack user create --domain default --password-prompt glance
Enter and confirm a strong password when prompted (e.g., ubuntu).
openstack role add --project service --user glance admin
Note: this command produces no output; success is silent.
openstack service create --name glance \
--description "OpenStack Image" image
Expected output:
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Image |
| name | glance |
| type | image |
+-------------+----------------------------------+
Register public, internal, and admin endpoints for the Glance service:
openstack endpoint create --region RegionOne \
image public http://controller:9292
openstack endpoint create --region RegionOne \
image internal http://controller:9292
openstack endpoint create --region RegionOne \
image admin http://controller:9292
All URLs point to http://controller:9292, assuming your controller hostname is controller.
Verify with openstack endpoint list:
openstack endpoint list
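Every service registers the same three endpoints (public, internal, admin), so the commands can be generated instead of retyped. A hedged helper that only prints the commands for review before running them; `make_endpoints` is a name invented for this sketch, not an OpenStack command:

```shell
# Print (not run) the three endpoint-create commands for a service.
# 'make_endpoints' is a hypothetical helper for illustration only.
make_endpoints() {
  svc="$1"; url="$2"
  for iface in public internal admin; do
    echo "openstack endpoint create --region RegionOne $svc $iface $url"
  done
}
make_endpoints image http://controller:9292
```

Piping the output through `sh` would execute the commands once they look right.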
On the controller node:
sudo apt update
sudo apt install glance -y
Edit the main configuration file:
sudo vi /etc/glance/glance-api.conf
In the [database] section:
[database]
connection = mysql+pymysql://glance:ubuntu@controller/glance
Replace ubuntu with the actual password used earlier.
Note: clear any existing options in this section before adding these.
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = ubuntu
Replace ubuntu with the password you set for the glance user.
[paste_deploy]
flavor = keystone
Add or update the following sections:
[DEFAULT]
enabled_backends = fs:file
[glance_store]
default_backend = fs
[fs]
filesystem_store_datadir = /var/lib/glance/images/
This sets the local directory where images will be stored.
To find the endpoint ID:
openstack endpoint list --service glance --region RegionOne
Ensure the glance user can read system-scope resources like limits:
openstack role add --user glance --user-domain Default --system all reader
Run the database synchronization:
sudo su -s /bin/sh -c "glance-manage db_sync" glance
Note: you may see deprecation warnings; these can be safely ignored.
Restart the Glance API service to apply all changes:
sudo service glance-api restart
Glance is now ready to serve image requests.
Step-by-step instructions to verify the Glance (Image Service) installation in OpenStack 2025.1
After installing and configuring the Glance service on the controller node, it's essential to verify that the service is running correctly and can manage images.
Supported Version: OpenStack 2025.1 (Ubuntu)
Note: this guide assumes Glance was installed using the Ubuntu installation guide.
This guide helps you:
- Confirm the Glance service is up and reachable.
- Verify API endpoints are registered.
- Upload a test image.
- Validate that image operations work as expected.
Before verifying Glance:
- The Glance service must be installed and configured.
- You must have access to the controller node.
- The admin-openrc file must be available with correct credentials.
Load administrative credentials to use OpenStack CLI commands:
. admin-openrc
This sets environment variables like OS_USERNAME, OS_PASSWORD, etc.
Ensure the file exists and contains valid admin credentials.
Check if the glance-api service is running:
sudo systemctl status glance-api
Expected output:
- active (running) status
- No recent errors in logs
If not running, start it:
sudo systemctl start glance-api
sudo systemctl enable glance-api
Ensure the Image service endpoints were created correctly:
openstack endpoint list --service glance --interface public
openstack endpoint list --service glance --interface internal
openstack endpoint list --service glance --interface admin
Expected output:
- Three endpoints (public, internal, admin) pointing to http://controller:9292
- All with enabled=True and the correct region (RegionOne)
Example:
+----------------------------------+-----------+--------------+---------------------------+
| ID | Interface | Region | URL |
+----------------------------------+-----------+--------------+---------------------------+
| 340be3625e9b4239a6415d034e98aace | public | RegionOne | http://controller:9292 |
| a6e4b153c2ae4c919eccfdbb7dceb5d2 | internal | RegionOne | http://controller:9292 |
| 0c37ed58103f4300a84ff125a539032d | admin | RegionOne | http://controller:9292 |
+----------------------------------+-----------+--------------+---------------------------+
Check that the glance service is registered in OpenStack:
openstack service list | grep image
Expected output:
| 8c2c7f1b9b5049ea9e63757b5533e6d2 | glance | image |
If missing, re-run the service creation command from the install guide.
Run the following command to list current images:
openstack image list
Expected output:
+----+------+--------+
| ID | Name | Status |
+----+------+--------+
+----+------+--------+
At this stage, the list should be empty; that's normal.
If you get an authentication or connection error, double-check:
- keystone_authtoken settings in /etc/glance/glance-api.conf
- Network connectivity to controller:5000 (Keystone) and controller:9292 (Glance)
Use a small test image (like CirrOS) to validate image upload and visibility.
wget http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-x86_64-disk.img
You can use any version; update the URL accordingly.
openstack image create "cirros" \
--file cirros-0.5.2-x86_64-disk.img \
--disk-format qcow2 \
--container-format bare \
--public
Parameters explained:
- --file: Path to the image file
- --disk-format: Disk format (qcow2, raw, vmdk, etc.)
- --container-format: Container type (bare, ovf, etc.)
- --public: Makes the image accessible to all projects
List images again:
openstack image list
Expected output:
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 6a51c3d7-4c32-48bd-9a65-85e9a13f8b34 | cirros | active |
+--------------------------------------+--------+--------+
The image should be in active status.
Get detailed info about the uploaded image:
openstack image show cirros
Sample output:
id: 6a51c3d7-4c32-48bd-9a65-85e9a13f8b34
name: cirros
status: active
disk_format: qcow2
container_format: bare
size: 12717056
visibility: public
If using local file storage, confirm the image is saved on disk:
ls -la /var/lib/glance/images/
You should see a file matching the image ID (e.g., 6a51c3d7-4c32-48bd-9a65-85e9a13f8b34).
This confirms Glance is writing images to the configured directory.
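The on-disk check can be scripted: take the image ID and test that a matching file exists in the store. Simulated below with a temporary directory standing in for /var/lib/glance/images/; the ID is the sample one from above, and the `openstack image show` invocation in the comment is how you would obtain it on a real node:

```shell
# Simulate the Glance file-store check in a temp dir. On the controller:
#   store=/var/lib/glance/images
#   id=$(openstack image show cirros -f value -c id)
store=$(mktemp -d)
id=6a51c3d7-4c32-48bd-9a65-85e9a13f8b34
touch "$store/$id"          # stands in for the uploaded image file
if [ -f "$store/$id" ]; then
  echo "image $id present in store"
fi
```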
| Problem | Possible Cause | Solution |
|---|---|---|
| Unable to establish connection to http://controller:9292 | Glance service not running | Run: sudo systemctl restart glance-api |
| HTTP 401 Unauthorized | Incorrect keystone_authtoken config | Check password, username, and auth URL in glance-api.conf |
| Image stuck in queued or saving state | Permission issue on image directory | Ensure /var/lib/glance/images/ is owned by glance:glance |
| Endpoint not found | Missing or incorrect endpoint | Recreate endpoints using openstack endpoint create |
| No such file or directory during upload | Image file not found | Confirm path and permissions on the .img file |
Check logs for details:
sudo tail -20 /var/log/glance/glance-api.log
Now that Glance is verified:
- Proceed to install Nova (Compute Service).
- Launch your first VM using the uploaded CirrOS image.
- Test image sharing between projects (if using shared visibility).
- Official Docs: https://docs.openstack.org/glance/2025.1/install/verify.html
- Glance CLI Guide: OpenStack Image CLI
Congratulations! You've successfully verified the Glance installation. The Image Service is ready for production use.
This guide provides step-by-step instructions to install and configure the OpenStack Placement service on Ubuntu. The Placement service tracks inventory and usage of resources (like compute, memory, and disk) in an OpenStack cloud.
Note: This guide is based on the official OpenStack documentation for the 2025.1 release and tailored for Ubuntu systems.
Before installing the Placement service, you must set up a database, create service credentials, and register API endpoints.
1. Connect to the MariaDB/MySQL database as the root user:
   sudo mysql
2. Create the placement database:
   CREATE DATABASE placement;
3. Grant privileges to the placement database user (replace ubuntu with a strong password):
   GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'ubuntu';
   GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'ubuntu';
4. Exit the database client:
   EXIT;
5. Source the admin credentials to get administrative access:
   . admin-openrc
   Ensure the admin-openrc file exists and contains the correct admin credentials.
6. Create the placement user in OpenStack Identity (Keystone):
   openstack user create --domain default --password-prompt placement
   When prompted, enter a password (e.g., ubuntu) and confirm it.
7. Add the placement user to the service project with the admin role:
   openstack role add --project service --user placement admin
   This command produces no output on success.
8. Create the Placement service entry in the service catalog:
   openstack service create --name placement --description "Placement API" placement
   Example output:
   +-------------+----------------------------------+
   | Field       | Value                            |
   +-------------+----------------------------------+
   | description | Placement API                    |
   | name        | placement                        |
   | type        | placement                        |
   +-------------+----------------------------------+
9. Create the Placement API endpoints (public, internal, admin). Replace controller with your controller node's hostname if different:
   openstack endpoint create --region RegionOne placement public http://controller:8778
   openstack endpoint create --region RegionOne placement internal http://controller:8778
   openstack endpoint create --region RegionOne placement admin http://controller:8778
   Note: the default port is 8778. Adjust if your environment uses a different port (e.g., 8780).
Install the Placement API package using APT:
sudo apt update
sudo apt install placement-api -y
Edit the main configuration file:
sudo vi /etc/placement/placement.conf
In the [placement_database] section, set the database connection string:
[placement_database]
connection = mysql+pymysql://placement:ubuntu@controller/placement
Replace ubuntu with the password you set earlier.
In the [api] section, ensure the auth strategy is set to Keystone:
[api]
auth_strategy = keystone
In the [keystone_authtoken] section, configure authentication settings:
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = ubuntu
Replace ubuntu with the password you assigned to the placement user.
Important:
- Comment out or remove any other existing options in [keystone_authtoken].
- Ensure domain names (Default) match your Keystone configuration (case-sensitive).
Populate the database with initial schema and data:
sudo su -s /bin/sh -c "placement-manage db sync" placement
Tip: you may see deprecation warnings; these can be safely ignored.
The Placement API runs under Apache. Reload the service to apply changes:
sudo service apache2 restart
To verify that the Placement service is working:
1. List available services:
   openstack service list | grep placement
   Expected output:
   | 2d1a27022e6e4185b86adac4444c495f | placement | placement |
2. List Placement API endpoints:
   openstack endpoint list | grep placement
3. Test API access (optional):
   curl -s http://controller:8778 | python3 -m json.tool
   You should see a JSON response listing available versions.
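The root of the Placement API returns a version document. A sketch that extracts the supported microversion range with python3; it runs here against a sample response that is illustrative of the document's shape, not a recorded reply from a live service:

```shell
# On the controller, capture the real reply with:
#   json=$(curl -s http://controller:8778)
# Sample version document (illustrative values):
json='{"versions": [{"id": "v1.0", "max_version": "1.39", "min_version": "1.0", "status": "CURRENT"}]}'
echo "$json" | python3 -c '
import json, sys
doc = json.load(sys.stdin)
v = doc["versions"][0]
print("placement supports", v["min_version"], "to", v["max_version"])
'
```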
| Issue | Solution |
|---|---|
| Unable to connect to database | Verify MySQL host, user, password, and network access |
| Authentication failed | Double-check keystone_authtoken settings and password |
| 404 Not Found on API endpoint | Ensure Apache is running and the placement WSGI app is configured |
| placement-manage: command not found | Confirm the placement-api package is installed |
Check logs for errors:
sudo tail -f /var/log/placement/placement-api.log
sudo tail -f /var/log/apache2/error.log
You have now successfully:
- Created the Placement database
- Registered the Placement service and endpoints
- Installed and configured the Placement API
- Synced the database and restarted services
The Placement service is now ready to support Compute (Nova) and other resource tracking services in your OpenStack environment.
Next Steps:
- Proceed to install and configure the Nova (Compute) service.
- Ensure Nova is configured to use the Placement API for resource tracking.
Official Docs: OpenStack Placement Installation Guide
After installing and configuring the OpenStack Placement service, it's essential to verify that it is functioning correctly. This guide provides clear, step-by-step instructions based on the official OpenStack Placement Verification documentation for the 2025.1 release.
Verify the correct operation of the Placement service by:
- Running upgrade checks
- Installing the osc-placement CLI plugin
- Listing resource classes and traits via the API
Before performing any verification steps, you must authenticate as an administrative user.
. admin-openrc
Tip: ensure the admin-openrc file exists and contains the correct environment variables (e.g., OS_USERNAME, OS_PASSWORD). If not available, use an equivalent method to source admin credentials.
This command verifies the database schema and checks for potential upgrade issues.
placement-status upgrade check
+----------------------------------+
| Upgrade Check Results |
+----------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success |
| Details: None |
+----------------------------------+
| Check: Incomplete Consumers |
| Result: Success |
| Details: None |
+----------------------------------+
- If you see errors like "Unable to connect to database", verify:
  - Database host, username, password in /etc/placement/placement.conf
  - Network connectivity to the database server
- If authentication fails, double-check [keystone_authtoken] settings.
Note: you may see deprecation warnings; these are safe to ignore during verification.
The osc-placement plugin enables OpenStack CLI commands to interact with the Placement API.
pip3 install osc-placement
Recommended if you're using a virtual environment or don't have distribution packages.
sudo apt install python3-osc-placement
Use this if you prefer system packages managed by APT.
Check if the plugin is loaded:
openstack help | grep -i placement
You should see new commands like:
- resource class list
- trait list
- allocation list
Resource classes represent types of resources tracked by Placement (e.g., disk, memory, CPU).
Run this command to list them:
openstack --os-placement-api-version 1.2 resource class list --sort-column name
The --os-placement-api-version flag ensures compatibility; version 1.2 supports resource class listing.
+----------------------------+
| name |
+----------------------------+
| DISK_GB |
| IPV4_ADDRESS |
| MEMORY_MB |
| VCPU |
| CUSTOM_FPGA_XILINX_VU9P |
| ... |
+----------------------------+
If you get a 404 or connection error, ensure:
- Apache is running: sudo systemctl status apache2
- Endpoint URLs are correct: openstack endpoint list --service placement
Traits are metadata tags used to describe capabilities or properties of resource providers (e.g., COMPUTE_VOLUME_MULTI_ATTACH).
List all available traits:
openstack --os-placement-api-version 1.6 trait list --sort-column name
Version 1.6 introduces trait support in the API.
+---------------------------------------+
| name |
+---------------------------------------+
| COMPUTE_DEVICE_TAGGING |
| COMPUTE_NET_ATTACH_INTERFACE |
| COMPUTE_VOLUME_MULTI_ATTACH |
| HW_CPU_X86_SSE |
| CUSTOM_TRAIT_EXAMPLE |
| ... |
+---------------------------------------+
Success means:
- The Placement API is reachable
- Authentication works
- Database is synced and populated
| Problem | Solution |
|---|---|
| Command 'openstack' not found | Install the OpenStack client: sudo apt install python3-openstackclient |
| HTTP 401 Unauthorized | Check keystone_authtoken credentials in /etc/placement/placement.conf |
| HTTP 404 Not Found | Confirm the endpoint URL (http://controller:8778) and Apache configuration |
| placement-status: command not found | Ensure the placement-common package is installed |
sudo tail -f /var/log/placement/placement-api.log
sudo tail -f /var/log/apache2/error.log
Look for:
- Database connection errors
- Keystone authentication failures
- WSGI application loading issues
| Task | Status |
|---|---|
| Source admin credentials | Done |
| Run placement-status upgrade check | Done |
| Install osc-placement plugin | Done |
| List resource classes | Done |
| List traits | Done |
| Confirm API accessibility | Done |
Now that the Placement service is verified:
- Proceed to install and configure Nova (Compute) Controller Services
- Ensure Nova is configured to use the Placement API
- Later, verify integration using:
openstack hypervisor stats show
Official Docs: Verify Placement Installation
Simple & Step-by-Step Deployment Guide
Based on: OpenStack Nova Install Guide (2025.1)
Role: Controller Node
Distribution: Ubuntu
Focus: Clear, easy-to-follow instructions with explanations
This guide walks you through installing and configuring the Nova (Compute) service on the controller node in an OpenStack environment.
Nova manages virtual machines (VMs), including creation, scheduling, and lifecycle management.
You will:
- Set up databases
- Create service users and endpoints
- Install Nova components
- Configure nova.conf
- Start services
Prerequisites:
- MySQL/MariaDB, RabbitMQ, Keystone (Identity), Glance (Image), and Placement services must already be installed and running.
Connect to your database server and create three databases for Nova.
sudo mysql
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'ubuntu';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'ubuntu';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'ubuntu';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'ubuntu';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'ubuntu';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'ubuntu';
Replace ubuntu with a strong password (e.g., nova_db_secret).
EXIT;
openstack user create --domain default --password-prompt nova
When prompted, enter a password (e.g., ubuntu) and confirm it.
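The three Nova databases receive identical grants, so the SQL can be generated rather than typed by hand. A dry-run sketch that only prints the statements for review; NOVA_DBPASS is a placeholder invented for this sketch, substitute your own password:

```shell
# Emit the GRANT statements for all three Nova databases (dry run).
# NOVA_DBPASS is a placeholder; pipe the output into mysql once reviewed.
pass=NOVA_DBPASS
for db in nova_api nova nova_cell0; do
  for host in localhost '%'; do
    echo "GRANT ALL PRIVILEGES ON ${db}.* TO 'nova'@'${host}' IDENTIFIED BY '${pass}';"
  done
done
```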
Example output:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| name | nova |
| ... | ... |
+---------------------+----------------------------------+
openstack role add --project service --user nova admin
No output means success.
openstack service create --name nova --description "OpenStack Compute" compute
Expected output:
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| name | nova |
| type | compute |
+-------------+----------------------------------+
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
Port 8774 (API v2.1) is the default for the Nova API. Ensure controller resolves correctly.
Install required Nova components on the controller node:
sudo apt update
sudo apt install nova-api nova-conductor nova-novncproxy nova-scheduler
Components installed:
- nova-api: REST API endpoint
- nova-conductor: Mediates DB interactions
- nova-scheduler: Decides where to run VMs
- nova-novncproxy: Provides VNC console access
Edit the main Nova configuration file:
sudo vi /etc/nova/nova.conf
Add or modify the following sections:
[api_database]
connection = mysql+pymysql://nova:ubuntu@controller/nova_api
[database]
connection = mysql+pymysql://nova:ubuntu@controller/nova
Replace ubuntu with the database password you set earlier.
[DEFAULT]
transport_url = rabbit://openstack:ubuntu@controller:5672/
Replace ubuntu with the password for the openstack user in RabbitMQ.
[api]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = ubuntu
Replace ubuntu with the password you chose for the nova user.
Important: comment out or remove any other lines in [keystone_authtoken].
[service_user]
send_service_user_token = true
auth_url = http://controller:5000/v3
auth_strategy = keystone
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = ubuntu
Use the same password as above.
[DEFAULT]
my_ip = 10.0.0.11
Replace 10.0.0.11 with the management network IP of your controller node.
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
This allows VNC console access via the dashboard.
[glance]
api_servers = http://controller:9292
Ensure Glance is reachable at port 9292.
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
Create the directory if needed:
sudo mkdir -p /var/lib/nova/tmp
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
Replace PLACEMENT_PASS with the password you set for the placement user.
Remove or comment out any other options in [placement].
- An ellipsis (...) in config examples means keep existing defaults.
- Do not duplicate sections; edit existing ones or add if missing.
- Avoid mixing old and new configs.
Run these commands in order:
sudo su -s /bin/sh -c "nova-manage api_db sync" nova
Ignore deprecation warnings.
sudo su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
cell0 holds failed or deleted instances.
sudo su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
Sample output:
Created cell with UUID: f690f4fd-2bc5-4f15-8145-db561a7b9d3d
sudo su -s /bin/sh -c "nova-manage db sync" nova
This sets up schemas for nova, nova_cell0, and nova_cell1.
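These sync and cell-registration commands are order-sensitive, so it can help to drive them from a script that stops at the first failure. A dry-run sketch that only prints each step; the comment shows the real invocation used throughout this guide:

```shell
# Dry-run the ordered nova-manage steps; 'set -e' aborts on first failure.
set -e
steps='api_db sync
cell_v2 map_cell0
cell_v2 create_cell --name=cell1 --verbose
db sync
cell_v2 list_cells'
echo "$steps" | while read -r step; do
  # On a real controller, replace the echo with:
  #   sudo su -s /bin/sh -c "nova-manage $step" nova
  echo "would run: nova-manage $step"
done
```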
sudo su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
Expected output:
+-------+--------------------------------------+----------------------------+----------------------------------------------------+----------+
| Name | UUID | Transport URL | Database Connection | Disabled |
+-------+--------------------------------------+----------------------------+----------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:/ | mysql+pymysql://nova:****@controller/nova_cell0 | False |
| cell1 | f690f4fd-2bc5-4f15-8145-db561a7b9d3d | rabbit://openstack@... | mysql+pymysql://nova:****@controller/nova_cell1 | False |
+-------+--------------------------------------+----------------------------+----------------------------------------------------+----------+
Success means both cell0 and cell1 appear and are not disabled.
Apply all changes by restarting the Nova services:
sudo service nova-api restart
sudo service nova-scheduler restart
sudo service nova-conductor restart
sudo service nova-novncproxy restart
All services should restart without errors.
Now that everything is running, verify Nova works.
openstack compute service list
You should see:
- nova-scheduler
- nova-conductor
- nova-compute (once compute nodes are added)
- All services in the "up" state
sudo nova-manage cell_v2 list_hosts
This should show registered compute nodes.
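For repeatable verification, the state check can be scripted. A minimal sketch, assuming the `-f value` output format of the openstack client; the here-doc stands in for live output so the parsing logic can be shown offline:

```shell
# Sketch: machine-check service states instead of reading the table by eye.
# sample_output stands in for a live run of:
#   openstack compute service list -f value -c Binary -c State
sample_output() {
cat <<'EOF'
nova-scheduler up
nova-conductor up
EOF
}
down=$(sample_output | awk '$2 != "up" {print $1}')
if [ -z "$down" ]; then
  echo "all nova services up"
else
  echo "services not up: $down"
fi
```

On a live controller, replace the body of sample_output with the real command (with admin credentials sourced, as elsewhere in this guide).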
| Issue | Solution |
|---|---|
| nova-api fails to start | Check keystone_authtoken settings and password |
| Database sync errors | Confirm DB connectivity and credentials in nova.conf |
| cell_v2 command not found | Ensure the nova-conductor package is installed |
| 503 Service Unavailable | Make sure Apache/Nginx is running; check /var/log/nova/*.log |
| Host not showing in list_hosts | Wait for the compute node to register; check firewall and networking |
Check logs:
sudo tail -f /var/log/nova/nova-api.log
sudo tail -f /var/log/nova/nova-scheduler.log

| Task | Status |
|---|---|
| Source admin credentials | ✅ |
| Create nova_api, nova, nova_cell0 DBs | ✅ |
| Create nova user and endpoints | ✅ |
| Install Nova packages | ✅ |
| Configure /etc/nova/nova.conf | ✅ |
| Sync databases and create cells | ✅ |
| Restart services | ✅ |
| Verify with openstack compute service list | ✅ |
After completing the controller setup:
- Install the Nova Compute Service on compute nodes
- Install and configure Neutron (Networking)
- Launch your first instance using:
openstack server create ...
Official Docs: Nova Controller Installation (Ubuntu)
You're now ready to manage compute resources in OpenStack!
Simple & Step-by-Step Guide for Compute Nodes
Based on: OpenStack Nova Compute Install Guide (2025.1)
Role: Compute Node
Distribution: Ubuntu
Focus: easy-to-follow, beginner-friendly instructions
This guide walks you through installing and configuring the Nova Compute service (nova-compute) on a compute node in your OpenStack environment.
The compute node runs virtual machines (VMs) using KVM/QEMU and connects to the controller for management.
You will:
- Install the nova-compute package
- Configure /etc/nova/nova.conf
- Enable hardware acceleration (KVM) or fall back to QEMU
- Start the service
- Register the compute node from the controller
β οΈ Prerequisites:
- Controller node must have Keystone, Glance, Placement, and Nova (controller services) already installed and working.
- Network connectivity between controller and compute nodes.
- NTP synchronized on all nodes.
Log in to your compute node (e.g., compute1) and install the Nova compute service.
sudo apt update
sudo apt install nova-compute
This installs:
- nova-compute: the main service that manages VMs
- Dependencies such as libvirt, qemu, and kvm
Edit the main Nova configuration file:
sudo vi /etc/nova/nova.conf
Update the following sections:
In the [DEFAULT] section:
[DEFAULT]
transport_url = rabbit://openstack:ubuntu@controller
Note: replace ubuntu with the password you set for the openstack user in RabbitMQ.
Example: if your RabbitMQ password is rabbit_secret, use:
transport_url = rabbit://openstack:rabbit_secret@controller
In the [api] and [keystone_authtoken] sections:
[api]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = ubuntu
Note: replace ubuntu with the password you chose for the nova user in Keystone.
Remove or comment out any other lines in [keystone_authtoken].
[service_user]
send_service_user_token = true
auth_url = http://controller:5000/v3
auth_strategy = keystone
auth_type = password
project_domain_name = Default
project_name = service
user_domain_name = Default
username = nova
password = ubuntu
Note: use the same password as above.
In the [DEFAULT] section:
[DEFAULT]
my_ip = 10.0.0.31
Note: replace 10.0.0.31 with the management network IP address of your compute node.
Example: first compute node 10.0.0.31, second 10.0.0.32, and so on.
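When scripting node setup, my_ip can be derived from the interface rather than typed per node. A sketch; the interface name enp0s8 and the sample line are assumptions, and on a live node you would take the line from `ip -4 -o addr show <iface>`:

```shell
# Sketch: extract the address from one line of `ip -4 -o addr show` output.
# The sample line stands in for real output on an assumed interface enp0s8.
sample='2: enp0s8    inet 10.0.0.31/24 brd 10.0.0.255 scope global enp0s8'
my_ip=$(printf '%s\n' "$sample" | awk '{split($4, a, "/"); print a[1]}')
echo "my_ip = $my_ip"
```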
In the [vnc] section:
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
Explanation:
- server_listen = 0.0.0.0: listens on all interfaces
- novncproxy_base_url: where users access VM consoles via a browser
If the controller hostname is not resolvable from client machines, replace controller with its IP (e.g., http://10.0.0.11:6080/vnc_auto.html).
[glance]
api_servers = http://controller:9292
Ensure the Image service is reachable.
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
Create the directory if missing:
sudo mkdir -p /var/lib/nova/tmp
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS
Note: replace PLACEMENT_PASS with the password you set for the placement user.
Comment out or remove any other options in [placement].
Run this command to check if your CPU supports hardware acceleration (KVM):
egrep -c '(vmx|svm)' /proc/cpuinfo

| Output | Meaning | Action |
|---|---|---|
| 1 or higher | KVM supported | No extra config needed |
| 0 | No KVM support | Configure Nova to use QEMU |
Edit the libvirt configuration:
sudo vi /etc/nova/nova-compute.conf
Add or modify the [libvirt] section:
[libvirt]
virt_type = qemu
This tells Nova to use software-based QEMU instead of hardware-accelerated KVM.
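The table above can be folded into one conditional so provisioning scripts pick the right virt_type automatically. A sketch; it only prints the value to set, it does not edit nova-compute.conf:

```shell
# Sketch: choose virt_type from the CPU flags check shown earlier.
# Prints the setting to use; it does not edit /etc/nova/nova-compute.conf.
count=$(grep -Ec '(vmx|svm)' /proc/cpuinfo 2>/dev/null || true)
if [ "${count:-0}" -ge 1 ]; then
  echo "virt_type = kvm"
else
  echo "virt_type = qemu"
fi
```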
Apply all changes:
sudo service nova-compute restart
Ensure it starts automatically on boot:
sudo systemctl enable nova-compute
Check the log:
sudo tail -f /var/log/nova/nova-compute.log
If the log shows "AMQP server on controller:5672 is unreachable":
- Ensure RabbitMQ is running on the controller.
- Open port 5672 on the controller's firewall:
sudo ufw allow from 10.0.0.0/24 to any port 5672
Replace 10.0.0.0/24 with your management network.
Then restart:
sudo service nova-compute restart
Note: the next step must be done on the controller node, not the compute node.
. admin-openrc
openstack compute service list --service nova-compute
Expected Output:
+----+-----------+--------------+------+---------+-------+----------------------------+
| ID | Host | Binary | Zone | Status | State | Updated At |
+----+-----------+--------------+------+---------+-------+----------------------------+
| 1 | compute1 | nova-compute | nova | enabled | up | 2025-04-05T10:00:00.000000 |
+----+-----------+--------------+------+---------+-------+----------------------------+
If state is down, check logs and network/firewall.
Register the compute node(s) in the cell database:
sudo su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Sample Output:
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc
Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc
Checking host mapping for compute host 'compute1': fe58ddc1-1d65-4f87-9456-bc040dc106b3
Creating host mapping for compute host 'compute1': fe58ddc1-1d65-4f87-9456-bc040dc106b3
Success: your compute node is now registered.
To avoid running discover_hosts manually every time you add a new compute node, edit /etc/nova/nova.conf on the controller node:
[scheduler]
discover_hosts_in_cells_interval = 300
This automatically discovers new compute nodes every 5 minutes (300 seconds).
Then restart services:
sudo service nova-scheduler restart
Back on the controller node, verify everything works:
openstack compute service list
All services should be up and enabled.
Also run:
sudo nova-manage cell_v2 list_hosts
You should see your compute node listed under cell1.
| Task | Status |
|---|---|
| Install nova-compute on compute node | ✅ |
| Configure /etc/nova/nova.conf | ✅ |
| Set correct my_ip | ✅ |
| Enable KVM or set virt_type = qemu | ✅ |
| Restart nova-compute service | ✅ |
| Run discover_hosts on controller | ✅ |
| Confirm host appears in list_hosts | ✅ |
- Next: install and configure Neutron (Networking) on controller and compute nodes
Official Docs: Nova Compute Installation (Ubuntu)
You're now ready to run virtual machines at scale!
Controller Node Setup with Self-Service Networks (Option 2)
Based on:
- Neutron Controller Install (Ubuntu)
- Neutron Option 2: Self-Service Networks
Role: Controller Node
Networking Option: Self-Service (Overlay) + Provider Networks
Tunnel: VXLAN
Distribution: Ubuntu
This guide walks you through installing and configuring the OpenStack Neutron (Networking) service on the controller node, using Option 2 β Self-Service Networks.
With this setup:
- Users can create private (self-service) networks
- Routers, NAT, and floating IPs are supported
- Instances can access the internet and be reached from outside
- VXLAN overlay networks provide tenant isolation
You will:
- Create the Neutron database and service credentials
- Install Neutron packages
- Configure the core, ML2, L3, DHCP, and metadata agents
- Integrate with Nova
- Start services
Prerequisites:
- The controller node must have MySQL, RabbitMQ, Keystone, Glance, Nova (controller services), and Placement already installed and working.
- At least two network interfaces (management + external) are recommended.
Connect to MariaDB/MySQL and create the neutron database.
sudo mysql
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'ubuntu';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'ubuntu';
Note: replace ubuntu with a strong password (e.g., neutron_db_secret).
EXIT;
Load admin credentials to run OpenStack CLI commands:
. admin-openrc
Tip: ensure your environment has the correct OS_* variables set (e.g., OS_USERNAME=admin, OS_AUTH_URL=http://controller:5000/v3).
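The SQL statements above can also be fed to mysql non-interactively, which is handy in setup scripts. A sketch; 'ubuntu' is the example password from this guide, and the statements are only printed here as a dry run:

```shell
# Sketch: the database setup above as a single here-doc.
# Printed with cat as a dry run; pipe neutron_sql into `sudo mysql` to execute.
# 'ubuntu' is this guide's example password; substitute your own.
neutron_sql() {
cat <<'SQL'
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'ubuntu';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'ubuntu';
SQL
}
neutron_sql
```

To run for real: `neutron_sql | sudo mysql`.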
openstack user create --domain default --password-prompt neutron
When prompted, enter a password (e.g., ubuntu) and confirm it.
β Example Output:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| name | neutron |
| ... | ... |
+---------------------+----------------------------------+
openstack role add --project service --user neutron admin
No output means success.
openstack service create --name neutron --description "OpenStack Networking" network
Expected Output:
+-------------+---------------------------+
| Field | Value |
+-------------+---------------------------+
| description | OpenStack Networking |
| name | neutron |
| type | network |
+-------------+---------------------------+
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
Port 9696 is the default Neutron API port.
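Since the three endpoint-create commands differ only in the interface name, they can be generated in a loop. A sketch, shown as a dry run with echo; remove the echo to execute for real (with admin credentials sourced, as earlier):

```shell
# Sketch: generate the three endpoint-create commands above in one loop.
# echo makes this a dry run; drop it to actually create the endpoints.
for iface in public internal admin; do
  echo openstack endpoint create --region RegionOne network "$iface" http://controller:9696
done
```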
Install required Neutron components on the controller node:
sudo apt update -y
sudo apt install neutron-server neutron-plugin-ml2 \
neutron-openvswitch-agent neutron-l3-agent \
neutron-dhcp-agent neutron-metadata-agent \
openvswitch-switch -y
Components installed:
- neutron-server: REST API and core logic
- neutron-plugin-ml2: Modular Layer 2 plugin (supports VLAN/VXLAN)
- neutron-openvswitch-agent: OVS agent for switching
- neutron-l3-agent: provides routing between networks
- neutron-dhcp-agent: assigns IPs via DHCP
- neutron-metadata-agent: delivers metadata to instances
- openvswitch-switch: OVS kernel module and service
Edit the main Neutron configuration file:
sudo vi /etc/neutron/neutron.conf
Add or modify the following sections:
Under [database] Section:
connection = mysql+pymysql://neutron:ubuntu@controller/neutron
Note: replace ubuntu with your database password.
Add under the [DEFAULT] section:
transport_url = rabbit://openstack:ubuntu@controller
core_plugin = ml2
service_plugins = router
auth_strategy = keystone
Note: replace ubuntu with the password for the openstack user in RabbitMQ.
Under [keystone_authtoken] Section:
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = ubuntu
Note: replace ubuntu with the password you set for the neutron user.
Comment out or remove any other lines in [keystone_authtoken].
Add under the [DEFAULT] section:
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
This tells Neutron to notify Nova when ports change.
Add under the [nova] section:
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = nova
password = ubuntu
Add under the [oslo_concurrency] Section:
lock_path = /var/lib/neutron/tmp
Create the directory:
sudo mkdir -p /var/lib/neutron/tmp
The ML2 plugin enables self-service networks using VXLAN.
sudo vi /etc/neutron/plugins/ml2/ml2_conf.ini
Under the [ml2] section:
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security
Explanation:
- tenant_network_types = vxlan: use VXLAN for private networks
- mechanism_drivers = openvswitch,l2population: enable OVS and the ARP responder
- l2population: optimizes VXLAN flooding using controller-based learning
Under [ml2_type_vxlan] Section:
vni_ranges = 1:1000
This defines the VXLAN VNI range for tenant networks.
Under [ml2_type_flat] Section:
flat_networks = provider
Under [securitygroup] Section:
enable_ipset = true
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = true
Required for security groups to work with OVS.
Edit:
sudo vi /etc/neutron/plugins/ml2/openvswitch_agent.ini
Add under the [ovs] section:
local_ip = 10.0.0.31
bridge_mappings = provider:br-provider
Replace these values:
- br-provider: name of the OVS bridge connected to the physical provider (external) network.
- 10.0.0.31: management IP address of this node, used as the VXLAN tunnel endpoint (on the controller, use the controller's management IP).
Tip: use the same IP as my_ip in /etc/nova/nova.conf.
Set Up Provider Network Bridge
You need an OVS bridge (br-provider) that connects to a physical interface (e.g., ens3) for provider network traffic.
1. Create the provider bridge:
sudo ovs-vsctl add-br br-provider
2. Add the physical interface to the bridge:
sudo ovs-vsctl add-port br-provider ens3
Replace ens3 with your actual physical network interface (e.g., eth1, enp2s0).
Warning: running this command may disconnect your SSH session if ens3 is your management interface.
Best practice: use a dedicated interface for provider networks.
Add Under [agent] Section:
tunnel_types = vxlan
l2_population = true
Reminder: local_ip in the [ovs] section above must be this node's management IP.
- local_ip: used for VXLAN tunnel endpoints
- tunnel_types = vxlan: enables VXLAN overlay networks
- l2_population = true: reduces flooding with the ARP responder (recommended)
Add Under [securitygroup] Section:
enable_security_group = true
firewall_driver = openvswitch
# firewall_driver = iptables_hybrid  # alternative option
Use the openvswitch driver for better performance with OVS. If using iptables_hybrid, ensure kernel bridge filtering is enabled:
sudo modprobe br_netfilter
echo 'br_netfilter' | sudo tee -a /etc/modules-load.d/modules.conf
Set sysctl values:
echo 'net.bridge.bridge-nf-call-iptables=1' | sudo tee -a /etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-ip6tables=1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
Configure the L3 agent:
sudo vi /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = openvswitch
external_network_bridge =
Note: leaving external_network_bridge empty allows multiple external networks.
Configure the DHCP agent:
sudo vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = openvswitch
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
This allows instances to get metadata via DHCP.
The metadata agent delivers user data and credentials to instances.
sudo vi /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
Note: replace METADATA_SECRET with a strong secret (e.g., metadata_super_secret).
This same secret must be configured in Nova later.
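Rather than inventing a secret by hand, one can be generated from the kernel's random source. A sketch using only standard Ubuntu tools (head, od, tr); it prints the config line to paste into metadata_agent.ini:

```shell
# Sketch: generate a random 32-hex-character shared secret for METADATA_SECRET.
# Uses /dev/urandom plus od/tr, all standard on Ubuntu; prints the line to paste.
secret=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "metadata_proxy_shared_secret = $secret"
```

Remember to use the exact same value later in the [neutron] section of nova.conf.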
Update Nova to use Neutron for networking and metadata.
sudo vi /etc/nova/nova.conf
In the [neutron] section:
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = ubuntu
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
Replace:
- ubuntu: the password for the neutron user
- METADATA_SECRET: the same secret used in metadata_agent.ini
Do not skip this step; it is required for metadata and security groups.
Run the database sync command:
sudo su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Tip: ignore deprecation warnings.
Apply Nova-Neutron integration:
sudo service nova-api restart
sudo service neutron-server restart
sudo service neutron-openvswitch-agent restart
sudo service neutron-dhcp-agent restart
sudo service neutron-metadata-agent restart
sudo service neutron-l3-agent restart
All services should start without errors.
Enable them at boot:
sudo systemctl enable neutron-server \
neutron-openvswitch-agent \
neutron-l3-agent \
neutron-dhcp-agent \
neutron-metadata-agent
Verify the services are running:
openstack extension list --network
This should show extensions such as router, security-group, and vxlan.
Check the agent list:
openstack network agent list
Expected Output:
+----+--------------------+------------+-------------------+-------+-------+----------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+----+--------------------+------------+-------------------+-------+-------+----------------------------+
| 1 | L3 agent | controller | nova | :-) | UP | neutron-l3-agent |
| 2 | DHCP agent | controller | nova | :-) | UP | neutron-dhcp-agent |
| 3 | Metadata agent | controller | None | :-) | UP | neutron-metadata-agent |
| 4 | Open vSwitch agent | controller | None | :-) | UP | neutron-openvswitch-agent |
+----+--------------------+------------+-------------------+-------+-------+----------------------------+
All agents should be UP.
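The aliveness check can also be done mechanically. A sketch; the here-doc stands in for a live run of `openstack network agent list -f value -c "Agent Type" -c Alive`, so the counting logic can be shown offline:

```shell
# Sketch: count agents that are not alive instead of scanning the table by eye.
# agent_list stands in for a live run of:
#   openstack network agent list -f value -c "Agent Type" -c Alive
agent_list() {
cat <<'EOF'
L3 agent :-)
DHCP agent :-)
Metadata agent :-)
Open vSwitch agent :-)
EOF
}
dead=$(agent_list | grep -cv ':-)$' || true)
echo "agents not alive: $dead"
```

On a live controller, replace the body of agent_list with the real command; any count above 0 needs investigation.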
| Task | Status |
|---|---|
| Source admin credentials | ✅ |
| Create Neutron DB and user | ✅ |
| Register Neutron service and endpoints | ✅ |
| Install Neutron packages | ✅ |
| Configure neutron.conf, ml2_conf.ini | ✅ |
| Configure L3, DHCP, Metadata agents | ✅ |
| Update Nova to use Neutron | ✅ |
| Sync database and start services | ✅ |
| Verify agents with openstack network agent list | ✅ |
Once compute and network are ready:
- Create a self-service network
- Launch an instance on it
- Create a router to connect to provider network
- Assign a floating IP
- Ping/SSH into the instance
You've built a full cloud network!
Official Docs: Neutron Controller Install (Ubuntu); Neutron Option 2: Self-Service Networks
You're now ready to enable advanced networking in your OpenStack cloud!
Now go to your compute node(s) and install Neutron components:
Compute Node Setup with Self-Service Networks (Option 2)
Based on:
- Neutron Compute Install (Ubuntu)
- Option 2: Self-Service Networks for Compute Node
Role: Compute Node
Networking Option: Self-Service (VXLAN Overlay) + Provider Networks
Distribution: Ubuntu
This guide walks you through installing and configuring the OpenStack Neutron service on a compute node, using Networking Option 2 β Self-Service Networks.
With this setup:
- Instances can use private (self-service) networks
- Routers, floating IPs, and NAT are supported
- Overlay networking via VXLAN tunnels
- Integration with Open vSwitch (OVS)
- Security groups are enforced
You will:
- Install neutron-openvswitch-agent
- Configure neutron.conf and openvswitch_agent.ini
- Set up OVS bridges for provider and overlay networks
- Enable security groups
- Restart services
Prerequisites:
- The controller node must have Neutron (with the ML2, L3, DHCP, and Metadata agents) already installed and running.
- RabbitMQ, Keystone, Nova, and Placement services must be accessible.
- The compute node must have network connectivity to the controller.
Log in to your compute node and install the required package:
sudo apt update -y
sudo apt install neutron-openvswitch-agent -y
This installs:
- neutron-openvswitch-agent: manages virtual switches and tunnels
- openvswitch-switch (as a dependency): core OVS support
Do not install neutron-server, neutron-l3-agent, or neutron-dhcp-agent on compute nodes unless needed.
Edit the main Neutron configuration file:
sudo nano /etc/neutron/neutron.conf
Update the following sections:
Compute nodes do not access the database directly.
[database]
# connection = sqlite:///neutron.sqlite
Comment out or leave this line commented; ensure no active connection line remains.
In the [DEFAULT] section:
[DEFAULT]
transport_url = rabbit://openstack:ubuntu@controller
Note: replace ubuntu with the password for the openstack user in RabbitMQ.
Example: if the RabbitMQ password is rabbit_secret, use:
transport_url = rabbit://openstack:rabbit_secret@controller
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Create the directory if missing:
sudo mkdir -p /var/lib/neutron/tmp
Next comes the key configuration for self-service networks using VXLAN: the Open vSwitch agent.
Edit:
sudo vi /etc/neutron/plugins/ml2/openvswitch_agent.ini
Add under the [ovs] section:
local_ip = 10.0.0.31
bridge_mappings = provider:br-provider
Replace these values:
- br-provider: name of the OVS bridge connected to the physical provider (external) network.
- 10.0.0.31: management IP address of the compute node, used as the VXLAN tunnel endpoint.
Tip: use the same IP as my_ip in /etc/nova/nova.conf.
Set Up Provider Network Bridge
You need an OVS bridge (br-provider) that connects to a physical interface (e.g., ens3) for provider network traffic.
1. Create the provider bridge:
sudo ovs-vsctl add-br br-provider
2. Add the physical interface to the bridge:
sudo ovs-vsctl add-port br-provider ens3
Replace ens3 with your actual physical network interface (e.g., eth1, enp2s0).
Warning: running this command may disconnect your SSH session if ens3 is your management interface.
Best practice: use a dedicated interface for provider networks.
Add Under [agent] Section:
tunnel_types = vxlan
l2_population = true
Reminder: local_ip in the [ovs] section above must be the compute node's management IP.
- local_ip: used for VXLAN tunnel endpoints
- tunnel_types = vxlan: enables VXLAN overlay networks
- l2_population = true: reduces flooding with the ARP responder (recommended)
Add Under [securitygroup] Section:
enable_security_group = true
firewall_driver = openvswitch
# firewall_driver = iptables_hybrid  # alternative option
Use the openvswitch driver for better performance with OVS. If using iptables_hybrid, ensure kernel bridge filtering is enabled:
sudo modprobe br_netfilter
echo 'br_netfilter' | sudo tee -a /etc/modules-load.d/modules.conf
Set sysctl values:
echo 'net.bridge.bridge-nf-call-iptables=1' | sudo tee -a /etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-ip6tables=1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
Restart services to apply changes:
sudo service openvswitch-switch restart
sudo service neutron-openvswitch-agent restart
Enable auto-start:
sudo systemctl enable neutron-openvswitch-agent
Go back to the controller node and verify the agent is registered:
. admin-openrc
openstack network agent list
Look for:
+----+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+----+--------------------+------------+-------------------+-------+-------+---------------------------+
| 5 | Open vSwitch agent | compute1 | None | :-) | UP | neutron-openvswitch-agent |
+----+--------------------+------------+-------------------+-------+-------+---------------------------+
If the agent shows UP, your compute node is successfully connected!
Ensure Nova is configured to use Neutron for networking.
Edit Nova config:
sudo nano /etc/nova/nova.conf
In the [neutron] section:
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
Note: replace NEUTRON_PASS with the password you set for the neutron user in Keystone.
Then restart Nova:
sudo service nova-compute restart
This allows Nova to create ports and request network resources from Neutron.
| Task | Status |
|---|---|
| Install neutron-openvswitch-agent | ✅ |
| Configure neutron.conf (RabbitMQ, lock path) | ✅ |
| Configure openvswitch_agent.ini (VXLAN, OVS bridge) | ✅ |
| Create br-provider and attach the physical interface | ✅ |
| Restart openvswitch-switch and neutron-openvswitch-agent | ✅ |
| Run openstack network agent list on the controller | ✅ |
| Confirm the OVS agent is UP | ✅ |
| Restart nova-compute after Neutron setup | ✅ |
Now that your compute node is fully integrated:
- On the controller, create a self-service network and a router, and connect to the provider network
- Launch an instance on the self-service network
- Assign a floating IP and test SSH access
- Verify security groups (e.g., allow port 22)
Example:
openstack server create --image cirros --flavor m1.tiny --network selfservice-net --security-group default my-instance
openstack floating ip create provider-net
openstack server add floating ip my-instance <floating-ip>

| Issue | Solution |
|---|---|
| OVS agent shows DOWN | Check RabbitMQ connectivity and the firewall (port 5672) |
| No network connectivity | Verify local_ip matches the compute node's IP |
| VXLAN traffic not working | Ensure local_ip uses an interface on the same subnet as the other nodes |
| SSH to instance fails | Check the floating IP, security group rules, and metadata agent |
| br_netfilter errors | Load the module and set sysctl values as shown above |
Check logs:
sudo tail -f /var/log/neutron/openvswitch-agent.log
sudo tail -f /var/log/nova/nova-compute.log
Your OpenStack cloud now supports scalable, secure, multi-tenant networking!
For Ubuntu: Step-by-Step Guide (2025.1 Release)
Based on:
- Install Horizon on Ubuntu
- Verify Horizon Installation
Role: Controller Node
Service: Horizon (OpenStack Dashboard)
Distribution: Ubuntu
This guide provides a clear, step-by-step process to install and verify the OpenStack Dashboard (Horizon) on the controller node using Ubuntu.
Horizon is the web-based interface for OpenStack, allowing users and administrators to manage:
- Instances (VMs)
- Networks
- Volumes
- Images
- Users and projects
You will:
- Install the openstack-dashboard package
- Configure local_settings.py for integration
- Enable required features (domains, API versions)
- Reload the Apache web server
- Verify access via a browser
Prerequisites:
- The controller node must have Keystone (Identity), Nova (Compute), Glance (Image), and Neutron (Networking) already installed.
- The Apache2 and Memcached services must be running.
Log in to your controller node and install the OpenStack dashboard:
sudo apt update
sudo apt install openstack-dashboard -y
This installs:
- The Django-based web dashboard
- Apache configuration (/etc/apache2/conf-enabled/openstack-dashboard.conf)
- Python dependencies
Edit the main Horizon configuration file:
sudo vi /etc/openstack-dashboard/local_settings.py
Update the following settings:
Ensure Horizon connects to services on the controller node:
OPENSTACK_HOST = "controller"
Configure which hosts can access the dashboard. Replace ['one.example.com'] with your allowed hosts, or use ['*'] for testing:
ALLOWED_HOSTS = ['*']
Production note: replace ['*'] with specific hostnames such as ['controller', 'dashboard.example.com']. Using ['*'] is insecure in production.
Set up session storage using Memcached:
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.PyMemcacheCache',
'LOCATION': 'controller:11211',
}
}
Ensure Memcached is running:
sudo systemctl status memcached
Comment out any other session engine lines.
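Beyond the systemctl status check, you can probe the cache port directly. A sketch; it uses bash's /dev/tcp feature, and 127.0.0.1 assumes Memcached runs on this node (substitute the controller's address when checking remotely):

```shell
# Sketch: check that something is listening on the Memcached port Horizon uses.
# Uses bash's /dev/tcp pseudo-device; no extra tools needed.
if (exec 3<>/dev/tcp/127.0.0.1/11211) 2>/dev/null; then
  echo "memcached port 11211: reachable"
else
  echo "memcached port 11211: NOT reachable"
fi
```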
Set the correct Identity API version:
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
Port 5000 is required for Keystone v3 (matching the auth URLs used throughout this guide).
Allow multi-domain user management:
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
Use Default as the default domain for new users:
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
Define the correct API versions for integrated services:
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 3,
}
Set your local time zone, for example:
TIME_ZONE = "Asia/Dhaka"
See the list of TZ database time zones for valid values.
If you chose Neutron Option 2 (Self-Service Networks), enable router and floating IP support by uncommenting and enabling these options:
OPENSTACK_NEUTRON_NETWORK = {
'enable_router': True,
'enable_quotas': True,
'enable_ipv6': True,
'enable_distributed_router': False,
'enable_ha_router': False,
'enable_fip_topology_check': True,
}
This enables:
- Routers
- Floating IPs
- Network quotas
- IPv6 support
If you used Neutron Option 1 (Provider Networks only), leave this section commented or set all options to False.
Ensure the following line exists in the Apache config:
sudo vi /etc/apache2/conf-available/openstack-dashboard.conf
Add this line if missing:
WSGIApplicationGroup %{GLOBAL}
This fixes potential import conflicts in mod_wsgi.
Apply all configuration changes:
sudo systemctl reload apache2
No restart is needed unless you made deeper changes.
Now that Horizon is installed, verify it works.
Open your web browser and go to:
http://controller/dashboard
Replace controller with the IP or hostname of your controller node.
Example:
http://192.168.0.87/dashboard
You should see the OpenStack login page.
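Before opening a browser, the dashboard URL can be probed from the command line. A sketch; replace the URL host with your controller's hostname or IP, and note that a 200 (or a redirect such as 302 to the login page) means Horizon is serving:

```shell
# Sketch: fetch only the HTTP status code of the dashboard URL.
# curl is assumed installed; prints "unknown" if the request could not be made.
code=$(curl -s -o /dev/null -w '%{http_code}' http://controller/dashboard 2>/dev/null || true)
echo "HTTP status: ${code:-unknown}"
```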
Use the following credentials:
- Domain: Default
- Username: admin
- Password: (your admin password)
If login fails:
- Check that Keystone is running
- Confirm the password in admin-openrc
- View the logs in /var/log/apache2/*error*.log
Once logged in, verify you can:
- View Instances, Networks, Images, Volumes
- See existing services under Admin > System Information
- Switch between Admin and Demo projects (if demo user exists)
If the dashboard doesn't load:
sudo tail -f /var/log/apache2/error.log
sudo tail -f /var/log/apache2/openstack-dashboard_error.log
Common issues:
- Memcached not running: start it with sudo systemctl start memcached
- ALLOWED_HOSTS mismatch: set ALLOWED_HOSTS = ['*'] temporarily
- Keystone unreachable: check that http://controller:5000/v3 is accessible
| Task | Status |
|---|---|
| Install openstack-dashboard package | ✅ |
| Set OPENSTACK_HOST = "controller" | ✅ |
| Configure ALLOWED_HOSTS | ✅ |
| Enable Memcached session storage | ✅ |
| Set OPENSTACK_KEYSTONE_URL with port 5000 | ✅ |
| Enable domain support and default domain | ✅ |
| Set correct API versions (identity: 3, etc.) | ✅ |
| Enable Neutron features (if using self-service) | ✅ |
| Add WSGIApplicationGroup %{GLOBAL} | ✅ |
| Reload Apache: systemctl reload apache2 | ✅ |
| Access http://controller/dashboard in a browser | ✅ |
| Log in as the admin user | ✅ |
After successful Horizon installation:
- Create a demo user and project for testing
- Upload a cloud image (e.g., Ubuntu 22.04) via Glance
- Launch your first VM using the dashboard
- Assign a floating IP and SSH into it
Example commands to create demo user:
openstack project create --domain default --description "Demo Project" demo
openstack user create --domain default --password-prompt demo
openstack role add --project demo --user demo user
Then log in to Horizon as demo.
You now have a fully functional web UI for managing your OpenStack cloud!