
Install OpenStack Manually


Reference:

  1. https://www.youtube.com/playlist?list=PLVV1alynPj3E4s4Nt2VM2J5Q0gl69WDR7

Requirement:

We are using two nodes:

    1. Controller
    2. Compute

Operating System: Ubuntu (latest release)

Change Hostname

sudo hostnamectl set-hostname cloud3

Configure Static IPs

nano /etc/netplan/50
network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s3:
      dhcp4: false
      addresses:
        - 192.168.0.87/24
      routes:
        - to: default
          via: 192.168.0.1
      nameservers:
        addresses:
          - 8.8.8.8
          - 8.8.4.4
    enp0s8:
      dhcp4: false
      addresses:
        - 192.168.106.15/24

netplan apply
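After applying, confirm the interfaces picked up the expected addresses (a quick sanity check; the interface names enp0s3/enp0s8 are the ones used in the netplan file above):

```shell
# Show a brief summary of interface addresses
ip -br addr show

# Confirm the default route points at the gateway (192.168.0.1 above)
ip route show default
```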

Update the /etc/hosts file on both nodes

vi /etc/hosts
192.168.106.87 controller
#10.0.2.50 compute
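Later steps address the other node by name (ssh-copy-id root@compute, server controller iburst), so verify that both hostnames resolve before continuing (a quick sanity check; adjust names to your /etc/hosts entries):

```shell
# Resolve the names from /etc/hosts
getent hosts controller
getent hosts compute

# One ping each to confirm reachability
ping -c 1 controller
ping -c 1 compute
```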

Configure passwordless SSH authentication between the two VMs

ssh-keygen

# go to the compute node
# change the root password

passwd root
vi /etc/ssh/sshd_config

# uncomment the following line

PermitRootLogin yes
service sshd restart

#copy ssh-key from controller to compute node

ssh-copy-id -i root@compute

Install NTP (chrony)

apt install chrony -y
vi /etc/chrony/chrony.conf

Comment out the default NTP servers:

#server 0.asia.pool.ntp.org
#server 1.asia.pool.ntp.org
#server 2.asia.pool.ntp.org
#server 3.asia.pool.ntp.org

Add your own NTP server (this host's IP) and allow the client subnet:

server 192.168.0.89 iburst
allow 192.168.106.15/24
service chrony restart

Go to the compute node and configure NTP

vi /etc/chrony/chrony.conf

Comment out the default NTP pools:

#pool 0.ubuntu.pool.ntp.org
#pool 1.ubuntu.pool.ntp.org
#pool 2.ubuntu.pool.ntp.org

#add controller server ip as ntp server

server controller iburst
service chrony restart

Verify ntp server

chronyc sources
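On the compute node, chronyc sources should list the controller as a source; chronyc tracking shows whether the clock is actually synchronized (both are standard chrony commands):

```shell
# "^*" in the first column marks the currently selected source
chronyc sources

# Check offset and stratum relative to the selected source
chronyc tracking
```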

Now install OpenStack

On openstack.org, open Docs → Installation Guide, choose your version, and follow the [Environment] section step by step.

1st: Create the Environment for the OpenStack Installation

Add the OpenStack package repository on both nodes

add-apt-repository cloud-archive:epoxy

Install the OpenStack client on both nodes

apt install python3-openstackclient -y

Install and configure SQL Database

apt install mariadb-server python3-pymysql -y
vi /etc/mysql/mariadb.conf.d/99-openstack.cnf

[mysqld]
bind-address = 192.168.106.15

default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Restart the database service:

service mysql restart

Choose a suitable password for the database root account, then run:

mysql_secure_installation

Type:

  • Enter
  • y
  • y
  • type password
  • y
  • y
  • y
  • y

Now verify the MySQL installation

mysql -u root -p

Type the MySQL password; you will then be at the MySQL prompt:

MariaDB [(none)]>

show databases;
exit;

Install and configure rabbitmq-server

apt install rabbitmq-server -y
service rabbitmq-server restart
service rabbitmq-server status
rabbitmqctl add_user openstack ubuntu

Here ubuntu is the password for the openstack RabbitMQ user; replace it with your own.

Permit configuration, write, and read access for the openstack user

rabbitmqctl set_permissions openstack ".*" ".*" ".*"
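You can confirm the user and its permissions were created (standard rabbitmqctl subcommands):

```shell
# List users; 'openstack' should appear alongside 'guest'
rabbitmqctl list_users

# Show the configure/write/read regexps granted on the default vhost
rabbitmqctl list_permissions
```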

Install and configure memcached

apt install memcached python3-memcache -y

edit /etc/memcached.conf file

vi /etc/memcached.conf
-l 192.168.106.15
service memcached restart
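To confirm memcached is listening on the management address set above (11211 is its default port; the nc check assumes netcat is installed):

```shell
# The listener should show 192.168.106.15:11211
ss -tlnp | grep 11211

# Optional: query runtime stats over the wire
printf 'stats\r\nquit\r\n' | nc 192.168.106.15 11211 | head -5
```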

No need to install Etcd

Now follow the [ Install OpenStack services ] option step by step

  • go to Minimal deployment for your version (Epoxy/2025.1 in this guide)

Install and configure keystone

Create Database for keystone

Access MySQL:

mysql

After logging in, the MySQL console shows: MariaDB [(none)]>

CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'ubuntu';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'ubuntu';
EXIT;

exit from sql

Install keystone

apt install keystone -y

Edit the /etc/keystone/keystone.conf

vi /etc/keystone/keystone.conf

Under the [database] section, add the new connection string and comment out the default connection:

[database]

connection = mysql+pymysql://keystone:ubuntu@controller/keystone

In the [token] section, configure the Fernet token provider:

[token]

provider = fernet

Populate the Identity service database.

su -s /bin/sh -c "keystone-manage db_sync" keystone

Verifying in databases

mysql
show databases;
use keystone;
show tables;
exit;

Initialize Fernet key repositories.

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

Bootstrap the Identity service.

keystone-manage bootstrap --bootstrap-password ubuntu \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne

Configure the Apache HTTP server

Edit the /etc/apache2/apache2.conf

vi /etc/apache2/apache2.conf

Add your ServerName under the Global configuration section:

ServerName controller
service apache2 restart

Configure the administrative account by setting the proper environmental variables:

export OS_USERNAME=admin
export OS_PASSWORD=ubuntu
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
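After exporting, a quick check that the variables are actually in the environment (any missing OS_* variable will make openstack commands prompt or fail):

```shell
# All seven OS_* variables should be listed
env | grep '^OS_' | sort
```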

Create a domain, projects, users, and roles. The Identity service provides authentication for each OpenStack service, using a combination of domains, projects, users, and roles.

Creating a new domain would look like:

openstack domain create --description "An Example Domain" example

Create the service project:

openstack project create --domain default \
  --description "Service Project" service

Create the myproject project:

openstack project create --domain default \
  --description "Demo Project" myproject

Create the myuser user:

openstack user create --domain default \
  --password-prompt myuser

Create the myrole role:

openstack role create myrole

Add the myrole role to the myproject project and myuser user:

openstack role add --project myproject --user myuser myrole

Verify operation

Verify operation of the Identity service before installing other services.

Note

Perform these commands on the controller node.

Unset the temporary OS_AUTH_URL and OS_PASSWORD environment variables:

unset OS_AUTH_URL OS_PASSWORD

As the admin user, request an authentication token:

openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue

Note

This command uses the password for the admin user.

As the myuser user created in the previous step, request an authentication token:

openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name myproject --os-username myuser token issue

Create OpenStack client environment scripts

Create and edit the admin-openrc file and add the following content:

Note

The OpenStack client also supports using a clouds.yaml file. For more information, see the os-client-config documentation.

vi admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Replace ADMIN_PASS with the password you chose for the admin user in the Identity service.

Create and edit the demo-openrc file and add the following content:

vi demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Replace DEMO_PASS with the password you chose for the demo user in the Identity service.

Using the scripts

To run clients as a specific project and user, you can simply load the associated client environment script prior to running them. For example:

Load the admin-openrc file to populate environment variables with the location of the Identity service and the admin project and user credentials:

. admin-openrc

Request an authentication token:

openstack token issue

OpenStack Glance Installation Guide (Ubuntu)

Step-by-step instructions to install and configure the OpenStack Image Service (Glance) on Ubuntu

This guide walks you through installing and configuring the Glance service on the controller node in an OpenStack environment. Images are stored using the local file system for simplicity.

✅ Supported Version: OpenStack 2025.1 (Ubuntu)


🔧 Prerequisites

Before installing Glance, you must set up the database, create service credentials, and register API endpoints.

1. Create the Glance Database

Log in to your MariaDB/MySQL server as root:

mysql

Run the following SQL commands:

CREATE DATABASE glance;

Grant privileges to the glance database user (replace ubuntu with a secure password):

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'ubuntu';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'ubuntu';

πŸ” Example: Use ubuntu=secretpassword123

Exit the database client:

EXIT;

2. Source Admin Credentials

Load the admin credentials to gain access to administrative OpenStack commands:

. admin-openrc

💡 Ensure the admin-openrc file exists and contains correct OS credentials.


3. Create Glance User and Service

Create the glance user:

openstack user create --domain default --password-prompt glance

Enter and confirm a strong password when prompted (e.g., ubuntu).

Add admin role to the glance user in the service project:

openstack role add --project service --user glance admin

⚠️ This command produces no output; success is silent.

Create the glance service entity:

openstack service create --name glance \
  --description "OpenStack Image" image

Expected Output:

+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Image                  |
| name        | glance                           |
| type        | image                            |
+-------------+----------------------------------+

4. Create API Endpoints

Register public, internal, and admin endpoints for the Glance service:

openstack endpoint create --region RegionOne \
  image public http://controller:9292
openstack endpoint create --region RegionOne \
  image internal http://controller:9292
openstack endpoint create --region RegionOne \
  image admin http://controller:9292

✅ All URLs point to http://controller:9292, assuming your controller hostname is controller.

Verify with openstack endpoint list:

openstack endpoint list

📦 Install and Configure Glance Components

1. Install Glance Packages

On the controller node:

sudo apt update
sudo apt install glance -y

2. Configure glance-api.conf

Edit the main configuration file:

sudo vi /etc/glance/glance-api.conf

🔹 [database] – Configure Database Access

In the [database] section:

[database]
connection = mysql+pymysql://glance:ubuntu@controller/glance

πŸ” Replace ubuntu with the actual password used earlier.


πŸ”Ή [keystone_authtoken] – Configure Identity Authentication

❗ Clear any existing options in this section before adding these.

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = ubuntu

πŸ” Replace ubuntu with the password you set for the glance user.


πŸ”Ή [paste_deploy] – Set Paste Flavor

[paste_deploy]
flavor = keystone

🔹 [glance_store] – Configure Image Storage (Local File System)

Add or update the following sections:

[DEFAULT]
enabled_backends = fs:file
[glance_store]
default_backend = fs
[fs]
filesystem_store_datadir = /var/lib/glance/images/

πŸ“ This sets the local directory where images will be stored.


To find the endpoint ID:

openstack endpoint list --service glance --region RegionOne

6. Grant Reader Role for System Scope

Ensure the glance user can read system-scope resources like limits:

openstack role add --user glance --user-domain Default --system all reader

πŸ—ƒοΈ Populate the Glance Database

Run the database synchronization:

sudo su -s /bin/sh -c "glance-manage db_sync" glance

⚠️ You may see deprecation warnings; these can be safely ignored.


🔄 Finalize Installation

Restart the Glance API service to apply all changes:

sudo service glance-api restart

✅ Glance is now ready to serve image requests.


OpenStack Glance Verification Guide (Ubuntu)

Step-by-step instructions to verify the Glance (Image Service) installation in OpenStack 2025.1

After installing and configuring the Glance service on the controller node, it's essential to verify that the service is running correctly and can manage images.

✅ Supported Version: OpenStack 2025.1 (Ubuntu)
📌 This guide assumes Glance was installed using the Ubuntu installation guide.


πŸ” Purpose of Verification

This guide helps you:

  • Confirm the Glance service is up and reachable.
  • Verify API endpoints are registered.
  • Upload a test image.
  • Validate that image operations work as expected.

🧰 Prerequisites

Before verifying Glance:

  • The Glance service must be installed and configured.
  • You must have access to the controller node.
  • The admin-openrc file must be available with correct credentials.

βœ… Step 1: Source Admin Credentials

Load administrative credentials to use OpenStack CLI commands:

. admin-openrc

💡 This sets environment variables like OS_USERNAME, OS_PASSWORD, etc.
Ensure the file exists and contains valid admin credentials.


✅ Step 2: Verify Glance Service Status

Check if the glance-api service is running:

sudo systemctl status glance-api

✅ Expected Output:

  • active (running) status
  • No recent errors in logs

🔧 If not running, start it:

sudo systemctl start glance-api
sudo systemctl enable glance-api

✅ Step 3: List Glance API Endpoints

Ensure the Image service endpoints were created correctly:

openstack endpoint list --service glance --interface public
openstack endpoint list --service glance --interface internal
openstack endpoint list --service glance --interface admin

✅ Expected Output:

  • Three endpoints (public, internal, admin) pointing to http://controller:9292
  • All with enabled=True and the correct RegionOne

Example:

+----------------------------------+-----------+--------------+---------------------------+
| ID                               | Interface | Region       | URL                       |
+----------------------------------+-----------+--------------+---------------------------+
| 340be3625e9b4239a6415d034e98aace | public    | RegionOne    | http://controller:9292    |
| a6e4b153c2ae4c919eccfdbb7dceb5d2 | internal  | RegionOne    | http://controller:9292    |
| 0c37ed58103f4300a84ff125a539032d | admin     | RegionOne    | http://controller:9292    |
+----------------------------------+-----------+--------------+---------------------------+

✅ Step 4: Verify Glance Service Registration

Check that the glance service is registered in OpenStack:

openstack service list | grep image

✅ Expected Output:

| 8c2c7f1b9b5049ea9e63757b5533e6d2 | glance | image     |

If missing, re-run the service creation command from the install guide.


✅ Step 5: List Available Images

Run the following command to list current images:

openstack image list

✅ Expected Output:

+----+------+--------+
| ID | Name | Status |
+----+------+--------+
+----+------+--------+

🔹 At this stage, the list should be empty; that's normal.

❌ If you get an authentication or connection error, double-check:

  • keystone_authtoken settings in /etc/glance/glance-api.conf
  • Network connectivity to controller:5000 (Keystone) and controller:9292 (Glance)

✅ Step 6: Download and Upload a Test Image

Use a small test image (like CirrOS) to validate image upload and visibility.

1. Download CirrOS Image

wget http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-x86_64-disk.img

πŸ” You can use any version; update the URL accordingly.


2. Upload Image to Glance

openstack image create "cirros" \
  --file cirros-0.5.2-x86_64-disk.img \
  --disk-format qcow2 \
  --container-format bare \
  --public

📌 Parameters Explained:

  • --file: Path to the image file
  • --disk-format: Disk format (qcow2, raw, vmdk, etc.)
  • --container-format: Container type (bare, ovf, etc.)
  • --public: Makes the image accessible to all projects

3. Confirm Image Upload

List images again:

openstack image list

✅ Expected Output:

+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 6a51c3d7-4c32-48bd-9a65-85e9a13f8b34 | cirros | active |
+--------------------------------------+--------+--------+

🔹 The image should be in active status.


✅ Step 7: View Image Details (Optional)

Get detailed info about the uploaded image:

openstack image show cirros

Sample Output:

id: 6a51c3d7-4c32-48bd-9a65-85e9a13f8b34
name: cirros
status: active
disk_format: qcow2
container_format: bare
size: 12717056
visibility: public

✅ Step 8: Verify Image Storage Location (Optional)

If using local file storage, confirm the image is saved on disk:

ls -la /var/lib/glance/images/

✅ You should see a file matching the image ID (e.g., 6a51c3d7-4c32-48bd-9a65-85e9a13f8b34).

🔹 This confirms Glance is writing images to the configured directory.


🛠 Troubleshooting Common Issues

| Problem | Possible Cause | Solution |
|---|---|---|
| Unable to establish connection to http://controller:9292 | Glance service not running | Run: sudo systemctl restart glance-api |
| HTTP 401 Unauthorized | Incorrect keystone_authtoken config | Check password, username, and auth URL in glance-api.conf |
| Image stuck in queued or saving state | Permission issue on image directory | Ensure /var/lib/glance/images/ is owned by glance:glance |
| Endpoint not found | Missing or incorrect endpoint | Recreate endpoints using openstack endpoint create |
| No such file or directory during upload | Image file not found | Confirm path and permissions on the .img file |

Check logs for details:

sudo tail -20 /var/log/glance/glance-api.log

🚀 Next Steps

Now that Glance is verified:

  • Proceed to install Nova (Compute Service).
  • Launch your first VM using the uploaded CirrOS image.
  • Test image sharing between projects (if using shared visibility).

✅ Congratulations! You've successfully verified the Glance installation. The Image Service is ready for use.


OpenStack Placement Service Installation Guide for Ubuntu

This guide provides step-by-step instructions to install and configure the OpenStack Placement service on Ubuntu. The Placement service tracks inventory and usage of resources (like compute, memory, and disk) in an OpenStack cloud.

✅ Note: This guide is based on the official OpenStack documentation for the 2025.1 release and tailored for Ubuntu systems.


🔧 Prerequisites

Before installing the Placement service, you must set up a database, create service credentials, and register API endpoints.

Step 1: Create the Placement Database

  1. Connect to the MariaDB/MySQL database as the root user:

    sudo mysql
  2. Create the placement database:

    CREATE DATABASE placement;
  3. Grant privileges to the placement database user:

    Replace ubuntu with a strong password.

    GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'ubuntu';
    GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'ubuntu';
  4. Exit the database client:

    EXIT;

Step 2: Create Service User and Endpoints

  1. Source the admin credentials to get administrative access:

    . admin-openrc

    Ensure the admin-openrc file exists and contains the correct admin credentials.

  2. Create the placement user in OpenStack Identity (Keystone):

    openstack user create --domain default --password-prompt placement
    • When prompted, enter a password (e.g., ubuntu) and confirm it.
  3. Add the placement user to the service project with the admin role:

    openstack role add --project service --user placement admin

    This command produces no output on success.

  4. Create the Placement service entry in the service catalog:

    openstack service create --name placement --description "Placement API" placement

    Example Output:

    +-------------+----------------------------------+
    | Field       | Value                            |
    +-------------+----------------------------------+
    | description | Placement API                    |
    | name        | placement                        |
    | type        | placement                        |
    +-------------+----------------------------------+
    
  5. Create the Placement API endpoints (public, internal, admin):

    Replace controller with your controller node's hostname if different.

    openstack endpoint create --region RegionOne placement public http://controller:8778
    openstack endpoint create --region RegionOne placement internal http://controller:8778
    openstack endpoint create --region RegionOne placement admin http://controller:8778

    🔔 Note: The default port is 8778. Adjust if your environment uses a different port (e.g., 8780).


📦 Install and Configure Placement Components

Step 3: Install the Placement Package

Install the Placement API package using APT:

sudo apt update
sudo apt install placement-api -y

Step 4: Configure the Placement Service

Edit the main configuration file:

sudo vi /etc/placement/placement.conf

1. Configure Database Access

In the [placement_database] section, set the database connection string:

[placement_database]
connection = mysql+pymysql://placement:ubuntu@controller/placement

πŸ” Replace ubuntu with the password you set earlier.

2. Configure API and Authentication

In the [api] section, ensure the auth strategy is set to Keystone:

[api]
auth_strategy = keystone

In the [keystone_authtoken] section, configure authentication settings:

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = ubuntu

πŸ” Replace ubuntu with the password you assigned to the placement user.

⚠️ Important:

  • Comment out or remove any other existing options in [keystone_authtoken].
  • Ensure domain names (Default) match your Keystone configuration (case-sensitive).

Step 5: Sync the Placement Database

Populate the database with initial schema and data:

sudo su -s /bin/sh -c "placement-manage db sync" placement

🟡 You may see deprecation warnings; these can be safely ignored.


🔄 Finalize Installation

Step 6: Restart the Web Server

The Placement API runs under Apache. Reload the service to apply changes:

sudo service apache2 restart

✅ Verification Steps

To verify that the Placement service is working:

  1. List available services:

    openstack service list | grep placement

    Expected Output:

    | 2d1a27022e6e4185b86adac4444c495f | placement | placement     | Placement API |
    
  2. List Placement API endpoints:

    openstack endpoint list | grep placement
  3. Test API access (optional):

    curl -s http://controller:8778 | python3 -m json.tool

    You should see a JSON response listing available versions.


🛠 Troubleshooting Tips

| Issue | Solution |
|---|---|
| Unable to connect to database | Verify MySQL host, user, password, and network access |
| Authentication failed | Double-check keystone_authtoken settings and password |
| 404 Not Found on API endpoint | Ensure Apache is running and the placement WSGI app is configured |
| placement-manage: command not found | Confirm the placement-api package is installed |

Check logs for errors:

sudo tail -f /var/log/placement/placement-api.log
sudo tail -f /var/log/apache2/error.log

📚 Summary

You have now successfully:

✅ Created the Placement database
✅ Registered the Placement service and endpoints
✅ Installed and configured the Placement API
✅ Synced the database and restarted services

The Placement service is now ready to support Compute (Nova) and other resource tracking services in your OpenStack environment.


📌 Next Steps:

  • Proceed to install and configure the Nova (Compute) service.
  • Ensure Nova is configured to use the Placement API for resource tracking.

🔗 Official Docs: OpenStack Placement Installation Guide


OpenStack Placement Service Verification Guide

After installing and configuring the OpenStack Placement service, it's essential to verify that it is functioning correctly. This guide provides clear, step-by-step instructions based on the official OpenStack Placement verification documentation for the 2025.1 release.


✅ Objective

Verify the correct operation of the Placement service by:

  • Running upgrade checks
  • Installing the osc-placement CLI plugin
  • Listing resource classes and traits via the API

πŸ” Step 1: Source Admin Credentials

Before performing any verification steps, you must authenticate as an administrative user.

. admin-openrc

💡 Ensure the admin-openrc file exists and contains the correct environment variables (e.g., OS_USERNAME, OS_PASSWORD, etc.). If not available, use an equivalent method to source admin credentials.


🧪 Step 2: Run Placement Upgrade Check

This command verifies the database schema and checks for potential upgrade issues.

placement-status upgrade check

✅ Expected Output (Example):

+----------------------------------+
| Upgrade Check Results            |
+----------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success                  |
| Details: None                    |
+----------------------------------+
| Check: Incomplete Consumers      |
| Result: Success                  |
| Details: None                    |
+----------------------------------+

🟡 Troubleshooting Tips:

  • If you see errors like "Unable to connect to database", verify:
    • Database host, username, password in /etc/placement/placement.conf
    • Network connectivity to the database server
  • If authentication fails, double-check [keystone_authtoken] settings.

📝 Note: You may see deprecation warnings; these are safe to ignore during verification.


🔌 Step 3: Install osc-placement CLI Plugin

The osc-placement plugin enables OpenStack CLI commands to interact with the Placement API.

Option A: Install via pip (Python Package Index)

pip3 install osc-placement

✅ Recommended if you're using a virtual environment or don't have distribution packages.

Option B: Install via Ubuntu/Debian Package

sudo apt install python3-osc-placement

✅ Use this if you prefer system packages managed by APT.

Verify Installation

Check if the plugin is loaded:

openstack help | grep -i placement

You should see new commands like:

  • resource class list
  • trait list
  • allocation list, etc.

πŸ” Step 4: Test Placement API – List Resource Classes

Resource classes represent types of resources tracked by Placement (e.g., disk, memory, CPU).

Run this command to list them:

openstack --os-placement-api-version 1.2 resource class list --sort-column name

πŸ” The --os-placement-api-version flag ensures compatibility. Version 1.2 supports resource class listing.

βœ… Sample Output:

+----------------------------+
| name                       |
+----------------------------+
| DISK_GB                    |
| IPV4_ADDRESS               |
| MEMORY_MB                  |
| VCPU                       |
| CUSTOM_FPGA_XILINX_VU9P    |
| ...                        |
+----------------------------+

🟡 If you get a 404 or connection error, ensure:

  • Apache is running: sudo systemctl status apache2
  • Endpoint URLs are correct: openstack endpoint list --service placement

🏷️ Step 5: Test Placement API – List Traits

Traits are metadata tags used to describe capabilities or properties of resource providers (e.g., HW_CPU_HYPERTHREADING).

List all available traits:

openstack --os-placement-api-version 1.6 trait list --sort-column name

πŸ” Version 1.6 introduces trait support in the API.

βœ… Sample Output:

+---------------------------------------+
| name                                  |
+---------------------------------------+
| COMPUTE_DEVICE_TAGGING                |
| COMPUTE_NET_ATTACH_INTERFACE          |
| COMPUTE_VOLUME_MULTI_ATTACH           |
| HW_CPU_X86_SSE                        |
| CUSTOM_TRAIT_EXAMPLE                  |
| ...                                   |
+---------------------------------------+

🟢 Success means:

  • The Placement API is reachable
  • Authentication works
  • The database is synced and populated

🛠 Troubleshooting Common Issues

| Problem | Solution |
|---|---|
| Command 'openstack' not found | Install the OpenStack client: sudo apt install python3-openstackclient |
| HTTP 401 Unauthorized | Check keystone_authtoken credentials in /etc/placement/placement.conf |
| HTTP 404 Not Found | Confirm the endpoint URL (http://controller:8778) and Apache configuration |
| placement-status: command not found | Ensure the placement-common package is installed |

Check Logs for Errors

sudo tail -f /var/log/placement/placement-api.log
sudo tail -f /var/log/apache2/error.log

Look for:

  • Database connection errors
  • Keystone authentication failures
  • WSGI application loading issues

✅ Summary: Verification Checklist

| Task | Status |
|---|---|
| Source admin credentials | ✅ |
| Run placement-status upgrade check | ✅ |
| Install osc-placement plugin | ✅ |
| List resource classes | ✅ |
| List traits | ✅ |
| Confirm API accessibility | ✅ |

📚 Next Steps

Now that the Placement service is verified:

  • Proceed to install and configure Nova (Compute) Controller Services
  • Ensure Nova is configured to use the Placement API
  • Later, verify integration using:
    openstack hypervisor stats show

🔗 Official Docs:
Verify Placement Installation


OpenStack Nova (Compute) Controller Node Installation Guide for Ubuntu

Simple & Step-by-Step Deployment Guide

✅ Based on: OpenStack Nova Install Guide (2025.1)
🖥️ Role: Controller Node
📦 Distribution: Ubuntu
🔧 Focus: Clear, easy-to-follow instructions with explanations


🧩 Overview

This guide walks you through installing and configuring the Nova (Compute) service on the controller node in an OpenStack environment.

Nova manages virtual machines (VMs), including creation, scheduling, and lifecycle management.

🔧 You will:

  • Set up databases
  • Create service users and endpoints
  • Install Nova components
  • Configure nova.conf
  • Sync databases and register cells
  • Start services

⚠️ Prerequisites:

  • MySQL/MariaDB, RabbitMQ, Keystone (Identity), Glance (Image), and Placement services must already be installed and running.

πŸ—„οΈ Step 2: Create Nova Databases

Connect to your database server and create three databases for Nova.

1. Log in to MariaDB/MySQL as root:

sudo mysql

2. Create databases and grant privileges:

CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'ubuntu';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'ubuntu';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'ubuntu';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'ubuntu';

GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'ubuntu';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'ubuntu';

πŸ” Replace ubuntu with a strong password (e.g., nova_db_secret).

3. Exit the database:

EXIT;
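If you script your deployment, the six GRANT statements above can be generated from a single password variable, which avoids typos across databases and hosts. A minimal sketch (NOVA_DBPASS is an assumed placeholder for your real database password):

```shell
# Sketch: generate the six GRANT statements from one password variable.
NOVA_DBPASS=ubuntu   # assumed placeholder; use your real DB password
SQL=$(for db in nova_api nova nova_cell0; do
  for host in localhost '%'; do
    echo "GRANT ALL PRIVILEGES ON ${db}.* TO 'nova'@'${host}' IDENTIFIED BY '${NOVA_DBPASS}';"
  done
done)
printf '%s\n' "$SQL"
```

You can then paste the output into the mysql prompt, or pipe it directly: printf '%s\n' "$SQL" | sudo mysql.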

πŸ‘€ Step 3: Create Nova Service User and Endpoints

1. Create the nova user in Keystone

openstack user create --domain default --password-prompt nova

When prompted, enter a password (e.g., ubuntu) and confirm it.

βœ… Example Output:

+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| name                | nova                             |
| ...                 | ...                              |
+---------------------+----------------------------------+

2. Assign the admin role to the nova user

openstack role add --project service --user nova admin

🟑 No output means success.

3. Register the Nova service in the catalog

openstack service create --name nova --description "OpenStack Compute" compute

βœ… Expected Output:

+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+

4. Create API Endpoints

openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

πŸ”” Port 8774/v2.1 is the default for Nova API. Ensure controller resolves correctly.
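Before creating the endpoints, it is worth confirming that the name controller actually resolves on this host. A quick check (getent is standard on Ubuntu):

```shell
# Check that `controller` resolves; print a hint if it does not.
getent hosts controller || echo "controller does not resolve: add it to /etc/hosts"
```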


πŸ“¦ Step 4: Install Nova Packages

Install required Nova components on the controller node:

sudo apt update
sudo apt install nova-api nova-conductor nova-novncproxy nova-scheduler

πŸ› οΈ Components installed:

  • nova-api: REST API endpoint
  • nova-conductor: Mediates DB interactions
  • nova-scheduler: Decides where to run VMs
  • nova-novncproxy: Provides VNC console access

βš™οΈ Step 5: Configure nova.conf

Edit the main Nova configuration file:

sudo vi /etc/nova/nova.conf

Add or modify the following sections:

1. Database Access

[api_database]
connection = mysql+pymysql://nova:ubuntu@controller/nova_api

[database]
connection = mysql+pymysql://nova:ubuntu@controller/nova

πŸ” Replace ubuntu with the database password you set earlier.


2. RabbitMQ Message Queue

[DEFAULT]
transport_url = rabbit://openstack:ubuntu@controller:5672/

πŸ” Replace ubuntu with the password for the openstack user in RabbitMQ.


3. Keystone Authentication

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = ubuntu

πŸ” Replace ubuntu with the password you chose for the nova user.

❗ Important: Comment out or remove any other lines in [keystone_authtoken].
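If you automate your deployment, the [keystone_authtoken] block can be appended non-interactively instead of editing with vi. A sketch, writing to a temporary file here rather than /etc/nova/nova.conf (NOVA_PASS is a stand-in for the nova user's real password):

```shell
NOVA_PASS=ubuntu        # stand-in for the real nova user password
CONF=$(mktemp)          # use /etc/nova/nova.conf on a real system
cat >> "$CONF" <<EOF
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = ${NOVA_PASS}
EOF
grep -q "^password = ${NOVA_PASS}" "$CONF" && echo "keystone_authtoken written"
```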


4. Service User Token (Optional but Recommended)

[service_user]
send_service_user_token = true
auth_url = http://controller:5000/v3
auth_strategy = keystone
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = ubuntu

πŸ” Use same ubuntu as above.


5. Controller Node IP Address

[DEFAULT]
my_ip = 10.0.0.11

πŸ” Replace 10.0.0.11 with the management network IP of your controller node.


6. VNC Configuration

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

This allows VNC console access via the dashboard.


7. Glance (Image Service) Access

[glance]
api_servers = http://controller:9292

Ensure Glance is reachable at port 9292.


8. Lock Path for Concurrency

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

Create the directory if needed and make sure the nova user can write to it (the package normally handles this):

sudo mkdir -p /var/lib/nova/tmp
sudo chown nova:nova /var/lib/nova/tmp

9. Placement Service Access

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

πŸ” Replace PLACEMENT_PASS with the password you set for the placement user.

❗ Remove or comment out any other options in [placement].


βœ… Final Notes on Configuration

  • An ellipsis (...) in config examples means keep existing defaults.
  • Do not duplicate sections β€” edit existing ones or add if missing.
  • Avoid mixing old and new configs.

πŸ›  Step 6: Populate and Initialize Nova Databases

Run these commands in order:

1. Sync the nova-api database

sudo su -s /bin/sh -c "nova-manage api_db sync" nova

🟑 Ignore deprecation warnings.


2. Register the cell0 database

sudo su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

cell0 holds records for instances that fail to schedule.


3. Create cell1 (Primary Compute Cell)

sudo su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

βœ… Sample Output:

Created cell with UUID: f690f4fd-2bc5-4f15-8145-db561a7b9d3d

4. Sync the main Nova database

sudo su -s /bin/sh -c "nova-manage db sync" nova

This sets up the schemas for the nova and nova_cell0 databases (cell1 uses the nova database).


5. Verify Cells Are Registered

sudo su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

βœ… Expected Output:

+-------+--------------------------------------+----------------------------+----------------------------------------------------+----------+
| Name  | UUID                                 | Transport URL              | Database Connection                                | Disabled |
+-------+--------------------------------------+----------------------------+----------------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 | none:/                     | mysql+pymysql://nova:****@controller/nova_cell0    | False    |
| cell1 | f690f4fd-2bc5-4f15-8145-db561a7b9d3d | rabbit://openstack@...     | mysql+pymysql://nova:****@controller/nova          | False    |
+-------+--------------------------------------+----------------------------+----------------------------------------------------+----------+

🟒 Success means both cell0 and cell1 appear and are not disabled.


πŸ” Step 7: Restart Nova Services

Apply all changes by restarting the Nova services:

sudo service nova-api restart
sudo service nova-scheduler restart
sudo service nova-conductor restart
sudo service nova-novncproxy restart

βœ… All services should restart without errors.
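If you prefer a single command, the four restarts can be looped. A sketch that echoes the commands first as a dry run, so you can review before executing:

```shell
# Dry-run: print the restart command for each controller-side Nova service.
services="nova-api nova-scheduler nova-conductor nova-novncproxy"
for svc in $services; do
  echo "sudo service $svc restart"
done
```

Drop the echo (run the command directly) once the list looks right.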


βœ… Step 8: Verify Installation

Now that everything is running, verify Nova works.

1. List OpenStack services

openstack compute service list

You should see:

  • nova-scheduler
  • nova-conductor
  • nova-compute (once compute nodes are added)
  • All services in UP state

2. Check cells communication (Optional Advanced Test)

sudo nova-manage cell_v2 list_hosts

Should show registered compute nodes.


πŸ›  Troubleshooting Tips

Issue β†’ Solution

  • nova-api fails to start β†’ Check the keystone_authtoken settings and password
  • Database sync errors β†’ Confirm DB connectivity and credentials in nova.conf
  • cell_v2 command not found β†’ Ensure the nova-conductor package is installed
  • 503 Service Unavailable β†’ Make sure the Nova API service is running; check /var/log/nova/*.log
  • Host not showing in list_hosts β†’ Wait for the compute node to register; check firewall and networking

Check logs:

sudo tail -f /var/log/nova/nova-api.log
sudo tail -f /var/log/nova/nova-scheduler.log

πŸ“Œ Summary Checklist

Task Status
β˜‘οΈ Source admin credentials βœ…
β˜‘οΈ Create nova_api, nova, nova_cell0 DBs βœ…
β˜‘οΈ Create nova user and endpoints βœ…
β˜‘οΈ Install Nova packages βœ…
β˜‘οΈ Configure /etc/nova/nova.conf βœ…
β˜‘οΈ Sync databases and create cells βœ…
β˜‘οΈ Restart services βœ…
β˜‘οΈ Verify with openstack compute service list βœ…

πŸš€ Next Steps

After completing the controller setup:

  1. ➑️ Install Nova Compute Service on compute nodes
  2. ➑️ Install and configure Neutron (Networking)
  3. ➑️ Launch your first instance using:
    openstack server create ...

πŸ”— Official Docs:
Nova Controller Installation (Ubuntu)

🎯 You're now ready to manage compute resources in OpenStack!



OpenStack Nova (Compute) Service Installation Guide for Ubuntu

Simple & Step-by-Step Guide for Compute Nodes

βœ… Based on: OpenStack Nova Compute Install Guide (2025.1)
πŸ–₯️ Role: Compute Node
πŸ“¦ Distribution: Ubuntu
πŸ”§ Focus: Easy-to-follow, beginner-friendly instructions


🧩 Overview

This guide walks you through installing and configuring the Nova Compute service (nova-compute) on a compute node in your OpenStack environment.

The compute node runs virtual machines (VMs) using KVM/QEMU and connects to the controller for management.

πŸ”§ You will:

  • Install nova-compute package
  • Configure /etc/nova/nova.conf
  • Enable hardware acceleration (KVM) or fallback to QEMU
  • Start the service
  • Register the compute node from the controller

⚠️ Prerequisites:

  • Controller node must have Keystone, Glance, Placement, and Nova (controller services) already installed and working.
  • Network connectivity between controller and compute nodes.
  • NTP synchronized on all nodes.

πŸ“¦ Step 1: Install Nova Compute Package

Log in to your compute node (e.g., compute1) and install the Nova compute service.

sudo apt update
sudo apt install nova-compute

πŸ› οΈ This installs:

  • nova-compute: The main service that manages VMs
  • Dependencies like libvirt, qemu, and kvm

βš™οΈ Step 2: Configure nova.conf

Edit the main Nova configuration file:

sudo vi /etc/nova/nova.conf

Update the following sections:

1. RabbitMQ Message Queue

In the [DEFAULT] section:

[DEFAULT]
transport_url = rabbit://openstack:ubuntu@controller

πŸ” Replace ubuntu with the password you set for the openstack user in RabbitMQ.

βœ… Example: If your RabbitMQ password is rabbit_secret, use:

transport_url = rabbit://openstack:rabbit_secret@controller

2. Keystone Authentication

In the [api] and [keystone_authtoken] sections:

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = ubuntu

πŸ” Replace ubuntu with the password you chose for the nova user in Keystone.

❗ Remove or comment out any other lines in [keystone_authtoken].


3. Service User Token (Optional but Recommended)

[service_user]
send_service_user_token = true
auth_url = http://controller:5000/v3
auth_strategy = keystone
auth_type = password
project_domain_name = Default
project_name = service
user_domain_name = Default
username = nova
password = ubuntu

πŸ” Use the same ubuntu as above.


4. Management IP Address

In the [DEFAULT] section:

[DEFAULT]
my_ip = 10.0.0.31

πŸ” Replace 10.0.0.31 with the management network IP address of your compute node.

βœ… Example: First compute node β†’ 10.0.0.31, second β†’ 10.0.0.32, etc.
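If you automate per-node configuration, my_ip can be parsed out of `ip -4 -o addr show <iface>` output instead of being hard-coded. A self-contained sketch using a sample output line (the interface name enp0s8 and the address are assumptions):

```shell
# Parse the IPv4 address out of a sample `ip -4 -o addr show` line.
sample='2: enp0s8    inet 10.0.0.31/24 brd 10.0.0.255 scope global enp0s8'
MY_IP=$(echo "$sample" | awk '{print $4}' | cut -d/ -f1)
echo "my_ip = $MY_IP"   # prints: my_ip = 10.0.0.31
```

On a real node, replace the sample variable with the live command output for your management interface.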


5. VNC Console Access

In the [vnc] section:

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

πŸ“Œ Explanation:

  • server_listen = 0.0.0.0: Listens on all interfaces
  • novncproxy_base_url: Where users access VM consoles via browser

πŸ” If controller hostname is not resolvable from client machines, replace controller with its IP (e.g., http://10.0.0.11:6080/vnc_auto.html)


6. Glance (Image Service) Access

[glance]
api_servers = http://controller:9292

Ensure the Image service is reachable.


7. Lock Path for Concurrency

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

Create the directory if missing:

sudo mkdir -p /var/lib/nova/tmp

8. Placement Service Access

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = PLACEMENT_PASS

πŸ” Replace PLACEMENT_PASS with the password you set for the placement user.

❗ Comment out or remove any other options in [placement].


πŸ’‘ Step 3: Check for Hardware Virtualization Support

Run this command to check if your CPU supports hardware acceleration (KVM):

egrep -c '(vmx|svm)' /proc/cpuinfo

Interpret the Result:

Output Meaning Action
1 or higher βœ… KVM supported No extra config needed
0 ❌ No KVM support Configure Nova to use QEMU

If No Hardware Support (Output = 0): Use QEMU

Edit the libvirt configuration:

sudo vi /etc/nova/nova-compute.conf

Add or modify the [libvirt] section:

[libvirt]
virt_type = qemu

πŸ“ This tells Nova to use software-based QEMU instead of hardware-accelerated KVM.


πŸ” Step 4: Restart and Enable Nova Compute Service

Apply all changes:

sudo service nova-compute restart

Ensure it starts automatically on boot:

sudo systemctl enable nova-compute

πŸ›  Troubleshooting Tips

Common Issue: nova-compute fails to start

Check the log:

sudo tail -f /var/log/nova/nova-compute.log

If you see:

AMQP server on controller:5672 is unreachable

βœ… Fix:

  • Ensure RabbitMQ is running on the controller.
  • Open port 5672 on the controller’s firewall:
sudo ufw allow from 10.0.0.0/24 to any port 5672

Replace 10.0.0.0/24 with your management network.

Then restart:

sudo service nova-compute restart

βž• Step 5: Add Compute Node to Cell Database (On Controller)

πŸ”§ This step must be done on the controller node, not the compute node.

1. Source Admin Credentials

. admin-openrc

2. Verify Compute Service is Running

openstack compute service list --service nova-compute

βœ… Expected Output:

+----+-----------+--------------+------+---------+-------+----------------------------+
| ID | Host      | Binary       | Zone | Status  | State | Updated At                 |
+----+-----------+--------------+------+---------+-------+----------------------------+
|  1 | compute1  | nova-compute | nova | enabled | up    | 2025-04-05T10:00:00.000000 |
+----+-----------+--------------+------+---------+-------+----------------------------+

If state is down, check logs and network/firewall.


3. Discover Compute Hosts

Register the compute node(s) in the cell database:

sudo su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

βœ… Sample Output:

Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc
Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc
Checking host mapping for compute host 'compute1': fe58ddc1-1d65-4f87-9456-bc040dc106b3
Creating host mapping for compute host 'compute1': fe58ddc1-1d65-4f87-9456-bc040dc106b3

🟒 Success means your compute node is now registered!


πŸ” Optional: Automate Host Discovery

To avoid running discover_hosts manually every time you add a new compute node:

Edit /etc/nova/nova.conf on the controller node:

[scheduler]
discover_hosts_in_cells_interval = 300

This will automatically discover new compute nodes every 5 minutes.

Then restart services:

sudo service nova-scheduler restart

βœ… Final Verification

Back on the controller node, verify everything works:

openstack compute service list

All services should be up and enabled.

Also run:

sudo nova-manage cell_v2 list_hosts

You should see your compute node listed under cell1.


πŸ“Œ Summary Checklist

Task Status
β˜‘οΈ Install nova-compute on compute node βœ…
β˜‘οΈ Configure /etc/nova/nova.conf βœ…
β˜‘οΈ Set correct my_ip βœ…
β˜‘οΈ Enable KVM or set virt_type = qemu βœ…
β˜‘οΈ Restart nova-compute service βœ…
β˜‘οΈ Run discover_hosts on controller βœ…
β˜‘οΈ Confirm host appears in list_hosts βœ…

πŸš€ Next Steps

  1. ➑️ Install and configure Neutron (Networking) on controller and compute nodes

πŸ”— Official Docs:
Nova Compute Installation (Ubuntu)

🎯 You’re now ready to run virtual machines at scale!


OpenStack Neutron (Networking) Installation Guide for Ubuntu

Controller Node Setup with Self-Service Networks (Option 2)

βœ… Based on: OpenStack Neutron Install Guide (2025.1)


🧩 Overview

This guide walks you through installing and configuring the OpenStack Neutron (Networking) service on the controller node, using Option 2 – Self-Service Networks.

With this setup:

  • βœ… Users can create private (self-service) networks
  • βœ… Support for routers, NAT, and floating IPs
  • βœ… Instances can access the internet and be reached from outside
  • βœ… Uses VXLAN overlay networks for tenant isolation

πŸ”§ You will:

  • Create Neutron database and service credentials
  • Install Neutron packages
  • Configure core, ML2, L3, DHCP, metadata agents
  • Integrate with Nova
  • Start services

⚠️ Prerequisites:

  • Controller node must have: MySQL, RabbitMQ, Keystone, Glance, Nova (controller services), and Placement already installed and working.
  • At least two network interfaces (management + external) recommended.

πŸ—„οΈ Step 1: Create Neutron Database

Connect to MariaDB/MySQL and create the neutron database.

1. Log in as root:

sudo mysql

2. Create the neutron database:

CREATE DATABASE neutron;

3. Grant privileges:

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'ubuntu';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'ubuntu';

πŸ” Replace ubuntu with a strong password (e.g., neutron_db_secret).

4. Exit:

EXIT;

πŸ” Step 2: Source Admin Credentials

Load admin credentials to run OpenStack CLI commands.

. admin-openrc

πŸ’‘ Ensure your environment has the correct OS_* variables set (e.g., OS_USERNAME=admin, OS_AUTH_URL=http://controller:5000/v3).


πŸ‘€ Step 3: Create Neutron Service User and Endpoints

1. Create the neutron user in Keystone

openstack user create --domain default --password-prompt neutron

When prompted, enter a password (e.g., ubuntu) and confirm it.

βœ… Example Output:

+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| name                | neutron                          |
| ...                 | ...                              |
+---------------------+----------------------------------+

2. Assign the admin role to the neutron user

openstack role add --project service --user neutron admin

🟑 No output means success.

3. Register the Neutron service in the catalog

openstack service create --name neutron --description "OpenStack Networking" network

βœ… Expected Output:

+-------------+---------------------------+
| Field       | Value                     |
+-------------+---------------------------+
| description | OpenStack Networking      |
| name        | neutron                   |
| type        | network                   |
+-------------+---------------------------+

4. Create API Endpoints

openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696

πŸ”” Port 9696 is the default Neutron API port.


πŸ“¦ Step 4: Install Neutron Packages

Install required Neutron components on the controller node:

sudo apt update -y
sudo apt install neutron-server neutron-plugin-ml2 \
neutron-openvswitch-agent neutron-l3-agent \
neutron-dhcp-agent neutron-metadata-agent \
openvswitch-switch -y

πŸ› οΈ Components installed:

  • neutron-server: REST API and core logic
  • neutron-plugin-ml2: Modular Layer 2 plugin (supports VLAN/VXLAN)
  • neutron-openvswitch-agent: OVS agent for switching
  • neutron-l3-agent: Provides routing between networks
  • neutron-dhcp-agent: Assigns IPs via DHCP
  • neutron-metadata-agent: Delivers metadata to instances
  • openvswitch-switch: OVS kernel module and service

βš™οΈ Step 5: Configure neutron.conf

Edit the main Neutron configuration file:

sudo vi /etc/neutron/neutron.conf

Add or modify the following sections:

1. Database Access

Under [database] Section:

connection = mysql+pymysql://neutron:ubuntu@controller/neutron

πŸ” Replace ubuntu with your database password.


2. RabbitMQ, Core Plugin (ML2), and Auth Strategy

Add under the [DEFAULT] section:

transport_url = rabbit://openstack:ubuntu@controller
core_plugin = ml2
service_plugins = router
auth_strategy = keystone

πŸ” Replace ubuntu with the password for the openstack user in RabbitMQ.


3. Keystone Authentication

Under [keystone_authtoken] Section:

www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = ubuntu

πŸ” Replace ubuntu with the password you set for the neutron user.

❗ Comment out or remove any other lines in [keystone_authtoken].


4. Nova Integration

Add under the [DEFAULT] section:

notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

This tells Neutron to notify Nova when ports change.

Add under the [nova] Section:

auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = nova
password = ubuntu

πŸ” Replace ubuntu with the password for the nova user in Keystone.

5. Lock Path

Add under the [oslo_concurrency] Section:

lock_path = /var/lib/neutron/tmp

Create the directory:

sudo mkdir -p /var/lib/neutron/tmp

βš™οΈ Step 6: Configure ML2 Plugin (ml2_conf.ini)

The ML2 plugin enables self-service networks using VXLAN.

sudo vi /etc/neutron/plugins/ml2/ml2_conf.ini

1. Configure Types and Mechanisms

Under [ml2] Section:

type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security

πŸ“Œ Explanation:

  • tenant_network_types = vxlan: Use VXLAN for private networks
  • mechanism_drivers = openvswitch,l2population: Enable OVS and ARP responder
  • l2population: Optimizes VXLAN flooding using controller-based learning

2. Configure VXLAN Networking

Under [ml2_type_vxlan] Section:

vni_ranges = 1:1000

This defines the VXLAN VNI range for tenant networks.

3. Configure Provider Virtual Network

Under [ml2_type_flat] Section:

flat_networks = provider

4. Enable Port Security

Under [securitygroup] Section:

enable_ipset = true
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = true

πŸ”₯ Required for security groups to work with OVS.


βš™οΈ Step 7: Configure the Open vSwitch agent

Edit:

sudo vi /etc/neutron/plugins/ml2/openvswitch_agent.ini

Add Under [ovs] Section:

local_ip = 10.0.0.31
bridge_mappings = provider:br-provider

πŸ” Replace values:

  • br-provider: Name of the OVS bridge connected to the physical provider network (e.g., external network).
  • 10.0.0.31: Management IP address of the compute node (used for VXLAN tunneling).

πŸ“Œ Tip: Use the same IP as my_ip in /etc/nova/nova.conf.

Set Up Provider Network Bridge

You need an OVS bridge (br-provider) that connects to a physical interface (e.g., ens3) for provider network traffic.

  1. Create the Provider Bridge

sudo ovs-vsctl add-br br-provider

  2. Add the Physical Interface to the Bridge

sudo ovs-vsctl add-port br-provider ens3

πŸ” Replace ens3 with your actual physical network interface (e.g., eth1, enp2s0, etc.).

⚠️ Warning: Running this command may disconnect your SSH session if ens3 is your management interface.
βœ… Best practice: Use a dedicated interface for provider networks.

Add Under [agent] Section:

tunnel_types = vxlan
l2_population = true

πŸ” Replace 10.0.0.31 with the management IP of your compute node.

  • local_ip: Used for VXLAN tunnel endpoints
  • tunnel_types = vxlan: Enables VXLAN overlay networks
  • l2_population: Reduces flooding with ARP responder (recommended)

Add Under [securitygroup] Section:

enable_security_group = true
firewall_driver = openvswitch
# firewall_driver = iptables_hybrid   # Alternative option

πŸ”Ή Use the openvswitch driver for better performance with OVS.
πŸ”Ή If using iptables_hybrid, ensure kernel bridge filtering is enabled.

Enable Bridge Filtering (Only if using iptables_hybrid):

sudo modprobe br_netfilter
echo 'br_netfilter' | sudo tee -a /etc/modules-load.d/modules.conf

Set sysctl values:

echo 'net.bridge.bridge-nf-call-iptables=1' | sudo tee -a /etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-ip6tables=1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p

βš™οΈ Step 8: Configure Layer-3 (L3) Agent

sudo vi /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = openvswitch
external_network_bridge =

πŸ”” Leaving external_network_bridge empty allows multiple external networks. This option is deprecated in recent releases; if your version no longer accepts it, simply omit the line.


βš™οΈ Step 8: Configure DHCP Agent

sudo vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = openvswitch
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

πŸ”Ή enable_isolated_metadata = true: Allows instances to get metadata via DHCP.


βš™οΈ Step 9: Configure Metadata Agent

The metadata agent delivers user data and credentials to instances.

sudo vi /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET

πŸ” Replace METADATA_SECRET with a strong secret (e.g., metadata_super_secret).

πŸ’‘ This same secret must be configured in Nova later.
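Rather than inventing a secret by hand, you can generate one. A sketch using openssl (installed by default on Ubuntu):

```shell
# Generate a 32-character hex shared secret for the metadata proxy.
METADATA_SECRET=$(openssl rand -hex 16)
echo "metadata_proxy_shared_secret = $METADATA_SECRET"
```

Record the generated value; you will paste the same string into nova.conf in Step 11.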


βš™οΈ Step 10: Configure Nova to Use Neutron

Update Nova to use Neutron for networking and metadata.

sudo vi /etc/nova/nova.conf

In the [neutron] section:

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = ubuntu
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET

πŸ” Replace:

  • ubuntu β†’ password for neutron user
  • METADATA_SECRET β†’ same secret used in metadata_agent.ini

❗ Do not skip this step β€” required for metadata and security groups.


πŸ›  Step 12: Finalize Installation

1. Populate the Neutron Database

Run the database sync command:

sudo su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

🟑 Ignore deprecation warnings.


2. Restart Nova API Service

Apply Nova-Neutron integration:

sudo service nova-api restart

3. Start Neutron Services

sudo service neutron-server restart
sudo service neutron-openvswitch-agent restart
sudo service neutron-dhcp-agent restart
sudo service neutron-metadata-agent restart
sudo service neutron-l3-agent restart

βœ… All services should start without errors.

Enable them at boot:

sudo systemctl enable neutron-server \
  neutron-openvswitch-agent \
  neutron-l3-agent \
  neutron-dhcp-agent \
  neutron-metadata-agent

βœ… Step 13: Verify Installation

Back on the controller node, verify services are running:

openstack extension list --network

Should show extensions like router, security-group, vxlan, etc.

Check agent list:

openstack network agent list

βœ… Expected Output:

+----+--------------------+------------+-------------------+-------+-------+----------------------------+
| ID | Agent Type         | Host       | Availability Zone | Alive | State | Binary                     |
+----+--------------------+------------+-------------------+-------+-------+----------------------------+
| 1  | L3 agent           | controller | nova              | :-)   | UP    | neutron-l3-agent           |
| 2  | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent         |
| 3  | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent     |
| 4  | Open vSwitch agent | controller | None              | :-)   | UP    | neutron-openvswitch-agent  |
+----+--------------------+------------+-------------------+-------+-------+----------------------------+

🟒 All agents should be UP.


πŸ“Œ Summary Checklist

Task Status
β˜‘οΈ Source admin credentials βœ…
β˜‘οΈ Create Neutron DB and user βœ…
β˜‘οΈ Register Neutron service and endpoints βœ…
β˜‘οΈ Install Neutron packages βœ…
β˜‘οΈ Configure neutron.conf, ml2_conf.ini βœ…
β˜‘οΈ Configure L3, DHCP, Metadata agents βœ…
β˜‘οΈ Update Nova to use Neutron βœ…
β˜‘οΈ Sync database and start services βœ…
β˜‘οΈ Verify agents with openstack network agent list βœ…

πŸ§ͺ Test Your Setup (After Compute Node Ready)

Once compute and network are ready:

  1. Create a self-service network
  2. Launch an instance on it
  3. Create a router to connect to provider network
  4. Assign a floating IP
  5. Ping/SSH into the instance

You’ve built a full cloud network!


πŸ”— Official Docs:
Neutron Installation Guide (Ubuntu)
🎯 You're now ready to enable advanced networking in your OpenStack cloud!


πŸš€ Next Steps: Go to Compute Node

Now go to your compute node(s) and install Neutron components:

OpenStack Neutron (Networking) Installation Guide for Ubuntu

Compute Node Setup with Self-Service Networks (Option 2)

βœ… Based on: OpenStack Neutron Install Guide (2025.1)


🧩 Overview

This guide walks you through installing and configuring the OpenStack Neutron service on a compute node, using Networking Option 2 – Self-Service Networks.

With this setup:

  • βœ… Instances can use private (self-service) networks
  • βœ… Support for routers, floating IPs, and NAT
  • βœ… Overlay networking via VXLAN tunnels
  • βœ… Integration with Open vSwitch (OVS)
  • βœ… Security groups are enforced

πŸ”§ You will:

  • Install neutron-openvswitch-agent
  • Configure neutron.conf, openvswitch_agent.ini
  • Set up OVS bridges for provider and overlay networks
  • Enable security groups
  • Restart services

⚠️ Prerequisites:

  • Controller node must have Neutron (with ML2, L3, DHCP, Metadata agents) already installed and running.
  • RabbitMQ, Keystone, Nova, and Placement services must be accessible.
  • The compute node must have network connectivity to the controller.

πŸ“¦ Step 1: Install Neutron Open vSwitch Agent

Log in to your compute node and install the required package:

sudo apt update -y
sudo apt install neutron-openvswitch-agent -y

πŸ› οΈ This installs:

  • neutron-openvswitch-agent: Manages virtual switches and tunnels
  • openvswitch-switch: Core OVS support

❗ Do not install neutron-server, neutron-l3-agent, or neutron-dhcp-agent on compute nodes unless needed.


βš™οΈ Step 2: Configure Common Neutron Settings

Edit the main Neutron configuration file:

sudo nano /etc/neutron/neutron.conf

Update the following sections:

1. Disable Database Access

Compute nodes do not access the database directly.

[database]
# connection = sqlite:///neutron.sqlite
# Comment out or leave this line commented

βœ… Ensure no connection line is active.


2. RabbitMQ Message Queue

In the [DEFAULT] section:

[DEFAULT]
transport_url = rabbit://openstack:ubuntu@controller

πŸ” Replace RABBIT_PASS with the password for the openstack user in RabbitMQ.

βœ… Example: If RabbitMQ password is rabbit_secret, use:

transport_url = rabbit://openstack:rabbit_secret@controller

3. Concurrency Lock Path

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp

Create the directory if missing:

sudo mkdir -p /var/lib/neutron/tmp

βš™οΈ Step 3: Configure Open vSwitch Agent (openvswitch_agent.ini)

This is the key configuration for self-service networks using VXLAN.

Edit:

sudo vi /etc/neutron/plugins/ml2/openvswitch_agent.ini

Add Under [ovs] Section:

local_ip = 10.0.0.31
bridge_mappings = provider:br-provider

πŸ” Replace values:

  • br-provider: Name of the OVS bridge connected to the physical provider network (e.g., external network).
  • 10.0.0.31: Management IP address of the compute node (used for VXLAN tunneling).

πŸ“Œ Tip: Use the same IP as my_ip in /etc/nova/nova.conf.

Set Up Provider Network Bridge

You need an OVS bridge (br-provider) that connects to a physical interface (e.g., ens3) for provider network traffic.

  1. Create the Provider Bridge
sudo ovs-vsctl add-br br-provider
  2. Add the Physical Interface to the Bridge
sudo ovs-vsctl add-port br-provider ens3

πŸ” Replace ens3 with your actual physical network interface (e.g., eth1, enp2s0, etc.).

⚠️ Warning: Running this command may disconnect your SSH session if ens3 is your management interface.
βœ… Best practice: Use a dedicated interface for provider networks.

Add Under [agent] Section:

tunnel_types = vxlan
l2_population = true

  • tunnel_types = vxlan: Enables VXLAN overlay networks
  • l2_population = true: Reduces flooding by using an ARP responder (recommended)

πŸ“Œ Note: local_ip, set earlier under [ovs], provides the VXLAN tunnel endpoint address.

Add Under [securitygroup] Section:

enable_security_group = true
firewall_driver = openvswitch
# firewall_driver = iptables_hybrid   # Alternative option

πŸ”Ή Use the openvswitch driver for better performance with OVS.
πŸ”Ή If using iptables_hybrid, ensure kernel bridge filtering is enabled.

πŸ”§ Step 4: Enable Bridge Filtering (Only if using iptables_hybrid)

sudo modprobe br_netfilter
echo 'br_netfilter' | sudo tee -a /etc/modules-load.d/modules.conf

Set sysctl values:

echo 'net.bridge.bridge-nf-call-iptables=1' | sudo tee -a /etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-ip6tables=1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
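To confirm the module and sysctl settings took effect, check them directly (both values should report 1 on a correctly configured node):

```shell
# Verify bridge filtering is active on the compute node
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
```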

πŸ” Step 5: Restart OVS and Neutron Agent

Restart services to apply changes:

sudo service openvswitch-switch restart
sudo service neutron-openvswitch-agent restart

Enable auto-start:

sudo systemctl enable neutron-openvswitch-agent
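Before moving back to the controller, you can sanity-check the agent and bridges locally on the compute node (a quick sketch; br-int and br-tun are created by the agent itself once it starts cleanly):

```shell
# Local sanity check on the compute node
sudo systemctl status neutron-openvswitch-agent --no-pager
sudo ovs-vsctl list-br   # expect br-provider plus agent-created bridges (br-int, br-tun)
```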

πŸ›  Step 6: Verify OVS Agent on Controller

Go back to the controller node and verify the agent is registered.

1. Source Admin Credentials

. admin-openrc

2. List Neutron Agents

openstack network agent list

βœ… Look for:

+----+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+----+--------------------+------------+-------------------+-------+-------+---------------------------+
| 5  | Open vSwitch agent | compute1   | None              | :-)   | UP    | neutron-openvswitch-agent |
+----+--------------------+------------+-------------------+-------+-------+---------------------------+

🟒 If the agent shows UP, your compute node is successfully connected!
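For a scriptable version of this check, a hedged sketch that inspects only the Alive column (run on the controller; requires admin-openrc and a live deployment):

```shell
# Sketch: warn if any Neutron agent reports dead (Alive shows ":-)" when healthy)
. admin-openrc
if openstack network agent list -f value -c Alive | grep -qv ':-)'; then
  echo "WARNING: at least one network agent is down"
else
  echo "All network agents are alive"
fi
```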


πŸ”— Step 7: Configure Nova to Use Neutron (on Compute Node)

Ensure Nova is configured to use Neutron for networking.

Edit Nova config:

sudo nano /etc/nova/nova.conf

In the [neutron] section:

[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS

πŸ” Replace NEUTRON_PASS with the password you set for the neutron user in Keystone.

Then restart Nova:

sudo service nova-compute restart

βœ… This allows Nova to create ports and request network resources from Neutron.


βœ… Final Verification Checklist

| Task | Status |
|------|--------|
| Install neutron-openvswitch-agent | βœ… |
| Configure neutron.conf (RabbitMQ, lock path) | βœ… |
| Configure openvswitch_agent.ini (VXLAN, OVS bridge) | βœ… |
| Create br-provider and attach physical interface | βœ… |
| Restart openvswitch-switch and neutron-openvswitch-agent | βœ… |
| Run openstack network agent list on controller | βœ… |
| Confirm OVS agent is UP | βœ… |
| Restart nova-compute after Neutron setup | βœ… |

πŸš€ Next Steps

Now that your compute node is fully integrated:

  1. ➑️ On the controller, create a self-service network, router, and connect to provider network
  2. ➑️ Launch an instance on the self-service network
  3. ➑️ Assign a floating IP and test SSH access
  4. ➑️ Verify security groups (e.g., allow port 22)

Example:

openstack server create --image cirros --flavor m1.tiny --network selfservice-net --security-group default my-instance
openstack floating ip create provider-net
openstack server add floating ip my-instance <floating-ip>

πŸ“Œ Troubleshooting Tips

| Issue | Solution |
|-------|----------|
| OVS agent shows DOWN | Check RabbitMQ connectivity, firewall (port 5672) |
| No network connectivity | Verify local_ip matches the compute node's IP |
| VXLAN traffic not working | Ensure local_ip uses an interface on the same subnet as other nodes |
| SSH to instance fails | Check floating IP, security group rules, and metadata agent |
| br_netfilter errors | Load the module and set sysctl values as shown above |

Check logs:

sudo tail -f /var/log/neutron/openvswitch-agent.log
sudo tail -f /var/log/nova/nova-compute.log

πŸ”— Official Docs

🎯 Your OpenStack cloud now supports scalable, secure, multi-tenant networking!


OpenStack Horizon (Dashboard) Installation and Verification Guide

For Ubuntu – Step-by-Step Guide (2025.1 Release)


🧩 Overview

This guide provides a clear, step-by-step process to install and verify the OpenStack Dashboard (Horizon) on the controller node using Ubuntu.

Horizon is the web-based interface for OpenStack, allowing users and administrators to manage:

  • Instances (VMs)
  • Networks
  • Volumes
  • Images
  • Users and projects

πŸ”§ You will:

  • Install the openstack-dashboard package
  • Configure local_settings.py for integration
  • Enable required features (domains, API versions)
  • Reload Apache web server
  • Verify access via browser

⚠️ Prerequisites:

  • Controller node must have: Keystone (Identity), Nova (Compute), Glance (Image), Neutron (Networking) already installed.
  • Apache2 and Memcached services must be running.

πŸ“¦ Step 1: Install Horizon Package

Log in to your controller node and install the OpenStack dashboard:

sudo apt update
sudo apt install openstack-dashboard -y

βœ… This installs:

  • Django-based web dashboard
  • Apache configuration (/etc/apache2/conf-enabled/openstack-dashboard.conf)
  • Python dependencies

βš™οΈ Step 2: Configure local_settings.py

Edit the main Horizon configuration file:

sudo vi /etc/openstack-dashboard/local_settings.py

Update the following settings:

1. Set OpenStack Host

Ensure Horizon connects to services on the controller node:

OPENSTACK_HOST = "controller"

2. Allow Access from Your Hosts

Configure which hosts can access the dashboard.

Replace ['one.example.com'] with your allowed hosts or use ['*'] for testing:

ALLOWED_HOSTS = ['*']

πŸ”’ Production Note: Replace ['*'] with specific hostnames like ['controller', 'dashboard.example.com'].
Using ['*'] is insecure in production.


3. Configure Memcached Session Storage

Set up session storage using Memcached:

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyMemcacheCache',
        'LOCATION': 'controller:11211',
    }
}

βœ… Ensure Memcached is running:

sudo systemctl status memcached
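You can also probe Memcached directly on port 11211 to confirm Horizon will be able to reach it (assumes netcat is installed; the -q flag applies to the traditional netcat variant):

```shell
# Probe Memcached from the controller; expect a block of STAT lines
echo stats | nc -q 1 controller 11211 | head -n 5
```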

❗ Comment out any other session engine lines.


4. Enable Keystone API v3

Set the correct Identity API version:

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

βœ… Port 5000 with the /v3 path matches the Keystone endpoint created earlier in this guide (the /identity path form applies only to DevStack-style deployments).


5. Enable Domain Support

Allow multi-domain user management:

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

6. Set Default Domain

Use Default as the default domain for new users:

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

7. Configure API Versions

Define the correct API versions for integrated services:

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}

8. (Optional) Set Time Zone

Replace TIME_ZONE with your local time zone:

TIME_ZONE = "Asia/Dhaka"

🌍 See List of Time Zones for valid values.
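Valid TIME_ZONE values come from the system tz database; you can confirm a name exists with a quick Python check (requires Python 3.9+ and tzdata, which standard Ubuntu installs include):

```shell
# Confirm "Asia/Dhaka" is a valid tz database name
# Should print True on a system with tzdata installed
python3 -c 'from zoneinfo import available_timezones; print("Asia/Dhaka" in available_timezones())'
```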


9. Configure Neutron Networking (For Option 2 Only)

If you chose Neutron Option 2 (Self-Service Networks), enable router and floating IP support by uncommenting and enabling these options:

OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': True,
    'enable_quotas': True,
    'enable_ipv6': True,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_fip_topology_check': True,
}

βœ… This enables:

  • Routers
  • Floating IPs
  • Network quotas
  • IPv6 support

🚫 If you used Neutron Option 1 (Provider Networks only), leave this section commented or set all to False.


πŸ”§ Step 3: Fix Apache WSGI Configuration (If Needed)

Ensure the following line exists in the Apache config:

sudo vi /etc/apache2/conf-available/openstack-dashboard.conf

Add this line if missing:

WSGIApplicationGroup %{GLOBAL}

βœ… This fixes potential import conflicts in mod_wsgi.


πŸ” Step 4: Reload Web Server

Apply all configuration changes:

sudo systemctl reload apache2

βœ… A reload is sufficient for configuration changes; a full restart is only needed after changing Apache modules, ports, or virtual hosts.
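Optionally, validate the Apache configuration first so the reload only runs if the syntax check passes:

```shell
# Validate Apache config ("Syntax OK"), then reload only on success
sudo apache2ctl configtest && sudo systemctl reload apache2
```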


βœ… Step 5: Verify Horizon Installation

Now that Horizon is installed, verify it works.

1. Access Dashboard via Browser

Open your web browser and go to:

http://controller/dashboard

πŸ” Replace controller with the IP or hostname of your controller node.

Example: http://192.168.0.87/dashboard

You should see the OpenStack login page.


2. Log In with Admin Credentials

Use the following credentials:

  • Domain: Default
  • Username: admin
  • Password: (your admin password)

πŸ’‘ If login fails:

  • Check Keystone is running
  • Confirm password in admin-openrc
  • View logs: /var/log/apache2/*error*.log

3. Test Basic Operations

Once logged in, verify you can:

  • View Instances, Networks, Images, Volumes
  • See existing services under Admin > System Information
  • Switch between Admin and Demo projects (if demo user exists)

4. Check Dashboard Logs (Troubleshooting)

If the dashboard doesn't load:

sudo tail -f /var/log/apache2/error.log
sudo tail -f /var/log/apache2/openstack-dashboard_error.log

Common issues:

  • Memcached not running β†’ Start it: sudo systemctl start memcached
  • ALLOWED_HOSTS mismatch β†’ Set ALLOWED_HOSTS = ['*'] temporarily
  • Keystone unreachable β†’ Check http://controller:5000/v3 is accessible

πŸ“Œ Summary Checklist

| Task | Status |
|------|--------|
| Install openstack-dashboard package | βœ… |
| Set OPENSTACK_HOST = "controller" | βœ… |
| Configure ALLOWED_HOSTS | βœ… |
| Enable memcached session storage | βœ… |
| Set OPENSTACK_KEYSTONE_URL with port 5000 | βœ… |
| Enable domain support and default domain | βœ… |
| Set correct API versions (identity: 3, etc.) | βœ… |
| Enable Neutron features (if using self-service) | βœ… |
| Add WSGIApplicationGroup %{GLOBAL} | βœ… |
| Reload Apache: systemctl reload apache2 | βœ… |
| Access http://controller/dashboard in browser | βœ… |
| Log in as admin user | βœ… |

πŸš€ Next Steps

After successful Horizon installation:

  1. ➑️ Create a demo user and project for testing
  2. ➑️ Upload a cloud image (e.g., Ubuntu 22.04) via Glance
  3. ➑️ Launch your first VM using the dashboard
  4. ➑️ Assign a floating IP and SSH into it

Example commands to create demo user:

openstack project create --domain default --description "Demo Project" demo
openstack user create --domain default --password-prompt demo
openstack role add --project demo --user demo member

πŸ” member is a default role created by Keystone bootstrap; substitute your own role name if you created a different one earlier.

Then log in to Horizon as demo.
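Before (or instead of) the browser login, you can sanity-check the demo account from the CLI by requesting a token with its credentials. A hedged sketch; the CLI prompts for the demo password you set above:

```shell
# Sketch: request a Keystone token as the demo user (run on the controller)
openstack --os-auth-url http://controller:5000/v3 \
  --os-identity-api-version 3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name demo --os-username demo \
  token issue
# A table containing a token id confirms the account works.
```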


πŸ”— Official Documentation

🎯 You now have a fully functional web UI for managing your OpenStack cloud!