Deploying a private cloud can give your organization more control, better security, and flexibility compared to public cloud options. This guide walks you through the process in five steps using OpenStack, a popular open source platform trusted by everyone from small businesses to Fortune 100 companies.

Here’s a quick overview of the steps we’ll go through in this article:

  • Step 1: Infrastructure Setup
    Prepare your hardware, configure the network, and install helpful tools like Kolla-Ansible for containerized deployments.
  • Step 2: OpenStack Core Services
    Deploy and verify OpenStack services like Nova (compute), Neutron (networking), and Keystone (identity management).
  • Step 3: Network and Identity Configuration
    Set up secure networks, VLANs, and role-based access controls to manage users and resources.
  • Step 4: Resource Management
    Test your deployment, assign roles to nodes, and optimize resource allocation for better performance.
  • Step 5: Performance Tuning and Scaling
    Optimize storage, compute, and network configurations. Plan for growth using horizontal scaling and tools like Ceph for storage expansion.

What are the requirements for OpenStack?

We’ll go into this in detail below, but before you get started, we also recommend reading this article from OpenMetal’s President to learn more about hardware requirements, best practices, and tips.

Step 1: Infrastructure Setup

Building a solid infrastructure is the first step in deploying a private cloud with OpenStack.

Hardware and Software Requirements

The building blocks of your cloud infrastructure are:

Controller Nodes

These are the brains of your cloud. They handle API requests, database operations, and scheduling.

  • CPU: High core count (e.g., 16+ cores) and fast clock speeds are a must.
  • RAM: Plenty of RAM (e.g., 64+ GB) is important, especially for large deployments and database caching.
  • Storage: Fast, reliable storage (e.g., NVMe SSDs in RAID 10) for the operating system and OpenStack databases (MariaDB). Consider separate disks for OS and database.
  • Network: Redundant 10Gbps or faster network interfaces for management, API traffic, and internal communication keep everything running efficiently.

Compute Nodes

These run the virtual machines.

  • CPU: The number and type of CPUs depend on the expected workload. For virtualized workloads, consider CPUs with virtualization extensions (e.g., Intel VT-x, AMD-V).
  • RAM: Sufficient RAM to accommodate the memory requirements of the virtual machines.
  • Storage: Local storage (e.g., SSDs or HDDs) for instance storage or shared storage (e.g., Ceph) for centralized storage.
  • Network: High-bandwidth network interfaces (e.g., 10Gbps or faster) for VM traffic.

Storage Nodes (Ceph)

If using Ceph, dedicated storage nodes are recommended.

  • CPU: Moderate CPU power is sufficient.
  • RAM: Adequate RAM for Ceph OSD processes and caching.
  • Storage: Large capacity HDDs or SSDs for Ceph OSDs. Consider dedicated SSDs for Ceph journals or WALs.
  • Network: High-bandwidth, low-latency network interfaces (e.g., 25Gbps or faster) for Ceph replication and client traffic.

As longtime users and proponents of Ceph, we have a few tips and recommendations if you do explore it for your OpenStack cloud setup:

  • Planning: Thoroughly plan your Ceph deployment, including the number of OSDs, monitors, and MDSs.
  • Network Design: Design a high-performance network for Ceph replication and client traffic.
  • Hardware Selection: Choose appropriate hardware for Ceph OSDs, including fast storage devices and high-bandwidth network interfaces.
  • Monitoring: Implement robust monitoring to track Ceph health and performance.
  • Tuning: Tune Ceph configuration parameters for optimal performance.
  • Backups: Implement a backup strategy for Ceph data.
  • Ceph pools: Create different pools for different types of data, and set different replication levels (see the example after this list).
  • Placement Groups: Understanding placement groups and how they affect performance is very important.
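
For instance, here’s a minimal sketch of creating pools with different replication levels and placement group counts; the pool names, PG counts, and sizes are assumptions to adjust for your cluster:

# Pool for image data with 3 replicas (hypothetical names and PG counts)
ceph osd pool create images 128 128 replicated
ceph osd pool set images size 3

# Pool for volume data with 2 replicas
ceph osd pool create volumes 256 256 replicated
ceph osd pool set volumes size 2

# Check placement group status and pool settings
ceph pg stat
ceph osd pool ls detail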

This video from our Director of Cloud Systems Architecture is a great crash course on Ceph as well.

Key Considerations

  • Workload Type: CPU-intensive, memory-intensive, or I/O-intensive workloads will have different hardware requirements. You’ll also likely have different setups depending on individual deployment types – research and development, sandboxes or testing, or live production environments.
  • Redundancy: Implement redundancy at all levels (e.g., redundant power supplies, network interfaces, storage arrays) to ensure high availability. For most cases, we also recommend using three identical servers – this lets you use Ceph easily as your storage provider (if you choose to) and set up OpenStack with a highly available control plane. Identical hardware is not required, but it will make your life easier for things like balancing VMs, performing live migrations (which require matching CPU flags), and more.
  • Future Growth: Plan for future growth by starting with hardware that can be easily scaled.
  • Hardware Compatibility: Verify that all hardware components are compatible with your chosen OpenStack version.
  • Power and Cooling: Ensure adequate power and cooling capacity in the data center.

Deployment Tools

Containerization has made OpenStack deployments faster and more efficient. For instance, OpenMetal’s platform has made it possible to deploy a production-ready OpenStack cloud in under a minute. This is currently the fastest and easiest way (that we know of!) to get started with an OpenStack private cloud.

Here are some other deployment tools to consider:

  • Kolla-Ansible: Ideal for teams familiar with Ansible. It offers containerized deployments with plenty of customization options.
  • Red Hat OpenStack Platform Director: A go-to choice for enterprises already using Red Hat. It includes built-in security features and lots of support.
  • OpenStack-Ansible: Provides detailed control over deployments and supports rolling upgrades, making it great for organizations with complex needs.

Environment Preparation

Before diving into OpenStack installation, make sure your environment is ready with the following steps:

  • System Updates and Dependencies
    Update your system and install necessary tools:

    sudo apt update && sudo apt upgrade -y
    sudo apt install python3-pip chrony
    
  • Network Configuration
    Set up separate VLANs for management, storage, and external traffic for smoother operations (a minimal netplan sketch follows this list).
  • Storage Setup
    Use RAID 10 or NVMe SSDs for controller nodes. For distributed storage, Ceph is a strong option, especially with dedicated 10Gbps or faster network interfaces.
  • Security Measures
    Apply SSL/TLS encryption and enforce RBAC policies to secure your systems.
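
To make the VLAN separation above concrete, here’s a minimal netplan sketch; the interface name, VLAN IDs, and addresses are placeholders to adapt to your environment:

# /etc/netplan/01-cloud-vlans.yaml (hypothetical file name and values)
network:
  version: 2
  ethernets:
    eth0: {}
  vlans:
    vlan10:                  # management traffic
      id: 10
      link: eth0
      addresses: [10.10.10.11/24]
    vlan20:                  # storage traffic
      id: 20
      link: eth0
      addresses: [10.20.20.11/24]
    vlan30:                  # external traffic
      id: 30
      link: eth0
      addresses: [10.30.30.11/24]

Apply the configuration with sudo netplan apply.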

Lastly, make sure all nodes are synchronized using NTP:

sudo timedatectl set-timezone UTC
sudo systemctl enable --now chronyd
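
To confirm that synchronization is actually working, check chrony’s status:

chronyc tracking
chronyc sources -v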

With the environment prepared and synchronized, you’re all set to move on to deploying OpenStack’s core services.

Step 2: OpenStack Setup

Once your infrastructure is ready, it’s time to deploy OpenStack’s core components. Using Kolla-Ansible for a containerized deployment streamlines the process while ensuring dependable performance. Just know that it heavily relies on configuration files. Understanding globals.yml, passwords.yml, and network configuration files is critical. The OpenInfra Foundation has a great guide on using Kolla-Ansible if you’d like to get into more detail.

Core Service Installation

Follow these steps to set up OpenStack using Kolla-Ansible:

  • Install Prerequisites

First, install the required tools:

sudo apt install python3-pip
sudo pip install kolla-ansible

  • Configure Core Services

Edit the /etc/kolla/globals.yml file to include key configuration details:

kolla_base_distro: "ubuntu"
network_interface: "eth0"
neutron_external_interface: "eth1"
kolla_internal_vip_address: "10.10.10.254"

  • Deploy Services

Run the following commands to deploy OpenStack core services:

kolla-ansible bootstrap-servers
kolla-ansible prechecks
kolla-ansible deploy

Installation Check

Verify the deployment with these steps:

# Generate admin credentials
kolla-ansible post-deploy

# Load environment variables
source /etc/kolla/admin-openrc.sh

# Confirm operational services
openstack service list
openstack compute service list --service nova-compute

For any issues, check container logs for debugging:

docker logs keystone_api
docker logs nova_api

A few other tips and things to keep in mind when working with Kolla-Ansible:

  • Customization: Customizing services beyond basic configurations requires being comfortable with Ansible playbooks and OpenStack service configurations.
  • Debugging: Troubleshooting Kolla-Ansible deployments can be challenging. Familiarity with Docker and Ansible is needed for debugging container logs and playbook execution.
  • Version Management: Keep track of the versions of Kolla-Ansible, OpenStack, and container images. Inconsistencies can lead to deployment failures.
  • Configuration Management: Use version control (e.g., Git) to manage configuration files and track changes.
  • Example additions to globals.yml:

    enable_openvswitch: "yes"
    openstack_release: "yoga"              # or another supported release
    kolla_internal_vip_address: "192.168.10.10"
    network_interface: "eth0"
    neutron_external_interface: "eth1"
    enable_cinder: "yes"
    enable_ceph: "yes"

With the core services up and running, you can move on to configuring network segmentation and access controls in Step 3.

Step 3: Network and Identity Setup

Once the core OpenStack services are deployed, it’s time to configure the network and identity management. These steps help with secure access and proper resource isolation for your private cloud.

Network Configuration

Neutron, OpenStack’s networking service, is your ticket to setting up connectivity. Use the OpenStack CLI to build your network infrastructure:

# Create a private network
openstack network create private-net
openstack subnet create --network private-net --subnet-range 192.168.1.0/24 private-subnet

# Set up a router
openstack router create main-router
openstack router set --external-gateway public-net main-router
openstack router add subnet main-router private-subnet

Define security rules to control traffic:

openstack security group create secure-group
openstack security group rule create --proto tcp --dst-port 22 secure-group
openstack security group rule create --proto icmp secure-group

These commands establish a basic network setup with defined traffic rules for secure communication. A few other areas you may want to look into for a more comprehensive setup include:

  • VLANs: Use VLANs to isolate different types of network traffic (e.g., management, storage, tenant networks).
  • Network Segmentation: Create separate subnets for different services and tenants.
  • Routing: Configure routers to enable communication between different subnets and external networks.
  • Security Groups: Use security groups to control inbound and outbound traffic to virtual machines.
  • Neutron Plugins: Choose the appropriate Neutron plugin (e.g., ML2) and mechanism drivers (e.g., Open vSwitch) based on your network requirements.
  • Floating IPs: Configure floating IPs to allow external access to virtual machines (see the example after this list).
  • DNS: Integrate OpenStack with a DNS server to provide name resolution for virtual machines.
  • Example network commands:

    openstack network create --provider-network-type vlan --provider-physical-network external --provider-segment 100 external-network
    openstack subnet create --network external-network --subnet-range 192.0.2.0/24 --gateway 192.0.2.1 external-subnet
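
Since floating IPs come up in nearly every deployment, here’s a minimal sketch of allocating and attaching one, using the external-network from the example above and a hypothetical server name and address:

# Allocate a floating IP from the external network
openstack floating ip create external-network

# Attach it to an instance (replace the server name and IP with your own)
openstack server add floating ip my-instance 192.0.2.50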

Keystone Setup

To manage identity and authentication, start by generating Fernet tokens:

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

Next, create essential roles and assign them:

openstack role create admin
openstack role create member

openstack project create --description "Service Project" service

openstack user create --password-prompt cloud-admin
openstack role add --project service --user cloud-admin admin
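
To confirm the assignment took effect, list role assignments by name:

openstack role assignment list --user cloud-admin --project service --names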

For added security, consider these measures:

  • Principle of Least Privilege: Grant users and services only the necessary permissions. More about that here in our OpenStack Operator’s Manual.
  • Firewall Configuration: Configure firewalls to restrict access to OpenStack services.
  • Intrusion Detection/Prevention: Implement intrusion detection and prevention systems to detect and block malicious activity.
  • Security Audits: Conduct regular security audits to identify and address vulnerabilities.
  • Encryption: Encrypt sensitive data at rest and in transit.
  • Patch Management: Keep OpenStack and the underlying operating system patched and up to date.
  • Keystone Security:
    • Use strong passwords and enforce password policies.
    • Enable multi-factor authentication.
    • Regularly rotate Fernet keys (rotation command example after this list).
  • Secure Communication: Use HTTPS for all API communication.
  • Logging and Monitoring: Centralize logs and monitor system activity for suspicious behavior.
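
For the Fernet key rotation mentioned above, a periodic rotation (for example from cron on the Keystone host) is usually enough:

keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone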

With these configurations in place, you can confidently manage and protect virtual machines and storage resources via OpenStack’s dashboard.

Step 4: Cloud Resource Management

Now that network and identity setups are complete, it’s time to focus on validating operations and managing resources effectively.

Deployment Testing

Use these commands to test your OpenStack deployment and create a test instance:

# Validate the deployment and create a test instance
# (assumes a CirrOS image named "cirros" has already been uploaded to Glance)
openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano
openstack server create --flavor m1.nano --image cirros --network private-net validation-instance
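
If the instance reaches ACTIVE status, compute, networking, and identity are all cooperating. A quick check:

openstack server list
openstack server show validation-instance -c status -c addresses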

Node Role Assignment

In a Kolla-Ansible deployment, node roles come from the host groups in your inventory file (the multinode file), while the corresponding services are switched on in /etc/kolla/globals.yml:

# Controller services
enable_keystone: "yes"
enable_horizon: "yes"

# Compute service
enable_nova: "yes"

# Block storage service
enable_cinder: "yes"
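
For reference, here’s a minimal sketch of the host groups in a Kolla-Ansible multinode inventory; the host names are hypothetical:

# multinode inventory (hypothetical host names)
[control]
controller01

[network]
controller01

[compute]
compute01
compute02

[storage]
storage01

[monitoring]
controller01

[deployment]
localhost ansible_connection=local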

Clear role assignment keeps responsibilities separated and makes each tier easier to scale. Confirm the assignments with the following commands:

openstack compute service list
openstack hypervisor list

Resource Optimization Tips

To improve performance and scalability, consider these adjustments:

  • Fine-tune CPU allocation.
  • Separate and manage network traffic.
  • Enable resource tracking for better oversight.
  • Distribute storage loads effectively.

For example, you can balance resources using Nova host aggregates. If storage performance is a concern, explore options like Cinder QoS policies or distributed storage systems such as Ceph.
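
As a rough sketch of both ideas (the aggregate, host, QoS, and volume type names here are hypothetical):

# Group SSD-backed compute nodes into a host aggregate
openstack aggregate create --zone nova ssd-compute
openstack aggregate add host ssd-compute compute01

# Create a Cinder QoS policy capping IOPS and attach it to a volume type
openstack volume qos create --property read_iops_sec=2000 --property write_iops_sec=1000 standard-iops
openstack volume qos associate standard-iops standard-volume-type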

These steps will help you set up a well-optimized environment, paving the way for scaling and performance tuning in the next phase.

Step 5: Performance and Growth

Once your core operations are in place, it’s time to fine-tune performance and set the stage for scaling up.

VM Deployment

To deploy production VMs efficiently, stick to standardized configurations. Here’s an example:

# Define VM specifications
openstack flavor create --id 1 --vcpus 4 --ram 8192 --disk 80 production-small

# Generate and register SSH key
ssh-keygen -q -N "" -f ~/.ssh/id_rsa
openstack keypair create --public-key ~/.ssh/id_rsa.pub prod-key

# Launch production instance
openstack server create --flavor production-small --image ubuntu-20.04 --security-group default --key-name prod-key prod-instance-01

System Optimization

Your storage setup can make or break the performance of your cloud operations. Use these techniques to improve system efficiency:

Component    Optimization Approach
Storage      Enable write-back caching
Network      Configure jumbo frames (MTU 9000)
Compute      Enable CPU pinning
Memory       Set up huge pages
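
CPU pinning and huge pages, for example, are typically requested through flavor extra specs. A minimal sketch, assuming the compute nodes are already configured for pinning and huge pages:

openstack flavor create --vcpus 4 --ram 8192 --disk 80 \
  --property hw:cpu_policy=dedicated \
  --property hw:mem_page_size=large \
  pinned-performance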

Additionally, tweak Nova’s resource allocation (overcommit) ratios in nova.conf for better performance:

[DEFAULT]
cpu_allocation_ratio = 1.5
ram_allocation_ratio = 1.2
disk_allocation_ratio = 1.0

Growth Planning

With performance optimized, focus on scaling your cloud infrastructure. Here are three core strategies to consider:

1. Horizontal Scaling
Expand your compute capacity by adding more servers. Here’s a Heat template snippet for auto-scaling:

heat_template_version: 2018-08-31
resources:
  asg:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 10
      resource:
        type: OS::Nova::Server
        properties:
          # Adjust flavor, image, and network to your environment
          flavor: production-small
          image: ubuntu-20.04
          networks:
            - network: private-net
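
Saved as a template file, the group can then be launched with Heat; the file and stack names here are examples:

openstack stack create -t autoscaling.yaml production-asg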

2. Storage Expansion
Scale your storage with Ceph, starting with a three-node cluster. Keep an eye on capacity and usage with these commands:

ceph df
ceph osd tree

3. Network Growth
Use Neutron with the ML2 plugin to implement software-defined networking (SDN), enabling flexible setups across multiple data centers. Track your progress with tools like Prometheus and Grafana.

Wrapping Up and Next Steps

Summary

To deploy a scalable OpenStack cloud, focus on these main steps:

  • Prepare your infrastructure and choose the right tools.
  • Deploy core services and use containers for better efficiency.
  • Configure networks and manage identities effectively.
  • Allocate resources wisely and conduct thorough testing.
  • Fine-tune performance and plan for scaling.

Future Development

Once the core setup is stable, consider adding these advanced features to your deployment:

  1. Service and Security Improvements
    • Use Barbican for managing cryptographic services and improving access controls.
    • Integrate Magnum to handle container orchestration, expanding the capabilities of your node roles.
    • Add Ironic for provisioning and managing bare metal servers.
    • Deploy Octavia to provide effective load balancing solutions.
    • Enable Designate to manage DNS services within your cloud.

These additions build on your existing setup, offering more functionality and reliability. By integrating these features as appropriate, you can create a private cloud environment that adapts to your organization’s demands and technical requirements.

If this seems like a lot and you’d like to explore a faster, easier option to get started, OpenMetal provides a hosted OpenStack private cloud that lets you deploy in under a minute. This is a great way to play around with OpenStack and decide if it’s right for you before building your own infrastructure (or you may end up deciding you’d like to avoid the hassle completely and stick with our hosted option!). We offer trials to test things out first – just check out the options below!

Get Started Today on an OpenStack Private Cloud

Try It Out

We offer complimentary access for testing our production-ready private cloud infrastructure prior to making a purchase. Choose from short-term self-service trials or proof of concept cloud trials of up to 30 days.

Start Free Trial

Buy Now

Heard enough and ready to get started with your new OpenStack cloud solution? Create your account and enjoy simple, secure, self-serve ordering through our web-based management portal.

Buy Private Cloud

Get a Quote

Have a complicated configuration or need a detailed cost breakdown to discuss with your team? Let us know your requirements and we’ll be happy to provide a custom quote plus discounts you may qualify for.

Request a Quote

