We are very excited you are looking at using OpenStack as your cloud system.  We are big believers in Open Source, and OpenStack is what runs our hosted private cloud products.  It also automates our data centers and handles our bare metal dedicated servers.

We offer a complete OpenStack and Ceph install, outlined below, that may fit your design requirements.  The chart below can help you decide whether to go with OpenStack on bare metal or an “on-demand OpenStack”.  Either way, we are thrilled you are considering OpenStack, and we hope the server info below helps you decide what type of server will fit your OpenStack system requirements.

OpenMetal’s OpenStack vs Roll Your Own

Software Maintenance
  • OpenMetal’s OpenStack and Ceph: OpenMetal issues updated versions 2 times per year that have been validated and tested on both test systems and our own OpenStack production clusters.  We use Kolla-Ansible and Ceph-Adm.
  • Roll Your Own: You will maintain your own versions and handle the non-production testing and upgrade preparation.  However, you can select management tools other than Kolla-Ansible and Ceph-Adm that you may be more comfortable using.

OpenStack Capabilities
  • OpenMetal’s OpenStack and Ceph: Battle tested and iterated on over years, our version contains fixes, tweaks, and tuning that only come from having many types of workloads and many different Cloud Administrators involved.
  • Roll Your Own: If you have a lot of experience running OpenStack and a preferred storage architecture for OpenStack, staying with it may be the best choice.  You can still discuss design choices with our team, but our team cannot assist beyond advice and ideas.

Testing and Training Included
  • OpenMetal’s OpenStack and Ceph: We allow customers to use our XS and Small servers, free of charge in most situations, for training and testing purposes.  These 3-server clusters can be spun up on demand.
  • Roll Your Own: You can still use our OpenStack test clusters to validate parts of your design or certain reference architectures, but they will run our version, not yours.  If you wanted to see how we implemented Barbican, for example, you can check it out, but ultimately you will need to replicate what we did in your own version.

Opinionated
  • OpenMetal’s OpenStack and Ceph: Yes, ours is more opinionated, as we have locked down a few things and made certain choices.  But this can be a good thing.  We use Ceph throughout, then layer in LVM/direct options.
  • Roll Your Own: Implement your own choices.  You may want a 2-server control plane where we use a 3-server control plane.  Your OpenStack reference architecture may differ; go for it.  You may be an LVM/direct drive first group, which we agree is great for certain use cases.

The above is not meant to convince you our version is the best.  This decision is often driven by your history with OpenStack, your preferred OpenStack reference architecture, and how efficiently you need your hardware turned into available resources.  If you are a Hosting or Public Cloud Provider, for example, you must keep costs as low as possible and may not even want all of the features.  Please contact us though, as we have a special program for you.

Install OpenStack on Bare Metal

If you have decided to deploy OpenStack on bare metal, here are a few thoughts and recommendations.  If you are looking for a technical guide on how to install OpenStack, rather than guidance on what kind of hardware is best for OpenStack, look below the recommended hardware section.

First, IPMI is available to you, your servers will all be located in a set of VLANs exclusive to you, and our system is “Rack Aware”, which means you can make sure your bare metal OpenStack cluster members are in different failure domains.
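If you want to make use of that rack awareness at the storage layer, a Ceph CRUSH rule can place each replica in a different rack.  The commands below are a minimal sketch: the rack names, host name, and pool name (rack1, rack2, rack3, node1, vms) are placeholders, and you would adapt them to your own layout.

    # Describe the physical layout: create rack buckets, attach them, move hosts in
    ceph osd crush add-bucket rack1 rack
    ceph osd crush add-bucket rack2 rack
    ceph osd crush add-bucket rack3 rack
    ceph osd crush move rack1 root=default       # repeat for rack2 and rack3
    ceph osd crush move node1 rack=rack1         # repeat for each host/rack pair

    # Create a replicated rule that uses "rack" as the failure domain
    ceph osd crush rule create-replicated replicated-by-rack default rack

    # Point an existing pool at the new rule
    ceph osd pool set vms crush_rule replicated-by-rack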

Minimum Servers Needed for OpenStack

When we see people asking about the minimum number of servers needed for OpenStack, it is typically in reference to running valuable workloads, with specific system requirements to meet, on an OpenStack Cloud.  This means “production” of some sort, even when the workloads are for development teams and may not have extreme uptime or performance requirements.

The minimum number of servers is not usually a concern for people who are just training on or learning OpenStack.  In that case, you can run OpenStack in a VM or even on a single dedicated server.  If you are looking for this, please check out this educational program.
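If a single server for learning is your path, one common approach (not our managed offering) is Kolla-Ansible’s all-in-one inventory, which deploys every service onto one host.  The outline below is only a rough sketch; exact steps, versions, and the globals.yml settings depend on the OpenStack release you pick, so follow the Kolla-Ansible documentation for that release.

    # On a single test server (sketch; assumes a supported Linux distro, Python, and a compatible Ansible)
    pip install kolla-ansible
    # copy the sample all-in-one inventory and globals.yml that ship with kolla-ansible,
    # then set network_interface, neutron_external_interface, and kolla_internal_vip_address
    kolla-genpwd                                   # generate service passwords
    kolla-ansible -i all-in-one bootstrap-servers
    kolla-ansible -i all-in-one prechecks
    kolla-ansible -i all-in-one deploy
    kolla-ansible -i all-in-one post-deploy        # writes admin credentials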

High Availability – OpenStack on 2 Servers vs OpenStack on 3 Servers

There are positives and negatives to using 2 servers versus 3 servers for OpenStack.  If you are hyper-converging your OpenStack (in this case we mean you will run the cloud control plane, storage system, and customer workloads on the same hardware nodes), then these architecture choices come with additional pros and cons.

Of note, as clusters grow, your OpenStack System Requirements, and thus your recommended hardware, will change for the control plane nodes, the storage nodes, and the working nodes (VM nodes).  In particular, the system requirements of the control plane will grow with the size of your cluster.  Load balancers, routers, and other networking functions handled by the control plane nodes typically scale with the number of VMs they support.

3 Server OpenStack

This is the way we provision our Ceph and OpenStack clusters.  Starting with 3 servers allows the following (a minimal inventory sketch follows the list):

  • You have the control plane highly available, with at least 2 active nodes for all services and network routes
  • Triplicate data redundancy for Ceph (one copy per server across 3 servers), which is the commonly recommended OpenStack storage architecture for production use cases
  • Customer workloads (VMs) run on the same nodes as the control plane.  This is very cost effective, and with OpenMetal’s deployment maturity we have configuration controls that allow this in a healthy way
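As a rough illustration of what hyperconverged looks like in practice, a Kolla-Ansible multinode inventory for a 3-server cluster can list the same three hosts in every role group.  The snippet below is a minimal sketch with placeholder hostnames, not our exact production inventory; the stock multinode file contains more groups and variables than shown.

    # multinode inventory (abridged; node1-3 are placeholders)
    [control]
    node1
    node2
    node3

    [network:children]
    control

    [compute:children]
    control

    [storage:children]
    control

    [monitoring:children]
    control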

2 Server OpenStack

  • You have the control plane highly available, with at least 2 active nodes for all services and network routes
  • Duplicate data redundancy for Ceph (one copy per server across 2 servers).  This is only recommended as an OpenStack storage architecture for development workloads or very well understood production use cases; for mixed-use or high-value production, triplicate is the recommended architecture (a per-pool sizing sketch follows this list)
  • OpenStack reference architectures exist for this, and we know of live customer usage of a 2-server converged OpenStack control plane and storage (without customer VMs running).  We do not yet deploy OpenStack on bare metal with 2 servers in an automated way, but you can install OpenStack on bare metal yourself this way.  Please talk to your account manager for advice and the special docs we have for this
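For reference, Ceph’s replica count is a per-pool setting, so moving between duplicate and triplicate redundancy is a matter of pool sizing.  The commands below are a sketch against a hypothetical pool named vms; be deliberate about min_size, since accepting writes with a single surviving copy risks data loss.

    # Keep two copies of each object in the pool
    ceph osd pool set vms size 2
    # min_size controls how many copies must be up for I/O to continue:
    # 2 blocks writes while one server is down, 1 keeps running but risks data loss
    ceph osd pool set vms min_size 2
    # Verify the pool's settings
    ceph osd pool get vms size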

Best Servers/Recommended Hardware for OpenStack

Our Cloud Cores are 3 identical servers of the selected bare metal type.  The explanations below can be extended to determine the best servers for OpenStack in your bare metal situation, but some details refer to how we do it here.

X-Small OpenStack Server

OpenStack System Requirements will absorb 50-60% of the capabilities of these servers.  These X-Small servers are perfect for cloud engineers testing and training on running Clouds.  In addition, they can be great for very small production systems.  Of note, small production systems typically exist because the workload needs to be isolated from other clouds.  This is not very efficient, but the driver is usually not cost but something else, like security.

In the OpenMetal catalog, look for Small V1 (soon to be renamed XS).  Hardware specifications that are considered “X-Small” for OpenStack, per server:

  • 4 Cores, 8 Threads
  • 64GB RAM
  • 1 TB SSD
  • 2gbps Uplinks

In this case you could use these servers just for the Control Plane if you are doing your own bare metal install, but for an OpenMetal hosted private cloud, these are only for training, PoCs, etc.

Small OpenStack Server

OpenStack System Requirements will absorb around 25-30% of the capabilities of these servers.  The Small (currently called Standard V1 and V2) servers can support a full OpenStack on bare metal install.  They can run hyperconverged and handle small VM workloads alongside the control plane and Ceph OSDs.  In our on-demand OpenStack Cloud this configuration is HA.  A rough capacity calculation follows the hardware list below.

In the OpenMetal catalog, look for Standard V1/V2 (soon to be renamed Small).  Hardware specifications that are considered “Small” for OpenStack, per server:

  • 8 Cores, 16 Threads
  • 128GB RAM
  • 3.2 TB NVMe SSD
  • 20gbps Uplinks
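As a rough capacity check (an estimate based on the 25-30% overhead figure above, not a quote; actual overhead depends on the services enabled and your replication settings), a 3-node Small cluster works out to approximately:

    3 nodes x 128GB RAM  = 384GB raw RAM
    25-30% absorbed by control plane and Ceph ≈ 96-115GB
    Remaining for VM workloads ≈ 269-288GB RAM across the cluster

    3 nodes x 16 threads = 48 threads raw
    25-30% absorbed ≈ 12-15 threads, leaving roughly 33-36 threads for VMs
    (before applying any CPU overcommit ratio of your choosing)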

These servers can also serve as dedicated Control Plane nodes for larger clusters, as they have 20gbps NICs and plenty of RAM and CPU for handling network functions.  Note that Control Plane servers for OpenStack will number 2 or 3, as mentioned above, and Control Plane functionality, like Load Balancers, will be spread across those 2 or 3 servers.
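As an illustration of how that spreading works in a Kolla-Ansible based deployment, the OpenStack APIs sit behind HAProxy and Keepalived on the control nodes, which share a virtual IP.  The globals.yml lines below are a minimal sketch; the address and interface names are placeholders, not our production configuration.

    # /etc/kolla/globals.yml (sketch; values are placeholders)
    kolla_internal_vip_address: "10.0.0.250"   # VIP shared by the control nodes
    enable_haproxy: "yes"                      # HAProxy + Keepalived front the APIs
    network_interface: "eth0"                  # management/API traffic
    neutron_external_interface: "eth1"         # provider/external network traffic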

Medium OpenStack Server

OpenStack System Requirements will absorb around 10-15% of the capabilities of these servers.  The Medium servers are our minimum recommended OpenStack server because they have an efficient ratio of resources used by the Cloud versus resources provided to end workloads.  They can run hyperconverged and handle a lot of VM workloads alongside the control plane and Ceph OSDs.

These are a recent addition based on popular demand and will be available both in the US data centers and the EU data center.

In the OpenMetal catalog, look for Medium V4 (available from June 2024 on).  Hardware specifications that are considered “Medium” for OpenStack, per server:

  • 12 Cores, 24 Threads
  • 256GB RAM
  • 6.4 TB NVMe SSD (up to 6 per server)
  • 20gbps Uplinks

As with the Small servers, these can also serve as dedicated Control Plane nodes for larger clusters, with Control Plane functionality spread across 2 or 3 nodes as described above.

Large and XL OpenStack Servers

OpenStack System Requirements will absorb around 10-15% of the capabilities of these servers.  Note that the usage by the Control Plane will not decrease from the Medium to the Large and XL; the workloads that can be supported simply grow as the boxes get larger.

The Large servers are very popular and, along with the XLs, are our most recommended OpenStack servers.  The larger the boxes, the more efficient they can be.  The Large and XL OpenStack Servers can run hyperconverged and handle a lot of VM workloads alongside the control plane and Ceph OSDs.  Both are also commonly used as “workload” nodes; in that case, they either just handle VMs or handle VMs plus Ceph network block storage.

Large V2/V3/V4  

  • 16 Cores, 32 Threads
  • 512GB RAM
  • 2 x 6.4 TB NVMe SSD (up to 6 per server)
  • 20gbps Uplinks

XL V2/V3/V4  

  • 32 Cores, 64 Threads
  • 1TB RAM
  • 4 x 6.4 TB NVMe SSD (up to 10 per server)
  • 20gbps Uplinks

 

OpenStack Community

Again, we are very excited you are looking at using OpenStack.  We are big believers in Open Source and OpenStack.  Even if you choose a competitor or another OpenStack dedicated server provider, we are ready to help you join the Community.  Please check out our OpenStack Cloud Administrator’s Guide and our Education Program as well.

Get Started on an OpenStack Private Cloud

Try It Out

We offer complimentary access for testing our production-ready private cloud infrastructure prior to making a purchase.  Choose from short-term self-service trials or proof of concept cloud trials of up to 30 days.

Start Free Trial

Buy Now

Heard enough and ready to get started with your new OpenStack cloud solution? Create your account and enjoy simple, secure, self-serve ordering through our web-based management portal.

Buy Private Cloud

Get a Quote

Have a complicated configuration or need a detailed cost breakdown to discuss with your team? Let us know your requirements and we’ll be happy to provide a custom quote plus discounts you may qualify for.

Request a Quote

Test Drive

For eligible organizations, individuals, and Open Source Partners, Private Cloud Cores are free to trial. Apply today to qualify.

Apply Now
