The ideal cloud scenario is to operate with the speed of the public cloud while retaining the control and cost-effectiveness of private infrastructure. This has led to the rise of two dominant open source projects: OpenStack for Infrastructure-as-a-Service (IaaS) and Kubernetes for container orchestration.

Viewing these platforms as competitors is a misconception. They’re powerful, complementary technologies. OpenStack manages the foundational infrastructure – compute, storage, and networking – while Kubernetes orchestrates the containerized applications running on top of that infrastructure. As Kendall Nelson of the OpenInfra Foundation has shared in talks, they are distinct yet complementary technologies that are increasingly used together.

This symbiotic relationship is a globally adopted blueprint. According to Thierry Carrez, General Manager of the OpenInfra Foundation, “more than two-thirds of OpenStack deployments leverage the integration of OpenStack and Kubernetes, with tens of millions of cores globally implementing that open infrastructure blueprint”. This model delivers tangible benefits, including impressive cost savings and deployment flexibility.

To understand this integration, it’s helpful to map Kubernetes concepts to their OpenStack service equivalents.

Kubernetes to OpenStack Service Mapping

| Kubernetes Resource | OpenStack Equivalent | Description |
|---|---|---|
| Pod | Runs on a Nova VM (worker node) | The Kubernetes Pod executes on a virtual machine managed by OpenStack’s compute service, Nova. |
| Service (type: LoadBalancer) | Octavia load balancer | A Kubernetes Service requiring external exposure can trigger the creation of a load balancer via OpenStack’s Octavia service. |
| PersistentVolumeClaim (PVC) | Cinder volume | A request for persistent storage is fulfilled by creating a block storage volume through OpenStack’s Cinder service. |
| Secret | Barbican secret | Sensitive data can be securely stored by OpenStack’s key management service, Barbican, instead of relying on less secure default Kubernetes Secrets. |
| NetworkPolicy | Neutron security group | Pod-level network rules are complemented by infrastructure-level firewalling from OpenStack’s networking service, Neutron. |
| Cluster creation | Magnum cluster template | The entire lifecycle of a Kubernetes cluster can be managed as a resource using templates within OpenStack’s container orchestration engine, Magnum. |

This guide provides an analysis of five best practices for architecting, deploying, and managing Kubernetes on OpenStack.

1) Architecting a Resilient and Scalable Foundation

A successful integration begins with understanding the strengths and needs of your two pillars: OpenStack provides a stable, API-driven IaaS layer, while Kubernetes manages the application lifecycle on top. Infrastructure teams can focus on stability and efficiency, while developers can move with speed and agility.

Planning for Success: Hardware and Resource Considerations

A well-performing integration requires some initial hardware planning.

  • Compute (Nova): Size physical servers with enough CPU and RAM to handle the OpenStack control plane, the Kubernetes control plane, and the containerized application pods (worker-node sizing is sketched after this list).
  • Storage (Ceph/Cinder): Prioritize NVMe-based SSDs for low latency and high throughput. For distributed and resilient storage, Ceph is the industry standard, providing a self-healing fabric ideal for persistent container volumes.
  • Networking (Neutron): A baseline of 10Gbps networking is needed to handle the high volume of traffic between nodes, storage systems, and microservices.
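Worker-node sizing usually ends up encoded as a Nova flavor. The sketch below is illustrative only; the flavor name and sizes are assumptions to adapt to your own workloads and hardware.

```bash
# Illustrative only: a Nova flavor for Kubernetes worker nodes.
# Name and sizes are assumptions; adjust to your workloads and hardware.
openstack flavor create k8s-worker-large \
  --vcpus 8 \
  --ram 32768 \
  --disk 100
```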

Achieving High Availability with OpenStack

Application-level resilience in Kubernetes is insufficient if the underlying infrastructure is a single point of failure.

  • Availability Zones (AZs): Distribute Kubernetes worker nodes across multiple Availability Zones (AZs) to ensure that a rack-level failure doesn’t cause a complete cluster outage (see the spread-constraint sketch after this list).
  • Self-Healing Storage with Ceph: Using Ceph as the backend for Cinder (block storage) is a cornerstone of a self-healing cloud. Ceph automatically distributes and replicates data, recovering from disk or node failures without manual intervention.
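Once worker nodes span multiple AZs, the Kubernetes scheduler can also be asked to spread replicas across them. Below is a minimal sketch, assuming the OpenStack cloud provider labels nodes with topology.kubernetes.io/zone from the Nova availability zone; the Deployment name and image are placeholders.

```bash
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 6
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      # Keep replicas evenly balanced across availability zones.
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: web
      containers:
        - name: web
          image: nginx:1.27
EOF
```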

Streamlining Cluster Lifecycle with OpenStack Magnum

Manually managing Kubernetes clusters is complex and error-prone. OpenStack Magnum automates this entire lifecycle, making Kubernetes a first-class resource within OpenStack. Magnum uses other core services like Heat for orchestration, Keystone for security, Neutron for networking, and Cinder for storage to provision a fully configured Kubernetes cluster via the OpenStack API. This automation is the foundation for repeatable, scalable deployments.
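A minimal Magnum workflow might look like the sketch below. The template name, image, flavors, and external network are assumptions specific to your cloud, so treat this as a starting point rather than a recipe.

```bash
# Define a reusable cluster template (names and values are illustrative).
openstack coe cluster template create k8s-prod-template \
  --coe kubernetes \
  --image fedora-coreos-latest \
  --keypair ops-key \
  --external-network public \
  --master-flavor m1.medium \
  --flavor m1.large \
  --network-driver calico \
  --docker-volume-size 50

# Create an HA cluster from the template.
openstack coe cluster create prod-cluster \
  --cluster-template k8s-prod-template \
  --master-count 3 \
  --node-count 5

# Once the cluster reaches CREATE_COMPLETE, fetch a kubeconfig for it.
openstack coe cluster config prod-cluster --dir ~/.kube/prod-cluster
```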

2) Implementing Seamless and Performant Networking

Effective networking is a complex challenge, as OpenStack and Kubernetes approach it from different perspectives. OpenStack Neutron provides a centralized, infrastructure-focused model, while Kubernetes uses a distributed, application-focused model via the Container Network Interface (CNI). A naive integration can lead to inefficient “double-tunneling,” where a Kubernetes overlay network runs on top of an OpenStack overlay network.

OpenStack Kuryr: The Native Neutron Bridge for Kubernetes

To solve this, the community developed Kuryr, a CNI plugin for Kubernetes that acts as a bridge to the OpenStack networking layer. Kuryr makes Kubernetes pods first-class citizens on the OpenStack Neutron network. When a pod is scheduled, the Kuryr controller makes an API call to Neutron to create a network port, which is then plumbed directly into the pod’s network namespace. This eliminates double-tunneling, improves performance, and enables direct, routed communication between pods and VMs.
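A quick way to see this in practice, assuming Kuryr is the CNI and using a placeholder pod name, is to look up a pod’s IP and confirm it corresponds to a real Neutron port:

```bash
# Note the pod IP reported in the IP column.
kubectl get pod myapp-7c9f -o wide

# With Kuryr, that same IP should appear as a Neutron port rather than an
# address hidden inside a node-local overlay network.
openstack port list --fixed-ip ip-address=<pod-ip-from-above>
```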

Advanced Neutron Services for Kubernetes

The Kuryr integration allows Kubernetes to consume advanced infrastructure services from Neutron:

  • Load Balancing with Octavia: When a developer creates a Kubernetes Service of type: LoadBalancer, Kuryr translates this into an API call to OpenStack Octavia, which automatically provisions a robust, highly available load balancer (a sample manifest follows this list).
  • Tenant Isolation and Security: Because each pod gets its own Neutron port, it can be placed in a tenant-specific network and associated with Neutron Security Groups. These act as stateful, distributed firewalls at the infrastructure level, providing a foundational layer of security that complements Kubernetes NetworkPolicies.
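For example, a plain Service manifest like the hypothetical one below is all a developer needs to write; the integration handles the Octavia provisioning behind the scenes.

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer      # fulfilled by an Octavia load balancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
EOF

# EXTERNAL-IP is populated once the load balancer is ready...
kubectl get service web-lb
# ...and the same load balancer is visible from the OpenStack side.
openstack loadbalancer list
```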

3) Ensuring Persistent, High-Performance Storage for Stateful Workloads

Managing persistent data for stateful applications was once a major challenge for Kubernetes. OpenStack’s block storage service, Cinder, provides the foundational solution. Cinder offers on-demand, self-service access to persistent block storage, abstracting the underlying hardware and presenting a simple API to request and manage volumes.

The Cinder CSI Plugin: Dynamically Connecting Kubernetes to OpenStack Storage

To bridge Kubernetes and Cinder, the community developed the Cinder Container Storage Interface (CSI) driver. This driver enables dynamic provisioning, a fully automated workflow. When a developer creates a PersistentVolumeClaim (PVC) in Kubernetes, the Cinder CSI driver detects it and orchestrates the API calls to Cinder and Nova to create a volume and attach it to the correct worker node, making it available to the application pod.
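A minimal sketch of that workflow, assuming the Cinder CSI driver is installed: the StorageClass name and the type parameter (a Cinder volume type) are placeholders for whatever your cloud exposes.

```bash
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-nvme
provisioner: cinder.csi.openstack.org
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: nvme              # Cinder volume type; assumption for this example
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cinder-nvme
  resources:
    requests:
      storage: 50Gi
EOF
```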

Advanced Cinder CSI Capabilities

The Cinder CSI driver exposes a rich set of enterprise-grade storage features to Kubernetes:

  • Topology-Aware Provisioning: The driver understands the OpenStack Availability Zone topology and ensures that a pod’s storage is created in the same AZ, minimizing I/O latency.
  • Volume Expansion: An operator can increase a volume’s size by simply editing the PVC manifest, often without application downtime.
  • Volume Snapshots and Cloning: Users can create point-in-time snapshots of their volumes directly from Kubernetes, which are invaluable for backup, recovery, and creating dev/test environments (see the sketch after this list).
  • Raw Block Volumes: For high-performance applications like databases, the driver can present a Cinder volume as a raw block device instead of a mounted filesystem, giving the application direct, low-level control.
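As a rough illustration of the snapshot and expansion features, assuming the external-snapshotter CRDs are installed and using a placeholder VolumeSnapshotClass name:

```bash
# Point-in-time snapshot of the claim created earlier.
kubectl apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: postgres-data-snap
spec:
  volumeSnapshotClassName: csi-cinder-snapclass   # placeholder name
  source:
    persistentVolumeClaimName: postgres-data
EOF

# Grow the volume by editing the claim; Cinder resizes the backing volume.
kubectl patch pvc postgres-data \
  -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'
```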

This integration brings storage management directly into the Kubernetes Infrastructure-as-Code (IaC) model, giving teams true self-service for the full application stack.

4) Building In Multi-Layered Security

Security cannot be an afterthought. The average cost of a data breach now exceeds $4.9 million, and data compromises hit a record high in 2023 with no signs of slowing. A significant share of these breaches stems from preventable misconfigurations, which makes multi-layered security a necessity.

Security Responsibility Matrix (OpenStack vs. Kubernetes)

| Security Domain | Primary Responsibility | Key Contribution |
|---|---|---|
| Physical Security | Data Center/IaaS Provider | Secures the physical hardware and facilities. |
| Hypervisor & Host OS Security | OpenStack Administrator | Hardens the operating system and hypervisor. |
| Cloud-Level IAM & Tenancy | OpenStack Keystone | Provides centralized authentication and resource isolation. |
| Infrastructure Network Firewalling | OpenStack Neutron | Controls traffic at the VM’s virtual network interface. |
| Secrets Management | OpenStack Barbican | Offers a hardened, centralized vault for cryptographic keys. |
| Container Image Scanning | CI/CD Pipeline | Scans images for known vulnerabilities before deployment. |
| Cluster-Level Authorization (RBAC) | Kubernetes API Server | Manages user and service account permissions within the cluster. |
| In-Cluster Network Firewalling | Kubernetes CNI (NetworkPolicies) | Provides fine-grained, pod-to-pod traffic control. |
| Runtime Security | Kubernetes Add-on (e.g., Falco) | Monitors container behavior for malicious activity. |

Key Security Controls

  • Identity and Access with Keystone: OpenStack’s identity service, Keystone, provides centralized, multi-tenant authentication and Role-Based Access Control (RBAC). Enforcing the principle of least privilege here is critical.
  • Secrets Management with Barbican: Default Kubernetes Secret objects are not encrypted at rest. The integration can work with OpenStack Barbican, a dedicated service for the secure storage and management of secrets and keys, often backed by a Hardware Security Module (HSM).
  • Layered Firewalling: Combine Neutron Security Groups for infrastructure-level perimeter defense with Kubernetes NetworkPolicies for granular, application-aware micro-segmentation inside the cluster (an example of both layers follows this list).
  • Supply Chain Security: Use a private registry (e.g., on OpenStack Swift) for trusted container images and integrate vulnerability scanning tools into the CI/CD pipeline to block vulnerable images from deployment.
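To make the layering concrete, here is a hedged sketch of both levels working together; the security group name, CIDR, labels, and port are illustrative.

```bash
# Infrastructure perimeter: only allow HTTPS into the worker security group.
openstack security group rule create k8s-workers \
  --protocol tcp --dst-port 443 --remote-ip 10.0.10.0/24

# In-cluster micro-segmentation: only frontend pods may reach the API pods.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend-only
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
EOF
```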

5) Autoscaling for Performance and Efficiency

Autoscaling allows a platform to automatically adapt to changing demands, ensuring performance during peaks and minimizing costs during lulls. Kubernetes provides a sophisticated, multi-dimensional approach to this challenge.

The Three Dimensions of Kubernetes Autoscaling

  • Horizontal Pod Autoscaler (HPA): Operates at the application level, adjusting the number of running pods (replicas) based on metrics like CPU or memory utilization (a sample HPA manifest follows this list).
  • Vertical Pod Autoscaler (VPA): Also at the application level, this adjusts the CPU and memory resource requests and limits of individual pods to match their actual usage.
  • Cluster Autoscaler (CA): Operates at the infrastructure level, adding or removing worker nodes (OpenStack VMs) to ensure there is enough capacity to run all required pods.
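A minimal HPA manifest, assuming metrics-server is running and using illustrative names and thresholds, looks like this:

```bash
kubectl apply -f - <<'EOF'
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds ~70%
EOF
```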

Kubernetes Autoscaler Comparison

| Autoscaler | What it Does | Scope | Trigger |
|---|---|---|---|
| Horizontal Pod Autoscaler (HPA) | Adds/removes pod replicas | Application | CPU/memory utilization, custom metrics |
| Vertical Pod Autoscaler (VPA) | Adjusts pod CPU/memory requests & limits | Application | Historical resource usage |
| Cluster Autoscaler (CA) | Adds/removes worker nodes (VMs) | Infrastructure | Pending (unschedulable) pods |

The Autoscaling Chain Reaction

True cloud elasticity comes from the interaction of these three autoscalers.

  1. A traffic spike increases the CPU utilization of an application’s pods.
  2. The HPA detects this and scales up the application by creating new pods.
  3. If there isn’t enough capacity on the existing nodes, these new pods become Pending.
  4. The Cluster Autoscaler detects the Pending pods and provisions new worker nodes by making API calls to OpenStack Nova.
  5. The new nodes join the cluster, and the scheduler places the Pending pods on them.
  6. Later, as traffic subsides, the HPA scales down the pods, and the Cluster Autoscaler detects the now-underutilized nodes and terminates them to save costs.

Understanding this multi-layered, reactive chain is key to designing a fully automated, elastic cloud environment.

Wrapping Up: Integrated Cloud Infrastructure with OpenStack and Kubernetes

The integration of Kubernetes and OpenStack is a mature strategy for building powerful, flexible, and cost-effective private and hybrid clouds. Success depends on understanding their complementary roles and taking advantage of integration points to create a cohesive system.

For those looking to accelerate this journey, OpenMetal offers pre-integrated, production-ready OpenStack environments designed to run demanding Kubernetes workloads, allowing teams to focus on application delivery rather than infrastructure complexity.
