What Is OpenStack?

OpenStack is a collection of open source tools, called projects, that provide users with everything they need to build and manage cloud computing infrastructure. With OpenStack, organizations can create and manage virtualized resources, including virtual machines, storage, and networking, in a flexible and scalable manner. OpenStack can be used to build both public and private clouds; in fact, public OpenStack cloud providers sometimes build their offerings on clouds from private OpenStack cloud providers. The ability to securely segment workloads in an OpenStack cloud keeps workloads isolated and even lets different teams share a cloud without impacting each other's workloads.

Understanding Containerized Workloads

Containerization lets you encapsulate an application together with its dependencies, creating a self-contained unit that ensures consistency. Containers are an efficient, portable way to package, distribute, and run software applications, and they can be migrated seamlessly between development, test, and production environments (even across different cloud providers). Teams that isolate workloads in containers rather than virtual machines typically do so because containers are lightweight, which allows for rapid start-up times and efficient use of resources.

What is Kubernetes?

Kubernetes is an orchestration tool, originally developed at Google and now maintained by the Cloud Native Computing Foundation (CNCF), that automates the deployment, scaling, and management of containerized applications. Kubernetes can be deployed on clouds (public, private, and hybrid), bare metal servers, edge environments, and container platforms.

Kubernetes integration is one of the key drivers behind the mass adoption of OpenStack we have been witnessing: Kubernetes runs on 85% of OpenStack deployments. This blog will explore Kubernetes and containerization in OpenStack clouds.

Understanding Kubernetes in the World of OpenStack

Kubernetes is an excellent tool for deploying containers: it ensures they start up quickly and run reliably, and its self-healing design means that if a container fails, another takes its place to avoid disruption to your application. Auto-scaling in Kubernetes increases the number of containers available to your application during peak periods and scales back down when demand decreases, keeping your application running smoothly and using resources efficiently. Kubernetes also manages networking among the containers in a cluster, hiding much of that complexity from developers.
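
To make the auto-scaling behavior above concrete, here is a minimal sketch using the official Kubernetes Python client to attach a Horizontal Pod Autoscaler to an existing Deployment. The Deployment name "web", the namespace, and the CPU target are illustrative assumptions, not details from this article.

    # Sketch: attach a Horizontal Pod Autoscaler so Kubernetes adds pods under
    # load and scales back down when demand drops.
    # Assumes a Deployment named "web" already exists and kubeconfig is set up.
    from kubernetes import client, config

    config.load_kube_config()  # use load_incluster_config() inside a cluster

    hpa = client.V1HorizontalPodAutoscaler(
        metadata=client.V1ObjectMeta(name="web-hpa"),
        spec=client.V1HorizontalPodAutoscalerSpec(
            scale_target_ref=client.V1CrossVersionObjectReference(
                api_version="apps/v1", kind="Deployment", name="web"
            ),
            min_replicas=2,
            max_replicas=10,
            target_cpu_utilization_percentage=70,  # scale out above 70% CPU
        ),
    )

    client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
        namespace="default", body=hpa
    )

With something like this in place, Kubernetes keeps the replica count between 2 and 10 based on observed CPU usage, which is the scaling behavior described above.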

When considering OpenStack and Kubernetes, it is important to note that they do not compete with each other; rather, they are complementary projects. OpenStack is infrastructure software: its priority is managing infrastructure resources such as virtual machines, networking services, and storage. Kubernetes focuses on container orchestration, managing the deployment, scaling, and operation of containers (for which it needs the resources that OpenStack manages). Together, Kubernetes and OpenStack provide a comprehensive solution for cloud applications: OpenStack manages the underlying infrastructure, and Kubernetes manages the applications running on it.
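
To make this division of labor concrete, here is a hedged sketch that uses the openstacksdk library to provision a virtual machine, the kind of infrastructure resource OpenStack is responsible for, which could then join a Kubernetes cluster as a worker node. The cloud, image, flavor, network, and keypair names are placeholder assumptions.

    # Sketch: OpenStack supplies the infrastructure (a VM that can serve as a
    # Kubernetes worker node); Kubernetes manages what runs on top of it.
    # Cloud/image/flavor/network/keypair names are illustrative placeholders.
    import openstack

    conn = openstack.connect(cloud="my-openstack")  # credentials from clouds.yaml

    server = conn.create_server(
        "k8s-worker-1",
        image="ubuntu-22.04",
        flavor="m1.large",
        network="private-net",
        key_name="mykey",
        wait=True,
    )
    print(f"Provisioned {server.name} ({server.id})")

    # From here, a tool such as kubeadm, Kubespray, or Magnum would join this
    # VM to the cluster, and Kubernetes takes over scheduling containers on it.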

Benefits of Running Kubernetes on OpenStack

Let’s take a closer look at the reasons why organizations choose OpenStack as the environment for their Kubernetes workloads:

  • Cost Efficiency: Because OpenStack is open source, users see significant savings; OpenMetal customers who move Kubernetes workloads over from public clouds typically reduce their cloud bill for that workload by approximately 50%.
  • Scalability: OpenStack supplies the infrastructure resources needed to support Kubernetes' scalable container orchestration platform.
  • Flexibility: Unlike proprietary cloud models that may be strictly public or private, OpenStack lets organizations choose their deployment model (public, private, or hybrid).
  • Security: Because the OpenStack code is openly available, organizations can find vulnerabilities before they lead to breaches, or adjust their OpenStack clouds to strengthen security. OpenStack also provides role-based access control (RBAC) through the Keystone project, as well as network isolation (see the short sketch after this list).
  • Open Source: Both OpenStack and Kubernetes are open source, meaning that organizations benefit from the contributions of the open source community that works tirelessly to improve and advance both of these projects. As mentioned before, 85% of OpenStack deployments are also using Kubernetes, so organizations can trust that the community is consistently working to optimize the integration of these two open source projects. 
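
As a small illustration of the RBAC point in the Security bullet above, the sketch below uses openstacksdk to grant a user a role on a single project, which is how Keystone scopes what each team can touch. The user, project, and role names are illustrative assumptions.

    # Sketch: Keystone role-based access control via openstacksdk.
    # Grants an existing user the "member" role on one project only, so that
    # team's workloads stay isolated from other projects.
    # User/project/role names are illustrative placeholders.
    import openstack

    conn = openstack.connect(cloud="my-openstack")

    user = conn.identity.find_user("dev-team-alice")
    project = conn.identity.find_project("team-a-workloads")
    role = conn.identity.find_role("member")

    conn.identity.assign_project_role_to_user(project, user, role)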

What is Magnum?

Magnum is the OpenStack project used to deploy container orchestration engines such as Kubernetes, Docker Swarm, and Apache Mesos on OpenStack clouds. Magnum integrates with other OpenStack projects such as Nova, Neutron, Cinder, and Keystone to leverage your existing infrastructure for containers; additionally, it uses templates from another project, Heat, to manage the lifecycle of container clusters. Magnum relies on OpenStack security features so users can deploy secure, isolated container environments, and it gives organizations a unified way to work with containerized applications on OpenStack.
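
For a sense of what working with Magnum looks like in practice, here is a minimal, hedged sketch that asks Magnum (through openstacksdk's container infrastructure interface) to stand up a Kubernetes cluster from an existing cluster template. The template name, keypair, and node counts are placeholder assumptions, not values from this article.

    # Sketch: create a Kubernetes cluster through Magnum using openstacksdk.
    # Assumes a Kubernetes cluster template and an SSH keypair already exist;
    # all names below are illustrative placeholders.
    import openstack

    conn = openstack.connect(cloud="my-openstack")

    template = conn.container_infrastructure_management.find_cluster_template(
        "k8s-cluster-template"
    )

    cluster = conn.container_infrastructure_management.create_cluster(
        name="demo-k8s",
        cluster_template_id=template.id,
        keypair="mykey",
        master_count=1,
        node_count=3,
    )
    print(f"Magnum is building cluster {cluster.id}")

Behind the scenes, Magnum drives Heat to create the Nova instances, Neutron networks, and Cinder volumes the cluster needs, which is the integration described above.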

Why Deeper Integration Matters

With so many OpenStack users running Kubernetes workloads on their OpenStack deployments, it is clear that deep integration between these open source projects is essential. There are many tools organizations can use to deploy Kubernetes on OpenStack, such as Kubespray, Magnum, and Rancher, but to make it even easier, some OpenStack providers like OpenMetal build turnkey Kubernetes infrastructure into their clouds, allowing users to spin up clusters as easily as a VM.

Challenges and Considerations

While there are numerous benefits to running Kubernetes on OpenStack, an organization that takes on this task will face some challenges, chiefly complexity. Integrating these two open source projects and maintaining them over time requires expertise in both OpenStack and Kubernetes, which may be too much for organizations with limited engineering talent on hand. This obstacle can be overcome by using a hosted OpenStack provider, so you don't need to maintain the underlying infrastructure, and it is simplified further if your provider supplies a cloud with Kubernetes already integrated.

That said, you may miss out on opportunities to optimize your deployment if you don't eventually bring this skill set on board. To be clear, this is not a disadvantage compared to running Kubernetes on a public cloud, where you do not have the flexibility to fine-tune your cloud at all. One of the advantages of an open source private cloud is that you can make root-level changes to optimize it for your unique workloads, but doing so requires technical expertise on hand. To take advantage of these perks, you can invest in OpenStack training, use freely available OpenStack training resources, or subscribe to a managed service from a consulting firm or your OpenStack provider.

Kubernetes on OpenStack: Real World Example

As an OpenStack provider, OpenMetal has many customers running Kubernetes workloads on OpenStack clouds, but we'll look at one example: Pypestream. Pypestream provides a next-generation conversational AI solution. Their customers use the Pypestream chatbot to give their own customers a better chat experience, and with clients such as HBO Max, Gillette, and Farmers Insurance, you can imagine the demand is high. Pypestream moved workloads over from AWS, where they faced high costs, unpredictable billing, and competitive conflicts (customers who competed with Amazon did not want their money flowing back to Amazon Web Services, which in turn helps subsidize Amazon's retail business).
Pypestream moved their Kubernetes workloads from AWS public cloud instances to a private OpenStack cloud core with three standard servers running the control plane. They later added several V1 servers and Ceph storage clusters (in addition to the built-in Ceph storage that comes with OpenMetal cloud cores). They are currently using OpenStack API integrations and cloud capacity to run Kubernetes across 140 compute instances.

So what was Pypestream’s experience moving their workloads from public clouds to private on-demand OpenStack clouds?

  • 50% cost savings over AWS in the first 6 months, funds they were able to reinvest into their business to fund additional personnel, resources, and initiatives. These savings also allowed them to price competitively, giving them an edge over their competitors.
  • Ease of migration. This is a given, since Kubernetes is designed to allow easy migration across different infrastructure, but OpenMetal gave Pypestream the ability to use OpenStack "as a service" for the quickest transition path.
  • Pypestream now has greater agility within their testing and production environments, which increases productivity and gets their products to market faster.

Ready to get started with Kubernetes on OpenStack?

Explore our library of resources to get you started on your training. This collection includes:

  • Guides/documentation created by our in-house engineers
  • Video tutorials by your favorite tech masterminds (LearnLinuxTV)
  • Blogs by tech experts and OpenMetal tech enthusiasts

Kubernetes Workloads

On-Demand OpenStack clouds by OpenMetal support Kubernetes integration and give users the freedom to choose the deployment and management systems they are most comfortable with.

Our advanced solution seamlessly integrates the prowess of a private OpenStack cloud with the precision of Kubernetes orchestration. Craft your environment with tailored efficiency, scalability, and data security.

