Why Kubernetes on OpenStack is Valuable for Organizations

In this blog:

  • Managing Kubernetes Clusters on OpenStack
  • Scalability on OpenStack
  • Ways to Deploy Kubernetes on OpenStack
  • Security on OpenStack

Many engineers believe that OpenStack is a “thing of the past”, but it isn’t. In fact, it’s becoming increasingly popular not only across telco, but also with auto manufacturers and any other organization that needs to scale and manage workloads on its own terms. With OpenStack, you get to manage and maintain Kubernetes clusters the way you want.

In this blog post, you’ll learn about why Kubernetes on OpenStack is valuable for organizations and how you can get started with it today.

You Manage The Clusters

When it comes to Kubernetes, you have two types of servers to manage:

  • Control Planes
  • Worker Nodes

Control Planes are where the API server, etcd, the scheduler, and the controller manager live. Without these components, Kubernetes wouldn’t work. Without etcd, there would be no backend datastore. Without the scheduler, Pods wouldn’t know which worker node to go to. Without the controller manager, current state and desired state would never be reconciled. Without the Kubernetes API, there would be no way to interact with Kubernetes.

Worker Nodes are where Pods live: not just your application Pods, but also system Pods like kube-proxy and the other internal components that make Kubernetes tick.
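
As a quick illustration, on a kubeadm-style cluster (an assumption; labels and output vary by distribution) you can see both halves at a glance, since the control plane components run as static Pods and kube-proxy runs as a DaemonSet on every node:

    # Control plane components: API server, etcd, scheduler, controller manager
    kubectl get pods -n kube-system -l tier=control-plane

    # kube-proxy runs one Pod per node via a DaemonSet
    kubectl get daemonset kube-proxy -n kube-system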

That all sounds pretty important to manage, right? With OpenStack, you don’t have to worry about major cluster sprawl regardless of how you’re deploying Kubernetes.

Instead, you can manage clusters in one location and focus on how you want to run the clusters and components like etcd and the scheduler, rather than figuring out how to manage cluster sprawl itself. A great way that organizations are managing Kubernetes clusters is with Cluster API, a tool that provides declarative APIs to create, manage, and maintain clusters at scale.
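
As a rough sketch of what that looks like with the Cluster API OpenStack provider (CAPO), assuming an existing management cluster and the provider’s usual credential environment variables (the cluster name and version below are placeholders):

    # Install the Cluster API components plus the OpenStack infrastructure provider
    clusterctl init --infrastructure openstack

    # Generate a declarative workload-cluster definition, then apply it
    clusterctl generate cluster demo-cluster \
      --kubernetes-version v1.28.0 \
      --control-plane-machine-count 3 \
      --worker-machine-count 3 \
      > demo-cluster.yaml
    kubectl apply -f demo-cluster.yaml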

You Scale The Way You Want To Scale

Without proper scalability, Kubernetes can only serve an environment for so long. Once the Worker Nodes hit a resource limit (CPU, memory, etc.), Pods will start to crash or fail to schedule, and applications (both internal and customer-facing) will go offline. Because of that, scalability needs to be architected into any Kubernetes solution before you even get started with deployments.

When it comes to Kubernetes on OpenStack, and Kubernetes running almost anywhere, there’s the Kubernetes Autoscaler. It’s a tool that automatically adjusts the size of a Kubernetes cluster by scaling horizontally or vertically. From a horizontal scaling perspective, the cluster adds more nodes when necessary. For example, if you’re running three worker nodes and they’re full from a resource perspective, a fourth worker node is added automatically. If you implement vertical autoscaling (which many don’t, but it is an option), instead of a fourth worker node being added, the existing worker nodes increase in size (RAM, CPU, etc.) to handle the new and existing application workloads.
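
As a hedged sketch, here’s roughly how you might wire the Cluster Autoscaler up to a Magnum-managed cluster using its Helm chart; the cluster ID, node group name, and size bounds are placeholders, and the chart also needs your OpenStack credentials supplied as a cloud-config:

    helm repo add autoscaler https://kubernetes.github.io/autoscaler
    helm install cluster-autoscaler autoscaler/cluster-autoscaler \
      --set cloudProvider=magnum \
      --set magnumClusterName=<your-magnum-cluster-id> \
      --set "autoscalingGroups[0].name=default-worker" \
      --set "autoscalingGroups[0].minSize=3" \
      --set "autoscalingGroups[0].maxSize=10"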

The Kubernetes Autoscaler isn’t OpenStack-specific; almost every platform that runs Kubernetes (including the public clouds) uses it. It’s one of the core features of Kubernetes and is even found in the primary Kubernetes GitHub org.

Multiple Ways To Deploy

Inside of OpenStack, there’s a “native” way to deploy, which is called Magnum. However, that’s certainly not the only way to deploy. You can use many different on-prem-style solutions, such as kubeadm or Kubespray, or any of the other on-prem-style Kubernetes deployment tools.
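
To give a feel for the “native” route, here’s a minimal sketch using the Magnum CLI; the image, flavors, and network driver are assumptions, so substitute whatever your cloud actually provides:

    # Define a reusable cluster template, then create a cluster from it
    openstack coe cluster template create k8s-template \
      --image fedora-coreos-latest \
      --external-network public \
      --master-flavor m1.medium \
      --flavor m1.large \
      --network-driver calico \
      --coe kubernetes

    openstack coe cluster create demo-cluster \
      --cluster-template k8s-template \
      --master-count 3 \
      --node-count 3

    # Once the cluster is ready, fetch a kubeconfig for it
    openstack coe cluster config demo-cluster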

From an infrastructure creation perspective, you can automate those pieces as well. For example, you can use the Terraform OpenStack Provider to create the servers that you’d like to run Kubernetes on. Then, you can use a configuration management tool (Ansible, Chef, Puppet, etc.) to run one of the many Kubernetes deployment methods (kubeadm, Kubespray, etc.) on the servers you created via Terraform.
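
As a small sketch of the first half of that flow, a minimal Terraform definition for one server might look like the following; the image, flavor, key pair, and network names are hypothetical, so swap in your own:

    # Write a hypothetical minimal main.tf for the Terraform OpenStack provider
    cat > main.tf <<'EOF'
    terraform {
      required_providers {
        openstack = {
          source = "terraform-provider-openstack/openstack"
        }
      }
    }

    resource "openstack_compute_instance_v2" "k8s_worker" {
      name        = "k8s-worker-1"
      image_name  = "ubuntu-22.04"   # assumes this image exists in your cloud
      flavor_name = "m1.large"       # assumes this flavor exists
      key_pair    = "my-keypair"

      network {
        name = "private"             # assumes an existing network
      }
    }
    EOF

    terraform init && terraform apply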

To spice things up a bit, you can even think about hybrid-cloud management here. For example, let’s say you have some workloads inside of Azure. You can use Azure Arc to manage Kubernetes clusters, including clusters that are running on-prem or in OpenStack. From a deployment and management perspective, the possibilities are nearly endless when you run Kubernetes in OpenStack.
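
For instance, connecting an OpenStack-hosted cluster to Azure Arc is roughly the following; the name, resource group, and region are placeholders, and it assumes the Azure CLI is logged in with kubectl pointed at the on-prem cluster:

    az extension add --name connectedk8s
    az connectedk8s connect \
      --name openstack-k8s \
      --resource-group my-arc-rg \
      --location eastus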

Security Is In The Palm of Your Hands

There are a ton of security concerns in certain sectors of the world when it comes to specific compliance requirements such as:

  • SOC 2
  • HIPAA
  • PHI
  • HITRUST

Falling short on these requirements, along with several related legal matters, could be a liability.

There are a lot of organizations moving to the public cloud, and that’s fine. The public cloud isn’t an awful place to be, and OpenMetal understands that organizations will continue moving there. With that being said, if securing your data and controlling where it’s stored is a concern for you, OpenStack can help.

From a security perspective, you want to understand where your control plane(s) and worker node(s) are and which services and networks they’re interacting with, and you want to mitigate as much security risk as possible. Because of that, certain sectors and organizations want the ability to manage the entire Kubernetes stack from start to finish to ensure they have full control over where their data lives.

There’s also the concern that parts of the architecture will take shortcuts in the name of faster deployments, which can be slower with on-prem workloads due to many factors (datacenters, how networks are deployed, physical hardware concerns, etc.). OpenStack is a private cloud: it looks, smells, and feels like a public cloud, but you’re the one hosting it. Because of that, the security concerns around taking shortcuts can go away.

Closing Thoughts

Although many organizations are moving to public clouds, the truth is that many medium to large organizations are keeping workloads on-prem for reasons like security, scalability, and overall management. However, with on-prem workloads come new challenges. To meet those challenges somewhere in the middle, organizations are starting to adopt private clouds like OpenStack to get the best of both worlds.


About Our Guest Writer

Michael Levan is a consultant, researcher, and content creator with a career that has spanned countless business sectors, and includes working with companies such as Microsoft, IBM, and Pluralsight. An engineer at heart, he is passionate about enabling excellence in software development, SRE, DevOps, and actively mentors developers to help them realize their full potential. You can learn more about Michael on his website at: https://michaellevan.net

Interested in Learning More?

OpenMetal and OpenStack are the perfect fit for Kubernetes. But don’t take our word for it:

Test Drive

For eligible organizations, individuals, and Open Source Partners, Private Cloud Cores are free of charge. Qualify today.

Apply Now
