Many organizations are containerizing their workloads on Kubernetes because of benefits such as portability, scalability, reliability, automation, and a rich ecosystem. Running Kubernetes workloads on the wrong type of infrastructure, however, can lead to a range of undesirable consequences: performance degradation, reliability issues, security vulnerabilities, and increased cost.
In this blog post, we’ll explore the key considerations you should keep in mind when choosing the right infrastructure to host your Kubernetes workloads.

Kubernetes on OpenStack is a powerful combination that helps organizations manage their applications and services. The duo provides the flexibility to scale up or down as needed, along with straightforward deployment and management of applications. That matters for an organization's success in today's fast-paced digital landscape, where applications must be deployed quickly, efficiently, at scale, and across multiple environments.

At the beginning of the Kubernetes era, many engineers had a concern: what about apps that have to store data? Kubernetes earned a reputation for being suited only to stateless applications that didn't need to persist data. That has changed dramatically over the years. In this blog post, you're going to learn how to manage Kubernetes volumes, what the Container Storage Interface (CSI) is, and how to install a CSI plugin on a Kubernetes cluster running in OpenStack.
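
To make that concrete before we dive in, here is a minimal sketch of what dynamically provisioning a Cinder-backed volume looks like once a CSI driver is in place. It assumes the Cinder CSI plugin (provisioner `cinder.csi.openstack.org`) has already been installed in the cluster and that `kubectl` is configured against it; the StorageClass name, PVC name, and size are illustrative placeholders.

```bash
# Assumes the Cinder CSI plugin is already installed; names and sizes are examples only.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-csi            # illustrative name
provisioner: cinder.csi.openstack.org
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data             # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cinder-csi
  resources:
    requests:
      storage: 5Gi
EOF

# Once a Pod references the claim, the CSI driver creates a Cinder volume
# and attaches it to the node running that Pod.
kubectl get pvc demo-data
```

With `WaitForFirstConsumer`, the claim stays Pending until a Pod actually uses it, which lets the volume be created in the same availability zone as the node that needs it.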

Creating repeatable, automated processes, especially for infrastructure layers, is critically important. It's the difference between provisioning resources at scale and clicking buttons for 90% of your day. In the past few years, the mantra for almost every engineering team has been to move faster, and the way to do that is with proper automation.

In this blog post, you're going to learn about one such repeatable process: creating Kubernetes clusters with Magnum, OpenStack's container orchestration service. A brief sketch of what that workflow looks like is shown below.
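
As a preview, the sketch below shows the general shape of the Magnum workflow using the OpenStack CLI. The image, keypair, network, and flavor names are assumptions that will differ per cloud, and the exact options available depend on your Magnum version.

```bash
# Define a reusable cluster template (image, keypair, network, and flavor
# names are placeholders for whatever exists in your OpenStack environment).
openstack coe cluster template create k8s-template \
  --coe kubernetes \
  --image fedora-coreos-latest \
  --keypair mykey \
  --external-network public \
  --flavor m1.medium \
  --master-flavor m1.medium \
  --network-driver flannel \
  --docker-volume-size 10

# Create a cluster from the template: one control-plane node, three workers.
openstack coe cluster create demo-cluster \
  --cluster-template k8s-template \
  --master-count 1 \
  --node-count 3

# Once the cluster reaches CREATE_COMPLETE, pull down a kubeconfig for kubectl.
openstack coe cluster config demo-cluster --dir .
export KUBECONFIG=$PWD/config
kubectl get nodes
```

Because the template captures the infrastructure decisions (image, flavors, network driver), creating the next cluster is a single, repeatable command rather than another afternoon of clicking through a dashboard.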