Kubernetes Security: Managing Clusters on OpenMetal

In this blog:

  • Ops Security
  • Public vs Private Clusters
  • Networking
  • OpenStack Security-Specific Features
  • Kubernetes on OpenStack for Telco Guest Video Tutorial

Security is at the forefront of every engineer’s mind when it comes to Kubernetes. In a recent article published by BleepingComputer, researchers reported that over 900,000 Kubernetes instances were exposed online. When it comes to securing Kubernetes from an OpenStack perspective, one of the most significant pieces is operations security. In this blog post, you’ll learn about Kubernetes security on OpenStack and how to manage it from an Ops perspective.

Ops Security

When you’re thinking about raw Kubernetes operations security, two things come to mind: 

  • Cluster Management
  • API Management 

With cluster management, it’s all about where and how your Kubernetes clusters are running. On the control plane/API server side, you have to ensure that components like etcd and the scheduler are secure. On the worker nodes, all of your Pods are running, so if your worker nodes aren’t secure, chances are your containerized applications aren’t either. Securing the underlying operating system and the nodes themselves is crucial for any Kubernetes deployment.
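One concrete control-plane hardening step is encrypting Secrets at rest in etcd. The sketch below is illustrative, not a complete procedure: the file path is an assumption, and the key placeholder must be replaced with one you generate yourself.

```shell
# Hypothetical sketch: enable encryption at rest for Secrets stored in etcd.
# Generate a real key first, e.g.:  head -c 32 /dev/urandom | base64
cat <<'EOF' > /etc/kubernetes/encryption-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>
      - identity: {}
EOF
# Then point kube-apiserver at the file:
#   --encryption-provider-config=/etc/kubernetes/encryption-config.yaml
```

Without this, Secrets are stored in etcd base64-encoded but unencrypted, so anyone who can read etcd’s data can read them.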

Underneath the Kubernetes hood is an API. Whether you’re running something like kubectl get pods or kubectl apply -f deployment.yaml, you’re interacting with an API. The Kubernetes API has multiple versions, and with each new version come feature enhancements and, when needed, security updates. Depending on which image you deploy Kubernetes on in OpenStack, the available API versions will be limited. For example, on fedora-atomic 9.4.1, you can only go up to Kubernetes API v1.18. On fedora-coreos 14.0.0, you can go up to Kubernetes API v1.23.3. That’s a huge difference in Kubernetes API versions and support, so you need to ensure that the API is handled and upgraded with security in mind.
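You can check what your cluster is actually running, and pin the version when you create a cluster template. The commands below are a sketch: the template name, image name, and network are illustrative, and the Magnum flags available depend on your OpenStack release.

```shell
# Check which Kubernetes version the API server is running
# (requires a working kubeconfig for the cluster):
kubectl version

# On OpenStack, the Kubernetes version is typically pinned per cluster
# template via the kube_tag label (names here are illustrative):
openstack coe cluster template create k8s-coreos-template \
  --image fedora-coreos-latest \
  --external-network public \
  --coe kubernetes \
  --labels kube_tag=v1.23.3
```

Pinning the version explicitly makes upgrades a deliberate, auditable step rather than whatever the image happens to ship.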

Public vs Private Clusters

When you’re deploying a Kubernetes API server, you have two typical options:

  • Public control plane
  • Private control plane

With a public control plane, the API server/control plane is exposed to the entire world. Although attackers would need a fairly sophisticated brute-force attack to crack the Kubernetes API server, it’s still not good to have servers sitting on the internet for no reason. If RBAC policies aren’t configured properly (and in many cases, they aren’t), attackers will have a much easier time breaking into your cluster. By default, the API server listens on port 6443, and leaving that port reachable from the internet is a major security concern in itself.

With a private control plane, the API server/control plane is only accessible inside a VPC (private network), and if you’re working remotely, chances are you’ll need a VPN to access it. Creating a private Kubernetes cluster is a great way to ensure isolation and level up security.
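On OpenStack, keeping the control plane off the public internet mostly comes down to not assigning it a floating IP. A hedged sketch, where the cluster and template names are illustrative and flag availability depends on your Magnum version:

```shell
# Create a cluster whose nodes get no public floating IPs; you then
# reach the API server over the private network, typically via a VPN
# or bastion host (names are illustrative):
openstack coe cluster create private-k8s \
  --cluster-template k8s-coreos-template \
  --master-count 3 \
  --node-count 3 \
  --floating-ip-disabled
```

With this layout, only machines inside the tenant network (or connected through your VPN) can reach port 6443 at all.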

When running Kubernetes on OpenStack, you have the option to choose whether you want a public control plane or a private control plane.

Networking 

Hosting a Kubernetes cluster on OpenStack allows you to have full control, including over how you manage networking. Kubernetes on OpenStack currently supports two network plugins:

  • Flannel
  • Calico

Flannel is a great choice for engineers who are just getting started with hosting a raw Kubernetes cluster. It’s essentially out-of-the-box networking with minimal configuration needed, which makes it a great way to get a cluster up and running fast. The tradeoff is that it’s focused on ease of use rather than on security out of the box.

Calico, on the other hand, is focused on Kubernetes and container security out of the box. It’s supported by many Kubernetes platforms, including OpenStack, OpenShift, and almost any other bare-metal Kubernetes service. The reason engineers typically go with Calico from a security perspective is that it doesn’t just support security policies for hosts and network policy rules; it also supports security policies at the application layer. For example, if you’re running an Istio service mesh, you can apply Calico security policies to the service mesh. Another huge reason is pod-to-pod security: by default, Kubernetes doesn’t encrypt east-west traffic between Pods, but Calico can, using WireGuard.
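Turning on that WireGuard encryption is a small change to Calico’s Felix configuration. A sketch, assuming calicoctl is installed and configured against your cluster:

```shell
# Enable WireGuard encryption for pod-to-pod (east-west) traffic:
calicoctl patch felixconfiguration default --type='merge' \
  -p '{"spec":{"wireguardEnabled":true}}'

# Verify that nodes have negotiated WireGuard public keys:
calicoctl get node -o yaml | grep wireguardPublicKey
```

Once enabled, traffic between nodes’ pod networks is transparently encrypted without any changes to your applications.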

OpenStack Security-Specific Features

To wrap up this blog post, let’s talk about some security-related, OpenStack-specific features that can help when you’re making the decision to adopt OpenStack.

First, there’s the Octavia OVN Load Balancer. With a lot of other bare-metal Kubernetes providers, load balancing doesn’t come built in, so you have to work out how you’re going to load balance traffic yourself; a lot of engineers go with MetalLB to solve this. With OpenStack, however, you don’t have to go outside the platform. Whether you have multiple public-facing applications that need a load balancer, or you want your Kubernetes Pods behind a load balancer so they can be managed by a service mesh, having the Octavia OVN Load Balancer out of the box is one less headache to worry about.
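In practice this means a plain Service of type LoadBalancer is enough; on a cluster wired up with the OpenStack cloud provider, Octavia provisions the load balancer for you. The app name and ports below are illustrative:

```shell
# A standard LoadBalancer Service; the OpenStack cloud provider
# creates a matching Octavia load balancer automatically:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
EOF
```

Compare that with MetalLB, where you first have to install the controller and carve out an address pool yourself.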

Then, there are the OVN ACLs and security groups. As every engineer knows, controlling both inbound and outbound traffic is crucial. Opening all ports, for example, would be a major concern for any security team; yet because managing security groups can be time-consuming, a lot of engineers who aren’t security-focused will do exactly that just to get their application running. OVN ACLs make managing security groups a little easier: they let you manage packet-filtering policies on OpenStack without relying on a third-party provider or on filtering at the network equipment inside your data center.
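A minimal sketch of doing it the narrow way instead of opening everything; the group name and CIDR are illustrative:

```shell
# Create a security group and allow the Kubernetes API port only
# from a trusted office/VPN range, rather than 0.0.0.0/0:
openstack security group create k8s-api \
  --description "Kubernetes API access"
openstack security group rule create k8s-api \
  --protocol tcp --dst-port 6443 --remote-ip 203.0.113.0/24
```

Rules like this are enforced as OVN ACLs underneath, so the filtering happens in the virtual network layer itself.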


About Our Guest Writer

Michael Levan is a consultant, researcher, and content creator with a career that has spanned countless business sectors, and includes working with companies such as Microsoft, IBM, and Pluralsight. An engineer at heart, he is passionate about enabling excellence in software development, SRE, DevOps, and actively mentors developers to help them realize their full potential. You can learn more about Michael on his website at: https://michaellevan.net

Interested in Learning More?

OpenMetal and OpenStack are the perfect fit for Kubernetes. But don’t take our word for it:

Test Drive

For eligible organizations, individuals, and Open Source Partners, Private Cloud Cores are free of charge. Qualify today.

