Businesses looking to manage and orchestrate their workloads effectively at scale are containerizing those workloads on Kubernetes in the cloud. Within the open source community, the combination of Kubernetes and OpenStack offers a powerful solution: efficient orchestration, scalability, flexibility, and portability, which together ensure optimal resource utilization and cost efficiency.
If you’re running Kubernetes on OpenStack, you’ll need a platform to manage your clusters, and for this many users turn to Rancher.
Rancher provides a unified interface for managing Kubernetes clusters. It simplifies deployment, configuration, and monitoring of Kubernetes. This makes it the ideal choice for organizations looking to leverage the power of Kubernetes on OpenStack.
Why Use Rancher To Run Kubernetes Clusters on OpenStack?
Ease of Use: One of Rancher’s key strengths lies in its user-friendly nature. Rancher offers a graphical user interface (GUI) that makes managing Kubernetes clusters a breeze. With Rancher, you can easily create and oversee clusters, deploy applications, and keep an eye on their performance. This simplicity ensures that even users with limited Kubernetes knowledge can easily navigate and make the most of the platform.
Robust Features: Rancher comes equipped with an extensive range of features that enhance the management of Kubernetes clusters on OpenStack. Some notable features include:
Cluster Provisioning: Rancher is a versatile platform that supports creating Kubernetes clusters not only on OpenStack but also on other infrastructure platforms like AWS, Azure, and vSphere. This means that organizations can easily build clusters that combine resources from their own data centers and the cloud. This flexibility enables them to scale their applications as needed and adapt to changing demands efficiently.
Application Deployment: Rancher provides different ways to deploy applications to Kubernetes clusters. You can choose from various deployment methods such as Helm charts, Docker images, and Kubernetes manifests. These methods simplify the process of deploying applications, allowing you to select the most convenient option for your needs. Rancher also offers a built-in application catalog, which means you can deploy pre-configured applications with just one click. This catalog saves time and effort by providing ready-to-use applications that can be quickly deployed to your Kubernetes clusters.
Monitoring Capabilities: Rancher offers powerful monitoring capabilities for Kubernetes clusters. It provides features like health checks, metrics, and logs, allowing you to keep track of the performance and health of your clusters. Rancher also seamlessly integrates with well-known third-party monitoring tools such as Prometheus and Grafana. This integration enables you to have a comprehensive view of your cluster’s performance and resource utilization. With Rancher, you can easily monitor and analyze data to ensure the smooth operation of your Kubernetes clusters, even if you’re not a technical expert.
Enhanced Security: Security is a top priority for Rancher, and it provides various features to protect your Kubernetes clusters. One essential feature is role-based access control (RBAC), which allows administrators to manage user permissions and control access to Kubernetes resources. This ensures that only authorized individuals have the appropriate level of access. Rancher also offers network policies, which enable administrators to define rules for container communication, ensuring that network traffic within the cluster is secure and controlled.
Moreover, Rancher takes steps to protect data by implementing encryption measures. It ensures that data is encrypted both at rest and during transit. This means that your sensitive information remains safeguarded, reducing the risk of unauthorized access.
By incorporating these security features, Rancher ensures that your Kubernetes clusters are well-protected and aligned with industry best practices.
Hybrid Environments: Rancher excels in managing Kubernetes clusters in hybrid environments, where organizations have a mix of on-premises and cloud resources. With Rancher, you can easily manage and orchestrate clusters across different environments, whether it’s on-premises or in the cloud. This means you can deploy applications seamlessly, leveraging the benefits of both worlds.
The hybrid capability offered by Rancher allows organizations to adapt their infrastructure based on their changing needs. They can utilize their existing on-premises resources while taking advantage of the scalability and flexibility offered by the cloud. Rancher simplifies the management of these hybrid environments, providing a unified platform to oversee and control your Kubernetes clusters, regardless of where they are deployed.
By using Rancher, organizations can achieve a seamless and efficient integration of on-premises and cloud resources, enabling them to optimize their infrastructure and adapt to the evolving demands of their workloads.
Rancher Use Cases In Today’s World
Rancher is a versatile solution that caters to a wide range of use cases, making it suitable for various types of organizations:
- DevOps
Rancher simplifies Kubernetes management for DevOps teams. Its all-in-one interface streamlines cluster provisioning, application deployment, monitoring, and security. This empowers DevOps teams to focus on efficiently delivering applications, improving productivity, and accelerating the software development lifecycle.
- Managed Services
Rancher is valuable for managed service providers who offer Kubernetes as a service. With Rancher’s turnkey solution, service providers can deploy and manage Kubernetes clusters at scale. This reduces costs, enhances operational efficiency, and enables service providers to deliver a reliable and scalable platform for their customers’ containerized applications.
- Enterprises
Rancher is an excellent choice for enterprises with diverse IT requirements. It provides scalability and flexibility, allowing enterprises to effectively manage Kubernetes clusters regardless of the size or complexity of their infrastructure. Rancher simplifies cluster deployment, orchestration, and monitoring, enabling enterprises to optimize their infrastructure and embrace modern application deployment practices.
- Multi-Cloud and Hybrid Environments
Rancher is well-suited for organizations operating in multi-cloud and hybrid environments. It supports managing Kubernetes clusters across different cloud providers and on-premises infrastructure. Rancher’s unified management platform ensures consistent and efficient management of clusters, enabling organizations to leverage the benefits of multiple environments while maintaining control and visibility.
- Edge Computing
Rancher is increasingly used in edge computing scenarios where Kubernetes clusters are deployed at the edge of the network. Rancher’s lightweight footprint and simplified management capabilities make it an ideal choice for managing and orchestrating containerized applications in distributed edge environments.
How To Install a Rancher Managed Cluster on OpenStack
This guide will validate running an RKE1 (Rancher) cluster on an OpenStack cloud. We’ll be following the official Rancher documentation: Setting up a High-availability RKE Kubernetes Cluster.
RKE1 is the first iteration of Rancher’s Kubernetes deployment system. RKE2 is available and also works within OpenStack. We’ll be creating a guide on RKE2 in the near future. To learn the differences between RKE1 and RKE2, please see RKE1 vs RKE2.
Setting up an RKE1 cluster on OpenStack is rather simple. First, we need to create the nodes for our cluster within OpenStack. Then we create an RKE configuration file that points to each of our nodes. Finally, we use RKE to install Kubernetes on our nodes.
This guide was created using a standard-sized On-Demand OpenStack Cloud from OpenMetal.
Prerequisites
This guide requires access to the OpenStack CLI; install it before continuing if you haven’t already.
Create Nodes
We’ll need to create 3 nodes for our cluster. Before we can create our nodes, we need to create a network, subnet, and router. We’ll also need to create a security group so traffic can reach our cluster.
Create a Project
Creating a project will separate the resources and make them easy to clean up later.
openstack project create --domain default --description "RKE1 Cluster" rke1
openstack role add --project rke1 --user admin admin
Update the OpenStack CLI
Update the following environment variables.
Note: Replace <project_id> with the project id from the previous step.
export OS_PROJECT_ID=<project_id>
export OS_PROJECT_NAME=rke1
Network
We’ll need to create some networking resources for our nodes.
Create a Network
openstack network create \
--project rke1 \
rke1
Create a Subnet
openstack subnet create \
rke1-subnet \
--project rke1 \
--network rke1 \
--subnet-range 172.31.0.0/28
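The /28 range is intentionally small. As a quick sanity check (not required for the guide), a little shell arithmetic shows it still leaves room for the four instances created later:

```shell
prefix=28
total=$((1 << (32 - prefix)))   # 2^(32-28) = 16 addresses in a /28
usable=$((total - 3))           # minus network, broadcast, and the router port
                                # (OpenStack's DHCP agent typically reserves more)
echo "$total addresses, roughly $usable usable"
```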
Create a Router
openstack router create \
rke1-router \
--project rke1
Add a Subnet to the Router
openstack router add subnet \
rke1-router \
rke1-subnet
Set the Router’s External Gateway
openstack router set --external-gateway \
$(openstack network list --external -f value -c ID) \
rke1-router
Create a Security Group
Create a security group that allows traffic on common ports required by Kubernetes deployment systems.
Note: This is not a definitive list. In production deployments you’ll want to lock down your security groups to only allow traffic to the nodes and ports that need it.
openstack security group create rke1 --project rke1
openstack security group rule create --protocol icmp rke1
openstack security group rule create --protocol tcp --dst-port 22:22 rke1
openstack security group rule create --protocol tcp --dst-port 53:53 rke1
openstack security group rule create --protocol tcp --dst-port 179:179 rke1
openstack security group rule create --protocol tcp --dst-port 6443:6443 rke1
openstack security group rule create --protocol tcp --dst-port 2380:2380 rke1
openstack security group rule create --protocol tcp --dst-port 7080:7080 rke1
openstack security group rule create --protocol tcp --dst-port 8472:8472 rke1
openstack security group rule create --protocol tcp --dst-port 8080:8080 rke1
openstack security group rule create --protocol tcp --dst-port 9100:9100 rke1
openstack security group rule create --protocol tcp --dst-port 10250:10250 rke1
openstack security group rule create --protocol udp --dst-port 8472:8472 rke1
openstack security group rule create --protocol tcp --dst-port 30000:32767 rke1
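The per-port commands above can also be generated with a loop. This dry-run sketch builds the same command list into a variable so you can review it before piping it to sh; the port list and group name are taken from the commands above:

```shell
# Build the TCP rule commands; the UDP and ICMP rules are appended separately.
cmds=""
for port in 22 53 179 2380 6443 7080 8080 8472 9100 10250 30000:32767; do
  cmds="${cmds}openstack security group rule create --protocol tcp --dst-port $port rke1
"
done
cmds="${cmds}openstack security group rule create --protocol udp --dst-port 8472 rke1
openstack security group rule create --protocol icmp rke1
"
printf '%s' "$cmds"   # review first, then execute with: printf '%s' "$cmds" | sh
```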
Create a Key Pair
This key will be used by the deployment environment to access the nodes.
Generate a key pair
ssh-keygen -t ed25519 -N '' -f /root/.ssh/id_rke1
Upload the public key to OpenStack
openstack keypair create --public-key /root/.ssh/id_rke1.pub rke1-key
Create Instances
Prepare Docker installation script
Each of your nodes will need Docker installed. This script is passed to each VM as user data and installs Docker after the VM is created.
cat <<EOF > ./install_docker.sh
#!/bin/bash
curl https://releases.rancher.com/install-docker/20.10.sh | sh
sudo usermod -aG docker ubuntu
EOF
Create Servers
These nodes will serve as your 3 Kubernetes cluster nodes. We’ll use the --user-data flag to pass the script we created above to the nodes, which installs Docker on them, and the --max flag to create all 3 nodes at once.
openstack server create --flavor m1.medium \
--image="Ubuntu 20.04 (focal-amd64)" \
--network rke1 \
--key-name rke1-key \
--security-group rke1 \
--user-data ./install_docker.sh \
--max 3 \
rke1
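Instance builds take a little while. Rather than re-running openstack server list by hand, a small polling helper (a sketch; wait_for is not an OpenStack CLI command) can block until every server in the project reports ACTIVE:

```shell
# Retry a command up to N times, one second apart; fails if it never succeeds.
wait_for() {
  attempts=$1; shift
  until "$@"; do
    attempts=$((attempts - 1))
    [ "$attempts" -le 0 ] && return 1
    sleep 1
  done
}

# Against the cloud: succeed once no status other than ACTIVE remains, e.g.:
# wait_for 60 sh -c '! openstack server list --project rke1 -f value -c Status | grep -qv ACTIVE'
```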
Deployment node
We’ll also deploy a node that we’ll use to deploy our cluster. This node will be used to run the RKE installer. It’s not required, but it’s a good idea to keep the deployment environment separate from the cluster. This will allow you to easily destroy the deployment environment without affecting your cluster.
openstack server create --flavor m1.medium \
--image="Ubuntu 20.04 (focal-amd64)" \
--network rke1 \
--key-name rke1-key \
--security-group rke1 \
rke1-launcher2
Add a floating IP to the deployment node
At this point, you should have 4 nodes running. Each node is currently only accessible from the private network. We’ll need to add a floating IP to the deployment node so we can access it.
openstack floating ip create \
--description "RKE1 Cluster - Deployment Node" \
--project rke1 \
External
openstack server add floating ip rke1-launcher2 <floating-ip>
Note: Replace <floating-ip> with the floating IP you created, in this and the following commands.
Copy the SSH key to deployment node
We’ll need to copy the SSH key we created earlier to the deployment node. This will allow RKE to deploy Kubernetes to the 3 cluster nodes.
scp -i ~/.ssh/id_rke1 ~/.ssh/id_rke1 ubuntu@<floating-ip>:/home/ubuntu/.ssh/id_rsa
SSH to deployment node
ssh -i ~/.ssh/id_rke1 ubuntu@<floating-ip>
Deploy Your Cluster
Install RKE
curl -OL https://github.com/rancher/rke/releases/download/v1.3.14/rke_linux-amd64
chmod +x rke_linux-amd64 && sudo mv rke_linux-amd64 /usr/local/bin/rke
Install Kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
Prepare your configuration file
Create a configuration file for your RKE cluster. Substitute each address with the IP address of one of the nodes you created, and save the file as rancher-cluster.yml
Full documentation for the configuration file can be found here: Configuration Options
Example Configuration file: Example Configuration
You can view the IP addresses of your nodes by running:
openstack server list --project rke1
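With more than a few nodes, writing the nodes: section by hand gets tedious; it can be stenciled out from the IP list instead. The addresses below are stand-ins for whatever your server list reports, and the inline role: form is equivalent YAML shorthand for the block list shown in the configuration file:

```shell
# Stand-in IPs; in practice, pull them from the server list output.
ips="172.31.0.12 172.31.0.7 172.31.0.9"
nodes="nodes:
"
for ip in $ips; do
  nodes="${nodes}- address: ${ip}
  user: ubuntu
  role: [controlplane, worker, etcd]
"
done
printf '%s' "$nodes"
```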
Configuration File
nodes:
- address: <node_1_IP_address>
user: ubuntu
role:
- controlplane
- worker
- etcd
- address: <node_2_IP_address>
user: ubuntu
role:
- controlplane
- worker
- etcd
- address: <node_3_IP_address>
user: ubuntu
role:
- controlplane
- worker
- etcd
services:
etcd:
snapshot: true
creation: 6h
retention: 24h
ingress:
provider: nginx
options:
use-forwarded-headers: "true"
Run RKE
This process takes about 2 minutes. After it completes, you should have a working Kubernetes cluster.
rke up --config rancher-cluster.yml
Output:
...
INFO[0171] [ingress] Setting up nginx ingress controller
INFO[0171] [ingress] removing admission batch jobs if they exist
INFO[0171] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0171] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0171] [addons] Executing deploy job rke-ingress-controller
INFO[0181] [ingress] removing default backend service and deployment if they exist
INFO[0181] [ingress] ingress controller nginx deployed successfully
INFO[0181] [addons] Setting up user addons
INFO[0181] [addons] no user addons defined
INFO[0181] Finished building Kubernetes cluster successfully
Verify Installation
Set config as default
mkdir -p ~/.kube && cp kube_config_rancher-cluster.yml ~/.kube/config
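Alternatively, if you’d rather not overwrite an existing ~/.kube/config, kubectl also honors the KUBECONFIG environment variable, so you can point it at the generated file directly:

```shell
# Point kubectl at RKE's generated kubeconfig without copying it.
export KUBECONFIG=$PWD/kube_config_rancher-cluster.yml
```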
Fetch resources
Get pods in all namespaces.
kubectl get pods -A
Output:
ubuntu@rke1-launcher:~/rancher$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx ingress-nginx-admission-create-9s6t2 0/1 Completed 0 3m55s
ingress-nginx ingress-nginx-admission-patch-2brpz 0/1 Completed 0 3m55s
ingress-nginx nginx-ingress-controller-j286h 1/1 Running 0 3m55s
ingress-nginx nginx-ingress-controller-nm5m7 1/1 Running 0 3m55s
ingress-nginx nginx-ingress-controller-x65pp 1/1 Running 0 3m55s
kube-system calico-kube-controllers-74df54cbb7-49xm7 1/1 Running 0 4m25s
kube-system canal-jkzvb 2/2 Running 0 4m26s
kube-system canal-mz67r 2/2 Running
...
Credentials
Save a copy of the following files in a secure location:
- rancher-cluster.yml: The RKE cluster configuration file.
- kube_config_rancher-cluster.yml: The Kubeconfig file for the cluster; this file contains credentials for full access to the cluster.
- rancher-cluster.rkestate: The Kubernetes Cluster State file; this file contains credentials for full access to the cluster.
What’s Next?
You’ve now deployed a Kubernetes cluster using RKE1 and can begin deploying workloads to it. However, it is recommended that you also deploy an OpenStack load balancer and set up persistent volumes.
Here are some guides that may help you with this process.
You can find additional resources for running Kubernetes workloads on OpenStack here.