Published: June 19, 2019
Updated: June 27, 2019
Tags: OpenStack, Rancher, Kubernetes
11 min read

Deploying Rancher On OpenStack

Red Hat OpenStack Platform 13, Rancher 2.2.4 and Ubuntu 16.04 guests are used in this blog post

This blog post describes a deployment that is not optimal for production; use it only for development

Introduction To Rancher

“One Platform for Kubernetes Management”

Rancher [1] is an open source [2] container management platform (Platform as a Service) based on Kubernetes which allows users to run their container workloads across multiple Kubernetes clusters.

Rancher can be deployed on premises, on bare metal and on various cloud providers.

Why OpenStack?

Rancher is to container orchestration what OpenStack is to infrastructure: an open platform for running workloads across multiple platforms and architectures.

While OpenStack manages the various parts of the infrastructure (storage, networking, compute and many other services), it is most commonly paired with Kubernetes for container orchestration.

Leveraging both products creates an open environment which is dynamic, scalable and robust.

OpenStack Preparation Before Deployment

Before deploying Rancher, make sure that all the credentials and endpoints are accessible and that instances can be created on the OpenStack cloud.

Infrastructure details that are used during the Rancher deployment (items in bold are mandatory):

  • Authentication URL
  • Username
  • Project(Tenant) Name/ID
  • Domain Name/ID (if using identity v3)
  • Region Name

If you are unsure where to get these details, they can be downloaded from Horizon’s web interface (the OpenStack Dashboard):
Red Hat OpenStack 13 project dashboard
This will result in the following file being downloaded:

# This is a clouds.yaml file, which can be used by OpenStack tools as a source
# of configuration on how to connect to a cloud. If this is your only cloud,
# just put this file in ~/.config/openstack/clouds.yaml and tools like
# python-openstackclient will just work with no further config. (You will need
# to add your password to the auth section)
# If you have more than one cloud account, add the cloud entry to the clouds
# section of your existing file and you can refer to them by name with
# OS_CLOUD=openstack or --os-cloud=openstack
clouds:
  openstack:
    auth:
      auth_url: https://...:13000/v3
      username: "vkhitrin"
      project_id: 19eb5624e491407d810f8813aaa48720
      project_name: "rhosnfv"
      user_domain_name: "..."
    region_name: "regionOne"
    interface: "public"
    identity_api_version: 3
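
Assuming the file is saved as ~/.config/openstack/clouds.yaml (and the password added to its auth section), a quick sanity check with python-openstackclient might look like this; the cloud name openstack matches the entry in the file:

# Request a token to confirm the credentials and auth_url are valid
openstack --os-cloud openstack token issue
# Confirm the service endpoints are reachable from this machine
openstack --os-cloud openstack catalog list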

Instances use the following resources during the Rancher deployment (items in bold are mandatory):

  • Keypair
  • Network (and a Floating IP if necessary)
  • Flavor
  • Image
  • Userdata (cloud-init script)
  • Availability Zone
  • Config Drive
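
The presence of these resources can be verified up front with the OpenStack CLI, for example (a minimal sketch; the resources listed are whatever exists in your project):

openstack keypair list
openstack network list
openstack flavor list
openstack image list
openstack availability zone list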

Once everything is configured and verified on OpenStack, we can proceed to set up a Rancher server and deploy a Rancher cluster.

Deploying Rancher

Setting Up Rancher Server

The Rancher server is a container with a web interface that can orchestrate and manage multiple Rancher clusters.
It is a standalone component that can be installed anywhere.

To deploy a Rancher server, execute the following command from a docker host:

sudo docker run --name rancher-server -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:stable

This will deploy a Rancher server from the stable branch which can be accessed via http://<hostname>.
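
Before moving on, it is worth confirming that the container actually came up; a simple sanity check on the Docker host (not part of the official procedure) could be:

# Check that the container is running and the ports are published
sudo docker ps --filter name=rancher-server
# Follow the server logs until the UI starts serving requests
sudo docker logs -f rancher-server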

Once Rancher server is deployed, some initial configuration is required.

Set Rancher URL

Set a DNS/IP which will be accessible by all the nodes that will be created during cluster creation:
Rancher server URL configuration

Set Admin User

Create a password for the admin user:
Rancher server admin user configuration
Afterwards, we will be redirected to Rancher’s dashboard:
Rancher clusters management

Enable OpenStack As Infrastructure Provider

Before proceeding with the cluster creation, the OpenStack node driver has to be enabled in Rancher, since it is turned off by default.

On the top navigation bar, navigate to Tools > Drivers:
Rancher navigation toolbar
Navigate to Node Drivers:
Rancher Node Drivers
Select OpenStack from the list and press Activate:
Rancher enable OpenStack node driver
The OpenStack node driver should now be enabled:
OpenStack node driver status
Navigate back to Clusters and press Add Cluster; an OpenStack option should now appear in the From nodes in an infrastructure provider section:
Rancher cluster provisioning

Deploying Rancher Cluster On OpenStack Infrastructure

During deployment, Rancher will create several guest instances on OpenStack that run Kubernetes components based on the roles assigned to them.

A Rancher cluster contains nodes with the following Kubernetes roles:

  • etcd - Kubernetes nodes running the etcd container act as the single source of truth in the cluster and store Kubernetes’ state. A single etcd node is enough, but for High Availability deployments a quorum must be present (3, 5 and so on).
  • control plane - Kubernetes nodes running stateless components such as the API server, scheduler and controllers.
  • worker - Kubernetes nodes used for running workloads/pods.
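
Once a cluster is up, the role assigned to each node can be checked from any machine that has the cluster’s kubeconfig; a hedged sketch (on RKE-provisioned clusters the roles typically also show up as node-role.kubernetes.io/* labels):

# List nodes together with the roles reported for each one
kubectl get nodes
# Show all labels, including the role markers
kubectl get nodes --show-labels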

Deploying a Rancher cluster

Go back to the Add Cluster tab.

Set Cluster Name + Description

Set a name for your cluster and add a description if you desire:
Cluster name+description configuration

Add Members to Cluster

By default, the admin user is part of the Default Admin group.
If you would like to add additional users during deployment, create them and add them in this section.
Rancher cluster members configuration
Additional users may be added post deployment.

Provisioning Instances

Select the required roles, node count and additional node pools according to your desired topology:
Rancher Node Pools configuration
Clicking on Add Node Template will open a template containing all the configuration needed to communicate with OpenStack:
Rancher OpenStack node template
Fill the template accordingly.

There are also additional settings to configure Docker which we will not discuss in this blog post.

Workarounds Used During Deployment

Authenticating With Identity v3

If docker-machine tries to authenticate with identity v2 while your OpenStack cloud only supports identity v3, you may find the following blog post [3] helpful.
Workaround applied:

  1. Log into the rancher-server container: sudo docker exec -it rancher-server bash
  2. Add the following environment variable to ~/.bashrc: echo 'export OS_IDENTITY_API_VERSION="3"' >> ~/.bashrc
  3. Use tenantId; using tenantName will cause docker-machine to use identity v2
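
Put together, the workaround boils down to the following commands on the Docker host (consolidated from the steps above):

# Enter the Rancher server container
sudo docker exec -it rancher-server bash
# Inside the container, force docker-machine to use identity v3
echo 'export OS_IDENTITY_API_VERSION="3"' >> ~/.bashrc
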
Floating IP Provisioning Not Consistent

Floating IP provisioning is not consistent because of a race condition present in docker-machine [4], so I connected my external network directly to the instance.

Failed To Provision RHEL Guest Image

RHEL-based instances failed to provision even though they are supported by Rancher (I have previously provisioned a Kubernetes cluster on RHEL instances using rke [5]).
Ubuntu-based instances were used instead.

Additional Cluster Configuration

Most notable is Network Provider, which determines the CNI (Container Network Interface) that will be used in the Kubernetes cluster:
Rancher cluster Network Provider configuration
Currently Rancher supports four options: Flannel, Calico, Canal (default) and Weave.
A high level overview comparison between the various CNIs can be found in Rancher’s CNI Provider documentation [6].

Following Deployment Process

Rancher Cluster Provisioning

After initiating a Rancher cluster deployment, you’ll be redirected to the clusters dashboard:
Rancher clusters dashboard
The cluster should be in provisioning state.

OpenStack Provisioning

The OpenStack Rancher node driver uses docker-machine to create the instances on your OpenStack cloud:
OpenStack dashboard with provisioned instances
The output of docker-machine is logged and can be viewed in the container logs via docker logs rancher-server:

2019/06/15 19:03:44 [INFO] stdout: (ranch-infra-node3) Creating machine...
2019/06/15 19:03:44 [INFO] (ranch-infra-node3) Creating machine...
2019/06/15 19:03:58 [INFO] stdout: Waiting for machine to be running, this may take a few minutes...
2019/06/15 19:03:58 [INFO] Waiting for machine to be running, this may take a few minutes...
2019/06/15 19:03:58 [INFO] stdout: Detecting operating system of created instance...
2019/06/15 19:03:58 [INFO] Detecting operating system of created instance...
2019/06/15 19:03:58 [INFO] stdout: Waiting for SSH to be available...
2019/06/15 19:03:58 [INFO] Waiting for SSH to be available...
2019/06/15 19:03:58 [INFO] stdout: Waiting for machine to be running, this may take a few minutes...
2019/06/15 19:03:58 [INFO] Waiting for machine to be running, this may take a few minutes...
2019/06/15 19:03:59 [INFO] stdout: Detecting operating system of created instance...
2019/06/15 19:03:59 [INFO] Detecting operating system of created instance...
2019/06/15 19:03:59 [INFO] stdout: Waiting for SSH to be available...
2019/06/15 19:03:59 [INFO] Waiting for SSH to be available...
2019/06/15 19:03:59 [INFO] stdout: Waiting for machine to be running, this may take a few minutes...
2019/06/15 19:03:59 [INFO] Waiting for machine to be running, this may take a few minutes...
2019/06/15 19:04:00 [INFO] stdout: Detecting operating system of created instance...
2019/06/15 19:04:00 [INFO] Detecting operating system of created instance...
2019/06/15 19:04:00 [INFO] stdout: Waiting for SSH to be available...
2019/06/15 19:04:00 [INFO] Waiting for SSH to be available...

Rancher Cluster Nodes Provisioning

Clicking on the cluster name will open the cluster management dashboard; click on Nodes in the navigation bar to track the node status:
Rancher cluster management
Rancher cluster nodes management
Clicking on a node will open the node’s management dashboard, which will show the current operation/state:
Rancher cluster node status
If any errors occur, they’ll be briefly shown in the cluster management dashboard:
Rancher cluster with an error
More detailed errors will be logged in the rancher-server container; use docker logs rancher-server to view the deployment logs:

2019/06/17 22:26:27 [INFO] cluster [c-h5nmn] provisioning: Building Kubernetes cluster
2019/06/17 22:26:27 [INFO] cluster [c-h5nmn] provisioning: [dialer] Setup tunnel for host [XXX.XXX.XXX.XXX]
2019/06/17 22:26:27 [ERROR] cluster [c-h5nmn] provisioning: Failed to set up SSH tunneling for host [XXX.XXX.XXX.XXX]: Can’t establish dialer connection: can not build dialer to c-h5nmn:m-tqqgm
2019/06/17 22:26:27 [ERROR] cluster [c-h5nmn] provisioning: Removing host [XXX.XXX.XXX.XX] from node lists
2019/06/17 22:26:27 [ERROR] cluster [c-h5nmn] provisioning: Cluster must have at least one etcd plane host: failed to connect to the following etcd host(s) [XXX.XXX.XXX.XXX]
2019/06/17 22:26:27 [ERROR] ClusterController c-h5nmn [cluster-provisioner-controller] failed with: Cluster must have at least one etcd plane host: failed to connect to the following etcd host(s) [XXX.XXX.XXX.XXX]

Verifying Deployment

If your deployment was successful, the cluster management dashboard will be updated:
Rancher cluster successful deployment
Each individual node in the cluster will also be updated with the relevant information:
Rancher cluster node status post deployment

Post Deployment

Creating A Project

Rancher introduces a component which is not native to Kubernetes: projects.
A project may contain several Kubernetes namespaces which are logically grouped together and maintained under the same RBAC policies, which enables multitenancy [7].
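
Under the hood, the namespaces owned by a project are regular Kubernetes namespaces that Rancher tags with a project identifier; a hedged way to see this from kubectl (the exact annotation name may differ between Rancher versions):

# Inspect a namespace and look for the Rancher project annotation
kubectl get namespace <namespace-name> -o yaml | grep -i projectid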

By default, Rancher creates the following projects/namespaces:
Rancher cluster Projects/Namespaces

To create a project, navigate to Projects/Namespaces:
Rancher cluster
Click Add Project:
Rancher project creation
Resource quotas allow you to limit resource utilization; refer to the documentation for more information [8].

Creating A Namespace

After creating a project, we can create a namespace; click on Create Namespace in the desired project:
Rancher cluster project page
Once a namespace is created, it’ll appear under a project:
Namespace status

Deploying A Workload

Rancher Workloads

Workloads in Rancher include:

  • Kubernetes Deployments [9] - stateless pods; pods which do not keep their storage on disruption and are simply recreated.
  • Kubernetes StatefulSets [10] - stateful pods; pods which keep their storage across disruptions.
  • Kubernetes DaemonSets [11] - pods that are scheduled on every node in the cluster.
  • Kubernetes Jobs [12] - pods which terminate themselves on successful completion.
  • Kubernetes CronJobs [13] - similar to Kubernetes Jobs but run to completion on a cron-based schedule.
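
These map directly to standard Kubernetes resource types, so anything created through the Rancher UI can also be inspected or created with kubectl against the same cluster; a short hedged sketch with placeholder names:

# Inspect workloads in a given namespace
kubectl get deployments,statefulsets,daemonsets,jobs,cronjobs -n <namespace>
# Roughly equivalent to deploying a stateless workload from the UI
kubectl create deployment my-app --image=nginx -n <namespace>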

To deploy a workload, click on the scope section in the navigation bar and select your cluster and project:
Rancher scope selection
Press Deploy:
Rancher cluster project workloads dashboard

Deploy Workload

Pick the type of workload, image, name and namespace:
Deploy Workload

Port Mapping

Expose container ports; there are several options to pick from:

  • NodePort - expose container ports on all nodes in the cluster
  • HostPort - expose container ports on the node where the container resides
  • ClusterIP - expose container ports on the cluster (internal) network
  • Layer-4 Load Balancer - expose container ports through a load balancer

Port Mapping configuration
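
For reference, NodePort and ClusterIP correspond to standard Kubernetes Service types; exposing the same workload from the command line might look like this (a hedged sketch with placeholder names):

# Expose port 80 of the workload on a port of every node (NodePort)
kubectl expose deployment my-app --type=NodePort --port=80 -n <namespace>
# Or keep it reachable only on the internal cluster network (ClusterIP)
kubectl expose deployment my-app --type=ClusterIP --port=80 -n <namespace>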

Environment Variables

Pass environment variables and secrets:
Environment Variables

Node Scheduling

Define the scheduling rules of the workload:
Node Scheduling configuration

Health Check

Define the workload’s liveness and readiness probes [14]:
Health Check configuration

Volumes

Attach volumes to the workload:
Volumes configuration

Scaling/Upgrade Policy

Define the scaling/upgrade policy of the workload:
Scaling/Upgrade Policy configuration

Verifying Workload

If your workload was successfully deployed, it will appear under Workloads:
Rancher project workloads
You should be able to connect to your pod(s) depending on your configuration.
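
The same verification can also be done from the command line if you prefer kubectl (placeholder namespace name):

# Confirm the pods behind the workload are running
kubectl get pods -n <namespace>
# If a NodePort or load balancer was configured, find the assigned port/address
kubectl get services -n <namespace>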

Deploying An Application

Rancher Application

Applications in Rancher refer to Helm charts [15].
A Helm chart is a collection of files which describe Kubernetes resources.
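
Outside of Rancher, the same charts can be handled directly with the Helm CLI; a rough, hedged equivalent of what launching an app does (repository URL, chart and release names are placeholders, and the exact install syntax depends on the Helm version in use):

# Point Helm at a chart repository (Rancher calls these catalogs)
helm repo add <catalog-name> <catalog-url>
# Render and install a chart into a namespace (Helm 3 style syntax)
helm install <release-name> <catalog-name>/<chart-name> --namespace <namespace>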

To deploy an application, click on the scope section in a navigation bar and select your cluster and project:
Rancher Scope Selection
Click on Apps in the navigation menu and press Launch:
Rancher cluster project apps dashboard

Select An Application From Catalog

Helm charts are stored in a catalog.
Rancher's default catalog contains many popular applications:
Application catalog

Deploying Application

After picking an application from the catalog, a configuration screen will appear. Every application has its own configuration:
EFK app configuration

xip.io As A Wildcard DNS Server

If you would like to have multitenancy or several applications under the same host, you may use the wildcard DNS approach.
Wildcard DNS (*.XXX.XXX.XXX.XXX) will resolve all subdomains to the XXX.XXX.XXX.XXX address; for example, both my-first-app.10.10.10.10 and my-second-app.10.10.10.10 will resolve to 10.10.10.10.

Wildcard DNS is leveraged by container management platforms to direct traffic between containers using a single domain.

xip.io is a public wildcard DNS server which will be used by default during application deployments.
Applications will receive a DNS address http://<app_name>.10.10.10.10.xip.io that resolves to 10.10.10.10.
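
The resolution can be checked with any DNS client, for example (using the address from the example above):

# Both queries return the embedded IP address, 10.10.10.10
nslookup my-first-app.10.10.10.10.xip.io
dig +short my-second-app.10.10.10.10.xip.io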

Verifying Application

If your application was successfully deployed, it will appear under Apps:
Application management

Final Notes

In this blog post we discussed how to quickly bootstrap a development Rancher environment using OpenStack infrastructure.
There are many additional features not discussed in this blog post (persistent volumes with Cinder, configuring a load balancer using Octavia and more) which will make your deployment production ready.

To know more about additional features, refer to Rancher, Kubernetes and OpenStack documentation.


  1. Rancher - Official Website ↩︎

  2. Rancher GitHub repo ↩︎

  3. codecentric - Configure your Gitlab CI with docker-machine against keystone v3 ↩︎

  4. docker-machine GitHub repo - Openstack floating IP race #4038 ↩︎

  5. rke GitHub repo ↩︎

  6. Rancher documentation - CNI Providers - CNI Features by Provider ↩︎

  7. Multitenancy - Wikipedia ↩︎

  8. Rancher documentation - Resource Quotas ↩︎

  9. Kubernetes documentation - Deployments ↩︎

  10. Kubernetes documentation - StatefulSets ↩︎

  11. Kubernetes documentation - DaemonSets ↩︎

  12. Kubernetes documentation - Jobs ↩︎

  13. Kubernetes documentation - CronJobs ↩︎

  14. Kubernetes documentation - Configure Liveness and Readiness Probes ↩︎

  15. Helm project website ↩︎