Red Hat OpenStack Platform 13, Rancher 2.2.4, and Ubuntu 16.04 guests are used in this blog post.
This blog post describes a deployment that is not suitable for production; use it only for development purposes.
Introduction To Rancher
“One Platform for Kubernetes Management”
Rancher can be deployed on premises, on bare metal, and on various cloud providers.
Rancher is to container orchestration what OpenStack is to infrastructure: an open platform for running workloads across multiple platforms and architectures.
While OpenStack manages the various parts of infrastructure (storage, networking, compute and many other services), it is most commonly paired with Kubernetes to leverage container orchestration.
Leveraging both products creates an open environment which is dynamic, scalable and robust.
OpenStack Preparation Before Deployment
Before deploying Rancher, make sure that all credentials and endpoints are accessible and that instances can be created on the OpenStack cloud.
Infrastructure details that are used during Rancher deployment (bold is a must):
- Authentication URL
- Project(Tenant) Name/ID
- Domain Name/ID (if using identity v3)
- Region Name
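If you prefer working from a terminal, the same details can be exported as environment variables before using the OpenStack CLI. A minimal sketch with placeholder values (the URL, project, and domain below are assumptions; substitute your own cloud's details):

```shell
# Hypothetical values -- substitute your own cloud's details.
export OS_AUTH_URL="https://openstack.example.com:13000/v3"
export OS_PROJECT_NAME="rhosnfv"
export OS_USER_DOMAIN_NAME="Default"
export OS_REGION_NAME="regionOne"
export OS_IDENTITY_API_VERSION="3"

# If python-openstackclient is installed, verify the credentials work:
if command -v openstack >/dev/null 2>&1; then
  openstack token issue || echo "Authentication failed -- check the values above"
fi
```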
If you are unsure where to get these details, you can download them from Horizon's web interface (the OpenStack Dashboard):
This will result in the following file being downloaded:
```yaml
# This is a clouds.yaml file, which can be used by OpenStack tools as a source
# of configuration on how to connect to a cloud. If this is your only cloud,
# just put this file in ~/.config/openstack/clouds.yaml and tools like
# python-openstackclient will just work with no further config. (You will need
# to add your password to the auth section)
# If you have more than one cloud account, add the cloud entry to the clouds
# section of your existing file and you can refer to them by name with
# OS_CLOUD=openstack or --os-cloud=openstack
clouds:
  openstack:
    auth:
      auth_url: https://...:13000/v3
      username: "vkhitrin"
      project_id: 19eb5624e491407d810f8813aaa48720
      project_name: "rhosnfv"
      user_domain_name: "..."
    region_name: "regionOne"
    interface: "public"
    identity_api_version: 3
```
Instances use the following resources during Rancher deployment (bold is a must):
- Network (and a floating IP, if necessary)
- Userdata (cloud-init script)
- Availability Zone
- Config Drive
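As an illustration of the userdata item above, a minimal cloud-init sketch (assuming the nodes need Docker preinstalled; the package name is for Ubuntu 16.04):

```yaml
#cloud-config
# Illustrative userdata only -- adjust packages to your image.
package_update: true
packages:
  - docker.io
runcmd:
  - systemctl enable --now docker
```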
Once everything is configured and verified with OpenStack, we can proceed to set up a Rancher server to deploy a Rancher cluster.
Setting Up Rancher Server
Rancher server is a container with a web interface able to orchestrate and manage various Rancher clusters.
It is a standalone component that can be installed anywhere.
To deploy a Rancher server, execute the following command from a docker host:
```shell
sudo docker run --name rancher-server -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:stable
```
This will deploy a Rancher server from the stable branch, which can be accessed via the Docker host's IP or DNS name (ports 80 and 443).
Once Rancher server is deployed, some initial configuration is required.
Set Rancher URL
Set a DNS name/IP address that will be accessible by all the nodes created during cluster creation:
Set Admin User
Create a password for the default admin user:
Afterwards, we will be redirected to Rancher’s dashboard:
Enable OpenStack As Infrastructure Provider
Before proceeding with the cluster creation, an OpenStack node driver has to be enabled in Rancher since it is turned off by default.
On the top navigation bar, navigate to Tools > Drivers:
Select OpenStack from the list of node drivers and press Activate.
The OpenStack node driver should be enabled now:
Navigate back to Clusters and press Add Cluster; an OpenStack option should now appear in the 'From nodes in an infrastructure provider' section:
Deploying Rancher Cluster On OpenStack Infrastructure
During deployment, Rancher will create several guest instances on OpenStack that contain Kubernetes components based on the role assigned to the servers.
A Rancher cluster contains nodes with the following Kubernetes roles:
- etcd - Kubernetes nodes running the etcd container act as the single source of truth in the cluster, storing Kubernetes' state. A single etcd node is enough, but for High Availability deployments a quorum must be present (3, 5, and so on).
- control plane - Kubernetes nodes running stateless components like APIs, schedulers and controllers.
- worker - Kubernetes nodes used for running workloads/pods.
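The etcd quorum rule mentioned above is a simple majority: for n nodes, floor(n/2) + 1 must be reachable, which is why odd node counts are used. A quick sketch:

```shell
# etcd needs a majority (quorum) of nodes to agree: floor(n/2) + 1
quorum() { echo $(( $1 / 2 + 1 )); }

for n in 1 3 5; do
  echo "$n node(s) -> quorum of $(quorum "$n")"
done
# 1 node(s) -> quorum of 1
# 3 node(s) -> quorum of 2
# 5 node(s) -> quorum of 3
```

Note that 2 nodes are no better than 1: losing either node loses quorum.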
Deploying a Rancher cluster
Go back to the Add Cluster tab.
Set Cluster Name + Description
Set a name to your cluster and add a description if you desire:
Add Members to Cluster
The admin user is part of the Default Admin group.
If you would like to add additional users during deployment, create them and add them in this section.
Additional users may be added post deployment.
Select the required roles, node count and additional node pools according to your desired topology:
Pressing Add Node Template will open a template containing all the configuration needed to communicate with OpenStack:
Fill the template accordingly.
There are also additional settings to configure Docker which we will not discuss in this blog post.
Workarounds Used During Deployment
Authenticating With Identity v3
If you have issues with docker-machine using identity v2 when your OpenStack cloud only supports identity v3, the following workaround may be helpful:
- Log into the rancher-server container:

```shell
sudo docker exec -it rancher-server bash
```

- Add the following environment variable to ~/.bashrc so that docker-machine does not fall back to identity v2:

```shell
echo 'export OS_IDENTITY_API_VERSION="3"' >> ~/.bashrc
```
Floating IP Provisioning Not Consistent
Floating IP assignment is not consistent because of a race condition in docker-machine, so I connected my external network directly to the instances instead.
Failed To Provision RHEL Guest Image
RHEL-based instances failed to provision even though they are supported by Rancher (I have previously provisioned a Kubernetes cluster using rke on RHEL instances). As a workaround, I used Ubuntu-based instances.
Additional Cluster Configuration
Most notable is Network Provider, which determines the CNI (Container Network Interface) that will be used in the Kubernetes cluster:
Currently Rancher supports 4 options: Canal (default), Calico, Flannel and Weave.
A high level overview comparison between the various CNIs can be found in Rancher’s CNI Provider documentation .
Following Deployment Process
Rancher Cluster Provisioning
After initiating a Rancher cluster deployment, you'll be redirected to the clusters dashboard:
The cluster should be in the Provisioning state:
The OpenStack Rancher node driver uses docker-machine to create the instances on your OpenStack cloud:
The output of docker-machine is logged and can be viewed in the container logs via docker logs rancher-server:
```text
2019/06/15 19:03:44 [INFO] stdout: (ranch-infra-node3) Creating machine...
2019/06/15 19:03:44 [INFO] (ranch-infra-node3) Creating machine...
2019/06/15 19:03:58 [INFO] stdout: Waiting for machine to be running, this may take a few minutes...
2019/06/15 19:03:58 [INFO] Waiting for machine to be running, this may take a few minutes...
2019/06/15 19:03:58 [INFO] stdout: Detecting operating system of created instance...
2019/06/15 19:03:58 [INFO] Detecting operating system of created instance...
2019/06/15 19:03:58 [INFO] stdout: Waiting for SSH to be available...
2019/06/15 19:03:58 [INFO] Waiting for SSH to be available...
2019/06/15 19:03:58 [INFO] stdout: Waiting for machine to be running, this may take a few minutes...
2019/06/15 19:03:58 [INFO] Waiting for machine to be running, this may take a few minutes...
2019/06/15 19:03:59 [INFO] stdout: Detecting operating system of created instance...
2019/06/15 19:03:59 [INFO] Detecting operating system of created instance...
2019/06/15 19:03:59 [INFO] stdout: Waiting for SSH to be available...
2019/06/15 19:03:59 [INFO] Waiting for SSH to be available...
2019/06/15 19:03:59 [INFO] stdout: Waiting for machine to be running, this may take a few minutes...
2019/06/15 19:03:59 [INFO] Waiting for machine to be running, this may take a few minutes...
2019/06/15 19:04:00 [INFO] stdout: Detecting operating system of created instance...
2019/06/15 19:04:00 [INFO] Detecting operating system of created instance...
2019/06/15 19:04:00 [INFO] stdout: Waiting for SSH to be available...
2019/06/15 19:04:00 [INFO] Waiting for SSH to be available...
```
Rancher Cluster Nodes Provisioning
Clicking on the cluster name will open the cluster management dashboard; click on Nodes in the navigation bar to track node status:
Clicking on a node will open the node's management dashboard, which shows the current operation/state:
If any errors occur, they’ll be briefly shown in the cluster management dashboard:
More detailed errors are logged in the rancher-server container; use docker logs rancher-server to view deployment logs:
```text
2019/06/17 22:26:27 [INFO] cluster [c-h5nmn] provisioning: Building Kubernetes cluster
2019/06/17 22:26:27 [INFO] cluster [c-h5nmn] provisioning: [dialer] Setup tunnel for host [XXX.XXX.XXX.XXX]
2019/06/17 22:26:27 [ERROR] cluster [c-h5nmn] provisioning: Failed to set up SSH tunneling for host [XXX.XXX.XXX.XXX]: Can't establish dialer connection: can not build dialer to c-h5nmn:m-tqqgm
2019/06/17 22:26:27 [ERROR] cluster [c-h5nmn] provisioning: Removing host [XXX.XXX.XXX.XX] from node lists
2019/06/17 22:26:27 [ERROR] cluster [c-h5nmn] provisioning: Cluster must have at least one etcd plane host: failed to connect to the following etcd host(s) [XXX.XXX.XXX.XXX]
2019/06/17 22:26:27 [ERROR] ClusterController c-h5nmn [cluster-provisioner-controller] failed with: Cluster must have at least one etcd plane host: failed to connect to the following etcd host(s) [XXX.XXX.XXX.XXX]
```
If your deployment was successful, the cluster management dashboard will be updated:
Each individual node in the cluster will also be updated with the relevant information:
Creating A Project
Rancher introduces a component which is not native to Kubernetes: projects.
A project may contain several Kubernetes namespaces that are logically grouped together and maintained using the same RBAC policies, enabling multitenancy.
By default, Rancher creates the following projects/namespaces:
To create a project, navigate to Projects/Namespaces and press Add Project:
Resource quotas allow you to limit resource utilization; refer to the documentation for more info.
Creating A Namespace
After creating a project, we can create a namespace. Click Create Namespace in the desired project:
Once a namespace is created, it’ll appear under a project:
Deploying A Workload
Workloads in Rancher include:
- Kubernetes Deployments - stateless pods; pods which do not keep their storage on disruption and are recreated.
- Kubernetes StatefulSets - stateful pods; pods which keep their storage on disruption.
- Kubernetes DaemonSets - pods that are scheduled on every node in the cluster.
- Kubernetes Jobs - pods which terminate themselves on successful completion.
- Kubernetes CronJobs - similar to Kubernetes Jobs, but run to completion on a cron-based schedule.
To deploy a workload, click on the scope section in a navigation bar and select your cluster and project:
Pick the type of workload, image, name and namespace:
Expose container ports; there are several options to pick from:
- NodePort - expose container ports on all nodes in the cluster
- HostPort - expose container ports on the node where the container resides
- ClusterIP - expose container ports on the cluster network (internal)
- Layer-4 Load Balancer - expose container ports through a load balancer
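Under the hood, these options map to standard Kubernetes Service types. As an illustration, a NodePort Service manifest roughly equivalent to the first option (the names and ports below are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # hypothetical workload name
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80            # port inside the cluster network
      targetPort: 8080    # container port
      nodePort: 30080     # exposed on every node (30000-32767 by default)
```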
Pass environment variables and secrets:
Define the scheduling rules of the workload:
Define the workload's liveness and readiness probes:
Attach volumes to the workload:
Define the scaling/update policy of the workload:
If your workload was successfully deployed, it will appear under the Workloads tab:
You should be able to connect to your pod(s) depending on your configuration.
Deploying An Application
Applications in Rancher refer to Helm charts.
Helm charts are collections of files that describe Kubernetes resources.
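For reference, a typical Helm chart is laid out roughly like this (file names are illustrative):

```text
mychart/
  Chart.yaml        # chart metadata (name, version)
  values.yaml       # default configuration values
  templates/        # templated Kubernetes manifests
    deployment.yaml
    service.yaml
```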
To deploy an application, click on the scope section in the navigation bar and select your cluster and project, then click Apps in the navigation menu and press Launch:
Select An Application From Catalog
Helm charts are stored in a catalog.
Rancher's default catalog contains many popular applications:
After picking an application from the catalog, a configuration screen will appear. Every application has its own configuration:
xip.io As A Wildcard DNS Server
If you would like to have multitenancy or several applications under the same host, you may use the wildcard DNS approach.
Wildcard DNS (*.XXX.XXX.XXX.XXX) will resolve all subdomains to the XXX.XXX.XXX.XXX address; for example, both my-first-app.10.10.10.10 and my-second-app.10.10.10.10 will resolve to 10.10.10.10.
Wildcard DNS is leveraged by container management platforms to direct traffic between containers using a single domain.
xip.io is a public wildcard DNS server which will be used by default during application deployments.
Applications will receive a DNS address of the form http://<app_name>.10.10.10.10.xip.io that is resolved to 10.10.10.10.
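The address is embedded in the queried name itself; xip.io simply answers with the IPv4 address it finds there, ignoring any prefix. A small sketch of that convention (the app name is hypothetical):

```shell
# xip.io answers with the IPv4 address embedded in the queried name,
# ignoring everything before it (app name is hypothetical).
name="my-app.10.10.10.10.xip.io"
ip=$(echo "$name" | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}')
echo "$ip"   # 10.10.10.10
```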
If your application was successfully deployed, it will appear under the Apps tab:
In this blog post we discussed how to quickstart a development Rancher environment using OpenStack infrastructure.
There are many additional features not discussed in this blog post (persistent volumes with Cinder, configuring a load balancer using Octavia, and more) which will help make your deployment production ready.
To know more about additional features, refer to Rancher, Kubernetes and OpenStack documentation.