June 15, 2017 Stephane Beuret

Persistent volumes with Rancher/Kubernetes on AWS

Volume persistence in Kubernetes (and other orchestrators) is in full swing, and for good reason: Kubernetes is no longer content to be a stateless runtime environment, but is also, and increasingly often, stateful… Let’s see how, with Rancher and AWS, we can have volumes provisioned automatically.

High level architecture on AWS

For this blog I use a single group of EC2 instances, all in the same security group, which allows all traffic between instances.

I’ve bootstrapped the EC2 instances with the latest RancherOS version available on AWS, which is currently v1.0.2.



I first have to set up my rancher/server. I use a containerised instance of MariaDB on the same EC2 instance that will host rancher/server.

$ docker run -d -p 3306:3306 --restart=unless-stopped \
  -e MYSQL_ROOT_PASSWORD=cattle \
  -e MYSQL_DATABASE=cattle \
  -e MYSQL_USER=cattle \
  -e MYSQL_PASSWORD=cattle mariadb:5
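Before starting rancher/server, it is worth checking that MariaDB is actually accepting connections; a quick sketch (the throwaway client container and the `<ip>` placeholder are mine, not from the original setup):

```shell
# Run a one-off MariaDB client against the database container.
# Replace <ip> with the instance's private IP address.
docker run --rm mariadb:5 \
  mysql -h <ip> -u cattle -pcattle -e "SHOW DATABASES;"
# The output should list the "cattle" database created above.
```

If this fails, give the container a few more seconds: the MariaDB image initialises the database on first start.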

 Then rancher/server.

docker run -d --restart=unless-stopped \
  -p 8080:8080 rancher/server:stable \
  --db-host <ip> --db-port 3306 \
  --db-user cattle --db-pass cattle --db-name cattle
Where <ip> is my instance’s private IP address.
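On EC2 you don’t have to look the private IP up by hand; it can be fetched from the instance metadata service, which is available on every instance. A sketch:

```shell
# Query the EC2 instance metadata service for this instance's private IP
PRIVATE_IP=$(curl -s
echo "$PRIVATE_IP"
```

The same value could then be substituted into the rancher/server command above.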

Setup Kubernetes environment

Once rancher/server bootstrap is finished, it’s time to use the Web UI. Use first the Environment tab.

I will add a custom template, based on the Kubernetes one, so use Add Template button.

Then use Edit Config button, and change Cloud Provider field from Rancher to aws.

At the bottom, click on Configure button, and finally, at the bottom of Add Template page, click on Create.

With this template, I will then create a Kubernetes environment; use Add Environment button and choose the custom template*.

* As I write these lines, I realize that there is a bug with the custom template, and that it is better to modify the generic Kubernetes template directly, changing its cloud provider.

You can then switch to this environment, and disable the Default.

For now, the environment is Unhealthy, but don’t worry about that; once you add hosts, it will recover to a healthy state.

Adding hosts

To add hosts, go to Infrastructure tab, and click on add hosts.

You first need to validate your host registration URL; I’ll use the internal IP address instead of the external one, because of my firewall rules.

Custom machine driver is fine, so just copy the command in step 5…

…and paste it to the hosts.

A few seconds later, you can see your hosts registering in the Web UI.

Wait a couple of minutes to see your environment ready.

Allow your hosts to claim volumes

The goal here is that the hosts themselves can claim their volumes, without having to create them manually. To do this, hosts must be given permission to do so. So you have to create a policy in IAM, assign it to a role, and give that role to the hosts.

In IAM, create a custom policy; I’ve called mine Kubernetes.

You can use the following policy (provided by Rancher Labs), or create your own.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:Describe*",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "ec2:AttachVolume",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "ec2:DetachVolume",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ec2:*"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "elasticloadbalancing:*"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}

It allows hosts to attach and detach volumes, as well as to manage load balancers. Then create a Role, and attach the policy to the Role.

Last step: attach the role to the EC2 instances.
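The same policy/role/instance-profile wiring can also be scripted with the AWS CLI; a sketch, assuming the policy JSON above is saved as kubernetes-policy.json (the names KubernetesHost, `<account-id>` and `<instance-id>` are illustrative placeholders):

```shell
# Create the policy from the JSON document above
aws iam create-policy --policy-name Kubernetes \
  --policy-document file://kubernetes-policy.json

# Create a role that EC2 instances are allowed to assume
aws iam create-role --role-name KubernetesHost \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "ec2.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'

# Attach the policy to the role (replace <account-id>)
aws iam attach-role-policy --role-name KubernetesHost \
  --policy-arn arn:aws:iam::<account-id>:policy/Kubernetes

# EC2 instances receive roles through an instance profile
aws iam create-instance-profile --instance-profile-name KubernetesHost
aws iam add-role-to-instance-profile --instance-profile-name KubernetesHost \
  --role-name KubernetesHost

# Associate the profile with a running instance (replace <instance-id>)
aws ec2 associate-iam-instance-profile --instance-id <instance-id> \
  --iam-instance-profile Name=KubernetesHost
```

Doing it in the AWS console, as described above, achieves the same result.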

Try it!

It’s now time to check whether volumes can be automatically claimed in Kubernetes. First check that your environment is now ready.

Looks fine, so let’s have a look at the Kubernetes dashboard.

It’s ready. I’ll use helm to launch a MySQL pod, which claims persistent storage.

# Run kubectl commands inside here
# e.g. kubectl get rc
> helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
> helm install stable/mysql
NAME:   voting-woodpecker
LAST DEPLOYED: Wed Jun 14 16:44:11 2017
NAMESPACE: default

==> v1/Secret
NAME                      TYPE      DATA      AGE
voting-woodpecker-mysql   Opaque    2         0s

==> v1/Service
NAME                      CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
voting-woodpecker-mysql   <cluster-ip>    <none>        3306/TCP   0s

==> extensions/Deployment
voting-woodpecker-mysql   1         1         1         0         0s

==> v1/PersistentVolumeClaim
voting-woodpecker-mysql   Pending   0s

MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
voting-woodpecker-mysql.default.svc.cluster.local

To get your root password run:

    kubectl get secret --namespace default voting-woodpecker-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo

To connect to your database:

1. Run an Ubuntu pod that you can use as a client:

    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

2. Install the mysql client:

    $ apt-get update && apt-get install mysql-client -y

3. Connect using the mysql cli, then provide your password:

    $ mysql -h voting-woodpecker-mysql -p

> kubectl get pods
NAME                                      READY     STATUS    RESTARTS   AGE
voting-woodpecker-mysql-422905634-8mhsl   1/1       Running   0          1m
> kubectl get pvc
NAME                      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
voting-woodpecker-mysql   Bound     pvc-c2d24add-514a-11e7-bf91-02d33055a22a   8Gi        RWO           1m
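To see dynamic provisioning at work without helm, you can also create a claim by hand; a minimal sketch, assuming the cluster has a default StorageClass backed by the AWS EBS provisioner (the claim name test-claim is mine):

```shell
# Create a 1Gi claim; the AWS cloud provider should back it with an EBS volume
kubectl create -f - <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

# Watch the claim go from Pending to Bound
kubectl get pvc test-claim
```

If no default StorageClass exists, set spec.storageClassName in the claim to point at one explicitly.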
The claim is also visible in the Kubernetes UI.

