January 30, 2017 Stephane Beuret

Hands on Kubernetes with kubeadm

Passionate about Docker? Curious to learn more about Kubernetes, the Google orchestrator? Kubeadm is the solution! Kubeadm is a new tool to painlessly deploy Kubernetes on CentOS 7, Ubuntu 16.04 or even Hypriot v1.0.1 on a RPi! To learn more about kubeadm: http://kubernetes.io/docs/getting-started-guides/kubeadm/

In this technical blog, I’ll show you how to:

  • Create four EC2 instances on AWS with CentOS 7.2
  • Install Docker on CentOS with LVM devicemapper
  • Install Kubernetes and kubeadm on each instance
  • Run your master and join it with the minions
  • Launch your first nginx Pod
  • Create your first Replication Controller
  • Expose your Replication Controller with a Load Balancer Service

So, let’s first create four ec2 instances.

Select your image, let’s choose CentOS for example even if running a clean Docker is a bit more complicated than under Ubuntu.

Choose your instance type. In this demo I’ve chosen t2.medium, which is good enough for kubadm itself but on a large environment, Kubernetes Cluster instances should need more cpu and RAM ; so select your instance type accordingly.

Create four instances, lab environment perfectly fits with this blog.

Add an additional disk for devicemapper. Even if it’s just a demo, I dislike running Docker on a loopback filesystem. So I need a dedicated disk for my lvm device-mapper.

Click on Review an Launch and a couple of seconds after instances are already running, great!
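If you prefer the AWS CLI to the console, a rough equivalent of these steps would look like this (the AMI ID, key pair name and security group below are placeholders for your own values):
$ aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --count 4 \
    --instance-type t2.medium \
    --key-name data-essential-demo \
    --security-group-ids sg-xxxxxxxx \
    --block-device-mappings '[{"DeviceName":"/dev/sdb","Ebs":{"VolumeSize":20,"DeleteOnTermination":true}}]'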

To access your instances more conveniently, create a config file under .ssh/ in your home directory. Permissions must be 700 on .ssh and 600 on .ssh/config.
Host blog-k8s-ma
  HostName ec2-52-213-95-220.eu-west-1.compute.amazonaws.com
  User centos
  IdentityFile ~/.ssh/data-essential-demo.pem
Host blog-k8s-s1
  HostName ec2-52-213-33-234.eu-west-1.compute.amazonaws.com
  User centos
  IdentityFile ~/.ssh/data-essential-demo.pem
Host blog-k8s-s2
  HostName ec2-52-213-86-70.eu-west-1.compute.amazonaws.com
  User centos
  IdentityFile ~/.ssh/data-essential-demo.pem
Host blog-k8s-s3
  HostName ec2-52-213-93-214.eu-west-1.compute.amazonaws.com
  User centos
  IdentityFile ~/.ssh/data-essential-demo.pem
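If the permissions mentioned above are not set yet, fix them quickly:
$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/config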
Now you can access your instances simply with ssh blog-k8s-xx:
$ ssh blog-k8s-ma
-bash: warning: setlocale: LC_CTYPE: cannot change locale (UTF-8): No such file or directory
To avoid unsightly messages like this one, edit /etc/environment on your instance:
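A minimal /etc/environment that silences the warning could look like this (assuming the en_US.UTF-8 locale suits you):
LANG=en_US.utf-8
LC_ALL=en_US.utf-8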
Now it’s time to install docker
The development team @docker works on Ubuntu, which is why it is far easier to install Docker on Ubuntu. Personally, I find Ubuntu requires too much handwork, so I prefer to blog about CentOS. For all the details of this procedure, please refer to https://docs.docker.com/engine/installation/linux/centos/.
First, update the instances; it's always the first step to perform.
$ sudo yum update -y
$ sudo reboot
Add the Docker repo to use the docker-engine package instead of the docker package from the EPEL repo, which is not up to date (the repo definition below is the one from the CentOS instructions on docs.docker.com):
$ sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
Then install the docker-engine package with a simple yum install:
$ sudo yum install docker-engine -y
It’s now time to configure device-mapper
# fdisk -l

Disk /dev/xvda: 21.5 GB, 21474836480 bytes, 41943040 sectors
 Units = sectors of 1 * 512 = 512 bytes
 Sector size (logical/physical): 512 bytes / 512 bytes
 I/O size (minimum/optimal): 512 bytes / 512 bytes
 Disk label type: dos
 Disk identifier: 0x000123f5

Device Boot Start End Blocks Id System
 /dev/xvda1 * 2048 41929649 20963801 83 Linux

Disk /dev/xvdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
 Units = sectors of 1 * 512 = 512 bytes
 Sector size (logical/physical): 512 bytes / 512 bytes
 I/O size (minimum/optimal): 512 bytes / 512 bytes
 # yum install lvm2 -y
 # pvcreate /dev/xvdb
 Physical volume "/dev/xvdb" successfully created
 # vgcreate docker /dev/xvdb
 Volume group "docker" successfully created
 # lvcreate --wipesignatures y -n thinpool docker -l 95%VG
 Logical volume "thinpool" created.
 # lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG
 Logical volume "thinpoolmeta" created.
 # lvconvert -y --zero n -c 512K --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta
 WARNING: Converting logical volume docker/thinpool and docker/thinpoolmeta to pool's data and metadata volumes.
 Converted docker/thinpool to thin pool.
 # mkdir /etc/systemd/system/docker.service.d
 # tee /etc/systemd/system/docker.service.d/docker-thinpool.conf <<-'EOF'
 [Service]
 ExecStart=
 ExecStart=/usr/bin/dockerd --storage-driver=devicemapper --storage-opt=dm.thinpooldev=/dev/mapper/docker-thinpool \
 --storage-opt=dm.use_deferred_removal=true --storage-opt=dm.use_deferred_deletion=true
 EOF
 # systemctl daemon-reload
 # systemctl start docker
 # docker info
 Storage Driver: devicemapper
 Pool Name: docker-thinpool
 Pool Blocksize: 524.3 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: xfs
 Data file:
 Metadata file:
 Data Space Used: 19.92 MB
 Data Space Total: 20.4 GB
 Data Space Available: 20.38 GB
 Metadata Space Used: 61.44 kB
 Metadata Space Total: 213.9 MB
 Metadata Space Available: 213.8 MB
 Thin Pool Minimum Free Space: 2.039 GB
 # docker ps
Finally install kubeadm
To finish this long installation procedure, let's install kubeadm. It's part of the Kubernetes repository (the repo definition below is the one from the kubeadm getting-started guide). Notice along the way a cool sed command that replaces, in the line beginning with SELINUX, enforcing with disabled; very useful when, like me, you dislike opening vi at every turn.
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# sed -i '/^SELINUX./ { s/enforcing/disabled/; }' /etc/selinux/config
# setenforce 0
# yum install -y kubelet kubeadm kubectl kubernetes-cni
# systemctl enable docker
# systemctl enable kubelet && systemctl start kubelet
Initializing your master
Well done, it’s time to setup your hosts to run kubernetes.
It could be convenient to have a “true” hostname instead of something like ip-192-168-0-177:
# echo "blog-k8s-ma.data-essential.com" > /etc/hostname
# sed -i s'/localhost l/blog-k8s-ma.data-essential.com localhost l/' /etc/hosts
# echo "HOSTNAME=blog-k8s-ma.data-essential.com" >> /etc/sysconfig/network
# echo "preserve_hostname: true" >> /etc/cloud/cloud.cfg
You may be wondering how easy it is: just run kubeadm init on the master node. Here I add --pod-network-cidr=10.244.0.0/16 (flannel's default pod network) because it's a prerequisite to run flannel, but there are other network drivers available for Kubernetes (Weave, Calico…)
# kubeadm init --pod-network-cidr=10.244.0.0/16
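For reference, this CIDR has to match the Network value defined in flannel's kube-flannel.yml manifest (applied further below); the relevant ConfigMap excerpt looks roughly like this:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }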

A word about Kubernetes core services. A few base services run on each role:

Master node

  • etcd: the key-value store from CoreOS
  • api server: the Kubernetes entry point for external and internal services
  • scheduler: chooses on which minion a Pod runs, depending on its resources
  • controller manager: creates, updates and destroys the resources it manages

Worker node (minion)

  • kubelet: verifies that the minion is healthy and performs health checks on Pods
  • kube-proxy: acts as a proxy and a load balancer; it manages network traffic

There is no network service in this list because, even though networking is mandatory, it's an add-on: http://kubernetes.io/docs/admin/addons/. This means you can choose your network driver among:

  • flannel
  • calico
  • canal
  • romana
  • weave net
After a couple of minutes, the cluster is set up and gives you the following message:
Kubernetes master initialised successfully!

You can now join any number of machines by running the following on each node:
kubeadm join --token=d562d2.bf3721e0655d4f12
Let’s have a look on our running containers (for now, we only have system pods, so use --all-namespaces or --namespace kube-system on your command line to avoid an empty answer)
# kubectl get pods --all-namespaces
NAMESPACE     NAME                                                     READY     STATUS              RESTARTS   AGE
kube-system   dummy-2088944543-tit44                                   1/1	Running             0          2m
kube-system   etcd-blog-k8s-ma.data-essential.com                      1/1	Running             0          1m
kube-system   kube-apiserver-blog-k8s-ma.data-essential.com            1/1	Running             0          3m
kube-system   kube-controller-manager-blog-k8s-ma.data-essential.com   1/1	Running             0          3m
kube-system   kube-discovery-1150918428-hixmp                          1/1	Running             0          2m
kube-system   kube-dns-654381707-8tb6w                                 0/3	ContainerCreating   0          2m
kube-system   kube-proxy-flvft                                         1/1	Running             0          2m
kube-system   kube-scheduler-blog-k8s-ma.data-essential.com            1/1	Running             
Nice! Don’t worry about the DNS pod, he is waiting for the network. So it’s time get the network pod. For using flannel, do:
# curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kubectl apply -f kube-flannel.yml
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
Now the network should be running, as well as DNS:
# kubectl get pods --all-namespaces
NAMESPACE     NAME                                                     READY     STATUS    RESTARTS   AGE
kube-system   dummy-2088944543-tit44                                   1/1	Running   0          7m
kube-system   etcd-blog-k8s-ma.data-essential.com                      1/1	Running   0          6m
kube-system   kube-apiserver-blog-k8s-ma.data-essential.com            1/1	Running   0          7m
kube-system   kube-controller-manager-blog-k8s-ma.data-essential.com   1/1	Running   0          7m
kube-system   kube-discovery-1150918428-hixmp                          1/1	Running   0          7m
kube-system   kube-dns-654381707-8tb6w                                 3/3	Running   0          6m
kube-system   kube-flannel-ds-inzwb                                    2/2	Running   0          1m
kube-system   kube-proxy-flvft                                         1/1	Running   0          6m
kube-system   kube-scheduler-blog-k8s-ma.data-essential.com            1/1	Running   0          6m
Perfect, the minions can now join: use the command provided by the master node on each worker node.
# kubeadm join --token=d562d2.bf3721e0655d4f12

Node join complete:

* Certificate signing request sent to master and response received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
Verify on the master node that the workers have successfully joined:
# kubectl get nodes
NAME                             STATUS    AGE
blog-k8s-ma.data-essential.com   Ready     45m
blog-k8s-s1.data-essential.com   Ready     1m
blog-k8s-s2.data-essential.com   Ready     54s
blog-k8s-s3.data-essential.com   Ready     46s
Our cluster is now ready to use! OK, we came all this way to finally:
Create the first pod
I have written a YAML file with all the information necessary to create an nginx pod. These fields are mandatory in your YAML: name, image, pull policy and so on. This is a simple basic template.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    apps: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
Now I will use my template to run my first pod and make sure everything runs as expected.
# kubectl create -f nginx-pod.yml
pod "nginx" created
# kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
nginx     1/1       Running   0          16s
# kubectl get pods -o wide
NAME      READY     STATUS    RESTARTS   AGE       IP           NODE
nginx     1/1       Running   0          32s   blog-k8s-s3.data-essential.com
# kubectl describe pods nginx
Name:       nginx
Namespace:  default
Node:       blog-k8s-s3.data-essential.com/
Start Time: Wed, 30 Nov 2016 20:22:11 +0000
Labels:     apps=nginx
Status:     Running
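If you want to double-check that nginx really answers, one quick way (a sketch; run it wherever kubectl is configured, i.e. the master) is to forward a local port to the pod and curl it:
# kubectl port-forward nginx 8080:80 &
# curl -I http://127.0.0.1:8080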
Fine, the pod runs well, but running a single pod is not the goal of a Kubernetes cluster. I would much rather have a highly available nginx service than a pod that may fail at any time. True, if the container fails the kubelet will restart it, but meanwhile I lose my service… Let's delete the pod.
# kubectl delete -f nginx-pod.yml
pod "nginx" deleted
# kubectl get pods
Let’s create a new yaml file for ReplicationController. A ReplicationController ensures that a specified number of pod “replicas” are running at any one time.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    apps: nginx
  template:
    metadata:
      name: nginx
      labels:
        apps: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
Once the file is written, let's use it to create the rc:
# kubectl create -f nginx-rs.yml
replicationcontroller "nginx" created
# kubectl get rc
NAME      DESIRED   CURRENT   READY     AGE
nginx     3         3         3         32s
# kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
nginx-2vaeu   1/1       Running   0          18s
nginx-cnupw   1/1       Running   0          18s
nginx-wyz4z   1/1       Running   0          18s
# kubectl get pods -o wide
NAME          READY     STATUS    RESTARTS   AGE       IP           NODE
nginx-2vaeu   1/1	Running   0          50s   blog-k8s-s2.data-essential.com
nginx-cnupw   1/1	Running   0          50s   blog-k8s-s1.data-essential.com
nginx-wyz4z   1/1	Running   0          50s   blog-k8s-s3.data-essential.com
Now we have 3 pods nicely distributed over the worker nodes. Finally, I will create the LoadBalancer Service, which will be the entry point for our nginx service by "exposing" the ReplicationController (that is, setting up a public port). First, I'll create the YAML file:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    apps: nginx
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    apps: nginx
  type: LoadBalancer
# kubectl create -f nginx-svc.yml
service "nginx" created
# kubectl describe svc nginx
Name:           nginx
Namespace:      default
Labels:         apps=nginx
Selector:       apps=nginx
Type:           LoadBalancer
Port:           80/TCP
NodePort:       32314/TCP
Session Affinity:   None
As the describe command shows us, we have a public IP for accessing the nginx service on port 80, and the load balancer will dispatch the traffic to a specific worker on its private IP (e.g. on port 32314).
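You can verify it with a simple curl against the NodePort on any worker, for example the public DNS name of blog-k8s-s3 (provided your security group opens that port):
$ curl -I http://ec2-52-213-93-214.eu-west-1.compute.amazonaws.com:32314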