January 30, 2017 Stephane Beuret

Hands on Kubernetes with kubeadm

Passionate about Docker? Curious to learn more about Kubernetes, the Google orchestrator? Kubeadm is the solution! Kubeadm is a new tool to deploy Kubernetes painlessly on CentOS 7, Ubuntu 16.04, or even Hypriot v1.0.1 on a Raspberry Pi! To learn more about kubeadm: http://kubernetes.io/docs/getting-started-guides/kubeadm/

In this technical blog, I’ll show you how to:

  • Create four EC2 instances on AWS with CentOS 7.2
  • Install Docker on CentOS with LVM devicemapper
  • Install Kubernetes and kubeadm on each instance
  • Run your master and join it with the minions
  • Launch your first nginx Pod
  • Create your first Replication Controller
  • Expose your Replication Controller with a Load Balancer Service

So, let’s first create four EC2 instances.

Select your image; let’s choose CentOS, for example, even if running a clean Docker setup is a bit more complicated than on Ubuntu.

Choose your instance type. In this demo I’ve chosen t2.medium, which is good enough for kubeadm itself, but in a larger environment Kubernetes cluster instances will need more CPU and RAM, so select your instance type accordingly.

Create four instances; a lab environment is a perfect fit for this blog.

Add an additional disk for devicemapper. Even if it’s just a demo, I dislike running Docker on a loopback filesystem, so I need a dedicated disk for my LVM devicemapper.

Click on Review and Launch, and a couple of seconds later the instances are already running. Great!
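If you prefer the command line to the console, the AWS CLI can launch the same setup. This is only a sketch: the AMI ID and security group below are placeholders to replace with your own values, the key pair matches the .pem used later, and /dev/xvdb is the extra 20 GB volume that will back the devicemapper thin pool.
$ aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type t2.medium \
    --count 4 \
    --key-name data-essential-demo \
    --security-group-ids sg-xxxxxxxx \
    --block-device-mappings '[{"DeviceName":"/dev/xvdb","Ebs":{"VolumeSize":20}}]'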

Tip
To access your instances more efficiently, create a config file under .ssh/ in your home directory. Permissions must be 700 for .ssh and 600 for .ssh/config.
Host blog-k8s-ma
  HostName ec2-52-213-95-220.eu-west-1.compute.amazonaws.com
  User centos
  IdentityFile ~/.ssh/data-essential-demo.pem
Host blog-k8s-s1
  HostName ec2-52-213-33-234.eu-west-1.compute.amazonaws.com
  User centos
  IdentityFile ~/.ssh/data-essential-demo.pem
Host blog-k8s-s2
  HostName ec2-52-213-86-70.eu-west-1.compute.amazonaws.com
  User centos
  IdentityFile ~/.ssh/data-essential-demo.pem
Host blog-k8s-s3
  HostName ec2-52-213-93-214.eu-west-1.compute.amazonaws.com
  User centos
  IdentityFile ~/.ssh/data-essential-demo.pem
Now you can access your instances simply with ssh blog-k8s-xx:
$ ssh blog-k8s-ma
-bash: warning: setlocale: LC_CTYPE: cannot change locale (UTF-8): No such file or directory
$
To avoid unsightly messages like -bash: warning: setlocale: LC_CTYPE: cannot change locale (UTF-8): No such file or directory, edit /etc/environment on your instance:
LANG=en_US.utf-8
LC_ALL=en_US.utf-8
Now it’s time to install Docker
The development team at Docker works on Ubuntu, which is why it is far easier to install Docker on Ubuntu. Personally, I find Ubuntu requires too much handwork, which is why I prefer to blog about CentOS. For all the details about this procedure, please refer to https://docs.docker.com/engine/installation/linux/centos/.
First, update the instances; it’s always the first step to perform.
$ sudo yum update -y
$ sudo reboot
Add the Docker repo to use the docker-engine package instead of the docker package from the EPEL repo, which is not up to date.
$ sudo tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
Then install the docker-engine package with a simple yum install:
$ sudo yum install docker-engine -y
It’s now time to configure devicemapper
# fdisk -l

Disk /dev/xvda: 21.5 GB, 21474836480 bytes, 41943040 sectors
 Units = sectors of 1 * 512 = 512 bytes
 Sector size (logical/physical): 512 bytes / 512 bytes
 I/O size (minimum/optimal): 512 bytes / 512 bytes
 Disk label type: dos
 Disk identifier: 0x000123f5

Device Boot Start End Blocks Id System
 /dev/xvda1 * 2048 41929649 20963801 83 Linux

Disk /dev/xvdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
 Units = sectors of 1 * 512 = 512 bytes
 Sector size (logical/physical): 512 bytes / 512 bytes
 I/O size (minimum/optimal): 512 bytes / 512 bytes
 # yum install lvm2 -y
 # pvcreate /dev/xvdb
 Physical volume "/dev/xvdb" successfully created
 # vgcreate docker /dev/xvdb
 Volume group "docker" successfully created
 # lvcreate --wipesignatures y -n thinpool docker -l 95%VG
 Logical volume "thinpool" created.
 # lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG
 Logical volume "thinpoolmeta" created.
 # lvconvert -y --zero n -c 512K --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta
 WARNING: Converting logical volume docker/thinpool and docker/thinpoolmeta to pool's data and metadata volumes.
 THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
 Converted docker/thinpool to thin pool.
 # mkdir /etc/systemd/system/docker.service.d
 # tee /etc/systemd/system/docker.service.d/docker-thinpool.conf <<-'EOF'
 [Service]
 ExecStart=
 ExecStart=/usr/bin/dockerd --storage-driver=devicemapper --storage-opt=dm.thinpooldev=/dev/mapper/docker-thinpool \
 --storage-opt=dm.use_deferred_removal=true --storage-opt=dm.use_deferred_deletion=true
 EOF
 # systemctl daemon-reload
 # systemctl start docker
 # docker info
 Storage Driver: devicemapper
 Pool Name: docker-thinpool
 Pool Blocksize: 524.3 kB
 Base Device Size: 10.74 GB
 Backing Filesystem: xfs
 Data file:
 Metadata file:
 Data Space Used: 19.92 MB
 Data Space Total: 20.4 GB
 Data Space Available: 20.38 GB
 Metadata Space Used: 61.44 kB
 Metadata Space Total: 213.9 MB
 Metadata Space Available: 213.8 MB
 Thin Pool Minimum Free Space: 2.039 GB
 # docker ps
 CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
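One optional improvement before moving on: the Docker devicemapper guide also recommends letting LVM grow the thin pool automatically before it fills up. A minimal sketch, using the threshold values suggested in the Docker documentation:
# tee /etc/lvm/profile/docker-thinpool.profile <<-'EOF'
activation {
  thin_pool_autoextend_threshold=80
  thin_pool_autoextend_percent=20
}
EOF
# lvchange --metadataprofile docker-thinpool docker/thinpool
# lvs -o+seg_monitor
The last command lets you verify that the thin pool is now monitored by LVM.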
Finally install kubeadm
To finish this long installation procedure, let’s install kubeadm. It is part of the Kubernetes repository. Notice the handy sed command along the way: it replaces enforcing with disabled in the line beginning with SELINUX; very useful when, like me, you dislike opening vi at every turn.
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
	https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# sed -i '/^SELINUX./ { s/enforcing/disabled/; }' /etc/selinux/config
# setenforce 0
# yum install -y kubelet kubeadm kubectl kubernetes-cni
# systemctl enable docker
# systemctl enable kubelet && systemctl start kubelet
Initializing your master
Well done, it’s time to set up your hosts to run Kubernetes.
Tip
It could be convenient to have a “true” hostname instead of something like ip-192-168-0-177:
# echo "blog-k8s-ma.data-essential.com" > /etc/hostname
# sed -i 's/localhost l/blog-k8s-ma.data-essential.com localhost l/' /etc/hosts
# echo "HOSTNAME=blog-k8s-ma.data-essential.com" >> /etc/sysconfig/network
# echo "preserve_hostname: true" >> /etc/cloud/cloud.cfg
You may be wondering how easy it is: just run kubeadm init on the master node. Here I add --pod-network-cidr=10.244.0.0/16 because it is a prerequisite for running flannel, but other network drivers are available for Kubernetes (Weave, Calico…).
# kubeadm init --pod-network-cidr=10.244.0.0/16
Info

A word about Kubernetes core services. There are some base services that run on:

Master node

  • etcd: the key-value store from CoreOS
  • API server: the Kubernetes entry point for external and internal requests
  • scheduler: chooses which minion a Pod runs on, depending on its resource requirements
  • controller manager: creates, updates and destroys the resources it manages

Worker node (minion)

  • kubelet: verifies that the minion is healthy and performs health checks on Pods
  • kube-proxy: acts as a proxy and load balancer; it manages network traffic

There is no network service in this list because, even though networking is mandatory, it is an add-on: http://kubernetes.io/docs/admin/addons/. This means that you can choose your network driver from:

  • flannel
  • calico
  • canal
  • romana
  • weave net
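Once the init has finished, you can quickly check the health of these core components from the master node:
# kubectl get componentstatuses
This lists the scheduler, the controller manager and etcd along with their health status.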
After a couple of minutes, the cluster is set up and gives you the following message:
Kubernetes master initialised successfully!

You can now join any number of machines by running the following on each node:
 
kubeadm join --token=d562d2.bf3721e0655d4f12 192.168.0.177
Let’s have a look at our running pods (for now we only have system pods, so use --all-namespaces or --namespace kube-system on the command line to avoid an empty answer):
# kubectl get pods --all-namespaces
NAMESPACE     NAME                                                     READY     STATUS              RESTARTS   AGE
kube-system   dummy-2088944543-tit44                                   1/1	Running             0          2m
kube-system   etcd-blog-k8s-ma.data-essential.com                      1/1	Running             0          1m
kube-system   kube-apiserver-blog-k8s-ma.data-essential.com            1/1	Running             0          3m
kube-system   kube-controller-manager-blog-k8s-ma.data-essential.com   1/1	Running             0          3m
kube-system   kube-discovery-1150918428-hixmp                          1/1	Running             0          2m
kube-system   kube-dns-654381707-8tb6w                                 0/3	ContainerCreating   0          2m
kube-system   kube-proxy-flvft                                         1/1	Running             0          2m
kube-system   kube-scheduler-blog-k8s-ma.data-essential.com            1/1	Running             
Nice! Don’t worry about the DNS pod; it is waiting for the network. So it’s time to get the network pod. To use flannel, do:
# curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kubectl apply -f kube-flannel.yml
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
Now network should be running as well as DNS:
# kubectl get pods --all-namespaces
NAMESPACE     NAME                                                     READY     STATUS    RESTARTS   AGE
kube-system   dummy-2088944543-tit44                                   1/1	Running   0          7m
kube-system   etcd-blog-k8s-ma.data-essential.com                      1/1	Running   0          6m
kube-system   kube-apiserver-blog-k8s-ma.data-essential.com            1/1	Running   0          7m
kube-system   kube-controller-manager-blog-k8s-ma.data-essential.com   1/1	Running   0          7m
kube-system   kube-discovery-1150918428-hixmp                          1/1	Running   0          7m
kube-system   kube-dns-654381707-8tb6w                                 3/3	Running   0          6m
kube-system   kube-flannel-ds-inzwb                                    2/2	Running   0          1m
kube-system   kube-proxy-flvft                                         1/1	Running   0          6m
kube-system   kube-scheduler-blog-k8s-ma.data-essential.com            1/1	Running   0          6m
Perfect, the minions can now join: run the command provided by the master node on each worker node.
# kubeadm join --token=d562d2.bf3721e0655d4f12 192.168.0.177

Node join complete:

* Certificate signing request sent to master and response received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
Verify on the master node that the workers have successfully joined:
# kubectl get nodes
NAME                             STATUS    AGE
blog-k8s-ma.data-essential.com   Ready     45m
blog-k8s-s1.data-essential.com   Ready     1m
blog-k8s-s2.data-essential.com   Ready     54s
blog-k8s-s3.data-essential.com   Ready     46s
Our cluster is now ready to use! OK, we have traveled all this way to finally:
Create the first pod
I have written a YAML file with all the information necessary to create an nginx pod. All of these fields are mandatory in your YAML: name, image, pull policy and so on. This is a simple, basic template.
nginx-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    apps: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
Now, I will use my template to run my first pod and ensure everything runs as expected.
# kubectl create -f nginx-pod.yml
pod "nginx" created
# kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
nginx     1/1       Running   0          16s
# kubectl get pods -o wide
NAME      READY     STATUS    RESTARTS   AGE       IP           NODE
nginx     1/1       Running   0          32s       10.244.3.2   blog-k8s-s3.data-essential.com
# kubectl describe pods nginx
Name:       nginx
Namespace:  default
Node:       blog-k8s-s3.data-essential.com/192.168.0.171
Start Time: Wed, 30 Nov 2016 20:22:11 +0000
Labels:     apps=nginx
Status:     Running
IP:     10.244.3.2
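The pod IP lives on the flannel overlay, so it is only reachable from inside the cluster; from one of the nodes you can already check that nginx answers (the IP is the one reported by describe above):
# curl -s http://10.244.3.2 | head -n 4
You should see the beginning of the default nginx welcome page.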
Fine, the pod runs well, but running a single pod is not the objective of a Kubernetes cluster. I would much rather have a highly available nginx service than a lone pod that may fail at any time. True, the kubelet will restart a failed container, but a bare pod is never rescheduled if its node dies, and meanwhile I lose my service… Let’s delete the pod.
# kubectl delete -f nginx-pod.yml
pod "nginx" deleted
# kubectl get pods
#
Let’s create a new YAML file for a ReplicationController. A ReplicationController ensures that a specified number of pod “replicas” are running at any one time.
nginx-rs.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    apps: nginx
  template:
    metadata:
      name: nginx
      labels:
        apps: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
Once the file is written, let’s use it to run the RC:
# kubectl create -f nginx-rs.yml
replicationcontroller "nginx" created
# kubectl get rc
NAME      DESIRED   CURRENT   READY     AGE
nginx     3         3         3         32s
# kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
nginx-2vaeu   1/1	Running   0          18s
nginx-cnupw   1/1	Running   0          18s
nginx-wyz4z   1/1	Running   0          18s
# kubectl get pods -o wide
NAME          READY     STATUS    RESTARTS   AGE       IP           NODE
nginx-2vaeu   1/1	Running   0          50s       10.244.2.2   blog-k8s-s2.data-essential.com
nginx-cnupw   1/1	Running   0          50s       10.244.1.2   blog-k8s-s1.data-essential.com
nginx-wyz4z   1/1	Running   0          50s       10.244.3.3   blog-k8s-s3.data-essential.com
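Before moving on, let’s watch the ReplicationController do its job: delete one of the three pods (the name comes from the listing above; yours will differ) and the RC immediately schedules a replacement to restore the desired count.
# kubectl delete pod nginx-2vaeu
pod "nginx-2vaeu" deleted
# kubectl get pods
The listing shows three running pods again, one of them freshly created. You can also change the desired count on the fly with kubectl scale rc nginx --replicas=5.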
Now we have 3 pods nicely distributed across the worker nodes. Finally, I will create the LoadBalancer Service, which will be the entry point for our nginx service by “exposing” (that is, setting up a public port for) the ReplicationController. First, I’ll create the YAML file:
nginx-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    apps: nginx
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  selector:
    apps: nginx
  type: LoadBalancer
# kubectl create -f nginx-svc.yml
service "nginx" created
# kubectl describe svc nginx
Name:           nginx
Namespace:      default
Labels:         apps=nginx
Selector:       apps=nginx
Type:           LoadBalancer
IP:         10.98.243.164
Port:           80/TCP
NodePort:       32314/TCP
Endpoints:      10.244.1.2:80,10.244.2.2:80,10.244.3.3:80
Session Affinity:   None
As the describe command shows us, the service got the cluster IP 10.98.243.164 for accessing the nginx service on port 80 from inside the cluster; it is also exposed on every worker node on NodePort 32314, and the traffic is load-balanced across the pod endpoints (e.g. 10.244.1.2) on port 80.
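One caveat: without the AWS cloud-provider integration configured, no external load balancer is actually provisioned, so the service is consumed through its NodePort on every node. A quick test from any instance, using the private IP of one of the workers seen earlier:
# curl -s http://192.168.0.171:32314 | grep title
<title>Welcome to nginx!</title>
Each request is forwarded by kube-proxy to one of the three nginx pods: our service is up, replicated and exposed.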