November 11, 2017 Stephane Beuret

Test your deployments locally with minikube

A chapter of the courses I give demonstrates how easy it is to deploy a multi-tier application on Rancher/Kubernetes (thanks to the Rancher Catalog). The recipe is the same whether you are on Rancher 1.x or 2.0! The exercise also uses a storage driver so the application can be deployed with persistent storage.

My application is on GitHub; it is a simple Node.js application with a MongoDB backend, which displays “Hello MyApp from MongoDB!” in the browser when called on the route /myapp. To get a bit more familiar with Kubernetes, I decided to deploy this application directly on a cluster, using the CLI.

You can use any K8s cluster: one deployed with Rancher, one built with kubeadm (see my previous blog post), or a local one with minikube. The last option is the one I chose, because it lets us test our deployment locally. For the installation of minikube and kubectl, I invite you to read this tutorial.

If I consider that my application is a MongoDB backend and a Node.js frontend, what will I need in K8s? First of all, a volume for persistence; with minikube, I use local storage. A volume claim, to make this volume available to MongoDB. A MongoDB deployment. A service, to make MongoDB reachable by other components. Another deployment for Node.js, along with its service. And finally an ingress rule to reach my application from outside the cluster. I will even go a step further and use a ConfigMap for my Node.js environment variables (the MongoDB URI). Ready?

First, have a look at the config files:

mongodb-pv.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: data
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5Gi
  hostPath:
    path: /data/mongoData
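Note that hostPath points to a directory inside the minikube VM, not on your workstation. If you later want to verify that MongoDB really writes its data there, you can open a shell in the VM and look at the path (a quick check, assuming a standard minikube install):

minikube ssh
ls -la /data/mongoData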

mongodb-pv-claim.yml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data-claim
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
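Two details worth noting: storageClassName: "" explicitly disables dynamic provisioning, so the claim binds to the pre-created data volume; and although the claim only requests 3Gi, it binds the whole 5Gi PV, which is why kubectl reports a capacity of 5Gi further down. You can verify the binding from both sides:

kubectl get pv data
kubectl get pvc data-claim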

mongodb-deploy.yml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongodb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: db
    spec:
      restartPolicy: Always
      volumes:
        - name: data-storage
          persistentVolumeClaim:
            claimName: data-claim
      containers:
        - name: mongodb-container
          image: "de13/mongo-myapp"
          volumeMounts:
            - name: data-storage
              mountPath: /var/lib/mongo
          ports:
            - containerPort: 27017
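A side note: extensions/v1beta1 lets you omit the selector (it defaults to the pod template labels), but that API group has since been deprecated. On a recent cluster the same Deployment would need apps/v1 and an explicit selector; a minimal sketch of the fields that change:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db       # must match the pod template labels
  template:
    # ... same pod template as in mongodb-deploy.yml above ...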

mongodb-svc.yml

kind: Service
apiVersion: v1
metadata:
  name: mongodb-svc
spec:
  selector:
    app: db
  ports:
    - protocol: TCP
      port: 27017
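This is a ClusterIP service, so MongoDB becomes reachable inside the cluster under the DNS name mongodb-svc (that name shows up again in the ConfigMap URI below). To convince yourself that cluster DNS resolves it, you can spin up a throwaway pod; a quick check, assuming the busybox image can be pulled (the pod name dns-test is arbitrary):

kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup mongodb-svc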

nodejs-cm.yml

apiVersion: v1
kind: ConfigMap
metadata:
  name: nodejs-configmaps
data:
  URI: mongodb://mongodb-svc:27017/hello
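Since the URI lives in a ConfigMap rather than in the image, the frontend can be repointed to another database just by editing the ConfigMap, without rebuilding anything. The deployment below references the key explicitly; with several keys you could also import them all at once with envFrom, a variant sketch of the container spec:

envFrom:
  - configMapRef:
      name: nodejs-configmaps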

nodejs-deploy.yml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodejs
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: nodejs-container
          image: "de13/myapp"
          env:
            - name: URI
              valueFrom:
                configMapKeyRef:
                  name: nodejs-configmaps
                  key: URI
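Once the pod is up, you can double-check that the variable really comes from the ConfigMap. The pod name carries a generated suffix, so the sketch below looks it up by label first:

POD=$(kubectl get pod -l app=frontend -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD" -- env | grep URI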

nodejs-svc.yml

kind: Service
apiVersion: v1
metadata:
  name: nodejs-svc
spec:
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 3000
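No targetPort is set, so it defaults to port, which works because the Node.js app listens on 3000. If you want to test the frontend before the ingress is in place, a port-forward straight to the pod does the job (local port 3000 is an arbitrary choice):

POD=$(kubectl get pod -l app=frontend -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward "$POD" 3000:3000

Then, from another terminal, curl http://localhost:3000/myapp.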

nodejs-ing.yml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nodejs-ingress
spec:
  rules:
    - http:
        paths:
          - path: /myapp
            backend:
              serviceName: nodejs-svc
              servicePort: 3000
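One caveat before applying this: minikube ships with the NGINX ingress controller disabled by default, so if the Ingress never gets picked up or the /myapp route does not answer, enable the addon first:

minikube addons enable ingress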

Now let’s deploy everything:


bash-3.2$ minikube start --vm-driver=xhyve
bash-3.2$ ls
mongodb-deploy.yml mongodb-pv.yml nodejs-cm.yml nodejs-ing.yml
mongodb-pv-claim.yml mongodb-svc.yml nodejs-deploy.yml nodejs-svc.yml
bash-3.2$ kubectl apply -f mongodb-pv.yml
persistentvolume "data" created
bash-3.2$ kubectl apply -f mongodb-pv-claim.yml
persistentvolumeclaim "data-claim" created
bash-3.2$ kubectl get pvc
NAME         STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-claim   Bound     data      5Gi        RWO                           4m
bash-3.2$ kubectl describe pvc data-claim
Name:          data-claim
Namespace:     default
StorageClass:
Status:        Bound
Volume:        data
Labels:        <none>
Annotations:   kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"data-claim","namespace":"default"},"spec":{"accessModes":["ReadW...
               pv.kubernetes.io/bind-completed=yes
               pv.kubernetes.io/bound-by-controller=yes
Capacity:      5Gi
Access Modes:  RWO
Events:        <none>
bash-3.2$ kubectl apply -f mongodb-deploy.yml
deployment "mongodb" created
bash-3.2$ kubectl get pod
NAME                       READY     STATUS              RESTARTS   AGE
mongodb-67747bf6b5-drgjq   0/1       ContainerCreating   0          4m
bash-3.2$ kubectl get pod
NAME                       READY     STATUS    RESTARTS   AGE
mongodb-67747bf6b5-drgjq   1/1       Running   0          4m
bash-3.2$ kubectl apply -f mongodb-svc.yml
service "mongodb-svc" created
bash-3.2$ kubectl apply -f nodejs-cm.yml
configmap "nodejs-configmaps" created
bash-3.2$ kubectl describe cm nodejs-configmaps
Name:         nodejs-configmaps
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","data":{"URI":"mongodb://mongodb-svc:27017/hello"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"nodejs-configmaps","names...

Data
====
URI:
----
mongodb://mongodb-svc:27017/hello

Events:  <none>
bash-3.2$ kubectl apply -f nodejs-deploy.yml
deployment "nodejs" created
bash-3.2$ kubectl get pod
NAME                       READY     STATUS    RESTARTS   AGE
mongodb-67747bf6b5-drgjq   1/1       Running   0          5m
nodejs-77565ccc68-fv6dp    1/1       Running   0          4m
bash-3.2$ kubectl apply -f nodejs-svc.yml
service "nodejs-svc" created
bash-3.2$ kubectl apply -f nodejs-ing.yml
ingress "nodejs-ingress" created
bash-3.2$ kubectl describe ing nodejs-ingress
Name:             nodejs-ingress
Namespace:        default
Address:
Default backend:  default-http-backend:80 (172.17.0.3:8080)
Rules:
  Host  Path     Backends
  ----  ----     --------
  *
        /myapp   nodejs-svc:3000 ()
Annotations:
Events:
  Type    Reason  Age   From                Message
  ----    ------  ----  ----                -------
  Normal  CREATE  4m    ingress-controller  Ingress default/nodejs-ingress
bash-3.2$ minikube ip
192.168.64.4

Let’s see if everything works as expected in the browser:
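If you prefer the command line, the same check can be done with curl against the IP that minikube ip returned:

curl http://192.168.64.4/myapp

It should answer with “Hello MyApp from MongoDB!”.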

Looks great!
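When you are done experimenting, everything can be removed in one shot, since all the manifests live in the current directory; deleting the minikube VM afterwards leaves no trace at all:

kubectl delete -f .
minikube delete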
