A Quick Guide On Scaling With Kubernetes

Illustrator: Finnur Alfred Finnsen

Kubernetes has been in the spotlight for some time now. It is very powerful but comes with a steep learning curve. This guide will introduce you to the most important Kubernetes concepts and get you up and running quickly. I'll show you a new way to deploy and manage your systems, applications and microservices. The benefits are huge, so let's get started.

Kubernetes is an open-source platform for managing containerized workloads and services. It is both a container platform and a portable cloud platform. Think of Kubernetes as a system with an API that manages and scales your containerized applications. You only have to tell it what to do, either by providing a written contract as a YAML or JSON file or by calling its API directly. We will look into this shortly.

Kubernetes is Greek for “helmsman” or “pilot”. The system was created by Joe Beda, Brendan Burns, and Craig McLuckie. It is now the most widely used open-source system for deploying, scaling and managing containerized applications. Kubernetes was built on over 15 years of experience running production workloads at Google, together with the best ideas and practices from the community.

Dockerizing your application

You have finally created an application in your favorite language. You want to place it in a Docker container so that you can scale and manage it in the cloud. Software running on Kubernetes (also called K8s) is packaged as Linux containers. Containerization gives you the power to create self-contained Linux execution environments. Any software and all of its dependencies can be packaged into a single file and shared over the web. Anyone can download your container and deploy it on their Kubernetes system with very little effort. We will create the container programmatically so that you can use continuous integration and continuous delivery pipelines with Travis, Jenkins or other tools.

Okay, you have chosen Kubernetes as your container management system for this task. Where do you start?

Your application structure may look like this

You are able to run your application from the command line like so:

npm start

To “dockerize” the application, create a file named Dockerfile and add the following commands:

FROM node:8.2.1

ADD . /myAppDir

WORKDIR /myAppDir

EXPOSE 3000

CMD ["npm", "start"]
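Since ADD . copies your entire project directory into the image, it is worth excluding files the container does not need. A minimal sketch of a .dockerignore, assuming a typical Node project layout (adjust to your own):

```
node_modules
npm-debug.log
.git
```

Docker will skip these paths during the build, keeping the image small and the build fast.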

Use the Dockerfile created above to create a Docker image by running this command:

docker build . --tag my-registry/my-cool-app:1

Here, FROM tells Docker to start from a Node base image pulled from an image registry.

ADD copies every file and directory from your current directory into the container, and WORKDIR makes that directory the working directory, so subsequent commands run from it.

CMD specifies the command that runs when you start the container.

EXPOSE declares the port the container listens on so it can be reached from the outside world. Just make sure your Node app is actually listening on that port.

Now that you have “dockerized” your application, it's time to manage it so that it can handle load and scale easily.


Think of Kubernetes as a system that manages the Docker containers you just built. It can scale up or down on demand, so you can handle the many different scenarios that can occur while your system is running in the cloud.

To get started, install Minikube, which runs Kubernetes locally, and kubectl, the command-line tool for talking to your clusters.

Minikube: https://kubernetes.io/docs/tasks/tools/install-minikube/

Kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/

First, let's define a cluster. What is a Kubernetes cluster?

A cluster is basically a group of similar machines that are closely grouped together.

In Kubernetes terms, a cluster is a group of nodes. A node is the smallest unit of computing hardware in Kubernetes: it represents a single machine in your cluster. A node can be a physical machine or a virtual machine hosted in the cloud, for example on IBM Cloud.

You start your cluster by issuing the following command: minikube start.


Kubernetes doesn't run Docker containers directly, as you might expect. There's a higher level of wrapping applied to one or more containers, called a Pod. As a rule of thumb, limit each Pod to one container. Containers in the same Pod automatically share the same resources and local network. You can still think of Pods as wrapped containers if you like; they are the basic unit of computation in Kubernetes.


You may have many Pods that each wrap a container, and managing them can be challenging. That's why we have Deployments. A Deployment is basically a layer of abstraction whose goal is to declare how many replicas (copies) of a Pod should be running at a time. When you give your Kubernetes cluster a Deployment JSON/YAML file, it will quickly and automatically spin up the assigned number of Pods and then monitor each one of them in the cluster. If one of the Pods dies because of an error or a power outage, the Deployment will automatically recreate it for you to keep following the contract specified in your JSON/YAML file.

Think of it this way: if a Pod wraps the Docker container, a Deployment handles multiple copies of that Pod for you. You don't need to do manual work to replicate Pods; Deployments handle this for you. I have chosen JSON over YAML because it is easy to validate and read. Below is a contract (spec) that Kubernetes will follow. We specify that we want two replicas for this Deployment, and in the containers section we tell Kubernetes which Docker image to pull and replicate.


{
  "apiVersion": "extensions/v1beta1",
  "kind": "Deployment",
  "metadata": {
    "name": "app-deployment"
  },
  "spec": {
    "replicas": 2,
    "template": {
      "metadata": {
        "labels": {
          "app": "my-cool-app"
        }
      },
      "spec": {
        "containers": [
          {
            "name": "my-cool-app",
            "image": "registry.eu-gb.bluemix.net/millad_images/my-cool-app:1",
            "command": ["npm", "start"],
            "ports": [
              {
                "name": "coolAppPort",
                "containerPort": 3000
              }
            ]
          }
        ]
      }
    }
  }
}

Like Deployments, Services are another abstraction; they govern how we access Pods in Kubernetes. Think of a Service as an endpoint for reaching the many Pods defined in a Deployment. A Service exposes a port and is itself a Kubernetes REST object. You label your Pods with an app name and point the Service's selector at that name so that we can gain access to those Pods.

Services in the cluster export their own environment variables, which can be accessed as {SERVICE_NAME}_SERVICE_HOST. These provide information about the services deployed in the cluster; many more are listed in the Kubernetes documentation. This is a simple way to get PORT and HOST information for services by name, even from inside your application, which makes it quite easy to call other Kubernetes services by name.
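As an illustration, here is a small helper that derives these variable names and builds a URL for a named service. The helper name and shape are my own; inside a Pod you would pass process.env:

```javascript
// Kubernetes upper-cases the service name and replaces dashes with
// underscores when it injects {SERVICE_NAME}_SERVICE_HOST / _SERVICE_PORT.
function buildServiceUrl(serviceName, env) {
  const key = serviceName.toUpperCase().replace(/-/g, '_');
  return `http://${env[key + '_SERVICE_HOST']}:${env[key + '_SERVICE_PORT']}`;
}

// Inside a Pod: buildServiceUrl('my-cool-app-service', process.env)
```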


{
  "apiVersion": "v1",
  "kind": "Service",
  "metadata": {
    "name": "my-cool-app-service",
    "labels": {
      "app": "my-cool-app"
    }
  },
  "spec": {
    "selector": {
      "app": "my-cool-app"
    },
    "type": "NodePort",
    "ports": [
      {
        "port": 80,
        "targetPort": "coolAppPort"
      }
    ]
  }
}

Now that you have created these JSON files, we will let the Kubernetes API read them and bring the cluster in line with the contract we provided.

Please take a look at the commands below step by step to deploy your app.

// Start Kubernetes:

- minikube start


// Print your Kubernetes IP:

- minikube ip


// List your Docker images -> you should see the image you created earlier:

- docker images


// List everything you have on your cluster:

- kubectl get all


// Let Kubernetes use your local Docker setup and image repository without problems:

- eval $(minikube docker-env)


// Apply your Service and Deployment to the cluster using the JSON files created in this tutorial:

- kubectl apply -f my-cool-app-service.json -f my-cool-app-deployment.json


// Get information about your Service:

- kubectl get svc my-cool-app-service -o json

// List cluster endpoints:

- kubectl get endpoints

// Show logs for a Pod:

- kubectl logs POD_ID

You can now access your app at the endpoint provided by the get endpoints command mentioned above. Remember to use the correct port to access your app. If one of your Pods dies, your Deployment will follow your spec and create a new one for you.
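Finally, scaling itself: you can change the number of replicas at any time, either by editing replicas in the Deployment JSON and re-applying it, or imperatively with kubectl scale (using the app-deployment name from the spec above):

```
# Edit "replicas" in the JSON, then re-apply:
kubectl apply -f my-cool-app-deployment.json

# Or scale imperatively to 4 replicas:
kubectl scale deployment app-deployment --replicas=4
```

Either way, the Deployment spins Pods up or down until the running count matches the spec.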


I hope you can see how powerful Kubernetes is and how it manages your Docker containers by applying abstractions. We have only scratched the surface of what Kubernetes can do for you. Your next step is to take this to the cloud and enjoy a battle-hardened system that can scale.

Millad Dagdoni

Programmer, architect
