Vanilla Kubernetes - High level design

This page gives a short, opinionated description of a vanilla Kubernetes cluster, or at least one as vanilla and as simple as possible. I felt this was a good starting point for learning about k8s.

Everything is listed roughly in the chronological order in which you should learn about each service or concept.

The goal of this guide is to have a working three node k8s cluster with a deployed application accessible from the host system or parent network.

This assumes you're already familiar with:

  • Containers
  • Docker
  • cgroups in the Linux kernel
  • Podman to some extent (the fact that Docker isn't the only container runtime)
  • Basic application development (Flask/NodeJS Express quickstart guides would be enough)
  • How the most basic Dockerfile works
  • The purpose of docker registry servers
  • Using Ansible and Vagrant
  • IPtables and general Linux networking like routes, pseudo NICs and forwarding

General Terminology

  • k8s is just an alias for Kubernetes.
  • Control plane is the master part of a k8s cluster: the components that manage cluster state, running on the master node.
  • Worker is a node in a k8s cluster that runs your actual workloads.
  • Pod is a unit of 1 or more containers that need to share resources. For example, if a service needs a local Redis you could run one in its own container, in the same pod as the service container. Containers in a pod can also share a data volume.
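To make that Redis example concrete, here's a minimal sketch of a pod manifest with two containers sharing a data volume. All names and images here are my own placeholders, not something from this guide's playbooks:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: service-with-redis        # hypothetical pod name
spec:
  containers:
  - name: app
    image: my-registry/my-service:latest   # placeholder image
    volumeMounts:
    - name: shared-data
      mountPath: /data             # both containers see the same /data
  - name: redis
    image: redis:5
    volumeMounts:
    - name: shared-data
      mountPath: /data
  volumes:
  - name: shared-data
    emptyDir: {}                   # scratch volume that lives as long as the pod
```

The two containers also share network namespace, so the app can reach Redis on localhost.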


I've made myself these Ansible playbooks, hosted on , to set up a kubeadm cluster. They've been extensively tested on Vagrant VMs and deployed on actual VMs using CentOS 7.

Follow the instructions in the README file to get a working cluster in your own Vagrant. As of writing it only supports libvirt, because I run it on a Linux laptop.

Once complete, you should be able to log in to the master node and run a few kubectl commands to verify that everything works.

$ kubectl get pods --all-namespaces

Setup terminology

  • kubeadm is the tool used to initialize a new cluster or join new worker nodes to it.
  • kubelet is the k8s node agent running on all nodes, master and worker.
  • kubectl is the CLI that talks to the k8s API to perform lookups and changes in the cluster.
  • kube-apiserver (the k8s API) runs as a pod on the master node and is the front end through which the cluster is controlled.


I'll be using my own Flask boilerplate example service as the thing to deploy; its source repo is here on and its docker image is here on the registry.

Deployment terminology

  • Deployment is an object that describes how to run a pod (and any replicas of it) in the k8s cluster.
  • A deployment specifies, for example, a container image, a name, labels and a container port.
  • Label is a key/value tag used to find and select objects inside k8s.
  • Service is a way to expose a deployment inside the cluster using a proxy and port forwarding.


Labels are very important: they're how a service knows which deployment it's supposed to use, and how an ingress knows which service and deployment it's routing traffic for.

So while object names can differ, like between a deployment and a service object, the labels must match.

Labels are set in the metadata sections of manifests (more about manifests later).

In selector sections you specify labels that match the objects you're looking for.

So when deploying an application you set its label in the metadata section. Then in all subsequent objects, like the service and ingress, you query that same label.
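As a sketch of how those two halves line up (manifest fragments only, with an illustrative label value):

```yaml
# In the deployment's pod template metadata:
metadata:
  labels:
    app: flask-boilerplate   # the label you set on the pods

# In the service spec:
spec:
  selector:
    app: flask-boilerplate   # the label the service queries
```

A selector matches an object when every key/value pair in the selector is present among that object's labels, which is why these two values must agree exactly.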

Manual deployment

This pulls a docker image and deploys it in a pod of your cluster. Note that ''kubectl create deployment'' requires an ''--image'' flag pointing at the image to pull; substitute your own registry and image name.

$ kubectl create deployment flask-boilerplate --image=<registry>/flask-boilerplate

You can also try it with ''--dry-run -o yaml'' (''--dry-run=client -o yaml'' on newer kubectl versions) to see what a yaml manifest might look like, more about using manifests under Deploy manifest.

Now you can use ''kubectl get deploy'' or ''kubectl get pods'' to see the results of your deployment.

Here's how you create a service to expose the pod to your cluster.

$ kubectl create service nodeport flask-boilerplate --tcp=80:5000

Now you can see the internal cluster IP designated to your service.

$ kubectl get service
NAME                TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
flask-boilerplate   NodePort                <none>        80:31472/TCP   11m

And test the app you deployed there.

$ curl -sLD - ''
HTTP/1.0 302 FOUND
Content-Type: text/html; charset=utf-8
Content-Length: 275
Server: Werkzeug/1.0.0 Python/3.7.7
Date: Sun, 15 Mar 2020 10:52:21 GMT

HTTP/1.0 200 OK
Content-Type: application/json
Content-Length: 16
Server: Werkzeug/1.0.0 Python/3.7.7
Date: Sun, 15 Mar 2020 10:52:21 GMT


This means that the IPtables rules and routes set up by Calico (during the Ansible run) are routing your traffic from any k8s node to the correct pod and container.

If you're curious about which node this pod is running its container on, the quickest way is ''kubectl get pods -o wide'', which includes a NODE column. Alternatively, check the Container ID in the output of ''kubectl describe pods'' and then grep for that container ID (the first 6-8 chars) in ''docker ps -a'' on each worker.

Delete deployment

$ kubectl delete service flask-boilerplate
$ kubectl delete deployment flask-boilerplate

This shuts down the pods and containers until nothing is left.

Deploy manifest

A more automated way of deploying apps and services is to define them in yaml manifest files.

Here are two manifests, one for a deployment where I define a docker image, and one for a service where I define which port to use in the deployed container.

Note that you can get a bit more creative with labels here. Previously the label defaulted to the name of the deployment or service, but here we can use one name for the deployment object and another for the label.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-boilerplate-deploy
  labels:
    app: flask-boilerplate
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-boilerplate
  template:
    metadata:
      labels:
        app: flask-boilerplate
    spec:
      containers:
      - name: flask-boilerplate
        image: <registry>/flask-boilerplate
        ports:
        - containerPort: 5000

Note the use of ''targetPort: 5000'' in the service manifest below: I'm binding port 80 on a virtual ClusterIP to port 5000 of the deployed container. The use of port 5000 is defined in the Dockerfile, or by the app it deploys.

apiVersion: v1
kind: Service
metadata:
  name: flask-boilerplate-svc
spec:
  selector:
    app: flask-boilerplate
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5000

Here's an example of how to apply these manifests and reproduce the results of the previous manual deployment.

[vagrant@master ~]$ kubectl apply -f flask-boilerplate-deploy.yaml 
deployment.apps/flask-boilerplate-deploy created
[vagrant@master ~]$ kubectl apply -f flask-boilerplate-service.yaml 
service/flask-boilerplate-svc created
[vagrant@master ~]$ kubectl get svc
NAME                    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
flask-boilerplate-svc   ClusterIP                <none>        80/TCP    3s
[vagrant@master ~]$ curl -sLD - ''
HTTP/1.0 302 FOUND
Content-Type: text/html; charset=utf-8
Content-Length: 275
Server: Werkzeug/1.0.0 Python/3.7.7
Date: Sun, 15 Mar 2020 11:31:37 GMT

HTTP/1.0 200 OK
Content-Type: application/json
Content-Length: 16
Server: Werkzeug/1.0.0 Python/3.7.7
Date: Sun, 15 Mar 2020 11:31:37 GMT


Delete deployment

Now delete them; note that the names differ from the manual deployment.

$ kubectl delete service flask-boilerplate-svc
$ kubectl delete deployment flask-boilerplate-deploy

Ingress is used to expose your service to the network around your k8s cluster, so far we've used internal IPs only available to k8s nodes.

Ingress terminology

  • Ingress is used to expose services to the surrounding networks, and the world. It's also where you set domain names and TLS certificates.
  • Ingress controller is a service that takes an ingress definition in k8s and turns it into an exposed service route.
  • Ingress definition is a k8s manifest that defines your ingress.
  • ingress-nginx is the ingress controller I'll be using for this guide.
  • Ingress rules define paths that are routed from the ingress controller to the back end service.
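As a preview of the pieces above, here's a sketch of what an ingress definition for the flask-boilerplate service might look like. The hostname is a placeholder, and I'm using the ''networking.k8s.io/v1beta1'' API that matches k8s versions from this era:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: flask-boilerplate-ingress   # illustrative name
spec:
  rules:
  - host: flask.example.com          # placeholder hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: flask-boilerplate-svc   # the service from earlier
          servicePort: 80
```

The ingress controller (ingress-nginx here) watches for objects like this and configures itself to route traffic for that host and path to the named service.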

Ingress deployment

FIXME: Finish the Ingress sections.
