Different Ways to Run Kubernetes Locally
When it comes to developing on Kubernetes, you need to think about different ways to run test clusters locally. The tooling around this has evolved greatly over the last few years. The following blog article will discuss different ways of spinning up clusters.
Introduction
Using Kubernetes locally was a bit of a hassle in the early days. As discussed in A first dip into kubernetes, Kubernetes consists of multiple interconnected parts.
These parts communicate with each other in specific ways and therefore need, for example, proper routing to work together. When running EKS, a lot of the grunt work is already done for you: AWS takes care of the heavy lifting, you can easily virtualize a network and the other parts you need to spin up Kubernetes, and even connecting a container registry is pretty straightforward.
But let's take a step back: before even thinking about running your cluster in a production environment, you will come to a point where you need to test new tools, or where you want to debug an existing application, without taking on all the heavy work and costs of spinning up a cluster at a cloud provider.
Existing Solutions
There are different ways to run Kubernetes locally, each with tradeoffs you need to accept.
Common and easy-to-use ones are for sure:
- Kind
- K3d
- Minikube
There are far more one-click solutions that work quite well. For example, Canonical baked MicroK8s into Ubuntu, and even Docker Desktop allows you to spin up a simple Kubernetes cluster through the user interface.
Whenever you need a highly sophisticated solution, I can suggest spinning up a Vagrant setup. This will for sure be way more complex, but you have the possibility to create your cluster in an environment which mimics a lot of your real setup. You can also easily test node images or do chaos engineering with full-fledged nodes.
Nowadays, the main difference between the solutions is the possibility to configure your cluster over the CLI and the virtualization you can use as a backend.
Spinning up a fresh cluster can be very handy when you run integration tests: it is a common pattern to spin up a cluster in a pipeline, deploy your workload or operator, and, for example, use kuttl to verify that everything works well.
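To make this concrete, here is a minimal sketch of such a pipeline step, assuming kind, kubectl, and the kuttl kubectl plugin are available on the CI runner (the manifest and test paths are purely illustrative):
#!/bin/sh
set -o errexit

# Spin up a throwaway cluster for this test run
kind create cluster --name ci-cluster

# Deploy the workload or operator under test
kubectl apply -f deploy/

# Run the kuttl test suites against the cluster
kubectl kuttl test ./tests/e2e/

# Tear everything down again
kind delete cluster --name ci-cluster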
In the following we will discuss each solution and spin up a simple cluster.
Let's have a deeper look into the different options we mentioned.
Kind
Kind means Kubernetes in Docker and was primarily created to test changes on upstream Kubernetes.
Kind is pretty straightforward. To spin up a very basic cluster after installing the tooling, simply run:
kind create cluster --name test-cluster
This will pull the node images and spin up a single node cluster.
Kind only supports Docker as its runtime backend, which is itself based on containerd. When spinning up Kubernetes you might not want to use Docker; this is a limitation you have while using vanilla Kind as a local cluster environment.
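A small escape hatch exists though: kind ships an experimental podman provider, which you can select via an environment variable (experimental, so expect rough edges):
KIND_EXPERIMENTAL_PROVIDER=podman kind create cluster --name test-cluster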
When your cluster is ready you should see the following in your terminal:
Creating cluster "test-cluster" ...
ā Ensuring node image (kindest/node:v1.26.3) š¼
ā Preparing nodes š¦
ā Writing configuration š
ā Starting control-plane š¹ļø
ā Installing CNI š
ā Installing StorageClass š¾
Set kubectl context to "kind-test-cluster"
You can now use your cluster with:
kubectl cluster-info --context kind-test-cluster
Not sure what to do next? š
Check out https://kind.sigs.k8s.io/docs/user/quick-start/
You are now good to go and can play around with your cluster.
Quickly check with kubectl get nodes:
NAME                         STATUS   ROLES           AGE     VERSION
test-cluster-control-plane   Ready    control-plane   9m13s   v1.26.3
As you might already see, a single-node cluster does not make much sense in terms of Kubernetes. One node means that this is a single point of failure, and you can't scale the cluster when your workload hits the cluster's limit. But it's good enough to test tools or some deployment you created.
For running your own container images you can use:
kind load docker-image --name test-cluster nginx
This is the easiest way to get custom container images running on your kind cluster, but it will not scale very well, and you might want to spin up a registry.
Simply follow the Kind documentation on spinning up a local registry and you are good to go:
#!/bin/sh
set -o errexit

# Create the registry container unless it already exists
reg_name='kind-registry'
reg_port='5001'
if [ "$(docker inspect -f '{{.State.Running}}' "${reg_name}" 2>/dev/null || true)" != 'true' ]; then
  docker run \
    -d --restart=always -p "127.0.0.1:${reg_port}:5000" --network bridge --name "${reg_name}" \
    registry:2
fi

# Create the kind cluster, pointing containerd at a registry config directory
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry]
    config_path = "/etc/containerd/certs.d"
EOF

# Tell containerd on every node how to reach the registry
REGISTRY_DIR="/etc/containerd/certs.d/localhost:${reg_port}"
for node in $(kind get nodes); do
  docker exec "${node}" mkdir -p "${REGISTRY_DIR}"
  cat <<EOF | docker exec -i "${node}" cp /dev/stdin "${REGISTRY_DIR}/hosts.toml"
[host."http://${reg_name}:5000"]
EOF
done

# Connect the registry to the kind network if it is not connected already
if [ "$(docker inspect -f='{{json .NetworkSettings.Networks.kind}}' "${reg_name}")" = 'null' ]; then
  docker network connect "kind" "${reg_name}"
fi

# Document the local registry in the cluster
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-registry-hosting
  namespace: kube-public
data:
  localRegistryHosting.v1: |
    host: "localhost:${reg_port}"
    help: "https://kind.sigs.k8s.io/docs/user/local-registry/"
EOF
What does the script do?
- It creates a local registry listening on port 5001 on your host machine (port 5000 inside the registry container).
- Using -p "127.0.0.1:${reg_port}:5000" ensures that the registry is only accessible from your machine.
- Using --network bridge ensures that the container can be resolved over DNS later on. This is important because, in Docker, the nodes of the Kubernetes cluster can't reach the registry via localhost: localhost inside a container is the container itself.
- The cluster will be created. Kind allows creating a cluster by passing in a configuration file; in this configuration file you can e.g. override the containerd configuration or, like here, the path of the registry configuration.
- On each node the registry is configured.
- The registry is connected to the kind Docker network.
- The local registry is documented in the cluster via the local-registry-hosting ConfigMap.
After running this you can push container images to your local registry, and your kind cluster will be able to pull them.
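For example, a quick smoke test could look like this (nginx stands in for your own image; the registry port 5001 comes from the script above):
# Push an image to the local registry
docker pull nginx:alpine
docker tag nginx:alpine localhost:5001/nginx:alpine
docker push localhost:5001/nginx:alpine

# Let the cluster pull it from there
kubectl run registry-test --image=localhost:5001/nginx:alpine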
As you see setting up a cluster locally is pretty easy to automate.
To not repeat the same steps, we will now look into k3d and focus there on scaling your local cluster and on how to set up a load balancer. This is also possible in kind; you can refer to the following documentation: Setting up a loadbalancer on Kind or Kind Configuration / setting up different nodes.
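For reference, scaling kind beyond a single node only takes a small configuration file; a minimal sketch following the linked configuration docs:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
Passing this via kind create cluster --config multi-node.yaml will spin up one control plane and two workers.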
K3d
K3d wraps k3s, the minimal Kubernetes distribution from Rancher.
Rancher itself is an enterprise-level platform to run Kubernetes clusters. In Rancher you can do things like spinning up multiple downstream clusters from one management cluster installation and provisioning them with internal tooling. We will focus on this at some other point.
Let's start here by spinning up a k3d cluster.
You can do this by simply running:
k3d cluster create test-cluster
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-test-cluster'
INFO[0000] Created image volume k3d-test-cluster-images
INFO[0000] Starting new tools node...
INFO[0000] Starting Node 'k3d-test-cluster-tools'
INFO[0001] Creating node 'k3d-test-cluster-server-0'
INFO[0001] Creating LoadBalancer 'k3d-test-cluster-serverlb'
INFO[0001] Using the k3d-tools node to gather environment information
INFO[0001] Starting new tools node...
INFO[0001] Starting Node 'k3d-test-cluster-tools'
INFO[0002] Starting cluster 'test-cluster'
INFO[0002] Starting servers...
INFO[0002] Starting Node 'k3d-test-cluster-server-0'
INFO[0005] All agents already running.
INFO[0005] Starting helpers...
INFO[0005] Starting Node 'k3d-test-cluster-serverlb'
INFO[0012] Injecting records for hostAliases (incl. host.k3d.internal) and for 3 network members into CoreDNS configmap...
INFO[0014] Cluster 'test-cluster' created successfully!
INFO[0014] You can now use it like this:
kubectl cluster-info
To get your cluster deployed in a more reproducible way, k3d gives you the option to use a YAML config file. A nice feature here is that k3d allows substituting environment variables.
You can find the example used here in local-cluster k3d examples
---
apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: ${CLUSTER_NAME}-cluster
servers: ${CLUSTER_SERVERS}
agents: ${CLUSTER_AGENTS}
kubeAPI:
  host: "${CLUSTER_HOST}"
  hostIP: "${CLUSTER_HOST_IP}"
  hostPort: "${CLUSTER_HOST_PORT}"
image: ${CLUSTER_IMAGE}
subnet: "${CLUSTER_SUBNET}"
options:
  k3d:
    wait: true
    timeout: "180s"
    disableLoadbalancer: true
    disableImageVolume: false
    disableRollback: false
  kubeconfig:
    updateDefaultKubeconfig: false
    switchCurrentContext: false
As you see, you can easily configure the kubeAPI and the host and port it should listen on. You can also easily specify the number of servers (control planes) or agents (worker nodes) to be provisioned.
Let's provision our cluster:
export CLUSTER_NAME=test
export CLUSTER_SERVERS=1
export CLUSTER_AGENTS=2
export CLUSTER_HOST=localhost
export CLUSTER_HOST_IP=127.0.0.1
export CLUSTER_HOST_PORT=6443
export CLUSTER_IMAGE=rancher/k3s:v1.28.4-k3s2
export CLUSTER_SUBNET=172.28.0.0/16
k3d cluster create --config config.yaml
Running export KUBECONFIG=$(k3d kubeconfig write test-cluster) will give you access to your newly created cluster.
The cluster will come up with server and agent nodes:
kubectl get nodes
NAME                        STATUS   ROLES                  AGE   VERSION
k3d-test-cluster-server-0   Ready    control-plane,master   20s   v1.28.4+k3s2
k3d-test-cluster-agent-0    Ready    <none>                 13s   v1.28.4+k3s2
K3d allows very specific configuration of, for example, how to use a load balancer or how to mount your local filesystem. Consider that whenever you mount your local filesystem, you have to handle this properly when it comes to running a production cluster: as the filesystem mount is specific to one of your nodes, a different node will not have the same files. When you really need a stateful cluster, you have to take care that your application is built resiliently enough to handle e.g. being scheduled on a node without data. Running stateful workloads is a complex topic and is not addressed here for now, but having a local filesystem mount in your development setup can make some processes easier; imagine e.g. you are developing a machine learning application that needs to pull a lot of data to operate on.
Let's extend our cluster definition to allow mounting the local filesystem from our agent nodes.
---
apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: ${CLUSTER_NAME}-cluster
servers: ${CLUSTER_SERVERS}
agents: ${CLUSTER_AGENTS}
kubeAPI:
  host: "${CLUSTER_HOST}"
  hostIP: "${CLUSTER_HOST_IP}"
  hostPort: "${CLUSTER_HOST_PORT}"
image: ${CLUSTER_IMAGE}
subnet: "${CLUSTER_SUBNET}"
volumes:
  - volume: ${PWD}/cluster-mount:/host-mount
    nodeFilters:
      - agent:*
options:
  k3d:
    wait: true
    timeout: "180s"
    disableLoadbalancer: true
    disableImageVolume: false
    disableRollback: false
  kubeconfig:
    updateDefaultKubeconfig: false
    switchCurrentContext: false
K3d allows you to filter very specifically which nodes should be able to mount your filesystem.
Let's start the cluster:
k3d cluster create --config config-with-volumes.yaml
Next, create a file in our host folder cluster-mount.
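For example (the file content matches the output we will read back later):
mkdir -p cluster-mount
printf '# Overview\nThis is a testfile :)\n' > cluster-mount/README.md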
And run a pod mounting the filesystem:
---
apiVersion: v1
kind: Pod
metadata:
  name: test-mount
spec:
  containers:
    - image: alpine
      name: test
      command:
        - cat
      args:
        - /mount/README.md
      volumeMounts:
        - mountPath: /mount
          name: host-mount
  volumes:
    - name: host-mount
      hostPath:
        path: /host-mount
        type: Directory
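Apply the manifest, assuming you saved it as test-mount.yaml:
kubectl apply -f test-mount.yaml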
Running kubectl logs -f test-mount will give you the following output:
# Overview
This is a testfile :)
In my opinion, k3d has a very nice configuration interface, which allows a very simple and clear configuration of your local setup. Even setting up a load balancer is an easy task within k3d.
Let's have a look at minikube, which is one of the oldest ways to run Kubernetes locally.
Minikube
Minikube claims to be the best local Kubernetes environment.
One thing to mention about minikube is that it is easy to switch out the driver used underneath. By default minikube will auto-detect the driver. To set a driver explicitly, use:
minikube start --driver=[driver]
Driver is one of: qemu2, docker, podman (experimental), ssh (defaults to auto-detect)
This keeps minikube very lightweight, but also gives you the flexibility to run for example qemu2 as the backend for your local cluster. This will give you more power over e.g. the networking stack used, or makes it easier to handle resource separation of your cluster.
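For example, to pin the driver and give the cluster explicit resources (the values are illustrative):
minikube start --driver=qemu2 --cpus=4 --memory=8g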
For our test we will use the docker driver and start a cluster. Apart from that, minikube works with addons, e.g. to set up a local or even a cloud registry to work with.
For spinning up a cluster simply run:
minikube start --driver docker --static-ip 192.168.200.200
This will start a cluster and also give it a static IP which can be used to reference the cluster on your machine. Be aware that only private IP ranges are allowed.
Let's have a look at how to get access to an application running in your minikube cluster.
As stated already, minikube uses "addons", for example for setting up ingress:
minikube addons enable ingress
This will spin up an NGINX ingress controller:
💡 ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
💡 After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
    ▪ Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
    ▪ Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
🔎 Verifying ingress addon...
🌟 The 'ingress' addon is enabled
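With the addon enabled, a minimal end-to-end check could look like the following sketch (deployment name and hostname are illustrative; kubectl create ingress requires a reasonably recent kubectl):
# Deploy and expose a demo application
kubectl create deployment hello --image=nginx
kubectl expose deployment hello --port=80

# Route traffic to it through the NGINX ingress
kubectl create ingress hello --class=nginx --rule="hello.local/*=hello:80"

# In a second terminal, as hinted by the addon output above
minikube tunnel

# Test the route
curl -H "Host: hello.local" http://127.0.0.1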