Background
If, like me, you enjoy trying out new tools or testing things on Kubernetes, then I have something for you.
Yes, you can create a local Kubernetes cluster using Minikube and Docker, but you will typically end up with a single-node cluster, which doesn’t suffice for use cases like testing multi-node scheduling, HA configurations, or distributed applications.
In this article, I’ll guide you on how you can create a multi-node local Kubernetes cluster using Kind (Kubernetes in Docker).
What is Kind?
Kind (Kubernetes in Docker) is a tool for running local Kubernetes clusters using Docker container “nodes”. It was primarily designed for testing Kubernetes itself, but it’s perfect for local development and CI/CD pipelines.
Key benefits:
- Multi-node clusters (including HA)
- Easy to create and destroy clusters
- Supports latest Kubernetes versions
- Minimal resource requirements
- Great for testing and development
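Because each “node” is just a Docker container, you can inspect a running cluster with ordinary Docker commands. For example (assuming a cluster named my-cluster, which we create in step 3 below):
# Kind names its node containers <cluster-name>-<role>, e.g. my-cluster-control-plane
docker ps --filter "name=my-cluster"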
Prerequisites
- Docker installed and running
- kubectl installed (optional but recommended)
- At least 8GB RAM (for multi-node clusters)
- 20GB free disk space
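A quick way to sanity-check these prerequisites from a terminal (a rough sketch; exact output depends on your setup):
# How many CPUs and how much memory Docker has available
docker info --format 'CPUs: {{.NCPU}}, Memory: {{.MemTotal}} bytes'
# kubectl is optional but handy
kubectl version --client
# Free disk space on the current filesystem
df -h .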
Steps
1. Install Docker
If you haven’t installed Docker already, download Docker Engine or Docker Desktop for your operating system by following the official Docker documentation.
Verify Docker installation:
docker --version
docker ps
2. Install Kind
On macOS (using Homebrew):
brew install kind
On Linux:
# For AMD64 / x86_64
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
# For ARM64
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-arm64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
On Windows (using Chocolatey):
choco install kind
Or using Go:
go install sigs.k8s.io/kind@v0.20.0
Verify Kind installation:
kind --version
3. Create a Single Node Cluster (Quick Start)
For a quick test, you can create a single-node cluster:
kind create cluster --name my-cluster
This creates a cluster named “my-cluster” with a single control-plane node.
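Kind also points your kubectl context at the new cluster (the context is named kind-<cluster-name>), so you can talk to it right away:
kubectl cluster-info --context kind-my-cluster
kubectl get nodes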
4. Create a Multi-Node Cluster
To create a multi-node cluster, we need a configuration file. Create a file named kind-config.yaml:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
# One control plane node
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "node-type=master"
# Three worker nodes
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "node-type=worker"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "node-type=worker"
- role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "node-type=worker"
Now create the cluster:
kind create cluster --name multi-node-cluster --config kind-config.yaml
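Once the cluster is up, you can confirm that the node-type labels from the kubeadm patches were applied (the -L flag adds a column for that label):
kubectl get nodes -L node-type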
5. Create a High Availability (HA) Cluster
For testing HA scenarios, create a cluster with multiple control-plane nodes. Save the following configuration as ha-config.yaml:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
# Three control plane nodes
- role: control-plane
- role: control-plane
- role: control-plane
# Three worker nodes
- role: worker
- role: worker
- role: worker
Create the HA cluster:
kind create cluster --name ha-cluster --config ha-config.yaml
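With more than one control-plane node, Kind also runs a small load balancer container in front of the API servers. Listing the cluster’s containers shows all of them (the load balancer is named something like ha-cluster-external-load-balancer):
# Three control-plane nodes, three workers, plus the external load balancer
docker ps --filter "name=ha-cluster"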
6. Verify Your Cluster
Check cluster status:
kubectl cluster-info --context kind-multi-node-cluster
kubectl get nodes
You should see output like:
NAME                               STATUS   ROLES           AGE   VERSION
multi-node-cluster-control-plane   Ready    control-plane   2m    v1.27.3
multi-node-cluster-worker          Ready    <none>          90s   v1.27.3
multi-node-cluster-worker2         Ready    <none>          90s   v1.27.3
multi-node-cluster-worker3         Ready    <none>          90s   v1.27.3
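The <none> under ROLES is expected: kubectl derives that column from node-role.kubernetes.io/* labels, which worker nodes don’t get by default. If you’d like the column to read “worker”, you can add the label yourself (purely cosmetic):
kubectl label node multi-node-cluster-worker node-role.kubernetes.io/worker=
# Repeat for worker2 and worker3 if you want all of them labeled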
7. Advanced Configuration
Port Mapping (for NodePort services):
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000
    hostPort: 30000
    protocol: TCP
- role: worker
- role: worker
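With that mapping in place, a NodePort service pinned to port 30000 becomes reachable from the host at localhost:30000. A minimal sketch, assuming the nginx Deployment from the “Testing Your Multi-Node Setup” section below (the service name is my own choice):
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30000   # must match the containerPort in extraPortMappings above
After applying it, curl http://localhost:30000 from your host should reach nginx.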
Custom Kubernetes Version:
kind create cluster --image kindest/node:v1.28.0
Mount Local Directories:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /path/on/host
    containerPath: /path/in/container
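Note that extraMounts only makes the host directory visible inside the node container; to use it from a pod you still need a hostPath volume pointing at the containerPath. A rough sketch (the pod name, image, and mount path are arbitrary choices):
apiVersion: v1
kind: Pod
metadata:
  name: host-data-demo
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
    volumeMounts:
    - name: host-data
      mountPath: /data
  volumes:
  - name: host-data
    hostPath:
      path: /path/in/container   # the containerPath from the Kind config above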
Working with Multiple Clusters
List all Kind clusters:
kind get clusters
Switch between clusters:
kubectl config use-context kind-cluster1
kubectl config use-context kind-cluster2
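If you’re not sure which contexts exist, kubectl can list them (the current one is marked with an asterisk):
kubectl config get-contexts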
Delete a cluster:
kind delete cluster --name multi-node-cluster
Testing Your Multi-Node Setup
Deploy a sample application to test the multi-node setup:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 6
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: kubernetes.io/hostname
Because of the required pod anti-affinity, the scheduler will never place two of these nginx pods on the same node, so the replicas are forced onto different nodes. Keep in mind that if you have fewer schedulable nodes than replicas, the surplus pods will stay Pending; use preferredDuringSchedulingIgnoredDuringExecution instead if you want every replica to run while still spreading them out.
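To see where the pods actually landed, apply the manifest and list the pods with their node assignments (I’m assuming you saved it as nginx-deployment.yaml; the filename is arbitrary):
kubectl apply -f nginx-deployment.yaml
# The NODE column shows how the replicas are spread across nodes
kubectl get pods -l app=nginx -o wide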
Troubleshooting
Docker resource issues:
# Increase Docker Desktop memory to at least 8GB
# Check Docker resource usage
docker system df
docker system prune -a
Cluster creation failures:
# Check Kind logs
kind export logs --name multi-node-cluster
Network issues:
# Verify Docker networks
docker network ls
docker network inspect kind
Alternatives to Kind
While Kind is excellent for local development, here are some alternatives:
1. Minikube
- Pros: More mature, supports multiple drivers (VirtualBox, KVM, Docker)
- Cons: Primarily single-node focused; multi-node support is newer and less mature
- Use case: Simple local development
minikube start --nodes 3
2. k3d (Rancher)
- Pros: Lightweight, fast, built on k3s
- Cons: k3s is a trimmed-down (though CNCF-conformant) Kubernetes distribution, so some components and defaults differ from upstream
- Use case: Lightweight multi-node testing
k3d cluster create mycluster --servers 1 --agents 3
3. MicroK8s
- Pros: Snap-based, easy add-ons, production-ready
- Cons: Limited to Ubuntu/Linux, requires snap
- Use case: Ubuntu development environments
microk8s add-node
4. kubeadm (on VMs)
- Pros: Real Kubernetes experience, highly customizable
- Cons: Resource intensive, complex setup
- Use case: Learning Kubernetes internals
5. Docker Desktop Kubernetes
- Pros: Integrated with Docker Desktop, simple enable/disable
- Cons: Single-node only, limited configuration
- Use case: Quick single-node testing
6. Vagrant + VirtualBox
- Pros: Full VM isolation, scriptable with Vagrantfile
- Cons: Heavy resource usage, slower than containers
- Use case: Testing OS-level configurations
Best Practices
1. Resource Management:
- Start with minimal nodes and scale as needed
- Monitor Docker resource usage
- Clean up unused clusters regularly
2. Development Workflow:
- Use different cluster configs for different scenarios
- Version control your Kind configurations
- Automate cluster creation with scripts (see the sketch after this list)
3. CI/CD Integration:
# GitHub Actions example
- name: Create Kind cluster
  uses: helm/kind-action@v1.8.0
  with:
    cluster_name: kind
    config: ./kind-config.yaml
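For the “automate cluster creation with scripts” point above, here is a minimal shell sketch; it assumes your config lives next to the script as kind-config.yaml, and the script name is my own choice:
#!/usr/bin/env bash
# recreate-cluster.sh: tear down and recreate the multi-node cluster from kind-config.yaml
set -euo pipefail

CLUSTER_NAME="multi-node-cluster"

# Delete any existing cluster with the same name, then create a fresh one
if kind get clusters | grep -qx "${CLUSTER_NAME}"; then
  kind delete cluster --name "${CLUSTER_NAME}"
fi
kind create cluster --name "${CLUSTER_NAME}" --config kind-config.yaml
kubectl cluster-info --context "kind-${CLUSTER_NAME}"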
Conclusion
Kind provides an excellent way to create multi-node Kubernetes clusters locally for development and testing. Its simplicity, combined with the ability to create complex cluster configurations, makes it an invaluable tool for Kubernetes developers and operators.
Whether you’re testing distributed applications, learning Kubernetes concepts, or developing operators, Kind’s multi-node capabilities give you a production-like environment right on your laptop.
Remember to clean up your clusters when done to free up resources:
kind delete cluster --name multi-node-cluster
Happy clustering! 🚀