Introduction
Kubernetes is the de facto standard for container orchestration in production environments. But before you land on a managed cluster in the cloud (AKS, EKS, GKE), it’s worth understanding how it works from the inside — by building your own cluster from scratch. In this article we’ll set up a fully working Kubernetes cluster using kubeadm on machines running Ubuntu 22.04.
Requirements
You need at least two machines (physical or virtual):
| Role | CPU | RAM | Disk | OS |
|---|---|---|---|---|
| control-plane (master) | 2 vCPU | 2 GB | 20 GB | Ubuntu 22.04 |
| worker-node-1 | 2 vCPU | 2 GB | 20 GB | Ubuntu 22.04 |
Each machine must have a unique hostname and a static IP address. In this example:
- master — 192.168.1.10
- worker-1 — 192.168.1.11
Step 1 — Prepare All Nodes
Run the following commands on every node (master and worker).
Disable swap
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
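Commenting the swap line out (rather than deleting it) means swap can easily be re-enabled later. A quick sketch of what the sed does, run against a throwaway sample file so your real /etc/fstab is untouched:

```shell
# Sample fstab with a root filesystem and a swap entry (hypothetical UUIDs/paths)
cat <<'EOF' > /tmp/fstab.demo
UUID=abcd-1234 / ext4 defaults 0 1
/swap.img none swap sw 0 0
EOF

# Same sed as above: comment out any line containing " swap "
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.demo

cat /tmp/fstab.demo
# The swap line is now prefixed with '#'; the root entry is untouched.
```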
Load kernel modules
cat <<EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter
sysctl parameters for Kubernetes
cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
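Before moving on, it is worth confirming the three values the kubelet and the CNI plugin depend on. A small check loop (a sketch; it reads /proc/sys directly, and the bridge keys only exist once br_netfilter is loaded):

```shell
# Report the three sysctl values Kubernetes networking relies on
for key in net/ipv4/ip_forward net/bridge/bridge-nf-call-iptables net/bridge/bridge-nf-call-ip6tables; do
  f="/proc/sys/$key"
  if [ -r "$f" ]; then
    echo "$key = $(cat "$f")"
  else
    echo "$key MISSING (load br_netfilter first)"
  fi
done
```

All three should report 1 on a correctly prepared node.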
Install containerd
apt update && apt install -y ca-certificates curl gnupg
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
gpg --dearmor -o /etc/apt/keyrings/docker.gpg
chmod a+r /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" | \
tee /etc/apt/sources.list.d/docker.list
apt update && apt install -y containerd.io
containerd config default | tee /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
systemctl restart containerd && systemctl enable containerd
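The SystemdCgroup flip matters: Ubuntu 22.04 runs systemd, and if containerd uses the cgroupfs driver while the kubelet uses systemd, pods will repeatedly crash. A minimal sketch of the one line the sed changes, on a snippet of the default config:

```shell
# The relevant fragment of containerd's default config (runc runtime options)
cat <<'EOF' > /tmp/containerd-snippet.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = false
EOF

# Same substitution as above
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /tmp/containerd-snippet.toml

grep SystemdCgroup /tmp/containerd-snippet.toml
```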
Install kubeadm, kubelet, kubectl
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | \
gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] \
https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | \
tee /etc/apt/sources.list.d/kubernetes.list
apt update && apt install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
Step 2 — Initialise the Master Node
Run the following only on the master node. The --pod-network-cidr value must match the network plugin installed in Step 3 (Flannel defaults to 10.244.0.0/16), and --apiserver-advertise-address is the master’s static IP:
kubeadm init \
--apiserver-advertise-address=192.168.1.10 \
--pod-network-cidr=10.244.0.0/16 \
--control-plane-endpoint=192.168.1.10
Configure kubectl:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Step 3 — Install the Network Plugin (CNI)
Nodes stay in the NotReady state until a pod network is installed. We’ll use Flannel, whose default network matches the 10.244.0.0/16 CIDR passed to kubeadm init:
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
After a minute or so the master should report Ready:
kubectl get nodes
# NAME STATUS ROLES AGE VERSION
# master Ready control-plane 2m v1.29.0
Step 4 — Join Worker Nodes
Run the kubeadm join command from the initialisation output on each worker node:
kubeadm join 192.168.1.10:6443 --token abc123.xyz \
--discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxx
If the token has expired, generate a new one:
kubeadm token create --print-join-command
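If you only need the CA certificate hash (the part after sha256:), it can also be recomputed by hand — this is the openssl pipeline the kubeadm documentation gives, run against /etc/kubernetes/pki/ca.crt on the master. The sketch below generates a throwaway self-signed certificate to stand in for the real CA cert, so it can be tried on any machine:

```shell
# Throwaway stand-in for /etc/kubernetes/pki/ca.crt (hypothetical; on the master use the real file)
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key -out /tmp/ca.crt \
  -days 1 -subj "/CN=kubernetes" 2>/dev/null

# The discovery hash: SHA-256 of the CA's DER-encoded public key
openssl x509 -pubkey -in /tmp/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```

The output is a 64-character hex string — exactly what goes after sha256: in the join command.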
Back on the master, check that both nodes have joined:
kubectl get nodes -o wide
# NAME STATUS ROLES AGE VERSION
# master Ready control-plane 5m v1.29.0 192.168.1.10
# worker-1 Ready <none> 2m v1.29.0 192.168.1.11
Step 5 — First Application Deployment
kubectl create deployment nginx --image=nginx --replicas=2
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods -o wide
kubectl get svc nginx
Output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx NodePort 10.96.145.200 <none> 80:31234/TCP 30s
The application is now reachable on any node’s IP at the assigned NodePort, e.g. http://192.168.1.11:31234 (NodePorts are allocated from the 30000–32767 range, so use the port from your own kubectl get svc output).
Useful Everyday Commands
kubectl cluster-info
kubectl get nodes -o wide
kubectl get pods --all-namespaces
kubectl describe node worker-1
kubectl logs <pod-name> -f
kubectl exec -it <pod-name> -- /bin/bash
kubectl delete deployment nginx
Summary
You now have a working Kubernetes cluster with one master and one worker. This is a solid foundation for learning — you can practise deployments, scaling, ConfigMaps, Secrets and Services. In the next article we’ll show how to install Helm and deploy applications using charts.