Kubernetes Series (Article 2): Kubernetes Cluster Installation
In my previous write-up, I covered the preparation and setup of the Ubuntu 24.04 cloud image on a Proxmox VM, and how to convert that VM into a reusable template. This is a foundational step, as it allows us to quickly create consistent virtual machines for Kubernetes nodes, both for the initial cluster and for any future scaling.
In this article, I’ll set up a Kubernetes cluster with one control plane node and three worker nodes. Since this is a homelab environment, I’m starting small but may add more nodes later. I’m also aware that in a production setup, at least three control plane nodes are recommended for high availability and fault tolerance.
Network Configuration
For networking, I’m using:
- pfSense as the firewall and router
- QNAP QSW-M408-4C 10GbE switch
- Proxmox default bridge: vmbr0, configured to be VLAN-aware
In pfSense, I’ve created VLANs to segment my home network. These VLANs are mirrored on the switch. When making a VM in Proxmox, I can assign the VM to the appropriate VLAN using its VLAN tag.
For this project, I’ve assigned VLAN 80, which corresponds to the subnet 192.168.80.0/24. All Kubernetes nodes, control plane and workers, will operate within this VLAN.
My Network Configuration
- Network range: 192.168.80.0/24
- Controller node: 192.168.80.2
- Worker nodes: 192.168.80.3 - 192.168.80.6
- pfSense DHCP static reservations ensure each node always receives the same IP
Static IP Assignment with pfSense
To ensure consistent networking, I’ve configured static DHCP leases in pfSense for each Kubernetes node.
Steps:
- Go to the pfSense dashboard → Services → DHCP Server.
- For the interface corresponding to VLAN 80, enable the DHCP server.
- In Proxmox, go to each VM’s Hardware → Network Device and note the MAC address (a command-line alternative is shown after these steps).
- In pfSense, under the DHCP Server settings for VLAN 80, create a static lease:
- MAC Address: Paste the MAC from Proxmox
- IP Address: Choose an IP outside the DHCP pool (e.g., if the pool is .100–.150, use .10, .20, etc.)
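If you prefer the command line, the MAC address can also be read on the Proxmox host with qm config. A minimal sketch, assuming a VM ID of 101 (substitute your own):
# On the Proxmox host: show the network device line for VM 101,
# which includes the MAC address and the VLAN tag
qm config 101 | grep net0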
A /24 subnet has 254 usable IP addresses, and I allocated only around 50 of them (.100–.150) to the DHCP pool. The rest are available for static reservations.
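Once a node has booted with its reservation in place, it’s worth confirming it actually received the expected address. A quick check run on the node itself:
# Confirm the node picked up its reserved IP and can see the gateway
ip -4 addr show
ip route | grep default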
With networking configured and static IPs assigned, we’re ready to begin the Kubernetes installation process.
Pre-Kubernetes Setup (All Nodes)
Install Required Packages
Install dependencies required by Kubernetes and storage systems:
sudo apt update
sudo apt install -y nfs-common containerd gpg open-iscsi
- nfs-common: Enables support for NFS volumes
- containerd: Container runtime used by Kubernetes
- gpg: Required for verifying APT repository keys
- open-iscsi: Important for Longhorn. If this is not installed on all nodes, Longhorn pods may fail with CrashLoopBackOff errors. Installing it ensures iSCSI support, which Longhorn relies on for volume attachment (a quick service check follows this list).
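After installing the packages, I like to confirm the two services Kubernetes and Longhorn depend on are actually running. A minimal check, assuming the service names shipped by the Ubuntu packages:
# Ensure the iSCSI daemon used by Longhorn is enabled and running
sudo systemctl enable --now iscsid
# Confirm the container runtime is active
systemctl status containerd --no-pager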
Configure containerd
Generate and modify the default containerd config:
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
sudo vi /etc/containerd/config.toml
Inside the config, change SystemdCgroup = false to:
SystemdCgroup = true
This makes containerd use systemd as the cgroup driver, which matches the kubelet’s default and is the recommended setting.
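If you prefer not to edit the file by hand, the same change can be scripted. A minimal sketch, assuming the default config generated above (where SystemdCgroup appears exactly once):
# Flip the cgroup driver setting in place and restart containerd
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd
# Verify the change took effect
grep SystemdCgroup /etc/containerd/config.toml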
System Tweaks
Prepare the system kernel and network settings:
# Disable swap (Kubernetes requires swap to be off)
sudo swapoff -a
sudo sed -i '/swap/ s/^/#/' /etc/fstab
# Enable IP forwarding and bridge netfilter
echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf
sudo modprobe br_netfilter
# Ensure br_netfilter loads on boot
echo 'br_netfilter' | sudo tee /etc/modules-load.d/k8s.conf
# Reboot all nodes
sudo reboot
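After the reboot, a quick sanity check on each node confirms the settings stuck before moving on to the Kubernetes packages:
# Swap should show 0 everywhere
free -h
# IP forwarding should report 1 and br_netfilter should be loaded
sysctl net.ipv4.ip_forward
lsmod | grep br_netfilter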
Kubernetes Installation
Add Kubernetes Repo and Install
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | \
sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" | \
sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
sudo apt install -y kubeadm kubelet kubectl
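It’s also worth pinning the Kubernetes packages so a routine apt upgrade doesn’t bump them unexpectedly; this matches what the upstream documentation suggests:
# Prevent unattended upgrades of the Kubernetes components
sudo apt-mark hold kubeadm kubelet kubectl
# Confirm the installed version matches the repo added above (v1.30.x here)
kubeadm version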
Clean Up for Node Template (Optional)
If you plan to use this node as a template:
sudo cloud-init clean
sudo rm -rf /var/lib/cloud/instances
sudo truncate -s 0 /etc/machine-id
After cleaning, follow the steps described in Article 1 to convert this VM into a template. Once the template is created, you can clone it to provision the required control plane and worker nodes for your Kubernetes cluster.
Kubernetes Control Plane Init (Controller Node Only)
Initialise the cluster:
sudo kubeadm init \
--control-plane-endpoint=192.168.80.2 \
--node-name=controller \
--pod-network-cidr=10.244.0.0/16
Replace 192.168.80.2 with the IP address of your control plane node on your network.
- control-plane-endpoint: IP or DNS name of your control plane (single node or HA endpoint)
- pod-network-cidr: Must match your CNI configuration (Flannel uses 10.244.0.0/16)
The --pod-network-cidr=10.244.0.0/16 value passed to kubeadm init defines the IP address range used by the Pod network, i.e. the network your Kubernetes Pods use to communicate with each other across nodes. Why 10.244.0.0/16? It matches the default configuration expected by Flannel, the CNI (Container Network Interface) plugin I’m using in this setup.
If the CIDR doesn’t match what Flannel is configured for, you may experience the following (a quick check is shown after the list):
- Pods stuck in Pending
- Network communication failures between pods
- CNI-related errors in node logs
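Once kubectl access is configured (next step), you can confirm the Pod CIDR the cluster actually allocated, assuming the init command above was used unchanged:
# Print the Pod CIDR assigned to each node; each range should fall inside 10.244.0.0/16
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'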
Configure kubectl Access
The commands below enable kubectl access for the current user on the controller node. They copy the Kubernetes configuration file (/etc/kubernetes/admin.conf) to the user’s local kubeconfig directory ($HOME/.kube/config) and update the file’s ownership so the current user can read and write it.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
The admin.conf file is created with 0600 permissions (read/write for the owner only), so the copy stays readable only by its owner; the chown hands that ownership to your user, giving you secure access to the cluster without sudo.
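With the kubeconfig in place, kubectl should now reach the API server. At this point the controller will typically show as NotReady, which is expected until the Flannel CNI is installed further below:
kubectl get nodes
# Example of the kind of output to expect (values will differ):
# NAME         STATUS     ROLES           AGE   VERSION
# controller   NotReady   control-plane   2m    v1.30.x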
Join Worker Nodes
After configuring kubectl access, you are ready to generate a token to join nodes to the cluster.
Use the following command to create a token and print the kubeadm join command:
sudo kubeadm token create --print-join-command
The output includes a join command that looks like this:
sudo kubeadm join 192.168.80.2:6443 --token <token> \
--discovery-token-ca-cert-hash sha256:<hash>
If you are joining another control plane node, the join command is also run with the --control-plane flag.
If you get an error like “certificate has expired or is not yet valid,” re-run the following command to generate a fresh join command:
sudo kubeadm token create --print-join-command
Then copy and paste the new command to the nodes you want to join.
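After running the join command on each worker, you can watch them register from the controller. The role label is optional; worker nodes otherwise show an empty ROLES column (the node name worker1 below is an example, use your own hostnames):
# List all nodes; workers appear once their kubelet has registered
kubectl get nodes -o wide
# Optional: label a worker so 'kubectl get nodes' shows a role
kubectl label node worker1 node-role.kubernetes.io/worker=worker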
Install Flannel CNI (on Controller)
Install Flannel as the Container Network Interface:
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
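Flannel runs as a DaemonSet, so one pod should come up per node. Once those pods are Running, the nodes should flip to Ready (recent manifests deploy into the kube-flannel namespace):
kubectl get pods -n kube-flannel -o wide
kubectl get nodes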
Admin VM Setup (for kubectl + Helm)
At this stage, we’re setting up an Admin VM to issue kubectl and Helm commands against the cluster. It’s best practice to use a dedicated machine for cluster administration.
I created a separate Ubuntu VM on Proxmox to act as my Admin VM for this setup. Copying the kubeconfig from /etc/kubernetes/admin.conf on the controller node to this VM (shown below) allows me to manage the cluster remotely with kubectl.
Install Kubernetes CLI (Admin VM)
sudo apt install -y gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | \
sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" | \
sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update && sudo apt install -y kubectl
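The steps above cover kubectl only, but since this VM will also run Helm commands, Helm can be installed here too. One common approach, sketched below, uses Helm’s official install script (the script URL is Helm’s; the choice to use it here is mine):
# Download and run Helm's official install script, then confirm the install
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version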
Import Kubeconfig from Controller
# On the controller node, copy the kubeconfig to the Admin VM
# (create the ~/kubeconfig directory on the Admin VM first)
sudo scp /etc/kubernetes/admin.conf sanju@192.168.80.14:~/kubeconfig/config
# On the Admin VM, point kubectl at the copied file
echo 'export KUBECONFIG=~/kubeconfig/config' >> ~/.bashrc
source ~/.bashrc
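To confirm the Admin VM can reach the cluster, list the nodes from it; the output should match what you see on the controller:
kubectl get nodes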
You’re ready to deploy workloads or extend your cluster with tools like Longhorn for persistent storage, MetalLB for load balancing, Helm for application deployment, and Portainer for monitoring and management.
After this step, your Kubernetes cluster is fully operational and ready for use. In the upcoming articles, I’ll walk through how to integrate each component, along with optional tools like cert-manager, to build a robust and fully functional home-lab Kubernetes environment.