A Guide to Building a Kubernetes Cluster with Raspberry Pis

Alexander Sniffin
12 min read · Jul 5, 2023

A few years ago, I set up a Kubernetes cluster on Raspberry Pis. At the time, the ARM architecture of the Raspberry Pi posed some challenges: finding applications that supported ARM was tough, and I often had to build my own applications and containers for anything I wanted to use.

However, since then, things have significantly improved! The advent of a new 64-bit Raspberry Pi OS and the growing popularity of ARM in the industry, largely due to its cost-effectiveness for cloud deployments, have made building a Raspberry Pi cluster much simpler. I decided to rebuild my cluster, updating it to a 64-bit OS and the latest versions of both Kubernetes and Docker.

I’ve put together a guide on how you can bootstrap your own Raspberry Pi Kubernetes cluster. I hope it proves useful in your journey of building a home cluster! 🚀

Requirements

You’ll need some hardware to set up the cluster:

  • Raspberry Pis (I used the 4 Model B)
  • 1x SD card per Pi
  • 1x Ethernet cable per Pi
  • A router and/or network switch
  • USB hub
  • (optional) A case

This guide was written for Kubernetes 1.26.6, Docker 24.0.2 and Raspberry Pi OS Lite (64-bit), Bullseye.

OS Setup

For the first step, we’ll need to set up the OS on all of the Pis. Without it, the Raspberry Pi has no system to boot.

Download the Raspberry Pi Imager, a handy application for downloading OS images and flashing SD cards. For this guide we will use the 64-bit headless (Lite) version of Raspberry Pi OS, which is based on Debian.

This will work with the latest Raspberry Pis, but check compatibility before you flash your SD card.

Raspberry Pi Imager

Choose your SD card and begin flashing it with the OS. Repeat this for each SD card until they’re all complete.

Enable SSH and Create a Default User

You’ll need to set up SSH as it’ll allow us to remotely configure each Pi.

Create an empty file named ssh (without any extension) in the boot partition of the SD card to enable SSH.

To set up the user to log in with, create a file called userconf in the same boot partition of the SD card. This file should contain a single line of text in the format {name}:{encrypted-password}. I used node for my login user, but use whatever you want.

To generate the encrypted-password, run the following command with OpenSSL:

echo '{password}' | openssl passwd -6 -stdin
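
Putting it together, a minimal sketch for creating both boot files might look like this. The mount point (/media/$USER/bootfs here) is an assumption that depends on your OS, and node / mypassword are just example values:

# enable SSH on first boot
touch /media/$USER/bootfs/ssh
# create the default user "node" with an encrypted password
echo "node:$(echo 'mypassword' | openssl passwd -6 -stdin)" > /media/$USER/bootfs/userconf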

Save the file and eject the SD card. Insert the SD card into the Raspberry Pi and power it on. Make sure it’s connected to your router or network switch on your private network.

First Boot and Initial Configuration

You’ll need the IP address of each Raspberry Pi; you can find it on your router. I use OpenWrt, and from my DHCP settings I assign each Pi a static IP that’s easy to remember.

DHCP Settings

SSH into your first node; this will be your master node, running the control plane of your cluster. Once we’ve tunneled into the Pi, we can start setting it up!

Add your user to the sudo group with the following command.

sudo usermod -aG sudo node

Now let’s update raspi-config to auto-login with the node user.

sudo raspi-config
Raspberry Pi Config Menu

Navigate to “System Options” → “Boot / Auto Login” and choose “Console Autologin”.

Docker & Kubernetes Initial Set Up

By default the cgroup memory option is disabled; we need to enable it so Docker can limit memory usage. Open /boot/cmdline.txt and append cgroup_enable=memory cgroup_memory=1 to the end of the line (the file must remain a single line), then reboot.
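
If you’d rather not edit the file by hand, a one-liner like this works (run it only once, since it appends in place):

# append the cgroup flags to the end of the single kernel command line
sudo sed -i '$ s/$/ cgroup_enable=memory cgroup_memory=1/' /boot/cmdline.txt
sudo reboot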

Now let’s update our apt sources to include the Kubernetes repository.

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt update && sudo apt upgrade -y

Docker install:

curl -sSL https://get.docker.com | sh
sudo usermod -aG docker node

Dockershim was deprecated in Kubernetes 1.20 and removed in 1.24, so Docker needs a separate CRI adapter. Mirantis provides an open-source one we can use for our cluster called cri-dockerd. To install cri-dockerd and set up the service, run the following commands:

wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.4/cri-dockerd-0.3.4.arm64.tgz
tar -xvzf cri-dockerd-0.3.4.arm64.tgz
sudo mv cri-dockerd/cri-dockerd /usr/bin/cri-dockerd
sudo chmod +x /usr/bin/cri-dockerd
wget https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.service
wget https://raw.githubusercontent.com/Mirantis/cri-dockerd/master/packaging/systemd/cri-docker.socket
sudo mv cri-docker.service /etc/systemd/system/
sudo mv cri-docker.socket /etc/systemd/system/
sudo systemctl enable cri-docker.service
sudo systemctl enable cri-docker.socket
sudo systemctl start cri-docker.service
sudo systemctl start cri-docker.socket
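
Before moving on, it doesn’t hurt to confirm the new runtime is actually up:

sudo systemctl status cri-docker.service
cri-dockerd --version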

Kubernetes expects swap to be disabled on each node (by default the kubelet won’t start with swap enabled), so turn it off:

sudo apt-get update && sudo apt-get install dphys-swapfile && sudo dphys-swapfile swapoff && sudo dphys-swapfile uninstall && sudo systemctl disable dphys-swapfile

If you run into problems setting up cri-dockerd, check the project’s own documentation, as some details might have changed since I originally wrote this.

Finally, let’s install Kubernetes!

sudo apt install -y kubelet=1.26.6-00 kubeadm=1.26.6-00 kubectl=1.26.6-00
sudo apt-mark hold kubelet kubeadm kubectl

For this guide, I’ve tested everything on 1.26.6; versions earlier than 1.24 won’t work correctly with this setup. We mark the packages with apt-mark hold to prevent them from being upgraded.
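
If you want to double-check which versions landed after the hold, a quick sanity check:

kubeadm version -o short
kubectl version --client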

Alternatively, k3s from Rancher Labs would be a good lightweight option. Its advantages include a small binary size, very low resource requirements, and ARM optimization. I haven’t tested it for this guide, but I imagine the setup from this point on would be similar.

Time to initialize our cluster. To do this, we’ll create a file (kubeadm-config.yaml) with our InitConfiguration and ClusterConfiguration settings.

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: {token}
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.0.100
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  imagePullPolicy: IfNotPresent
  name: node-0
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "10.244.0.0/16" # --pod-network-cidr

This file holds our master node’s configuration: criSocket points at cri-dockerd, advertiseAddress should be your master node’s IP, and podSubnet sets the Pod network CIDR we’ll need for the network plugin later.
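
If you don’t want to make up the {token} placeholder by hand, kubeadm can generate a correctly formatted one for you:

kubeadm token generate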

To initialize our control plane on this node, run the following.

sudo kubeadm init --config kubeadm-config.yaml

This will output the command for joining new nodes to the cluster, as well as the instructions for setting up your kube-config.

Set up your kube-config following the instructions from the output, and copy both the kube-config and the join command to your workstation; we’ll need them later!
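
For reference, the kube-config setup that kubeadm prints looks roughly like this:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config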

Cluster Networking

Now we need to set up networking in our cluster. For Pods to be able to communicate with each other across our nodes, a network plugin (also referred to as a CNI or Container Network Interface) is needed.

The network plugin provides networking capabilities to the Pods, such as IP address assignment, DNS resolution and network isolation.

We’ll use Flannel to do this.

Flannel runs a small, single binary agent called flanneld on each host, and is responsible for allocating a subnet lease to each host out of a larger, preconfigured address space. Flannel uses either the Kubernetes API or etcd directly to store the network configuration, the allocated subnets, and any auxiliary data (such as the host's public IP). Packets are forwarded using one of several backend mechanisms including VXLAN and various cloud integrations.

Run the following from the master node.

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
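
Before moving on, you can check that the Flannel pods come up; depending on the manifest version they land in either the kube-flannel or kube-system namespace.

kubectl get pods -A | grep flannel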

That’s it! Our master node is complete and we can begin adding new nodes to the cluster. Remember the join command outputted earlier? We’ll need that now.

Adding New Nodes To The Cluster

Adding a new node to the cluster is fairly simple. If you’re adding a lot of nodes, you’ll probably want to multiplex your sessions with a tool like tmux.

Complete the “First Boot and Initial Configuration” section and work through “Docker & Kubernetes Initial Set Up”, stopping after the step where you install the Kubernetes packages. At that point, run the kubeadm join command from before, being sure to include the cri-socket and node-name options.

sudo kubeadm join 10.0.0.100:6443 --token {token} --discovery-token-ca-cert-hash {hash} --cri-socket unix:///var/run/cri-dockerd.sock --node-name {name}

Now monitor your cluster from your master node and ensure all nodes join the cluster.

> watch kubectl get nodes
NAME     STATUS   ROLES           AGE   VERSION
node-0   Ready    control-plane   20h   v1.26.6
node-1   Ready    <none>          19h   v1.26.6
node-2   Ready    <none>          19h   v1.26.6
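
The <none> under ROLES for the workers is just a missing label. If you’d like the output to read nicer, you can optionally label them (purely cosmetic):

kubectl label node node-1 node-role.kubernetes.io/worker=worker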

Your cluster is now ready for use! You’ll probably want to access it from your workstation rather than over SSH, though. From your computer, you can now set up the kube-config you saved earlier.

The default kube-config gives you admin privileges and shouldn’t be shared with other people.
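
If you haven’t already, copy the saved config into place on your workstation. Assuming the node user and the 10.0.0.100 master address from earlier, something like this works:

mkdir -p ~/.kube
scp node@10.0.0.100:~/.kube/config ~/.kube/config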

Point KUBECONFIG at the config file first (for example, in your shell profile).

export KUBECONFIG=~/.kube/config

Set the context:

kubectl config use-context kubernetes-admin@kubernetes

You should now be able to access your cluster remotely.

> kubectl cluster-info
Kubernetes control plane is running at https://10.0.0.100:6443
CoreDNS is running at https://10.0.0.100:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Tooling

Now to upgrade our cluster from vanilla to awesome. Let’s set up some commonly used tools that’ll let us easily deploy new applications and monitor our cluster. For this, I’ll go over installing ArgoCD, Prometheus and Grafana! Three open-source projects that’ll take our cluster to the next level.

Before continuing, I recommend creating a remote git repository to track the configuration for these tools. This is particularly useful with ArgoCD, since we’ll add each tool (and any additional applications) through it for deployment.

ArgoCD

For each tool, we’ll use Helm as our resource templater. Install the latest version (or at least Helm v3) and let’s add the ArgoCD repository.

helm repo add argo https://argoproj.github.io/argo-helm

Create a values.yaml file.

server:
  serviceType: NodePort
  httpNodePort: 30080
  httpsNodePort: 30443

This file can be used to override any of the settings from the chart. In this case, I’m changing the Service to run as a NodePort vs. a ClusterIP. This will expose the specified ports from the cluster so that we can access it from our private network without using a reverse proxy.

Install the chart.

helm install argocd argo/argo-cd -n argocd --create-namespace -f values.yaml

You’ll then want to grab the default password for the admin user.

kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

You should be able to access ArgoCD from any of your nodes by going to NodeIP:httpsNodePort in your browser. Because I use OpenWrt, I can set a hostname entry for my cluster and reach the login page at https://cluster.home:30443.

DNS Entry

Login to ArgoCD and we’ll come back to it shortly.

ArgoCD Login

Prometheus

We’ll use Prometheus as our timeseries metric server for gathering information about our cluster.

Before we can install it, we should set up a persistent volume for Prometheus to store its data. As this is just a home cluster, I opted to use a spare USB drive, but you can attach and use whatever you want.

Here are the steps I took to set up the volume on my master node. Create the mount path for the volume and a backup of fstab, since we’ll be making changes that could break the boot volume if we make a mistake.

sudo mkdir /mnt/usb
sudo cp /etc/fstab /etc/fstab.bak

Attach the device, then append a line like the following to /etc/fstab (adjust the device and filesystem type to match your drive).

/dev/sda1 /mnt/usb vfat defaults,uid=youruid,gid=yourgid,dmask=002,fmask=113 0 0

Now mount the device with the uid/gid settings for the node user.

sudo mount -o uid=youruid,gid=yourgid,dmask=002,fmask=113 /dev/sdX1 /mnt/usb
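
If you’re not sure which device name, uid or gid to use, these commands help (node being the user created earlier):

# list block devices and their filesystems
lsblk -f
# show the uid and gid of the node user
id node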

We’ll now want to create a Kubernetes resource file with our PersistentVolume and PersistentVolumeClaim.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: prometheus-usb-pv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: {size of device}Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/mnt/usb"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-usb-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: {size of device}Gi

If you’re using a git repo, place these files in a new Helm chart under the templates directory. Follow the next steps to proceed.

Let’s set up our chart with Prometheus.

helm create prometheus

Add the Prometheus subchart as a dependency in the Chart.yaml.

dependencies:
  - name: prometheus
    version: 22.7.0
    repository: https://prometheus-community.github.io/helm-charts
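
If you want to render or install the chart by hand outside of ArgoCD, pull the subchart down first (assuming the chart directory is named prometheus, as created above):

helm dependency update prometheus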

We can now add configuration to use the new PV and PVC, fix some permissions, and make sure the server is only deployed to our master node.

prometheus:
  alertmanager:
    enabled: false
  prometheus-pushgateway:
    enabled: false
  configmapReload:
    prometheus:
      enabled: false
  server:
    nodeSelector:
      kubernetes.io/hostname: {master node}
    securityContext:
      runAsUser: {userid}
      runAsNonRoot: true
      runAsGroup: {groupid}
      fsGroup: {fsid}
    persistentVolume:
      enabled: true
      existingClaim: "prometheus-usb-pvc"
      volumeName: "prometheus-usb-pv"
This also disables some extras: Alertmanager, the pushgateway and configmap-reload. These can be enabled later if needed; Alertmanager, for example, is useful for getting notifications when things behave abnormally.

Back in ArgoCD, let’s create a “New App”: name it Prometheus, add your git repo as the source and select the path. You’ll do this again later for Grafana, so keep the two tools in separate paths.

ArgoCD New App

Select the values file to apply the custom settings we’ve created, then create the App. You’ll need to sync it if you chose manual syncing; that’s nice when you do upgrades and want to release manually. Otherwise, automatic syncing is useful for CD and probably the best option for home projects.

Prometheus Deployment

Grafana

Similarly to Prometheus, we should start by creating a new Helm chart in our git repo.

helm create grafana

Then add the helm repo.

helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

Add the Grafana subchart as a dependency in the Chart.yaml.

dependencies:
  - name: grafana
    version: 6.57.4
    repository: https://grafana.github.io/helm-charts

Add a values.yaml file.

grafana:
  service:
    enabled: true
    type: NodePort
    nodePort: 30180

Then, same as before, add Grafana through ArgoCD. Sync it, and you should now have both running.

Applications

Before you can use Grafana you’ll need to get the admin password.

kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

Login with the admin user and password from the output.

Grafana

Now to add the Prometheus data source. Within the cluster, the Prometheus service can be reached at http://prometheus-server.monitoring.svc.cluster.local, where monitoring is the namespace you deployed it in. Go to “Administration” → “Data sources” → “Add new data source”, add the URL, and “Save & Test” to verify.

Add Prometheus Data Source

If we want a simple dashboard showing the state of our cluster, we can use the dashboard provided by Grafana Labs. It gives a simple view of the resources being used in the cluster.

Dashboard

Next Steps

Custom Docker Images

Being able to deploy containers that aren’t on the public docker.io registry, including your own custom images, is an essential step in running your cluster. I recommend setting up a private container registry if you plan on deploying lots of containers, as this avoids the limitations of the free-tier Docker Hub. This can be done with cloud offerings like GCP’s Artifact Registry, which can be cheaper than Docker Hub, or with open-source registries like Harbor.
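
If you do set up a private registry, the cluster will need pull credentials for it. A minimal sketch, where the registry URL, username and secret name are placeholders rather than anything from this guide:

kubectl create secret docker-registry regcred --docker-server=registry.example.com --docker-username={user} --docker-password={password}

Pods (or a service account) can then reference that secret via imagePullSecrets to pull your private images.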

Cluster Automation

This guide provides a manual approach to setting up a Kubernetes cluster, ideal for educational purposes or managing small personal clusters. However, for deploying a production cluster or for tasks beyond the scope of this guide, I’d recommend utilizing automation tools such as Ansible. This ensures a more efficient, scalable, and manageable deployment.

Conclusion

Setting up a Kubernetes cluster can be a non-trivial process, but once completed, the advantages allow for a scalable environment that goes beyond a typical standalone server.

Raspberry Pis are a good low-cost, low-power option, but they still have limitations in how they scale for larger applications. An advantage of a Kubernetes cluster is that you aren’t limited to running the same hardware everywhere: you can mix and match different hardware, including different Raspberry Pis or servers, to meet your needs!

Hopefully this has been a good starting point, thanks for reading!

