ArchLinuxARM 64-bit and Kubernetes on Raspberry Pi

Oleg Dawydiak
Aug 16, 2022

Following my successful experiments a year ago with Kubernetes on Raspberry Pi running ArchLinuxARM, and after a brief survey of the Internet, I realized that for the task, with the starting requirements and conditions I had set for myself, there were not enough up-to-date guides and complete instructions. Therefore, I decided to document my work, results, and conclusions, which may be useful to others.

Let’s start with the requirements and conditions.

  1. The cluster has to run on Raspberry Pi boards (RPi further in the text). On the one hand, I have always been interested in circuit engineering, things closer to bare metal: drivers, processor internals, processor architectures, the soldering iron… 😊. On the other hand, I am also keen on higher-level problems and tasks such as virtualization, containers, container orchestration, etc. Several years ago, my colleagues gave me three RPis for my birthday, so I did not think long about how to use them: Kubernetes, of course, since it combines containers and their orchestration on one side with the specific and not yet widespread ARM processor architecture on the other.
  2. Every RPi has to run Arch Linux, and a 64-bit build of it. Why do I insist on this OS? Historically, it is my favorite distro, because it is powerful, flexible, and lightweight at the same time. If you search the Internet for “kubernetes on Raspberry Pi”, you will find a bunch of guides, but almost all of them target either the native RaspbianOS or Ubuntu. As for ArchLinuxARM on RPi… I won’t say there are none at all; there are some, but they are either incomplete or old and no longer accurate. So I decided to fill this gap. By the way, at the time of my first experiments (spring 2021), RaspbianOS did not yet support aarch64, while ArchLinuxARM already did, which is another plus for this distro.
  3. Back in 2020, Kubernetes announced the deprecation of Docker support (more precisely, of dockershim) and committed to the CRI (more details here), so I decided to try alternatives to Docker. In simple words: Docker would not be installed on my RPis, which sounded like quite a challenge a year ago.

Installing ArchLinuxARM

Regarding installation on a microSD card, there are no surprises: both the 32-bit and the 64-bit versions of this distro install without problems. There are enough guides on the Internet (for example, the official one: https://archlinuxarm.org/platforms/armv8/broadcom/raspberry-pi-4 ), so I will not repeat them here.

After last year’s experiments, I decided to improve my storage setup, that is, to abandon microSD cards, as they are slow and not durable. For this purpose, I purchased X857 expansion boards from Suptronics ( http://www.suptronics.com/miniPCkits/x857.html ). This is an ordinary mSATA-to-USB3.0 adapter, only in a form factor “sharpened” for the RPi. Together with an mSATA SSD (I bought a 256GB Kingston), my “raspberries” got a much more serious storage system than microSD, and, perhaps most importantly, data access speed increased roughly fourfold. Since I plan to abandon microSD completely, the next step is booting the OS from USB. For 32-bit ArchLinuxARM this posed no problems, but the 64-bit version has some nuances.

Preparing a SATA disk with 64-bit ArchLinuxARM

You need to have:

MicroSD card with RaspbianOS (how to prepare here).

MicroSD card with ArchlinuxARM (how to prepare here). After you have prepared this microSD card, download ArchLinuxARM-rpi-aarch64-latest.tar.gz to your computer and copy it to the /home/alarm folder on the prepared card.
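How you copy it does not matter; as a minimal sketch, assuming the card’s root partition is mounted on your computer at /mnt/alarm-root (a hypothetical path, adjust to your actual mount point):

# /mnt/alarm-root is a hypothetical mount point for the card's root partition
sudo cp ArchLinuxARM-rpi-aarch64-latest.tar.gz /mnt/alarm-root/home/alarm/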

  1. boot from the card with RaspbianOS
  2. sudo apt update
  3. sudo apt full-upgrade
  4. sudo rpi-eeprom-update -a
  5. sudo reboot
  6. sudo raspi-config, then select Advanced Options -> Boot Order -> USB Boot
  7. turn off the RPi, remove the card with RaspbianOS from RPi
  8. boot from the card with ArchlinuxARM and log in as user alarm with password alarm. After booting, DO NOT UPDATE(!!!) Arch Linux, that is, do not run pacman -Syu. All further steps are performed from the home folder of the alarm user
  9. connect the SSD device from which we are going to boot to a USB port
  10. sudo fdisk /dev/sda
  11. type o to wipe all partitions on the device
  12. type p to verify that no partitions remain
  13. type n, then p, then 1 for the first partition on the device, press ENTER to accept the default first sector, then type +256M for the last sector
  14. type t, then c, to set the type of the first partition to W95 FAT32 (LBA)
  15. type n, then p, 2 for the second partition on the device, press ENTER to accept the first sector by default, then type +64G for the last sector
  16. type n, then p, 3 for the third partition on the device, press ENTER twice to accept the first and last sector by default
  17. type w to save the partition table and exit
  18. create and mount the FAT partition:
sudo mkfs.vfat /dev/sda1
mkdir boot
sudo mount /dev/sda1 boot

19. create and mount the ext4 partition:

sudo mkfs.ext4 /dev/sda2
mkdir root
sudo mount /dev/sda2 root

20. sudo bsdtar -xpf ArchLinuxARM-rpi-aarch64-latest.tar.gz -C root

21. sync

22. sudo mv root/boot/* boot

23. edit /etc/mkinitcpio.conf, add pcie_brcmstb to the MODULES array, and save

24. sudo mkinitcpio -P

25. sudo cp /boot/initramfs-linux.img boot

26. sudo cp /boot/initramfs-linux-fallback.img boot

27. edit root/etc/mkinitcpio.conf, likewise add pcie_brcmstb to MODULES, and save; a quick sanity check of the resulting layout follows below
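Before moving on, it is worth sanity-checking the partition layout; a minimal sketch (the sizes follow the steps above, and the third partition depends on your SSD capacity):

sudo fdisk -l /dev/sda
# expected, roughly:
# /dev/sda1  256M  W95 FAT32 (LBA)  -> boot
# /dev/sda2   64G  Linux            -> root
# /dev/sda3  rest  Linux            -> remaining space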

That is all. Remove the microSD card, leave the SSD connected to USB, and reboot the RPi: our “raspberry” now boots from a SATA disk connected via USB. All these steps should be repeated on every RPi from which you want to build the cluster.

Further configuration of ArchLinuxARM and installation of the additional applications and utilities required to run a Kubernetes cluster

The steps in this section should be performed on each of your RPis. In the /etc/hostname file, specify a host name that is unique within your network; I chose pi0, pi1, and pi2, respectively. In the /etc/hosts file, set localhost and add the addresses of the other nodes that will be part of the cluster. This is what /etc/hosts looks like on my pi0, for example:

# Static table lookup for hostnames.
# See hosts(5) for details.
127.0.0.1 localhost
192.168.1.15 pi1
192.168.1.21 pi2

Next, configure the network interface in the /etc/systemd/network/eth.network file so that the host always has the same IP address. You also need to specify Gateway, DNS, and Domains there, and your DNS server must be able to resolve the Domains you specify in this file. This is what /etc/systemd/network/eth.network looks like on my pi0, for example:

[Match]
Name=eth*
[Network]
Address=192.168.1.14/24
Gateway=192.168.1.1
DNS=192.168.1.15
DHCP=no
DNSSEC=no
Domains=dob.home

Restart the network interface with sudo systemctl restart systemd-networkd.
Correct network settings are extremely important for k8s. To verify that everything works as it should, check with the nslookup utility: you should get correct results both for internal hostnames (e.g. nslookup pi2) and for external domain names (e.g. nslookup docker.io).
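A quick check, using the addresses from my setup above:

nslookup pi2         # should return the internal address (192.168.1.21 in my case)
nslookup docker.io   # should resolve via your upstream DNS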
Now let’s install a set of necessary utilities; here is the list:
git, base-devel, htop, socat, ethtool, ebtables, cni-plugins, crun, cri-o, crictl.
We also need to install an AUR helper (I use yay), which will be used to install some Kubernetes packages; a sketch of both steps is shown below.
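A minimal sketch of both steps, assuming all the listed packages are available in the standard ArchLinuxARM repositories:

sudo pacman -S --needed git base-devel htop socat ethtool ebtables cni-plugins crun cri-o crictl
# build and install the yay AUR helper:
git clone https://aur.archlinux.org/yay.git
cd yay
makepkg -si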
Next, run the following commands:
sudo sh -c 'echo "net.ipv4.ip_forward=1" >> /etc/sysctl.d/30-ipforward.conf'
sudo sh -c 'echo "br_netfilter" >> /etc/modules-load.d/br_netfilter.conf'
sudo sh -c 'echo "xt_set" > /etc/modules-load.d/xt_set.conf'
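These settings take effect on the next boot; to apply them immediately, something like this should work:

sudo sysctl --system                   # reload sysctl settings from all drop-in files
sudo modprobe -a br_netfilter xt_set   # load both modules right away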

Edit /etc/containers/policy.json, in which we should leave only the following option:

{
"default": [{"type": "insecureAcceptAnything"}]
}

Now let’s configure our CRI-O container runtime. To do this, edit /etc/crio/crio.conf; here is its content:

# /etc/crio/crio.conf
[crio.api]
listen = "/var/run/crio/crio.sock"
[crio.image]
default_transport = "docker://"
containers-registries = ["docker.io"]
pause_image = "registry.k8s.io/pause:3.7"
pause_image_auth_file = ""
pause_command = "/pause"
[crio.network]
network_dir = "/etc/cni/net.d/"
plugin_dirs = ["/opt/cni/bin"]
[crio.runtime]
cgroup_manager = "systemd"
conmon = "/usr/bin/conmon"
default_capabilities = [
"CHOWN",
"DAC_OVERRIDE",
"FSETID",
"FOWNER",
"NET_RAW",
"SETGID",
"SETUID",
"SETPCAP",
"NET_BIND_SERVICE",
"SYS_CHROOT",
"KILL",
]
default_runtime = "crun"
[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"
runtime_type = "oci"
runtime_root = "/run/crun"

Then run

sudo systemctl daemon-reload
sudo systemctl enable crio
sudo systemctl start crio

and check the status by running

sudo systemctl status crio

We should see the crio service reported as active (running); abridged, the output looks roughly like this (exact paths and timestamps will differ):
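● crio.service - Container Runtime Interface for OCI (CRI-O)
     Loaded: loaded (/usr/lib/systemd/system/crio.service; enabled; ...)
     Active: active (running) since ...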

Edit /etc/crictl.yaml; the file should have the following content:

runtime-endpoint: "unix:///run/crio/crio.sock"
image-endpoint: "unix:///run/crio/crio.sock"
timeout: 10
debug: false

At this stage, the configuration of the container environment is complete. Note that we do not have Docker on the nodes; instead we have CRI-O, which works together with crun. With the crictl utility, we can perform all the container operations we are used to doing with docker.
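A few illustrative commands and their rough docker counterparts (the image name is just an example):

sudo crictl info     # runtime status (cf. docker info)
sudo crictl images   # list images (cf. docker images)
sudo crictl ps -a    # list all containers (cf. docker ps -a)
sudo crictl pull docker.io/library/alpine:latest   # pull an image (cf. docker pull)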

Installing Kubernetes-specific applications and running the cluster

Run

yay -S kubelet-bin kubeadm-bin kubectl-bin

then edit /usr/lib/systemd/system/kubelet.service; this is what its content should be:

[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/home/
Wants=network-online.target
After=network-online.target
[Service]
ExecStart=/usr/bin/kubelet --container-runtime=remote --container-runtime-endpoint=unix:///run/crio/crio.sock
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target

Apply changes by:

sudo systemctl daemon-reload
sudo systemctl enable kubelet.service

We are ready to initialize the master node (my master node is pi0). First switch to the root user via sudo -i, then run:

kubeadm init --pod-network-cidr=10.244.0.0/16

Make a note of the kubeadm join command printed at the end of the output; its token will be needed to connect the other nodes.

As the pod network for the cluster, I chose Flannel with Calico for network policy (also known as Canal). We should do the following:

curl https://projectcalico.docs.tigera.io/manifests/canal.yaml -O
kubectl apply -f canal.yaml

Connect the other nodes (for me, pi1 and pi2) with the command

kubeadm join pi0:6443 --token <YOUR_TOKEN>

Where <YOUR_TOKEN> is the token you received when the master node was initialized.
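If you did not save the token, or it has expired, a fresh join command can be generated on the master node:

# run on the master (pi0):
kubeadm token create --print-join-command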

Now our Kubernetes cluster is ready. The lists of nodes and pods can be viewed with the commands

kubectl get nodes -o wide
kubectl get pods -A -o wide

This is what the current state of my cluster looks like. As you can see, the ArchLinuxARM kernel is version 5.18.1-1-aarch64, Kubernetes is 1.24.0, and the container runtime is cri-o://1.24.1; that is, as of June 2022, these are almost the latest versions. All pods are ready and in Running status. My starting conditions have been fulfilled; congratulations to both me and you 😊.
