Mo’ tenancy, Mo’ problems.

A curated (but not exhaustive) list of FOSS projects addressing multi-tenancy challenges in K8s.

Divya Mohan
5 min read · Jul 19, 2022

When I asked my LinkedIn feed a couple of weeks back what they’d like to see more of in the Kubernetes multi-tenancy area, a resounding share (~50%) voted for stories of implementation. This wasn’t the least bit surprising, given the administration and maintenance nightmare the exercise turns into once you’re on the implementing side.

This is not to diss Kubernetes or to make light of the effort that goes into releasing, upgrading, and maintaining all the features we take for granted. However, the truth of the matter is that true hard multi-tenancy is still very difficult to achieve in Kubernetes without jumping through multiple hoops. This Reddit comment describes a very basic checklist of items that you’d need to address regardless of the type of multi-tenancy you choose. Even though it’s from five years ago, it is still a pretty good baseline.

As the author of the comment mentions, this list isn’t even exhaustive because there would be workload-specific requirements that would need to be catered to as well.
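Whichever project you end up adopting, the baseline items on such a checklist map to stock Kubernetes objects. As a minimal sketch (namespace and resource names are illustrative), per-tenant isolation usually starts with a ResourceQuota to cap consumption and a default-deny NetworkPolicy to stop cross-namespace traffic:

```yaml
# Illustrative baseline: cap what a tenant namespace can consume...
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    pods: "20"
---
# ...and deny all ingress traffic by default; allow-rules are then
# added per workload as needed.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: tenant-a
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```

On top of these you would typically layer RBAC Roles/RoleBindings scoped to the namespace — which is exactly the plumbing most of the projects below automate.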

With this post, I aim to explore the various open source projects cropping up in this space and how they address the challenges associated with multi-tenancy. Think of it as a TL;DR version of each project’s documentation or README.

Disclaimer: There are certified Kubernetes distributions and platforms, such as OpenShift, Rancher, and GKE, that offer multi-tenancy as a partially inherent capability. I do not cover those cases. Additionally, this is not an exhaustive list and is based on the research I have done.

1. KubeSlice

GitHub repo: https://github.com/kubeslice

Documentation: https://kubeslice.io

What does it do differently?

KubeSlice creates an abstraction called a Slice, either spanning multiple clusters or contained within a single one, allowing application namespaces (i.e., tenants) to be isolated by association with the Slice boundary.
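As a rough sketch of what defining a Slice looks like — field names here are illustrative and should be checked against the KubeSlice docs, since the exact schema may differ:

```yaml
# Hypothetical Slice spanning two clusters; names and fields are
# illustrative, not a verbatim copy of the KubeSlice schema.
apiVersion: controller.kubeslice.io/v1alpha1
kind: SliceConfig
metadata:
  name: team-a-slice
  namespace: kubeslice-acme      # project namespace on the controller cluster (assumed)
spec:
  sliceSubnet: 10.1.0.0/16       # overlay subnet shared by the slice
  clusters:
    - cluster-east
    - cluster-west
  namespaceIsolationProfile:
    isolationEnabled: true
    applicationNamespaces:       # tenant namespaces bound to the slice
      - namespace: team-a
        clusters: ["*"]
```

The key idea is that isolation follows the Slice membership rather than per-cluster configuration: onboarding a namespace onto the slice is what grants (and bounds) its connectivity.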

Visualization:

Source: https://kubeslice.io

2. Capsule

GitHub repo: https://github.com/clastix/capsule

Documentation: https://capsule.clastix.io/docs/general

What does it do differently?

Capsule groups multiple namespaces within a single cluster into an abstract construct called a Tenant, which tenant users can operate autonomously, without the cluster admin’s intervention.
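A hedged sketch of a Tenant definition follows; the API version varies across Capsule releases and the quota field name is an assumption, so treat this as illustrative:

```yaml
# Sketch of a Capsule Tenant owned by one team.
apiVersion: capsule.clastix.io/v1beta2   # version differs across releases
kind: Tenant
metadata:
  name: team-a
spec:
  owners:
    - name: alice        # this user can self-service namespaces in the tenant
      kind: User
  namespaceOptions:
    quota: 5             # cap on namespaces the tenant may create (assumed field)
```

Once a Tenant like this exists, an owner such as alice can create namespaces herself and Capsule associates them with the tenant, enforcing the tenant-level quotas and policies automatically.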

Visualization:

Source: https://capsule.clastix.io

3. Kamaji

GitHub repo: https://github.com/clastix/kamaji

Documentation: https://github.com/clastix/kamaji/tree/master/docs

What does it do differently?

Kamaji tackles etcd scalability and state persistence by provisioning a central etcd cluster. Every tenant is a cluster of its own, provisioned with control plane components that connect to the central etcd cluster and with associated worker nodes.
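Declaring a tenant control plane might look roughly like the following — the field layout is a sketch based on the project’s examples and may not match the current schema exactly:

```yaml
# Hypothetical TenantControlPlane sketch; field names are illustrative.
apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
  name: tenant-a
spec:
  kubernetes:
    version: v1.24.3           # Kubernetes version for this tenant's control plane
  controlPlane:
    deployment:
      replicas: 2              # control plane pods run in the management cluster
    service:
      serviceType: LoadBalancer  # how tenant workers reach their API server
```

Notice what is absent: there is no per-tenant etcd. The tenant control planes are ordinary pods in the management cluster whose state lands in the shared central datastore, which is where the scalability win comes from.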

Visualization:

Source: https://github.com/clastix/kamaji

4. vCluster

GitHub repo: https://github.com/loft-sh/vcluster

Documentation: https://www.vcluster.com/docs/what-are-virtual-clusters

What does it do differently?

vCluster runs virtual Kubernetes clusters on top of other Kubernetes clusters, much like multiple VMs running on the same physical server. A Kubernetes hypervisor emulates networking and worker nodes inside the virtual cluster and syncs low-level resources such as pods and services from the virtual cluster down to the underlying host cluster.
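To make the syncing concrete, here is a hedged illustration; the translated name shown in the comments is an implementation detail and the exact scheme is version-dependent:

```yaml
# Inside the virtual cluster, a tenant creates an ordinary Pod:
apiVersion: v1
kind: Pod
metadata:
  name: web
  namespace: default
spec:
  containers:
    - name: web
      image: nginx
# The syncer then materializes a corresponding Pod in the host
# cluster, inside the single namespace where the vcluster runs,
# under a translated name along the lines of
# "web-x-default-x-my-vcluster" (illustrative). The tenant sees a
# whole cluster; the host admin sees one namespace of renamed pods.
```

This is why vCluster can offer cluster-scoped freedom (tenants get their own API server, CRDs, and RBAC) while the host cluster only ever has to police a single namespace per tenant.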

Visualization:

Source: https://vcluster.com/docs

5. Kiosk

GitHub repo: https://github.com/loft-sh/kiosk

Documentation: https://github.com/loft-sh/kiosk

What does it do differently?

kiosk introduces an abstraction called a Space, which consists of a single namespace created from templated Kubernetes resources for running apps in isolation. The initial configuration of kiosk must, of course, be done by the cluster administrator; after that, tenants can provision isolated namespaces in a self-service fashion per their requirements.
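The two-step model — admin sets up an Account, tenants request Spaces — can be sketched roughly as follows; the API versions are taken from the project’s examples and may change:

```yaml
# Admin-side: an Account ties a tenant's users to self-service limits.
apiVersion: config.kiosk.sh/v1alpha1
kind: Account
metadata:
  name: team-a
spec:
  subjects:
    - kind: User
      name: alice
      apiGroup: rbac.authorization.k8s.io
---
# Tenant-side: alice requests an isolated namespace via a Space.
apiVersion: tenancy.kiosk.sh/v1alpha1
kind: Space
metadata:
  name: team-a-dev
spec:
  account: team-a
```

Creating the Space provisions the underlying namespace with whatever templated resources (quotas, network policies, role bindings) the admin attached to the account.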

Visualization:

6. kcp

GitHub repo: https://github.com/kcp-dev/kcp

Documentation: https://github.com/kcp-dev/kcp/tree/main/docs

What does it do differently?

kcp implements a central etcd cluster and a minimal Kubernetes API server that is divided into isolated logical clusters, enabling multi-tenancy of cluster-scoped resources such as CRDs and namespaces. This allows different teams, workloads, and use cases to live side by side.
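Conceptually, each logical cluster is declared as a workspace object; the exact API group and kind have changed across kcp releases, so the names below are purely illustrative:

```yaml
# Hypothetical sketch of a kcp workspace; group/kind are illustrative
# and release-dependent — consult the kcp docs for the current API.
apiVersion: tenancy.kcp.dev/v1alpha1
kind: Workspace
metadata:
  name: team-a
```

Each workspace then behaves like an independent API server endpoint: team-a can install a CRD, and team-b’s workspace never sees it, even though both are served by the same minimal kcp process backed by the same central etcd.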

Visualization:

Source: https://github.com/kcp-dev/kcp

7. KubePlus

GitHub repo: https://github.com/cloud-ark/kubeplus

Documentation: https://cloud-ark.github.io/kubeplus/docs

What does it do differently?

KubePlus enables namespace-based multi-tenancy through a CRD for creating new Kubernetes APIs (themselves CRDs). Each new API provisions a Helm release per tenant, with tenant-level isolation, monitoring, and consumption tracking.
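A heavily hedged sketch of the “CRD-for-CRDs” idea follows; the group, kind, field names, and chart URL here are all illustrative placeholders, not the verbatim KubePlus schema:

```yaml
# Hypothetical sketch: wrap a Helm chart as a new tenant-facing API.
# All names below are illustrative; see the KubePlus docs for the
# actual ResourceComposition schema.
apiVersion: workflows.kubeplus/v1alpha1
kind: ResourceComposition
metadata:
  name: wordpress-service
spec:
  newResource:
    resource:
      kind: WordpressService            # the new per-tenant API (assumed)
      group: platformapi.kubeplus       # assumed group
      version: v1alpha1
      plural: wordpressservices
    chartName: wordpress
    chartURL: https://charts.example.com/wordpress-0.1.0.tgz  # placeholder URL
```

Once registered, each tenant creating an instance of the new kind gets their own Helm release in their own namespace, and the operator can meter that instance’s resource consumption per tenant.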

Visualization:

Source: https://cloud-ark.github.io/kubeplus/docs

That’s it for this post! In the next set of posts, I hope to dive deep into each of these projects. Are there any specific things you’d like to see more of, as a reader, when it comes to the deep dive? Do leave a comment below.

Also, these are projects that have been on my radar and I could be missing some interesting goings-on in the cloud native space. If you know of any other projects in the open-source space related to Kubernetes multi-tenancy, please do leave a comment below so that I can check them out.


Divya Mohan

Technical Evangelism @ Rancher by SUSE • SIG Docs co-chair @ Kubernetes