Kubernetes is just Linux

Eric Jalal
7 min read · Feb 18, 2024

I started working with Kubernetes a couple of years ago. At first, this software looks like a huge technology built from scratch, doing an extraordinary set of operations on your servers to achieve orchestration.

First of all, Kubernetes is one of many orchestration tools, such as:
- Nomad
- Apache Mesos
- Docker Swarm
and many more.

Linux and Kubernetes share several foundational concepts, because Kubernetes was designed to run and manage containerized applications, and containers themselves are a Linux feature. However, Kubernetes extends far beyond just wrapping Linux features; it provides orchestration capabilities that are not native to Linux. But still, at its heart, Kubernetes is a wrapper around Linux features.

So why Kubernetes? Because it is simple!

Kubernetes is designed to make building and running complex applications easier. It has a nicely decoupled architecture that gives developers a free hand to write their own operators to work with controllers, or even a new implementation of an interface (e.g. CNI or CSI). A minimal example of that extensibility follows below.
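
As a minimal sketch of that extensibility: the snippet below registers a hypothetical CustomResourceDefinition so the API server starts serving a brand-new resource type. The demo.example.com group and the Backup kind are made-up names for illustration, and a real operator watching these objects would be a separate program.

```bash
# Register a hypothetical "Backup" resource type with the API server.
# The demo.example.com group and Backup kind are purely illustrative.
kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.demo.example.com
spec:
  group: demo.example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string
EOF

# The new resource behaves like any built-in one:
kubectl get backups -A
```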

Kubernetes uses Linux features and automates them to achieve scalability and high availability. Distributing the computing power of servers and their storage through object storage solutions is not a newly invented technology, and the same goes for security.

For instance:
- RBAC, abbreviated from Role-Based Access Control, was released on December 22, 2000 as part of the SELinux project. SELinux, an abbreviation of Security-Enhanced Linux, is a security architecture for Linux systems that allows administrators more control over who can access the system. It was originally developed by the United States National Security Agency (NSA) as a series of patches to the Linux kernel using Linux Security Modules (LSM). In Kubernetes, RBAC is the term used for the access-control model governing operators and applications.
Of course, there are other access-control models, such as ABAC and PBAC, which have not been carried into the Kubernetes ecosystem to the same extent.

- CGROUPS, abbreviated from Control Groups, are a Linux kernel capability that provides resource management functionality, such as limiting CPU usage or setting memory limits for running processes.

- CONTAINERS, Kubernetes manages containers, which are a Linux feature enabled by namespaces and cgroups. Containers provide process isolation and resource allocation, which are core to both Kubernetes and Linux.

- NAMESPACES, in Linux, are a feature for isolating process ID numbers, network interfaces, mount points, and other aspects of a process. Kubernetes also uses the concept of namespaces, but at a higher level, to isolate and manage groups of resources within a cluster. A hands-on sketch of both the Linux side and the Kubernetes side follows below.
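
To make the mapping concrete, here is a minimal, hedged sketch: the first half pokes at the raw Linux primitives by hand (assuming a recent kernel with cgroup v2 mounted at /sys/fs/cgroup), and the second half expresses RBAC as Kubernetes API objects. The dev namespace, alice user, and pod-reader role are just illustrative names.

```bash
# Linux side: isolation by hand with namespaces and cgroup v2.
sudo unshare --net --pid --mount --uts --fork --mount-proc ps ax   # only the new PID namespace's processes are visible
sudo mkdir /sys/fs/cgroup/demo                                     # new cgroup (cgroup v2 assumed)
echo 100M | sudo tee /sys/fs/cgroup/demo/memory.max                # hard memory cap for whatever joins it

# Kubernetes side: RBAC as declarative API objects instead of kernel policy.
kubectl create namespace dev
kubectl create role pod-reader --verb=get --verb=list --resource=pods -n dev
kubectl create rolebinding alice-reads-pods --role=pod-reader --user=alice -n dev
kubectl auth can-i list pods --as=alice -n dev                     # should print "yes"
```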

So, as you can see, if you have really read the Linux Bible by Christopher Negus, then you are already competent in the basics of Kubernetes!

Now what if we think about wrapped functionalities?

Kubernetes as a Wrapper or API for Linux Features

  1. Pods and Containers: Kubernetes manages containers through pods, which are the smallest deployable units in Kubernetes. This management is done using container runtimes like Docker, containerd, or CRI-O, which in turn use Linux container technologies (namespaces, cgroups).
  2. Network Policies: Kubernetes network policies allow administrators to control network access into and out of containerized applications. Underneath, these may utilize Linux networking features like iptables or eBPF to enforce the rules.
  3. Persistent Volumes (PVs): Kubernetes abstracts and manages persistent storage through the concept of Persistent Volumes and Persistent Volume Claims. These can be backed by various storage solutions, some of which rely on Linux features like NFS, iSCSI, or local filesystem mounts.
  4. Resource Limits: Kubernetes allows setting resource limits at the pod or container level, which translates to using cgroups under the hood to enforce these limits on the Linux kernel level.
  5. Service and Ingress: While Kubernetes Services and Ingresses are higher-level abstractions for accessing applications, they often leverage Linux network features (like iptables or IPVS) to route traffic to the correct containers. The sketch after this list shows both the resource-limit side and the iptables side.
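
As a rough illustration of points 4 and 5, the sketch below creates a Pod with limits and then looks at the plain Linux machinery behind it. The Pod name and image tag are illustrative, the cgroup path is abbreviated because it varies by node and cgroup driver, and the KUBE-SERVICES chain only exists when kube-proxy runs in its default iptables mode.

```bash
# A Pod with CPU and memory limits; Kubernetes enforces them via cgroups on the node.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: limited-demo
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:
          cpu: 100m
          memory: 64Mi
        limits:
          cpu: 200m
          memory: 128Mi
EOF

# On the node running the Pod (cgroup v2, systemd driver), the limit is just a file:
#   cat /sys/fs/cgroup/kubepods.slice/.../memory.max   ->  134217728  (128Mi in bytes)

# Service routing in kube-proxy's iptables mode is ordinary netfilter rules:
sudo iptables -t nat -L KUBE-SERVICES -n | head
```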

Oh wait, what about Docker? Docker was born thanks to the extended flexibility that Linux offers through its magnificent architecture. How does that relate to all of this?

Docker and Space Savings through OCI

The Open Container Initiative (OCI) is a project under the Linux Foundation to design open standards for operating system-level virtualization, most importantly container formats and runtime. Docker and other container technologies adhere to OCI standards, which include the container image specification and the runtime specification.
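
To see what those specifications actually describe, one way (a sketch, assuming the example image tag exists on Docker Hub) is to print an image's manifest, which is just JSON listing a config object and the digests of its layers:

```bash
# Print the (multi-arch) manifest for an image; layer digests are content-addressed blobs.
docker manifest inspect nginx:1.25

# Without a local Docker daemon, skopeo can fetch the same data straight from the registry.
skopeo inspect --raw docker://docker.io/library/nginx:1.25
```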

Two important Docker techniques directly contribute to Kubernetes' existence:

  1. Image Layering: Docker images are built up from a series of layers. Each layer represents an instruction in the image’s Dockerfile. Layers are reused across different images, which means if multiple images are built from the same base image or share common layers, these layers are stored only once on a host. This drastically reduces storage requirements and speeds up image downloads.
  2. Copy on Write (CoW): Docker uses CoW strategies for managing container filesystems. When a container modifies an existing file, the file is copied and the copy is modified, leaving the original unchanged. This means that until a file is modified, containers can share the same data blocks for their filesystems, further saving space. Both techniques are easy to observe with the docker CLI, as sketched below.
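
Here is a minimal sketch of both techniques, assuming a working Docker daemon and using nginx:1.25 purely as an example image:

```bash
# Image layering: one row per layer; layers are content-addressed and shared between images.
docker pull nginx:1.25
docker history nginx:1.25
docker image inspect nginx:1.25 --format '{{json .RootFS.Layers}}'

# Copy-on-write: a running container stores only what it changes in its writable layer.
docker run -d --name cow-demo nginx:1.25
docker exec cow-demo sh -c 'echo changed > /usr/share/nginx/html/index.html'
docker diff cow-demo          # lists only the paths the container added or modified
docker rm -f cow-demo
```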

This is exactly what gives Kubernetes (or any container orchestrator in general) its scalability and portability.

Kubernetes, Docker, and OCI for Scalability and Functionality

Kubernetes uses the Container Runtime Interface (CRI) together with OCI-compliant runtimes, allowing it to manage containers created with Docker or any other OCI-compliant container runtime. This adherence to OCI standards enables Kubernetes to leverage Docker's space-saving capabilities and more:

  1. Scalability: Kubernetes can efficiently manage thousands of container instances across a cluster of servers. The use of Docker containers, which are lightweight due to their shared layers and efficient use of storage, makes it feasible to scale applications up or down quickly in response to demand without excessive overhead.
  2. Portability: Because Docker containers package an application and its dependencies into a standardized unit for software development, Kubernetes can run these containers on any system that supports Docker and adheres to OCI standards. This eliminates “it works on my machine” problems and facilitates consistent deployments across development, testing, and production environments.
  3. Efficient Deployment and Versioning: Kubernetes can roll out new versions of an application encapsulated in Docker containers with minimal downtime and can roll back if necessary. The efficient storage of container images (thanks to OCI concepts) makes these operations fast and reduces the amount of data transferred across the network.
  4. Resource Efficiency: Kubernetes can intelligently place containers based on their resource requirements and the current load on cluster nodes, optimizing the utilization of underlying resources. Docker’s efficient use of system resources (CPU, memory, and storage) complements Kubernetes’ scheduling capabilities, allowing for high-density container deployment.
  5. Isolation and Security: Kubernetes leverages Docker’s container isolation properties to run separate applications on the same physical machine without interference. This isolation extends to network policies, storage, and computing resources, enabling secure multi-tenancy in a Kubernetes environment.
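
Coming back to the CRI point above: here is a small, hedged sketch of how to see which runtime a cluster actually uses and how to talk to it directly. The containerd socket path below is the common default and may differ on your nodes.

```bash
# The CONTAINER-RUNTIME column shows what each node runs behind the CRI (e.g. containerd://1.7.x).
kubectl get nodes -o wide

# crictl speaks the same CRI API the kubelet uses; run it on a node.
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
```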

So now that we know enough about the concepts Kubernetes borrows from already existing software, let's dive into the software outside of Linux.

Kubernetes leverages several pre-existing projects

Kubernetes indeed leverages several pre-existing software projects and technologies to provide its rich set of features. These components are integral to Kubernetes architecture and functionality. Among these, etcd is a critical component, but there are others as well.

  1. etcd: etcd is a highly available distributed key-value store that Kubernetes uses for storing all its cluster data, making it the backbone of Kubernetes' clustering and synchronization mechanisms. It is designed for critical data management and provides a reliable way to store data across a cluster of machines. etcd predates Kubernetes; it was first released in 2013, while Kubernetes was released in 2014. Kubernetes uses etcd to keep track of the state of all Kubernetes resources within the cluster, making it possible to manage the cluster's state reliably (a quick hands-on look at etcd and CoreDNS follows this list).
  2. Docker: Initially, Kubernetes was tightly coupled with Docker as the container runtime for running containerized applications. Although Kubernetes has since evolved to support other container runtimes through the Container Runtime Interface (CRI), Docker popularized container technology and was instrumental in the initial growth of Kubernetes. Docker provided a simple and effective way to package applications and their dependencies into containers, which Kubernetes could then orchestrate.
  3. Container Runtime Interface (CRI): While not a software in itself, the CRI is an API that allows Kubernetes to support various container runtimes without being directly integrated with them. This enables Kubernetes to use not just Docker but also other container runtimes like containerd and CRI-O, which are built to conform to the OCI standards.
  4. CoreDNS: CoreDNS is a flexible and extensible DNS server that can provide name resolution for the services running in a Kubernetes cluster. It is used in Kubernetes for service discovery and can be configured to work with multiple backends, including etcd. CoreDNS became the default DNS service in Kubernetes, replacing kube-dns.
  5. Iptables: Kubernetes uses iptables, a user-space utility program that allows a system administrator to configure the IP packet filter rules of the Linux kernel firewall, for various networking features, including implementing service load balancers, network policies, and pod networking. Iptables has been part of the Linux kernel for many years and is a critical component for network management and security in Kubernetes clusters.
  6. Flannel, Calico, and other CNI plugins: Kubernetes supports the Container Network Interface (CNI), which is a specification and a set of tools for configuring network interfaces in Linux containers. Flannel and Calico are examples of CNI-compatible networking solutions that Kubernetes can use to manage pod networking. These solutions provide network connectivity between pod networks across the cluster and implement network policies.
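
As a closing sketch, here is a hedged look at two of those components in a running cluster. The etcd endpoint and certificate paths below are the kubeadm defaults and will differ on other setups, and busybox:1.36 is just a convenient image for a throwaway DNS lookup.

```bash
# etcd: every API object is stored as a key under /registry.
sudo ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry --prefix --keys-only | head

# CoreDNS: in-cluster service discovery is plain DNS.
kubectl run -it --rm dnsdemo --image=busybox:1.36 --restart=Never -- \
  nslookup kubernetes.default.svc.cluster.local
```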

Each of these components deserves its very own article about what exactly it is and how it works. But all in all, this is basically the whole of Kubernetes and how it works. Of course, there are still a lot of functionalities developed by Kubernetes itself to achieve successful orchestration, but now you know in detail that this software is not an "innovative technology": it is just automation around other pre-existing software and technologies.
