East-West Communication in Kubernetes — How do services communicate within a cluster?

A closer look at the dynamics of the three native Kubernetes objects that enable service-to-service communication — the ClusterIP Service, DNS (CoreDNS) & kube-proxy.

Abhinav Kapoor
CodeX

--

In my previous article, “North-South Communication in Kubernetes”, I wrote about how external clients reach services inside a cluster. In this article, we look at how back-end services talk to each other within the cluster.

Overview

Traditional service-to-service communication

Before going into the Kubernetes ecosystem, a quick look at traditional service-to-service communication. Communication happens over IP addresses, so for Service A to call Service B, one approach is to assign Service B a static IP address. Either Service A already knows this IP address (which may work when dealing with a very small number of services), or Service B registers itself under a domain name and Service A obtains Service B's address via a DNS lookup.

Traditional Service to Service Communication
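
As a quick illustration, here is a minimal sketch of that traditional DNS-based lookup in Python; the hostname under which Service B registers itself is hypothetical.

```python
# A minimal sketch of traditional DNS-based service discovery;
# the hostname is hypothetical.
import socket

# Service B has registered itself under a well-known domain name;
# Service A resolves that name to an IP address before making the call.
service_b_ip = socket.gethostbyname("service-b.internal.example.com")
print("Calling Service B at", service_b_ip)
```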

Kubernetes network model

Now, inside a Kubernetes cluster we have the control plane, which comprises the cluster-management components, and a set of worker machines called nodes. These nodes host Pods, and it is the Pods that run the back-end microservices as containerised services.

Pod to Pod communication inside a cluster as per the Kubernetes network model.

As per the Kubernetes network model —

  1. Every pod in a cluster gets its own unique, cluster-wide IP address (the short sketch after this list prints these addresses).
  2. All pods can talk to each other inside the cluster.
  3. The communication happens without NAT, which means the destination pod sees the real IP address of the source pod. Kubernetes considers the container network, and the applications running on it, as trusted, so no authentication is required at the network level.
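
For illustration, a minimal sketch using the official Kubernetes Python client that lists pods and their cluster-wide IP addresses; the default namespace is an assumption.

```python
# A minimal sketch: print each pod's cluster-wide IP address.
# The "default" namespace is an assumption for illustration.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod(namespace="default").items:
    # pod.status.pod_ip is the IP other pods can reach directly, without NAT
    print(pod.metadata.name, pod.status.pod_ip)
```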

ClusterIP service — Durable abstraction over Pods

Since each pod in a cluster has its own IP address, shouldn't it be easy for one pod to talk to another? Not quite, because Pods are ephemeral: every time a pod is recreated it gets a new IP address, so the client service would somehow have to discover and switch to the next available pod. This is not desirable.

The issues with Pods talking to each other directly are, first, the ephemeral nature of the destination Pod and, second, discovering alternative Pod IP addresses.

So Kubernetes can create a layer on top of a group of Pods, which gives the group a single, stable IP address and provides basic load balancing.

Pods exposed via a ClusterIP Service on a durable IP address; the client talks to the service instead of talking directly to the Pods

This abstraction is provided by a Kubernetes Service object of type ClusterIP. It spans multiple nodes, thereby presenting a single service within the cluster. It can receive a request on one port and forward it to a (possibly different) port on the pod.

Therefore, when application Service A needs to talk to Service B, it calls Service B's ClusterIP Service rather than an individual pod running the service.

The ClusterIP Service uses the standard Kubernetes pattern of labels and selectors to keep track of the pods matching its selection criterion. Pods are labelled, and services have selectors that look for those labels. Using this, it is even possible to have a rudimentary traffic split, where old and new versions of a microservice coexist behind the same ClusterIP Service.
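
As an illustration, here is a minimal sketch using the official Kubernetes Python client that creates such a ClusterIP Service; the service name, the app: service-b label and the ports are hypothetical and assume a matching set of pods already exists.

```python
# A minimal sketch, assuming pods labelled app: service-b that listen on
# container port 8080 (name, label and ports are hypothetical).
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
v1 = client.CoreV1Api()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="service-b"),
    spec=client.V1ServiceSpec(
        type="ClusterIP",                  # the default Service type
        selector={"app": "service-b"},     # matches pods carrying this label
        ports=[client.V1ServicePort(port=80, target_port=8080)],  # service port -> pod port
    ),
)

created = v1.create_namespaced_service(namespace="default", body=service)
print("ClusterIP assigned:", created.spec.cluster_ip)  # the durable, cluster-internal IP
```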

CoreDNS — Service Discovery within a cluster

Now that Service B has a durable IP address, Service A still needs to find out what that IP address is before it can talk to Service B.

Kubernetes supports name resolution using CoreDNS. Service A is only expected to know the name (and port) of the ClusterIP Service it needs to talk to.

  1. CoreDNS scans the cluster, and whenever a ClusterIP Service gets created, an entry for it is added to the DNS server (if configured, CoreDNS also adds an entry for each pod, but that is not relevant for service-to-service communication).
  2. Next, CoreDNS exposes itself as a ClusterIP Service (called kube-dns by default), and this service is configured as the nameserver in every pod.
  3. The Pod initiating the request gets the IP address of the ClusterIP Service from DNS and can then initiate the request using that IP address and the port.

Services are resolved using <service name>.<namespace name>.<type>.<root>, for example service-b.default.svc.cluster.local. The type is either pod for pods (not relevant for service-to-service communication) or svc for services, and the root is typically cluster.local.
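
A minimal sketch of what that lookup looks like from inside a pod, assuming a hypothetical service-b in the default namespace:

```python
# A minimal sketch, run from inside a pod in the cluster; "service-b" and the
# "default" namespace are hypothetical names used for illustration.
import socket

# The pod's resolv.conf points at the kube-dns ClusterIP Service, so this
# lookup is answered by CoreDNS and returns the ClusterIP of service-b.
fqdn = "service-b.default.svc.cluster.local"
addr_info = socket.getaddrinfo(fqdn, 80, proto=socket.IPPROTO_TCP)
print("Resolved ClusterIP:", addr_info[0][4][0])

# Pods can usually use the short name as well, because resolv.conf carries
# the namespace in its DNS search domains.
print(socket.gethostbyname("service-b"))
```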

Kube-proxy — The Link between ClusterIP service and backing Pods (Destination Network Address Translation)

So far, it may appear that it is the ClusterIP Service itself which forwards the calls to the backing Pods. In reality, this is done by kube-proxy.

Kube-proxy runs on each node and watches Services and their backing Pods (in reality, Endpoints objects).

  1. When a pod running on a node makes a request to a ClusterIP Service, kube-proxy intercepts it (in its default modes, via iptables or IPVS rules it has programmed on that node).
  2. By looking at the destination IP address and port, it identifies the destination ClusterIP Service and replaces the destination of the request with the address of an endpoint, i.e. a backing Pod's IP and port, that can actually serve it. The sketch after this list inspects the Endpoints object kube-proxy relies on.
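
For illustration, a minimal sketch using the official Kubernetes Python client that reads the Endpoints object backing a hypothetical service-b, i.e. the Pod addresses kube-proxy forwards traffic to:

```python
# A minimal sketch: inspect the Endpoints object that kube-proxy watches.
# "service-b" and the "default" namespace are hypothetical.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# The Endpoints object shares its name with the Service and lists the
# Pod IP:port pairs that kube-proxy load-balances (DNATs) traffic to.
endpoints = v1.read_namespaced_endpoints(name="service-b", namespace="default")
for subset in endpoints.subsets or []:
    for address in subset.addresses or []:
        for port in subset.ports or []:
            print(f"{address.ip}:{port.port}")
```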

How does it really work together?

Interaction of the ClusterIP Service, CoreDNS, client Pod, kube-proxy, Endpoints & destination Service Pods
  1. The ClusterIP Service of the destination is registered in CoreDNS.
  2. DNS resolution: every pod has a resolv.conf file that contains the IP address of the CoreDNS (kube-dns) Service; the pod performs a DNS lookup against it.
  3. The Pod makes the call to the ClusterIP Service using the IP address it received from DNS and the port it already knows.
  4. Destination address translation: kube-proxy rewrites the destination IP address to the address of a Pod backing Service B (the sketch below walks through these steps from the client pod's point of view).
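
To tie it together, here is a minimal end-to-end sketch as seen from the client pod, assuming a hypothetical service-b in the default namespace serving HTTP on port 80; the name, namespace, port and path are all assumptions.

```python
# A minimal end-to-end sketch, run from inside a client pod; "service-b",
# the "default" namespace, port 80 and the /healthz path are hypothetical.
import socket
import urllib.request

fqdn = "service-b.default.svc.cluster.local"

# Steps 1-2: resolv.conf points at the kube-dns ClusterIP Service, so this
# lookup is answered by CoreDNS and returns the ClusterIP of service-b.
cluster_ip = socket.gethostbyname(fqdn)
print("ClusterIP from CoreDNS:", cluster_ip)

# Steps 3-4: the request is addressed to ClusterIP:port; the DNAT rules
# programmed by kube-proxy swap the ClusterIP for a backing Pod's IP and port.
with urllib.request.urlopen(f"http://{fqdn}/healthz", timeout=2) as response:
    print(response.status, response.read()[:100])
```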

Summary

We saw the native Kubernetes objects which make service-to-service communication possible. While these details are hidden from the application layer, it is good to know what is available in vanilla Kubernetes and when it would be appropriate to reach for a platform or product built on top of it.

In my next article, I'll write about the service mesh, which provides a smart network layer to simplify service-to-service communication.

I hope the write-up conveyed what it promised. Let me know your feedback.

My related articles

Side-car pattern, Out of process architecture & Need for multi-container pods https://medium.com/codex/communication-inside-a-kubernetes-pod-why-do-we-need-multi-container-pods-3d8d0d64c2c9

Why do we need Service Mesh in Kubernetes? — https://medium.com/codex/east-west-service-to-service-communication-what-is-service-mesh-4e56f94bc89c

Exposing Non-HTTP endpoints via Ingress Controllers & the new Gateway API https://medium.com/codex/north-south-communication-in-kubernetes-exposing-non-http-services-to-the-outside-world-4ebba4217443

North-South Communication in Kubernetes — How Does a Client Talk To Service Inside a Cluster? https://medium.com/better-programming/north-south-communication-in-kubernetes-how-does-a-client-talk-to-a-service-inside-a-cluster-8af8b27dbb9
