Kubernetes networking and the Service object: understanding ClusterIP and NodePort with a hands-on study

Mehmet Odabasi, PhD
8 min read · Sep 6, 2022

Kubernetes works with containers in the form of pods. We can create a single pod with a Pod object, or multiple pods using ReplicaSet or Deployment objects. Either way, we will end up with a bunch of pods, and at some point we will need to access those pods.

In cluster networking, containers can reach other containers within the same pod via localhost. The containers behave as if they are on the same host. No problem there!

When it comes to pod-to-pod communication, IP addresses seem to be the solution, because each pod has a unique cluster-wide IP address. However, this kind of communication has a shortcoming. Pods have a habit of being terminated, whether because they fail or because we scale down our replicas. Of course the terminated pods are replaced with new ones; however, a new pod means a new IP address. How are we going to keep up with all the new IP addresses?

Kubernetes Service objects offer a solution to this problem. When we create a Service object, it comes with a virtual IP address, and that IP address remains constant as long as the service is alive. By using selectors, we can link pods to services, and all pods can communicate with each other through the service instead of tracking pod IP addresses. Therefore, terminating a pod and replacing it with one that has a new IP won't matter after all. In short, `Services` provide network connectivity to Pods that works uniformly across the cluster.

The Kubernetes Service object also provides a kind of load balancing: kube-proxy load-balances connections to the virtual IP across the group of pods backing the service.

Kubernetes offers four major service types:

• ClusterIP (default)
• NodePort
• LoadBalancer
• ExternalName

Of those four types, we will cover the first two: ClusterIP and NodePort.

Prerequisite: Kubernetes Cluster

For the hands-on study, we will need two nodes. There are several options. You can use Minikube to create a two-node cluster on your local machine.

Or, you can spin up two EC2 instances to create a Kubernetes cluster. You can do this manually from the AWS Console, use CloudFormation stacks, or use Terraform files to create your instances. Choose the method you are familiar with.

I personally prefer to use Terraform files. I will not go into explaining each Terraform file as it is not the subject of this story.

Services are created just like any other object in Kubernetes. For this study, I am going to use the declarative method and create YAML files for the services I need. I will use both the ClusterIP and NodePort types to explain the difference between them.

ClusterIP

ClusterIP is the default service type in Kubernetes. When creating a service, if you omit the service type, it defaults to ClusterIP.

The documentation defines ClusterIP as follows: “Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster.”

If you do not need your pods to be accessed from outside the cluster, you can use this type of service.

1. Before diving into ClusterIP, let’s check that we actually have our nodes ready.

```
kubectl get nodes
NAME            STATUS   ROLES           AGE   VERSION
kube-master     Ready    control-plane   49m   v1.25.0
kube-worker-1   Ready    <none>          47m   v1.25.0
```

2. Now, let’s create a Deployment object with three replicas and name the YAML file mydeployment.yml.
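The embedded file is not reproduced here; a minimal sketch of what mydeployment.yml might look like, assuming an nginx image and an `app: nginx` label for the service selector we will use later:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3              # three pod replicas
  selector:
    matchLabels:
      app: nginx           # must match the pod template labels below
  template:
    metadata:
      labels:
        app: nginx         # label the service will select on
    spec:
      containers:
        - name: nginx
          image: nginx     # nginx listens on port 80 by default
          ports:
            - containerPort: 80
```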

3. Apply the deployment file

kubectl apply -f mydeployment.yml

4. Check the deployment and pods

```
kubectl get deploy
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   3/3     3            0           13s
```

```
kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-85575f566c-cmkgp   1/1     Running   0          19s
nginx-85575f566c-wnwsl   1/1     Running   0          19s
nginx-85575f566c-zqkrw   1/1     Running   0          19s
```

As you can see, we created three pods running the nginx image.

5. Check the IP addresses of the pods

```
kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE            NOMINATED NODE   READINESS GATES
nginx-85575f566c-cmkgp   1/1     Running   0          99s   172.16.180.1   kube-worker-1   <none>           <none>
nginx-85575f566c-wnwsl   1/1     Running   0          99s   172.16.180.2   kube-worker-1   <none>           <none>
nginx-85575f566c-zqkrw   1/1     Running   0          99s   172.16.180.3   kube-worker-1   <none>           <none>
```

6. As you can see, each pod has an IP address which is internal and specific to that instance. Pay attention to the IP addresses and note them down, as we will come back to them after Step 13.

But first things first: let’s check whether our pods can communicate with each other. To do that, we are going to go into a specific pod and then run a ping command against the other pods.

7. We will use the exec command to log into a container in the pod.

```
kubectl exec -it nginx-85575f566c-n8hv5 -- sh
```

Note: Do not forget to change the pod name, as my pod name and your pod name will not be the same (nginx-85575f566c-n8hv5).

8. Since we used a minimal image for our pods, we will need to install the ping command once we are inside the container.

First run

```
kubectl exec -it nginx-85575f566c-n8hv5 -- sh
```

Then, while you are in the container, run

```
apt-get update
apt-get install iputils-ping
```

9. Now we can run the ping command

```
# ping 172.16.180.2
PING 172.16.180.2 (172.16.180.2) 56(84) bytes of data.
64 bytes from 172.16.180.2: icmp_seq=1 ttl=63 time=0.128 ms
64 bytes from 172.16.180.2: icmp_seq=2 ttl=63 time=0.062 ms
64 bytes from 172.16.180.2: icmp_seq=3 ttl=63 time=0.066 ms
64 bytes from 172.16.180.2: icmp_seq=4 ttl=63 time=0.063 ms

--- 172.16.180.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3051ms
rtt min/avg/max/mdev = 0.062/0.079/0.128/0.027 ms
```

As you can see, we can ping the other pods.

10. Let’s scale down the deployment.

```
kubectl scale --replicas=1 deployment nginx
deployment.apps/nginx scaled
```

11. Let’s check the pods

```
kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-85575f566c-zqkrw   1/1     Running   0          9m28s
```

12. Now, we will scale up the deployment again.

```
kubectl scale --replicas=3 deployment nginx
deployment.apps/nginx scaled
```

13. Let’s check the pods again and compare their IP addresses with those we noted in Steps 5 and 6.

```
kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE            NOMINATED NODE   READINESS GATES
nginx-85575f566c-n8hv5   1/1     Running   0          8s    172.16.180.4   kube-worker-1   <none>           <none>
nginx-85575f566c-snsb4   1/1     Running   0          8s    172.16.180.5   kube-worker-1   <none>           <none>
nginx-85575f566c-zqkrw   1/1     Running   0          95m   172.16.180.3   kube-worker-1   <none>           <none>
```

As you can see, we have two new pods with two new IP addresses. While we were able to ping the pods, we cannot rely on their addresses, since every new pod means a new IP address. Therefore, we will use a Service object.

14. Now let’s create a Service object of type ClusterIP and name the YAML file myservice.yml.
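The embedded file is not reproduced here; a minimal sketch of what myservice.yml could look like, assuming the pods carry an `app: nginx` label and the service listens on port 3000 (matching the `kubectl get svc` output shown in the next steps):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: ClusterIP          # the default; shown here explicitly
  selector:
    app: nginx             # routes traffic to pods with this label
  ports:
    - port: 3000           # port the service exposes inside the cluster
      targetPort: 80       # port the nginx containers listen on
```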

15. Note that we entered `type: ClusterIP` in the spec field. Now let’s apply the file.

```
kubectl apply -f myservice.yml
service/nginx-svc created
```

16. Let’s check the services

```
kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP    3h24m
nginx-svc    ClusterIP   10.106.201.163   <none>        3000/TCP   22s
```

As you can see, we created a service with the ClusterIP 10.106.201.163 and port 3000.

17. Remember that we used an nginx image for the containers. Therefore, we should be able to reach the content via curl. A good way to use curl is to create a separate pod just for curl purposes. We can use the official curl image from Docker Hub (curlimages/curl) to create a pod.

```
kubectl run mycurlpod --image=curlimages/curl -i --tty -- sh
```

18. Now we have our curl pod. Let’s go into it.

```
kubectl exec -it mycurlpod -- sh
```

19. Now we can use the curl command with the service IP, including the port number, which is 3000.

```
curl 10.106.201.163:3000
```

or by using the service name

```
curl nginx-svc:3000
```

As you can see, we could curl the content using the service IP address.

TRY THIS: Change the deployment file to scale down the number of pods and then scale them up again with new IPs (you just need to change the number in front of replicas). Then check the pods, and you will see that they have new IP addresses. Then run the curl command again. You will see that you can still access all pods using the service IP address or the service name.

NodePort

ClusterIP works perfectly, but your pods will not be accessible from outside the cluster. NodePort is another service type, and it solves this problem.

By using NodePort, you can define a specific port number to be exposed, and users can access your application inside the pods that are attached to a service of type NodePort. By default, NodePort uses the port range 30000–32767. You can specify a port yourself, or you can let the service assign a random port number from that pool (30000–32767).

1. Let’s edit our service file.

2. We only changed the type to NodePort and added a `nodePort: 30001` line to declare that we will use port 30001. Remember, this last part is optional; if you skip it, you will be given a random port number between 30000 and 32767.
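The edited file embed is not shown here; a sketch of what the updated myservice.yml could look like under the same assumptions as before (an `app: nginx` selector and service port 3000, matching the `kubectl get svc` output below):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort           # changed from ClusterIP
  selector:
    app: nginx
  ports:
    - port: 3000           # cluster-internal service port
      targetPort: 80       # nginx container port
      nodePort: 30001      # optional; must be in the 30000-32767 range
```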

3. Let’s apply the file.

```
kubectl apply -f myservice.yml
service/nginx-svc configured
```

4. Let’s check the services

```
kubectl get svc
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP          14d
nginx-svc    NodePort    10.98.48.16   <none>        3000:30001/TCP   3s
```

5. Now we have an IP address for the service and an exposed port number. No matter what happens to our pods (terminated and renewed with new IPs), we can still access our application through any node’s IP and this port.

6. Now it is time to see the application from a browser. You can access your application by entering `http://<public-node-ip>:30001` in your browser.

IMPORTANT NOTE: If you are using AWS EC2 instances, you need to edit the security group rules and open port 30001 for all connections.
