Step by Step Guide: How to create a Dynamic Service Endpoint via K8S API

By: Andrey Orlov


This article will help bring clarity to some internal components of a K8S cluster, demonstrating how to interact with them using command-line tools.

Redis is an open-source, in-memory key-value database (and more besides). Using clear, practical examples, I'm going to deploy Redis HA (as a StatefulSet) and create a Service which will always point to the Master.

Redis in HA mode

First, let's understand why we need Redis in HA mode and a Service that always points to the Redis Master. In my experience, apps in K8S that use Redis as a backend should run it in HA mode.

Usually, it's 3 nodes: 1 Master and 2 Replicas. If the Master goes down, one of the Replicas will be promoted to Master. For HA to work properly, there's a process called Sentinel that is responsible for promoting a Replica to Master.

All 3 nodes are available for read operations, but only the Master node handles write operations. If your application is unable to determine the Master node on its own, you need an additional Service (let's call it redis-master) which will always point to the Master.

There are some existing solutions, for example, using HAProxy. But keep in mind that HAProxy itself should also run in HA mode if you don't want a single point of failure. For the purpose of this illustration, my goal is to show how we can create the service we need (redis-master) from scratch.

Prerequisite

Let's install Redis HA into the K8S cluster. I'll use the Bitnami helm chart.
Note: Sentinel is disabled by default in the helm chart, so we enable it explicitly:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm upgrade --install redis bitnami/redis --set sentinel.enabled=true -n db --create-namespace

After the deployment process is finished, you should see 3 redis pods:

kubectl get pods
NAME           READY   STATUS    RESTARTS
redis-node-0   2/2     Running   0
redis-node-1   2/2     Running   0
redis-node-2   2/2     Running   0

Note that there are two services:

kubectl get svc
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)
redis            ClusterIP   10.96.17.222   <none>        6379/TCP,26379/TCP
redis-headless   ClusterIP   None           <none>        6379/TCP,26379/TCP

The first one (redis) is a regular ClusterIP service: it balances network traffic across a group of pods (the redis nodes) behind a single virtual IP. The second is a headless service. It still spreads traffic across pods, but via DNS round-robin, and it also gives each pod a stable DNS name (e.g. redis-node-0.redis-headless.db.svc).
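To see the difference yourself, you can resolve both names from a throwaway pod (a quick check; busybox ships with nslookup, and dns-test is just a scratch pod name):

kubectl run -it dns-test --rm --image=busybox:stable -n db -- sh
/ # nslookup redis.db.svc.cluster.local
/ # nslookup redis-headless.db.svc.cluster.local

The first lookup returns the single ClusterIP, while the second returns one A record per redis pod.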

Test it

Now, let's do some tests. First, let's figure out the Redis password by decoding the redis secret:

kubectl get secret redis -o jsonpath={.data.redis-password} | base64 -d

The output will show the password. Next, exec into the redis-node-2 pod:

kubectl exec -it redis-node-2 -c redis -- bash

Now we'll connect to the first node (redis-node-0) using the headless service. Run the following command (substitute your redis password for the asterisks):

redis-node-2:/$ redis-cli -h redis-node-0.redis-headless.db.svc -a *******

List all keys:

redis-node-0.redis-headless.db.svc:6379> keys *
(empty array)

Create a random key and check it:

redis-node-0.redis-headless.db.svc:6379> set key1 value1
OK
redis-node-0.redis-headless.db.svc:6379> keys *
1) "key1"

It's obvious that redis-node-0 is the Master node right now. Next, let's simulate a switchover. Open a new terminal window and run:

kubectl delete pod redis-node-0

Now let's wait until the pod is ready:

kubectl get pods -w
redis-node-0 2/2 Running 0

Move back to redis-cli (you may need to reconnect to the redis node) and list all keys:

redis-node-0.redis-headless.db.svc:6379> keys *
1) "key1"

The key still exists. Let's create another one:

redis-node-0.redis-headless.db.svc:6379> set key2 value2
(error) READONLY You can't write against a read only replica.

The operation failed because redis-node-0 is just a replica now. So the Master must be redis-node-1 or redis-node-2. I don't want to guess; I want to know where the Master is at any given time.

Identifying the Master

To figure it out, let's run a pod with Debian:

kubectl run -it debian --image=debian:stable-slim -- bash

I'm going to install redis-cli here (on Debian it comes bundled with the redis-server package):

root@debian:/# apt update
root@debian:/# apt install redis-server -y

And now I'm going to connect to the sentinel service, which is available on port 26379. For the hostname I'll use redis.db.svc (the ClusterIP service; you can check it again with kubectl get svc), because I don't care which node I connect to (Master or Replica):

root@debian:/# redis-cli -h redis.db.svc -p 26379 -a *******

To get the Master I will run the following command:

redis.db.svc:26379> sentinel get-master-addr-by-name mymaster
1) "redis-node-2.redis-headless.db.svc.cluster.local"
2) "6379"
redis.db.svc:26379> exit
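Notice that Sentinel returns a hostname rather than an IP (the Bitnami chart announces the nodes by DNS name). An Endpoints object, as we'll see later, needs an IP address, so we'll have to resolve that name; getent, which is part of the base Debian image, can do it:

root@debian:/# getent hosts redis-node-2.redis-headless.db.svc.cluster.local

This prints the pod's current IP followed by the name. We'll use this trick later in the monitoring script.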

K8S API

Now I would like to make a request to the K8S API from inside the debian pod. As you may already know, each pod in K8S has the cluster CA certificate and the token of the service account it runs under mounted into it. You can check this easily:

kubectl get pods debian -o yaml

Let’s check it inside the pod:

kubectl exec -it debian -- bash
root@debian:/# cd /var/run/secrets/kubernetes.io/serviceaccount
root@debian:/var/run/secrets/kubernetes.io/serviceaccount# ls -la
total 4
drwxrwxrwt 3 root root 140 Oct 27 03:06 .
drwxr-xr-x 3 root root 4096 Oct 27 03:30 ..
drwxr-xr-x 2 root root 100 Oct 27 03:06 ..2022_10_27_03_06_22.4189469582
lrwxrwxrwx 1 root root 32 Oct 27 03:06 ..data -> ..2022_10_27_03_06_22.4189469582
lrwxrwxrwx 1 root root 13 Oct 25 20:11 ca.crt -> ..data/ca.crt
lrwxrwxrwx 1 root root 16 Oct 25 20:11 namespace -> ..data/namespace
lrwxrwxrwx 1 root root 12 Oct 25 20:11 token -> ..data/token

That's all we need to make requests to the K8S API. The API server is available under the name kubernetes.default.svc (this service is located in the default namespace). Let's query the endpoints in the db namespace:

# defining vars
root@debian:/# APISERVER=https://kubernetes.default.svc
root@debian:/# SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
root@debian:/# NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
root@debian:/# TOKEN=$(cat ${SERVICEACCOUNT}/token)
root@debian:/# CACERT=${SERVICEACCOUNT}/ca.crt

I will use curl to make a request:

root@debian:/# apt install curl
root@debian:/# curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/namespaces/db/endpoints
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "endpoints is forbidden: User \"system:serviceaccount:db:default\" cannot list resource \"endpoints\" in API group \"\" in the namespace \"db\"",
"reason": "Forbidden",
"details": {
"kind": "endpoints"
},
"code": 403

It seems that the default service account doesn't have enough permissions to do this. Let's create a new service account and a new role with the necessary permissions on endpoint objects (see the Kubernetes RBAC documentation for details). Open another terminal and run the following commands:

# create new service account
kubectl create serviceaccount redis-monitor
# create role with permissions to get, list, watch, update and patch endpoints
kubectl create role redis-monitor --verb=get,list,watch,update,patch --resource=endpoints
# create rolebinding
kubectl create rolebinding redis-monitor --role=redis-monitor --serviceaccount=db:redis-monitor
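You can verify that the permissions are in place before going any further; kubectl auth can-i supports impersonating a service account:

kubectl auth can-i list endpoints -n db --as=system:serviceaccount:db:redis-monitor
yes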

Now we should delete the debian pod (kubectl delete pod debian) and re-run it with the newly created service account that holds the permissions we require:

kubectl run -it debian --image=debian:stable-slim --overrides='{ "spec": { "serviceAccountName": "redis-monitor" } }' -- bash

Because it is a "fresh" pod, redis-cli and curl are missing. Let's install them again:

root@debian:/# apt update
root@debian:/# apt install redis-server curl -y

Now let's make the same request:

# defining vars
root@debian:/# APISERVER=https://kubernetes.default.svc
root@debian:/# SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
root@debian:/# NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
root@debian:/# TOKEN=$(cat ${SERVICEACCOUNT}/token)
root@debian:/# CACERT=${SERVICEACCOUNT}/ca.crt
root@debian:/# curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api/v1/namespaces/db/endpoints

You should now see something similar to this:
{
  "kind": "EndpointsList",
  "apiVersion": "v1",
  "metadata": {
    "resourceVersion": "109312"
  },
  "items": [
    {
      "metadata": {
        "name": "redis",
        "namespace": "db"
        …..
          "port": 6379,
          "protocol": "TCP"
        },
        {
          "name": "tcp-sentinel",
          "port": 26379,
          "protocol": "TCP"
        }
      ]
    }
  ]
}

Now we'll create a new service, which will be modified each time the Redis Master changes. We will omit the selector.

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: redis-master
spec:
  ports:
    - protocol: TCP
      port: 6379
      targetPort: 6379
EOF

This service has no selector, which is why no Endpoints object exists for it right now. Let's create one.
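You can confirm this: since a selectorless Service gets no Endpoints object automatically, the following should return NotFound:

kubectl get endpoints redis-master
Error from server (NotFound): endpoints "redis-master" not found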

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Endpoints
metadata:
  name: redis-master
subsets:
  - addresses:
      - ip: 10.0.0.0
    ports:
      - port: 6379
EOF

You can set any IP address you want here; our goal is to modify it dynamically whenever the Redis Master changes.
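Updating the endpoint by hand is a single API call. For example, from the debian pod (with the variables defined earlier; 10.1.2.3 is just a placeholder IP), a JSON merge patch replaces the subsets wholesale:

root@debian:/# curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" --header "Content-Type: application/merge-patch+json" -X PATCH ${APISERVER}/api/v1/namespaces/db/endpoints/redis-master -d '{"subsets":[{"addresses":[{"ip":"10.1.2.3"}],"ports":[{"port":6379}]}]}'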

We could run a loop with a timeout of, say, 1 second, check every second which node is the Master, and update the endpoint through the K8S API based on that. But I think there is a better way to do it…

Watcher

If you are not familiar with watchers, I would recommend reading the Kubernetes API documentation on efficient change detection:

“The Kubernetes API allows clients to make an initial request for an object or a collection, and then to track changes since that initial request: a watch.”

How does it work? It's pretty simple. Let's exec into the debian pod:

kubectl exec -it debian -- bash

Now run:

root@debian:/# APISERVER=https://kubernetes.default.svc
root@debian:/# SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
root@debian:/# NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
root@debian:/# TOKEN=$(cat ${SERVICEACCOUNT}/token)
root@debian:/# CACERT=${SERVICEACCOUNT}/ca.crt
root@debian:/# curl --no-buffer --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET "${APISERVER}/api/v1/namespaces/db/endpoints?fieldSelector=metadata.name=redis&watch=1"

Open another terminal and delete a redis-node pod:

kubectl delete pod redis-node-0

Move back to the debian pod. You should see the event details: the Endpoints object was modified.
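Each event arrives as a single line of JSON containing the event type and the changed object. A MODIFIED event for our endpoint looks roughly like this (trimmed for readability):

{"type":"MODIFIED","object":{"kind":"Endpoints","apiVersion":"v1","metadata":{"name":"redis","namespace":"db",...},"subsets":[...]}}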

Based on this event we'll modify the redis-master endpoint.

Scripting it

Let's write a simple bash script which will update the redis-master endpoint each time the redis endpoint changes.

Before that, let's install vim, dnsutils, and jq:

root@debian:/# apt install vim jq dnsutils -y
root@debian:/# vim monitor.sh
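The original script is not reproduced here, so below is a minimal sketch of what monitor.sh can look like, following the logic described above. It assumes REDIS_PASSWORD is provided via the environment (e.g. from the redis secret) and uses mymaster, the chart's default master set name:

#!/bin/bash
# Sketch of monitor.sh: watch the redis endpoints and repoint redis-master.
APISERVER=https://kubernetes.default.svc
SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
TOKEN=$(cat ${SERVICEACCOUNT}/token)
CACERT=${SERVICEACCOUNT}/ca.crt

# Stream watch events for the "redis" endpoints object
curl -sN --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" \
  "${APISERVER}/api/v1/namespaces/${NAMESPACE}/endpoints?fieldSelector=metadata.name=redis&watch=1" |
while read -r event; do
  # On every change, ask Sentinel who the Master is (first line is the hostname)
  MASTER_HOST=$(redis-cli -h redis.${NAMESPACE}.svc -p 26379 -a "${REDIS_PASSWORD}" \
    sentinel get-master-addr-by-name mymaster 2>/dev/null | head -n 1)
  # Endpoints need an IP address, so resolve the hostname
  MASTER_IP=$(getent hosts "${MASTER_HOST}" | awk '{print $1}')
  [ -z "${MASTER_IP}" ] && continue
  # Repoint the redis-master endpoint at the current Master
  curl -s --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" \
    --header "Content-Type: application/merge-patch+json" \
    -X PATCH "${APISERVER}/api/v1/namespaces/${NAMESPACE}/endpoints/redis-master" \
    -d "{\"subsets\":[{\"addresses\":[{\"ip\":\"${MASTER_IP}\"}],\"ports\":[{\"port\":6379}]}]}" \
    > /dev/null
  echo "redis-master endpoint now points to ${MASTER_IP}"
done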

Now save it and run:

root@debian:/# bash ./monitor.sh

Open another terminal and delete a redis-node pod:

kubectl delete pod redis-node-0

Check the output of the script. The endpoint was updated, and because of that the redis-master service now points to another redis-node pod.

Create a Deployment

Up until now, we've been using a Debian pod to run the script, but this is fragile: if the pod fails, K8S will restart it as a "fresh" Debian OS, without curl, redis-cli, etc. So at this point we will create a Deployment. But first, let's create a ConfigMap which will contain the script. I added some additional steps to the script:
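Those extra steps aren't shown in the text; the idea is simply to have the script bootstrap its own dependencies on startup, along the lines of (my assumption of what they look like):

# at the top of monitor.sh, before the watch loop
apt update && apt install -y redis-server curl dnsutils jq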

Save the script to a local file (e.g. monitor.sh) and run the command below:

kubectl create configmap redis-monitor --from-file=./monitor.sh

Now we have the script in a ConfigMap. And here is a Deployment manifest:
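The manifest isn't embedded here either, so here is a sketch of what redis-monitor-deployment.yaml can look like: it runs the script from the ConfigMap under the redis-monitor service account and injects the Redis password from the redis secret:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-monitor
  namespace: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-monitor
  template:
    metadata:
      labels:
        app: redis-monitor
    spec:
      serviceAccountName: redis-monitor
      containers:
        - name: monitor
          image: debian:stable-slim
          command: ["bash", "/scripts/monitor.sh"]
          env:
            - name: REDIS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: redis
                  key: redis-password
          volumeMounts:
            - name: script
              mountPath: /scripts
      volumes:
        - name: script
          configMap:
            name: redis-monitor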

kubectl apply -f redis-monitor-deployment.yaml

And that's it: when the pod starts, it will run the script, install all prerequisites, and then watch for endpoint events.

A final thought

As I mentioned, this is just an example of how to interact with the K8S API from inside a pod, showing how you can change K8S API resources using a service account with the right permissions. It's a working example but, of course, not intended for production use.

Thank you for reading, I hope you found some useful and interesting things in this article — please feel free to reach out with any questions!

