kubelet + kube-apiserver + etcd

Elie
7 min read · Sep 8, 2022

This post is part of the Run Kubernetes components one by one series.

In this post, we will expand our previous setup (kubelet standalone) with the API server. The API server is a component of the Kubernetes control plane that exposes the Kubernetes API; it is the front end of the control plane, and its main implementation is kube-apiserver.

Actually, once kubelet talks to kube-apiserver we shouldn’t call it standalone anymore; it is in API server mode now. So instead of creating Pods by dropping the POD definition yaml into the kubelet-static-pod directory, we will configure kubelet to talk to kube-apiserver and create Pods from there.

First of all, Kubernetes uses etcd to store the cluster data. When we run kubectl to create a pod, kubectl first talks to kube-apiserver, which then writes the requested POD definition into etcd. We can cover the whole workflow in other posts, as some other components are involved in creating a Pod, e.g. the kube-scheduler.

OK, let’s install a single-node etcd cluster first, and start it.

$ cd /home/opc/k8s/etcd/
$ ETCD_RELEASE=v3.4.20
$ wget https://github.com/etcd-io/etcd/releases/download/${ETCD_RELEASE}/etcd-${ETCD_RELEASE}-linux-amd64.tar.gz
$ tar xvf etcd-${ETCD_RELEASE}-linux-amd64.tar.gz
# add the uncompressed directory to the PATH variable in ~/.bashrc
$ grep PATH ~/.bashrc
PATH=$PATH:/home/opc/k8s/etcd/etcd-v3.4.20-linux-amd64
$ source ~/.bashrc
$ etcd --version
etcd Version: 3.4.20
Git SHA: 1e26823
Go Version: go1.16.15
Go OS/Arch: linux/amd64
$ mkdir data.etcd
$ etcd --data-dir data.etcd
# etcd stays in the foreground, so verify it from a second terminal
$ etcdctl put my-first-key "my-first-value"
OK
$ etcdctl get my-first-key
my-first-key
my-first-value
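As one more sanity check beyond put/get, etcdctl can also report member health; this only assumes etcd is serving on the default client endpoint, 127.0.0.1:2379:

$ etcdctl endpoint health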

OK, we have a running etcd server. Let’s install kube-apiserver.

$ wget https://dl.k8s.io/v1.25.0/bin/linux/amd64/kube-apiserver
$ chmod +x ./kube-apiserver
$ ./kube-apiserver --version
Kubernetes v1.25.0

Starting kube-apiserver is not as simple as starting kubelet, as we need to pass in some required options, including a certificate and key pair. So, let’s create those first.

$ cd /home/opc/k8s/certs
$ openssl genrsa -out service-account-key.pem 4096
$ openssl req -new -x509 -days 365 -key service-account-key.pem -sha256 -out service-account-cert.pem
$ openssl x509 -pubkey -noout -in service-account-cert.pem > service-account-pub.pem
$ ls -all
total 16
drwxr-xr-x. 2 root root 100 Sep 6 07:02 .
drwxrwxr-x. 9 opc opc 4096 Sep 6 06:48 ..
-rw-r--r--. 1 root root 2094 Sep 6 06:45 service-account-cert.pem
-rw-------. 1 root root 3243 Sep 6 06:41 service-account-key.pem
-rw-r--r--. 1 root root 800 Sep 6 07:02 service-account-pub.pem
$
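Optionally, we can double-check what we just generated; openssl can print the certificate’s subject and validity window:

$ openssl x509 -noout -subject -dates -in service-account-cert.pem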

OK, time to start kube-apiserver with the required options.

./kube-apiserver \
  --etcd-servers=http://127.0.0.1:2379 \
  --service-cluster-ip-range=10.0.0.0/16 \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --service-account-signing-key-file=/home/opc/k8s/certs/service-account-key.pem \
  --service-account-key-file=/home/opc/k8s/certs/service-account-pub.pem \
  --token-auth-file=/home/opc/k8s/token_auth_file

All the options for running kube-apiserver are documented on the command-line-tools-reference page. One worth highlighting here is token-auth-file: kube-apiserver has removed insecure port support, so we will have to use the secure port (6443 by default; it can be changed with the secure-port option).

Since we are running kube-apiserver standalone, the simplest way to connect to it securely is to use the token-auth-file option to define a token for authentication.

The token file is a CSV file with a minimum of 3 columns: token, user name, user UID, followed by optional group names. Our token file is shown below.

$ cat token_auth_file
kubeapiserverdummytoken,elie,1000
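If we ever wanted the user to belong to groups, the optional fourth column takes a double-quoted, comma-separated list of group names. A hypothetical row (token, user, and groups made up for illustration):

anothertoken,anotheruser,1001,"group1,group2"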

Let’s try to connect to kube-apiserver using curl, with the token specified in the request header.

$ curl -k -H "Authorization: Bearer kubeapiserverdummytoken" https://127.0.0.1:6443/api
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "10.0.0.7:6443"
    }
  ]
}
$ curl -k -H "Authorization: Bearer kubeapiserverdummytoken" https://127.0.0.1:6443/api/v1/pods
{
  "kind": "PodList",
  "apiVersion": "v1",
  "metadata": {
    "resourceVersion": "1290"
  },
  "items": []
}
## If we try an invalid token, we get a 401 error
$ curl -k -H "Authorization: Bearer kubeapiserverdummytokeninvalid" https://127.0.0.1:6443/api
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}

WOW, we are successfully connected, and not surprisingly, the pod list is empty.

Time to play with kubelet and kube-apiserver together. As we know, kubelet runs on each node in a cluster, so we need to register the node with kube-apiserver. How can we do that? We can pass in an option that tells kubelet where kube-apiserver is located and how to connect to it securely.

The option here is kubeconfig. We didn’t use this option when starting kubelet in standalone mode; in fact, this option is what distinguishes the two modes: providing kubeconfig enables API server mode, omitting it enables standalone mode.

OK, let’s create the following kubelet.kubeconfig file for the kubeconfig option.

$ pwd
/home/opc/k8s/configs
$
$ cat kubelet.kubeconfig
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://127.0.0.1:6443
    insecure-skip-tls-verify: true
  name: my-cluster
contexts:
- context:
    cluster: my-cluster
    user: kubelet
  name: kubelet-to-apiserver
current-context: kubelet-to-apiserver
users:
- name: kubelet
  user:
    token: kubeapiserverdummytoken
$
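As an aside, if kubectl is available, the same file can be generated instead of hand-written; a sketch assuming the same token and paths as above (KCFG is just a shorthand variable):

$ KCFG=/home/opc/k8s/configs/kubelet.kubeconfig
$ kubectl config set-cluster my-cluster --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --kubeconfig=$KCFG
$ kubectl config set-credentials kubelet --token=kubeapiserverdummytoken --kubeconfig=$KCFG
$ kubectl config set-context kubelet-to-apiserver --cluster=my-cluster --user=kubelet --kubeconfig=$KCFG
$ kubectl config use-context kubelet-to-apiserver --kubeconfig=$KCFG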

Let’s restart kubelet with the kubeconfig option added.

./kubelet \
  --config=/home/opc/k8s/configs/kubeletConfigFile.yaml \
  --container-runtime=remote \
  --container-runtime-endpoint=unix:///run/cri-dockerd.sock \
  --kubeconfig=/home/opc/k8s/configs/kubelet.kubeconfig

Let’s check whether the node is registered successfully

$ curl -k -H "Authorization: Bearer kubeapiserverdummytoken" https://127.0.0.1:6443/api/v1/nodes | jq '.items' | head
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 6614 0 6614 0 0 38905 0 --:--:-- --:--:-- --:--:-- 38905
[
  {
    "metadata": {
      "name": "instance-20220803-1159",
      "uid": "cb2087f4-2fa7-4aa6-a481-956fcdee4a1b",
      "resourceVersion": "2668",
      "creationTimestamp": "2022-09-07T06:04:22Z",
      "labels": {
        "beta.kubernetes.io/arch": "amd64",
        "beta.kubernetes.io/os": "linux",
## Double-check that the registered node is the correct one
$ hostname
instance-20220803-1159
$
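If kubectl is installed, the same check is a one-liner; the flags simply mirror our kubeconfig (token auth, TLS verification skipped):

$ kubectl --server=https://127.0.0.1:6443 --token=kubeapiserverdummytoken --insecure-skip-tls-verify=true get nodes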

OK, we have kubelet and kube-apiserver connected now! Next, we will create a pod from the kube-apiserver side, reusing the nginx yaml created before. One thing to note: in a real cluster, when kube-apiserver gets a pod creation request, another Kubernetes component called the scheduler is responsible for selecting a node to run the pod on. In our environment there is no scheduler running, so we need to add the nodeName property to nginx.yaml, to tell the cluster where to place the pod (see the minimal sketch after the grep below).

$ grep nodeName nginx.yaml
nodeName: instance-20220803-1159
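The full manifest comes from the earlier kubelet standalone post; for reference, a minimal sketch of what it might look like with nodeName added (the nginx image and container name here are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeName: instance-20220803-1159
  containers:
  - name: nginx
    image: nginx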

Now for the errors below. The first attempt fails with an empty POST simply because of a filename typo (ngnix.yaml instead of nginx.yaml). The second attempt, with the Content-Type: application/yaml header and the correct filename, shows that kube-apiserver does accept YAML payloads, yet parsing still fails even though nginx.yaml itself is correct. The culprit is curl: --data strips carriage returns and newlines from the file it reads, and YAML is newline-sensitive; decode the hex in the error message and you get ---apiVersion: v1kind: Pod…, everything squashed onto one line.

$ curl -k -H "Authorization: Bearer kubeapiserverdummytoken" -X POST https://127.0.0.1:6443/api/v1/namespaces/default/pods --data @ngnix.yaml
Warning: Couldn't read data from file "ngnix.yaml", this makes an empty POST.
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "the body of the request was in an unknown format - accepted media types include: application/json, application/yaml, application/vnd.kubernetes.protobuf",
  "reason": "UnsupportedMediaType",
  "code": 415
}
$ curl -k -H 'Content-Type: application/yaml' -H "Authorization: Bearer kubeapiserverdummytoken" -X POST https://127.0.0.1:6443/api/v1/namespaces/default/pods --data @nginx.yaml
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "the object provided is unrecognized (must be of type Pod): yaml: mapping values are not allowed in this context (2d2d2d61706956657273696f6e3a2076316b696e643a20506f646d657461 ...)",
  "reason": "BadRequest",
  "code": 400
}
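Given that diagnosis, posting the file verbatim with --data-binary (which, unlike --data, preserves newlines) should get the YAML accepted; a sketch, not retried here:

$ curl -k -H 'Content-Type: application/yaml' -H "Authorization: Bearer kubeapiserverdummytoken" -X POST https://127.0.0.1:6443/api/v1/namespaces/default/pods --data-binary @nginx.yaml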

So, let’s convert nginx.yaml to nginx.json instead (JSON, unlike YAML, is not newline-sensitive, so --data does it no harm) and try again.
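Any YAML-to-JSON converter works; one possible one-liner, assuming Python with PyYAML is installed:

$ python3 -c 'import sys, yaml, json; json.dump(yaml.safe_load(sys.stdin), sys.stdout, indent=2)' < nginx.yaml > nginx.json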

Great, we made progress, but there is still an issue: the error below says the service account default does not exist. The default service account is normally created by the ServiceAccount controller (see the official page); since we don’t have that controller running, we will have to disable this check when creating the POD.

$ curl -k -H "Content-Type: application/json" -H "Authorization: Bearer kubeapiserverdummytoken" -X POST https://127.0.0.1:6443/api/v1/namespaces/default/pods --data @nginx.json
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "pods \"nginx\" is forbidden: error looking up service account default/default: serviceaccount \"default\" not found",
  "reason": "Forbidden",
  "details": {
    "name": "nginx",
    "kind": "pods"
  },
  "code": 403
}

kube-apiserver has an option called disable-admission-plugins that we can use to disable the ServiceAccount plugin. Let’s restart kube-apiserver with that option added.

$ ./kube-apiserver \
  --etcd-servers=http://127.0.0.1:2379 \
  --service-cluster-ip-range=10.0.0.0/16 \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --service-account-signing-key-file=/home/opc/k8s/certs/service-account-key.pem \
  --service-account-key-file=/home/opc/k8s/certs/service-account-pub.pem \
  --token-auth-file=/home/opc/k8s/token_auth_file \
  --disable-admission-plugins=ServiceAccount

Let’s create the POD again

## POD creation request sent successfully this time
$ curl -k -H "Content-Type: application/json" -H "Authorization: Bearer kubeapiserverdummytoken" -X POST https://127.0.0.1:6443/api/v1/namespaces/default/pods --data @nginx.json
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "nginx",
    "namespace": "default",
    "uid": "3690c687-5c4a-48b2-9037-36177d0978d0",
    "resourceVersion": "19821",
    "creationTimestamp": "2022-09-08T05:49:00Z",
    "managedFields": [
    ....
}
## After a while, we can see the POD is running
$ curl -k -H "Content-Type: application/json" -H "Authorization: Bearer kubeapiserverdummytoken" https://127.0.0.1:6443/api/v1/namespaces/default/pods | jq '.items[] | { name: .metadata.name, status: .status} | del(.status.containerStatuses)'
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 9116 0 9116 0 0 46748 0 --:--:-- --:--:-- --:--:-- 46989
{
  "name": "nginx",
  "status": {
    "phase": "Running",
    "conditions": [
      {
        "type": "Initialized",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2022-09-08T07:16:48Z"
      },
      {
        "type": "Ready",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2022-09-08T07:17:02Z"
      },
      {
        "type": "ContainersReady",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2022-09-08T07:17:02Z"
      },
      {
        "type": "PodScheduled",
        "status": "True",
        "lastProbeTime": null,
        "lastTransitionTime": "2022-09-08T07:16:48Z"
      }
    ],
    "hostIP": "10.0.0.7",
    "podIP": "10.1.0.2",
    "podIPs": [
      {
        "ip": "10.1.0.2"
      }
    ],
    "startTime": "2022-09-08T07:16:48Z",
    "qosClass": "BestEffort"
  }
}
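As a bonus check, the bridge CNI network lives on this node, so the pod IP shown above should be reachable from the host itself; assuming nginx serves on port 80, we would expect the default welcome page:

$ curl -s http://10.1.0.2 | head -4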

Great, the POD is running. Now a question comes up: we see the pod got the IP 10.1.0.2, but why that address? The answer is the CNI config spec we mentioned in the previous post; the IP CIDR is defined there.

$ cat /etc/cni/net.d/01-cri-dockerd.json
{
  "cniVersion": "0.4.0",
  "name": "dbnet",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.1.0.0/16",
    "gateway": "10.1.0.1"
  }
}
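As an aside, the host-local IPAM plugin records its allocations on disk, by default under /var/lib/cni/networks/<network-name>, with one file per allocated IP; a sketch of where to look (the path is the plugin’s default, not something we configured):

$ ls /var/lib/cni/networks/dbnet/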

As we mentioned before, all cluster data is stored in etcd, and we now have a running pod, so let’s do a simple exploration on the etcd side. Everything Kubernetes stores in etcd has the /registry prefix, and we do see the nginx pod we just created there.

$ etcdctl get --prefix /registry | wc -l
1483
$ etcdctl get --prefix /registry/pods --keys-only
/registry/pods/default/nginx
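Don’t expect the pod’s value itself to be readable: kube-apiserver stores objects in its protobuf encoding by default, so the bytes are mostly binary; strings(1) can still surface recognizable fields:

$ etcdctl get /registry/pods/default/nginx --print-value-only | strings | head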

OK, time to finish this post. We now have a combined kubelet + kube-apiserver + etcd environment. To summarize what we have learned:

  1. When we send a POD creation request to kube-apiserver, kube-apiserver writes the POD definition into etcd;
  2. with nodeName specified in the POD spec, the kubelet watching that node picks up the new POD;
  3. kubelet creates the pod and, once done, updates kube-apiserver with the POD status;
  4. kube-apiserver persists that status into etcd.

Check out the other posts in this series: Kubernetes 1.24+ components one by one series | by Elie | Medium
