Arthur Chiao

User and workload identities in Kubernetes

June 2022



This is part 1 of 4 of the Authentication and authorization in Kubernetes series.

TL;DR: In this article, you will explore how users and workloads are authenticated with the Kubernetes API server.

The Kubernetes API server exposes an HTTP API that lets end-users, different parts of your cluster, and external components communicate with one another.

Most operations can be performed through kubectl, but you can also access the API directly using REST calls.

But how is access to the API restricted to authorized users only?


Accessing the Kubernetes API with curl

Let's start this journey by issuing a request to the Kubernetes API server.

Suppose you want to list all the namespaces in the cluster; you could execute the following commands:

bash

export API_SERVER_URL=https://10.5.5.5:6443

curl $API_SERVER_URL/api/v1/namespaces
curl: (60) Peer Certificate issuer is not recognized.
# truncated output
If you'd like to turn off curl's verification of the certificate, use the -k (or --insecure) option.

The output suggests that the API is serving traffic over https with an unrecognized certificate (e.g. self-signed), so curl aborted the request.

Let's temporarily ignore the certificate verification with -k and inspect the response:

bash

curl -k $API_SERVER_URL/api/v1/namespaces
{
  "kind": "Status",
  "apiVersion": "v1",
  "status": "Failure",
  "message": "namespaces is forbidden: User \"system:anonymous\" cannot list resource \"namespaces\" ...",
  "reason": "Forbidden",
  "details": { "kind": "namespaces" },
  "code": 403
}

You have a response from the server, but:

  1. You are forbidden to access the API endpoint (i.e. the status code is 403).
  2. You are identified as the system:anonymous, and this identity is not allowed to list namespaces.

The above test reveals some important working mechanisms in the kube-apiserver:

  • When you issued the curl request, the traffic reached the Kubernetes API server.
  • Inside the API server, one of the first modules to receive your request is authentication. In this case, the authentication failed, and the request was labelled anonymous.
  • After authentication, there's the authorization module. Since anonymous requests have no permissions, the authorization component rejects the call with a 403 status code.

We can re-evaluate what happened in the previous curl request and notice that:

  1. Since you did not provide user credentials, the Kubernetes Authentication module couldn't assign an identity, so it labelled the request anonymous.
  2. Depending on how the Kubernetes API server is configured, you could have also received a 401 Unauthorized code.
  3. The Kubernetes Authorization module checked if system:anonymous has the permission to list namespaces in the cluster. Since it doesn't, it returns a 403 Forbidden error message.

Assuming the identity had rights to access the namespace resource, you would have received the list of namespaces instead.

It's worth noting that you issued a request from outside the cluster, but such requests may come from inside too.

The kubelet, for example, might need to connect to the Kubernetes API to report the status of its node.

The kubelet connects to the API server and authenticates itself.

The Authentication module is the first gatekeeper of the entire system and authenticates all of those requests using either a static token, a certificate, or an externally-managed identity.

Kubernetes features an authentication module that has several noteworthy features:

  1. It supports both human users and program users.
  2. It supports both external users (e.g. apps deployed outside of the cluster) and internal users (e.g. accounts created and managed by Kubernetes).
  3. It supports standard authentication strategies, such as static token, bearer token, X509 certificate, OIDC, etc.
  4. It supports multiple authentication strategies simultaneously.
  5. You can add new authentication strategies or phase out old ones.
  6. You can also allow anonymous access to the API.

The rest of the article will investigate how the authentication module works.

Please note that this article focuses on authentication. If you wish to learn more about authorization, this article on limiting access to Kubernetes resources with RBAC will introduce you to the subject.

Let's start with users.

The Kubernetes API differentiates internal and external users

The Kubernetes API server supports two kinds of API users: internal and external.

But why have such a distinction between the two?

If the users are internal to the cluster, Kubernetes needs to define a specification (i.e. a data model) for them.

When the users are external, instead, such a specification already exists elsewhere.

We can categorize users into the following kinds:

  1. Kubernetes managed users: user accounts created by the Kubernetes cluster itself and used by in-cluster apps.
  2. Non-Kubernetes managed users: users that are external to the Kubernetes cluster, such as:
    • Users with static tokens or certificates provided by cluster administrators.
    • Users authenticated through external identity providers like Keystone, Google account, and LDAP.

Granting access to the cluster to external users

Consider the following scenario: you have a bearer token and issue a request to Kubernetes.

bash

curl --cacert ${CACERT} \
  --header "Authorization: Bearer <my token>" \
  -X GET ${APISERVER}/api

How can the Kubernetes API server associate that token to your identity?

Kubernetes does not manage external users, so there should be a mechanism to retrieve information (such as username and groups) from an external resource.

In other words, once the Kubernetes API receives a request with a token, it should be able to retrieve enough information to decide what to do.

Let's explore this scenario with an example.

Create the following CSV file with a list of tokens, users, UIDs and groups:

tokens.csv

token1,arthur,1,"admin,dev,qa"
token2,daniele,2,dev
token3,errge,3,qa

The file format is token, user, uid, and groups.

Start a minikube cluster with the --token-auth-file flag:

bash

mkdir -p ~/.minikube/files/etc/ca-certificates
cd ~/.minikube/files/etc/ca-certificates
cat << EOF > tokens.csv
token1,arthur,1,"admin,dev,qa"
token2,daniele,2,dev
token3,errge,3,qa
EOF
minikube start \
  --extra-config=apiserver.token-auth-file=/etc/ca-certificates/tokens.csv

Since we want to issue a request to the Kubernetes API, let's retrieve the IP address and certificate from the cluster:

bash

kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /Users/learnk8s/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 10 Jun 2022 12:21:45 +08
        provider: minikube.sigs.k8s.io
        version: v1.25.2
      name: cluster_info
    server: https://127.0.0.1:57761
  name: minikube
# truncated output

And now, let's issue a request to the cluster with:

bash

export APISERVER=https://127.0.0.1:57761
export CACERT=/Users/learnk8s/.minikube/ca.crt
curl --cacert ${CACERT} -X GET ${APISERVER}/api
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}

The response suggests that we accessed the API as an anonymous user without any permissions.

Let's issue the same request but with token1 (which, according to our tokens.csv file, belongs to Arthur):

bash

export APISERVER=https://127.0.0.1:57761
export CACERT=/Users/learnk8s/.minikube/ca.crt
curl --cacert ${CACERT} --header "Authorization: Bearer token1" -X GET ${APISERVER}/api
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"arthur\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}

While the request might look like it failed, it actually succeeded.

Notice that Kubernetes identified the request as coming from Arthur.

So what happened?

And what are tokens.csv and the --token-auth-file API server flag?

Kubernetes has different authentication plugins, and the one you use now is called Static Token Files.

This is a recap of what happened:

  1. When the API server starts, it reads the CSV file and keeps the users in memory.
  2. A user makes a request to the API server using their token.
  3. The API server matches the token to the user and extracts the rest of the information (e.g. username, groups, etc.).
  4. Those details are included in the request context and passed to the authorization module.
  5. The current authorization strategy (likely RBAC) finds no permission for Arthur and proceeds to reject the request.
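
The flow above can be sketched in a few lines of Python. This is a simplified model for illustration only, not the actual kube-apiserver implementation; the data mirrors the tokens.csv file used earlier:

```python
import csv
from io import StringIO

# Same format as tokens.csv: token,user,uid,"group1,group2"
TOKENS_CSV = '''token1,arthur,1,"admin,dev,qa"
token2,daniele,2,dev
token3,errge,3,qa
'''

def load_static_tokens(text):
    """Read the CSV once at startup and index the users by token."""
    users = {}
    for token, user, uid, groups in csv.reader(StringIO(text)):
        users[token] = {"username": user, "uid": uid, "groups": groups.split(",")}
    return users

def authenticate(users, bearer_token):
    """Return the user's identity, or None (i.e. anonymous) if the token is unknown."""
    return users.get(bearer_token)

users = load_static_tokens(TOKENS_CSV)
print(authenticate(users, "token1"))
# {'username': 'arthur', 'uid': '1', 'groups': ['admin', 'dev', 'qa']}
print(authenticate(users, "bogus"))  # None -> request labelled system:anonymous
```

Note how the lookup happens entirely in memory: this is why editing the file requires restarting the API server.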

We can quickly fix that by creating a ClusterRoleBinding:

admin-binding.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin
subjects:
- kind: User
  name: arthur
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

You can submit the resource to the cluster with:

bash

kubectl apply -f admin-binding.yaml
clusterrolebinding.rbac.authorization.k8s.io/admin created

If you execute the command again, this time, it should work:

bash

curl --cacert ${CACERT} \
  --header "Authorization: Bearer token1" \
  -X GET ${APISERVER}/api
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.49.2:8443"
    }
  ]
}

Excellent!

As HTTP requests are made to the kube-apiserver, authentication plugins attempt to associate the following attributes to the request: a username, a UID, a list of groups and extra fields.

The details are appended to the request context and available to all subsequent components of the Kubernetes API, but all values are opaque to the authentication plugin.

  • You can use the token to issue an authenticated request to the cluster.
  • Kubernetes must match the token to an identity. Since this is an external user, it will consult a user management system (in this case, the CSV file).
  • It retrieves details such as the username, UID, groups, etc. Those are then passed to the authorization module to check the current permissions.

For example, the authorization module (RBAC) invoked after the authentication can use this data to assign permissions.

In the previous example, you created a ClusterRoleBinding with the name of the user, but since the CSV specifies three groups for Arthur (admin,dev,qa), you could also write:

admin-binding.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin
subjects:
- kind: Group
  name: admin
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

The static token is a simple authentication mechanism where cluster administrators generate arbitrary strings and assign them to users.

But static tokens have a few limitations:

  1. You need to know the name of all your users in advance.
  2. Editing the tokens.csv file requires restarting the API server.
  3. Tokens do not expire.

Kubernetes offers several other mechanisms to authenticate external users, such as X.509 client certificates, bootstrap tokens, OpenID Connect (OIDC) tokens, webhook token authentication and authenticating proxies.

While they offer different trade-offs, it's worth remembering that the overall workflow is similar to that of the static tokens.

Which authentication plugin should you use?

It depends, but you could have all of them.

You can configure multiple authentication plugins, and Kubernetes will sequentially test all authentication strategies until one succeeds.

It will reject the request as unauthorized or label the access as anonymous if none do.

  • Even the authentication module isn't a single component.
  • Instead, the authentication is made of several authentication plugins.
  • When a request is received, the plugins are evaluated in sequence. If all fail, the request is rejected.
  • As long as one succeeds, the request is passed to the authorization module.
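
The chain of plugins can be modelled like this. It's a hypothetical sketch: the two plugins and their logic are illustrative, not real kube-apiserver internals:

```python
def static_token_auth(request):
    # Hypothetical plugin: succeeds only for a known bearer token.
    if request.get("bearer_token") == "token1":
        return {"username": "arthur"}
    return None

def client_cert_auth(request):
    # Hypothetical plugin: succeeds only if a client certificate was presented.
    if request.get("client_cert_cn"):
        return {"username": request["client_cert_cn"]}
    return None

AUTH_PLUGINS = [static_token_auth, client_cert_auth]

def authenticate(request, anonymous_allowed=True):
    """Try each plugin in sequence; the first success wins."""
    for plugin in AUTH_PLUGINS:
        identity = plugin(request)
        if identity is not None:
            return identity
    # All plugins failed: anonymous or rejected, depending on configuration.
    if anonymous_allowed:
        return {"username": "system:anonymous"}
    raise PermissionError("401 Unauthorized")

print(authenticate({"bearer_token": "token1"}))  # {'username': 'arthur'}
print(authenticate({}))                          # {'username': 'system:anonymous'}
```

The `anonymous_allowed` flag mirrors the behaviour you saw earlier: depending on the API server's configuration, a request with no valid credentials is either labelled anonymous or rejected with a 401.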

Now that you've covered external users, let's investigate how Kubernetes manages internal users.

Managing Kubernetes internal identities with Service Accounts

In Kubernetes, internal users are assigned identities called Service Accounts.

Those identities are created by the kube-apiserver and assigned to applications.

When the app makes a request to the kube-apiserver, it can verify its identity by sharing a signed token linked to its Service Account.

Let's inspect the Service Account definition:

bash

kubectl create serviceaccount test
serviceaccount/test created

And inspect the resource with:

bash

kubectl get serviceaccount test -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test
secrets:
- name: test-token-6tmx7

If your cluster is on version 1.24 or greater, the output is instead:

bash

kubectl get serviceaccount test -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test

Can you spot the difference?

The secrets field is present only in older clusters, not in newer ones.

The Secret contains the token necessary to authenticate requests with the API server:

bash

kubectl get secret test-token-6tmx7 -o yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-token-6tmx7
type: kubernetes.io/service-account-token
data:
  ca.crt: LS0tLS1CR…
  namespace: ZGVmYXVs…
  token: ZXlKaGJHY2…

So let's assign this identity to a pod and try to issue a request to the Kubernetes API.

nginx.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  serviceAccountName: test
  containers:
  - image: nginx
    name: nginx

You can submit the resource to the cluster with:

bash

kubectl apply -f nginx.yaml
pod/nginx created

Let's jump into the pod with:

bash

kubectl exec -ti nginx -- bash

Let's issue the request with:

bash@nginx

export APISERVER=https://kubernetes.default.svc
export SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
export CACERT=${SERVICEACCOUNT}/ca.crt
export TOKEN="token here" # <- the token from the Secret, base64-decoded
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.49.2:8443"
    }
  ]
}

It worked!

Since Kubernetes 1.24 or greater doesn't create a secret, how can you obtain the token?

Generating temporary identities for Service Accounts

In newer versions of Kubernetes, the kubelet is in charge of issuing a request to the API server and retrieving a temporary token.

This token is similar to the one in the Secret object, but there is a critical distinction: it expires.

Also, the token is not injected in a Secret; instead, it is mounted in the pod as a projected volume.

Let's repeat the same experiment with Kubernetes 1.24:

bash

kubectl create serviceaccount test
serviceaccount/test created

Let's create the pod with:

nginx.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  serviceAccountName: test
  containers:
  - name: nginx
    image: nginx

You can submit the resource to the cluster with:

bash

kubectl apply -f nginx.yaml
pod/nginx created

First, let's confirm that there are no Secrets (and no token):

bash

kubectl get secrets
No resources found in default namespace.

Let's jump into the pod with:

bash

kubectl exec -ti nginx -- bash

Verify that the token is already mounted, and you can curl the API:

bash@nginx

export APISERVER=https://kubernetes.default.svc
export SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
export CACERT=${SERVICEACCOUNT}/ca.crt
export TOKEN=$(cat ${SERVICEACCOUNT}/token)
curl --cacert ${CACERT} --header "Authorization: Bearer ${TOKEN}" -X GET ${APISERVER}/api
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.49.2:8443"
    }
  ]
}

It worked!

How is the token mounted, though?

Let's inspect the pod definition:

bash

kubectl get pod nginx -o yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-69mqr
      readOnly: true
  serviceAccount: test
  volumes:
  - name: kube-api-access-69mqr
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace

A lot is going on here, so let's unpack the definition.

  1. There's a kube-api-access-69mqr volume declared.
  2. The volume is mounted as read-only on /var/run/secrets/kubernetes.io/serviceaccount.

The Volume declaration is interesting because it uses the projected field.

A projected volume is a volume that combines several existing volumes into one.

Please note that not all volumes can be combined into a projected volume. Currently, the following types of volume sources can be included: secret, downwardAPI, configMap and serviceAccountToken.

  • The kubelet mounts the projected volume in the container.
  • Projected volumes are a combination of several volumes into one.

In this particular case, the projected volume is a combination of:

  1. A serviceAccountToken volume mounted on the path token.
  2. A configMap volume.
  3. A downwardAPI volume mounted on the path namespace.

What are those volumes?

The serviceAccountToken volume is a special volume that holds a token for the current Service Account, requested by the kubelet from the API server.

This is used to populate the file /var/run/secrets/kubernetes.io/serviceaccount/token with the correct token.

The ConfigMap volume is a volume that mounts all the keys in the ConfigMap as files in the current directory.

The file's content is the value of the corresponding key (e.g. if the key-value is replicas: 1, a replicas file is created with the content of 1).

In this case, the ConfigMap volume mounts the ca.crt certificate necessary to call the Kubernetes API.

The downwardAPI volume is a special volume that uses the downward API to expose information about the Pod to its containers.

In this case, it is used to expose the current namespace to the container as a file.

You can verify that it works from within the pod with:

bash@nginx

export SERVICEACCOUNT=/var/run/secrets/kubernetes.io/serviceaccount
export NAMESPACE=$(cat ${SERVICEACCOUNT}/namespace)
echo $NAMESPACE
default

Excellent!

Now that you know how tokens are mounted, you might wonder why Kubernetes decided to move on from creating tokens in Secrets.

There are a few reasons, but it boils down to security: tokens stored in Secrets never expire, are persisted in etcd, and anyone who can read the Secret can use the identity indefinitely.

But what if you need a token but don't need a pod?

Is there a way to obtain the token without mounting the projected volume?

Kubectl has a new command to do just that:

bash

kubectl create token test
eyJhbGciOiJSUzI1NiIsImtpZCI6ImctMHJNO…

That token is temporary, just like the one mounted by the kubelet.

You will see a different output if you execute the same command again.

Is the token just a long string?

Projected Service Account tokens are JWT tokens

Those are signed JWT tokens.

To inspect it, you can copy the string and paste it onto the jwt.io website.

The output is divided into three parts:

  1. The header describes how the token was signed.
  2. The payload — actual data of the token.
  3. The signature is used to verify that the token wasn't modified.

A JWT token is divided into three parts: the header, the payload and the signature.

If you inspect the payload for the token, you will find output similar to this:

token.json

{
  "aud": [
    "https://kubernetes.default.svc.cluster.local"
  ],
  "exp": 1655083796,
  "iat": 1655080196,
  "iss": "https://kubernetes.default.svc.cluster.local",
  "kubernetes.io": {
    "namespace": "default",
    "serviceaccount": {
      "name": "test",
      "uid": "6af2abe9-d8d8-4b8a-9bb5-3cc96442b322"
    }
  },
  "nbf": 1655080196,
  "sub": "system:serviceaccount:default:test"
}
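
Instead of pasting the token into jwt.io, you can also decode the payload locally: a JWT is just three base64url-encoded segments joined by dots. The sketch below builds an unsigned sample token from two of the claims above purely for demonstration, so no signature verification takes place:

```python
import base64
import json

def b64url_decode(segment):
    # base64url drops the padding; re-add it before decoding.
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def decode_jwt_payload(token):
    """Decode the second segment (the payload) of a JWT. Does NOT verify the signature."""
    header, payload, signature = token.split(".")
    return json.loads(b64url_decode(payload))

def b64url_encode(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Build a sample unsigned token from two of the claims above (demonstration only).
payload = {"iss": "https://kubernetes.default.svc.cluster.local",
           "sub": "system:serviceaccount:default:test"}
token = ".".join([b64url_encode(json.dumps({"alg": "none"}).encode()),
                  b64url_encode(json.dumps(payload).encode()),
                  ""])
print(decode_jwt_payload(token)["sub"])  # system:serviceaccount:default:test
```

Remember that decoding only reads the claims; without checking the signature against the issuer's public key, you cannot trust them.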

There are a few fields worth discussing: aud (the intended audience of the token), iss (the issuer, i.e. the Kubernetes API server), sub (the subject, i.e. the Service Account) and exp (the expiry timestamp).

It's worth noting that the JWT contains even more details when it's attached to a pod.

If you retrieve the token from the nginx Pod, you can see the following:

nginx-token.json

{
  "aud": [
    "https://kubernetes.default.svc.cluster.local"
  ],
  "exp": 1686617744,
  "iat": 1655081744,
  "iss": "https://kubernetes.default.svc.cluster.local",
  "kubernetes.io": {
    "namespace": "default",
    "pod": {
      "name": "nginx",
      "uid": "a11defcb-f510-4d49-9c4f-2e8e8da1c33c"
    },
    "serviceaccount": {
      "name": "test",
      "uid": "6af2abe9-d8d8-4b8a-9bb5-3cc96442b322"
    },
    "warnafter": 1655085351
  },
  "nbf": 1655081744,
  "sub": "system:serviceaccount:default:test"
}

The name and UID of the pod were included in the payload.

But where is this information used, exactly?

Not only can you check if the token is signed and valid, but you can also tell the difference between two identical pods from the same deployment.

This is useful, as you will see in the next section.

Workload identities in Kubernetes: how AWS integrates IAM with Kubernetes

As an example, imagine you host your Kubernetes cluster on Amazon Web Services and want to upload a file to an S3 bucket from your cluster.

Please note that the same is valid for Microsoft Azure and Google Cloud Platform.

You might need to assign a role to do so, but AWS IAM Roles cannot be assigned to Pods — you can only assign them to compute instances (i.e. AWS doesn't know what a pod is).

Since late 2019, AWS has provided a native integration between Kubernetes and IAM called IAM Roles for Service Accounts (IRSA) which leverages federated identities and the projected service account tokens.

Here's how it works.

  1. You create an IAM Policy which describes what resources you have access to (e.g. you can upload files to a remote bucket).
  2. You create a Role with that policy and note its ARN.
  3. You create a projected service account token and mount it as a file.

You add the Role ARN and projected service account token as variables in the Pod:

pod-s3.yaml

apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  serviceAccountName: my-serviceaccount
  containers:
  - name: myapp
    image: myapp:1.2
    env:
    - name: AWS_ROLE_ARN
      value: arn:aws:iam::111122223333:role/my-role
    - name: AWS_WEB_IDENTITY_TOKEN_FILE
      value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
    volumeMounts:
    - mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
      name: aws-iam-token
      readOnly: true
  volumes:
  - name: aws-iam-token
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          audience: sts.amazonaws.com
          expirationSeconds: 86400
          path: token

If your app uses the AWS SDK to upload to S3, this is enough to make it work.

The app will use those two environment variables to exchange the token for credentials to connect to S3.

But how?

Kubernetes, not AWS, generated the token mounted in the Pod.

How does AWS IAM know that this token is valid?

It doesn't.

So here is what happens.

The AWS SDK uses the Role ARN and the projected service account token and exchanges them for a standard AWS access and secret key.

If you don't use the AWS SDK, or want to know what happens under the hood, let me explain.

The app makes a request to AWS IAM to assume a role for the current identity.

When IAM receives the token, it verifies that the JWT is legit by unpacking it and checking the iss field.

This field identifies the issuer, which hosts the public signing keys used to verify the token.

If you recall, the URL points to the Kubernetes cluster:

nginx-token.json

{
  "aud": [
    "https://kubernetes.default.svc.cluster.local"
  ],
  "exp": 1686617744,
  "iat": 1655081744,
  "iss": "https://kubernetes.default.svc.cluster.local",

// truncated

Please notice that you should customise the issuer URL to a fully qualified domain name; otherwise, AWS IAM won't be able to reach that endpoint. You can do so with the --service-account-issuer flag.

The Issuer URL is a standard OIDC provider, and AWS IAM will look for two specific paths: /.well-known/openid-configuration (the OIDC discovery document) and the JWKS endpoint it references (in Kubernetes, /openid/v1/jwks).

Please notice that those two endpoints are not exposed by default, and it's the job of the cluster administrator to set them up.

Let's inspect the JWKS (JSON Web Key Set) endpoint:

bash

curl {Issuer URL}/openid/v1/jwks
{
  "keys": [
    {
      "use": "sig",
      "kty": "RSA",
      "kid": "ZO4TUgVjBzMWKVP8mmBwKLvsuyn8z-gfqUp27q9lO4w",
      "alg": "RS256",
      "n": "34a81xuMe…",
      "e": "AQAB"
    }
  ]
}

The AWS IAM retrieves the public keys and verifies the token.

The code for the verification is similar to this:

verify-jwt.js

var jwt = require('jsonwebtoken')
var jwkToPem = require('jwk-to-pem')

// `jwk` is the key from the JWKS file whose "kid" matches the token header
var pem = jwkToPem(jwk)

// `token` is the projected service account token to verify
jwt.verify(token, pem, { algorithms: ['RS256'] }, function (err, decodedToken) {
  // rest of the code
})

If the token is valid, AWS generates temporary credentials with the permissions of the current role (e.g. uploading a file to an S3 bucket), which look like this:

sts-response.json

{
    "Credentials": {
        "AccessKeyId": "ASIAWY4CVPOBS4OIBWNL",
        "SecretAccessKey": "02n52u8Smc76…",
        "SessionToken": "IQoJb3JpZ…",
        "Expiration": "2022-06-13T10:50:25+00:00"
    },
    "SubjectFromWebIdentityToken": "system:serviceaccount:default:test",
    "AssumedRoleUser": {
        "AssumedRoleId": "AROAWY4CVPOBXUSBA5C2B:test",
        "Arn": "arn:aws:sts::[aws account id]:assumed-role/oidc/test"
    },
    "Provider": "arn:aws:iam::[aws account id]:oidc-provider/[bucket name].s3.amazonaws.com",
    "Audience": "test"
}

You can use the credentials to access the S3 bucket from this point onwards.

  • Projected service account tokens are identities valid within a Kubernetes cluster. However, you can exchange them for a valid token elsewhere.
  • The Amazon IAM service can receive such tokens and verify their identities by looking into the iss field of the JWT token.
  • If the identity is legit, it can issue its own token.
  • The new token can be used to access services in Amazon Web Services.

The entire process is documented below, and a full tutorial on how to create the integration manually is available here.

This is great if you need to validate access to resources hosted outside the cluster, but should you go through the same hoops when it comes to services in the cluster?

That's not necessary.

Validating Projected Service Account Tokens with the Token Review API

Tokens that are created in the cluster can also be validated from within with the Token Review API.

Let's create a token for the Service account with:

bash

kubectl create token test
eyJhbG…

Create the following YAML resource and include the token:

token-review.yaml

kind: TokenReview
apiVersion: authentication.k8s.io/v1
metadata:
  name: test
spec:
  token: eyJhbG… # <- token

Submit the resource to the cluster and pay attention to the extra -o yaml:

bash

kubectl apply -o yaml -f token-review.yaml

The response should look like this:

apiVersion: authentication.k8s.io/v1
kind: TokenReview
metadata:
  name: test
spec:
  token: eyJhbG…
status:
  audiences:
    - https://kubernetes.default.svc.cluster.local
  authenticated: true
  user:
    groups:
      - system:serviceaccounts
      - system:serviceaccounts:default
      - system:authenticated
    uid: eccac137-25e2-4e84-9d83-18b2f9c5e5af
    username: system:serviceaccount:default:test

The Token Review API works just like the AWS IAM integration: we can verify the identity and retrieve the details from a single token.

However, this is a more straightforward single API call rather than a complex OIDC flow.

The token can be further customised using audiences to scope where the access can be used.
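
If you'd rather build the TokenReview programmatically, the manifest is a small JSON document that you POST to /apis/authentication.k8s.io/v1/tokenreviews. Here's a minimal sketch; the token_review_body helper is hypothetical, and authenticating the POST itself is out of scope:

```python
import json

def token_review_body(token, audiences=None):
    """Build a TokenReview manifest; `audiences` scopes where the token is accepted."""
    spec = {"token": token}
    if audiences:
        spec["audiences"] = audiences
    return {
        "apiVersion": "authentication.k8s.io/v1",
        "kind": "TokenReview",
        "spec": spec,
    }

body = token_review_body("eyJhbG...",
                         audiences=["https://kubernetes.default.svc.cluster.local"])
print(json.dumps(body, indent=2))
```

The API server answers with the same object plus a status field containing authenticated, the username and the groups, as shown above.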

Generating Secrets for Service Accounts with Kubernetes 1.24 or greater

Starting with 1.24, Kubernetes won't generate Secrets automatically for ServiceAccounts.

However, you can still revert to the old behaviour by creating a Secret and attaching it to the Service Account using an annotation.

For example, the current service account test has no secret object.

But you can create a Secret (and token) with:

secret-test.yaml

apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: test
  annotations:
    kubernetes.io/service-account.name: "test"

You can submit the resource to the cluster with:

bash

kubectl apply -f secret-test.yaml
secret/test created

If you inspect the Secret, you can spot the token:

bash

kubectl describe secret test

Name:         test
Namespace:    default

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1111 bytes
namespace:  7 bytes
token:      eyJhbG…

You can also verify the identity of the token with the Token Review API:

token-review.yaml

kind: TokenReview
apiVersion: authentication.k8s.io/v1
metadata:
  name: test
spec:
  token: eyJhbG…

Submit the resource to the cluster and pay attention to the extra -o yaml:

bash

kubectl apply -o yaml -f token-review.yaml

The output should look like this:

response.yaml

apiVersion: authentication.k8s.io/v1
kind: TokenReview
metadata:
  name: test
spec:
  token: eyJhbG…
status:
  audiences:
  - https://kubernetes.default.svc.cluster.local
  authenticated: true
  user:
    groups:
    - system:serviceaccounts
    - system:serviceaccounts:default
    - system:authenticated
    uid: eccac137-25e2-4e84-9d83-18b2f9c5e5af
    username: system:serviceaccount:default:test

If you inspect the token in jwt.io, you will notice that this token has no expiry:

{
  "iss": "kubernetes/serviceaccount",
  "kubernetes.io/serviceaccount/namespace": "default",
  "kubernetes.io/serviceaccount/secret.name": "test",
  "kubernetes.io/serviceaccount/service-account.name": "test",
  "kubernetes.io/serviceaccount/service-account.uid": "eccac137-25e2-4e84-9d83-18b2f9c5e5af",
  "sub": "system:serviceaccount:default:test"
}

This is precisely the same behaviour as in previous versions of Kubernetes.
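As an aside, you don't need jwt.io to read a token: the payload is plain base64url-encoded JSON. The sketch below builds a toy token so it is self-contained; in practice you would substitute the token extracted from the Secret:

```shell
# Build a toy JWT (header.payload.signature) with a known payload
PAYLOAD=$(printf '{"sub":"system:serviceaccount:default:test"}' \
  | base64 | tr -d '=\n' | tr '+/' '-_')
TOKEN="eyJhbGciOiJSUzI1NiJ9.${PAYLOAD}.signature"

# The payload is the second dot-separated field, encoded as unpadded
# base64url: restore the padding and the standard alphabet, then decode
P=$(echo "$TOKEN" | cut -d. -f2)
case $(( ${#P} % 4 )) in 2) P="$P==" ;; 3) P="$P=" ;; esac
echo "$P" | tr '_-' '/+' | base64 -d
# → {"sub":"system:serviceaccount:default:test"}
```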

Bonus: which authentication plugin should you use?

Kubernetes has the following plugins for authenticating users:

  1. Static Tokens.
  2. X.509 client certificates.
  3. OpenID Connect (OIDC).
  4. Authenticating proxy.
  5. Webhook Token.

Which one should you use?

In the previous section, we discussed how Static Tokens have some limitations:

  1. You need to know the name of all your users in advance.
  2. Editing the CSV file requires restarting the API server.
  3. Tokens do not expire.

Static tokens are not the best choice for a production environment.

A slightly better option is to use X.509 client certificates.

With X.509 client certificates:

  1. The kube-apiserver is configured to point to a Certificate Authority (CA) file with --client-ca-file=FILE.
  2. The admin issues client certificates to external users. Those X.509 client certificates are self-contained and include the username and groups.
  3. Users identify with the API server by presenting the client certificate during the TLS handshake.
  4. The kube-apiserver verifies the client certificate against the root CA. Then, it proceeds to extract the username and group.
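Revisiting the curl example from the beginning of the article, the client certificate is presented like this (admin.crt, admin.key and ca.crt are hypothetical file names):

```shell
export API_SERVER_URL=https://10.5.5.5:6443

# The certificate is sent during the TLS handshake; the API server
# extracts the username and groups from its subject
curl --cacert ca.crt --cert admin.crt --key admin.key \
  $API_SERVER_URL/api/v1/namespaces
```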

The workflow is similar to that of static tokens, but with some crucial differences: the certificate is self-contained, so the identity travels with the request, and adding a new user does not require restarting the API server.

However, X.509 client certificates are generally not a good idea and should be discouraged:

  1. X.509 client certificates are usually long-lived (i.e. years).
  2. The CA infrastructure provides a way to revoke a certificate, but Kubernetes does not support checking for revocation.
  3. Since client certificates are self-contained, they make it very hard to use groups with RBAC.
  4. For your client to authenticate, it must have a point-to-point connection with the API server. This means no reverse proxies or web application firewalls in front of your API server.

Certificates are a good solution for emergencies where any other authentication mechanism is (temporarily) unavailable.

You can use the X.509 certificate to access the cluster as a last resort.

Kubeadm and OpenShift do this by default, setting up certificates on the API masters so that kubectl can be used locally.

Other than that, you'd probably be better off using OIDC as an authentication mechanism.

OpenID Connect is especially useful if you already have an OpenID Connect infrastructure where you manage your users — in that case, you can keep managing your Kubernetes users in the same way you manage all the other users in your organisation.
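Enabling the plugin is a matter of pointing the kube-apiserver at your provider. A sketch with placeholder values:

```shell
# Placeholder issuer URL and client ID; the claim flags tell the API server
# which JWT fields to map to the Kubernetes username and groups
kube-apiserver \
  --oidc-issuer-url=https://accounts.example.com \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=email \
  --oidc-groups-claim=groups
```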

OpenID Connect providers issue JSON Web Tokens (JWTs).

That means they can be verified autonomously, without contacting the token's issuer, and they also expire.
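For instance, checking whether a JWT has expired needs nothing more than decoding the exp claim and comparing it with the clock. A self-contained sketch with a toy payload (a real token would come from your OIDC provider):

```shell
# Toy payload whose expiry is one hour in the past
PAYLOAD=$(printf '{"exp":%d}' $(( $(date +%s) - 3600 )) \
  | base64 | tr -d '=\n' | tr '+/' '-_')

# Restore base64url padding, decode, and extract the exp claim
P="$PAYLOAD"
case $(( ${#P} % 4 )) in 2) P="$P==" ;; 3) P="$P=" ;; esac
EXP=$(echo "$P" | tr '_-' '/+' | base64 -d | sed 's/.*"exp"://;s/[^0-9].*//')

# No call to the issuer is needed to reject the token
if [ "$EXP" -lt "$(date +%s)" ]; then echo "token expired"; else echo "token valid"; fi
# → token expired
```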

The last two authentication plugins are:

  1. Authentication proxy.
  2. Webhook.

The Authenticating Proxy authentication plugin allows users to authenticate to Kubernetes through an external authenticating proxy transparently.

When users make a request to the Kubernetes cluster, the request is first intercepted by the authenticating proxy.

This authentication plugin is helpful if you already use an authenticating proxy in your organisation or if you want to implement a custom authentication method that is not supported by any of the other authentication plugins — this is because the authenticating proxy can implement any authentication method you like.
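On the Kubernetes side, the API server is configured to trust identities that the proxy forwards in HTTP headers, but only on connections authenticated with the proxy's client certificate. A sketch with the conventional header names (the CA path is hypothetical):

```shell
# The proxy authenticates the user, then forwards the identity in headers;
# the API server trusts them only when the request is signed by this CA
kube-apiserver \
  --requestheader-client-ca-file=/etc/kubernetes/proxy-ca.crt \
  --requestheader-username-headers=X-Remote-User \
  --requestheader-group-headers=X-Remote-Group
```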

And finally, the Webhook Tokens authentication plugin allows users to authenticate to Kubernetes with an HTTP bearer token that is verified by an external custom authentication service.

The Webhook Token authentication plugin is helpful if you want to validate bearer tokens against an existing identity service rather than placing a proxy in front of the API server.
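On the API server side, the plugin is enabled with a single flag pointing at a kubeconfig-style file that describes how to reach the external service (the path below is hypothetical). The service receives a TokenReview object like the one shown earlier and returns it with status.authenticated filled in:

```shell
# Hypothetical path; the file describes the webhook service endpoint
kube-apiserver \
  --authentication-token-webhook-config-file=/etc/kubernetes/webhook-authn.yaml
```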

Summary

In this article, you learned how the Kubernetes API server authenticates users in the cluster.

In particular:

  1. The difference between externally managed and internal identities.
  2. How the Kubernetes API server implements different authentication plugins to authenticate users, such as static tokens, bearer tokens, X.509 certificates, OIDC, etc.
  3. How Kubernetes assigns identities for internal users with Service Accounts.
  4. The difference between tokens created through Secrets and Service Account tokens created by the Kubelet.
  5. How the Projected volume combines several volumes into a single one.
  6. How to inspect Service Account tokens with a JWT inspector.
  7. How Federated OIDC works and how it can be integrated with a cloud provider such as Amazon Web Services.
  8. How to use the Token Review API to verify Service Account tokens' validity within the cluster.

After the request is authenticated, it is passed to the authorization module.

You can follow the next part in this article about limiting access to Kubernetes resources.
