Enforcing Cluster Policy with Open Policy Agent — Part 1

Chuk Lee
13 min read · Dec 6, 2023

Kubernetes security can be categorised broadly into runtime security and request-time security; runtime security ensures that containers within pods are properly contained and are only doing the things that they are supposed to be doing.

Request-time security, for lack of a better term, involves incoming requests to the API Server; a request may be to get a list of all pods from the kube-system namespace or to provision a Deployment based on the manifest in the payload.

The primordial validating admission controller

A request passes through the following 3 stages (simplified) when it arrives at the API Server

  • Authentication — the request is first authenticated. If the request is from a user, their certificate will be validated; if it is from a service account, the token will be used for validation.
  • Authorization — after passing authentication, the roles associated with the request’s subject are looked up and checked to see whether the subject has the required permissions to perform the operation, e.g. create a Deployment.
  • Admission controllers — these are configurable modules that either modify or permit/reject a request. Admission controllers perform sanity checks on a request; just because you are allowed to perform an operation does not mean you should. For example, if a cluster administrator accidentally issues a command to delete the kube-system namespace, the request will pass authentication and authorization, but an admission controller will deny it. There are two types of admission controllers: mutating admission controllers, which change a request, e.g. automatically injecting sidecar containers into pods as Istio does, and validating admission controllers, e.g. rejecting a Deployment if its label selector does not match the pod template labels.

Kubernetes has a list of built-in admission controllers; some of these are enabled by default, e.g. NamespaceLifecycle, which prevents you from deleting the kube-system namespace, while others have to be enabled explicitly with the --enable-admission-plugins option. Examples of built-in mutating admission controllers include AlwaysPullImages, DefaultIngressClass and LimitRanger; examples of built-in validating admission controllers include NamespaceExists, PersistentVolumeClaimResize and ImagePolicyWebhook. See this for a list of built-in admission controllers.
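
For illustration, additional admission plugins are enabled on the API Server command line; on a kubeadm-provisioned control plane this typically lives in /etc/kubernetes/manifests/kube-apiserver.yaml. The plugin list below is just an example:

# excerpt: enabling extra admission plugins on the API Server
kube-apiserver \
  --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ImagePolicyWebhook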

Dynamic Admission Webhook

There are cases where the built-in admission controllers might not be able to meet your needs, e.g. all resources must have a valid dept label for cost tracking. Kubernetes allows you to deploy your own admission logic, called a dynamic admission webhook, and integrate it into the Kubernetes request-processing pipeline; depending on the type of admission logic, you do this by configuring either one or both of the special built-in admission controllers, enabled by default, called MutatingAdmissionWebhook and ValidatingAdmissionWebhook. Once you have configured these admission controllers, Kubernetes will start sending AdmissionReview objects, as JSON documents, to your admission webhook for evaluation.

For example, suppose you only allow signed images to be deployed in the production namespace. When a request to create a Deployment arrives at the cluster, the API Server will, after the request passes authentication and authorization, forward it, packaged as an AdmissionReview object, to your validating admission webhook (validating webhook for short). Your webhook can examine the AdmissionReview object and decide whether the API Server should proceed with the request.
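
To give a feel for what the webhook receives, here is a hypothetical, abridged AdmissionReview (admission.k8s.io/v1) for such a request; the object field, shortened here, carries the full Deployment manifest, and the webhook replies with an AdmissionReview whose response echoes the uid and sets allowed to true or false:

{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "request": {
    "uid": "c1a3d2f0-1111-2222-3333-444455556666",
    "kind": { "group": "apps", "version": "v1", "kind": "Deployment" },
    "resource": { "group": "apps", "version": "v1", "resource": "deployments" },
    "namespace": "production",
    "operation": "CREATE",
    "userInfo": { "username": "alice", "groups": ["system:authenticated"] },
    "object": { "apiVersion": "apps/v1", "kind": "Deployment" }
  }
}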

In this article, we will go through the intuition behind and the steps for deploying a validating webhook. Instead of writing a validating webhook ourselves, we will be using Open Policy Agent (OPA) as our validating webhook. There are several existing validating webhooks, like Gatekeeper (which uses OPA under the hood) and Kyverno, that can do what we are doing (and more!); but we will not be using either of them. Instead, we will do it the ‘slightly hard’ way and deploy OPA as a validating webhook manually.

Open Policy Agent as Validating Admission Webhook

OPA is an engine that evaluates policies against any input JSON document and reports whether those policies have been breached by the input. OPA is also able to use data from its context during policy evaluation; for example, a dictionary of usernames and passwords loaded from a database, or Ingress resources scraped from a cluster.

OPA policies are written in a high-level language called Rego. Rego is a declarative, logic-based language like Prolog. If you are looking for a quick introduction to Rego, have a look at this article. There is also a free Rego course and a playground for you to learn and try Rego.

We will start by deploying OPA as a Deployment with a corresponding Service.

Policies that are to be evaluated by OPA are deployed as ConfigMaps into the opa-ns namespace; a ConfigMap that contains policies must be labelled with openpolicyagent.org/policy and have the value rego.
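
As a minimal (hypothetical) illustration, such a ConfigMap could look like the one below; the dept label rule is only there to show the shape of a policy:

apiVersion: v1
kind: ConfigMap
metadata:
  name: require-dept-label
  namespace: opa-ns
  labels:
    openpolicyagent.org/policy: rego
data:
  main.rego: |
    package kubernetes.admission

    # deny any CREATE request whose object has no dept label (illustration only)
    deny[msg] {
      input.request.operation == "CREATE"
      not input.request.object.metadata.labels.dept
      msg := "all resources must have a dept label"
    }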

OPA itself does not load policies or context data from the cluster. This is the responsibility of the kube-mgmt sidecar. The context data and policies are pushed by kube-mgmt into OPA over OPA’s REST interface at http://localhost:8181/v1.
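
To make this concrete, kube-mgmt uses OPA’s standard Policy and Data REST APIs; you could push a policy and a piece of context data yourself with calls along these lines (the file name and the data payload here are purely illustrative):

# push (or replace) a policy module under the id 'example'
curl -X PUT localhost:8181/v1/policies/example --data-binary @example.rego

# push a JSON document into OPA's data tree
curl -X PUT localhost:8181/v1/data/kubernetes/namespaces/opa-ns \
  -H 'Content-Type: application/json' \
  -d '{"metadata": {"name": "opa-ns"}}'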

The following diagram summarises how kube-mgmt loads context data and policies into OPA.

How kube-mgmt loads data into OPA

Now that we have an understanding of how OPA and kube-mgmt work, we will go into the details of rolling out OPA as a validating webhook over two articles.

In this article, we will set up and deploy OPA and kube-mgmt along with all their supporting artifacts. In the second part, we will configure OPA as a validating webhook and deploy some policies for enforcement.

Deploying the OPA

We will deploy OPA and kube-mgmt according to the following diagram:

OPA deployment architecture
  • OPA and kube-mgmt will be deployed as a standard Deployment and Service with one exposed port, 8443. This port will be used by the API Server to forward AdmissionReview objects for validation. Since the API Server communicates over HTTPS, we will need to generate a set of certificates for this service endpoint.
  • Policies can use cluster information during evaluation; this data must be loaded from the cluster by kube-mgmt. To load this information into OPA, kube-mgmt must be given access to these resources through RBAC. We will need to bind the appropriate cluster roles to a service account associated with the pod.
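
The manifests in the rest of this article assume that the opa-ns namespace and the opa-sa service account already exist; if they do not, they can be created up front:

kubectl create namespace opa-ns
kubectl create serviceaccount opa-sa -n opa-ns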

The OPA and kube-mgmt will be deployed according to the following steps:

  1. Generate an RSA key pair and a certificate to be used by OPA to secure port 8443. The certificate will be issued by the cluster.
  2. Deploy OPA and kube-mgmt. Mount the certificate and key into the pod for OPA to use.
  3. Give kube-mgmt the appropriate permissions to scrape information from the API Server.

1. Generate RSA key pair and certificate

The API Server communicates with OPA over mTLS, so we will need to generate a key pair. Generate a private key with openssl; the private key is written to the opa.key file.

openssl genrsa -out opa.key 4096

Next we generate a certificate signing request (CSR) for the corresponding public key with the following command:

openssl req -new -key opa.key -out opa.csr \
-subj '/CN=system:node:opa-svc/O=system:nodes' \
-addext 'subjectAltName = DNS:opa-svc, DNS:opa-svc.opa-ns, DNS:opa-svc.opa-ns.svc, DNS:opa-svc.opa-ns.svc.cluster.local' \
-addext 'keyUsage = digitalSignature, keyEncipherment' \
-addext 'extendedKeyUsage = serverAuth'

We use openssl to generate a CSR for the public key of the provided private key (opa.key). We will need to get the CSR signed. The options we have are:

  1. Create a self signed cert
  2. Get Kubernetes to sign it for us
  3. Get it signed by a real CA

Option 2 is the easiest; we will get Kubernetes to generate a server certificate for us by masquerading as a Kubelet. However, the certificate request must follow certain rules (see Kubernetes signers); the most important ones are:

  • The certificate’s common name must start with system:node:
  • The organisation's name must be system:nodes
  • Key usage must be serverAuth, keyEncipherment and digitalSignature.

The OPA service can be accessed with different names, so all of these names have to be included in the server certificate as alternative names (subjectAltName). In this case, the alternative names are opa-svc, opa-svc.opa-ns, opa-svc.opa-ns.svc and opa-svc.opa-ns.svc.cluster.local, where opa-svc is the service name and opa-ns the namespace (see the deployment diagram above).

We also include the certificate’s usage (keyUsage) and its server role (serverAuth in extendedKeyUsage).

Once the CSR has been generated, use the following command to view the details

openssl req -in opa.csr -noout -text

The output is shown below

Submit the CSR to the cluster for signing with a CertificateSigningRequest resource.

CSR resource
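
The author’s opa-csr.yaml is not reproduced here; a rough sketch of the resource, with a placeholder for the encoded CSR, is shown below (the line numbers quoted in the text refer to the original file):

apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: opa-csr
spec:
  # single-line base64 encoding of opa.csr goes here
  request: <base64 of opa.csr>
  signerName: kubernetes.io/kubelet-serving
  usages:
    - digital signature
    - key encipherment
    - server auth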

The CSR must be encoded as a single-line base64 string (opa-csr.yaml line 12). The easiest way to get a single base64 line is to use the -w0 option:

cat opa.csr | base64 -w0 -

The CSR resource must also specify which signer to use (see Kubernetes signers) with the signerName attribute (opa-csr.yaml line 7). Kubernetes has different signers depending on what you are trying to do with the certificate. If you are making a request to the API Server, then your certificate may be signed by kubernetes.io/kube-apiserver-client. Since we want a server certificate, we will use kubernetes.io/kubelet-serving as the signer; don’t forget to create the CSR according to the requirements of the kubernetes.io/kubelet-serving signer, which I have described above.

After submitting the CertificateSigningRequest, approve (or deny) it with the kubectl certificate approve (or deny) command. When the CSR has been approved, you should see Approved,Issued against the CSR’s name. The following shows the CSR approval process.

Signing a CSR — k is my alias for kube-color
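
Assuming the CertificateSigningRequest was named opa-csr (the name used later in this article), the approval boils down to:

kubectl certificate approve opa-csr

# the CONDITION column should now read Approved,Issued
kubectl get csr opa-csr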

If you are not the CSR approver, then you can view the CSR status with

kubectl get csr

The list will display CSRs that are either approved, rejected or pending. CertificateSigningRequest is a cluster-wide resource, so you do not need to specify a namespace when viewing it.

If your CSR is created incorrectly, e.g. it does not follow the signer’s rules, then the CSR will only show Approved but not Issued.

You can now extract the certificate, which is stored base64 encoded in the CertificateSigningRequest, into the file opa.crt with the following command:

kubectl get csr/opa-csr -ojsonpath="{.status.certificate}" | base64 -d - > opa.crt

View the issued certificate

openssl x509 -in opa.crt -noout -text

The output is shown below

Issued certificate

We will now bundle the key, the certificate and the CA cert as a TLS secret. First, create the TLS secret with the following command

kubectl create secret tls opa-tls --key=opa.key --cert=opa.crt -nopa-ns \
-oyaml --dry-run=client > opa-tls.yaml

then manually add the cluster CA cert to the TLS secret resource file opa-tls.yaml. This saves us the hassle of creating a separate Secret for the CA cert. The easiest way to get the CA cert is from $HOME/.kube/config. The following shows the TLS secret with the CA cert under the ca.crt key.
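
Since the original opa-tls.yaml is not reproduced here, the sketch below, with placeholder values, shows the shape of the edited Secret:

apiVersion: v1
kind: Secret
metadata:
  name: opa-tls
  namespace: opa-ns
type: kubernetes.io/tls
data:
  tls.crt: <base64 of opa.crt>
  tls.key: <base64 of opa.key>
  # manually added: the cluster CA certificate, e.g. copied from the
  # certificate-authority-data field in $HOME/.kube/config (already base64 encoded)
  ca.crt: <base64 of the cluster CA certificate>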

2. Deploy OPA and kube-mgmt

We are now ready to deploy OPA and kube-mgmt. But before we write the Kubernetes manifest, let’s look at how we are going to run these two commands.

The OPA binary is a multi-faceted tool; you can use it to analyse, syntax-check, evaluate and test policy files, to build OPA bundles, or to start OPA as a server. We will be using OPA as a server, so we will start it in server mode; the following is how we will start OPA as a server:

opa run --server \
--addr=http://localhost:8181 \
--addr=:8443 \
--tls-ca-cert-file=/path/to/ca.crt \
--tls-cert-file=/path/to/tls.crt \
--tls-private-key-file=/path/to/tls.key
  • run --server runs OPA in server mode
  • --addr binds two ports for REST invocation; the first, http://localhost:8181, is used internally within the pod by the kube-mgmt sidecar to update OPA with cluster context information and user policies. The second, :8443, is a TLS-protected port for external invocations; the API Server will use this port to send AdmissionReview objects to OPA.
  • Finally, the --tls-* options configure the CA cert, TLS cert and key for the :8443 port.

The kube-mgmt is executed with the following options:

kube-mgmt \
--opa-url=http://localhost:8181/v1 \
--enable-policies=true \
--namespaces=opa-ns \
--policy-label=openpolicyagent.org/policy \
--policy-value=rego \
--replicate=v1/pods \
--replicate-cluster=v1/namespaces
  • --opa-url is the OPA endpoint that kube-mgmt uses to push the scraped context data and user policies into OPA. This should correspond to one of the --addr options given to OPA. The default endpoint is http://localhost:8181/v1; I am showing it here for clarity.
  • --enable-policies=true requests kube-mgmt to automatically discover Rego policies from ConfigMaps. true is the default value.
  • --namespaces is a comma-separated list of namespaces for kube-mgmt to scan for policies. In the above example, kube-mgmt will only scan ConfigMaps for policies in the opa-ns namespace.
  • --policy-label and --policy-value specify the key/value label used by kube-mgmt to filter ConfigMaps that contain policies; this means that only ConfigMaps deployed in the opa-ns namespace with the label openpolicyagent.org/policy and the value rego will be scanned for policies. openpolicyagent.org/policy and rego are the default values.
  • --replicate and --replicate-cluster instruct kube-mgmt on what context information to load from the cluster. --replicate loads namespace-scoped resources and --replicate-cluster loads cluster-scoped resources. In the above example, we are scraping Pod and Namespace data. See this for more details.

See here for more kube-mgmt options.

We will create a Deployment whose pod runs the above two containers, together with a Service, using the following resource:
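
The author’s full opa.yaml is not reproduced here; the abridged sketch below captures the parts discussed in this article. The image tags and the /certs mount path are placeholders, and the line numbers quoted in the text refer to the original file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: opa-deploy
  namespace: opa-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: opa
  template:
    metadata:
      labels:
        app: opa
    spec:
      serviceAccountName: opa-sa
      volumes:
        - name: cert-vol
          secret:
            secretName: opa-tls
      containers:
        - name: opa
          image: openpolicyagent/opa:latest        # tag is a placeholder
          args:
            - run
            - --server
            - --addr=http://localhost:8181
            - --addr=:8443
            - --tls-ca-cert-file=/certs/ca.crt
            - --tls-cert-file=/certs/tls.crt
            - --tls-private-key-file=/certs/tls.key
          volumeMounts:
            - name: cert-vol
              mountPath: /certs                    # mount path is a placeholder
              readOnly: true
        - name: kube-mgmt
          image: openpolicyagent/kube-mgmt:latest  # tag is a placeholder
          args:
            - --opa-url=http://localhost:8181/v1
            - --enable-policies=true
            - --namespaces=opa-ns
            - --policy-label=openpolicyagent.org/policy
            - --policy-value=rego
            - --replicate=v1/pods
            - --replicate-cluster=v1/namespaces
---
apiVersion: v1
kind: Service
metadata:
  name: opa-svc
  namespace: opa-ns
spec:
  selector:
    app: opa
  ports:
    - name: https
      port: 8443
      targetPort: 8443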

Assuming that the opa-tls Secret has been deployed, we mount the Secret into the pod as a volume called cert-vol (opa.yaml lines 28–31); the certs and key are then used by opa in the --tls-* command line options (opa.yaml lines 43–45). There is also a service account called opa-sa (opa.yaml line 27) associated with the pod, to which we will need to bind cluster roles and roles.

We also have a service, opa-svc, that only exposes the TLS-protected port, 8443 (opa.yaml lines 75–77).

Verify that both containers are running correctly; first, check the opa container (opa.yaml lines 33–51) with the following log command

kubectl logs deploy/opa-deploy -f -copa -nopa-ns  

You should see the following

Logs from the opa container

Next check the kube-mgmt container (opa.yaml lines 52–61)

kubectl logs deploy/opa-deploy -f -ckube-mgmt -nopa-ns
Logs from kube-mgmt container

We see these errors because kube-mgmt is trying to scrape pod and namespace details from the cluster, but the service account opa-sa does not have permission to do so. Recall that we configured kube-mgmt to scrape Pods and Namespaces and load this information into opa (opa.yaml lines 60, 61).

3. Roles for kube-mgmt

We will now create a cluster role and bind it to opa-sa so that the service account can read pods and namespaces from the cluster.

We will also need to give opa-sa full access to ConfigMaps in the opa-ns namespace, because kube-mgmt needs to modify the ConfigMaps that contain policies. The cluster role, the role and their bindings for opa-sa are shown in the following YAML file.

opa-rbac.yaml contains a ClusterRole called opa-cr (opa-rbac.yaml lines 2–15), which allows a service account to read Pods and Namespaces. We also have a Role, opa-r (opa-rbac.yaml lines 32–43), which allows opa-sa full control over ConfigMaps in the opa-ns namespace.
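
The original opa-rbac.yaml is not reproduced here; an abridged sketch of the four resources it describes looks like this (again, the line numbers quoted above refer to the original file):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: opa-cr
rules:
  - apiGroups: [""]
    resources: ["pods", "namespaces"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: opa-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: opa-cr
subjects:
  - kind: ServiceAccount
    name: opa-sa
    namespace: opa-ns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: opa-r
  namespace: opa-ns
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: opa-rb
  namespace: opa-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: opa-r
subjects:
  - kind: ServiceAccount
    name: opa-sa
    namespace: opa-ns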

You can create the above 4 resources with the following kubectl commands

kubectl create clusterrole opa-cr --verb=get,list,watch \
--resource=pods,namespaces

kubectl create clusterrolebinding opa-crb --clusterrole=opa-cr \
--serviceaccount=opa-ns:opa-sa

kubectl create role opa-r --verb=* --resource=configmaps -nopa-ns

kubectl create rolebinding opa-rb --role=opa-r \
--serviceaccount=opa-ns:opa-sa -nopa-ns

Verifying context data

We can now examine the kube-mgmt logs again; this time we should see that pod and namespace information has been successfully scraped by kube-mgmt.

kube-mgmt successfully scraping data from API Server

We will also verify that kube-mgmt has successfully pushed the context information into opa. Since opa exposes port 8443 (opa.yaml line 40), we could port-forward to 8443; however, that would require a client certificate, so we will not do it now.

An alternative is to exec into the opa container and curl the non-TLS port 8181 (opa.yaml line 39). However, the opa image does not contain a shell or the curl command. The workaround is to attach an ephemeral container to the pod and access opa’s 8181 port through it. The following command creates an ephemeral container from the nicolaka/netshoot image, sharing the pod’s network namespace (and, via --target, the opa container’s process namespace):

kubectl debug pod/opa-deploy-5788dbc66c-pn96d \
-ti --target=opa -nopa-ns \
--image=nicolaka/netshoot \
-- /bin/bash

When the bash shell is spawned, perform a GET on OPA’s Data API with curl; for example, the following command queries the names of the namespaces loaded into opa

curl -s localhost:8181/v1/data | jq ".result.kubernetes.namespaces | keys[]" 

The output is

List of namespaces loaded by kube-mgmt into OPA

The following command lists all the pods loaded into opa

curl -s localhost:8181/v1/data/kubernetes/pods | jq ".result | values[] | keys[]"

The output of the above command is

List of pods loaded by kube-mgmt into OPA

The /v1/data path is OPA’s data resource, and kubernetes is the path (sub-resource) that kube-mgmt writes the context data to. You can change this path with the --replicate-path option when you start kube-mgmt.
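
For example, assuming kube-mgmt were started with --replicate-path=cluster instead, the same pod data should then be found under /v1/data/cluster:

curl -s localhost:8181/v1/data/cluster/pods | jq ".result | values[] | keys[]"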

Conclusion

We now have a functioning policy engine in the cluster that can be leveraged by any application to evaluate queries against a set of policies. Policies can be dynamically loaded and unloaded simply by creating or deleting ConfigMaps labelled with openpolicyagent.org/policy: rego in the opa-ns namespace.

But we would like to use the deployed OPA as an admission controller. In part 2, we will look at how to configure the OPA deployment as a validating webhook and test the webhook with some policies.

Till next time.
