Azure DevOps Agents on AKS with the kaniko Option

Umut Ercan · Published in adessoTurkey · 12 min read · Jan 21, 2023


Hello everyone!

In this article, I will demonstrate an alternative way to run Azure DevOps agents on AKS using kaniko. If you follow Microsoft’s official documentation at this address, you can easily implement self-hosted agents:

https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/docker?view=azure-devops

but there is one problem related to Docker: the documented approach relies on Docker-in-Docker (DinD) and mounting the node’s Docker socket.

(Screenshot from Microsoft’s documentation)

Well, we can’t just stay on older versions of AKS either. Newer AKS versions use containerd as the container runtime, so there is no Docker daemon on the nodes, and older versions fall out of support.

(Screenshot from Microsoft’s documentation)

Check: https://learn.microsoft.com/en-us/azure/aks/supported-kubernetes-versions?tabs=azure-cli
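You can confirm the runtime yourself once the cluster exists later in this walkthrough; the node list shows it directly:

# The CONTAINER-RUNTIME column shows containerd://<version> on current AKS node pools,
# which is why the docker.sock / DinD approach from the docs no longer applies.
kubectl get nodes -o wide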

Basically, we must replace Docker with something else that can build our images. There are a lot of container build tools, but today kaniko will solve this problem. kaniko is a great fit for building images because it doesn’t need a Docker daemon or special privileges, which means we can easily run it inside Kubernetes. Besides that, kaniko is designed to run as a container; we only need to provide a Dockerfile and a build context.

I will elaborate on these elements with examples. If you want to check the details, please visit: https://github.com/GoogleContainerTools/kaniko
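Before wiring kaniko into Kubernetes, it helps to see the bare interface of the executor. A minimal sketch of an invocation (the registry name here is just a placeholder):

# kaniko’s executor needs only three things: a Dockerfile, a build context, and a destination.
# --no-push is useful for a dry run that skips registry credentials entirely.
/kaniko/executor \
  --dockerfile=Dockerfile \
  --context=dir:///workspace/ \
  --destination=myregistry.azurecr.io/test-image:v1 \
  --no-push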

So, in this article, we will see that we can build images without a Docker daemon and still use these agents efficiently. From here on, I will call these agents aks-agents. The focus will be on the technical parts; I have also pushed the changes to Azure DevOps repositories if you want to inspect them there.

https://learn4ops@dev.azure.com/learn4ops/aks-agent/_git/aks-agent

Let’s start!

First things first, let’s begin with the infrastructure: a container registry and a Kubernetes cluster on Azure. I will use container registry tokens, which will be very useful in this example, so the SKU will be Premium. By the way, I will skip the installation of kubectl, az-cli, jq, etc. I will use bash and set a lot of variables for reuse; please follow them if you want to test.

az login --use-device-code

git clone https://learn4ops@dev.azure.com/learn4ops/aks-agent/_git/aks-agent

cd aks-agent

We will use these variables many times:

RG=aks-agent-rg && ACR=aksagentacr && AKS=aks-cluster && NS=azure-devops && SP=spforaks

Check these environment variables with:

echo $RG $ACR $AKS $NS $SP

ACR creation & tokens:

az group create -n $RG -l westeurope

az acr create -n $ACR -g $RG --sku premium

ACR is a private registry, so kaniko will need permission to push images. ACR tokens will be very handy here.

ACR_USER=kaniko-push

ACR_PASSWORD=$(az acr token create -r $ACR -g $RG -n $ACR_USER --scope-map _repositories_push --output json | jq -r .credentials.passwords[0].value)

Great! If you want, you can check this token from the portal, but the password is shown only once, so keep it in a variable.
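If you prefer the CLI over the portal, you can list the token (only its metadata; the generated password is not retrievable again):

# Shows the kaniko-push token and its scope map.
az acr token list -r $ACR -o table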

SP & AKS creation & connect

SPPW=$(az ad sp create-for-rbac --name $SP | jq -r .password)

SPID=$(az ad sp list --display-name $SP | jq -r .[].appId)

SUBSID=$(az account show | jq -r .id) && TENANTID=$(az account show | jq -r .homeTenantId) && SUBNAME=$(az account show | jq -r .name)

az role assignment create --assignee $SPID --scope /subscriptions/$SUBSID --role Contributor

Keep the appId and password for the service connection.
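Optionally, verify the role assignment before creating the cluster:

# The service principal should appear with the Contributor role at the subscription scope.
az role assignment list --assignee $SPID -o table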

az aks create \
-g $RG \
-n $AKS \
--generate-ssh-keys \
--node-count 1 \
--node-vm-size Standard_D2_v2 \
--service-principal $SPID \
--client-secret $SPPW \
--attach-acr $ACR

az aks get-credentials --resource-group $RG --name $AKS

kubectl get pods -A

The output should look like this:

NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   azure-ip-masq-agent-h4kc8             1/1     Running   0          6m30s
kube-system   cloud-node-manager-2v8j7              1/1     Running   0          6m30s
kube-system   coredns-59b6bf8b4f-4gpb9              1/1     Running   0          7m31s
kube-system   coredns-59b6bf8b4f-jxjd5              1/1     Running   0          5m27s
kube-system   coredns-autoscaler-5655d66f64-mbjrj   1/1     Running   0          7m31s
kube-system   csi-azuredisk-node-s8wkf              3/3     Running   0          6m30s
kube-system   csi-azurefile-node-gnsfv              3/3     Running   0          6m30s
kube-system   konnectivity-agent-869c6dccb5-2szkh   1/1     Running   0          7m31s
kube-system   konnectivity-agent-869c6dccb5-nhj69   1/1     Running   0          7m31s
kube-system   kube-proxy-fnh44                      1/1     Running   0          6m30s
kube-system   metrics-server-7dd74d8758-hk7jk       2/2     Running   0          5m23s
kube-system   metrics-server-7dd74d8758-xwkxz       2/2     Running   0          5m23s

Before moving on with AKS, let’s add a service connection in Azure DevOps.

To print the required values, you can use this command:

echo -e "SPPW: $SPPW \nSPID: $SPID \nSUBSID: $SUBSID \nTENANTID: $TENANTID \nSUBNAME: $SUBNAME"

So, we are connected to AKS; let’s quickly test kaniko with a random Dockerfile. Before creating the kaniko container, we must consider the context and the Dockerfile. kaniko has a lot of options here: for instance, you can use an Azure storage account, a git repository, or a local tar. For testing purposes, I will use local files with a single Kubernetes pod. Another important part is the secret for push permission, so let’s create it first. (The syntax is important; otherwise, it might not work correctly.)

kubectl create ns $NS

kubectl create secret generic registrysecret -n $NS --from-literal=dockerconfigjson="{\"auths\": {\"$ACR.azurecr.io\": {\"username\": \"$ACR_USER\",\"password\": \"$ACR_PASSWORD\"}}}"
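A quick sanity check helps here, since a malformed config.json is a common reason for kaniko push failures:

# The decoded secret must be valid JSON with an "auths" map keyed by the registry FQDN.
kubectl get secret registrysecret -n $NS \
  -o jsonpath='{.data.dockerconfigjson}' | base64 -d | jq .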

Generally, a kaniko container won’t live for a long time, so I believe running it as a Kubernetes Job makes more sense. In the example below, I will take advantage of shared volumes inside a single pod, which is a very common way to provide files to kaniko. In later sections, we will also use git for the context. Anyway, we will use two containers: one that provides the required files and one for kaniko. The first container will bring the files and will be the init container; kaniko will then do the build and push the image.

cat <<EOF | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: kaniko-test
  namespace: azure-devops
spec:
  backoffLimit: 0
  ttlSecondsAfterFinished: 60
  template:
    spec:
      initContainers:
      - name: init-container
        image: alpine
        command: ["sh", "-c"]
        args:
        - |
          while true; do sleep 1; if [ -f /workspace/Dockerfile ]; then break; fi; done
        volumeMounts:
        - name: local-volume
          mountPath: /workspace
      containers:
      - name: kaniko
        image: gcr.io/kaniko-project/executor:v1.9.1
        args:
        - "--context=dir:///workspace/"
        - "--destination=$ACR.azurecr.io/test-image:v1"
        volumeMounts:
        - name: kaniko-secret
          mountPath: /kaniko/.docker
        - name: local-volume
          mountPath: /workspace
      restartPolicy: Never
      volumes:
      - name: kaniko-secret
        secret:
          secretName: registrysecret
          items:
          - key: dockerconfigjson
            path: config.json
      - name: local-volume
        emptyDir: {}
EOF

Since this is a Job, we can find the pod name with these commands:

JOBUUID=$(kubectl get job kaniko-test -n $NS -o "jsonpath={.metadata.labels.controller-uid}") && PODNAME=$(kubectl get po -n $NS -l controller-uid=$JOBUUID -o json | jq -r .items[0].metadata.name)
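As a side note, the Job controller also labels its pods with job-name, so a simpler selector works as well (and on newer Kubernetes versions the controller-uid label is prefixed with batch.kubernetes.io/):

# Equivalent lookup without the controller-uid indirection.
kubectl get pods -n $NS -l job-name=kaniko-test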

Okay, let’s copy the Dockerfile to the init container with this command:

kubectl cp -n $NS kaniko-test/Dockerfile $PODNAME:/workspace -c init-container

Check the output:

sleep 5 && kubectl logs -f -n $NS $PODNAME

Defaulted container "kaniko" out of: kaniko, init-container (init)
INFO[0000] Retrieving image manifest alpine
INFO[0000] Retrieving image alpine from registry index.docker.io
INFO[0000] Retrieving image manifest alpine
INFO[0000] Returning cached image manifest
INFO[0001] Built cross stage deps: map[]
INFO[0001] Retrieving image manifest alpine
INFO[0001] Returning cached image manifest
INFO[0001] Retrieving image manifest alpine
INFO[0001] Returning cached image manifest
INFO[0001] Executing 0 build triggers
INFO[0001] Building stage 'alpine' [idx: '0', base-idx: '-1']
INFO[0001] Skipping unpacking as no commands require it.
INFO[0001] Pushing image to aksagentacr13757.azurecr.io/test-image:v1
INFO[0002] Pushed aksagentacr13757.azurecr.io/test-image@sha256:4957f1b5c01b975584c1eb4f493c68c79301c23163c67070df9f223b04b0325f
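As an extra check, you can confirm the image landed in ACR:

# Lists the tags of the freshly pushed repository.
az acr repository show-tags -n $ACR --repository test-image -o table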

Looks perfect, the test image was pushed successfully. So kaniko is working as expected; the focus can now return to Azure DevOps. Azure DevOps agents can run in containers configured with a few environment variables, and the required values can be produced from the Azure DevOps UI. For the agent’s Dockerfile and context, you can see the details in the official documentation.

Let’s repeat the same steps for the aks-agent image. This time, the files are the Azure DevOps agent’s Dockerfile and start.sh, and let’s give the image a better name.

cat <<EOF | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: kaniko-agent
  namespace: azure-devops
spec:
  backoffLimit: 0
  ttlSecondsAfterFinished: 60
  template:
    spec:
      initContainers:
      - name: init-container
        image: alpine
        command: ["sh", "-c"]
        args:
        - |
          while true; do sleep 1; if [ -f /workspace/Dockerfile ]; then break; fi; done
        volumeMounts:
        - name: local-volume
          mountPath: /workspace
      containers:
      - name: kaniko
        image: gcr.io/kaniko-project/executor:v1.9.1
        args:
        - "--context=dir:///workspace/"
        - "--destination=$ACR.azurecr.io/aks-agent-image:v1"
        volumeMounts:
        - name: kaniko-secret
          mountPath: /kaniko/.docker
        - name: local-volume
          mountPath: /workspace
      restartPolicy: Never
      volumes:
      - name: kaniko-secret
        secret:
          secretName: registrysecret
          items:
          - key: dockerconfigjson
            path: config.json
      - name: local-volume
        emptyDir: {}
EOF
JOBUUID=$(kubectl get job kaniko-agent -n $NS -o "jsonpath={.metadata.labels.controller-uid}") && PODNAME=$(kubectl get po -n $NS -l controller-uid=$JOBUUID -o json | jq -r .items[0].metadata.name)

kubectl cp -n $NS azure-devops-agent/start.sh $PODNAME:/workspace -c init-container

kubectl cp -n $NS azure-devops-agent/Dockerfile $PODNAME:/workspace -c init-container

Check the output:

sleep 5 && kubectl logs -f -n $NS $PODNAME

The image is ready, so let’s create our agent. We are going to use a service account with the necessary permissions, and then we will deploy this image as a Deployment. The AZP_URL, AZP_TOKEN, and AZP_POOL variables need to be created.

So, here is the example for required variables:

AZP_URL=https://dev.azure.com/learn4ops && AZP_POOL=testpool && AZP_TOKEN='PAT'

For the token, please follow this doc: https://learn.microsoft.com/en-us/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate?view=azure-devops&tabs=Windows

For the agent pool, you can create it quickly with:

curl -u :$AZP_TOKEN   -H "Content-Type: application/json"  -d '{"name": "testpool","autoProvision": true}' -X POST https://dev.azure.com/learn4ops/_apis/distributedtask/pools?api-version=7.0
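To confirm the pool was created, you can list the pools through the same REST API (the response is a JSON object with a value array):

# Prints the names of all agent pools in the organization; "testpool" should be among them.
curl -s -u :$AZP_TOKEN \
  "https://dev.azure.com/learn4ops/_apis/distributedtask/pools?api-version=7.0" | jq -r '.value[].name'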

You can also create it from the Azure DevOps UI.

Then create the required secret:

kubectl create secret generic azdevops \
--namespace $NS \
--from-literal=AZP_URL=$AZP_URL \
--from-literal=AZP_TOKEN=$AZP_TOKEN \
--from-literal=AZP_POOL=$AZP_POOL

All the YAML manifests for the aks-agent:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: $NS
  name: azure-agent
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: $NS
  name: azure-agent-role
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "watch", "list", "create", "patch", "update", "delete"]
- apiGroups: ["batch"]
  resources: ["jobs"]
  verbs: ["get", "list", "watch", "create", "patch", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: azure-agent-role-binding
  namespace: $NS
subjects:
- kind: ServiceAccount
  name: azure-agent
  namespace: $NS
roleRef:
  kind: Role
  name: azure-agent-role
  apiGroup: rbac.authorization.k8s.io
EOF
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azdevops-deployment
  namespace: $NS
  labels:
    app: azdevops-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azdevops-agent
  template:
    metadata:
      labels:
        app: azdevops-agent
    spec:
      containers:
      - name: kubepodcreation
        image: $ACR.azurecr.io/aks-agent-image:v1
        env:
        - name: AZP_URL
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_URL
        - name: AZP_TOKEN
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_TOKEN
        - name: AZP_POOL
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_POOL
        - name: AZP_AGENT_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        resources:
          requests:
            memory: "512Mi"
            cpu: "500m"
          limits:
            memory: "1024Mi"
            cpu: "750m"
      serviceAccountName: azure-agent
EOF
kubectl get pods -n $NS

When I check the related pod’s log:
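One way to check it without hunting for the pod name (the deployment has a single replica):

# Streams the agent log; it should show the agent registering with the pool and listening for jobs.
kubectl logs -n $NS deployment/azdevops-deployment --tail=50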

Looks like everything is ready for running a pipeline!

What have we done so far? We have an Azure DevOps agent running on AKS and a tested kaniko Job definition. Now let’s consider a normal pipeline. The kaniko job must be reusable and reachable from any pipeline, which means this CI part must be templated. I will use a bash script for this purpose, but it would also fit nicely into a template directory if there were one. I have mentioned kaniko’s Dockerfile and context before; there are several ways to provide these resources, and this time we can use the git option! The first thing that comes to mind is git credentials; I will use an Azure DevOps built-in variable for that, but you can use another suitable method. Another important topic is logging: the kaniko job runs inside AKS, so its logs are only visible at the Kubernetes level by default. Therefore, we will also add some commands to stream the logs back to the Azure DevOps UI.

Example for script and pipeline:

Kaniko script with the reusable option

#!/bin/bash

set -e

env

echo "****************************************************"
echo "PROJECT_NAME: $PROJECT_NAME"
echo "REPO_NAME: $REPO_NAME"
echo "KANIKO_NAME: $KANIKO_NAME"
echo "PIPELINE_ID: $PIPELINE_ID"
echo "BRANCH_NAME: $BRANCH_NAME"
echo "SYSTEM_ACCESSTOKEN: $SYSTEM_ACCESSTOKEN"
echo "REPO_NAME: $REPO_NAME"
echo "WORKING_DIRECTORY: $WORKING_DIRECTORY"
echo "AZURE_CONTAINER_REGISTRY_NAME: $AZURE_CONTAINER_REGISTRY_NAME"
echo "IMAGE_NAME: $IMAGE_NAME"
echo "IMAGE_VERSION_GLOBAL: $IMAGE_VERSION_GLOBAL"
echo "****************************************************"


cat <<EOF | kubectl apply --force -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: kaniko-$KANIKO_NAME-$PIPELINE_ID
  namespace: $KANIKO_NAMESPACE
spec:
  backoffLimit: 0
  ttlSecondsAfterFinished: 300
  template:
    spec:
      initContainers:
      - name: git-clone
        image: alpine:3.17.0
        command: ["sh", "-c"]
        args:
        - |
          apk add --no-cache git
          echo "****************************************************"
          echo "working on this branch $BRANCH_NAME"
          echo $PIPELINE_ID
          echo "****************************************************"
          git clone https://:$SYSTEM_ACCESSTOKEN@dev.azure.com/learn4ops/$PROJECT_NAME/_git/$REPO_NAME --branch=$BRANCH_NAME /workspace
          echo "***************************"
          echo "/workspace folder:"
          ls -al /workspace
          echo "******************************************************"
          echo "/workspace/$WORKING_DIRECTORY folder:"
          echo "******************************************************"
          ls -al /workspace/$WORKING_DIRECTORY
        volumeMounts:
        - name: git-volume
          mountPath: /workspace
      containers:
      - name: kaniko
        image: gcr.io/kaniko-project/executor:v1.9.1
        args:
        - "--context=dir:///workspace/$WORKING_DIRECTORY"
        - "--cache=true"
        - "--destination=$AZURE_CONTAINER_REGISTRY_NAME.azurecr.io/$IMAGE_NAME:$IMAGE_VERSION_GLOBAL"
        volumeMounts:
        - name: kaniko-secret
          mountPath: /kaniko/.docker
        - name: git-volume
          mountPath: /workspace
      restartPolicy: Never
      volumes:
      - name: kaniko-secret
        secret:
          secretName: registrysecret
          items:
          - key: dockerconfigjson
            path: config.json
      - name: git-volume
        emptyDir: {}
EOF


JOBUUID=$(kubectl get job kaniko-$KANIKO_NAME-$PIPELINE_ID -n $KANIKO_NAMESPACE -o "jsonpath={.metadata.labels.controller-uid}")
PODNAME=$(kubectl get po -n $KANIKO_NAMESPACE -l controller-uid=$JOBUUID -o name)

echo "****************************"
echo "working on this pod $PODNAME"
echo "****************************"

while [[ $(kubectl get $PODNAME -n $KANIKO_NAMESPACE -o jsonpath='{..initContainerStatuses..state..reason}') = "PodInitializing" ]]; do echo "Cloning repository to init container (PodInitializing)" && sleep 2 ; done
while [[ $(kubectl get $PODNAME -n $KANIKO_NAMESPACE -o jsonpath='{..initContainerStatuses..state..reason}') != "Completed" ]]; do kubectl logs $PODNAME -c git-clone -n $KANIKO_NAMESPACE -f && sleep 2 ; done
sleep 4


if [ $(kubectl get $PODNAME -n $KANIKO_NAMESPACE -o jsonpath='{..status.phase}') == "Failed" ]; then
exit 1;
fi
while [[ $(kubectl get $PODNAME -n $KANIKO_NAMESPACE -o jsonpath='{..status.phase}') != "Succeeded" && $(kubectl get $PODNAME -n $KANIKO_NAMESPACE -o jsonpath='{..status.phase}') != "Failed" ]]; do kubectl logs $PODNAME -n $KANIKO_NAMESPACE -f && sleep 3 ; done


if [ $(kubectl get $PODNAME -n $KANIKO_NAMESPACE -o jsonpath='{..status.phase}') == "Failed" ]; then
exit 1;
fi

For the pipeline, the most important piece here is the System Access Token. It’s very helpful, but there are certainly other ways to meet this requirement.

trigger:
- dev

#Global variables
variables:
- name: BRANCH_NAME
  value: $[replace(variables['Build.SourceBranch'], 'refs/heads/', '')]
- name: PIPELINE_ID
  value: $[replace(variables['Build.BuildNumber'], '.', '-')]
- name: IMAGE_VERSION_GLOBAL
  value: $(Build.BuildNumber)
- name: PROJECT_NAME
  value: aks-agent
- name: REPO_NAME
  value: aks-agent
- name: AZURE_CONTAINER_REGISTRY_NAME
  value: 'aksagentacr'
- name: KANIKO_NAMESPACE
  value: azure-devops
- name: APP_IMAGE_NAME
  value: 'simple-project'

stages:
- stage: FrontendBuild
  displayName: FrontendBuild
  pool: testpool
  dependsOn: []
  variables:
    IMAGE_NAME: "$(APP_IMAGE_NAME)-test"
    WORKING_DIRECTORY: "$(APP_IMAGE_NAME)/application"
    KANIKO_NAME: "$(APP_IMAGE_NAME)"
  jobs:
  - job:
    steps:
    - task: AzureCLI@1
      displayName: '${{ variables.WORKING_DIRECTORY }}-building'
      env:
        SYSTEM_ACCESSTOKEN: $(System.AccessToken)
      inputs:
        azureSubscription: 'kaniko-test'
        scriptPath: '$(Build.SourcesDirectory)/simple-project/pipeline/kaniko.sh'
    - task: AzureCLI@1
      displayName: '${{ variables.WORKING_DIRECTORY }}-scanning'
      inputs:
        azureSubscription: 'kaniko-test'
        scriptPath: '$(Build.SourcesDirectory)/simple-project/pipeline/trivy.sh'
      continueOnError: true

You can define the test pipeline like this:

**** BONUS ****

Did you see the last task in the pipeline? With this agent architecture, we can also use other tools such as Trivy, which scans images and can run as a container.

Trivy also needs to authenticate to the private registry, so we need another secret for that.

kubectl create secret generic trivysecret \
--namespace $NS \
--from-literal=TRIVY_AUTH_URL=https://$ACR.azurecr.io \
--from-literal=TRIVY_PASSWORD=$ACR_PASSWORD \
--from-literal=TRIVY_USERNAME=$ACR_USER

Here is the Trivy script:

#!/bin/bash

set -e

env

echo "****************************************************"
echo "PROJECT_NAME: $PROJECT_NAME"
echo "REPO_NAME: $REPO_NAME"
echo "KANIKO_NAME: $KANIKO_NAME"
echo "PIPELINE_ID: $PIPELINE_ID"
echo "BRANCH_NAME: $BRANCH_NAME"
echo "SYSTEM_ACCESSTOKEN: $SYSTEM_ACCESSTOKEN"
echo "REPO_NAME: $REPO_NAME"
echo "WORKING_DIRECTORY: $WORKING_DIRECTORY"
echo "AZURE_CONTAINER_REGISTRY_NAME: $AZURE_CONTAINER_REGISTRY_NAME"
echo "IMAGE_NAME: $IMAGE_NAME"
echo "IMAGE_VERSION_GLOBAL: $IMAGE_VERSION_GLOBAL"
echo "****************************************************"


sleep 10

cat <<EOF | kubectl apply --force -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: trivy-$KANIKO_NAME-$PIPELINE_ID
  namespace: $KANIKO_NAMESPACE
spec:
  backoffLimit: 0
  ttlSecondsAfterFinished: 300
  template:
    spec:
      containers:
      - name: trivy
        image: aquasec/trivy:0.35.0
        args:
        - "image"
        - "--ignore-unfixed"
        - "--severity"
        - "HIGH,CRITICAL"
        - "--vuln-type"
        - "library"
        - "$AZURE_CONTAINER_REGISTRY_NAME.azurecr.io/$IMAGE_NAME:$IMAGE_VERSION_GLOBAL"
        - "--timeout"
        - "10m"
        env:
        - name: TRIVY_USERNAME
          valueFrom:
            secretKeyRef:
              name: trivysecret
              key: TRIVY_USERNAME
        - name: TRIVY_PASSWORD
          valueFrom:
            secretKeyRef:
              name: trivysecret
              key: TRIVY_PASSWORD
        - name: TRIVY_AUTH_URL
          valueFrom:
            secretKeyRef:
              name: trivysecret
              key: TRIVY_AUTH_URL
      restartPolicy: Never
EOF


JOBUUID=$(kubectl get job trivy-$KANIKO_NAME-$PIPELINE_ID -n $KANIKO_NAMESPACE -o "jsonpath={.metadata.labels.controller-uid}")
PODNAME=$(kubectl get po -n $KANIKO_NAMESPACE -l controller-uid=$JOBUUID -o name)

echo "****************************"
echo "working on this pod $PODNAME"
echo "****************************"

sleep 4

if [ $(kubectl get $PODNAME -n $KANIKO_NAMESPACE -o jsonpath='{..status.phase}') == "Failed" ]; then
exit 1;
fi
while [[ $(kubectl get $PODNAME -n $KANIKO_NAMESPACE -o jsonpath='{..status.phase}') != "Succeeded" && $(kubectl get $PODNAME -n $KANIKO_NAMESPACE -o jsonpath='{..status.phase}') != "Failed" ]]; do kubectl logs $PODNAME -n $KANIKO_NAMESPACE -f && sleep 3 ; done


if [ $(kubectl get $PODNAME -n $KANIKO_NAMESPACE -o jsonpath='{..status.phase}') == "Failed" ]; then
exit 1;
fi

Finally, we can run the pipeline.

The agent log looks good, and the kaniko and Trivy job logs are fine as well.

(Screenshot: kaniko job log)
(Screenshot: Trivy job log)

As a result, we can now run Azure DevOps agents on AKS with kaniko, and we didn’t need Docker at all. Keep in mind that these kinds of pipelines are not as flexible as normal Linux agents; we had to put some effort into this system, and there may be other solutions or pipelines that address these problems. I tried to show a possible solution using only Azure components. I hope you liked it!

If you tried the commands, please don’t forget to delete the resources:

az ad sp delete --id $SPID

az ad app delete --id $SPID

az group delete --name $RG --yes
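If you also want to clean up the Azure DevOps side, the agent pool can be removed through the same REST API (a sketch, assuming the pool is still named testpool and your PAT is still valid):

# Look up the pool id by name, then delete the pool.
POOLID=$(curl -s -u :$AZP_TOKEN \
  "https://dev.azure.com/learn4ops/_apis/distributedtask/pools?poolName=testpool&api-version=7.0" | jq -r '.value[0].id')
curl -s -u :$AZP_TOKEN -X DELETE \
  "https://dev.azure.com/learn4ops/_apis/distributedtask/pools/$POOLID?api-version=7.0"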

Have Fun :)

References:

https://github.com/GoogleContainerTools/kaniko

https://developers.redhat.com/articles/2021/06/18/perform-kaniko-build-red-hat-openshift-cluster-and-push-image-registry#setup_and_configuration_for_kaniko_on_openshift

https://learn.microsoft.com/en-us/azure/container-registry/container-registry-repository-scoped-permissions

https://github.com/GoogleContainerTools/kaniko/issues/1180

https://learn.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest#az-aks-create

https://learn.microsoft.com/en-us/azure/aks/supported-kubernetes-versions?tabs=azure-cli

https://craftech.io/blog/centralized-implementation-of-self-host-azure-agents-with-kaniko-helm-and-keda/
