ArgoCD - Get Helm charts from private S3

Nov 9, 2023


A lot of people struggle to obtain Helm charts from a private S3 bucket with ArgoCD, and despite it being doable, I didn't stumble upon any article that would guide me through it straightforwardly. So here we are.

Issue

First of all, there is an open issue on Argo CD to provide Helm S3 support natively, but for many people (me included) waiting for it is not a good way to go.

Helm S3 Support · Issue #2558 · argoproj/argo-cd

ArgoCD uses Helm underneath to template the manifests and apply them to the cluster, and the Helm binary is responsible for obtaining the charts. So IMO, it is Helm that should support S3 natively.

Although Helm doesn't support it out of the box, there is a plugin for that.

Plugin

Helm can be extended with plugins very easily - a plugin can be any piece of code, script, or binary that can be executed on the system.
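
For instance, installing a plugin is a single command. A quick local sketch, assuming Helm is installed and the machine has internet access:

```shell
# Install the S3 plugin straight from its Git repository
helm plugin install https://github.com/hypnoglow/helm-s3.git

# Confirm it is registered
helm plugin list
```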

The Helm Plugins Guide
Introduces how to use and create plugins to extend Helm’s functionality.

In terms of S3 support, big kudos to Igor Zibarev for creating and open-sourcing the S3 plugin:


Igor Zibarev Github profile

Helm S3 Plugin

https://github.com/hypnoglow/helm-s3

Repository
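
Before wiring it into ArgoCD, it's worth checking the plugin locally. A sketch of a typical flow; the bucket name matches the examples later in this post, and the packaged chart filename is just an illustration - adjust both to your setup:

```shell
# Initialize the bucket path as a Helm repository (writes an index.yaml)
helm s3 init s3://windkube-charts/charts

# Register it as a repo and push a packaged chart into it
helm repo add windkube-charts s3://windkube-charts/charts
helm s3 push ./generic-1.0.2.tgz windkube-charts
```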

ArgoCD

Now, to the ArgoCD part. To use the plugin we must:

  • download the S3 plugin and put it into the ArgoCD repo-server container (this container is responsible for downloading Helm charts)
  • configure an environment variable to point to the Helm plugins directory

(the other way here is to build a custom image with the plugins included; this speeds up repo-server startup but requires more work up front)
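
If you prefer the custom-image route, a minimal sketch could look like the following; the base image tag and target paths are assumptions, so adjust them to your versions:

```dockerfile
# Stage 1: download and unpack the plugin
FROM alpine:latest AS downloader
ARG HELM_S3_VERSION=0.15.1
RUN mkdir -p /helm-plugins/helm-s3 \
 && wget https://github.com/hypnoglow/helm-s3/releases/download/v${HELM_S3_VERSION}/helm-s3_${HELM_S3_VERSION}_linux_amd64.tar.gz -qO- \
  | tar -C /helm-plugins/helm-s3 -xz

# Stage 2: bake it into the repo-server image (tag is an assumption)
FROM quay.io/argoproj/argocd:v2.9.0
COPY --from=downloader /helm-plugins /home/argocd/helm-plugins
ENV HELM_PLUGINS=/home/argocd/helm-plugins
```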

Downloading plugins

The values below for the ArgoCD Helm chart will spin up an init container next to the repo-server to download the plugin and pass it to the repo-server container via a shared volume.

repoServer:
  initContainers:
    - name: download-tools
      image: alpine:latest
      command: [sh, -ec]
      env:
        - name: HELM_S3_VERSION
          value: "0.15.1"
      args:
        - |
          rm -rf /custom-tools/*
          mkdir -p /custom-tools/helm-plugins

          mkdir -p /custom-tools/helm-plugins/helm-s3
          wget https://github.com/hypnoglow/helm-s3/releases/download/v${HELM_S3_VERSION}/helm-s3_${HELM_S3_VERSION}_linux_amd64.tar.gz -qO- | tar -C /custom-tools/helm-plugins/helm-s3 -xz

      volumeMounts:
        - mountPath: /custom-tools
          name: custom-tools

  env:
    - name: HELM_PLUGINS
      value: /custom-tools/helm-plugins/
      
  volumes:
    - name: custom-tools
      emptyDir: {}
  volumeMounts:
    - mountPath: /custom-tools
      name: custom-tools
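
With these values saved to a file (here assumed to be values.yaml), the chart can be deployed the usual way; the release and namespace names are assumptions consistent with the rest of this post:

```shell
helm repo add argo https://argoproj.github.io/argo-helm
helm upgrade --install argo-cd argo/argo-cd \
  --namespace argo-cd --create-namespace \
  -f values.yaml
```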

S3 Application manifest

With the Helm S3 plugin in place, here is a clear showcase of how to reference a chart from S3:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: shell-operator
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  sources:
  - repoURL: s3://windkube-charts/charts
    targetRevision: '1.0.2'
    chart: generic
  destination:
    name: in-cluster
    namespace: shell-operator

The key part: repoURL: s3://windkube-charts/charts

And this should already work if the S3 bucket is public. For a private bucket, we must grant the repo-server AWS credentials to access it.
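
Whichever method you pick below, the identity needs read access to the bucket. A minimal policy might look like this (the bucket name matches the example above; the exact set of actions is an assumption about what read-only chart fetching needs):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::windkube-charts"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::windkube-charts/*"
    }
  ]
}
```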

Using IAM Role

To use the Helm S3 plugin in ArgoCD with an IAM Role, you need to provide IRSA (IAM Roles for Service Accounts) for the repo-server service account.

Based on the example with eksctl, first create the IRSA and attach the proper AWS policy:

# https://eksctl.io/usage/iam-policies/
iam:
  withOIDC: true
  serviceAccounts:
  - metadata:
      name: argocd-repo-server
      namespace: argo-cd
    attachPolicyARNs:
    - arn:aws:iam::<account-id>:policy/s3-charts-access
    roleName: argocd-repo-server

eksctl-cluster-config.yaml
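
The service account can then be created from that config file (the --approve flag actually applies the changes instead of just planning them):

```shell
eksctl create iamserviceaccount \
  --config-file=eksctl-cluster-config.yaml \
  --approve
```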

Verify that the ServiceAccount exists and has the role annotation:

❯ kubectl describe sa argocd-repo-server -n argo-cd
Name:                argocd-repo-server
Namespace:           argo-cd
Labels:              app.kubernetes.io/managed-by=eksctl
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/argocd-repo-server
Image pull secrets:  <none>
Mountable secrets:   <none>
Tokens:              <none>
Events:              <none>

Now, in the ArgoCD Helm values, disable the creation of a service account and point to the one created via eksctl:

repoServer:
  serviceAccount:
    create: false
    name: argocd-repo-server

And redeploy (helm upgrade --install...).

Using IAM Credentials

When using AWS IAM credentials, you need to set the proper AWS credentials environment variables in the repo-server container. A great (and secure) way to do that is with the External Secrets Operator.

Store the IAM credentials in AWS SSM Parameter Store and pull them into a Kubernetes Secret via:

apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: argocd-aws-credentials
spec:
  secretStoreRef:
    name: parameter-store
    kind: ClusterSecretStore
  refreshInterval: 1h
  data:
  - secretKey: AWS_ACCESS_KEY_ID
    remoteRef:
      key: /app/argo-cd/AWS_ACCESS_KEY_ID
  - secretKey: AWS_SECRET_ACCESS_KEY
    remoteRef:
      key: /app/argo-cd/AWS_SECRET_ACCESS_KEY
  target:
    name: argocd-aws-credentials
  

External Secrets Operator manifest
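
To verify the Secret was materialized without printing the actual values (the namespace is an assumption based on the rest of this post):

```shell
# Shows the keys and byte sizes, but not the decoded values
kubectl describe secret argocd-aws-credentials -n argo-cd
```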

And in the ArgoCD Helm values, add the following section to load the environment variables from the Secret:

repoServer:
  envFrom:
  - secretRef:
      name: argocd-aws-credentials
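
After redeploying, the repo-server pods need to pick up the new environment variables; if they don't restart on their own, a manual rollout restart does the trick (deployment and namespace names are assumptions matching the defaults used in this post):

```shell
kubectl rollout restart deployment argocd-repo-server -n argo-cd
```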

Krzysztof Wiatrzyk

Big love for Kubernetes and the entire Cloud Native Computing Foundation. DevOps, biker, hiker, dog lover, guitar player, and lazy gamer.