Consistency was achieved, now remove gaps: Verification of Centralized Helm Charts.

Łukasz Łopusiński
Published in project44 TechBlog
4 min read · Aug 1, 2023


In one of our previous blog posts we showcased our use of Centralized Helm Charts and how we implemented it. While it is a really handy solution that helps our engineers save time, one downside is that as we add complexity to the base central templates/charts, we open ourselves up to bugs and logical regressions. With this in mind, we wanted to come up with a way to test our charts and validate them.

When doing our research on testing solutions, we identified multiple options:

  • Use of helm test, a built-in command that executes test hooks on a running instance of a chart (a minimal hook sketch follows this list).
  • Writing our own unit/integration tests with Go and terratest.
  • helm lint, which examines a chart for possible issues by running a series of tests to verify whether the chart is well-formed.
  • helm-unittest (https://github.com/helm-unittest/helm-unittest), a BDD-styled unit test framework for Helm charts.
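
For context, a helm test hook (the first option above) is just a regular chart template annotated with "helm.sh/hook": test; helm test <release> runs it against a live release and the test passes if the pod exits successfully. A minimal sketch, assuming a hypothetical service name and port value that are not from our charts:

# Illustrative helm test hook only; the service name and port below are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-connection-test"
  annotations:
    # marks this template as a test hook executed by `helm test`
    "helm.sh/hook": test
spec:
  restartPolicy: Never
  containers:
    - name: connection-test
      image: busybox
      # probes an assumed service exposed by the release; exit code 0 means the test passed
      command: ["wget"]
      args: ["{{ .Release.Name }}-svc:{{ .Values.service.port }}"]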

After investigating these options, we decided not to go with the “Go” approach, since we want any of our developers to be able to contribute to Centralized Helm Charts, and we don’t want to make folks learn another language. This left us with the built-in solution and the 3rd party plugin for helm.

The following diagram shows where each solution fits into the pyramid of testing, in our opinion.

When planning, we chose to map each test solution to a level of the testing pyramid. The test pyramid is not a perfect answer to which tests to write and where to implement them, but it gave us a clear picture of effort, cost and speed. In the test pyramid, the bottom tends to be the easiest to implement, the cheapest to maintain and the fastest to give feedback. This led us to selecting helm-unittest as our framework for testing our charts.

Helm unittest

helm-unittest is a unit test tool for Helm charts written in YAML. It has rich documentation and great support for Helm chart features; more can be read about it directly on its GitHub page.

How to write/run tests with helm unittest

We store helm unittests in a tests directory under the root archetype that contains the templates, charts and values files.

charts/
  foo-archetype/
    templates/
      deployment.yaml
    tests/
      deployment_test.yaml
    Chart.yaml
    ...
  ...
...

A basic test file runs against one or more templates with our predefined and overridden values.

ex:

suite: test default deployment templates
templates:
  - foo.yaml
  - bar.yaml
  ...
tests:
  - it: should build default template
    values:
      - ../values_file.yaml
    set:
      someVariable: extraSetting
    asserts:
      - matchRegex:
          path: metadata.namespace
          pattern: extraSetting
        template: foo.yaml
      # ... other assertions

Tests can be run with the following command, which allows rapid feedback locally when adding new defaults to our base charts:

helm unittest charts/foo-archetype charts/bar-archetype ...

With this tool we can produce test reports in JUnit file format, which is later utilized by our CI pipeline to present the results in a developer-readable form.

Another important point is that helm-unittest gives us the ability to test exceptions and failures.

ex:

suite: test foo values injection
templates:
  - foo.yaml
tests:
  - it: should fail on template build if no value is present for env vars
    values:
      - ../values.yaml
    set:
      projectSpecificDeploymentEnvVars: |
        - name: TEST_SIMPLE_ENV
          value: {{ .Values.simple.value }}
    asserts:
      - failedTemplate:
          errorMessage: nil pointer evaluating interface {}.value

Writing tests

Ideally we would follow TDD techniques to ensure that charts are created and updated based on our expectations, but as we already had these charts rolled out and working, we started one step ahead. This was done by utilizing the technique of Approval Testing.

With approval testing we write assertions with deliberately incorrect expected data, run the tests, which render the charts and evaluate the results, and then use those results to update our assertions. It may seem like bad practice, but our charts already work, so the current result is always the desired result.
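
As a rough illustration of that loop (the template name, path and value below are hypothetical, not taken from our charts):

suite: approve current deployment output
templates:
  - deployment.yaml
tests:
  - it: should keep the currently rendered memory limit
    asserts:
      - equal:
          path: spec.template.spec.containers[0].resources.limits.memory
          # 1. start with a deliberately wrong value such as "TODO"
          # 2. run helm unittest and copy the actual rendered value from the failure output
          # 3. paste that value here as the approved expectation and re-run until green
          value: 512Mi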

With that we were able to quickly add what was missing and cover the biggest holes in our solution. Now we have a way of evaluating future changes and a better playground for introducing new features.

Fitting tests into the flow

With the use of GitHub Actions it was straightforward for us to add a custom workflow which handles helm unit testing on each branch. With helm-unittest our test suites execute in up to 1s.

name: Feature Branch Build
...
jobs:
  feature-branch-build:
    ...
    steps:
      - name: Install helm-unittest
        run: helm plugin install https://github.com/helm-unittest/helm-unittest.git

      - name: Run charts unittests
        run: helm unittest charts/foo charts/bar -o reports/test-report.xml -t JUnit

      - name: Charts test Report
        uses: dorny/test-reporter@v1
        if: success() || failure()
        with:
          name: Helm charts unittests
          path: reports/test-report.xml
          reporter: java-junit

Running tests which pass or fail is a great quality gate, as we’re forced to make sure we cover any new additions or features with tests. At this time, coverage analysis is not yet supported by helm-unittest, but it is something we would love to implement.

The following diagram shows, in red, where we implemented the above workflow and, in grey, where we plan to add more test phases after exploring helm lint and helm test.

Conclusion

Helm gives a lot when it comes to testing, but if you are not a Go developer then you are left with mostly YAML as your “language”. Thanks to the helm-unittest plugin, there is an accessible way of testing Helm charts with any kind of chart or data configuration.

On our end, the addition of tests to our Centralized Helm Charts solution gave us a confidence boost that any change we make won’t break the actual generation of manifests in our services’ CI pipelines. We now have an obvious way of catching unwanted changes and any potential regression introduced to our CHC solution, without having to analyze and validate changes manually, saving us a lot of headaches and time.
