Writing Kubernetes Manifests with Kustomize Feeder Repository

Christopher Lane · Published in chick-fil-atech · Mar 28, 2023 · 8 min read


Introduction

In our last post on Chick-fil-A’s application platform, we shared an overview of our GitOps strategy, but only briefly mentioned a critical component: the kustomize feeder repositories that are the foundation of the process. This post will focus on the design and implementation of these repositories.

Today, a significant portion of our sales flows through our digital channels, including The Chick-fil-A App and chick-fil-a.com. We have seen significant growth in these channels since March 2020, when the impacts of COVID-19 began in North America.

Our digital properties are powered by our Digital Experience Engine (DXE). DXE is a cloud-based microservices architecture composed of ~150 services, running in our Amazon Elastic Kubernetes Service (EKS)-based application platform. We have hundreds of developers working on DXE and, collectively, they push thousands of commits and open hundreds of pull requests every week.

To start, let’s review our traffic pattern and GitOps-based deployments.

Traffic Pattern

We’ve shared a few examples of our daily traffic pattern previously, but let’s update those figures, as the volume of requests through DXE continues to grow:

DXE Daily Traffic

All times are US/Eastern. This figure probably isn’t terribly surprising: requests ramp up during breakfast, spike during the lunch rush, and hit another, smaller peak during dinner.

Overall, we exceed ~430K rpm at lunch, average ~190K rpm, and handle more than 150M requests on a given day. If we’re running a promotion, these numbers can easily double.

Deployment Process

The deployment process looks like:

DXE Deployment Process

The steps:

  1. The process is started by a developer pushing a commit to the mainline branch of the application repository. This triggers the build workflow in GitHub Actions.
  2. The application container is built, all tests are run, and the image is pushed to our Artifactory registry.
  3. Next up is the focus of this post: the base manifests for the given application type are pulled from our feeder repository. We’ll discuss this in detail below.
  4. The complete application manifests are committed and pushed to the atlas repository. (A simplified sketch of a workflow covering steps 1-4 follows this list.)
  5. ArgoCD watches for changes to the atlas repository, and once the new manifests are pushed, it applies the changes to the cluster.
  6. The new image is pulled from Artifactory.
  7. The new version of the application is deployed to the cluster.
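
As a rough sketch of how steps 1-4 might look in GitHub Actions: everything below (the workflow file name, registry host, atlas repository URL, and the omission of tests and credentials) is an illustrative assumption, not our actual pipeline.

# .github/workflows/build.yaml -- illustrative sketch only; names, registry
# host, and the atlas URL are placeholders, and auth/test steps are omitted.
name: build
on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      # Step 2: build the application container and push it to the registry.
      - run: |
          docker build -t artifactory.example.com/dxe/my-k8s-api:${GITHUB_SHA} .
          docker push artifactory.example.com/dxe/my-k8s-api:${GITHUB_SHA}

      # Step 3: render the complete manifests, pulling the base manifests
      # from the feeder repository referenced in k8s/kustomization.yaml.
      - run: |
          cd k8s
          kustomize edit set image application-image-placeholder=artifactory.example.com/dxe/my-k8s-api:${GITHUB_SHA}
          mkdir -p artifacts
          kustomize build . >artifacts/kube.yaml

      # Step 4: commit the rendered manifests to the atlas repository,
      # which ArgoCD watches.
      - run: |
          git clone https://github.com/example-org/atlas.git
          cp k8s/artifacts/kube.yaml atlas/my-k8s-api/kube.yaml
          cd atlas
          git add . && git commit -m "deploy my-k8s-api ${GITHUB_SHA}" && git push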

Feeder Repository

When we started migrating DXE services from Elastic Beanstalk to our new EKS-based platform in 2018, only a limited number of folks in the organization had experience developing and maintaining Kubernetes manifests. (How we addressed this knowledge gap will be yet another blog post!) It was clear from the beginning that we didn’t want each team writing the entirety of their own manifests. That would have immediately created a barrier to entry for the new platform, along with a confusing mess of requirements, tooling, and team silos.

We decided to build a set of base manifests for each of our common application types. Teams could pull in these manifests and “patch in” any changes for their particular application using kustomize. This made the learning curve we were asking teams to climb less daunting: folks didn’t need to write a (potentially gigantic) pile of YAML to deploy their applications to Kubernetes. Rather, they just needed to learn how to patch the base manifests.

So, what does this feeder repository look like? We’ve published a version of our application feeder repository on our public GitHub organization:

https://github.com/chick-fil-a/kustomize-application

Let’s explore this repository. At the root, we have the base directory:

https://github.com/chick-fil-a/kustomize-application/tree/main/base

This is the starting point: the base layer of deployment and service manifests, as well as tangential resources like the horizontal pod autoscaler (HPA), pod disruption budget (PDB), and service monitor.
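
Judging from the resources that show up in the rendered output later in this post, the base kustomization ties these manifests together and applies a few organization-wide labels. A minimal sketch, assuming file names that may not match the actual repository:

# base/kustomization.yaml -- sketch only; see the linked repository for the
# real file and resource names.
resources:
- serviceaccount.yaml
- deployment.yaml
- service.yaml
- hpa.yaml
- pdb.yaml
- servicemonitor.yaml

commonLabels:
  owner: cfa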

On top of base, we define our backend and frontend tiers:

https://github.com/chick-fil-a/kustomize-application/blob/main/backend/kustomization.yaml
https://github.com/chick-fil-a/kustomize-application/blob/main/frontend/kustomization.yaml

As you can see, these are just thin layers that add common labels and annotations for their respective tiers.
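
In other words, a tier overlay in this style amounts to a pointer at base plus the tier’s labels. A hedged sketch of the backend layer (the authoritative contents are in the kustomization.yaml linked above):

# backend/kustomization.yaml -- sketch; the real file is linked above.
resources:
- ../base

commonLabels:
  tier: backend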

Finally, we have the application type layer:

https://github.com/chick-fil-a/kustomize-application/tree/main/go-api
https://github.com/chick-fil-a/kustomize-application/tree/main/java-api
https://github.com/chick-fil-a/kustomize-application/tree/main/python-api

These are the layers that teams reference in their own application repository.
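
Based on the cfa-base-name and cfa-base-version annotations that appear in the rendered output below, an application-type layer presumably builds on a tier and stamps in its own identity plus language-specific defaults. A rough sketch, with the type-specific details left as a placeholder:

# python-api/kustomization.yaml -- sketch only; the actual layer is in the
# repository linked above.
resources:
- ../backend

commonAnnotations:
  cfa-base-name: python-api
  cfa-base-version: v0.1.0

# Language/framework-specific defaults (ports, probe paths, default
# resource requests) would be patched in here.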

Let’s write a complete set of application manifests by pointing to this layer with kustomize. We’ll assume we’re writing a Python API and we’re ready to deploy v0.0.1 to our development cluster. (We’ll use busybox:latest as a stand-in for our application in the next examples.)

First, let’s add a k8s/kustomization.yaml file to the root of our application repository:

mkdir -p k8s
cat <<eof >k8s/kustomization.yaml
resources:
- https://github.com/chick-fil-a/kustomize-application/python-api?ref=main

commonLabels:
  app: my-k8s-api

commonAnnotations:
  cm.chick-fil-a.com/system-tag: "CAP_BLOG_POST"

namePrefix: my-k8s-api-
namespace: my-k8s-api

images:
- name: application-image-placeholder
  newName: busybox
  newTag: latest

patches:
- target:
    kind: Deployment
  path: patches/deploy.yaml
eof

The key is the resources list: the first item (and only item in our example) references the base manifests for a Python API from kustomize-application.
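
One aside on that reference: kustomize’s remote resource syntax accepts any git ref, so a team could pin to a tagged release of the feeder repository rather than tracking main (the tag below is hypothetical):

resources:
- https://github.com/chick-fil-a/kustomize-application/python-api?ref=v0.1.0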

Our application isn’t doing much, and so doesn’t need the default memory or CPU resources. Let’s patch our resource requests and limits to reflect this:

mkdir -p k8s/patches
cat <<eof >k8s/patches/deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  template:
    spec:
      containers:
      - name: application-container
        resources:
          limits:
            cpu: 200m
            memory: 256Mi
          requests:
            cpu: 100m
            memory: 128Mi
eof

We can now build the complete manifest using kustomize build:

mkdir -p k8s/artifacts
cd k8s
kustomize build . >artifacts/kube.yaml

This will output the application manifests to k8s/artifacts/kube.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    cfa-base-name: python-api
    cfa-base-version: v0.1.0
    cm.chick-fil-a.com/system-tag: CAP_BLOG_POST
  labels:
    app: my-k8s-api
    owner: cfa
    tier: backend
  name: my-k8s-api-sa
  namespace: my-k8s-api
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    cfa-base-name: python-api
    cfa-base-version: v0.1.0
    cm.chick-fil-a.com/system-tag: CAP_BLOG_POST
  labels:
    app: my-k8s-api
    owner: cfa
    tier: backend
  name: my-k8s-api-service
  namespace: my-k8s-api
spec:
  ports:
  - name: app-port
    port: 80
    protocol: TCP
    targetPort: app-port
  selector:
    app: my-k8s-api
    owner: cfa
    tier: backend
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    cfa-base-name: python-api
    cfa-base-version: v0.1.0
    cm.chick-fil-a.com/system-tag: CAP_BLOG_POST
  labels:
    app: my-k8s-api
    owner: cfa
    tier: backend
  name: my-k8s-api-deployment
  namespace: my-k8s-api
spec:
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: my-k8s-api
      owner: cfa
      tier: backend
  strategy:
    rollingUpdate:
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      annotations:
        cfa-base-name: python-api
        cfa-base-version: v0.1.0
        cm.chick-fil-a.com/system-tag: CAP_BLOG_POST
        prometheus.io/scrape: "true"
      labels:
        app: my-k8s-api
        owner: cfa
        tier: backend
    spec:
      containers:
      - env:
        - name: AWS_REGION
          valueFrom:
            configMapKeyRef:
              key: AWS_REGION
              name: env-region-cm
        - name: ENV_NAME
          valueFrom:
            configMapKeyRef:
              key: ENV_NAME
              name: env-region-cm
        image: busybox:latest
        imagePullPolicy: Always
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        name: my-k8s-api-application-container
        ports:
        - containerPort: 8080
          name: app-port
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        resources:
          limits:
            cpu: 200m
            memory: 256Mi
          requests:
            cpu: 100m
            memory: 128Mi
      imagePullSecrets:
      - name: artifactory-docker
      serviceAccountName: my-k8s-api-sa
      terminationGracePeriodSeconds: 30
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  annotations:
    cfa-base-name: python-api
    cfa-base-version: v0.1.0
    cm.chick-fil-a.com/system-tag: CAP_BLOG_POST
  labels:
    app: my-k8s-api
    owner: cfa
    tier: backend
  name: my-k8s-api-pdb
  namespace: my-k8s-api
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app: my-k8s-api
      owner: cfa
      tier: backend
---
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  annotations:
    cfa-base-name: python-api
    cfa-base-version: v0.1.0
    cm.chick-fil-a.com/system-tag: CAP_BLOG_POST
  labels:
    app: my-k8s-api
    owner: cfa
    tier: backend
  name: my-k8s-api-hpa
  namespace: my-k8s-api
spec:
  maxReplicas: 10
  metrics:
  - resource:
      name: cpu
      targetAverageUtilization: 80
    type: Resource
  minReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-k8s-api-deployment
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  annotations:
    cfa-base-name: python-api
    cfa-base-version: v0.1.0
    cm.chick-fil-a.com/system-tag: CAP_BLOG_POST
  labels:
    app: my-k8s-api
    owner: cfa
    prometheus-enabled: "true"
    release: prometheus-operator
    tier: backend
  name: my-k8s-api-servicemonitor
  namespace: my-k8s-api
spec:
  endpoints:
  - interval: 30s
    path: /prometheus
    port: app-port
    scheme: http
  namespaceSelector:
    matchNames:
    - my-k8s-api
  selector:
    matchLabels:
      app: my-k8s-api
      owner: cfa
      tier: backend

And we’re done! Committing these files to our application repository will kick off the deployment process, and the my-k8s-api application will be deployed to our development cluster.
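
Before committing, it can also be worth sanity-checking the rendered manifests locally. One option is a server-side dry run against the development cluster (this assumes you have kubectl access and isn’t part of the pipeline itself, just a local convenience):

cd k8s
kustomize build . | kubectl apply --dry-run=server -f -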

To put everything together, let’s review the path we took through our kustomize overlays: base → backend → python-api → our application’s own k8s/kustomization.yaml.

You can see a few complete examples here:

https://github.com/chick-fil-a/kustomize-application/tree/main/examples

Wrap Up

We’ve used this setup to generate manifests for our applications for a few years. It’s proven flexible enough to handle requirements from many different teams running in many different environments and regions. We continue to add new application types and improve the base layers as we onboard new teams and discover new requirements (or run into issues).

One word of caution: keep the number of layers to a minimum! We don’t enforce a maximum number of overlays a team can add, but going beyond five tends to cause issues.

We are actively working on and investing in our Kubernetes and GitOps platform at Chick-fil-A. If this is an area of interest for you, we welcome feedback and community partnership on this. Feel free to leave us a comment or send me a message on LinkedIn!
