Proper EKS with AWS LB Controller

Provision EKS cluster with support for OIDC provider, ALB, NLB

Joaquín Menchaca (智裕)
13 min read · May 28, 2023


Previously, I showed how to provision an EKS cluster with support for NLB and ALB external load balancers for the service object (type: LoadBalancer) and the ingress object, respectively. In that article, I added the necessary permissions in a minimalist way, but that approach violates a security best practice, namely PoLP (the principle of least privilege).

In this article, I will show how to add permissions properly, so that only the workloads that need the permissions have that access. This is accomplished with IRSA (IAM Role for Service Account).

What is IAM Role for Service Account?

The IAM Role for Service Account feature allows you to set up a trust relationship between a single AWS identity and a single Kubernetes identity using an identity layer called an OIDC (OpenID Connect) provider.

In AWS, you will create an IAM Role with the necessary permissions (policy) to access the ELB APIs. In Kubernetes, you will create a service account with an annotation that references the IAM Role.

Related Articles

Previous Article

The previous article has more expansive details about some of the components installed.

Tested Versions

This article was tested using a POSIX shell (such as GNU Bash or Zsh) and GNU Grep for the command line environment. Kubernetes 1.26.4 was used, so tool versions installed around this time should work.

## Core Tools versions used

* aws (aws-cli) 2.11.6
* eksctl 0.141.0
* kubectl v1.26.4
* helm v3.11.2

## Helm Chart versions used

* aws-load-balancer-controller-1.5.3

Prerequisites

These are some prerequisites and initial steps needed to get started before provisioning a cluster and installing drivers.

Knowledge

This article requires some basic understanding of networking with TCP/IP and the OSI model, specifically the transport layer (Layer 4) and the application layer (Layer 7, HTTP), since this article covers load balancing and reverse proxying.

In Kubernetes, familiarity with the service types ClusterIP, NodePort, LoadBalancer, and ExternalName, as well as the ingress resource, is expected.

Tools

These are the tools used in this article.

  • AWS CLI [aws] is a tool that interacts with AWS.
  • kubectl client [kubectl] is the tool that interacts with the Kubernetes cluster. This can be installed using the asdf tool.
  • helm [helm] is a tool that can install Kubernetes applications that are packaged as helm charts.
  • eksctl [eksctl] is the tool that can provision EKS cluster as well as supporting VPC network infrastructure.
  • asdf [asdf] is a tool that installs versions of popular tools like kubectl.

Additionally, these commands were tested in a POSIX shell, such as bash or zsh. GNU Grep was also used to extract the version of Kubernetes used on the server. Linux will likely have this installed by default, while macOS users can install it with Homebrew; run brew info grep for more information.

AWS Setup

Before getting started on EKS, you will need to set up billing for an AWS account (there’s a free tier), and then configure a profile that has access to an IAM User identity. See Setting up the AWS CLI for more information on configuring a profile.

After this configuration, you can test it with the following:

export AWS_PROFILE="<your-profile-goes-here>"
aws sts get-caller-identity

This should show something like the following with values appropriate to your environment:
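The user ID, account number, and ARN below are placeholders for illustration; yours will differ:

{
    "UserId": "AIDAEXAMPLEEXAMPLE123",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/my-iam-user"
}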

Kubernetes Client Setup

If you use asdf to install kubectl, you can get the latest version with the following:

# install kubectl plugin for asdf
asdf plugin-add kubectl \
https://github.com/asdf-community/asdf-kubectl.git

# fetch latest kubectl
asdf install kubectl latest
asdf global kubectl latest

# test results of latest kubectl
kubectl version --short --client 2> /dev/null

This should show something like:

Client Version: v1.27.1
Kustomize Version: v5.0.1

Also, create a directory to store the Kubernetes configuration that will be referenced by the KUBECONFIG environment variable:

mkdir -p $HOME/.kube

Setup Env Variables

These environment variables will be used throughout this project. If you open a new terminal session or tab, make sure to set these environment variables again.

# variables used to create EKS
export AWS_PROFILE="my-aws-profile" # CHANGEME
export EKS_CLUSTER_NAME="my-lb-cluster" # CHANGEME
export EKS_REGION="us-west-2" # change as needed
export EKS_VERSION="1.26" # change as needed

# KUBECONFIG variable
export KUBECONFIG=$HOME/.kube/$EKS_REGION.$EKS_CLUSTER_NAME.yaml

# used in automation
export POLICY_NAME="${EKS_CLUSTER_NAME}_AWSLoadBalancerControllerIAMPolicy"
export ROLE_NAME="${EKS_CLUSTER_NAME}_AmazonEKSLoadBalancerControllerRole"
ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
export POLICY_ARN="arn:aws:iam::$ACCOUNT_ID:policy/$POLICY_NAME"

Setup Helm Repositories

These days helm charts come from a variety of sources. You can get the helm chart used in this guide by running the following commands.

# add AWS LB Controller (NLB/ALB) helm charts
helm repo add "eks" "https://aws.github.io/eks-charts"

# download charts
helm repo update
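
To confirm that the chart is now available locally, you can search the repository; the chart and app versions listed may be newer than the ones used in this guide:

helm search repo eks/aws-load-balancer-controller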

Provision an EKS cluster

After the prerequisite tools are installed and set up, we can start provisioning cloud resources and deploying components to Kubernetes. The cluster can be brought up with the following command:

eksctl create cluster \
--version $EKS_VERSION \
--region $EKS_REGION \
--name $EKS_CLUSTER_NAME \
--nodes 3

Once this finishes, in about 20 minutes, install a kubectl version that matches the Kubernetes server version:

# fetch exact version of Kubernetes server (Requires GNU Grep)
VER=$(kubectl version --short 2> /dev/null \
| grep Server \
| grep -oP '(\d{1,2}\.){2}\d{1,2}'
)

# setup kubectl tool
asdf install kubectl $VER
asdf global kubectl $VER

Also, check the status of the worker nodes and applications running on Kubernetes.

kubectl get nodes
kubectl get all --all-namespaces

This should show something like the following.
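
If you would rather not inspect the output manually, this optional check blocks until every worker node reports Ready (the 300s timeout is an arbitrary choice):

kubectl wait --for=condition=Ready nodes --all --timeout=300s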

Add OIDC Provider Support

The EKS cluster has an OpenID Connect (OIDC) issuer URL associated with it. To use AWS IRSA, an IAM OIDC provider must exist for the cluster’s OIDC issuer URL.

You can set this up with the following command:

eksctl utils associate-iam-oidc-provider \
--cluster $EKS_CLUSTER_NAME \
--region $EKS_REGION \
--approve

You can verify the OIDC provider is added with the following:

OIDC_ID=$(aws eks describe-cluster \
--name $EKS_CLUSTER_NAME \
--region $EKS_REGION \
--query "cluster.identity.oidc.issuer" \
--output text \
| cut -d '/' -f 5
)

aws iam list-open-id-connect-providers \
| grep $OIDC_ID \
| cut -d '"' -f4 \
| cut -d '/' -f4
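
If the last command prints the same ID that was extracted from the cluster, the provider is associated. You can print the cluster’s ID for a side-by-side comparison:

echo $OIDC_ID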

Create a Policy to access ELB APIs

In this step, you will create a policy that grants access to the elasticloadbalancing APIs. The policy document differs depending on whether you are using AWS GovCloud or regular AWS.

VER="v2.4.7" # change if version changes
PREFIX="https://raw.githubusercontent.com"
HTTP_PATH="kubernetes-sigs/aws-load-balancer-controller/$VER/docs/install"
FILE_GOV="iam_policy_us-gov"
FILE_REG="iam_policy"

# Download the appropriate link
curl --remote-name --silent --location $PREFIX/$HTTP_PATH/$FILE_REG.json
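
If you are using AWS GovCloud instead, a sketch of the equivalent download would use the $FILE_GOV variable defined above; saving it as iam_policy.json keeps the later create-policy command unchanged:

# AWS GovCloud (US) only: download the GovCloud variant of the policy
curl --silent --location --output iam_policy.json $PREFIX/$HTTP_PATH/$FILE_GOV.json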

The downloaded file for regular (non-GovCloud) AWS will look something like the following:

{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"iam:CreateServiceLinkedRole"
],
"Resource": "*",
"Condition": {
"StringEquals": {
"iam:AWSServiceName": "elasticloadbalancing.amazonaws.com"
}
}
},
{
"Effect": "Allow",
"Action": [
"ec2:DescribeAccountAttributes",
"ec2:DescribeAddresses",
"ec2:DescribeAvailabilityZones",
"ec2:DescribeInternetGateways",
"ec2:DescribeVpcs",
"ec2:DescribeVpcPeeringConnections",
"ec2:DescribeSubnets",
"ec2:DescribeSecurityGroups",
"ec2:DescribeInstances",
"ec2:DescribeNetworkInterfaces",
"ec2:DescribeTags",
"ec2:GetCoipPoolUsage",
"ec2:DescribeCoipPools",
"elasticloadbalancing:DescribeLoadBalancers",
"elasticloadbalancing:DescribeLoadBalancerAttributes",
"elasticloadbalancing:DescribeListeners",
"elasticloadbalancing:DescribeListenerCertificates",
"elasticloadbalancing:DescribeSSLPolicies",
"elasticloadbalancing:DescribeRules",
"elasticloadbalancing:DescribeTargetGroups",
"elasticloadbalancing:DescribeTargetGroupAttributes",
"elasticloadbalancing:DescribeTargetHealth",
"elasticloadbalancing:DescribeTags"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"cognito-idp:DescribeUserPoolClient",
"acm:ListCertificates",
"acm:DescribeCertificate",
"iam:ListServerCertificates",
"iam:GetServerCertificate",
"waf-regional:GetWebACL",
"waf-regional:GetWebACLForResource",
"waf-regional:AssociateWebACL",
"waf-regional:DisassociateWebACL",
"wafv2:GetWebACL",
"wafv2:GetWebACLForResource",
"wafv2:AssociateWebACL",
"wafv2:DisassociateWebACL",
"shield:GetSubscriptionState",
"shield:DescribeProtection",
"shield:CreateProtection",
"shield:DeleteProtection"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:AuthorizeSecurityGroupIngress",
"ec2:RevokeSecurityGroupIngress"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateSecurityGroup"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateTags"
],
"Resource": "arn:aws:ec2:*:*:security-group/*",
"Condition": {
"StringEquals": {
"ec2:CreateAction": "CreateSecurityGroup"
},
"Null": {
"aws:RequestTag/elbv2.k8s.aws/cluster": "false"
}
}
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateTags",
"ec2:DeleteTags"
],
"Resource": "arn:aws:ec2:*:*:security-group/*",
"Condition": {
"Null": {
"aws:RequestTag/elbv2.k8s.aws/cluster": "true",
"aws:ResourceTag/elbv2.k8s.aws/cluster": "false"
}
}
},
{
"Effect": "Allow",
"Action": [
"ec2:AuthorizeSecurityGroupIngress",
"ec2:RevokeSecurityGroupIngress",
"ec2:DeleteSecurityGroup"
],
"Resource": "*",
"Condition": {
"Null": {
"aws:ResourceTag/elbv2.k8s.aws/cluster": "false"
}
}
},
{
"Effect": "Allow",
"Action": [
"elasticloadbalancing:CreateLoadBalancer",
"elasticloadbalancing:CreateTargetGroup"
],
"Resource": "*",
"Condition": {
"Null": {
"aws:RequestTag/elbv2.k8s.aws/cluster": "false"
}
}
},
{
"Effect": "Allow",
"Action": [
"elasticloadbalancing:CreateListener",
"elasticloadbalancing:DeleteListener",
"elasticloadbalancing:CreateRule",
"elasticloadbalancing:DeleteRule"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"elasticloadbalancing:AddTags",
"elasticloadbalancing:RemoveTags"
],
"Resource": [
"arn:aws:elasticloadbalancing:*:*:targetgroup/*/*",
"arn:aws:elasticloadbalancing:*:*:loadbalancer/net/*/*",
"arn:aws:elasticloadbalancing:*:*:loadbalancer/app/*/*"
],
"Condition": {
"Null": {
"aws:RequestTag/elbv2.k8s.aws/cluster": "true",
"aws:ResourceTag/elbv2.k8s.aws/cluster": "false"
}
}
},
{
"Effect": "Allow",
"Action": [
"elasticloadbalancing:AddTags",
"elasticloadbalancing:RemoveTags"
],
"Resource": [
"arn:aws:elasticloadbalancing:*:*:listener/net/*/*/*",
"arn:aws:elasticloadbalancing:*:*:listener/app/*/*/*",
"arn:aws:elasticloadbalancing:*:*:listener-rule/net/*/*/*",
"arn:aws:elasticloadbalancing:*:*:listener-rule/app/*/*/*"
]
},
{
"Effect": "Allow",
"Action": [
"elasticloadbalancing:AddTags"
],
"Resource": [
"arn:aws:elasticloadbalancing:*:*:targetgroup/*/*",
"arn:aws:elasticloadbalancing:*:*:loadbalancer/net/*/*",
"arn:aws:elasticloadbalancing:*:*:loadbalancer/app/*/*"
],
"Condition": {
"StringEquals": {
"elasticloadbalancing:CreateAction": [
"CreateTargetGroup",
"CreateLoadBalancer"
]
},
"Null": {
"aws:RequestTag/elbv2.k8s.aws/cluster": "false"
}
}
},
{
"Effect": "Allow",
"Action": [
"elasticloadbalancing:ModifyLoadBalancerAttributes",
"elasticloadbalancing:SetIpAddressType",
"elasticloadbalancing:SetSecurityGroups",
"elasticloadbalancing:SetSubnets",
"elasticloadbalancing:DeleteLoadBalancer",
"elasticloadbalancing:ModifyTargetGroup",
"elasticloadbalancing:ModifyTargetGroupAttributes",
"elasticloadbalancing:DeleteTargetGroup"
],
"Resource": "*",
"Condition": {
"Null": {
"aws:ResourceTag/elbv2.k8s.aws/cluster": "false"
}
}
},
{
"Effect": "Allow",
"Action": [
"elasticloadbalancing:RegisterTargets",
"elasticloadbalancing:DeregisterTargets"
],
"Resource": "arn:aws:elasticloadbalancing:*:*:targetgroup/*/*"
},
{
"Effect": "Allow",
"Action": [
"elasticloadbalancing:SetWebAcl",
"elasticloadbalancing:ModifyListener",
"elasticloadbalancing:AddListenerCertificates",
"elasticloadbalancing:RemoveListenerCertificates",
"elasticloadbalancing:ModifyRule"
],
"Resource": "*"
}
]
}

Upload the policy with the following command:

aws iam create-policy \
--policy-name $POLICY_NAME \
--policy-document file://iam_policy.json
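
You can confirm the policy was created and that the $POLICY_ARN variable resolves to it with:

aws iam get-policy --policy-arn $POLICY_ARN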

Associate Service Account with Policy

This next step will set up the necessary identities and permissions that grant the aws-load-balancer-controller the privileges needed to provision external load balancers.

eksctl create iamserviceaccount \
--cluster $EKS_CLUSTER_NAME \
--region $EKS_REGION \
--namespace "kube-system" \
--name "aws-load-balancer-controller" \
--role-name $ROLE_NAME \
--attach-policy-arn $POLICY_ARN \
--approve

The eksctl create iamserviceaccount command automates the following:

  1. Creates an IAM Role with a trust policy federated by the OIDC provider associated with the EKS cluster.
  2. Attaches the policy needed to grant the required access for Elastic Load Balancing.
  3. Creates a Service Account with the appropriate metadata annotation that associates it back to the corresponding IAM Role.

To inspect the role that was created, you can run:

aws iam get-role --role-name $ROLE_NAME

This should show something like the following:
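
The important part is the trust relationship: the role’s AssumeRolePolicyDocument is federated to the cluster’s OIDC provider. A trimmed, illustrative example, where the account ID and OIDC ID are placeholders:

{
    "Role": {
        "RoleName": "my-lb-cluster_AmazonEKSLoadBalancerControllerRole",
        "Arn": "arn:aws:iam::123456789012:role/my-lb-cluster_AmazonEKSLoadBalancerControllerRole",
        "AssumeRolePolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Principal": {
                        "Federated": "arn:aws:iam::123456789012:oidc-provider/oidc.eks.us-west-2.amazonaws.com/id/EXAMPLED539D4633E53DE1B71EXAMPLE"
                    },
                    "Action": "sts:AssumeRoleWithWebIdentity"
                }
            ]
        }
    }
}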

To inspect the service account that was created, you can run:

kubectl get serviceaccount "aws-load-balancer-controller" \
--namespace "kube-system" \
--output yaml

This should show something like the following:
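
The key detail is the eks.amazonaws.com/role-arn annotation pointing back at the IAM Role. An illustrative example, where the account ID is a placeholder and labels may vary:

apiVersion: v1
kind: ServiceAccount
metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-lb-cluster_AmazonEKSLoadBalancerControllerRole
  labels:
    app.kubernetes.io/managed-by: eksctl
  name: aws-load-balancer-controller
  namespace: kube-system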

Troubleshooting: Missing IAM Role

Verify that the IAM Role is created with the appropriate trust relationship to the OIDC provider.

aws iam get-role --role-name "$ROLE_NAME"

If it does not exist, you can create the IAM Role using the following:

ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
OIDC_ID=$(aws eks describe-cluster \
--name $EKS_CLUSTER_NAME \
--region $EKS_REGION \
--query "cluster.identity.oidc.issuer" \
--output text \
| cut -d '/' -f 5
)

OIDC_PATH="oidc.eks.$EKS_REGION.amazonaws.com/id/$OIDC_ID"

cat >load-balancer-role-trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::$ACCOUNT_ID:oidc-provider/$OIDC_PATH"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "$OIDC_PATH:aud": "sts.amazonaws.com",
          "$OIDC_PATH:sub": "system:serviceaccount:kube-system:aws-load-balancer-controller"
        }
      }
    }
  ]
}
EOF

aws iam create-role \
--role-name $ROLE_NAME \
--assume-role-policy-document file://load-balancer-role-trust-policy.json

Troubleshooting: missing attached policy

Verify that the policy is attached to the role:

aws iam list-attached-role-policies --role-name $ROLE_NAME

If this returns an empty list, then you can attach the policy created earlier with the following command:

ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)

aws iam attach-role-policy \
--policy-arn $POLICY_ARN \
--role-name $ROLE_NAME

Troubleshooting: missing serviceaccount

Verify that the service account exists and that its annotation references the correct IAM Role. You can run this to verify:

kubectl get sa aws-load-balancer-controller --namespace "kube-system" \
--output jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'

If this errors because the service account does not exist, you’ll need to create it. If the role ARN in the annotation is incorrect, then you’ll need to update it.

The service account can be created with this command:

cat <<EOF | kubectl apply --namespace kube-system -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/name: aws-load-balancer-controller
  name: aws-load-balancer-controller
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::$ACCOUNT_ID:role/$ROLE_NAME
EOF

Troubleshooting: waiter state transitioned to Failure

This means that something failed within the CloudFormation stack that eksctl created. It could be because an IAM Role with the same name already exists, so the other parts of the process error out. If this happens, check whether the role exists, and either delete it or create a new role with a different name.

Install AWS load balancer controller add-on

This add-on can be installed using the Helm chart aws-load-balancer-controller.

helm upgrade --install \
aws-load-balancer-controller \
eks/aws-load-balancer-controller \
--namespace "kube-system" \
--set clusterName=$EKS_CLUSTER_NAME \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller

When completed, you can check on its status by running:

kubectl get all \
--namespace "kube-system" \
--selector "app.kubernetes.io/name=aws-load-balancer-controller"

This should show something like the following:
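
If the controller pods are still starting, you can optionally wait for the deployment to finish rolling out before moving on (the 90s timeout is an arbitrary choice):

kubectl rollout status deployment/aws-load-balancer-controller \
--namespace "kube-system" \
--timeout "90s"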

Example application with a service

For the example application, we can use the Apache web server, deployed with the following commands:

# deploy application
kubectl create namespace httpd-svc
kubectl create deployment httpd \
--image=httpd \
--replicas=3 \
--port=80 \
--namespace=httpd-svc

To create the network load balancer, you deploy a service resource of type LoadBalancer with annotations that signal an NLB should be used.

# provision network load balancer
cat <<-EOF > svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: httpd
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: LoadBalancer
  selector:
    app: httpd
EOF

kubectl create --filename=svc.yaml --namespace=httpd-svc

After this is completed, you can run the following command to see all of the components that were deployed:

kubectl get all --namespace=httpd-svc

This should show something like the following:

You can capture the public address of the load balancer with the following command:

export SVC_LB=$(kubectl get service httpd \
--namespace "httpd-svc" \
--output jsonpath='{.status.loadBalancer.ingress[0].hostname}'
)

With this environment variable, you can test traffic to the Apache web server through the NLB:

curl --silent --include $SVC_LB

This should look something like the following:
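
Since the httpd image serves Apache’s default index page, the response will have roughly this shape; the headers are trimmed and the exact versions will vary:

HTTP/1.1 200 OK
Server: Apache/2.4.x (Unix)
Content-Type: text/html

<html><body><h1>It works!</h1></body></html>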

Also, using that same environment variable, you can inspect the configuration of the NLB with the following aws CLI command:

aws elbv2 describe-load-balancers --region $EKS_REGION \
--query "LoadBalancers[?DNSName==\`$SVC_LB\`]"
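
To confirm this is a Network Load Balancer, you can narrow the same query down to the Type and Scheme fields, which should report network and internet-facing:

aws elbv2 describe-load-balancers --region $EKS_REGION \
--query "LoadBalancers[?DNSName==\`$SVC_LB\`].{Type:Type,Scheme:Scheme}"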

Example application with an ingress

This example application, like before, will use the Apache web server, and can be deployed with the following commands:

# deploy application 
kubectl create namespace "httpd-ing"
kubectl create deployment httpd \
--image "httpd" \
--replicas 3 \
--port 80 \
--namespace "httpd-ing"

A service of type ClusterIP will need to be created as well, since the ingress will route traffic to the Apache web server pods through this service.

kubectl expose deployment httpd \
--port 80 \
--target-port 80 \
--namespace "httpd-ing"

Lastly, to deploy the application load balancer, we’ll need to create an ingress resource with the class set to alb and the appropriate annotations signaling the type of ALB that will be used:

# provision application load balancer
kubectl create ingress alb-ingress \
--class "alb" \
--rule "/=httpd:80" \
--annotation "alb.ingress.kubernetes.io/scheme=internet-facing" \
--annotation "alb.ingress.kubernetes.io/target-type=ip" \
--namespace "httpd-ing"

After this is completed, you can run the following command to see all of the components that were deployed:

kubectl get all,ing --namespace "httpd-ing"

This should show something like the following:

To fetch only the public address of the load balancer, capture it with this command:

export ING_LB=$(kubectl get ing alb-ingress \
--namespace "httpd-ing" \
--output jsonpath='{.status.loadBalancer.ingress[0].hostname}'
)

You can test the connection to the Apache web server through the ALB with the following curl command:

curl --silent --include $ING_LB

This should show something like the following:

Also, you can inspect the details of the load balancer with the following aws cli command:

aws elbv2 describe-load-balancers --region $EKS_REGION \
--query "LoadBalancers[?DNSName==\`$ING_LB\`]"
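
As with the NLB example, you can verify the load balancer type; here the Type field should report application:

aws elbv2 describe-load-balancers --region $EKS_REGION \
--query "LoadBalancers[?DNSName==\`$ING_LB\`].Type" --output text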

Cleanup

The following steps will clean up the AWS cloud resources that were used in this guide.

Delete example Kubernetes applications

You can delete Kubernetes objects that were created with this guide using the following commands.

IMPORTANT: You want to delete any Kubernetes objects that have provisioned AWS cloud resources; otherwise, they will continue to incur costs.

# IMPORTANT: delete these to avoid costs 
kubectl delete "ingress/annotated" --namespace "httpd-ing"
kubectl delete "service/httpd-svc" --namespace "httpd-svc"

# deleted when cluster deleted
kubectl delete "deployment/httpd" --namespace "httpd-svc"
kubectl delete "namespace/httpd-svc"

kubectl delete "deployment/httpd" --namespace "httpd-ing"
kubectl delete "svc/httpd" --namespace "httpd-ing"
kubectl delete "namespace/httpd-ing"

Delete AWS Load Balancer Controller

# delete aws-load-balancer-controller
helm delete \
aws-load-balancer-controller \
--namespace "kube-system"

# detach policy, delete IAM Role, delete service account
eksctl delete iamserviceaccount \
--name "aws-load-balancer-controller" \
--namespace "kube-system" \
--cluster $EKS_CLUSTER_NAME \
--region $EKS_REGION

# delete policy
aws iam delete-policy --policy-arn $POLICY_ARN

Delete AWS cloud resources

You can delete the EKS cluster with the following command.

# delete EKS cluster
eksctl delete cluster --name $EKS_CLUSTER_NAME --region $EKS_REGION
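
After the command finishes, you can double-check that no clusters remain in the region before closing out:

eksctl get cluster --region $EKS_REGION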

Conclusion

In the previous article, I introduced how to install the AWS Load Balancer Controller in a minimalist approach, and in this article I showed how to do it properly with regard to the security best practice of PoLP (principle of least privilege). In this process, only the pods that need access have the required privileges, using the IRSA (IAM Role for Service Account) facility through an OIDC provider.
