Is your Kubernetes API Server exposed? Learn how to check and fix!

David Calvert
4 min read · Jul 6, 2022

After reading “Over 380 000 open Kubernetes API servers”, I asked myself if some of the clusters I manage could be exposed. I then decided to check and I think you should too!

Photo by Tumisu on pixabay + Kubernetes logo

What are we talking about?

If you work with Kubernetes, there is a good chance that most of your interaction with it goes through the Kubernetes API Server. Almost every time you use kubectl, k9s, Lens or Octant, you are in fact sending HTTP requests to the API Server.
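You can see this for yourself by raising kubectl's verbosity, which logs the HTTP requests it sends (the hostname below is a placeholder and the output is trimmed):

$ kubectl get pods -v=6
...
GET https://XXXXXXXXX.abc.us-west-1.eks.amazonaws.com/api/v1/namespaces/default/pods?limit=500 200 OK in 87 milliseconds
...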

What’s interesting to know here is:

  • where are you connecting to the API Server from?
  • what sits between you and the API Server?

If you are using your Kubernetes client through a bastion host and/or a virtual private network (VPN), you are probably in a good spot. If not, or if in doubt, you should definitely check!

Why would it be exposed?

There could be many different answers to this question, but the most common one is probably going to be: because it’s the default!

I’m going to use Amazon Elastic Kubernetes Service (EKS) as an example here. When you create an EKS cluster on AWS, here’s what you will find by default on Step 2, “Specify networking”:

AWS Console - EKS Cluster endpoint screenshot

This means that all clusters created with the default configuration expose the API Server to the entire internet (0.0.0.0/0)! In other words, attackers have direct network access to these clusters and can easily gather useful information to target them.
This is a bit scary… but let’s not panic!

How to test ?

In order to test your cluster configuration, you will need your Kubernetes API Server's IP address or FQDN. In most cases, you will find this information in your Kubernetes configuration file:

$ grep server ~/.kube/config
server: https://XXXXXXXXX.abc.us-west-1.eks.amazonaws.com
server: https://XXXXXXXXX.abc.us-west-1.eks.amazonaws.com
server: https://XXXXXXXXX.abc.us-west-1.eks.amazonaws.com
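If you prefer kubectl, the same information can be printed from your current context (a minimal sketch, assuming the context points at the cluster you want to check):

$ kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
https://XXXXXXXXX.abc.us-west-1.eks.amazonaws.com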

Once you have this, you can simply try to query the /version endpoint from your terminal or any web browser:

$ curl -k https://XXXXXXXXX.abc.us-west-1.eks.amazonaws.com/version
{
  "major": "1",
  "minor": "20+",
  "gitVersion": "v1.20.15-eks-a64ea69",
  "gitCommit": "03450cdabfc4162d4e447e6d8c5037efe6d29742",
  "gitTreeState": "clean",
  "buildDate": "2022-05-12T18:44:04Z",
  "goVersion": "go1.15.15",
  "compiler": "gc",
  "platform": "linux/amd64"
}

If you get a similar answer, it means that your Kubernetes API Server is exposed on the internet.
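Keep in mind that reading /version only proves the endpoint is reachable; on many managed clusters it is served to anonymous callers. To get a feel for what an unauthenticated caller can do beyond that, you can probe an endpoint that requires authorization; a properly configured cluster should answer 401 or 403 instead of returning data (the hostname is a placeholder):

$ curl -sk -o /dev/null -w '%{http_code}\n' https://XXXXXXXXX.abc.us-west-1.eks.amazonaws.com/api/v1/namespaces
403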

How to fix it?

There are many ways to address this. I will share a quick way to easily reduce the attack surface, and also recommend a better architecture.

The quick fix

This one is straightforward, and you should be able to do it immediately in your AWS console.

  • For a new cluster:
    EKS > Clusters > Create EKS cluster > Specify networking
  • For an existing EKS cluster:
    EKS > Clusters > ${Your-Cluster} > Networking Tab > Manage networking

Once there, unfold “Advanced settings” and you will be able to add up to 40 CIDR block sources. If you want to allow a single IP address, the CIDR format is /32 like in the example below:

AWS Console - EKS Cluster endpoint with CIDR screenshot

With this solution, your Kubernetes API server will still be exposed publicly on the internet, but only the configured CIDR block will be allowed to access it.
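If you prefer the command line, the same restriction can be applied with the AWS CLI. Here's a sketch; the region, cluster name and CIDR block are placeholders to adapt to your setup:

$ aws eks update-cluster-config \
    --region us-west-1 \
    --name my-cluster \
    --resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="203.0.113.10/32"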

To get your public IP, you can use a service like whatismyip.com or use dig:

$ dig +short myip.opendns.com @resolver1.opendns.com
x.x.x.x
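If your own workstation is the only source you need to allow, the two steps can even be combined (same placeholders as above):

$ aws eks update-cluster-config \
    --region us-west-1 \
    --name my-cluster \
    --resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="$(dig +short myip.opendns.com @resolver1.opendns.com)/32"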

A better approach

Once you have limited your public exposure, you have time to think about and work on a more secure and sophisticated architecture to manage your Kubernetes cluster(s).

I personally recommend to:

  • Use a VPN to access your VPC
  • Configure your EKS cluster endpoint access to Private mode (see the CLI sketch after this list)
  • Create a private SSH bastion host in your VPC
  • Update your EKS Security Group to only allow inbound traffic from the bastion host
  • Enable access & audit logs on your bastion
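As a sketch of the two EKS-specific steps above, assuming placeholder cluster name, region and security group IDs, the endpoint switch and the security group rule could look like this with the AWS CLI:

# Switch the cluster endpoint to private-only access
$ aws eks update-cluster-config \
    --region us-west-1 \
    --name my-cluster \
    --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true

# Allow HTTPS to the cluster security group only from the bastion's security group
$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 443 \
    --source-group sg-0fedcba9876543210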

Here’s what the architecture looks like:

AWS architecture schema made with cloudcraft.co

While this article used the AWS cloud provider, you should be able to reproduce something similar on most cloud providers.

Final words

I hope this article has been useful to you! If it’s the case, feel free to give a few claps and to share it with your network. You can also send me some feedback, especially if you find any errors or think of any enhancements.

You can follow me here on Medium.


David Calvert

Site Reliability Engineer. Currently focused on Observability, Reliability and Security aspects of Kubernetes clusters.