Accessing a Google Kubernetes Engine Private Cluster by Tunneling through a Private VM Instance as a Jumphost

Kaslie
Jul 8, 2022

This might seem like a simple topic, but it can be quite tough to tackle for newcomers who know only a little networking. I googled around and found it surprisingly hard to find a thorough guide on this: some guides require additional tools, some involve a lot of configuration, and some simply did not work as I expected. With this article, I hope you won't need to spend hours googling like I did.

Requirements:

  • A private Google Kubernetes Engine (GKE) cluster
  • A private VM instance to use as a jumphost
  • gcloud CLI with kubectl installed
  • vim or nano
  • Linux or macOS. I am using Monterey. The commands in this article are for macOS; you can find the equivalent commands for your OS with a quick search.

Let’s Get Started!!!

We should understand that before kubectl sends its requests to the cluster's public endpoint (HTTPS, port 443 by default), it reads the Kubernetes configuration file at ~/.kube/config . The general flow looks like this:

1. Kubectl General Flow
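For reference, a trimmed-down ~/.kube/config for a GKE cluster looks roughly like the sketch below (the cluster name and endpoint are placeholders; the real file also carries certificate data and the auth settings that gcloud generates):

apiVersion: v1
kind: Config
clusters:
- name: gke_my-project_asia-southeast2-a_my-cluster
  cluster:
    server: https://34.xx.xx.xx
    certificate-authority-data: <base64 CA certificate>
contexts:
- name: gke_my-project_asia-southeast2-a_my-cluster
  context:
    cluster: gke_my-project_asia-southeast2-a_my-cluster
    user: gke_my-project_asia-southeast2-a_my-cluster
current-context: gke_my-project_asia-southeast2-a_my-cluster
users:
- name: gke_my-project_asia-southeast2-a_my-cluster
  user:
    # auth section generated by gcloud (exec plugin or access token)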

However, things are different when the cluster is a private cluster: the server entry in .kube/config points to 172.xx.xx.xx or some other private IP, so your kubectl cannot reach the Kubernetes API from your machine.

2. Kubernetes Private Clusters

What we need here is a VM instance in Google Compute Engine (GCE). So create a single VM instance in GCE with its public IP disabled; you can do this from the console, or from the CLI as sketched after the image below. Let's say the VM instance has IP 172.18.0.20, and the GKE control plane has the private IP 172.20.2.51.

3. Add a new VM Instances
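If you prefer the CLI over the console, the jumphost can be created with something along these lines (the instance name, machine type, network, and subnet below are assumptions for illustration; --no-address is what keeps the VM without a public IP):

gcloud compute instances create my-jumphost --project=GCP_PROJECT_NAME --zone=VM_ZONE --machine-type=e2-small --network=my-vpc --subnet=my-subnet --no-address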

We will use this VM instance as a jumphost: it acts as a proxy between you and Kubernetes and forwards all requests to the private cluster. Remember to set up your firewall so the tunnel can reach the VM over SSH; for an IAP tunnel this means allowing TCP port 22 from the IAP range 35.235.240.0/20 (a firewall sketch follows the SSH command below). The jumphost will then port-forward your requests to the control-plane IP defined in ~/.kube/config . To make sure you can SSH into the VM, try this command:

gcloud compute ssh [VM_INSTANCE_NAME] --project=GCP_PROJECT_NAME --zone=VM_ZONE
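If the SSH attempt is blocked, the firewall rule you need for the IAP tunnel could look roughly like this (the rule and network names here are made up; adjust them to your own VPC and instance tags):

gcloud compute firewall-rules create allow-iap-ssh --project=GCP_PROJECT_NAME --network=my-vpc --direction=INGRESS --allow=tcp:22 --source-ranges=35.235.240.0/20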

If you can SSH into the instance successfully, exit, and then try tunneling with this command:

gcloud compute ssh [VM_INSTANCE_NAME] --project=GCP_PROJECT_NAME --zone=VM_ZONE --tunnel-through-iap --ssh-flag='-N -L LOCAL_PORT:KUBERNETES_PRIVATE_IP:443' -q -f

You can add another --ssh-flag, such as -v for verbose SSH output, at the end of the command line if you need to debug the connection.

Replace LOCAL_PORT with the local port you want to use on your machine. If you pick 8343, the requests will go through localhost:8343 to Kubernetes.

Replace KUBERNETES_PRIVATE_IP with your cluster's private control-plane IP, which you can find in the GKE dashboard. In this case it's 172.20.2.51. At this point, though, the flow has not changed at all; it still looks like the previous image (2. Kubernetes Private Clusters).

Why does it still fail? Because kubectl still sends its requests to 172.20.2.51 instead of localhost:8343. Why 8343? Because that is the local port we tunnel through in the gcloud command above. To reach the private cluster, the traffic must go through localhost:8343.
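Filled in with the values used in this article, the tunnel command becomes:

gcloud compute ssh [VM_INSTANCE_NAME] --project=GCP_PROJECT_NAME --zone=VM_ZONE --tunnel-through-iap --ssh-flag='-N -L 8343:172.20.2.51:443' -q -f

Before touching kubectl, you can sanity-check that the tunnel itself works with a plain HTTPS request; any JSON answer from the API server, even a 403 for anonymous access, means the tunnel is up (-k skips certificate verification, since the certificate is not issued for localhost):

curl -k https://localhost:8343/version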

So here we have 2 ways to get the job done:

First

  1. Edit your /etc/hosts file. Change the line 127.0.0.1 localhost
    to 127.0.0.1 localhost kubernetes.default .
  2. Edit the .kube/config entry for your cluster: find the server key with the IP https://172.20.2.51 and change it to https://kubernetes.default:8343 . We use kubernetes.default rather than localhost because it is one of the hostnames included in the API server's TLS certificate, so kubectl won't reject the connection for a certificate mismatch. This way you access Kubernetes through localhost:8343, as in the image below and the sketch after it. The disadvantage is that whenever you re-initialize the Kubernetes authentication with gcloud container clusters get-credentials, it rewrites all your changes, and you may be left wondering what went wrong because of an edit you made a long time ago.
4. Accessing Kubernetes Cluster
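Put concretely, the two edits in this first approach look roughly like this (the exact cluster entry name inside .kube/config will differ in your setup):

# /etc/hosts
127.0.0.1 localhost kubernetes.default

# ~/.kube/config, under your cluster entry
# before:  server: https://172.20.2.51
# after:   server: https://kubernetes.default:8343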

Second

  1. In this approach we won't edit the ~/.kube/config and /etc/hosts files at all. Instead, we redirect every request that is supposed to go to https://172.20.2.51 to localhost by running the command sudo ifconfig lo0 alias 172.20.2.51 .
    lo0 is the loopback interface,
    so with this alias any request to 172.20.2.51 lands on the loopback IP.
  2. Now requests to 172.20.2.51 are redirected to localhost; however, kubectl still can't connect because it targets the wrong port: 443 instead of 8343. So we need to port-forward 172.20.2.51:443 to localhost:8343 (a short cleanup sketch follows this list). The command is
    echo "rdr pass on lo0 inet proto tcp from any to 172.20.2.51 port 443 -> 127.0.0.1 port 8343" | sudo pfctl -ef -

Additional Tips:

  • Make sure you put the gcloud compute ssh, ifconfig, and pfctl commands into your ~/.zshrc, so they are executed each time you open a new terminal instead of you having to run them manually.
  • If you are a Mac user like me, you can instead create a plist file that contains those commands, then load it with launchctl load -w [PLIST_FILE_NAME] . If you get the error Load failed: 5: Input/Output error, just unload the file first. When you are finished, your config will load each time you log in rather than each time you start a terminal. Here is my config:

Jumphost.plist

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>jumphost</string>
<key>KeepAlive</key>
<dict>
<key>SuccessfulExit</key>
<false/>
</dict>
<key>UserName</key>
<string>YOUR_USERNAME</string>
<key>ProgramArguments</key>
<array>
<!-- Absolute path to gcloud. Don't place gcloud in the Documents or Downloads folders, as launchd can't access those directories -->
<string>/Users/helloworld/bin/google-cloud-sdk/bin/gcloud</string>
<string>compute</string>
<string>ssh</string>
<string>stockbit-jumphost</string>
<string>--project</string>
<string>stockbit-api-prod</string>
<string>--zone</string>
<string>asia-southeast2-a</string>
<string>--tunnel-through-iap</string>
<string>--ssh-flag</string>
<string>-N -L 8443:172.19.0.18:443</string>
</array>
<!-- If you need logs, you can include the two keys below -->
<key>StandardOutPath</key>
<string>/users/helloworld/Documents/Logs/jumphost.out</string>
<key>StandardErrorPath</key>
<string>/users/helloworld/Documents/Logs/jumphost.error</string>
</dict>
</plist>

kube-ip-redirect-to-localhost.plist

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>kube-ip-redirect-to-localhost</string>
<key>KeepAlive</key>
<dict>
<key>SuccessfulExit</key>
<false/>
</dict>
<key>UserName</key>
<string>YOUR_USERNAME</string>
<key>ProgramArguments</key>
<array>
<!-- We are going to execute this shell script -->
<string>/bin/sh</string>
<string>/Users/helloworld/Library/LaunchAgents/kube-ip-redirect-to-localhost.sh</string>
</array>
</dict>
</plist>

kube-ip-redirect-to-localhost.sh

# Remember: don't add single quotes (') or double quotes (") around YOUR_SUDO_PASSWORD
echo YOUR_SUDO_PASSWORD | sudo -S ifconfig lo0 alias 172.20.2.51;
echo YOUR_SUDO_PASSWORD | sudo -S echo 'rdr pass on lo0 inet proto tcp from any to 172.20.2.51 port 443 -> 127.0.0.1 port 8343' | sudo pfctl -ef -
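Assuming both plist files live in ~/Library/LaunchAgents (the paths above are from my machine and will differ on yours), loading and unloading them looks like this:

launchctl load -w ~/Library/LaunchAgents/Jumphost.plist
launchctl load -w ~/Library/LaunchAgents/kube-ip-redirect-to-localhost.plist

launchctl unload ~/Library/LaunchAgents/Jumphost.plist
launchctl unload ~/Library/LaunchAgents/kube-ip-redirect-to-localhost.plist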

I hope this article helps you.
