quickNotes: Handling Ingress in GKE

Pavan kumar Bijjala
6 min read · Oct 19, 2022

We know that in Kubernetes, there are generally 3 ways to expose workloads publicly:

  • Service (with type NodePort)
  • Service (with type LoadBalancer)
  • Ingress

NodePort Services have been around almost since the birth of Kubernetes. But due to the limited port range (30000~32767), the randomness of assigned ports, and the need to expose the network of (almost) the whole cluster, a NodePort Service is usually not considered a good L4 solution for serious production workloads.

A viable solution today for L4 apps is the LoadBalancer Service. It is implemented differently across Kubernetes offerings, by connecting a Kubernetes Service object with a real/virtual IaaS (cloud) load balancer, so that traffic going through the load balancer endpoint can be routed to the destination pods properly. However, each such Service consumes one IP address, which makes exposing many services externally costly. That is where Ingress comes in.
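For illustration, a minimal sketch of a LoadBalancer Service (the name my-app and the ports are assumptions, not from the original):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-app                # illustrative name
    spec:
      type: LoadBalancer          # the cloud provider provisions an L4 load balancer
      selector:
        app: my-app               # traffic is routed to pods carrying this label
      ports:
        - port: 80                # port exposed on the load balancer
          targetPort: 8080        # container port on the pods

Each such Service gets its own load balancer IP, which is exactly the cost that Ingress amortizes across services.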

Also, in reality, L7 (e.g. HTTP) workloads are far more widely used than L4 ones. So the community came up with the Ingress concept.

Introduction

An Ingress in Kubernetes exposes HTTP and HTTPS routes from outside the cluster to Services within the cluster. Traffic routing is controlled by rules defined in the Ingress definition. It is possible to share the same Ingress definition across multiple Services, as shown below.

An Ingress can route to multiple Services because it operates at L7, routing by reading HTTP headers; this is not possible at L4, where an endpoint is just an IP and port combination.
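As a sketch of that fan-out, a single Ingress routing two URL paths to two Services (my-api and my-web are assumed names):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: fanout-ingress
    spec:
      rules:
        - http:
            paths:
              - path: /api
                pathType: Prefix
                backend:
                  service:
                    name: my-api      # assumed Service name
                    port:
                      number: 80
              - path: /web
                pathType: Prefix
                backend:
                  service:
                    name: my-web      # assumed Service name
                    port:
                      number: 80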

An Ingress controller in Kubernetes is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.

Under the hood, an ingress controller handles Ingress objects and sets up the mapping rules. It can (1) live inside the cluster, leveraging Nginx/Envoy/etc., and/or (2) program a load balancer exposed externally, as you see in cloud environments.

Ingress in Google Kubernetes Engine

GKE handles ingress natively through Ingress or Multi Cluster Ingress objects.

The good thing is that GKE comes with a built-in GKE Ingress controller, so you don't have to do any additional deployment or configuration to set one up. When you create an Ingress object, GKE launches a load balancer (internal or external) with all the routing rules specified in the Ingress resource definition.

Handling of Ingress in GKE

Which type of load balancer to use depends on where the client sits on the network, i.e.,

  1. internal, i.e., in the same GKE cluster (a ClusterIP Service can be used; no Ingress required)
  2. within the same VPC (use an internal load balancer for Ingress, suitable for any Service type; see the sketch after this list)
  3. from an external network (an external load balancer for Ingress, applicable only to LoadBalancer and Ingress as service types).
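As an example of case 2, the kubernetes.io/ingress.class annotation tells the GKE controller which load balancer to provision; a minimal sketch (the backend Service name is an assumption):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: internal-ingress
      annotations:
        kubernetes.io/ingress.class: "gce-internal"   # internal HTTP(S) LB; "gce" (the default) gives an external one
    spec:
      defaultBackend:
        service:
          name: my-app            # assumed Service name
          port:
            number: 80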

Can the GKE Ingress controller handle L4 traffic? No, but Services can, as shown in the table below; alternatively, use a custom Ingress controller such as NGINX. Can one Ingress host multiple Services? Yes.

The table below lists the protocols supported by GKE Services for each Service type (ref: service networking); the only pitfall is that each deployed Service consumes one IP address.

Even though upstream Kubernetes supports multiple ports in the case of an L4 load balancer, the (default) GKE Service might not, as it uses an [IP+Port] type of backend configuration.

Internal Layer 4 and Layer 7 load balancers in GKE are regional resources, not global. Global access (clients from any region can connect) is enabled per Service using the following annotation: networking.gke.io/internal-load-balancer-allow-global-access: "true".
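A sketch of an internal LoadBalancer Service with global access enabled (the Service name and ports are assumptions):

    apiVersion: v1
    kind: Service
    metadata:
      name: ilb-service
      annotations:
        networking.gke.io/load-balancer-type: "Internal"                        # provision an internal L4 LB
        networking.gke.io/internal-load-balancer-allow-global-access: "true"    # allow clients from any region in the VPC
    spec:
      type: LoadBalancer
      selector:
        app: my-app
      ports:
        - port: 80
          targetPort: 8080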

Coming back to Ingress,

  • Ingress supports multiple backends. One backend service corresponds to each Service referenced by the Ingress, selected by URL path.
  • Any requests that don't match the paths in the rules field are sent to the defaultBackend.
  • Use CRDs to customize Ingress object definitions in GKE.

The FrontendConfig and BackendConfig custom resource definitions (CRDs) allow you to further customize the load balancer. These CRDs let you define additional load balancer features hierarchically, in a more structured way than annotations; most prominently, HTTP-to-HTTPS redirection.
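For example, a FrontendConfig enabling HTTP-to-HTTPS redirection, attached to an Ingress through an annotation (resource names are illustrative, and the Ingress must also have HTTPS/TLS configured for the redirect to be useful):

    apiVersion: networking.gke.io/v1beta1
    kind: FrontendConfig
    metadata:
      name: redirect-config
    spec:
      redirectToHttps:
        enabled: true             # redirect plain-HTTP requests to HTTPS
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-ingress
      annotations:
        networking.gke.io/v1beta1.FrontendConfig: "redirect-config"   # attach the FrontendConfig above
    spec:
      defaultBackend:
        service:
          name: my-app            # assumed Service name
          port:
            number: 443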

  • Services are annotated automatically (on clusters where NEGs are enabled) with cloud.google.com/neg: '{"ingress": true}'.
  • The Ingress and its Service resources must be in the same namespace.
  • Ingress for internal or external load balancing supports serving TLS certificates to clients. You can serve TLS certificates through Kubernetes Secrets or through pre-shared regional SSL certificates in Google Cloud, and you can specify multiple certificates per Ingress resource (see the sketch after this list).
  • The IP address for an Ingress is automatically allocated from the GKE node subnet, not from the proxy-only subnets, unless the networking.gke.io/internal-load-balancer-subnet annotation is specified on the Service.
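A sketch showing both TLS options on one Ingress (the Secret and certificate names are assumptions):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: tls-ingress
      annotations:
        ingress.gcp.kubernetes.io/pre-shared-cert: "my-gcp-cert"   # pre-shared certificate created in Google Cloud
    spec:
      tls:
        - secretName: my-tls-secret   # Kubernetes Secret of type kubernetes.io/tls
      defaultBackend:
        service:
          name: my-app                # assumed Service name
          port:
            number: 443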

Please note that GKE's Ingress controller can be customized.

The list of available Ingress annotations in ingress-gce can be obtained from the ingress.go library definition.

Does GKE create two load balancer resources if an Ingress is created for a Service of type LoadBalancer? No; see the example deployment steps. I believe annotations in the Service manifest control whether a separate load balancer is deployed for it when ingress is true (research TBD).

Having covered GKE Ingress, whose Ingress controller is external to the GKE cluster, let's look at the other possibilities for ingress controllers in a GKE architecture.

Mesh Ingress

In-cluster Ingress refers to software Ingress controllers whose Ingress proxies are hosted inside the Kubernetes cluster itself.

istio-ingressgateway and ingress-nginx are two examples of commonly used open-source in-cluster Ingress controllers. The open-source solutions are likely to require closer management and a higher level of technical expertise, particularly for control-plane management.

Service meshes provide client-side load balancing through a centralized control plane. Traffic Director and Anthos Service Mesh power the ability to load balance internal traffic across GKE clusters, across regions, and also between containers and VMs. Both operate through their own sidecar proxies, except for proxyless gRPC services, which don't require sidecars.

The GKE Ingress controllers discussed earlier generally deploy middle-proxy load balancers in front of clients and do not place their own sidecar proxies in the cluster.

So, in a service mesh, when should you not use GKE Ingress?

If the client and server are in the same mesh, traditional application exposure can be handled via the mesh itself rather than by middle-proxy load balancing.
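For illustration, with Istio this in-mesh routing could be expressed as a VirtualService that splits traffic between two subsets, with no middle-proxy load balancer involved (all names, subsets, and weights are assumptions):

    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: my-app-routes
    spec:
      hosts:
        - my-app                  # in-mesh destination Service (assumed name)
      http:
        - route:
            - destination:
                host: my-app
                subset: v1        # subsets come from a DestinationRule (not shown)
              weight: 90
            - destination:
                host: my-app
                subset: v2
              weight: 10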

Service meshes provide benefits such as fine-grained traffic management, mutual TLS between workloads, and rich telemetry, so exposing an application through the mesh keeps those capabilities end to end.

Conclusion

GKE Ingress can handle many varieties of traffic, since it uses Google Cloud native load balancers and has a rich set of Ingress definitions, which can be extended through CRDs to bring in more native features. Ingress has more limited use cases when a service mesh is used with your GKE cluster, as the mesh already provides a rich set of service and application networking features.


Further reading

Gateway: Kubernetes came up with the Gateway API (https://kubernetes.io/blog/2022/07/13/gateway-api-graduates-to-beta/) and GKE has an equivalent implementation, along similar lines to Ingress. The Gateway API spec helps you write portable, cloud-neutral (as opposed to cloud-native) definitions, i.e., without relying on the CRDs GKE provides for its Ingress handling.
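As a sketch of what those Gateway API definitions look like on GKE (using the gke-l7-gxlb GatewayClass available at the time of writing; the route and Service names are assumptions):

    apiVersion: gateway.networking.k8s.io/v1beta1
    kind: Gateway
    metadata:
      name: external-gateway
    spec:
      gatewayClassName: gke-l7-gxlb   # GKE's global external HTTP(S) load balancer class
      listeners:
        - name: http
          protocol: HTTP
          port: 80
    ---
    apiVersion: gateway.networking.k8s.io/v1beta1
    kind: HTTPRoute
    metadata:
      name: my-route
    spec:
      parentRefs:
        - name: external-gateway
      rules:
        - backendRefs:
            - name: my-app            # assumed Service name
              port: 80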

Multi Cluster Ingress is designed to meet the load balancing needs of multi-cluster, multi-regional environments. This GCP service can be consumed standalone (without an Anthos license); details are at https://cloud.google.com/kubernetes-engine/docs/concepts/multi-cluster-ingress.
