Kuma Mesh multi-zone deployment on Kubernetes with Pulumi

--

In today’s cloud-native landscape, deploying and managing applications across multiple zones in a reliable and scalable manner is crucial for ensuring high availability and optimal performance. To achieve this, organisations often leverage service mesh technologies to enhance traffic management, observability, and security.

Kuma, an open-source service mesh built on Envoy, provides an elegant solution for multi-zone deployments. In this article, we will explore how to deploy and configure Kuma Mesh on an Azure Kubernetes Service (AKS) cluster using Pulumi with TypeScript, enabling seamless communication between microservices across multiple availability zones.

Table of Contents:

  1. Prerequisites
  2. Set up global control plane
  3. Provision the Infrastructure
  4. Access Kuma user interface
  5. Set up Zone Control Planes kumazone1 and kumazone2
  6. Verify control plane connectivity
  7. Configure Data Plane with Kuma Mesh and deploy sample applications on Zone-CPs
  8. Testing Multi-zone Deployment services connection
  9. Conclusion

Prerequisites:

Before we dive into the deployment process, make sure you have the following prerequisites in place:

  • Azure CLI: Install the Azure CLI to interact with Azure resources.
  • AKS clusters for the global control plane (Global-CP) and the zone control planes (Zone1-CP and Zone2-CP)
  • Pulumi CLI: Install the Pulumi CLI in your environment. You can install it by following the instructions in the Pulumi documentation.
  • kumactl: Install kumactl in your environment by running the following commands:
curl -L https://kuma.io/installer.sh | VERSION=2.3.0 sh -
cd kuma-2.3.0/bin
PATH=$(pwd):$PATH
  • Basic knowledge of TypeScript and Kubernetes concepts.

Step 1: Setting Up the Pulumi Project

To get started, create a new directory for your Pulumi project and initialize it by running the following commands in your terminal:

mkdir pulumi-kuma-mesh
cd pulumi-kuma-mesh

pulumi new azure-typescript

This will create a new Pulumi project using the Azure TypeScript template.

To work with Kubernetes and Kuma in Pulumi, we need to install the necessary dependencies. Run the following commands to install them:

npm install @pulumi/kubernetes

# pulumi config set
pulumi config set azure-native:location westus2
pulumi config set pulumi-poc:resourceGroupName ac-rg
pulumi config set pulumi-poc:resourceName global-cp

Note: Make sure you provide the correct values for your resource group name and AKS cluster name.


Step 2: Set up Global Control Plane

Next, we will deploy the Kuma global control plane to our global-cp AKS cluster using Helm. In the index.ts file generated by Pulumi, import the required packages and define the required infrastructure resources, such as the resource group and the AKS cluster kubeconfig, using Pulumi's Azure provider. Modify the index.ts file as follows:

import * as pulumi from "@pulumi/pulumi";
import * as azure_native from "@pulumi/azure-native";
import * as k8s from "@pulumi/kubernetes";

// Grab some values from the Pulumi stack configuration (or use defaults)
const projCfg = new pulumi.Config();
const resourceGroupName = projCfg.get("resourceGroupName") || "ac-rg";
const clusterName = projCfg.get("resourceName") || "global-cp";

const creds = pulumi.all([clusterName, resourceGroupName]).apply(([kubeconfigName, rgName]) =>
    azure_native.containerservice.listManagedClusterUserCredentials({
        resourceGroupName: rgName,
        resourceName: kubeconfigName,
    }));

const encodedKubeconfig = creds.kubeconfigs[0].value!;
const kubeconfig = encodedKubeconfig.apply(kubeconfigYaml =>
    Buffer.from(kubeconfigYaml, "base64").toString()
);

// Create the k8s provider using the kubeconfig from the existing AKS cluster
const k8sProvider = new k8s.Provider("k8s-provider", { kubeconfig });

// Create a namespace for the Kuma mesh
const kumaNamespaceName = "kuma-system";
const kumaNamespace = new k8s.core.v1.Namespace("namespace", {
    metadata: {
        name: kumaNamespaceName,
    },
}, { provider: k8sProvider });

// Install PostgreSQL using Helm
const postgreSQLHelmChart = new k8s.helm.v3.Release("postgresql", {
    namespace: kumaNamespace.metadata.name,
    chart: "postgresql",
    version: "12.5.8",
    name: "my-release",
    repositoryOpts: {
        repo: "https://charts.bitnami.com/bitnami",
    },
}, { provider: k8sProvider, dependsOn: kumaNamespace });

// Encode "postgres" as the DB and user to use for the POSTGRES_DB and POSTGRES_USER envs
const encodedPostgresDB = Buffer.from("postgres").toString("base64");
const encodedPostgresUser = Buffer.from("postgres").toString("base64");

// Retrieve the Postgres password from the existing secret for the POSTGRES_PASSWORD env
const secretName = "my-release-postgresql";
const myReleasePostgresqlSecret = pulumi.all([kumaNamespaceName, secretName]).apply(
    ([ns, name]) => k8s.core.v1.Secret.get(name, `${ns}/${name}`, { provider: k8sProvider, dependsOn: [kumaNamespace, postgreSQLHelmChart] })
);
const postgresPassword = myReleasePostgresqlSecret.data.apply(data => data["postgres-password"]);

// Retrieve the secret name and encode it for the POSTGRES_HOST_RW env
const postgresHost = myReleasePostgresqlSecret.metadata.apply(metadata => {
    const metadataName = metadata?.name || "";
    const buff = Buffer.from(metadataName, "utf-8");
    return buff.toString("base64");
});

// Create the Secret resource
const postgresSecret = new k8s.core.v1.Secret("postgres", {
    metadata: {
        name: "postgres",
        namespace: kumaNamespace.metadata.name,
    },
    type: "Opaque",
    data: {
        POSTGRES_DB: encodedPostgresDB,
        POSTGRES_HOST_RW: postgresHost,
        POSTGRES_USER: encodedPostgresUser,
        POSTGRES_PASSWORD: postgresPassword,
    },
}, { provider: k8sProvider });

// Install the global control plane
const globalControlPlane = new k8s.helm.v3.Release("global-cp", {
    namespace: kumaNamespace.metadata.name,
    chart: "kuma",
    name: "kuma",
    repositoryOpts: {
        repo: "https://kumahq.github.io/charts",
    },
    skipCrds: true,
    values: {
        controlPlane: {
            mode: "global",
            environment: "universal",
            service: {
                type: "LoadBalancer",
            },
            secrets: {
                postgresDb: {
                    Secret: "postgres",
                    Key: "POSTGRES_DB",
                    Env: "KUMA_STORE_POSTGRES_DB_NAME",
                },
                postgresHost: {
                    Secret: "postgres",
                    Key: "POSTGRES_HOST_RW",
                    Env: "KUMA_STORE_POSTGRES_HOST",
                },
                postgresUser: {
                    Secret: "postgres",
                    Key: "POSTGRES_USER",
                    Env: "KUMA_STORE_POSTGRES_USER",
                },
                postgresPassword: {
                    Secret: "postgres",
                    Key: "POSTGRES_PASSWORD",
                    Env: "KUMA_STORE_POSTGRES_PASSWORD",
                },
            },
        },
    },
}, { provider: k8sProvider, dependsOn: kumaNamespace });

Step 3: Provision the Infrastructure

Once the Pulumi project is configured and the Kuma resources and configurations are defined, you can provision the infrastructure. Pulumi will deploy the necessary resources and configure Kuma according to your specifications.

Run the following command to deploy Kuma mesh multi-zone Global Control Plane on your AKS cluster using Pulumi:

# Deploy your stack with --skip-preview; otherwise you may run into preview issues
pulumi up --skip-preview

# To destroy resources, run the following:
pulumi destroy

Note: Make sure you use --skip-preview when deploying the stack; otherwise the preview may fail (for example, because the PostgreSQL secret looked up in the code does not exist yet).

Confirm the changes and wait for the deployment to complete.

Once the deployment finishes, retrieve the external IP address of the global-zone-sync service. You will pass this as the value of <global-kds-address> when you set up the zone control planes.
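For example, with the kuma-system namespace used above, you can list the Kuma services with kubectl and note the EXTERNAL-IP of the zone sync service (the exact service name depends on the Helm release; with this setup it is typically kuma-global-zone-sync, exposed on port 5685):

kubectl get services -n kuma-system

# The zone control planes will use this address in the form:
# <global-kds-address> = grpcs://<EXTERNAL-IP>:5685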

The global control-plane will propagate the zone ingress resources and all policies to all other zones over Kuma Discovery Service (KDS), which is a protocol based on xDS.

Step 4: Access Kuma user interface

Next, retrieve the external IP address of the kuma-control-plane service to access the Kuma GUI at http://<external-IP>:5681/gui.
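For example, assuming the default service name and namespace from the Helm install above:

# Retrieve the external IP address of the kuma-control-plane service
kubectl get service kuma-control-plane -n kuma-system

# Then open the GUI at http://<EXTERNAL-IP>:5681/gui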

Step 5: Set up Zone Control Planes - kumazone1 and kumazone2

Next, we will deploy the Kuma zone control plane to our zone1-cp AKS cluster using Helm. Initialize a new Pulumi stack for the zone control plane by running the following commands in your terminal:

# Create new stack for zone-cp
pulumi stack init zone1-cp

# List stacks in the Pulumi project to get the full stack name
pulumi stack ls

# Create a dir for zone-cp and switch into it
mkdir zone1-cp
cd zone1-cp

# Provide the full stack name from the command above
pulumi new azure-typescript --stack zs-amrutha/pulumi-poc/zone1-cp

# Install the necessary dependencies
npm install @pulumi/kubernetes

# pulumi config set
pulumi config set azure-native:location westus2
pulumi config set pulumi-poc:resourceGroupName ac-rg
pulumi config set pulumi-poc:resourceName zone1-cp

Note: Make sure you provide the correct values for your resource group name and AKS cluster name.

Then modify the index.ts file as follows to import the required packages and define the required infrastructure resources, such as the resource group and the AKS cluster kubeconfig, using Pulumi's Azure provider:

Note: Make sure you update kdsGlobalAddress with <global-kds-address> as mentioned in Step 3.

import * as pulumi from "@pulumi/pulumi";
import * as azure_native from "@pulumi/azure-native";
import * as k8s from "@pulumi/kubernetes";

// Grab some values from the Pulumi stack configuration (or use defaults)
const projCfg = new pulumi.Config();
const resourceGroupName = projCfg.get("resourceGroupName") || "ac-rg";
const clusterName = projCfg.get("resourceName") || "zone1-cp";

const creds = pulumi.all([clusterName, resourceGroupName]).apply(([kubeconfigName, rgName]) =>
    azure_native.containerservice.listManagedClusterUserCredentials({
        resourceGroupName: rgName,
        resourceName: kubeconfigName,
    }));

const encodedKubeconfig = creds.kubeconfigs[0].value!;
const kubeconfig = encodedKubeconfig.apply(kubeconfigYaml =>
    Buffer.from(kubeconfigYaml, "base64").toString()
);

// Create the k8s provider using the kubeconfig from the existing AKS cluster
const k8sProvider = new k8s.Provider("k8s-provider", { kubeconfig });

// Create a namespace for the Kuma mesh
const kumaNamespaceName = "kuma-system";
const kumaNamespace = new k8s.core.v1.Namespace("namespace", {
    metadata: {
        name: kumaNamespaceName,
    },
}, { provider: k8sProvider });

// Install the Zone1 control plane
const zone1ControlPlane = new k8s.helm.v3.Release("zone1-cp", {
    namespace: kumaNamespace.metadata.name,
    chart: "kuma",
    name: "kuma",
    repositoryOpts: {
        repo: "https://kumahq.github.io/charts",
    },
    values: {
        controlPlane: {
            mode: "zone",
            zone: "kumazone1",
            kdsGlobalAddress: "grpcs://20.99.215.191:5685", // <global-kds-address>
            tls: {
                kdsZoneClient: {
                    skipVerify: true,
                },
            },
        },
        ingress: {
            enabled: true,
        },
    },
}, { provider: k8sProvider, dependsOn: kumaNamespace });

Repeat Step 5 to set up the kumazone2 zone on the zone2-cp AKS cluster with the appropriate configuration changes, for example:
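The commands below are a sketch of those changes, mirroring the zone1-cp setup (the stack, directory, and cluster names are examples; adjust them for your environment):

# Create a new stack and project directory for zone2-cp
pulumi stack init zone2-cp
pulumi stack ls
mkdir zone2-cp
cd zone2-cp
pulumi new azure-typescript --stack zs-amrutha/pulumi-poc/zone2-cp
npm install @pulumi/kubernetes

# Point the stack at the zone2-cp AKS cluster
pulumi config set azure-native:location westus2
pulumi config set pulumi-poc:resourceGroupName ac-rg
pulumi config set pulumi-poc:resourceName zone2-cp

In index.ts, change zone: "kumazone1" to zone: "kumazone2" (and rename the zone1ControlPlane resource accordingly), keeping the same kdsGlobalAddress.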

Step 6: Verify control plane connectivity

You can run kumactl get zones, or check the list of zones in the web UI of the global control plane, to verify that the zone control planes (kumazone1 and kumazone2) are connected. When a zone control plane connects to the global control plane, the Zone resource is created automatically in the global control plane.
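From the command line, a quick check looks like the following, assuming kumactl is pointed at the global control plane (using the kuma-control-plane external IP from Step 4):

# Point kumactl at the global control plane (one-time setup)
kumactl config control-planes add --name global-cp --address http://<global-cp-external-IP>:5681

# List the connected zones; kumazone1 and kumazone2 should appear as Online
kumactl get zones

# List the zone ingresses registered by each zone
kumactl get zone-ingresses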

The Zone Ingresses tab of the web UI also lists the zone control planes that you deployed with zone ingress enabled. You will also notice that Kuma automatically creates a mesh entity named default.

Step 7: Configure Data Plane with Kuma Mesh and deploy sample applications on zone-CPs

On Kubernetes the Dataplane entity is automatically created for you, and because transparent proxying is used to communicate between the service and the sidecar proxy, no code changes are required in your applications.

You can control where Kuma automatically injects the data plane proxy by labeling either the namespace or the Pod with kuma.io/sidecar-injection=enabled. Add the following Pulumi code snippet to the zone1-cp stack's index.ts file to create a namespace with this label and deploy a demo application with the Docker image debianmaster/nodejs-welcome on the zone1-cp cluster.

// Create a namespace with kuma sidecar injection label
const appNamespaceName = "demo-app";
const appNamespace = new k8s.core.v1.Namespace("appnamespace", {
    metadata: {
        name: appNamespaceName,
        labels: {
            "kuma.io/sidecar-injection": "enabled",
        },
    },
}, { provider: k8sProvider, dependsOn: [kumaNamespace, zone1ControlPlane] });

// deploy nodejs welcome application
const welcomeDeployment = new k8s.apps.v1.Deployment("welcomeappdeployment", {
    metadata: {
        name: "welcome",
        namespace: appNamespace.metadata.name,
        labels: {
            run: "welcome",
        },
    },
    spec: {
        replicas: 1,
        selector: {
            matchLabels: {
                run: "welcome",
            },
        },
        template: {
            metadata: {
                labels: {
                    run: "welcome",
                },
            },
            spec: {
                containers: [{
                    image: "debianmaster/nodejs-welcome",
                    name: "welcome",
                    ports: [{
                        containerPort: 8080,
                    }],
                    resources: {},
                }],
            },
        },
    },
}, { provider: k8sProvider, dependsOn: appNamespace });

// expose the welcome application through a LoadBalancer service
const welcomeservice = new k8s.core.v1.Service("welcomeservice", {
    metadata: {
        name: "welcome",
        namespace: appNamespace.metadata.name,
    },
    spec: {
        type: "LoadBalancer",
        ports: [{
            port: 8080,
            protocol: "TCP",
            targetPort: 8080,
        }],
        selector: {
            run: "welcome",
        },
    },
}, { provider: k8sProvider, dependsOn: [appNamespace, welcomeDeployment] });

Verify that the demo-app namespace was created with the label kuma.io/sidecar-injection=enabled as shown below.

Verify the demo applications (welcome and sample) and retrieve the external IP address of the welcome service to access the welcome app in your web browser, as shown below.
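For example, you can verify both against the zone1-cp cluster with kubectl (the names assume the resources defined above):

# Confirm the namespace carries the sidecar injection label
kubectl get namespace demo-app --show-labels

# The welcome pod should report 2/2 containers (application + kuma-sidecar)
kubectl get pods -n demo-app

# Retrieve the external IP of the welcome service, then open http://<EXTERNAL-IP>:8080
kubectl get service welcome -n demo-app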

Repeat the same steps to deploy the sample application in the kumazone2 zone on the zone2-cp AKS cluster with the appropriate configuration changes.

Step 8: Testing Multi-zone Deployment service connection

Next, find the available service addresses as shown below.

# Get the serviceinsight to find available service address
kubectl get serviceinsight
kubectl get serviceinsight -oyaml

Let's exec into the sample application pod, which is deployed in kumazone2 on the zone2-cp AKS cluster, and send a request to the welcome app service (or vice versa). You can use the generated kuma.io/service name and Kuma DNS:

kubectl get pod -n <namespace>
kubectl -n <namespace> exec -it <pod-name> -- sh

# send request to the service
curl http://<serviceinsight_service_addressPort>
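For example, assuming the welcome service in the demo-app namespace exposes port 8080, the generated kuma.io/service tag would be welcome_demo-app_svc_8080 and the Kuma DNS address would look like this (Kuma DNS names resolve on port 80 by default):

# send request to the welcome service from the sample application pod
curl http://welcome_demo-app_svc_8080.mesh:80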

Tip: If you are not able to run the curl command inside the pod, install it by running apk update && apk add curl.

Note: Ensure mTLS is enabled on the multi-zone meshes.

mTLS is mandatory to enable cross-zone service communication. mTLS can be configured in your mesh configuration as indicated in the Kuma mTLS documentation. This is required because Kuma uses the Server Name Indication (SNI) field, part of the TLS protocol, to pass routing information across zones.
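As a minimal sketch, assuming kumactl is configured against the global control plane as in Step 6, you can enable mTLS with a builtin certificate authority on the default mesh like this:

cat <<EOF > mesh-mtls.yaml
type: Mesh
name: default
mtls:
  enabledBackend: ca-1
  backends:
  - name: ca-1
    type: builtin
EOF

kumactl apply -f mesh-mtls.yaml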

Conclusion

Deploying a multi-zone global control plane with Kuma empowers organizations to efficiently manage and scale their distributed service mesh deployments. By following the steps outlined in this article, you can configure a resilient and secure communication infrastructure across Kubernetes clusters or environments. Leveraging Kuma's powerful features, you'll gain greater control over your microservices architecture while ensuring reliable and secure communication across zones.
