Installing Istio multicluster deployment with Terraform

Eoneoff · Jan 24, 2023

Today Kubernetes is the de facto standard among container orchestration systems. It is a very powerful and versatile solution, but it lacks some of the finer networking functionality required for production applications. The Kubernetes project is working in that direction, proposing new elements of the standard Kubernetes API, but today this niche is occupied by service meshes, and one of the most popular and widely used of them is Istio.

One of the features service meshes (including Istio) provide is the transparent and secure joining of Kubernetes clusters, so that service discovery and network connectivity can be established between them through secure encrypted tunnels.

Istio has very detailed and clear documentation on the setup of such a multicluster configuration, provided here. The only drawback of this documentation is that it concentrates solely on deployment with the istioctl command line utility, which is not very convenient for automation and production.

In fact, besides istioctl, Istio also provides an installation with Helm charts. Unfortunately, the documentation for it is not very detailed. If you install just a regular Istio instance, it is quite easy, but when it comes to more complicated configurations, it can become tricky.

In this article, I will describe an Istio multicluster installation from scratch to a running multicluster configuration. Because we will need to install multiple components, not only Helm charts, I will use Terraform to bootstrap everything in one single config.

The prerequisites

To run this tutorial you need just two things:

  1. Two Kubernetes clusters, which you will connect with the Istio service mesh
  2. A Terraform client installed on your PC

There are multiple ways to get Kubernetes clusters. You can use local clusters configured with tools like minikube, kind or k0s. Or, which is way easier, you can use one of the preconfigured managed cluster offerings from Google, Amazon, DigitalOcean, Oracle, IBM, Linode and many other cloud providers.

Because there are so many different ways to configure a Kubernetes cluster, I won’t describe any of them; you can find a lot of very detailed instructions on the web.
Let’s assume that you have already created two clusters and have a kubeconfig file which includes access to both of them, with contexts named “west” and “east”.
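The context names are just an assumption of this tutorial; if your kubeconfig uses different ones, you can check and rename them with kubectl, for example:

kubectl config get-contexts
# rename whatever contexts your cloud provider generated to the names used below
kubectl config rename-context <your-first-context> west
kubectl config rename-context <your-second-context> east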

You can read how to install the Terraform client (if you don’t have one) by this link.

So, to the point.
Create an istio-multicluster folder, where all our configuration will be contained.

If you’ve read the multicluster installation link I’ve provided, you know that we should create a shared certificate and inject it into both clusters. One of the most important functions of Istio (and of most other service meshes) is the transparent encryption of the network traffic inside the mesh. To provide it, Istio needs a root certificate, with which it signs the certificates issued for each pod in the mesh. And for a multicluster config, all clusters should share a root certificate, so pods in one cluster can recognize pods in the other one.

Injecting the certificates into Istio is, on the one hand, quite easy: you just provide a Kubernetes secret called cacerts in the Istio installation namespace (istio-system by default). On the other hand, creating this secret is, by far, the trickiest part of the whole setup. In most tutorials this part is simply skipped, and the authors just provide the certificate files or scripts that create them.

But we will create those certificates by ourselves. Luckily Terraform provides us with means to do it — the TLS provider.

In the folder istio-multicluster create a folder modules and inside it a folder cacerts. Inside this folder create a file main.tf.

First we should create a root self-signed certificate, which will be shared between clusters as a certificate authority.

resource "tls_private_key" "ca" {
  algorithm = "RSA"
}

resource "tls_self_signed_cert" "ca" {
  private_key_pem       = tls_private_key.ca.private_key_pem
  is_ca_certificate     = true
  set_subject_key_id    = true
  set_authority_key_id  = true
  validity_period_hours = 87600

  allowed_uses = [
    "cert_signing",
    "crl_signing"
  ]

  subject {
    common_name = "root.multicluster.com"
  }

  depends_on = [
    tls_private_key.ca
  ]
}

Pay attention to the is_ca_certificate, set_subject_key_id and set_authority_key_id parameters and to allowed_uses. Those are required for a certificate to be valid for signing other certificates. The validity period and subject parameters are not that important.

Also, we use the private key we’ve created as the key for our self-signed certificate, so it’s wise to declare this key as a dependency, so that it is created before the certificate itself.

After we have created the root CA, we need to create as many intermediate certificates as we have clusters. To provide some flexibility to this module, we will pass a list of cluster names to it as a variable.

Create a file variables.tf in the same folder, where our main.tf is and write into it

variable "clusters" {
  type = list(string)
}

Now we can use this variable with the Terraform for_each meta-argument.

Write in the main.tf

resource "tls_private_key" "cert" {
  for_each  = toset(var.clusters)
  algorithm = "RSA"
}

resource "tls_cert_request" "cert" {
  for_each = toset(var.clusters)

  private_key_pem = tls_private_key.cert[each.key].private_key_pem

  subject {
    common_name = "${each.key}.intermediate.multicluster.com"
  }

  depends_on = [
    tls_private_key.cert
  ]
}

resource "tls_locally_signed_cert" "cert" {
  for_each = toset(var.clusters)

  cert_request_pem = tls_cert_request.cert[each.key].cert_request_pem

  ca_private_key_pem = tls_private_key.ca.private_key_pem
  ca_cert_pem        = tls_self_signed_cert.ca.cert_pem

  is_ca_certificate     = true
  set_subject_key_id    = true
  validity_period_hours = 87600

  allowed_uses = [
    "cert_signing",
    "crl_signing"
  ]

  depends_on = [
    tls_cert_request.cert,
    tls_private_key.ca,
    tls_self_signed_cert.ca
  ]
}

Here, for each item in the clusters list, we create a private key, a certificate signing request and, finally, a certificate signed by our root CA.

This is a Terraform module, so we are going to need some way to get the required parameters out of it. Create an outputs.tf file, still in the same folder, and write to it

output "root-cert" {
  value = tls_self_signed_cert.ca.cert_pem
}

output "certs" {
  value = {
    for name, cert in tls_locally_signed_cert.cert : name => cert.cert_pem
  }
}

output "keys" {
  value = {
    for name, key in tls_private_key.cert : name => key.private_key_pem
  }
}

We get three output parameters: the root certificate, and two maps of certificates and keys, keyed by our cluster names.

Now it’s time to create the Istio deployment itself. With istioctl it is as easy as running istioctl install, but with Terraform we will use the Istio Helm charts and the Terraform Helm provider.

In the modules folder create a new folder istio and a main.tf file in it.

The TLS provider requires no setup, but now we will use the Kubernetes, Helm and kubectl providers, so we should declare them at the beginning of the module

terraform {
  required_providers {
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
    helm = {
      source = "hashicorp/helm"
    }
    kubectl = {
      source = "gavinbunney/kubectl"
    }
  }
}

Firstly, we should create a namespace for the Istio installation. We could have relied on Helm’s create_namespace parameter, but we need to provide the cacerts secret for Istio before the installation, so we create the namespace ourselves

resource "kubernetes_namespace" "istio-system" {
  metadata {
    name = "istio-system"

    labels = {
      "topology.istio.io/network" = var.network
    }
  }
}

We pass the network parameter to the module, so we should create a corresponding variable. In the istio folder create a variables.tf file and write to it

variable "network" {
  type = string
}

Next we should create the cacerts secret. To be recognized by Istio, it should not only have this exact name, but also follow a certain rigid layout. Write to main.tf

resource "kubernetes_secret" "cacerts" {
  metadata {
    name      = "cacerts"
    namespace = "istio-system"
  }

  data = {
    "ca-cert.pem"    = var.cert
    "ca-key.pem"     = var.key
    "root-cert.pem"  = var.ca-root
    "cert-chain.pem" = "${var.cert}${var.ca-root}"
  }

  depends_on = [
    kubernetes_namespace.istio-system
  ]
}

As you’ve probably understood, we should add

variable "cert" {
  type = string
}

variable "key" {
  type = string
}

variable "ca-root" {
  type = string
}

to the variables.tf file.

Now we can actually deploy Istio. Surprisingly, that’s by far the easiest part of the whole config. It closely follows this tutorial, only with the Helm CLI commands translated into a Terraform config and some configuration parameters added.

resource "helm_release" "istio-base" {
  name             = "istio-base"
  namespace        = "istio-system"
  create_namespace = true
  repository       = "https://istio-release.storage.googleapis.com/charts"
  chart            = "base"
  timeout          = 300
  cleanup_on_fail  = true

  depends_on = [
    kubernetes_secret.cacerts
  ]
}

resource "helm_release" "istiod" {
  name             = "istiod"
  namespace        = "istio-system"
  create_namespace = true
  repository       = "https://istio-release.storage.googleapis.com/charts"
  chart            = "istiod"
  timeout          = 900
  cleanup_on_fail  = true
  wait             = true

  set {
    name  = "global.meshID"
    value = var.mesh_id
  }

  set {
    name  = "global.multiCluster.clusterName"
    value = var.cluster_name
  }

  set {
    name  = "global.network"
    value = var.network
  }

  set {
    name  = "meshConfig.defaultConfig.proxyMetadata.ISTIO_META_DNS_CAPTURE"
    value = "true"
    type  = "string"
  }

  depends_on = [
    helm_release.istio-base
  ]
}

We use two Helm charts from the official Istio Helm repository: base and istiod. The first one installs the cluster-wide Istio resources, first of all the custom resource definitions for Gateways, VirtualServices and so on. The second one actually installs the Istio control plane into the cluster. As you must have guessed, you should also add

variable "mesh_id" {
  type = string
}

variable "cluster_name" {
  type = string
}

to your variables.tf file.

BTW, pay attention to that type = "string" line in the set block for ISTIO_META_DNS_CAPTURE. It forces Helm to treat the value as a string instead of a boolean, and that one cost me a lot of nerves :).
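For reference, that set block should roughly correspond to passing the value with Helm’s --set-string flag on the command line, something like this (other flags for meshID, clusterName and network omitted here):

helm install istiod istiod --repo https://istio-release.storage.googleapis.com/charts -n istio-system \
  --set-string meshConfig.defaultConfig.proxyMetadata.ISTIO_META_DNS_CAPTURE=true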

Now we have the Istio control plane in the cluster, and the actual multicluster setup begins. The clusters need a way to communicate with each other transparently and securely. For that we are going to use an Istio gateway (the so-called east-west gateway), which will effectively function as an ingress for all requests coming from the remote cluster.

resource "helm_release" "cross-network-gateway" {
  name             = "cross-network-gateway"
  namespace        = "istio-system"
  create_namespace = true
  repository       = "https://istio-release.storage.googleapis.com/charts"
  chart            = "gateway"
  timeout          = 900
  cleanup_on_fail  = true

  values = [
    templatefile("${path.module}/cross-network-gateway-config.yaml", {
      network = var.network
    })
  ]

  depends_on = [
    helm_release.istiod
  ]
}

The values we pass to this Helm chart have quite a structure, so instead of passing them one by one with the set argument, I’ve used the values argument with a Terraform template file.

Create a cross-network-gateway-config.yaml file in the istio module folder and write into it

env:
  ISTIO_META_REQUESTED_NETWORK_VIEW: ${network}
labels:
  istio: eastwestgateway
  topology.istio.io/network: ${network}
networkGateway: ${network}

Now we need one more component of the cluster configuration: an Istio Gateway object, which will route the requests arriving at the east-west gateway.

Write this to our main.tf

resource "kubectl_manifest" "expose-services" {
  yaml_body = file("${path.module}/expose-services.yaml")

  depends_on = [
    helm_release.istiod
  ]
}

Then create an expose-services.yaml file in the modules/istio folder and write to it

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cross-network-gateway
  namespace: istio-system
spec:
  selector:
    istio: eastwestgateway
  servers:
    - port:
        number: 15443
        name: tls
        protocol: TLS
      tls:
        mode: AUTO_PASSTHROUGH
      hosts:
        - "*.local"

Of course, there is no dedicated resource in the Terraform Kubernetes provider for every CRD out there, but it might seem we could have used the generic kubernetes_manifest resource from the Kubernetes provider. So why do we use kubectl_manifest from the kubectl provider instead? The flaw of the kubernetes_manifest resource, which renders it practically useless here, is that the CRD describing our custom resource must already exist in the cluster before the config is applied. So we either have to run the apply twice, one way or another, or use a different resource, as we do here.

Now there’s one last thing. The two clusters have to perform mutual service discovery. Istio provides it by injecting special “remote” secrets, which contain the data necessary to connect to the other cluster’s API. With the Istio CLI such a secret is created by a single command, istioctl create-remote-secret. To replicate this command with a Terraform config, we’ll have to do quite a lot of work.

Access to the Kubernetes API is authorized with tokens. Each token represents a user or a so-called service account. A user or service account can be bound to a role, which is a set of permissions describing the actions the bearer of a token is allowed to perform.

Istio has already created a service account with the necessary set of permissions. It is called istio-reader-service-account and is located in the istio-system namespace. Tokens belonging to a certain service account are stored in Kubernetes secrets. The secret with the token is also created for us, but the problem is that its name includes a random set of symbols, so it’s unpredictable. Luckily the service account yaml contains a field with the list of attached secrets, and we “just” need to get them.
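Once Istio is installed, you can see what this looks like in your own cluster with a quick kubectl call (the exact secret name in the output will, of course, differ):

kubectl --context west -n istio-system get serviceaccount istio-reader-service-account -o yaml
# on Kubernetes up to 1.23 the "secrets:" list at the bottom points to the
# auto-generated token secret, which you can then inspect with
# kubectl --context west -n istio-system get secret <that-secret-name> -o yaml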

Create a remote_secret folder in the modules folder, there create a main.tf file and write to it

terraform {
  required_providers {
    kubernetes = {
      source                = "hashicorp/kubernetes"
      configuration_aliases = [ kubernetes.local, kubernetes.remote ]
    }
  }
}

data "kubernetes_service_account" "istio-reader-service-account" {
  provider = kubernetes.local

  metadata {
    name      = "istio-reader-service-account"
    namespace = "istio-system"
  }
}

First, look at the configuration_aliases section of the kubernetes provider. Here we need two different instances of the Kubernetes Terraform provider, because we’ll read the access data from one cluster and then push it to the other.

Also, here we create a Terraform kubernetes_service_account data source. The next step is to use it in a kubernetes_secret data source. Pay attention that, unlike in the previous modules, we explicitly define a provider.

data "kubernetes_secret" "istio-reader-service-account-token" {
  provider = kubernetes.local

  metadata {
    name      = coalesce(data.kubernetes_service_account.istio-reader-service-account.default_secret_name, coalescelist(data.kubernetes_service_account.istio-reader-service-account.secret, [{name = null}]).0.name)
    namespace = "istio-system"
  }

  depends_on = [
    data.kubernetes_service_account.istio-reader-service-account
  ]
}

The name parameter looks a bit hacky. The problem is that the API of Kubernetes service accounts changed a bit in Kubernetes 1.24. Up to Kubernetes 1.23 a service account had a default_secret_name property, which contained the name of the secret with the token. It was deprecated, and in Kubernetes 1.24 it was finally removed. Instead there is a secret collection, from the first entry of which we should read the name parameter. If you know exactly which version of Kubernetes your clusters are running (and they all run the same one), you can use the simpler

name = data.kubernetes_service_account.istio-reader-service-account.default_secret_name

for Kubernetes 1.23 and below, and

name = data.kubernetes_service_account.istio-reader-service-account.secret.0.name

for Kubernetes 1.24 and up. If you don’t know the exact version, or your clusters run different versions, you’d better stick to that hacky line (if you are curious, the coalescelist with [{name = null}] is required to avoid reading properties of null when the secret attribute is absent).

So now for the last component — the remote secret itself.

resource "kubernetes_secret" "remote-secret" {
  provider = kubernetes.remote

  metadata {
    name      = "istio-remote-secret-${var.cluster_name}"
    namespace = "istio-system"

    labels = {
      "istio/multiCluster" = "true"
    }

    annotations = {
      "networking.istio.io/cluster" = var.cluster_name
    }
  }

  data = {
    "${var.cluster_name}" = templatefile("${path.module}/istio-remote-secret.yaml", {
      certificate_authority_data = var.ca_data,
      server                      = var.server
      name                        = var.cluster_name
      cluster                     = var.cluster_name
      context_name                = var.cluster_name
      current_context             = var.cluster_name
      user                        = var.cluster_name
      token                       = data.kubernetes_secret.istio-reader-service-account-token.data.token
    })
  }

  depends_on = [
    data.kubernetes_secret.istio-reader-service-account-token
  ]
}

I think it’s unnecessary to say that you should also create a variables.tf file and write to it

variable "ca_data" {
  type = string
}

variable "server" {
  type = string
}

variable "cluster_name" {
  type = string
}

Also you should create an istio-remote-secret.yaml file in the remote_secret folder and write to it

apiVersion: v1
clusters:
  - cluster:
      certificate-authority-data: ${certificate_authority_data}
      server: ${server}
    name: ${name}
contexts:
  - context:
      cluster: ${cluster}
      user: ${user}
    name: ${context_name}
current-context: ${current_context}
kind: Config
preferences: {}
users:
  - name: ${user}
    user:
      token: ${token}

This is the template Terraform will use to create the contents of the remote secret.

So now we finally come to bootstrapping it all together and actually creating an Istio multicluster deployment with a single console command. Don’t switch channels.

Create a main.tf file in our project root folder (side by side with modules folder).

While we were writing the modules we did not bother with defining Terraform providers, assuming that the main config would pass them to the modules. Now it’s time to do it.

Remember our prerequisites: we should have a single kubeconfig file with the two contexts, “west” and “east”, defined in it. We need the Kubernetes and Helm providers configured for both of those contexts and a way to pass those configurations to the modules. In older versions of Terraform we could have defined the providers right in the module, but now we have to define them in the root config. Let’s do it

terraform {
  required_providers {
    kubectl = {
      source = "gavinbunney/kubectl"
    }
  }
}

provider "kubernetes" {
  alias = "west"

  config_path    = var.kubeconfig_path
  config_context = "west"
}

provider "kubernetes" {
  alias = "east"

  config_path    = var.kubeconfig_path
  config_context = "east"
}

provider "helm" {
  alias = "west"

  kubernetes {
    config_path    = var.kubeconfig_path
    config_context = "west"
  }
}

provider "helm" {
  alias = "east"

  kubernetes {
    config_path    = var.kubeconfig_path
    config_context = "east"
  }
}

provider "kubectl" {
  alias          = "west"
  config_path    = var.kubeconfig_path
  config_context = "west"
}

provider "kubectl" {
  alias          = "east"
  config_path    = var.kubeconfig_path
  config_context = "east"
}

The var.kubeconfig_path here should point to the real path of your kubeconfig file; we will declare this variable in variables.tf below and set its value in terraform.tfvars.

We create two versions of each provider, with the aliases “west” and “east”: one for each cluster we use. Also, we had to explicitly declare the kubectl provider in the required_providers block, because its source is not in the default hashicorp namespace.

Now it’s just as simple as calling modules one by one and passing the data from one module to another. First the certificates.

module "cacerts" {
  source   = "./modules/cacerts"
  clusters = ["west", "east"]
}

We are going to create two control planes, so we pass a list of two names (“west” and “east”). “West” and “east” are the traditional names for such a configuration.

Now for our two Istio control planes

module "istio-west" {
  providers = {
    kubernetes = kubernetes.west
    helm       = helm.west
    kubectl    = kubectl.west
  }

  source       = "./modules/istio"
  network      = "west-network"
  mesh_id      = "mymesh"
  cluster_name = "west-cluster"
  ca-root      = module.cacerts.root-cert
  cert         = module.cacerts.certs["west"]
  key          = module.cacerts.keys["west"]

  depends_on = [
    module.cacerts
  ]
}

module "istio-east" {
  providers = {
    kubernetes = kubernetes.east
    helm       = helm.east
    kubectl    = kubectl.east
  }

  source       = "./modules/istio"
  network      = "east-network"
  mesh_id      = "mymesh"
  cluster_name = "east-cluster"
  ca-root      = module.cacerts.root-cert
  cert         = module.cacerts.certs["east"]
  key          = module.cacerts.keys["east"]

  depends_on = [
    module.cacerts
  ]
}

Because each control plane should be created in its own cluster, we pass the corresponding providers (kubernetes, helm and kubectl) to it.

You may ask why we don’t just loop over the list of clusters, as we did in the cacerts module. And, by the way, why we didn’t do that when creating the providers.

Unfortunately, Terraform does not allow it. Because Terraform is declarative, provider configurations are static: they cannot be created or assigned to modules dynamically.

As simple as that. We tell Terraform where the module source is, which providers to use, and that it should only be created after the cacerts module.

After both Istio control planes have been created, we can plug the remote secrets into them.

module "remote-secret-west" {
  providers = {
    kubernetes.local  = kubernetes.west
    kubernetes.remote = kubernetes.east
  }

  source = "./modules/remote_secret"

  cluster_name = "west-cluster"
  ca_data      = var.ca_data_west
  server       = var.server_west

  depends_on = [
    module.istio-west,
    module.istio-east
  ]
}

module "remote-secret-east" {
  providers = {
    kubernetes.local  = kubernetes.east
    kubernetes.remote = kubernetes.west
  }

  source = "./modules/remote_secret"

  cluster_name = "east-cluster"
  ca_data      = var.ca_data_east
  server       = var.server_east

  depends_on = [
    module.istio-east,
    module.istio-west
  ]
}

Here we pass two Kubernetes provider instances to each module, binding the corresponding clusters to the local and remote aliases.

Also create a variables.tf file beside the main.tf and write to it

variable "kubeconfig_path" {
  type = string
}

variable "ca_data_west" {
  type = string
}

variable "ca_data_east" {
  type = string
}

variable "server_west" {
  type = string
}

variable "server_east" {
  type = string
}

That’s all for the Terraform config. There’s just one thing left. The remote_secret module reads the token from the corresponding cluster by itself, but where do we get ca_data and server from?

Create a terraform.tfvars file beside the main.tf and write to it

kubeconfig_path = ""
ca_data_west = ""
ca_data_east = ""
server_west = ""
server_east = ""

Then open your kubeconfig file. It should look something like this


apiVersion: v1
kind: Config
preferences: {}

clusters:
  - cluster:
      certificate-authority-data: <a_lot_of_symbols>
      server: <west_server_url>
    name: <west_cluster_name>
  - cluster:
      certificate-authority-data: <a_lot_of_symbols>
      server: <east_server_url>
    name: <east_cluster_name>

users:
...

The rest of the file is not important to us now; we are interested in the clusters section. Copy the certificate-authority-data of the west cluster to the ca_data_west value in terraform.tfvars, the server of the west cluster to server_west, and the east values correspondingly. Set kubeconfig_path to the path to the file.
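If you don’t want to copy the values by hand, you can also pull them out of the kubeconfig with kubectl (assuming the contexts are named west and east, as above):

# API server URL and CA data of the "west" cluster
kubectl config view --raw --minify --context west -o jsonpath='{.clusters[0].cluster.server}'
kubectl config view --raw --minify --context west -o jsonpath='{.clusters[0].cluster.certificate-authority-data}'
# repeat with --context east for the other cluster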

Now open a console in your istio-multicluster folder and run

terraform init

and then

terraform apply -auto-approve

If you have followed the tutorial and possess some luck :) you’ll get a huge output with a deprecation warning (don’t worry about it, it’s for the deprecated default_secret_name attribute, remember it?), ending with a green

Apply complete! Resources: 22 added, 0 changed, 0 destroyed

message, without any nasty red error warnings.
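Before the functional test you can do a quick sanity check on both clusters (these kubectl calls are just my suggestion, not part of the Terraform config): istiod and the cross-network gateway should be running in istio-system, and the cacerts and remote secrets should be in place.

kubectl --context west -n istio-system get pods,svc
kubectl --context west -n istio-system get secret cacerts
# the remote secret built from the east cluster's credentials lands in the west cluster
kubectl --context west -n istio-system get secret istio-remote-secret-east-cluster
# repeat with --context east (and istio-remote-secret-west-cluster) for the other side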

So, does this thing even work? Let’s check.

Create a test folder beside the istio-multicluster folder and create two files in it. The first one is server.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
    sidecar.istio.io/inject: "true"
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      name: http
  type: ClusterIP

and the other client.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nettool
  labels:
    app: nettool
    sidecar.istio.io/inject: "true"
spec:
  containers:
    - name: nettool
      image: wbitt/network-multitool
      command:
        - sleep
        - infinity

Anyone familiar with Kubernetes will easily understand what these manifests are for: the first one creates a pod with an nginx instance and a Service providing access to it, and the second one creates a pod from a Docker image with various network diagnostic tools. Run in the console

kubectl --context west apply -f server.yaml
kubectl --context east apply -f client.yaml

These commands will create the nginx pod and Service on our “west” cluster and the network tool pod on our “east” cluster. Now run in the console

kubectl --context east exec nettool -- curl nginx

and you will see the HTML output of the default nginx page.
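If you want to convince yourself that the request really crossed the cluster boundary, note that there is no nginx Service in the east cluster at all; the name is resolved and routed by Istio (that is what the ISTIO_META_DNS_CAPTURE setting is for):

kubectl --context east get svc nginx
# this should return "NotFound" - the east cluster itself knows nothing about nginx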

But wasn’t that just a coincidence? Go to the istio-multicluster folder and run

terraform destroy -auto-approve

effectively destroying everything we have been working so hard on until now.

Now try

kubectl --context east exec nettool -- curl nginx

Ah, the error message.

Now go back to the istio-multicluster folder and run

terraform apply -auto-approve

again. You will need to wait a little while for Istio to re-read the clusters, and then

kubectl --context east exec nettool -- curl nginx

Works again.

That’s all for this tutorial, I hope it’ll be helpful for someone.

Also you can find all the code from it on my github.
