C# worker services and Kubernetes Liveness probes

(Or — the more salacious title — how to probe your cats in a container!)

James Matson
6 min read · May 29, 2023
Don’t worry, no cats were harmed in the making of this story.

In my ongoing journey to add a little bit of Kubernetes to my skillset, I’ve started learning about how to monitor the health of your containers, which led me to reading up about the different types of health checks you can enable through a combination of configuration and code.

There are liveness, readiness and startup probes you can configure to understand health at each stage of your container’s lifecycle, but I’ve decided to focus on the liveness probe, which checks whether a container is still responsive and happily accepting requests.

To test out the creation and use of a liveness probe, I’ve put together a small .NET worker service that exposes a health check endpoint via the TinyHealthCheck NuGet package. I’ve uploaded the demo service code to my GitHub below, but I’ll walk you through it anyway!

https://github.com/kknd4eva/worker_service_kubernetes

It’s no secret to anyone that knows me that I’m a cat person (sorry dog lovers!) so I’ve decided to create a worker service in .NET that I can leave running in the background to fetch me random cat facts from the ever-useful https://catfact.ninja/ API. I want all the cat facts, all the time.
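For reference, each call to the API’s /fact endpoint returns a small JSON document containing the fact and its character length, along these lines (the fact text here is just a sample):

```json
{
  "fact": "Cats have over 20 muscles that control their ears.",
  "length": 50
}
```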

Here’s my project structure:

We’ll get to the Docker and Kubernetes stuff soon, but first let’s talk .NET. To get my cat facts, I’ve created a FelineService which calls the cat fact API:

using System.Text.Json;
using WorkerService.Models;

namespace WorkerService.Services
{
    public class FelineService : IFelineService
    {
        private readonly HttpClient _httpClient;

        public FelineService()
        {
            _httpClient = new HttpClient();
        }

        public async Task<FelineFact> GetFelineFact()
        {
            var message = new HttpRequestMessage
            {
                Method = HttpMethod.Get,
                RequestUri = new Uri("https://catfact.ninja/fact")
            };

            var response = await _httpClient.SendAsync(message);
            string responseText = await response.Content.ReadAsStringAsync();
            var fact = JsonSerializer.Deserialize<FelineFact>(responseText);
            return fact;
        }
    }
}

The service has a single method which obtains the fact and returns the following type:

using System.Text.Json.Serialization;

namespace WorkerService.Models
{
    public class FelineFact
    {
        [JsonPropertyName("fact")]
        public string Fact { get; set; }

        [JsonPropertyName("length")]
        public int Length { get; set; }
    }
}
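The JsonPropertyName attributes are what map the API’s lowercase keys onto the C# properties. As a quick sanity check, here’s a standalone round-trip using a made-up payload in the same shape:

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

public class FelineFact
{
    [JsonPropertyName("fact")]
    public string Fact { get; set; }

    [JsonPropertyName("length")]
    public int Length { get; set; }
}

public static class Demo
{
    public static void Main()
    {
        // A hypothetical payload in the same shape catfact.ninja returns
        string json = "{\"fact\":\"Cats sleep a lot.\",\"length\":17}";
        var fact = JsonSerializer.Deserialize<FelineFact>(json);

        // The lowercase "fact" and "length" keys land on Fact and Length
        Console.WriteLine($"{fact.Fact} ({fact.Length} chars)");
    }
}
```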

Now, because I’m using dependency injection in my worker, I need to register my service, telling .NET to supply the concrete FelineService whenever IFelineService is requested. (A caveat worth knowing: hosted services like Worker are singletons, so under the default scope validation in a Development environment, .NET will refuse to hand a scoped service to one. Registering FelineService with AddSingleton avoids that; I’ve kept AddScoped here to match the repo.)

using TinyHealthCheck;
using WorkerService.Services;

namespace WorkerService
{
    public class Program
    {
        public static void Main(string[] args)
        {
            IHost host = Host.CreateDefaultBuilder(args)
                .ConfigureServices(services =>
                {
                    services.AddHostedService<Worker>();
                    services.AddScoped<IFelineService, FelineService>();
                })
                .Build();

            host.Run();
        }
    }
}

Finally I inject my FelineService into my main Worker class so I can obtain my random feline fact every 10 seconds and write it to the log:

using WorkerService.Services;

namespace WorkerService
{
    public class Worker : BackgroundService
    {
        private readonly ILogger<Worker> _logger;
        private readonly IFelineService _felineService;

        public Worker(ILogger<Worker> logger, IFelineService felineService)
        {
            _logger = logger;
            _felineService = felineService;
        }

        protected override async Task ExecuteAsync(CancellationToken stoppingToken)
        {
            while (!stoppingToken.IsCancellationRequested)
            {
                var fact = await _felineService.GetFelineFact();
                _logger.LogInformation("Meow! Cat fact found: {fact}", fact.Fact);
                await Task.Delay(10000, stoppingToken);
            }
        }
    }
}
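One hardening tweak worth considering (my own addition, not part of the repo above): catfact.ninja is an external dependency, and as written a single failed HTTP call will crash the worker. A small retry helper lets the loop shrug off transient failures:

```csharp
using System;
using System.Threading.Tasks;

public static class Retry
{
    // Runs an async operation, retrying up to maxAttempts times
    // with a short pause between attempts. The final failure is rethrown.
    public static async Task<T> WithRetriesAsync<T>(
        Func<Task<T>> operation, int maxAttempts = 3, int delayMs = 1000)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return await operation();
            }
            catch (Exception) when (attempt < maxAttempts)
            {
                // Swallow the error and retry after a pause
                await Task.Delay(delayMs);
            }
        }
    }
}
```

In ExecuteAsync, the call then becomes var fact = await Retry.WithRetriesAsync(() => _felineService.GetFelineFact());. A production worker would probably reach for a library like Polly instead, but this keeps the demo dependency-free.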

Done! If I ran this now, I’d have my worker busy getting cat facts every 10 seconds. Except, did I mention I really like cats? Like — REALLY. So much in fact, that it’s super important to me that when I package this service up as a container and deploy it to a Kubernetes cluster, I have a health check endpoint I can wire up to a liveness probe.

This is where my health check code comes in. Now you can go ahead and write your own custom implementation of the .NET IHealthCheck interface in order to do what I’m doing here, but I’ve opted for using the TinyHealthCheck library because it’s quick, simple and doesn’t add much to your dependencies (particularly if you just want a basic health check endpoint like this one).

Once you’ve added the TinyHealthCheck NuGet package to your project, you can wire up a basic health check at the same time as you’re wiring up your other services. Just add the AddBasicTinyHealthCheckWithUptime() call below to your service collection.

using TinyHealthCheck;
using WorkerService.Services;

namespace WorkerService
{
    public class Program
    {
        public static void Main(string[] args)
        {
            IHost host = Host.CreateDefaultBuilder(args)
                .ConfigureServices(services =>
                {
                    services.AddHostedService<Worker>();
                    services.AddScoped<IFelineService, FelineService>();
                    services.AddBasicTinyHealthCheckWithUptime(c =>
                    {
                        c.Port = 5001;
                        c.Hostname = "*";
                        c.UrlPath = "/healthcheck";
                        return c;
                    });
                })
                .Build();

            host.Run();
        }
    }
}

In the above example, I’m adding a health check endpoint at /healthcheck on port 5001. The TinyHealthCheck package includes a few built-in health checks like this one, or you can create your own custom ones. I chose this particular one because it also reports uptime.

Alright, let’s test our service using Docker first, to make sure everything is working as expected. First up we’ll build our image.

PS C:\Users\JMatson\source\repos\K.Workers> docker build -t jdmcontainers.azurecr.io/poc.net6/worker:v1.1 . -f WorkerService\Dockerfile

And once it’s built, let’s run it (making sure to map host port 5001 to container port 5001):

PS C:\Users\JMatson\source\repos\K.Workers> docker run -it -p 5001:5001 a0c92aee29df

Okay — off to a good start, we’re getting cat facts!

What about our healthcheck endpoint? A quick navigation to http://localhost:5001/healthcheck gives us good news!

Phew. The cat facts can keep coming. I am at peace.

Everything is looking good. It’s time to get this service into Kubernetes. Now obviously we don’t want to have to manually hit this healthcheck endpoint all the time to check if the container is responsive, and this is where the liveness check comes in. Let’s take a look at my deployment YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cat-worker-deployment
  labels:
    app: cat-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cat-worker
  template:
    metadata:
      name: cat-worker-pod-template
      labels:
        app: cat-worker # must match the selector above, or the deployment is rejected
    spec:
      containers:
      - name: app
        image: jdmcontainers.azurecr.io/poc.net6/worker:v1.1
        livenessProbe:
          httpGet:
            path: /healthcheck
            port: 5001
          initialDelaySeconds: 3
          periodSeconds: 3
      imagePullSecrets:
      - name: acr-secret

So because I’ve got a private container registry in Azure, I’ve pushed my image up there and referenced it in the container image section. Additionally, I’ve set imagePullSecrets to the secret stored in Minikube which allows access to the container registry.

The important bit, however, is nestled in with the container specification: I’ve added a livenessProbe. It’s of type httpGet, and the path and port match the path and port from my worker service. I’ve configured it so that the first check fires 3 seconds after the container starts up, and every 3 seconds after that. So let’s apply this file to Minikube to get the deployment started:

kubectl apply -f .\worker_service_deployment.yaml

And just like that our worker service is on its way! I can almost FEEL the cat facts washing over me.

Okay. Our worker service is running safe and sound in its pod. Let’s check on the logs and see if it’s operating as expected

PS C:\Users\JMatson> kubectl get pod

NAME                                     READY   STATUS    RESTARTS   AGE
cat-worker-deployment-7c7c56c4c5-2hk5m   1/1     Running   1          2m19s

PS C:\Users\JMatson> kubectl logs cat-worker-deployment-7c7c56c4c5-2hk5m

And success! The logs show us exactly what we were hoping to see. Cat facts rolling in every 10 seconds, and the kubelet is making automated health checks against our endpoint every 3 seconds. This is one well looked after container.
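A quick aside on tuning: beyond initialDelaySeconds and periodSeconds, the probe spec supports a couple of other knobs I’ve left at their defaults, notably timeoutSeconds (how long the kubelet waits for a response before counting the check as failed, default 1) and failureThreshold (how many consecutive failures trigger a restart, default 3). Setting them explicitly looks like this:

```yaml
livenessProbe:
  httpGet:
    path: /healthcheck
    port: 5001
  initialDelaySeconds: 3
  periodSeconds: 3
  timeoutSeconds: 1   # fail the check if no response arrives within 1 second
  failureThreshold: 3 # restart the container after 3 consecutive failures
```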

If for some reason the liveness check fails, the kubelet will restart the container, as the app is, effectively, dead. So that’s my introduction to automated health checks in Kubernetes. If you’re interested in learning more about the different types of health checks, or ‘probes’, available, you could do worse than starting at the official documentation: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/

And with that, I’m off to learn the next in a thousand things about Kubernetes! (and Cats!)

James Matson

DevOps Lead, C# and Python enthusiast, Writer, AWS Community Builder, Microsoft PowerApps Champion, AI/ML Tinkerer