Kubernetes Quick Debugging Pod
When working with Kubernetes, it is not rare to need to access an internal cluster resource. While you can bridge a lot of things, you may also need to run an image from within the Kubernetes environment/network/… to reproduce an issue and connect faster (bridges can be slow when tunneled all the way down to your computer, in particular over a VPN).
To solve that, the easiest option is to deploy a debug container which runs only for the time you investigate your issue (checking some database content, for example).
Recent Kubernetes versions introduced the notion of an ephemeral container (doc). Long story short, it is a way to run a container without any resource guarantee (I see it as a “low priority” container), without port binding and without probes. The main advantage of an ephemeral container is that you can run it inside a particular Pod to check its state.
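For reference, here is what launching such an ephemeral container can look like with kubectl debug; the pod name my-app-pod and the target container name app are placeholders to replace with your own workload:

# attach an ephemeral debug container to an existing pod (names are placeholders)
kubectl debug -it my-app-pod \
  --image=alpine \
  --target=app \
  -- /bin/sh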
However, in practice, we often just want to pop up a container to connect to others or inspect the application state (databases, filesystem, …). And this is where ephemeral containers are not the best option… because they require you to have the permission to run them and a very recent Kubernetes version, two assumptions that are not always true as of today (hopefully this will change within the coming years).
So today, the best option to run a debug container in a Kubernetes cluster – assuming you have deploy permissions – is to run a plain old container:
kubectl run \            (1)
  debug \                (2)
  --restart=Never \      (3)
  --image=alpine \       (4)
  -it \                  (5)
  -- /bin/sh             (6)
1. Run a container from kubectl (you could indeed replace this with a YAML manifest, see the sketch after this list),
2. Name this container debug,
3. Don’t restart it when it stops (= when I close it),
4. Use the alpine image,
5. Run it in interactive mode (since we want to connect to it),
6. Open a shell.
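If you prefer the YAML option mentioned in the first point, a roughly equivalent manifest can be applied inline and then attached to (a sketch, adjust the name and image to your needs):

# equivalent Pod manifest applied from stdin
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: debug
spec:
  restartPolicy: Never
  containers:
    - name: debug
      image: alpine
      command: ["/bin/sh"]
      stdin: true
      tty: true
EOF
# then connect to its shell
kubectl attach -it debug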
So after that we’ll get a container with a shell started for us.
I’m using alpine and its package manager apk in this post, but if you feel more comfortable with ubuntu and apt, just replace the images and commands, it works the same.
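For example, the ubuntu flavour of the same one-liner could look like this (a sketch, package names can differ slightly between distributions):

kubectl run debug --restart=Never --image=ubuntu -it -- /bin/bash
# then, inside the container:
apt update && apt install -y postgresql-client curl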
And we are done :) … almost.
In some unsecured (dev) environments this will work great: you will connect to the shell, run apk update && apk add <debug tool> and debug what you need. However, in secured environments it will not work because the root user (the container default) is forbidden from modifying the container (apk will tell you that the system is locked):
ERROR: Unable to lock database: Permission denied
ERROR: Failed to open apk database:
There are multiple options to solve that, from tweaking the deployment to give the container the permission to run the command (not really recommended outside a pure dev environment) to creating a pre-packaged image.
It is this last option I tend to use because it has several advantages:
- You can install whatever tools you want because you do it on the build machine (often yours),
- You prepackage a utility image you can then reuse when needed, avoiding having to set up all the tools again next time,
- (optional) You can inject some aliases into the image to get shortcuts based on Kubernetes environment variables (my common one is pg to connect to the environment’s PostgreSQL service, see the sketch after the Dockerfile below).
To use this last option, the first step is to create a Docker image. To do that, create this Dockerfile in any empty folder:
FROM alpine                                          (1)
RUN apk update && apk add postgresql-client curl     (2)
ENTRYPOINT "/bin/sh"                                 (3)
1. Use the base image you prefer,
2. Install the utilities you need (the PostgreSQL client and curl here),
3. Start a shell by default (this prevents the container from starting and stopping immediately once deployed).
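To illustrate the alias idea mentioned earlier, here is one way this Dockerfile could be extended. It assumes a Service named postgres exists in the namespace (so Kubernetes injects the POSTGRES_SERVICE_HOST/POSTGRES_SERVICE_PORT variables) and a postgres database user, both assumptions to adapt to your environment:

FROM alpine
RUN apk update && apk add postgresql-client curl
# hypothetical "pg" shortcut built from the environment variables Kubernetes
# injects for a Service named "postgres" (adapt the names to your services)
RUN echo 'alias pg="psql -h $POSTGRES_SERVICE_HOST -p $POSTGRES_SERVICE_PORT -U postgres"' > /etc/profile.d/debug-aliases.sh
# make the (busybox) shell source the profile for interactive sessions
ENV ENV=/etc/profile
ENTRYPOINT "/bin/sh"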
Then you need to build this image. To do that, run this command from the folder you put the Dockerfile in:
docker build . \           (1)
  -t $MY_REGISTRY/debug    (2)
1. Create the image from the Dockerfile in this folder,
2. Tag the image (its name) as $MY_REGISTRY/debug. Here it is important that $MY_REGISTRY is the registry you will push the image to and that Kubernetes will download the image from.
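As a quick usage example, with a hypothetical registry it could look like this:

# registry.example.com/team is a placeholder, use your own registry/namespace
export MY_REGISTRY=registry.example.com/team
docker build . -t $MY_REGISTRY/debug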
Now that you have your debug image with the tools you need, just push it to your remote Kubernetes image registry:
docker push $MY_REGISTRY/debug
If your registry requires authentication (it normally does), make sure to run docker login <registry host> first.
At that stage we have pushed our image and Kubernetes is able to run it, so we just have to modify the previous command to deploy our image instead of a plain alpine one:
kubectl delete pod debug                                                      (1)
kubectl run debug --restart=Never --image=$MY_REGISTRY/debug -it -- /bin/sh   (2)
1. Delete the previous debug pod, which should still be there,
2. Deploy our newly created debug image in a new container.
From there you can use all the tools you prepackaged in the image to debug your application.
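For instance, from inside the pod you can reach other services directly; backend and postgres are hypothetical Service names to replace with yours:

# inside the debug pod
curl -s http://backend:8080/health   # hypothetical HTTP service and endpoint
psql -h postgres -U postgres         # hypothetical PostgreSQL service and user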
Finally, when you are done with your debug container, you can delete it with kubectl delete pod debug.
There are a lot of ways to achieve this kind of thing but this technique is quite efficient and has, for me, the most advantages over the alternatives.
The last trick I like to use is to not name the debug container debug but to use my username (or the team name if there are several of us). This avoids reusing someone else’s image by accident and killing their container when a run ends, and it makes it possible to distinguish between toolsets without overwriting the image or creating a ton of tags (date-based ones, for example). A very close option is to name the image debug, tag it with your name and name the container debug-$name, but this is much harder to manage over time than having one image per person/group and naming the container the same way as the image. Having a single name (and generally just using latest) makes it very easy to clean up the images and containers when the team rotates.
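Concretely, the per-person variant only changes the names used above; here $USER is assumed to be your local username and a valid image/pod name:

# one image and one pod per person (or per team)
docker build . -t $MY_REGISTRY/$USER
docker push $MY_REGISTRY/$USER
kubectl run $USER --restart=Never --image=$MY_REGISTRY/$USER -it -- /bin/sh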