Let's go Cloud Native!

How does Kubernetes development work?

‘Kubernetes development’ – or, more precisely, ‘development of containerised microservices in a local Kubernetes cluster’ – means that applications are designed and developed for a Kubernetes architecture, i.e. a developer works against a Kubernetes architecture locally. In this blog post, we’ll show you how we accomplish this.


‘Cloud Native Kubernetes development’ – or rather, ‘How can I stuff as many tech buzzwords into a short blog article as possible?’ One might also want to pop in the term ‘k8s’ somewhere, which is used as an abbreviation for ‘Kubernetes’ … but let’s not go crazy here. For this blog post, we’ll assume you have a basic understanding of Kubernetes. If that’s not the case yet, we can recommend the comic by Google:

Let’s assume you’re developing a new project. You’ve identified a few independent services along the way and have now decided that it would make sense to deploy these in separate containers and have them orchestrated by Kubernetes. As it’s a bigger project, several programmers are working on it – and they’re only working on one of the services each, either individually or in small teams.

Status Quo

The project example described above has become a pretty common scenario. How can we now ensure that our programmers are able to develop on their own laptops as closely to the Kubernetes architecture as possible? A common method of running Docker containers locally is docker-compose. While it’s especially easy to handle, it has one major drawback: a docker-compose set-up doesn’t reflect the eventual production environment, i.e. the Kubernetes set-up. In the worst case, you’ve programmed something which works locally in your docker-compose set-up but not in the production system, because the image is run differently there.

As an alternative, technologies have been developed which simulate Kubernetes clusters on local computers. Minikube is a pretty widespread solution, but more and more alternatives have been gaining ground recently. Worth mentioning are microk8s by Canonical, for example, or the more resource-efficient k3s and k3d by Rancher. k3d runs k3s inside Docker and can thus simulate multiple worker nodes in a local Kubernetes cluster. kubectl is then usually used to interact with the cluster.

As a developer, you now simply have to build a Docker image of your Service and make it available to your colleagues. They can deploy the image in their local cluster and will then have local access to the most up-to-date status of your Service.

Two exciting challenges still remain at this point, however:

  1. How can I work on my Service and always have the up-to-date status available in my cluster without having to build and deploy a new image?
  2. How can I use the debugger locally?

Kubernetes (Cloud Native) Development

In the upcoming sections, let’s have a look at how we overcome these challenges. For this, we’ll be using k3d as our local Kubernetes cluster and PyCharm as our development environment. We’ll also be using Helm for cluster management and Telepresence for live code reloading. The following installation examples were all carried out on an up-to-date Ubuntu system.

K3D/K3S – Lightweight Kubernetes in Docker

k3d can be installed very easily by using the installation script provided by Rancher:
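
The original snippet isn’t reproduced here; a typical invocation of the official install script looks roughly like this (note that the project has since moved from the rancher GitHub organisation to k3d-io, so the exact URL depends on the k3d version you’re after):

```bash
# Fetch and run the official k3d install script
# (the repository has moved from rancher/k3d to k3d-io/k3d; adjust the URL if needed)
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash

# Verify the installation
k3d version
```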

The installation of k3s is just as simple:
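
For k3s, the official convenience script from get.k3s.io does the job – a minimal sketch:

```bash
# Install k3s via the official convenience script
curl -sfL https://get.k3s.io | sh -

# Check that the k3s binary is available
k3s --version
```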

A new cluster can be created with the following command:
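
With the flags described below, the command looks roughly like this. The syntax is that of older k3d releases; current versions spell it `k3d cluster create buzzword-counter -p "8080:80@loadbalancer" --registry-create <name>` instead:

```bash
# Create a cluster named buzzword-counter, map local port 8080 to the
# cluster-internal port 80 and start a local image registry alongside it
# (older k3d syntax; see the note above for current releases)
k3d create --name buzzword-counter --publish 8080:80 --enable-registry
```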

Here we have created a cluster called buzzword-counter and, amongst other things, have mapped the local port 8080 to the cluster’s internal port 80 so that we can access our cluster in the web browser via port 8080. With the flag --enable-registry, we also allow local Docker images to be deployed in the cluster. The local registry is an ordinary Docker container which can be restarted using docker restart <container-id> – after a computer reboot, for example. For this to work, we also need a corresponding entry in our /etc/hosts file:
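
Assuming the registry is reachable under the name registry.local (the default name and port differ between k3d versions – check docker ps for the actual container), the entry looks like this:

```bash
# /etc/hosts – make the local registry resolvable from the host
127.0.0.1    registry.local
```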

In order for us to be able to interact with our cluster using kubectl, we can either export the KUBECONFIG environment variable or merge the content of the respective file into ~/.kube/config:
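
With older k3d releases this is a one-liner (newer versions use k3d kubeconfig merge instead); afterwards kubectl should be able to reach the cluster:

```bash
# Point kubectl at the new cluster (older k3d syntax;
# newer releases: `k3d kubeconfig merge buzzword-counter --kubeconfig-merge-default`)
export KUBECONFIG="$(k3d get-kubeconfig --name='buzzword-counter')"

# Verify the connection
kubectl get nodes
```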

Helm – Kubernetes Package Manager

We often use Helm to manage our Kubernetes clusters. Helm describes itself as a package manager for Kubernetes and also enables complex Kubernetes applications to be mapped in templates – the buzzword here is ‘infrastructure as code’. Thanks to the templates, our application can be deployed into a new Kubernetes cluster at any time without any major effort. To install Helm, you can simply download a binary for your platform:
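
One way to do this is the official installer script; alternatively, grab the binary for your platform from the Helm releases page and put it on your PATH:

```bash
# Install Helm 3 via the official installer script
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Verify the installation
helm version
```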

Example deployment: Buzzword counter

To show you a practical example, we have created a simple deployment for this blog post and put it up on GitHub:

Buzzword counter

Buzzword charts

This deployment consists of a simple Django application, a Celery distributed task queue with RabbitMQ as the message broker for processing asynchronous tasks, as well as a PostgreSQL database. With our application, we can count buzzwords and add new ones, too. Adding a buzzword is implemented as a Celery task – pretty pointless in this example, but it demonstrates the functionality of our Celery distributed task queue perfectly.

The first step of the deployment is to provide the application as a Docker image. To do this, we first have to build the Docker image (from the Django application’s directory) and push it into our local registry:
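
The exact image name depends on your registry set-up; assuming the local registry from above is reachable as registry.local:5000, the two steps look roughly like this (image name and tag are placeholders):

```bash
# Build the image from the Django application's directory ...
docker build -t registry.local:5000/buzzword-counter:latest .

# ... and push it into the local registry so the cluster can pull it from there
docker push registry.local:5000/buzzword-counter:latest
```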

With the following commands (run from the Helm charts’ directory), the application, including its PostgreSQL and RabbitMQ dependencies, is installed and configured in the Kubernetes cluster:
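
A sketch of these steps with Helm 3 – the chart path and release name are assumptions based on the repository above:

```bash
# Fetch the PostgreSQL and RabbitMQ chart dependencies ...
helm dependency update ./buzzword-counter

# ... and install the release into the cluster
helm install buzzword-counter ./buzzword-counter
```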

Via kubectl we can, for example, check whether the pods are running, or display the log output and verify that the runserver was started in the web pod and the Celery worker in the worker pod:
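
For example (the deployment and pod names are assumptions and will differ depending on the chart):

```bash
# List all pods of the release
kubectl get pods

# Show the log output of the web and worker deployments
kubectl logs deployment/buzzword-counter-web
kubectl logs deployment/buzzword-counter-worker
```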

So now we have a local Kubernetes cluster in which our application is installed and configured. In order for us to access our page locally, we still have to run three commands which we received from the helm install output:
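
The exact commands come from the chart’s NOTES.txt; with a chart scaffolded by helm create they look roughly like this (labels and container port are assumptions):

```bash
# Determine the name of the web pod via its labels
export POD_NAME=$(kubectl get pods --namespace default \
  -l "app.kubernetes.io/name=buzzword-counter,app.kubernetes.io/instance=buzzword-counter" \
  -o jsonpath="{.items[0].metadata.name}")

echo "Visit http://127.0.0.1:8080 to use your application"

# Forward local port 8080 to the pod (8000 = assumed port of the Django runserver)
kubectl --namespace default port-forward $POD_NAME 8080:8000
```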

After that, we can access our Service at http://127.0.0.1:8080. If we start a task, we can check its output in the worker pod’s log via kubectl.

[Screenshot: log output of the worker pod]


Telepresence – Fast, local development for Kubernetes

In order to get live code reloading, i.e. to make code changes done in PyCharm immediately available in the cluster, we use Telepresence. Telepresence is a so-called sandbox project of the CNCF, the Cloud Native Computing Foundation. Without live code reloading, we would have to build a new Docker image and deploy it in the cluster after every change – which is pretty inconvenient and can become very time-consuming.

With Telepresence, you can run a locally built Docker image in a cluster by ‘swapping’ a deployment. This is pretty spectacular from a technical point of view. For this post, however, it’s sufficient to know that a single command swaps the buzzword-counter web deployment of our Kubernetes cluster and runs the specified Docker image in its place. First, though, we have to build that Docker image. Both commands have to be run from the directory of our Django application’s source code:
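
A sketch of the two commands – the image tag and the deployment name are assumptions, and the syntax is that of Telepresence 1.x (Telepresence 2 replaced --swap-deployment with telepresence intercept):

```bash
# Build a local image containing the current source code
docker build -t buzzword-counter:dev .

# Swap the web deployment in the cluster for a locally running container
# and mount the current directory into it (Telepresence 1.x syntax)
telepresence --swap-deployment buzzword-counter-web \
  --docker-run --rm -it -v "$(pwd)":/code buzzword-counter:dev
```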

What’s more, we have mounted the current directory in the Docker container using the flag ‘-v $(pwd):/code’ so that code changes made in PyCharm are also available in the Kubernetes cluster. However, as we’re using the Django runserver, live reloading will only work if DEBUG=True is set. We can either deploy this via the Helm charts or simply export it in our swapped deployment. Afterwards, we run the run script:
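
The run script itself isn’t reproduced here; inside the swapped container it essentially boils down to enabling debug mode and starting the Django development server – a minimal sketch (port and variable handling are assumptions):

```bash
# Enable Django's debug mode so the runserver picks up code changes
export DEBUG=True

# Start the development server inside the swapped container
python manage.py runserver 0.0.0.0:8000
```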

After swapping the deployment, we have to run the three above-mentioned port-forward commands again. Afterwards, we can change the code in PyCharm and verify that the runserver restarts – either in the log or by opening the page in the browser.

[Screenshot: live code reloading with Telepresence]

On closer inspection, you’ll find that Telepresence is not limited to a local cluster: you can also swap deployments of remote clusters, as long as they can be reached via kubectl. This can be very useful for debugging and tracing bugs on test systems, for example. Caution is advised, however, as all of the deployment’s traffic is directed to the local laptop after the swap. That means this approach is only really suited to test systems and should be avoided at all costs on most production systems.

Python remote debug in PyCharm

So now we can deploy our application in the local Kubernetes cluster with live code reloading. We have accomplished our buzzword mission: the production Kubernetes environment has been replicated locally and we can develop our Service in a Cloud Native way. The icing on the cake is to configure the PyCharm debugger so that we can also debug our application directly in PyCharm. To do this, we first have to set up a Python remote debug configuration in PyCharm:

[Screenshot: Python remote debug configuration in PyCharm]

Do bear in mind that it’s crucial that an absolute path is specified in the path mapping (the ~ shortcut for the home directory doesn’t work). As you can see in the image above, the configuration also needs a specific version of the Python package pydevd-pycharm.

In order to avoid this package unnecessarily becoming part of our production deployment, we create a second Dockerfile which installs the extended pip requirements. Furthermore, we’ve added a simple view to our application (in urls.py) so that we can conveniently establish a connection between our cluster and the PyCharm debugger via URL. What’s important here is that the IP address and the port match the configuration in PyCharm.
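
The view isn’t shown in the post itself; conceptually it just calls pydevd_pycharm.settrace() with the address of the debug server running in PyCharm. A minimal sketch – the URL path, host IP and port are placeholders and must match your PyCharm run configuration:

```python
# urls.py (sketch) – a helper view that attaches the running container
# to the PyCharm debug server; IP and port must match the PyCharm configuration
from django.http import HttpResponse
from django.urls import path


def connect_debugger(request):
    import pydevd_pycharm
    # 172.17.0.1 = typical Docker bridge gateway pointing back to the host;
    # 4444 = the port configured in the PyCharm remote debug configuration
    pydevd_pycharm.settrace('172.17.0.1', port=4444,
                            stdoutToServer=True, stderrToServer=True)
    return HttpResponse('Debugger attached')


urlpatterns = [
    path('debug/', connect_debugger),
    # ... the application's regular URL patterns
]
```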

Afterwards, we open the debug URL in the browser. Here, too, we have to remember that DEBUG=True is set and that the port-forward is running. Now we can set a breakpoint in PyCharm. If we open the respective view, the application is stopped by the debugger and we can inspect why decrementing the counter resets it to 0, or why we even get an IntegrityError:

[Screenshot: debugging session in PyCharm]


Conclusion

Thanks to the tools k3d/k3s, Helm, Telepresence and Python remote debug, we’ve conquered the mountain called ‘Cloud Native k8s development’. Our developers can now develop in their own local Kubernetes cluster. Telepresence in combination with Python remote debug has proven particularly practical.

Still, it has to be noted that handling these tools isn’t entirely straightforward and that it takes some time to get used to them. Compared with docker-compose in particular, the entry hurdle is considerable. What we’re missing is something that combines the aforementioned tools and can still be used without any major initial hurdles. Stay tuned – we’ve recently been building a solution which has already proven itself internally.

And finally, let’s not forget the buzzword counter: I got to 23 unique buzzwords in total. Did you count along and get to a different number? Go on then, let us know.