Service architectures have become popular for complex applications in recent years. They implement a modular style based on the philosophy of dividing overwhelming software projects into smaller, more manageable parts. Highly specialized applications have found favour with developers, but they have introduced new challenges for IT operations.
However, with the advent of Kubernetes and the Kubernetes-native development philosophy, new possibilities have emerged to run those service-based applications even better. Kubernetes is a wonderful answer to many IT-operations challenges. Yet cloud-native development requires the production and development setups to be congruent with regard to infrastructure. Modern applications can only draw the full benefit from Kubernetes once they are developed directly within Kubernetes. Unfortunately, Kubernetes comes with high entry barriers for software developers, especially when there is additional (often project-specific) complexity caused by Helm, Istio or the specifics of a Kubernetes derivative. To cast off the difficulties around Kubernetes while maintaining productivity and easing onboarding, development teams tend to implement workarounds with docker-compose, virtual environments and other tools. Infrastructure-related issues in particular are postponed to a later and potentially more expensive stage in the development cycle - the integration environment, or wherever the application hits Kubernetes for the first time.
Figure 1: Diverse development setups, troubles in the transition of applications, no Kubernetes-native features
In addition, developers must anticipate the interaction with Kubernetes mechanisms such as probes, custom Kubernetes resources, the service mesh, configuration and scaling. Building Kubernetes-native features under these conditions seems almost impossible.
So the question is: how can we provide real cloud-native development? Since operations teams already describe the production infrastructure, how can we bring that valuable work to our developers and eliminate the differences between the stages of the cycle? And if there is a platform distributing Kubernetes manifests, why not perform automatic security profiling on those manifests and compute enhancement suggestions? By reading the manifests, the platform could automatically generate documentation (especially for environment variables or the service mesh) for the development teams.
That is where Unikube enters the game. Unikube is a process platform that incorporates highly optimized and configurable tooling to implement a real cloud-native development approach. Every developer using Unikube can provision their own Kubernetes cluster and start working directly within an exact copy of the production environment. It is not just about getting applications to run in a Kubernetes cluster; it is about integrating them as tightly as possible to harness the full power of the platform. To be widely applicable, Unikube makes no assumptions about the infrastructure itself, development workflows or CI/CD pipelines.
Unikube was developed from the idea of how a Kubernetes-native development process should look. It provides software teams with a well-structured and reasonable flow to collaboratively build and run modern applications. To achieve that, Unikube wraps many other open-source tools under the hood.
This document describes the current state of the product and the vision for Unikube as a platform product.
Developers ought to stick with the coding tools they are used to. That includes an IDE, source-code versioning tools and so on.
The Unikube CLI (command-line interface) replaces other container- or infrastructure-related tools such as Docker (in most cases), docker-compose, Vagrant, virtual environments and others.
A project is a collection of packages, and packages contain multiple interconnected applications. Applications are executables distributed as container images. Development teams work on these applications, which are tracked in one or more source-code repositories. A container image is created, for instance, from a Dockerfile on the developer's machine or by a central build service.
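To make the project/package/application hierarchy concrete, the following sketch shows one way such a structure might be described. Every name, key and URL here is an illustrative assumption, not the actual Unikube platform schema:

```yaml
# Hypothetical sketch of a Unikube project hierarchy.
# All names, keys and repository URLs are illustrative assumptions.
project: webshop
packages:
  - name: checkout                   # a package groups interconnected applications
    applications:
      - name: payment-service
        repository: git@example.com:shop/payment-service.git  # tracked source
        image: registry.example.com/shop/payment-service      # built via Dockerfile
      - name: cart-service
        repository: git@example.com:shop/cart-service.git
        image: registry.example.com/shop/cart-service
```

The point is the containment relationship: one project, several packages, and per-application source repositories from which container images are built.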
The developer activates a project, a package or an application, and Unikube spins up a Kubernetes cluster with the manifests provided by the operations team through the Unikube platform. The CLI connects either to a local cluster provider (currently k3d or MicroK8s) or to a dedicated remote development cluster. The connection parameters (i.e. the kubectl configuration) are transparently created and distributed by the Unikube platform as well. Once the cluster is ready, Unikube automatically pulls the Kubernetes manifests (Helm charts are pre-rendered, including secrets) from the platform and installs them in the developer's cluster. No further interaction is needed.
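"Pre-rendered" means the Helm templating step has already happened on the platform side, so the developer's cluster only receives plain Kubernetes manifests - conceptually the same output `helm template` would produce. A minimal sketch of such a rendered manifest, with illustrative names and values:

```yaml
# A pre-rendered chart boils down to plain Kubernetes manifests with all
# template values already resolved (conceptually, `helm template` output).
# Names, image and secret references are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-service
  labels:
    app.kubernetes.io/name: payment-service
spec:
  replicas: 1                        # scaled down for a development cluster
  selector:
    matchLabels:
      app.kubernetes.io/name: payment-service
  template:
    metadata:
      labels:
        app.kubernetes.io/name: payment-service
    spec:
      containers:
        - name: payment-service
          image: registry.example.com/shop/payment-service:1.4.2
          envFrom:
            - secretRef:
                name: payment-service-secrets   # secrets already decrypted and rendered
```

Because no templating engine runs on the developer's machine, the CLI only needs to apply manifests, not understand Helm.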
Figure 2: Developers and their interaction with Kubernetes clusters
At this point, the developer is running a congruent copy of the production environment (or the part of it profiled for them). To add new containers to the cluster or to work on existing deployments, Unikube provides a switch operation that spins up a locally running container instance of the application in question and connects it with its counterpart in the cluster (including port-forwarding, environment variables and so on). This way the local container instance is part of the actual service mesh in the cluster while remaining modifiable locally. This may include code hot reloading (if the source code is mounted into the local container), debugging, or the temporary modification of volumes, environment variables and other parameters. That is particularly useful for debugging real traffic and application behaviour, since requests within the cluster are piped to the local container instance, too. All of this is driven by the Unikube CLI and a Unikubefile that is part of the application repository.
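To illustrate what a Unikubefile might declare for the switch operation, here is a sketch. The actual schema is defined by Unikube's own documentation; every key below is an assumption made for illustration only:

```yaml
# Illustrative Unikubefile sketch for the switch operation.
# The real schema is defined by Unikube's documentation; treat
# every key and value here as a hypothetical assumption.
version: 1
apps:
  payment-service:
    build:
      context: .                 # Dockerfile lives in the application repo
      dockerfile: Dockerfile
    deployment: payment-service  # counterpart Deployment in the cluster
    volumes:
      - ./src:/app/src           # mount source code for hot reloading
    env:
      - DEBUG=1                  # temporary, local-only override
    ports:
      - 8000:8000                # port-forwarding into the service mesh
```

The intent of such a file is to capture, per application, everything the switch operation needs: how to build the local container, which cluster workload it replaces, and which local overrides (volumes, environment, ports) to apply.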
All in all, Unikube brings Kubernetes to the developer without requiring any knowledge of kubectl, Helm, sops, k3d, MicroK8s, minikube and the like.
Deactivating a project with the Unikube CLI causes a local cluster to hibernate or a remote cluster to disconnect. The last state of the cluster will be restored once the project is activated again.
By reading the Kubernetes manifests, Unikube's platform already knows the connection parameters of stateful services such as databases and can therefore install state into the development cluster. The Unikube CLI provides a pull operation that automatically loads a selected asset (database dumps, images, videos, ML training data) from the platform and installs it without any further interaction. This comes in particularly handy for bug hunting (e.g. bugs that are not reproducible with development data) or when application source code and data are highly coupled (e.g. in machine-learning scenarios, or frontend applications with many visuals).
IT operators (or Kubernetes specialists) link their Kubernetes resources (manifests) to the Unikube platform via the web frontend (GUI). It takes a few clicks to create a project, which is linked to a manifest repository (currently a git repository). This can include Helm resources and plain Kubernetes manifests alike. The Unikube platform includes permission management that limits access to the resources.
Once a project is created, the Unikube platform parses the resources to provide dynamic parameters and the configuration of the developer's setup. That includes the specification of development-specific variables and the encryption required for secrets. Unikube supports AWS KMS, Google Cloud KMS and PGP keys for secret decryption - essentially all providers supported by SOPS.
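For reference, this is what selecting those providers looks like in SOPS itself: a `.sops.yaml` file with creation rules mapping secret paths to AWS KMS, Google Cloud KMS or PGP keys. The key identifiers below are placeholders:

```yaml
# SOPS creation rules (.sops.yaml) covering the three provider types
# mentioned above. All key ARNs/IDs/fingerprints are placeholders.
creation_rules:
  - path_regex: secrets/production/.*
    kms: arn:aws:kms:eu-central-1:111122223333:key/example-key-id      # AWS KMS
  - path_regex: secrets/staging/.*
    gcp_kms: projects/example/locations/global/keyRings/ring/cryptoKeys/key  # Google Cloud KMS
  - path_regex: secrets/dev/.*
    pgp: FBC7B9E2A4F9289AC0C1D4843D16CEE4A27381B4                      # PGP fingerprint
```

Whichever rule matches a secret file's path determines the key used for encryption, and decryption requires access to the corresponding key material.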
Upon request, the Unikube platform renders the Kubernetes manifests entirely and distributes them to the CLI in order to provide the requested package (a collection of applications/services forming a service mesh).
Figure 3: The Unikube big picture
Once a new version of the manifests is available, developers are notified through the Unikube CLI. They can apply these manifest updates and thus stay on par with the current state of the infrastructure. This approach reduces friction in negotiating the environment configuration, e.g. finding environment variables and their values, service discovery and service behaviour. The latter is particularly difficult to determine when data is responsible for a service's behaviour.
Operators can easily provide data dumps and other assets to developers via the Unikube platform. In the future, the platform will be able to draw assets ad hoc from other environments such as staging or production upon request. Currently, Unikube supports the provisioning of assets (e.g. database dumps, media assets, training data) through the web interface. Developers can pull these assets, subject to access control, using the CLI.
By reading the Kubernetes manifests, the Unikube platform could offer many additional services around these resources in the future. Automatic security profiling of the given resources is planned. It will include configurable corporate-policy enforcement (e.g. restrictive network policies, explicit resource limits and requests, volumes) and automatic enhancement suggestions based on best practices.
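As an example of the kind of policy such profiling could enforce, consider the canonical default-deny NetworkPolicy from the Kubernetes documentation, plus a check that every container declares explicit resource requests and limits (the concrete values below are illustrative):

```yaml
# A restrictive network policy a profiler could require in every
# namespace: select all pods, declare the Ingress type, list no
# rules - so all ingress traffic is denied by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}        # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress            # no ingress rules listed, so all ingress is denied
---
# Fragment: the profiler could likewise flag containers that lack an
# explicit resources section. Example values are illustrative.
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi
```

Both checks are pure manifest analysis, which is why a platform that already parses the manifests is well placed to perform them.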
Many documentation tasks can be automated by reading the Kubernetes manifests. The existing environment variables and their values across different stages are of particular interest. Unikube helps with the negotiation of new configuration parameters between operator and developer in a collaborative manner: the developer can request a new parameter from the application's operator using the CLI or the web frontend, and the request can then be fulfilled, discussed or rejected. The Unikube platform keeps track of the changes and may commit new configuration parameters directly to the Kubernetes manifest repository.
Many applications in an organizational context follow similar infrastructure stacks - for instance, Redis as a cache layer, RabbitMQ or Kafka for event queuing and PostgreSQL as the primary database. However, a single all-encompassing Helm chart is rarely an ideal solution, especially when application-specific changes are required later on.
The Unikube platform will provide a Helm templating service that creates Helm charts for multiple stages and requirements. These skeletons will allow developers to generate manifests matching their application's needs through a standardized questionnaire mechanism. The generated templates then get their own repository for individual changes. That way, hardly any Kubernetes specialist is required to set up a new application, and developers can start their work on fundamentally approved Kubernetes manifests.
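A questionnaire run might capture answers like the following, which the templating service would turn into a chart. This is a hypothetical sketch; the skeleton names and keys are assumptions, not a published schema:

```yaml
# Hypothetical questionnaire answers feeding the Helm templating
# service. Skeleton name and keys are illustrative assumptions.
skeleton: python-web-service
answers:
  cache: redis           # adds a Redis dependency and connection config
  queue: rabbitmq        # adds RabbitMQ for event queuing
  database: postgresql   # primary database
  ingress: true          # expose the service via an Ingress resource
  autoscaling: false     # skip HorizontalPodAutoscaler for now
```

The generated chart would then live in its own repository, where application-specific adjustments can be made without touching the shared skeleton.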
Although the Unikube platform will provide community skeletons to start with, an organization can also supply its own private skeletons (as a git repository). These private skeletons may already contain organization-specific configuration parameters or follow special guidelines. The maintenance of these templates (not to be confused with Helm charts) is done by the organization's Kubernetes specialists.