Containers are a common component of a PaaS-based cloud solution.
They might be there from the moment you architect your first workload to provide it with compute. But that is a simple use case.
When you start thinking about and researching more complex ones, you will discover Kubernetes, the most popular platform for managing and orchestrating containers.
Unfortunately, Kubernetes is quite complex. This complexity often leads to architects not utilizing containers, even if they would be beneficial. I want to help you avoid that.
Containers are a solution to two important problems of building software in the cloud. One is how to run software reliably in a computing environment you don’t control. The other is how to deliver software to that environment.
Containers solve those problems by bundling your code together with configuration files, libraries, and dependencies required to run it. The output is a single executable package.
This is huge. Thanks to the containerized approach, developers are guaranteed that their code will run in a predictable environment. No more hours spent with the operations team making sure there are no differences between the test and production environments. You create a container image in an automated way and deploy it.
There is also one more promise that containers bring – high scalability. And this is where things start to get tricky.
What containers directly enable is isolation and resource efficiency. Thanks to isolation you can run multiple containers on the same server without impacting each other. Thanks to resource efficiency you can achieve high density within that server.
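As a quick illustration of both properties, here is a sketch (hypothetical container names, arbitrary limits) that runs two containers side by side on a single host, each capped at a fraction of the machine's CPU and memory:

```shell
# Each container gets its own isolated filesystem, process tree, and network
# namespace, plus hard resource caps -- so many containers can safely share
# one server. Names and limits below are illustrative.
docker run -d --name api    --cpus 0.5 --memory 256m nginx:alpine
docker run -d --name worker --cpus 0.5 --memory 256m redis:alpine

# Inspect per-container resource usage to see the density you're achieving.
docker stats --no-stream
```

Neither container can see the other's processes or exceed its limits, which is exactly what makes high-density packing safe.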
Both those things are prerequisites for efficient horizontal scalability, but to achieve it you will need orchestration.
What do I mean when I say that you need to orchestrate your containers? It depends on the specific scenario, but the general requirements are:

- scheduling containers onto available compute and restarting them when they fail,
- scaling the number of container instances in and out based on load,
- service discovery and load balancing between instances,
- rolling out new versions (and rolling them back) without downtime.
All this (and much more) is provided by Kubernetes. As a result, it has become a de-facto standard for orchestrating containers. But with great power comes… complexity.
Kubernetes’ learning curve is steep for many developers, and adopting it can be a considerable investment for an organization. Therefore you, as an architect, should make an informed decision about using it. Your first question should always be: do I need it?
Not every containerized app must (or even should) use Kubernetes. For certain types of applications, the PaaS model provides other ways to orchestrate containers.
Let me give you two examples.
1. The first one is standalone web applications and web APIs.
Azure provides fully managed hosting for those through App Service, which can host containers. Of course, just being able to host containers is not enough. First, you want automatic deployments. To achieve that you can integrate App Service with Azure Container Registry.
In fact, App Service will do it for you; you just need to point it at an instance of ACR. App Service will register a webhook in ACR, ACR will trigger that webhook whenever a new version of the image is available, and App Service will pull it.
How can you get a new version of the image into ACR? You can create a long-lived ACR task that will watch for changes in your repositories and build new images. This gives you fully automated continuous deployment.
If you add App Service deployment slots to the picture, you will have blue-green deployments. All that is left is to configure autoscaling and enable health checks. Now you have an orchestrated containerized web application.
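The whole setup described above can be sketched with the Azure CLI. All resource names below are hypothetical, and an existing resource group and App Service plan are assumed:

```shell
# Long-lived ACR task: rebuild the image whenever the source repo changes.
az acr task create \
  --registry myacr --name build-webapp \
  --image webapp:{{.Run.ID}} \
  --context https://github.com/myorg/mywebapp.git \
  --file Dockerfile \
  --git-access-token <token>

# Host the container in App Service.
az webapp create \
  --resource-group myResourceGroup --plan myPlan --name myApp \
  --deployment-container-image-name myacr.azurecr.io/webapp:latest

# Enable continuous deployment -- this registers the ACR webhook for you.
az webapp deployment container config \
  --name myApp --resource-group myResourceGroup --enable-cd true

# Blue-green: deploy to a staging slot, then swap it with production.
az webapp deployment slot create \
  --name myApp --resource-group myResourceGroup --slot staging
az webapp deployment slot swap \
  --name myApp --resource-group myResourceGroup --slot staging

# Autoscale the underlying plan and point App Service at a health endpoint.
az monitor autoscale create \
  --resource-group myResourceGroup \
  --resource myPlan --resource-type Microsoft.Web/serverfarms \
  --min-count 2 --max-count 10 --count 2
az webapp config set \
  --name myApp --resource-group myResourceGroup \
  --generic-configurations '{"healthCheckPath": "/healthz"}'
```

This is a sketch, not a production script: in practice you would also wire up managed identity for pulling from ACR and define autoscale rules (CPU, request count) on the autoscale setting.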
2. The second example is machine learning.
It is a standard approach to deploy machine learning models as containers.
This doesn’t mean that your predictive or recommendation service requires Kubernetes. If your models are small and you don’t have thousands of them, Azure Container Instances may be the right solution for you.
We can’t talk about complete orchestration in this case, because out-of-the-box autoscaling is not available, but Azure Machine Learning Service will take care of building, managing, and deploying your models.
This allows you to keep the solution simple and focus on choosing a model training tool (Data Science Virtual Machine, Azure Databricks, etc.).
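A minimal sketch of such a deployment with the Azure CLI, assuming a scoring image has already been pushed to ACR (all names and credentials below are placeholders):

```shell
# Run a single scoring container in Azure Container Instances -- no cluster,
# no orchestrator, just one serverless container group with a public IP.
az container create \
  --resource-group myResourceGroup \
  --name recommender-svc \
  --image myacr.azurecr.io/recommender:1.0 \
  --cpu 1 --memory 1.5 \
  --registry-login-server myacr.azurecr.io \
  --registry-username <acr-user> --registry-password <acr-password> \
  --ports 80 --ip-address Public
```

When you deploy through Azure Machine Learning instead, the service generates an equivalent container for you, so you rarely need to call ACI directly.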
As you can see, containerized apps can be built without Kubernetes, and it shouldn’t be your default option.
That said, there are scenarios where Kubernetes is the right tool for the job.
Your solution may grow to the point where the number of container-based workloads is too high to orchestrate them separately, and you need a platform. You may also need other Kubernetes features, like storage and secrets management, for your microservices-based solution.
You just need to remember the first rule of working with Kubernetes clusters – never build your own cluster.
Yes, you want to use a managed service when it comes to Kubernetes.
Such a service, like Azure Kubernetes Service, will do a lot of heavy lifting for you when it comes to infrastructure by providing cluster autoscaling (spinning up and down VMs for nodes) and node auto-repair (automatic maintenance of VMs).
Despite that, there are still a lot of concerns you need to address yourself, like networking (network model, topology, ingress, etc.), storage options, or monitoring. Just take a look at the baseline architecture for an AKS cluster. In the end, Kubernetes is a platform for building platforms, and building a platform is never quick, cheap, or easy.
What if you don’t want to build a platform? What if you just want to run your containerized app? There is a relatively new offering (in preview and not yet available in all Azure regions) in the containerized apps space – Azure Container Apps.
Azure Container Apps pushes Kubernetes one layer lower by doing what you would otherwise have to do – building an opinionated platform on top of it. It does so by combining three open-source projects:

- KEDA for event-driven autoscaling,
- Dapr for service invocation, state management, and other distributed application building blocks,
- Envoy for ingress and traffic splitting.
Azure Container Apps is a fully managed service optimized for running microservices. It doesn’t even provide you with direct access to the underlying Kubernetes, instead, it abstracts Kubernetes by introducing concepts of environments, apps, and revisions.
An environment is a secure boundary around a group of apps. These apps live in the same virtual network and can easily communicate with each other. An app is a group of containers that are deployed and scaled together.
Autoscaling is event-driven, and all event types supported by KEDA can be used. You can, but don’t have to, use Dapr APIs to abstract communication, secret management, and state management, or even apply actor patterns.
A revision is an immutable snapshot of an app. New revisions are created automatically, and with proper routing configuration they enable blue-green deployments, canary releases, and A/B testing.
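Revisions and traffic splitting can be sketched with the Azure CLI as follows. All resource names are hypothetical, and the revision names shown follow the auto-generated `<app>--<suffix>` pattern:

```shell
# Requires the Container Apps CLI extension:
#   az extension add --name containerapp
az containerapp env create \
  --name my-environment --resource-group myResourceGroup --location westeurope

az containerapp create \
  --name my-app --resource-group myResourceGroup \
  --environment my-environment \
  --image myacr.azurecr.io/webapp:1.0 --revision-suffix v1 \
  --ingress external --target-port 80 \
  --min-replicas 0 --max-replicas 10

# Keep multiple revisions active so traffic can be split between them.
az containerapp revision set-mode \
  --name my-app --resource-group myResourceGroup --mode multiple

# Deploying a new image creates a new revision automatically...
az containerapp update \
  --name my-app --resource-group myResourceGroup \
  --image myacr.azurecr.io/webapp:2.0 --revision-suffix v2

# ...and routing 10% of traffic to it gives you a canary release.
az containerapp ingress traffic set \
  --name my-app --resource-group myResourceGroup \
  --revision-weight my-app--v1=90 my-app--v2=10
```

Shifting the weights to 0/100 completes the rollout; shifting them back is an instant rollback to the previous revision.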
Is such an abstraction the future of Kubernetes?
In my opinion, Kubernetes will be pushed lower in the stack. Your teams will want to build complex containerized apps which will require Kubernetes to run, but they will not necessarily need and want to deal directly with its APIs and concepts.
With fully managed services, they won’t have to. Azure Container Apps is the first example, optimized for general workloads and event-driven architectures. It’s very likely that more such offerings will come, and Kubernetes will become a commodity.