The portability of containers makes them an attractive option when building and architecting cloud solutions.
That’s why, for most of us, containers are associated with building cloud-native applications and microservices platforms. However, they can be really helpful in other scenarios too – for example, DevOps automation.
In this article, I would like to talk about how containers together with Azure DevOps can support DevOps automation when it comes to Microsoft Azure solutions.
Why use containers here at all? The answer is twofold. First, containers may be a more efficient technical option for your solution, given the limitations of Virtual Machines (more on that in a moment).
Another reason is that it’s a good FinOps practice. What is FinOps? In short, it’s an approach whereby you make the most of your cloud resources by optimizing costs and not spending more than necessary.
So, how do we involve containers? Let me set the stage a little bit. Let’s assume that we use Azure DevOps to keep our solution source code together with Azure infrastructure code.
The application consists of many different Azure Services operating within an Azure Virtual Network. Here is a diagram to illustrate it:
To keep our environment secure, we are using Azure Virtual Network and integrating Function Apps with VNET using Private Links.
You might already know this, but with such an architecture and configuration, using Microsoft-Hosted Azure DevOps Agents can be problematic. Why? Because by default they will not be able to reach Azure Functions to deploy your code. As Azure resources are located in the Azure Virtual Network, direct access to them is blocked.
We have three options to still be able to deploy our code using an Azure pipeline.
The first is to allow the AzureDevOps service tag on the Virtual Network. This enables access to the Virtual Network from Azure DevOps. It is not a perfect solution, because we cannot enable access for a specific Microsoft-Hosted Agent; instead, we have to allow a whole range of IP addresses.
Sometimes service tags are not an option due to security restrictions: with them, our Virtual Network is opened to the entire regional pool of Azure DevOps Microsoft-Hosted Agents, which is not owned by us.
The second option is to use an Azure Virtual Machine integrated with our VNET and install a Self-Hosted Azure DevOps Agent on it. This solves the problem of network access, but instead we face challenges related to VM cost and maintenance.
Finally, we can run Self-Hosted Agents as containers. In this case, we have three services to run them on: Azure Container Instances, Azure Container Apps, and Azure Kubernetes Service (AKS). The key benefit of this approach is reduced cost in comparison to Virtual Machines.
So, let’s review the two approaches – Virtual Machines and containers – for running Self-Hosted Agents in the Azure cloud to deploy code to resources within an Azure Virtual Network.
The most popular way to use Azure DevOps Self-Hosted Agents is to host them using an Azure Virtual Machine integrated with Azure Virtual Network. With this method, we can easily deploy code to resources like Azure Web Apps and Function Apps in the same Azure Virtual Network.
Unfortunately, this approach is not without certain problems.
Azure Virtual Machines are quite expensive. Below is an example Linux machine with low usage. As you can see, the estimated monthly cost is around $13.19.
Of course, it can be lower or higher depending on usage. At $13.19 per month, the yearly cost is $158.28. If you decide to use a Windows machine, the cost will be even higher.
As you probably know, Azure Virtual Machines fall under the IaaS (Infrastructure as a Service) cloud service model. This means there are many things we have to take care of ourselves, such as operating system updates, installation of the proper applications, and security.
The upside of this approach is that we have full control over the VM and the tools installed on it. This can be helpful when we have a complex CI/CD process.
Even though containers are associated with building cloud-native applications, they are also a perfect match for running Self-Hosted Azure DevOps Agents. The Azure cloud offers many container services; here I will cover the three most popular and useful options when it comes to DevOps automation.
Azure Container Instances let you run a container in Azure without managing Virtual Machines and without a higher-level service. They are useful for scenarios that can operate in isolated containers, including simple applications, task automation, and build jobs used in our pipelines.
If we take a look at pricing, we can see that the cost is quite low compared to Azure Virtual Machines: memory for a single container group is priced at a fraction of a cent per GB per second (around $0.0000012).
What’s important here is that when we create Azure Container Instances, we can choose to run a single container or a group of containers:
A container group is a collection of containers scheduled on the same host machine. Containers in a container group share a lifecycle, resources, a local network, and storage volumes. This also means we cannot scale them dynamically: once a Container Instance is created, there is no option to scale out the number of groups (or pods, in Kubernetes terms).
To make it easier to understand the pricing, let’s do a simple calculation using the Microsoft calculator.
We create a Linux container group with 1.3 vCPU and 2.15 GB of memory, running 50 times daily for a month (30 days), with each run lasting 150 seconds. In this example, the vCPU and memory usage are rounded up (to 2 vCPU and 2.2 GB) to calculate the total cost.
Memory cost = container groups per day * duration (seconds) * GB * price per GB-second * number of days
50 container groups * 150 seconds * 2.2 GB * $0.00000124 per GB-second * 30 days = $0.612
vCPU cost = container groups per day * duration (seconds) * vCPU(s) * price per vCPU-second * number of days
50 container groups * 150 seconds * 2 vCPU * $0.00001125 per vCPU-second * 30 days = $5.063
Total cost = memory cost + vCPU cost
In our scenario, this amounts to:
$0.612 + $5.063 = $5.675
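The calculation above can be reproduced with a short script – a sketch that assumes the per-second prices quoted above (actual rates vary by region and over time):

```python
# Estimate the monthly cost of an Azure Container Instances container group.
# Prices are the per-second rates used in the example above; real rates
# vary by region, so treat these constants as assumptions.
MEMORY_PRICE_PER_GB_SECOND = 0.00000124
VCPU_PRICE_PER_VCPU_SECOND = 0.00001125

def aci_monthly_cost(runs_per_day, seconds_per_run, gb, vcpus, days=30):
    """Return (memory_cost, vcpu_cost, total) in USD for one month."""
    total_seconds = runs_per_day * seconds_per_run * days
    memory_cost = total_seconds * gb * MEMORY_PRICE_PER_GB_SECOND
    vcpu_cost = total_seconds * vcpus * VCPU_PRICE_PER_VCPU_SECOND
    return memory_cost, vcpu_cost, memory_cost + vcpu_cost

memory_cost, vcpu_cost, total = aci_monthly_cost(50, 150, gb=2.2, vcpus=2)
print(f"memory: ${memory_cost:.3f}, vCPU: ${vcpu_cost:.3f}, total: ${total:.3f}")
```

Up to a cent of rounding, the result matches the $5.675 total computed above.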
It is worth mentioning that price can vary based on the operating system used for running containers (Windows or Linux). There is an additional charge of $0.000012 per vCPU second for Windows software duration on Windows container groups.
We can use Azure Container Instances to run Azure DevOps Self-Hosted Agents. Once the Container Instance is created, we can schedule the Azure DevOps Pipeline run. In a typical case we will run one Self-Hosted Agent in one Azure Container Instance:
It means that if we want more Self-Hosted Agents to handle scheduled pipeline runs, we have to create more Azure Container Instances. There is no dynamic or event-driven scaling available: if we have three separate jobs scheduled in our Azure DevOps pipelines, each job will be queued and run one after another.
There are of course some disadvantages to this approach. For example, it is not possible to scale up a specific ACI instance; if you want more CPU or memory, you need to redeploy the container.
There is also no option to scale Azure Container Instances horizontally based on the number of pipeline runs pending in a given agent pool. For example, to scale to five container instances, you create five distinct container instances. This means we have to architect our pipelines so that they first create an Azure Container Instance and then execute the actual job on it.
But there are of course some benefits to using Azure Container Instances: low cost, no Virtual Machine maintenance, and the ability to run inside an Azure Virtual Network.
To summarize, Azure Container Instances is a perfect match when we need to run Azure DevOps Self-Hosted Agents, we do not want to maintain Azure Virtual Machines, and we want to reduce the cost.
It is worth noting that Azure Container Instances can be deployed to an Azure Virtual Network, so the agent will be able to reach other resources in this network.
Personally, this is my favorite container service in the Azure Cloud. Azure Container Apps enables you to run microservices and containerized applications on a serverless platform, so you can forget about managing complex Kubernetes clusters but still benefit from Kubernetes concepts.
One of the biggest advantages here is autoscaling. Applications built on Azure Container Apps can dynamically scale based on HTTP traffic, CPU or memory load, or event-driven sources supported by KEDA scalers.
Azure Container Apps manages automatic horizontal scaling through a set of declarative scaling rules. As a container app scales out, new instances of it are created on demand; these instances are known as replicas. When you first create a container app, the minimum replica count is set to zero by default, and no charges are incurred while an application is scaled to zero.
Can we use Azure Container Apps to run Azure DevOps Self-Hosted Agents? Of course we can! Here is how.
The biggest advantage over Azure Container Instances is the fact that we can automatically scale the number of containers with our Self-Hosted Agents.
With KEDA, we can scale Azure Container Apps based on the agent pool queue for Azure Pipelines. So where is the tricky part? With KEDA, you can run the agents either as a Deployment or as a Job and scale them accordingly with a ScaledObject or a ScaledJob.
At the time of writing this article, Azure Container Apps with KEDA supports only the ScaledObject pattern. The problem is that when container apps scale in, any replica can be stopped – including one that is still running a long pipeline job.
When running your agents as a Deployment, you have no control over which pod gets killed when scaling down. Using a ScaledJob is the preferred way to autoscale your Azure Pipelines agents if you have long-running jobs. I recommend you read more details in this article.
Additionally, as I’m writing this, there is also an Issue opened by Jeff Hollan on GitHub to Support the KEDA ScaledJobs / Jobs Pattern.
Anyway, we can still host multiple Azure DevOps Self-Hosted Agents with Azure Container Apps and deploy to resources in the Azure Virtual Network, because a Container Apps Environment can be created in an existing Azure Virtual Network.
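As a sketch, a Container Apps scale rule using the KEDA azure-pipelines scaler could look like the fragment below. The pool name, secret names, and replica counts are illustrative assumptions, not values from this article:

```yaml
# Sketch of a custom (KEDA azure-pipelines) scale rule for a Container App
# running a Self-Hosted Agent. Names and values are illustrative assumptions.
properties:
  template:
    scale:
      minReplicas: 1        # keep one agent warm; 0 would stop all agents
      maxReplicas: 5
      rules:
        - name: azure-pipelines-queue
          custom:
            type: azure-pipelines
            metadata:
              poolName: "self-hosted-pool"
              targetPipelinesQueueLength: "1"
            auth:
              - secretRef: azp-token
                triggerParameter: personalAccessToken
              - secretRef: azp-url
                triggerParameter: organizationURL
```

Keep in mind the limitation described above: with this ScaledObject-style rule, scaling in can stop a replica that is still running a pipeline job.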
The pricing for Azure Container Apps is quite attractive. Let me start with the important information that the following resources are free during each calendar month, per subscription: the first 180,000 vCPU-seconds, the first 360,000 GiB-seconds of memory, and the first 2 million HTTP requests.
Of course, in the case of Azure DevOps Agents, the last item above does not apply. Why? Because the Self-Hosted Agent communicates with Azure Pipelines or Azure DevOps Server to determine which job it needs to run and to report logs and job status, and this communication is always initiated by the agent.
The Azure Container Apps charge for HTTP requests is based on the number of requests our Container App receives – and the agent receives none.
In this case, with one agent always running and ready to execute pipelines, the average monthly cost can be around $30-40 with a small CPU and memory allocation.
When a revision is scaled to zero replicas, no resource consumption charges are incurred. This is why I am waiting for full KEDA support with ScaledJobs. With it, we will be able to scale agents based on the number of queued pipelines.
To summarize, even if we decide that there is always one Self-Hosted Agent running on Azure Container Apps, the cost will be much lower than using Azure Virtual Machines.
Importantly – you do not have to use KEDA at all; you can simply keep one or a few replicas running with Self-Hosted Agents to handle queued pipelines.
In this case, similarly to Azure Container Instances and Azure Container Apps, we can run Self-Hosted Agents in Docker containers. With this approach, we have full control over running containers, so we can utilize the full potential of autoscaling Azure Pipelines Agents with KEDA.
Here we can utilize ScaledJobs. If you run your agents as a Job, KEDA will start a Kubernetes Job for each pipeline job waiting in the agent pool queue. Each agent accepts one job when it starts and terminates afterward.
Since an agent is always created for every pipeline job, you can achieve fully isolated build environments by using Kubernetes jobs.
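The pattern described above can be sketched with a KEDA ScaledJob like the one below. The image name, pool ID, and secret names are placeholders for your own values:

```yaml
# Sketch of a KEDA ScaledJob that starts one ephemeral agent per queued
# pipeline job. Image, pool ID, and secret names are illustrative assumptions.
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: azdevops-agent-job
spec:
  pollingInterval: 15
  maxReplicaCount: 5
  jobTargetRef:
    template:
      spec:
        restartPolicy: Never
        containers:
          - name: agent
            image: myregistry.azurecr.io/azdevops-agent:latest
            env:
              - name: AZP_URL
                valueFrom:
                  secretKeyRef: { name: azdevops, key: AZP_URL }
              - name: AZP_TOKEN
                valueFrom:
                  secretKeyRef: { name: azdevops, key: AZP_TOKEN }
  triggers:
    - type: azure-pipelines
      metadata:
        poolID: "1"
        organizationURLFromEnv: AZP_URL
        personalAccessTokenFromEnv: AZP_TOKEN
```

Each Kubernetes Job runs one agent, which picks up one pipeline job and then terminates, giving the isolated build environments mentioned above.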
In this scenario, I will not talk about the cost because this is a special case valid only when we have an existing AKS cluster. Obviously, it does not make sense to create one only for running Self-Hosted Agents.
Again – you do not have to use KEDA at all; you can simply keep one or a few pods running with Self-Hosted Agents to handle queued pipelines.
We have talked about three different approaches to running Self-Hosted Agents with containers, but we have not yet talked about the registry for the agent’s Docker images. Why am I mentioning it? Because it also adds to the cost.
In the Azure cloud, we can use the Azure Container Registry service to store Docker images. There are three tiers, each with different pricing. Let’s assume we use the Standard tier (the middle one), which is billed per day (around $0.667) and includes 100 GB of storage. With that, the estimated monthly cost will be around $20.
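A quick sanity check of that estimate – a sketch assuming the Standard tier is billed at roughly $0.667 per day (the rate is an assumption; verify against current regional pricing):

```python
# Rough monthly estimate for Azure Container Registry, Standard tier.
# The per-day rate below is an assumption; verify against current pricing.
STANDARD_TIER_PER_DAY = 0.667

def acr_monthly_cost(days=30, per_day=STANDARD_TIER_PER_DAY):
    """Return the estimated registry cost in USD for the given number of days."""
    return days * per_day

print(f"~${acr_monthly_cost():.2f} per month")
```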
It is worth remembering this aspect when calculating the overall cost. Still, a combination of Azure Container Registry and Azure Container Apps/Instances is much cheaper than using Azure Virtual Machines.
There is also one more great benefit of using Docker for Azure DevOps Self-Hosted Agents: we can define the agent configuration in a Dockerfile and ensure a consistent setup every time someone uses our agent to run CI/CD pipelines.
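As a minimal sketch, an agent image could look like the Dockerfile below. It follows the common pattern from Microsoft’s documentation, where a `start.sh` script downloads and registers the agent when the container starts; the base image, packages, and script are assumptions:

```dockerfile
# Minimal Linux image for an Azure DevOps Self-Hosted Agent (sketch).
# start.sh is assumed to download the agent package, run its configuration
# with AZP_URL/AZP_TOKEN/AZP_POOL, and then start the agent.
FROM ubuntu:22.04

RUN apt-get update && apt-get install -y --no-install-recommends \
    curl git jq libicu70 ca-certificates \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /azp
COPY start.sh .
RUN chmod +x start.sh

ENTRYPOINT ["./start.sh"]
```

Any tool your pipelines need (SDKs, CLIs, Terraform, and so on) can be baked into this image, which is exactly how the consistent setup mentioned above is achieved.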
In this article, I described the three most popular and helpful approaches to running Azure DevOps Self-Hosted Agents while reducing the cost of Azure Virtual Machines. With all three approaches, we can deploy to resources located within an Azure Virtual Network, which would not be possible with Microsoft-Hosted Agents.
As you can see, you don’t have to use Virtual Machines for deploying Azure pipelines. For less complex deployments with higher security needs, you can use container services. This way you can reduce your costs and maintenance efforts. At the same time, you can simplify and automate your deployments.
I hope this article will help you efficiently architect pipelines for your workload deployments. If you’d like to discuss this topic further, just contact me.
The original version of this post was published at TechMindFactory.com.