Building an application or a microservice isn’t quite as simple as writing code. You also need to think about storage, sharing, scalability, updates, and many other concerns. Luckily, Microsoft offers platforms that take care of a lot of these matters for you. Read on to learn how they’ll help you develop better apps.
At its core, the platform offers an orchestrator called Azure Service Fabric: a highly scalable distributed systems platform for building and managing microservices, on which many Microsoft systems already run.
Figure 1. Applications using Azure Service Fabric
The concepts behind these microservices have been around for many years, from object-oriented languages to Service Oriented Architectures (SOA).
The idea is to build a module to be a part of an application, with well-defined interfaces and operations. Each module represents one feature of the application.
Microservices should be atomic and should integrate using HTTP or a service bus. Thanks to this approach, if something goes wrong with one service, the application can still work correctly while the developer or DevOps fixes the affected areas.
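To illustrate this isolation, here is a minimal Python sketch (the service URL and fallback value are hypothetical, not from any real application) in which a caller degrades gracefully when a dependent HTTP service is unavailable:

```python
import json
import urllib.request
from urllib.error import URLError


def fetch_recommendations(base_url: str, timeout: float = 2.0) -> list:
    """Call a (hypothetical) recommendations microservice over HTTP.

    If the service is down, return an empty fallback list instead of
    failing the whole application.
    """
    try:
        with urllib.request.urlopen(f"{base_url}/api/recommendations",
                                    timeout=timeout) as response:
            return json.load(response)
    except (URLError, TimeoutError, json.JSONDecodeError):
        # The recommendations service is unreachable or misbehaving;
        # the rest of the application keeps working with a safe default.
        return []
```

With this pattern, an outage in one service shows up as a degraded feature rather than a crashed application.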
A good microservice should include the following:
It should manage and monitor resources. In my opinion, it should also take care of scalability, control of networking, and security.
Microsoft has created its own service orchestrator, Azure Service Fabric, which can be deployed on-premises or in any cloud.
Azure Service Fabric supports full Application Lifecycle Management of microservice-based cloud applications: from development to deployment, daily management, and eventual decommissioning. It also provides system services to deploy, upgrade, detect, and restart failed services; discover service location; manage state; and monitor health.
Figure 2. Azure Service Fabric features
You might be asking, why did Microsoft create Mesh if we already have Service Fabric? The answer is easy: because Service Fabric has a high barrier to entry. A developer and the team must first learn the platform and what it can do before they can use it effectively.
At the time of writing this post, Mesh can only manage Docker container images. But I am sure that in the future we will be able to write native applications and use stateful services with the actor pattern on Mesh.
The Service Fabric Mesh is an inter-service communication infrastructure that handles workloads spread across thousands of servers serving millions of users. The Mesh should therefore be:
In Service Fabric Mesh, we manage only the following information and options, while the system handles the remaining requirements:
Figure 3. Service Fabric Mesh interface
All services use the same private network on a defined address or IP prefix. Thus, we cannot set a different IP address for a specific service; instead, we leverage DNS for communication between services. Presently, a service can only communicate with other services within the same application.
The DNS address of a service is composed of the Application Name, the Service Name, and the exposed service port.
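Following that scheme, a caller can compose a sibling service’s address. A minimal sketch, assuming a `<service>.<application>:<port>` convention (the names below are made up for illustration; check the exact ordering against your deployment template):

```python
def service_address(app_name: str, service_name: str, port: int) -> str:
    """Compose the in-network URL of a Mesh service from its
    application name, service name, and exposed port.

    Assumes the '<service>.<application>:<port>' DNS convention
    described above.
    """
    return f"http://{service_name}.{app_name}:{port}"


# e.g. service_address("shopApp", "basketService", 8080)
# yields "http://basketService.shopApp:8080"
```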
In a private network, all services have access to the internet to fetch data, but applications from the outside cannot see or ping any Mesh services.
Figure 4. High-level networking architecture
A Service Fabric Mesh runs on a private network that no other system can reach. In order to open an application to the outside world, we need to use a Gateway.
The Gateway is an Envoy proxy (Envoy is an open source edge and service proxy, designed for cloud-native applications) that provides L4 routing and L7 routing for advanced HTTP(S) application routing.
The Gateway routes traffic from an external network into yours and then selects which service to route it to. The external network could be an open network (essentially, the public internet) or an Azure virtual network, thus allowing you to connect to other Azure applications and resources.
The Service Fabric Mesh has a Load Balancer that can be enabled at different layers of the OSI networking model. There are two types of load balancing, at layer 4 and at layer 7, and the Gateway uses both.
In the OSI Model, we refer to this as the “transport layer.”
We can configure our Load Balancer at this transport layer, and the Service Fabric Mesh supports TCP communication there. However, this load balancer has a limitation: routing decisions can be based only on IP addresses and ports.
In the OSI Model, we refer to this as the “application layer.”
Here, we can use more complex configurations. Because this layer operates on HTTP, HTTPS, or WebSocket traffic, it allows for more advanced routing, such as path-based rules.
The platform provides an auto-scale policy that allows you to set conditions that trigger the service to be horizontally replicated. The auto-scale policy consists of two parts: a trigger (a condition, such as average CPU or memory load crossing a threshold) and a scaling mechanism (how many instances to add or remove, within defined limits).
Another thing we can manage is volumes. These can be backed by Azure Files storage or by a temporary container disk called the Service Fabric Reliable volume.
Volumes are directories mounted inside your container instances that you can use to maintain persistent state. They give you general-purpose file storage and allow you to read and write files using normal disk I/O file APIs.
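Since a mounted volume is just a directory, using one is plain file I/O. A minimal sketch (the mount path and file name are hypothetical; in practice the path comes from your deployment template):

```python
from pathlib import Path


def append_event(volume_mount: str, line: str) -> None:
    """Append a line to a log file on the mounted volume using
    ordinary disk I/O -- no special SDK is required."""
    log_file = Path(volume_mount) / "events.log"
    with log_file.open("a", encoding="utf-8") as f:
        f.write(line + "\n")


def read_events(volume_mount: str) -> list:
    """Read back everything written to the volume so far."""
    log_file = Path(volume_mount) / "events.log"
    if not log_file.exists():
        return []
    return log_file.read_text(encoding="utf-8").splitlines()
```

The same code works against either volume type; only the durability guarantees differ.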
The advantage of the Service Fabric Reliable volume is speed, because it’s a temporary Docker volume. However, it has a huge disadvantage: it is only temporary. If the container instance goes down, we lose all the data.
The alternative is Azure Files storage, where all data is durably persisted. Here, we get reliability out of the box, but the performance is lower because reads and writes go over the network.
Figure 5. Example of defining autoscaling
The last feature that I want to mention is Service Fabric Mesh secret. This can be any sensitive text information such as storage connection strings, passwords, or other values that should be stored and transmitted securely.
In the code, you read a secret like a standard file: you build the absolute path from the “Fabric_SettingPath” environment variable and the name of your secret file.
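For example, assuming a secret named “DbConnectionString” has been declared in the template (the secret name is hypothetical; “Fabric_SettingPath” is the environment variable mentioned above), reading it could look like this:

```python
import os
from pathlib import Path


def read_secret(name: str) -> str:
    """Read a Mesh secret as a plain file under the directory
    pointed to by the Fabric_SettingPath environment variable."""
    settings_path = os.environ["Fabric_SettingPath"]
    return (Path(settings_path) / name).read_text(encoding="utf-8").strip()


# connection_string = read_secret("DbConnectionString")
```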
The Service Fabric Mesh is a young service that was announced at MS Ignite 2018. It has big potential, and I believe it will mature further this year.
At the time of my writing, early 2019, the service is not yet ready for production deployment. But you will see my first fully working application shortly.
The main problem I noticed is that not all of the components are functional yet. Even with these limitations, though, I was able to deploy my application successfully.
Stay tuned. When I finish my application, I will write a white paper about its architecture, along with when to use the event sourcing and microservices approach. I am planning to deploy everything on Service Fabric Mesh!
Figure 6. A working microservice in Mesh
The whole concept is great. As a software engineer, I shouldn’t have to care about networking, scaling mechanisms, or upgrading versions of my application. I just want to write my code and expect everything to work.
Using Service Fabric as the orchestrator and Mesh engine was a great idea. Since everything is tested by Microsoft (many Microsoft applications run on Service Fabric), I don’t need to worry about the platform’s inner workings. I trust that everything will work according to design.
I promise that in 2020 I will write the next part about Mesh, where I will look over the changes.
Below you can find my template to deploy a Mesh application and my Dockerfile:
Wish to know more? Contact us!