‘Microservices’ and ‘Function as a Service’ have been buzzwords since around 2017. Everyone has heard of them, and almost everyone has tried to build an application this way at least once. If you have done so too, you know how hard it is to manage all the components and the business logic.
Why do I think so? Technology changes all the time, but the fundamentals stay the same. If someone understands how a technology works, they shouldn’t have much trouble learning the new stuff.
The real challenge is to understand and implement the business logic correctly. Why? Because every company has different processes, rules, and people, and operates in a different domain.
To define the domain, you can use Event Storming (more details about it can be found here). Event Storming builds on the fact that every action generates an event. The process reveals exactly how complex your business logic is, and how many paths and branches it can take. An event informs you that something has happened. All events can be aggregated and stored, and thanks to this we can describe the business logic in a better way. This pattern is called Event Sourcing.
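As a minimal illustration of the Event Sourcing idea (plain Python, not the framework’s actual API; the event names and the order domain are invented), the current state is never stored directly; it is rebuilt by replaying the stored events:

```python
# A minimal Event Sourcing sketch: instead of storing the current state,
# we append events and rebuild ("fold") the state by replaying them.
# Event names and the order domain are invented for illustration.

events = []  # the append-only event store

def record(event_type, **data):
    events.append({"type": event_type, **data})

def rebuild_state(event_stream):
    """Replay every event to reproduce the current state."""
    state = {"status": "new", "items": []}
    for e in event_stream:
        if e["type"] == "OrderPlaced":
            state["status"] = "placed"
        elif e["type"] == "ItemAdded":
            state["items"].append(e["item"])
        elif e["type"] == "OrderShipped":
            state["status"] = "shipped"
    return state

record("OrderPlaced")
record("ItemAdded", item="book")
record("OrderShipped")

print(rebuild_state(events))  # every replay yields the same state
```

Because the events are the source of truth, replaying them always reproduces the same state, which is exactly what makes resuming a crashed process possible.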
As you can see, events and business logic do not follow a single path; your code must handle all the processes, coordinate retries, wait for user input, and allow everything to resume from the point where it broke.
The so-called ‘Orchestrator’ exists in the world of IT as an entity that manages complex processes. “Orchestration in this sense is about aligning the business request with the applications, data, and infrastructure. It defines the policies and service levels through automated workflows, provisioning, and change management. This creates an application-aligned infrastructure that can be scaled up or down based on the needs of each application.” You can find more information about orchestration here.
Sounds cool? It is cool, but it is tough to implement. Microsoft has prepared an open-source framework for this, called the Durable Task Framework; you can read more about it here.
As I mentioned before, the Durable Task Framework is an orchestration framework that helps manage the business logic of our microservices or, if some prefer, of a monolithic application.
What is more, when we use Event Sourcing, we keep all the data and information in storage indefinitely. Customers can replay the whole business path or, if something breaks, resume from the point where it crashed.
The Durable Task Framework stores every orchestrator invocation (starting each orchestration instance, invoking each activity) as an event in storage. When moving to the next step or activity, the framework fetches the stream of events from storage and reproduces the current state in memory. This process ensures that each action is invoked only once. If something goes wrong, the application can, once the issue is resolved, continue working from the point where it crashed.
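The replay idea can be sketched like this (plain Python with invented names, not the framework’s real .NET API): once an activity’s result is in the history, a replay returns the stored result instead of running the activity again, so resuming after a crash never re-executes finished work:

```python
# Sketch of replay-based orchestration: completed activity results are
# stored in a history; on replay, stored results are returned instead of
# re-executing the activity, so each action effectively runs only once.
# All names here are invented for illustration.

history = {}          # persisted event history: activity name -> result
calls = {"count": 0}  # counts real (non-replayed) executions

def call_activity(name, func):
    if name in history:          # already ran before the crash: replay
        return history[name]
    result = func()              # first execution: run and persist
    history[name] = result
    calls["count"] += 1
    return result

def orchestration():
    a = call_activity("reserve_stock", lambda: "reserved")
    b = call_activity("charge_card", lambda: "charged")
    return a, b

orchestration()        # first run executes both activities
orchestration()        # a "resumed" run replays from history
print(calls["count"])  # still 2: nothing was executed twice
```

The second call stands in for a process that restarts after a crash: it walks the same code path, but every completed step is answered from the history.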
All the information and metadata about finished activities and the orchestrator is kept in Azure Table Storage. While activities are being processed, the Durable Task Framework locks the instances using partitioning, so that the state stays consistent.
The Service Bus handles communication between the Task Hub Worker and the Task Hub Client. Why is that, you may ask? Let me explain. It is the same application, but all the activity actions are asynchronous: actions can run at the same time, or wait for another trigger. With this architecture, the orchestrator works only when we need it and does not consume compute resources otherwise.
The developer can define all the logic and the order of invocations in one place, and the Durable Task Framework will automatically invoke the proper code or service.
The above example shows how all the distributed logic can be managed in one place. At first it looks strange, but it is actually quite simple; all the magic is inside the framework.
The framework consists of a Task Hub, Task Activities, Task Orchestrations, a Task Hub Worker, and a Task Hub Client.
Task Orchestrations and Task Activities handle the business logic; this is exactly where we provide our code. A Task Orchestration, as the name suggests, should only manage the processing flow. All the actual, independent logic belongs in Task Activities.
The Task Hub serves only to communicate between an orchestration instance, a task worker, and a client. It keeps the information about the Service Bus and Azure Storage.
The Task Hub Worker defines all the available orchestrations and activities. Its main responsibility is to start the engine.
The Task Hub Client creates and runs an instance and provides the input for a specific orchestration.
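To make the division of responsibilities concrete, here is a toy sketch in Python (the real framework is a .NET library; the class and method names below are invented, not the framework’s API): activities hold the code, the orchestration only sequences them, the worker registers both, and the client starts an instance:

```python
# Toy model of the framework's parts (invented names, not the real API):
# activities contain the code, the orchestration only sequences them,
# the worker registers everything, and the client starts instances.

def greet_activity(name):              # Task Activity: independent logic
    return f"Hello, {name}!"

def shout_activity(text):              # another Task Activity
    return text.upper()

def greeting_orchestration(ctx, name):  # Task Orchestration: only flow
    text = ctx.call(greet_activity, name)
    return ctx.call(shout_activity, text)

class Worker:
    """Task Hub Worker: holds the registered orchestrations/activities."""
    def __init__(self):
        self.orchestrations = {}

    def register(self, orch):
        self.orchestrations[orch.__name__] = orch

    def call(self, activity, *args):    # stands in for the context
        return activity(*args)

class Client:
    """Task Hub Client: creates and runs an orchestration instance."""
    def __init__(self, worker):
        self.worker = worker

    def start(self, name, *args):
        orch = self.worker.orchestrations[name]
        return orch(self.worker, *args)  # worker doubles as the context

worker = Worker()
worker.register(greeting_orchestration)
client = Client(worker)
print(client.start("greeting_orchestration", "world"))  # HELLO, WORLD!
```

In the real framework the client and worker do not call each other directly; messages travel through the Service Bus, and every step is persisted, as described above.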
My main goal was to present the Durable Task Framework to you. It was created because many teams struggle to keep a large amount of logic in one place. Additionally, the processing can last a long time, or one of the business requirements may call for a state machine or a workflow engine. The Durable Task Framework is an ideal solution for this kind of work: it can be the link that joins many microservices or functions into one distributed component.
The framework uses Event Sourcing to store all the context and metadata. Thanks to that, we can replay a finished action whenever we need to.
The framework focuses on managing asynchronous operations and actions; everything in it is implemented and tuned for this style of programming.
If we want consistency, history, one-time invocation, and a way to build systems driven by user or external interactions, we should use this framework.
It is also worth knowing that Azure Durable Functions are built on top of the Durable Task Framework.
I should also mention one more thing: this solution is not very fast, as Azure Table Storage keeps everything in one place. During each invocation we read the history from Azure Table Storage, which is why it can sometimes be slow. It is a relatively new framework, and the community is working to improve it; we can expect that in the future, perhaps within a few months, it will be much faster.
You can find the link to the proof of concept here.