It is now the end of 2018, and it finally looks like the world got the message regarding the cloud. We are happy to have fully embraced the cloud from both perspectives: as a customer that adopted a cloud-first strategy years ago, and as a consulting company delivering solutions to customers who often choose cloud-based services.
Let’s ask ourselves: what’s ahead for us in 2019 and beyond?
Let’s face it, there is no single solution that fits all. Organizations are at different maturity levels, and are fighting different levels of technical debt and inertia. This affects how we adopt new technology: how we view it and what works best for us.
There are some common elements and trends that often surface and affect how we think about solutions. They also shape what our work environment and our applications will ultimately look like. One such trend is that visible, on-prem server infrastructure and its support models are becoming rare. They are all but disappearing from the picture.
You might disagree with this approach. You might still be maintaining most of your solutions on servers in a co-location facility or at your office. Trust me: don't hesitate; move your applications to Azure IaaS or another cloud provider as soon as you can. You will still continue to work with servers, but with new, future-proof processes and methodologies.
Don't worry, old-fashioned servers won't disappear overnight. They will likely still be around for a long time, as there are many solutions, apps and requirements that need to run on legacy infrastructure.
Remember, the server is invisible to the end user and consumer of our solutions. They have no idea, nor do they care, what type of system or infrastructure they are accessing.
Remember a decade ago, when you were setting up your email client, you had to enter a server name, IP, port, and a bunch of other information just to get email to work? Today, all you have to do is select an email provider (Google, Exchange, or iCloud) and enter your credentials.
The same goes for your business applications and solutions. In the past, an end-user may have even asked you for the IP of a server to connect to. That’s because back then, using the solution meant understanding the architecture and how it was deployed.
Today, deploying and using your solution is closer to plugging an appliance into the power grid. You plug it in and don't think about how it works, as long as it works.
Tip #1: The servers and/or infrastructure that you use for your application give your organization little, if any, advantage and are invisible to end users. Don't spend too much time thinking about or investing in them, unless you are in a niche market or have unique requirements for control, privacy or separation of services!
Containers are our industry's new darlings, and for good reason.
For those who don’t know, containers allow you to deploy your application together with key dependencies like libraries, additional software, and even configuration. Like a launch pad, they provide isolated areas for running your applications.
They offer many benefits: better hardware utilization, support for automation in both development and operations, isolation of running processes, streamlined application deployment, abstraction of infrastructure complexity away from developers, a clean separation of accountability between those who build the application and those who configure the container, and much more.
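As a minimal illustration of packaging an application with its dependencies and configuration, here is what a container image definition can look like as a Dockerfile. This is a hypothetical sketch: the base image, file names and start command are placeholders for whatever your application actually needs.

```dockerfile
# Hypothetical Node.js app: dependencies, code and config travel together
FROM node:10-alpine
WORKDIR /app
COPY package.json ./
RUN npm install --production
COPY . .
ENV NODE_ENV=production
CMD ["node", "server.js"]
```

The resulting image runs the same way on a laptop, a co-located server, or a cloud orchestration platform, which is exactly where the deployment and scaling benefits come from.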
You can imagine how much easier it is for us to work with the cloud services when using containers. This really is the preferred use of cloud services for development and successful adoption. I wrote about it in one of my previous posts.
Containers make deployment easier and speed up development and testing. But you still need to pay close attention to the infrastructure underpinning your application.
Containers make it easier to scale. You can easily deploy, delete and rebuild your solution again. It is easy to script it, automate deployments and plug into automation pipelines.
You still need to think about planning, maintaining, upgrading and patching it.
I was painfully reminded of this by the latest bug in Kubernetes (the container orchestration platform), when admins had to scramble to get their platforms up to date rather than work on end-user features and business value.
Tip #2: However detached your servers are, and however automated your containers and their orchestration platform, they are still another manifestation of infrastructure, one that will become more and more invisible to the user and the developer.
We now have managed services offered by Azure and other providers. Azure Kubernetes Service makes it simpler to run and deploy your containers, but you are still working with the infrastructure to enable it.
Where is it all headed?
By now you must be thinking that I drank too much serverless Kool-Aid!
If you look closely, the trend of consumption-based serverless apps is gaining traction. Many technology vendors and consumers are utilizing this model.
There are still “servers” in a serverless offering. Your code needs to run somewhere, and all the components still need to be maintained.
In this model, servers and infrastructure disappear from your value chain and from your business application. They are hidden from your management layer, and it is no longer your responsibility, or your business, to build or manage them.
The keyword here is utility. Suddenly, computing is becoming like electricity: you plug in and use.
Do you know where your electricity comes from when you use it? Or how it is delivered?
Then why should you care about computing resources running your code and providing business functions? Why should you worry about running them better than providers like Microsoft, Amazon or Google who have invested millions in the cloud infrastructure and built their businesses on it?
Serverless service models based on consumption and on-demand code execution mark a major shift in business processes and applications. You pay only for actual execution, scale with demand, and take infrastructure management off your plate entirely.
Tip #3: The serverless model is bringing a major change in the way we build, operate and calculate the cost of running business processes and applications. It requires a new approach to software architecture and development. I recommend that you invest in the skillsets required to properly operate cloud-based, on-demand systems. This trend will only accelerate, and you want to be in the loop.
You might be thinking that serverless is only a fad. Remember, not more than a decade ago some said that no one would ever use the public cloud. Even earlier, some claimed that the iPhone and touch screens were just a trend too. History likes to repeat itself!
At the moment, serverless is mostly associated with specific services like Azure Functions. After all, that's where it started. But the approach is more about architecture and patterns in software than about any particular service or technology.
Over time, new patterns and services will emerge. These will likely be built on top of what we now consider serverless.
Tip #4: Serverless is not a particular technology or service. It is an architectural and business pattern of services design, architecture, and billing.
Here are five Azure Services that create the crucial backbone of serverless applications.
Azure Functions provides a serverless compute environment. You can execute your code, written in one of many languages, on demand, and get billed only for the resources it consumes: it starts up when needed and shuts down when finished.
It can be triggered on demand, by events, or on a schedule. It does the job and then shuts down, with no provisioning of infrastructure or networking and no other interaction from you. Just provide the code to execute.
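To make the model concrete, here is a minimal, purely local Python sketch of the idea. This is an illustration of the pattern, not actual Azure Functions code: the handler name, the event shape, and the simulated trigger are all hypothetical.

```python
import json

def handle_order(event: dict) -> dict:
    """Hypothetical handler: it runs only when an event arrives,
    and the platform reclaims its resources as soon as it returns."""
    total = sum(item["price"] * item["qty"] for item in event["items"])
    return {"order_id": event["order_id"], "total": total}

# Simulated trigger: in Azure Functions this would be an HTTP request,
# a queue message, or a timer firing -- not a direct call like this.
event = {"order_id": "A-17", "items": [{"price": 5.0, "qty": 2}]}
print(json.dumps(handle_order(event)))
```

The important part is what is absent: no server setup, no scaling logic, no idle process waiting for work. You are billed for the handler's execution, nothing more.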
Another thing you may want to check out is the Durable Functions extension to Azure Functions. It allows you to write stateful functions (think of it as chaining functions and putting them in order of execution), which comes in handy in quite a few scenarios.
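The chaining idea can be sketched in a few lines of plain Python. This toy orchestrator only simulates the pattern; the activity names are made up, and real Durable Functions orchestrations are written against the Durable Functions programming model, not like this.

```python
# Hypothetical "activities" -- each step receives the previous step's output.
def step_fetch(state):
    return state + ["fetched"]

def step_transform(state):
    return state + ["transformed"]

def step_store(state):
    return state + ["stored"]

def run_chain(activities, state):
    """Toy orchestrator: run activities in order, passing state along,
    the way a function-chaining orchestration does."""
    for activity in activities:
        state = activity(state)
    return state

print(run_chain([step_fetch, step_transform, step_store], []))
```

What Durable Functions adds on top of this trivial loop is the hard part: checkpointing the state between steps so the chain survives process restarts and can wait, suspended at no cost, for external events.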
Where Azure Functions provides serverless compute resources, Azure Logic Apps provide a serverless workflow environment where you can graphically create the entire process.
The workflow can span across multiple elements, execution of code, and connections to the specific services and integration points.
One possible use of this feature is to call Azure Functions to process data. You can also run a container with your logic as part of the Azure Logic Apps workflow.
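Although you build them graphically, Logic Apps workflows are defined declaratively in JSON underneath. The fragment below is a simplified, hypothetical sketch of that shape, a trigger plus a function-call action; the names and the exact schema details are illustrative, not a complete, deployable definition.

```json
{
  "definition": {
    "triggers": {
      "When_a_file_arrives": { "type": "ApiConnection" }
    },
    "actions": {
      "Process_data": {
        "type": "Function",
        "inputs": {
          "function": { "id": "<resource id of your Azure Function>" }
        }
      }
    }
  }
}
```

Because the whole workflow is data rather than code, it can be versioned, templated and deployed through the same automation pipelines as the rest of your solution.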
This is a very powerful tool for building application flows quickly. If you want to see an example of it in action, you can check out our enterprise social network analytics, built and run in a serverless environment.
Where Azure Functions executes your code and Azure Logic Apps executes your entire workflow, you still need something to bind them together across multiple signals and events. This is where Event Grid comes into the picture.
Event Grid combines multiple sources of events as Topics. It also allows you to catch these events and trigger actions through Event Subscriptions.
The receiver of these subscriptions might be your Azure Function or Logic App. It might also be another service, like a Storage Queue, which can trigger further Azure Functions down the road.
Event Grid is a switchboard for events in your serverless architecture.
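A rough mental model of that switchboard, as a local Python sketch. This simulates only the publish/subscribe pattern; it is not the Event Grid SDK, and the topic name and event payload are invented for the example.

```python
from collections import defaultdict

class Switchboard:
    """Toy stand-in for Event Grid: topics collect events,
    subscriptions route each event to its registered handlers."""
    def __init__(self):
        self.subscriptions = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscriptions[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subscriptions[topic]:
            handler(event)

grid = Switchboard()
received = []
# A subscriber could be an Azure Function, a Logic App, or a Storage Queue.
grid.subscribe("blob-created", lambda e: received.append(e["url"]))
grid.publish("blob-created", {"url": "https://example/container/file.csv"})
print(received)
```

The value of the real service is everything this sketch omits: delivery retries, dead-lettering, filtering, and fan-out to many subscribers without the publisher knowing who they are.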
When it comes to the cloud, there is always a question of security and access control. All the components mentioned above address this within the Azure platform security model.
In case you need an additional layer of publishing and access control, this is where Azure API Management can help.
Because serverless execution models are gaining prominence, Microsoft extended Azure API Management with a new consumption-based pricing model. You don't need to provision the service up front; it runs only when needed for your serverless apps.
There are a few different storage mechanisms you can use in your serverless architecture based on Azure.
However, databases are still the way to go for many workloads. For now, Cosmos DB is not a serverless service itself, but it provides capabilities that can greatly improve the way you build serverless applications with an event-driven approach.
Tip #5: Serverless applications are not architected the same way as the apps you built in the past. There are many elements you need to consider when designing them. Do not approach the process the way you did before; educate yourself in this area before diving in fully.
So, about what's ahead for 2019, the question I asked at the beginning: will servers disappear entirely?
I don't think so.
We will still run IaaS machines for a while yet, either because of legacy apps or because the required computing power and type of operations point to VMs or containers. But in many instances, serverless will be the architecture of choice.
Get ready for the shift by sharpening your skills and informing your organization. Remember, if you need help, we will be happy to assist you on your serverless journey.
More than 20 years of experience in the IT market have taught me that someone is very rarely completely right or completely wrong about something. We make good bets based on our biases, experience and observations.
The serverless approach is emerging and gaining traction. It is visible in the media, at conferences and in vendor announcements.
For you – please bookmark this article and put a task on your to-do list to check it a year from now.
For me – a year from now I will come back to this topic and provide an update on the status of serverless and how it developed through 2019.
See you next year!