
Transforming Synchronous Web Apps with Asynchronous Messaging

About this video

This session was presented at CSharp TV.

Imagine this: your application is gaining traction, users are pouring in, and suddenly everything slows to a crawl. Requests time out, services fail, and data is lost during peak hours. Sound familiar?
As applications scale, synchronous architectures often become bottlenecks, leading to performance issues, fragile integrations, and frustrated users. A single delay in one service cascades across the entire application.
Enter asynchronous messaging, which brings better scalability, increased reliability, improved performance and decoupled components. What is it, and how can it help us achieve a more robust, resilient and reliable system? Join me to get answers to these questions. In my session, we will look at how we can transform an existing web application using asynchronous messaging while understanding the ins and outs of this architectural choice. Whether you're an engineer or an architect, this talk will equip you with the lessons needed to future-proof your applications for growth and resilience.

🔗 Transcription

00:00 Poornima Nayar
So thank you again for having me here, and to the entire conference organizing team. Today, I'll be speaking about synchronous web apps and how we can approach web apps from a messaging architecture point of view. As developers, one of our greatest dreams is to get our hands dirty with code. And what I mean by that is that as developers, nothing excites us more than getting to work on a project with tons of integration patterns, right? Creating databases, entities, defining workflows. And if you have a ton of external APIs, you know what? Let's just get to it. You just can't wait to work on such a project.
00:41 Poornima Nayar
Sounds perfect, right? And I think that's a very relatable situation for all of us who are developers. And it's a dream come true as well. So we work on such projects and then we put our projects live, fantastic. It's a moment of pride and joy when your project hits production and starts making money for the business. And there's no other joy like showcasing your own work. Again, something relatable. And then comes that first campaign that goes out. There's a mass email to a few hundred thousand people, the customer base the business has gained. And then starts your rainy day.
01:20 Poornima Nayar
How? Because you notice sluggish front ends, or you get phone calls saying you have sluggish front ends. You notice that the backend services are not scaling quite well, or they are struggling to keep up with the load. You might be taking payments on the site and that doesn't seem to work. Or your payment processing gateway is hitting rate-limiting territory. And at some point, you know what? The site just gives up, saying, "Can't deal with this. I cannot deal with these spikes, just do whatever you can." Now this is something that we have all been through in the past. I have been there, done that myself. But there are other ways to face this reality more confidently upfront. And messaging is one such technology.
02:08 Poornima Nayar
And sorry, my slides are frozen. Why? What happened there? Okay. No. Hi, I'm Poornima Nayar. I'm a software engineer and solutions architect at Particular Software, where we build NServiceBus and the entire Service Platform around NServiceBus. I'm speaking to you from Berkshire in the United Kingdom, and if you have any questions, reach out to me, DM me at Poornima Nayar. I am Ask Poornima Nayar on LinkedIn, X and Bluesky. So any questions, feel free to DM me and I'll try to give you the best answers that I can.
02:42 Poornima Nayar
So in the next 35 minutes, we'll pick up some basics of asynchronous messaging and how we can use messaging as an architecture to build really high-throughput, high-performance, mission-critical systems. And we'll see some code as well. When it comes to mission-critical or high-performance systems, the first thing I like to talk about is the order processing system. Why? Because it's a very relatable workflow for everyone. It is the holy grail when it comes to building distributed systems and talking about distributed systems. And what is a typical workflow?
03:16 Poornima Nayar
We go to the website, put a few products into the basket, click checkout, put our details in, fill in the credit card details and then say "Place order." Boom, and that order goes into the backend system of the online shop and then it gets processed. That's the happy day. That's not the rainy day; that's the sunny day, just like the day I have out here in the UK. That's the perfect-case scenario. Now go back and rewind to that campaign scenario. Where were you at that point when you had that rainy day? We started off with sluggish front ends. Sales are reported to be very low.
03:56 Poornima Nayar
Fine, you know what? We are in Azure. Let's just scale the front end. Autoscaling takes what, 10 minutes max? Fine, we do that. Have we solved the problem? Not yet, because our backend services are struggling to cope. It's still slow. Fine. We are going to autoscale our backend services. Now there's a catch there: legacy backends. Who has dealt with a backend system or API hosted on an old server somewhere in the basement of a building? Been there, done that. There's an API which just couldn't be scaled. That's a catch in itself. Let's just not talk about it.
04:33 Poornima Nayar
But let us say that we have started scaling our backend services as well. And then at one point we hit the wall: the database. So we have been moving the bottleneck from the front end to the backend services to the database, where it just goes boom. "You know what? I cannot cope with the amount of traffic that is hitting me," says the database. There are only so many resources that you can throw at it, and what really happens is cascading failures. The database fails, which then fails the backend services that we have scaled out, which then fails the sluggish front end, which is also being scaled out horizontally and vertically, and it all blows up in the face of the user.
05:14 Poornima Nayar
We are not making money as a business. And that is the rainy day that I've been talking about. Yeah? So I've been there, done that myself. Years ago I was working on a system and when the mass email campaign went out to their very loyal customer base, we had tons of people hitting the website and we had a legacy API. And there was a point where I live monitored a system with my colleague over a phone call with a customer. And that went on for a couple of rounds. And that is where we start thinking about did I use the right tool for the project? Could I have done this better?
05:51 Poornima Nayar
So what did I miss out on? Did I miss a memo? And the memo that I missed happens to be the fallacies of distributed computing. So what we have been speaking about so far is a distributed system, where two or more computers speak to each other. As developers, when we develop distributed systems, we take many, many things for granted. And those false assumptions have been coined together as the fallacies of distributed computing, the things which we failed to see upfront. This is a list of fallacies that was put together originally by Peter Deutsch. I will be leaving a link to all of this in my resources at the end. So keep on watching. But let us talk about these fallacies for a minute.
06:35 Poornima Nayar
We assume that the network is reliable. We assume that the network will always be up and that everything we send will be received. But in reality, the network can fail, information can get lost, and the network can become congested. Right? We assume that latency is zero. We assume that requests will complete instantly, but in reality there is always a delay: milliseconds, microseconds, or even more in some cases. We assume that bandwidth is infinite, but bandwidth is finite. There's a limit to the bandwidth. You can have slowdowns and bottlenecks. We just cannot send as much data as we want at any point. We assume that the network is always secure.
07:21 Poornima Nayar
We assume that now that my network is secure, there's no one coming to attack it. But we need to understand that with in-flight requests, there's always a chance that someone could intercept them. We assume that topology doesn't change. We assume that servers, nodes and routes are all static and constant and they don't change. But in reality, machines crash, they restart, they get migrated, they scale up and down. So we cannot assume that there is no change. In fact, change is constant, and as developers, we need to embrace it. We assume that there's just a single administrator, but in reality there is no such thing as centralized control these days.
08:04 Poornima Nayar
It's more about distributed teams. You could have multiple teams working on the same distributed system. And when that happens, if you don't have enough inter-team communication, inconsistencies and failures happen. But is there such a thing as enough inter-team communication? I don't think so. We assume that the transport cost is zero. Even if we have infinite bandwidth, the transport cost is not zero. There is a little bit of that extra that you pay at the end of every month with that Azure bill that comes around.
08:37 Poornima Nayar
That is telling you that the transport cost is not zero. There is always a cost associated, even with CPU performance and usage. And we assume that the network is homogeneous, but is the network homogeneous? Is the behavior always uniform? Never. Because no two systems or servers are alike. Let us go back to our order processing workflow and see what the workflow might be. So we have our website, which pushes the orders: when you hit that place order button, the order goes into a big service called your order processing service. The order processing service then might reach out to the sales service and say, "Hey, create this order for me."
09:19 Poornima Nayar
And the sales service comes back with a response. Then the order processing goes and talks to payments, sorry, the payments service is hidden here, and says, "Hey, bill this for me." And then somehow it magically gives a response, and then order processing talks to the customer status service to update the loyalty points. And that might then give the response back to the order processing. And then the response goes all the way back to the client. That is when the client gets to know, yes, the order has gone through.
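The chained, blocking flow just described can be sketched in a few lines. This is a hypothetical, language-agnostic sketch in Python (the service functions are stand-ins, not real services): because every call is synchronous, one failing hop fails the entire order.

```python
# Hypothetical sketch of the synchronous order chain: every call blocks,
# and a single failure anywhere fails the whole request.

def sales_create_order(order_id):
    return f"order {order_id} created"

def payments_bill(order_id):
    raise TimeoutError("payments service is down")  # simulate an outage

def loyalty_update(order_id):
    return f"loyalty updated for {order_id}"

def place_order(order_id):
    # Temporal coupling: all three services must be up *right now*.
    sales_create_order(order_id)
    payments_bill(order_id)      # blows up here...
    loyalty_update(order_id)     # ...so this never runs
    return "order placed"

try:
    place_order("CFB4")
except TimeoutError as e:
    print(f"order failed: {e}")  # the user sees the failure
```

The point of the sketch is the shape, not the names: as long as `place_order` waits on each downstream call, the caller inherits every downstream outage.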
09:54 Poornima Nayar
Now compare this with the fallacies that we just spoke about. What if one of the services is down? What if the network is congested and the data doesn't arrive on time? There's the chance of a [inaudible 00:10:09] timeout exception. Right? What if one of the services has been updated by a team and the other services just cannot keep up? There is the risk of that too. So what we have is a scenario of very tight coupling. If you look at the order processing service, it needs to know that it needs to call the sales service, the payments service and the customer status service.
10:33 Poornima Nayar
Why? Why can't the order processing just hand off and move on to do other things? That is what you want to do, right? Here, there are too many responsibilities on whichever service is doing it. And you might think at this point that you can bring in Polly to keep doing the retries if failures happen. But that's not enough, because what about the data and the idempotency around it? Do you want to take the payment twice? No. The underlying problem here is what's known as temporal coupling. This is the tight coupling that I've been talking about.
11:06 Poornima Nayar
The idea is that for two services to communicate with each other, they should be up and running at the same time. So the sales service and the order processing service both need to be up and running: A needs to get the response back after talking to B directly, and only then can it move on. So if B crashes, there is no request, there is no response, right? And at that point, A is completely blocked, A cannot do anything, and there's a failure in the face of the user. And while that request is being sent to B, A is still waiting, doing nothing. That is money lost during that time in compute power.
11:48 Poornima Nayar
There's also the request-response pattern that we see here. I'm not saying that this is a problem in itself, but when it comes to high-throughput, high-performance scenarios, we cannot rely on request-response because it follows the synchronous processing technique. Because HTTP by nature is synchronous, we cannot have async processing with request-response. So this is something that cannot really fit into high-throughput, high-performance, mission-critical systems. What we have built here is a house of cards, and not just a house of cards, but one that is sitting on top of ocean waves.
12:24 Poornima Nayar
Now this is not about bashing HTTP APIs, or gRPC, GraphQL, whatever, because they are all battle-tested. They are there for a purpose. What I'm talking about is mission-critical, high-throughput, high-performance systems where time is money, where you cannot have failures, and where an HTTP-based API is probably not something that you should be using. And how can we change this? How can we have better foundations? By introducing what is known as a message queue. Now in reality, it is not as simple as this. There's a lot of thinking about the problem and the solution before you go into implementing this.
13:03 Poornima Nayar
But to have an introduction to messaging today, let us just assume that we have introduced a message queue in between. What is a message queue? Consider it like a database, but instead of rows, you have little messages. And what are messages? They are basically what you will have in your request, information. So with the message queue in place, A will actually send a message into the message queue. And at that point, the message gets stored into the message queue. A is not blocked at all. A can continue doing whatever else it needs to do. And B is on the other side of the message queue. B might be crashed at that point.
13:43 Poornima Nayar
B might be undergoing an update, or B might be online. But whenever B is ready to process things further, it takes the message from the message queue, processes it and deletes it. So in a single shot, we have gotten rid of the temporal coupling, and note that there is no request-response. Why? Because message queues and message brokers rely on protocols like AMQP, which by nature is asynchronous. So where do we start with this? I'm going to be focusing on the technical aspects of it, but if you want, head over to particular.net/blog, which I'll be sharing a link to today; there's a lot of information about what goes into actually planning such a project.
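The queue mechanics just described, A enqueues and moves on, B drains whenever it is ready, can be shown with a toy in-memory queue. This is purely an illustration (a real broker such as RabbitMQ adds durability, acknowledgements and delivery guarantees that a Python `deque` does not have):

```python
from collections import deque

# Minimal in-memory stand-in for a message queue.
queue = deque()

def send(message):
    # Sender A just enqueues and moves on -- it is never blocked on B.
    queue.append(message)

def process_pending(handler):
    # Receiver B drains the queue whenever it is ready (it may have been
    # down or restarting while messages accumulated).
    while queue:
        message = queue.popleft()   # take the message...
        handler(message)            # ...process it...
        # ...and it is gone from the queue (the "delete" step).

send({"type": "PlaceOrder", "orderId": "CFB4"})
send({"type": "PlaceOrder", "orderId": "BF5C"})

processed = []
process_pending(processed.append)
print(len(processed))  # both messages handled, in order
```

Note that `send` returns immediately regardless of whether `process_pending` ever runs; that gap in time is exactly the temporal decoupling the queue buys us.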
14:25 Poornima Nayar
How can we migrate, or how can we even start with greenfield projects? But today in these 35 minutes, I just want to focus on the technical aspects of it. So the first main thing to talk about is the message queue. There is a variety of options out there. It can be queues or brokers. Brokers are much more advanced in terms of functionality, with routing, message filtering and message transformation. A talk for another day, but understand that there are message brokers and queues. So there's RabbitMQ, the open-source option. If you want something in the cloud which is a more queue-like option, you have Amazon SQS. Then you have Azure Service Bus, which is in the Azure cloud.
15:08 Poornima Nayar
And then you have ActiveMQ, ZeroMQ and IBM MQ, through to Google Pub/Sub. Now all of these are infrastructure. This is extra infrastructure being brought into place. And you also might be moving from an HTTP to a messaging environment, so you might want to play around, have a look, gain some confidence about messaging in general. In such instances, we have seen databases used as queues, using SQL Server as a queue. Why? Because organizations that don't have enough confidence with messaging from the word go find it easier to take SQL Server, which they might already be using in the application, and use that as a queue.
15:51 Poornima Nayar
It's okay in that kind of scenario to use it, but when you think about real high-throughput scenarios, using a database as a queue is probably not a good idea, because it's also adding load to your own database server. Now we have our message queue in place. Say for today I'm using the database as the queue, because that is the easiest to set up. But whether you are using RabbitMQ or SQS or Service Bus or even the database as the queue, how would you go about talking to it? Would you use the native client library?
16:25 Poornima Nayar
Not really. Because there are so many things that you need to ask yourself when you go down this path. What patterns do you need to support? Is it send-and-receive or publish-subscribe? Which one do you need? Because with publish-subscribe, you are going down the event-driven architecture route. What is your average message size? Do you need large message support? What about the complex routing that you might need in place? Do you want to write things yourself? No. How about the message delivery order and the delivery guarantees?
16:54 Poornima Nayar
Are you after at-least-once, at-most-once, or exactly-once processing? What about data transactionality and the idempotency of data? Surely you don't want to take payment twice. What about throughput? Do you want a managed service or a self-hosted service? Are you after cloud-native solutions? What local dev support do you need? All these are things that you need to ask yourself. And on top of that, you need to be thinking about a variety of patterns that I'm just going to talk to you about: message routing, workflows, transactions, the claim check pattern, serialization, retries, monitoring and alerting, because it's a distributed system. All in all, you might end up writing all these patterns that you see on the screen.
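One of those questions, delivery guarantees, deserves a concrete picture. With at-least-once delivery the same message can arrive twice, so the handler has to be idempotent. A minimal sketch of one common approach, remembering processed message IDs and skipping duplicates (the message shape here is hypothetical, not any library's format):

```python
# Idempotent handling under at-least-once delivery: duplicates are
# detected by message ID, so the payment is only ever taken once.

processed_ids = set()
payments_taken = []

def handle_take_payment(message):
    if message["messageId"] in processed_ids:
        return  # duplicate delivery: safely ignore
    payments_taken.append(message["orderId"])  # the real work
    processed_ids.add(message["messageId"])    # remember we did it

msg = {"messageId": "abc-1", "orderId": "CFB4", "amount": 42}
handle_take_payment(msg)
handle_take_payment(msg)  # redelivered by the broker
print(len(payments_taken))  # charged exactly once
```

In production the processed-ID store and the business change would need to be persisted transactionally together; this sketch only shows the dedup idea.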
17:38 Poornima Nayar
So would you want to do that yourself, or would you want to do it a different way? That is where messaging middleware, the messaging libraries, comes into play. There are many, many options, which I'll be talking about, but I'm going to use the analogy of a car here. Suppose you want a new car with all the latest features out there. Would you go and buy a car and use the amenities that it provides you with, or decide, "Hm, this sounds fun. Let me actually build a car"? Trust me, there are people who build a car, but then you have to think about the roadworthiness of it. You have to consider the paperwork that goes behind it.
18:19 Poornima Nayar
Do you really want to be doing it? Of course there is the learning, but then one of the key learnings at the end of it would be, "I don't want to do it ever again. I don't want to be servicing this car. It is going to sit as a car, which is not going to be used on my driveway. I built that car. That is it. I don't want to be servicing it." So my point being, do not reinvent the wheel. Use one of these libraries. NServiceBus, MassTransit, Brighter, Rebus, Wolverine, all of these are messaging libraries.
18:47 Poornima Nayar
They all come with the patterns that I just spoke about built in. You can make use of those amenities yourself. So this helps you focus on the business code. In today's example, I'm going to be using NServiceBus, because that is what I am used to. But regardless of that, feel free to use any of these, because in all the code that I show you, it is just the APIs that keep changing. The patterns are the same regardless of the library you use. So just putting it out there: do not reinvent the wheel, because what you might end up writing is a cheaper version of one of these libraries that is going to cost you a fortune in the long run to service and maintain.
19:28 Poornima Nayar
Save yourself from the headache, because a distributed system is no mean feat. So time for some code now. What we will discuss are these main points: endpoints; commands and point-to-point communication; events and publish-subscribe. We will see if we can extend our system if time permits. And we'll also talk about recoverability and retries. So let's go to [inaudible 00:19:51]. What I have here is a solution with four projects. I have unloaded one; let us bring that into the picture when the time is right. So the client UI is basically the website, the web front end, an ASP.NET Core MVC application.
20:09 Poornima Nayar
Nothing too fancy about it. And I've got two services, which is the sales endpoint and the billing endpoint. So in reality, I might be hosting these as Windows services, the billing and the sales endpoints as Windows services or even as an Azure web job. The client UI, that is a website that could be hosted anywhere or even on Azure. So as I said, I'm using NServiceBus because that is what I'm familiar with, but feel free to use MassTransit or Wolverine or Rebus because the patterns just keep repeating.
20:39 Poornima Nayar
It's just the semantics and the APIs, the names of the APIs and the methods that keep changing. So in my controllers, I have the home controller. There's a place order button. So if I just run this demo and show you what the web front-end looks like, it is very simple and basic for this 35 minute session again. So I have this big button which places an order and it tells me that a order has been placed. So when that place order button is clicked, what happens is it goes and invokes this action method for me. So in here I create a new order ID and then I instantiate something of the type place order. And I am sending that object that I've just created using something called message session.
21:26 Poornima Nayar
So what is actually happening? The message session is an interface in NServiceBus that helps me do basic operations on a message, like sending, publishing or receiving a message. And I am using that to send something. Now what is that something that I'm sending? If we look at the PlaceOrder class, it's a plain old CLR object which inherits from ICommand, and this is what I'm sending. In reality, it'll be your entire request, containing all the details of the products and everything. That's one way of doing it. But inheriting from ICommand is telling the system, telling NServiceBus, "Hey, this is a message. This needs to be treated like a command."
22:10 Poornima Nayar
Now in comes the concept of the two different types of message. This is a command. A command is an instruction. So it is a very valuable message, and as a sender, I expect someone to take action upon it. And if I expect someone to take action upon it and it is a high-value thing, I need to make sure that I know at least who is going to deal with it. So that also can be configured. But as a sender, I don't need to know whether that receiving party, whoever is going to deal with my instruction, is up and running or down or whatever.
22:46 Poornima Nayar
I just need to know that there is someone who I need to send the instruction to, who is going to do the work for me at some point. So using the message session, I'm sending that command. And how that sending is controlled in a centralized manner is from Program.cs. So here I have this API called RouteToEndpoint, where I say that for objects of the type PlaceOrder, the destination is the sales endpoint, the sales queue.
23:17 Poornima Nayar
So as you can see, there is a website and it is talking directly just to the queue; it's not the sales service, it's the sales queue that it is talking to. So it centralizes all the connections for me. And this is now also an endpoint, because that is how we treat things in NServiceBus. An endpoint is a service that can send or receive messages. And of course there is the configuration for the SQL transport, which I'm using today, and a little bit of NServiceBus endpoint configuration. So that is the client UI, which sends the message, sends the command.
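The centralized routing just described, a command type mapped to a destination queue, can be sketched with a plain dictionary. This is a hypothetical, minimal Python sketch of the idea behind a RouteToEndpoint-style configuration, not the library's actual mechanism:

```python
# Toy routing table: the sender only knows "PlaceOrder goes to the sales
# queue" -- it never talks to the sales service directly.

queues = {"sales": [], "billing": []}

routing = {
    "PlaceOrder": "sales",   # command type -> destination queue
    "BillOrder": "billing",
}

def send_command(command):
    destination = routing[command["type"]]  # centralized lookup
    queues[destination].append(command)     # drop it in that queue

send_command({"type": "PlaceOrder", "orderId": "CFB4"})
print(len(queues["sales"]))  # the command landed in the sales queue
```

Because the mapping lives in one place, moving a command to a different endpoint later means changing one entry, not every sender.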
23:55 Poornima Nayar
Now let us go and visit the sales service. The sales service is where the place order instruction is received, so it needs to process that command. And for that there is a class which says, "Hey, I'm here to catch all the commands that you send with the type PlaceOrder. Let me deal with it." In the PlaceOrder handler, we have a class that inherits from IHandleMessages of type T. So marking a class as inheriting from IHandleMessages of the type PlaceOrder is shouting out loud to the system, "I am here to handle all the commands, all the messages, of the type PlaceOrder."
24:36 Poornima Nayar
And inside my handler there's a Handle method where I get the message as well as the context. If you think about HttpContext, the IMessageHandlerContext is something like that. It gives me access to the context, because this is a service that has been invoked by a message. With HTTP APIs, you invoke using HTTP requests; here it is invoked using a message. So in here I can then do some business logic, that is, processing the order, or saving it to the database in this instance.
25:12 Poornima Nayar
And then we have some interesting things happening. If you see here, there is another object called OrderPlaced being instantiated. And I'm using the context API, the Publish API on the context, to publish it here. I'm not sending; I am publishing it. So what is the difference? This is an event. An event is like letting the world know that something has happened. While a command is an instruction that says, "Do this," an event says something happened: I have done it. So the sales endpoint, the sales service, is saying, "Yeah, I have saved the order to the database, my job is done."
25:50 Poornima Nayar
I'm just letting others know that I have done my job so that others can pick up on what is left to do. So as you can see here, there's just one responsibility residing on the sales service, which is creating that order in the database. Another thing to note here is that the OrderPlaced event is again very lightweight. In reality it might be different, but OrderPlaced inherits from IEvent, which lets the system know that this is an event.
26:18 Poornima Nayar
Now if you look at the semantics of a command and an event, the event semantic is a noun plus a verb in the past tense, and the command is a verb in the present tense plus a noun. This way you can easily make out which is a command and which is an event without even going inside the class and peeking into it. So something like a best practice. Now we have published the event. And if you look at Program.cs, I'm not routing to anyone, right? That is because this is an event. I just need to let the world know.
26:53 Poornima Nayar
Whether someone will listen to me, how many listeners there will be, I don't care. I just need to let the world know. It is like me talking to you. It's a broadcast. There could be one or many listeners, and those listeners are called subscribers. And this is the basis of event-driven architecture. What we are doing as a pattern is publish-subscribe, and this is the basics of event-driven architecture. And the beauty of event-driven architecture is that we get the space to extend the system easily, because we can bring in more subscribers without affecting the system as a whole.
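The publish-subscribe idea described above can be sketched in a few lines. This is a hypothetical in-memory sketch, not how a broker implements subscriptions: the publisher fires OrderPlaced without knowing who listens, and adding shipping later needs no change to the publisher.

```python
# Toy publish/subscribe: event name -> list of handler functions.
subscribers = {}

def subscribe(event_name, handler):
    subscribers.setdefault(event_name, []).append(handler)

def publish(event_name, payload):
    # Zero, one or many subscribers -- the publisher does not care.
    for handler in subscribers.get(event_name, []):
        handler(payload)

log = []
subscribe("OrderPlaced", lambda e: log.append(f"billing: bill {e['orderId']}"))
# Added later, without touching the publisher at all:
subscribe("OrderPlaced", lambda e: log.append(f"shipping: ship {e['orderId']}"))

publish("OrderPlaced", {"orderId": "CFB4"})
print(len(log))  # both subscribers reacted to the single event
```

Note the naming convention from the talk carried through: the event is "OrderPlaced" (noun plus past-tense verb), distinguishing it at a glance from a command like "PlaceOrder".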
27:27 Poornima Nayar
Now how do we handle this order-placed event? That is getting done in the billing endpoint or the billing service. Here we have a handler again, which inherits from IHandleMessages of type T, which is order-placed. In the handle method, I can again do some business logic and then at the end I might send another command or publish another event. That is also possible. Now let us see the demo in action.
27:57 Poornima Nayar
So let me bring up the run windows. So I have the client UI here.
28:05 Poornima Nayar
I'm going to place a few orders. Let us start with just one to begin with; CFB4 is the start of the order ID. If I look at sales, it has received the place order instruction. So if I look at the client UI to begin with, I'm sending the command for the order ID CFB4. Sales receives it, and then publishes the event for the order ID. And in billing, I have now received the order placed event and I can move on and do other things. Now, we spoke about extending the system.
28:37 Poornima Nayar
Right? So I have another service that I have hidden here. I'm going to load the project and then I'm going to run this as well. So right now it is not running at all. I'm just going to run this. Again, this is a very simple handler which handles the order placed event. And if I run this now, that should also run. Give me a minute. I think this is up and running now. If I go and place a few more orders, I should see that the shipping endpoint receives some of the messages as well.
29:24 Poornima Nayar
So we have extended our system without impacting the system as a whole. Now what if one of the endpoints is down? What if it has a bug? What happens behind the scenes? That is when recoverability and retries come in. So let us go and introduce a bug into our sales endpoint by just throwing an exception. So let me just stop this and then [inaudible 00:29:45] all.
29:52 Poornima Nayar
So the sales endpoint is completely failing at the moment because I have introduced a bug, but that doesn't prevent the UI from working. I have placed a few orders now and the client UI must have sent a few messages. But when it comes to sales, it keeps on failing. So now what happens is, in sales, I've got some recoverability in place. This is something that you would add to every endpoint. And the recoverability says, "Hey, when you have an error, retry it. And when the first retry fails, back off exponentially: two seconds, four seconds, and so on, to the point where you cannot do it anymore.
30:35 Poornima Nayar
And then what you do is, you don't just sit there doing nothing; you move that message into another queue called the error queue." And that is what is going on here. I think it retries three times before it completely backs off and moves the message into the error queue. So if you look here, there should be some messages in the monitoring area, which I'm just going to show you right now. Let me bring that up here. I should see a few failed messages. And if you look here, there is a message that has failed with the message body B5, BF5CE, and that is indeed one of the messages that was sent by our client UI here.
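The recoverability policy just described, retry a few times with exponentially increasing delays, then park the message in an error queue rather than lose it, can be sketched like this. An illustrative Python sketch under my own assumptions, not the library's actual implementation; delays are recorded rather than slept so it runs instantly:

```python
# Sketch of a retry-with-backoff policy that ends in an error queue.

error_queue = []

def process_with_recoverability(message, handler, max_retries=3):
    delays = []
    for attempt in range(max_retries + 1):
        try:
            handler(message)
            return delays  # success: report which backoffs were used
        except Exception:
            if attempt < max_retries:
                delays.append(2 ** (attempt + 1))  # back off 2s, 4s, 8s...
    error_queue.append(message)  # retries exhausted: park it, don't lose it
    return delays

def buggy_handler(message):
    raise RuntimeError("bug in the sales endpoint")

delays = process_with_recoverability({"orderId": "BF5C"}, buggy_handler)
print(delays)            # the exponential backoff schedule that was used
print(len(error_queue))  # the message is durably parked, not lost
```

Once the bug is fixed, "retrying a failed message" is simply taking it back off `error_queue` and running it through the handler again, which is the flow the demo shows next.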
31:27 Poornima Nayar
So at this point, sales is failing, but the client UI is up and running. Now let us simulate the scenario where we have gone and fixed the bug. Yeah? So here we are in the PlaceOrder handler again. We have fixed the bug and we are running the app again. And now, when things are up and running, I can go back to the monitoring area and start retrying my failed messages. So I can go in here, up here.
32:03 Poornima Nayar
Here, in ServicePulse, I can see failed messages. I can go into one and say, "Retry this message." Yes, I want to retry this message. So it's getting retried. Now if I go into sales, I should see that it is going to try and pick up the message that just failed. So I'll let it see. Come on.
32:30 Poornima Nayar
Just retrying.
32:39 Poornima Nayar
There you go. So the message with the message ID b5bf5ce has been retried, the "Order placed" event has been published, and then billing can receive it and shipping can receive it. So the idea is that you can extend the system. And even if there are failures, the request is safely and durably stored somewhere, [inaudible 00:33:03] so that you can bring things back up and retry it, so you're not losing any message. You are building that resilience in so that nothing is lost. Similarly, we can select them all and retry, and it should all go through without overwhelming the system.
33:19 Poornima Nayar
Or it should at least get processed one by one. So we should see that in action any minute. So I'm just going to move on, and we can come back and visit the logs later. So we discussed all of this: endpoints, commands and point-to-point communication, events and publish/subscribe, and how you can extend your system without impacting the rest of it. We saw recoverability and retries. And so we have moved on from that very monolithic architecture to something like this.
33:49 Poornima Nayar
So you have the client UI, which publishes, sorry, not publishes. It sends a command, PlaceOrder, to the sales endpoint. The sales endpoint then publishes an event called "Order placed," which is consumed by both billing and shipping. And here it gets interesting, because billing can then produce further events to say, "I have billed the order," at which point shipping can chip in, because it now knows the order has been placed, confirmed and billed.
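The flow just described, a PlaceOrder command sent point-to-point to sales, then an "Order placed" event fanned out to every subscriber, can be sketched with a toy in-memory bus. This is a hedged illustration: the `Bus` class and handler names are mine, and a real system would use a durable broker rather than in-process dictionaries.

```python
from collections import defaultdict

class Bus:
    """Toy in-memory bus: a command goes to exactly one endpoint,
    an event is delivered to every subscriber."""
    def __init__(self):
        self.handlers = {}                    # command name -> single handler
        self.subscribers = defaultdict(list)  # event name -> many handlers

    def route(self, command, handler):
        self.handlers[command] = handler

    def subscribe(self, event, handler):
        self.subscribers[event].append(handler)

    def send(self, command, payload):         # point-to-point
        self.handlers[command](self, payload)

    def publish(self, event, payload):        # fan-out
        for handler in self.subscribers[event]:
            handler(self, payload)

log = []
bus = Bus()
# Sales handles the command, then announces what happened as an event:
bus.route("PlaceOrder", lambda b, o: (log.append(f"sales: {o}"),
                                      b.publish("OrderPlaced", o)))
bus.subscribe("OrderPlaced", lambda b, o: log.append(f"billing: {o}"))
bus.subscribe("OrderPlaced", lambda b, o: log.append(f"shipping: {o}"))

bus.send("PlaceOrder", "order-1")
# Sales sees the command once; billing and shipping each see the event.
```

Note the asymmetry: the sender of a command knows its one destination, while the publisher of an event knows nothing about who subscribes, which is what lets new consumers be added without touching sales.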
34:18 Poornima Nayar
I might also want to wait for, say, the stock service to come back and tell me that there is enough stock so that I can start the shipping. So there is quite a complicated thing happening behind the scenes, and that is yet another integration pattern, called the "Saga pattern," which you might want to investigate. It is used for creating workflows. Further, the same "Order billed" event that billing published might be used by the email service to pick up the fact that a confirmation email needs to go out to the customer. And finally, the "Order billed" event could be picked up by the customer loyalty service, which updates the loyalty details of the customer as well.
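A shipping saga of the shape described, act only once "Order placed", "Order billed" and a stock confirmation have all arrived, in any order, might be sketched as a small state machine. This is not the talk's code: `StockReserved` is my illustrative stand-in for whatever the stock service would publish, and a real saga would persist its state between messages.

```python
class ShippingSaga:
    """Ships only after all three events have arrived; arrival order
    doesn't matter, and duplicate events are harmless."""
    REQUIRED = {"OrderPlaced", "OrderBilled", "StockReserved"}

    def __init__(self, order_id):
        self.order_id = order_id
        self.seen = set()
        self.shipped = False

    def handle(self, event):
        self.seen.add(event)
        if self.REQUIRED <= self.seen and not self.shipped:
            self.shipped = True          # saga complete: ship exactly once
            return f"ship {self.order_id}"
        return None                       # still waiting for the rest

saga = ShippingSaga("order-1")
saga.handle("OrderBilled")                # arrives first: nothing happens yet
saga.handle("StockReserved")
result = saga.handle("OrderPlaced")       # last piece arrives: shipping starts
```

The saga is essentially a durable correlation point: each handler is still a plain event consumer, and only the accumulated state decides when the workflow moves forward.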
35:07 Poornima Nayar
So all in all, if you see, you are moving closer to your business domain with this kind of architecture in play. Now, talking about where you would see such message-driven architectures in place: when you have communication between services, or when you want modules in modular monoliths communicating with each other, you can investigate a message-driven architecture. In real life, you would see such architectures at play in industrial automation, healthcare systems, banking and finance. Anywhere you want high-throughput, high-performance scenarios or mission-critical systems, messaging architecture can help widely.
35:46 Poornima Nayar
And if you are thinking about messaging architecture, these are some scenarios where you shouldn't be using it: CRUD applications, when you need request-response, or real-time applications. Now, revisiting the fallacies of distributed computing, what is the verdict? How does messaging fare? "The network is reliable": we embrace failure by design by introducing durable storage, retries and DLQs. With latency, we have introduced async processing and we have decoupled time. There's no such thing as instant processing.
36:23 Poornima Nayar
It is all eventually consistent data. We are tackling bandwidth by achieving some load leveling, because backend systems can process at their normal rate while the client UI alone is scaled. So there's load leveling happening. On "the network is secure": the message brokers take care of TLS encryption of in-flight messages, and with fine-grained services, the attack surface is also reduced. On "the topology doesn't change": we saw that we can evolve with minimal disruption, we can take entire consumers down for maintenance and servicing, and that doesn't impact the entire system at all.
37:03 Poornima Nayar
We can have different teams working on different parts of the distributed system, and they can all work independently and have team autonomy. We can scale just the necessary services and deal with the spikes. For example, you could scale the client UI alone to have multiple instances for the user-facing side, while the backend services could be just one instance each and keep processing data at a normal pace, achieving load leveling.
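That load-leveling effect can be illustrated with a plain queue: the user-facing side enqueues a burst instantly, and a single backend instance drains it one message at a time. This is a toy sketch with no real broker or timing; the point is only that the queue, not the backend, absorbs the spike.

```python
from collections import deque

queue = deque()

# The scaled-out, user-facing side takes a burst of 100 orders instantly:
for i in range(100):
    queue.append(f"order-{i}")          # enqueueing is cheap; users aren't blocked

# A single backend instance drains the queue at its own steady pace:
processed = []
while queue:
    processed.append(queue.popleft())   # one message at a time, never overwhelmed

# The spike lands in the queue, not on the backend.
```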
37:36 Poornima Nayar
And we have an inherently polyglot-friendly architecture in place, because the underlying protocols like AMQP and MQTT are language-agnostic and enable communication across different platforms. Messaging doesn't solve all the problems, but what it helps you do is confront the fallacies early on. Because the fallacies are what they are, you are forced to ask yourself those questions early on and mitigate the risks. So that is it from me today. That is the QR code to my resources and my resource link as well. And if you want to reach out to me, feel free to reach out at Poornima Nayar. Thank you. And thank you once again for having me as a speaker.