Navigating through the Azure Messaging (over)choice
About this video
This session was presented at NDC Oslo.
Message-driven systems are the backbone of reliable, high-throughput solutions. Azure offers a rich variety of messaging services—but with so many options, it’s easy to feel overwhelmed. The abundance of choice can lead to analysis paralysis, where sticking to a familiar solution (cough HTTP) feels safer than exploring the tradeoffs.
In this talk, I’ll guide you through the essential messaging services in Azure and help you make sense of their unique strengths. Discover when to use Service Bus for reliable, publish/subscribe messaging, Event Hubs for real-time data streams, Event Grid for scalable event-driven workflows, and how to combine those services to unlock their full potential—without the hassle of polling.
Don’t let overchoice hold you back—this talk will equip you with robust coding patterns leveraging the .NET Azure SDKs and the confidence to make informed decisions to unlock the potential of the essential Azure messaging services for your specific needs.
🔗 Transcription
- 00:00:00 Daniel Marbach
- Hello everyone, a nice and warm welcome from my side. My name is Daniel. I'm from Switzerland, and I'm going to talk today about reliable messaging services in Azure. I'm going to talk about Service Bus, Event Hubs, and Event Grid. I'm aware there are more messaging services in Azure, but I'm going to focus on those three because they are reliable in the sense that they store the payload and don't lose it, right? There are other sorts of notification and messaging services in Azure that have ephemeral storage, where you might lose stuff, and they have their purpose, but today I want to focus on services you can use and trust that the stuff you store there is still there when you need it.
- 00:00:48 Daniel Marbach
- So, I want to focus today on showing you robust and reliable patterns you can use with .NET and the .NET SDKs of Azure against those services, so that you have stuff you can take away, bring to your project, and build successful code on top of these three services. The sample code and snippets I show you today are deliberately focused on just the bare essentials, so it's not going to be a fancy business application or something like that. I made this choice because I wanted to show you just the bare minimum that you can use with the SDK, just in case you're wondering why it's not showing any fancy business stuff around the UI and things like that.
- 00:01:32 Daniel Marbach
- That's sort of the reason I'm going to do a mixture of slides, IDE stuff, and also demos showing some Azure portal, but I've also already set up quite a few things before the talk so that you don't have to watch me deploy the services and stuff like that, because that takes an unnecessary amount of time, and it's already quite ambitious to cover three services in an hour. So, that's the reason why. But of course, if you're wondering and saying, "Well, I would like to do the same that Daniel did on stage," I have everything ready in a GitHub repository that I share at the end of the talk.
- 00:02:09 Daniel Marbach
- It has Bicep templates and stuff like that in there, so that you can go at home or at work, use the Bicep templates, deploy the stuff, and then run the demos yourself to walk through the code, even if something went too fast for you. I will also make sure that I cover most of the things I'm capable of covering throughout the talk, but of course some of these services are extremely feature-rich, right? I could talk for hours, for example, just about Azure Service Bus, but I cannot do that here because we only have an hour.
- 00:02:42 Daniel Marbach
- So, in case you're wondering, "Yeah, but Daniel didn't cover X, Y, Z and I'd be curious to hear more about it," feel free to reach out to me over my email address. I'm also available under my name on LinkedIn, Bluesky, and some other social media platforms. I have business cards here on stage. Just grab a business card and send me an email, and I'm happy to answer your questions there as well, in case I haven't covered something. Good, let's look at Azure Service Bus. First of all, Azure Service Bus is a reliable message broker. It essentially stores messages that you send to it either in queues, or they go over topics to queues, depending on what you want to do.
- 00:03:25 Daniel Marbach
- So, you can do point-to-point communication, basically from a sender directly into a queue, or you can do fan-out scenarios: publish to a topic and then subscribe on the other end to get it. Azure Service Bus is a good service when essentially a single message is the information carrier. Why is that important to understand? Well, what a message broker does is manage the cursor of where a connected client is currently standing, and it does that on the broker side. That's an important distinction when we then look at Event Hubs. Because of that, it essentially knows who is connected.
- 00:04:06 Daniel Marbach
- And for example, when multiple clients are connected to the same queue, it knows: oh, there are multiple clients, I handed out message one to this client and message two to that client, and it's not possible that two clients are processing the same message, as long as the transaction is not rolled back, right? There are a few scenarios, of course, with these services. For example, with Azure Service Bus you have a concept that is called peek lock. With peek lock, once a consumer has consumed a message, it is reserved for a specific period of time for that consumer.
- 00:04:46 Daniel Marbach
- Of course, what can happen is, if the consumer takes longer than that time, the message becomes visible again in the queue and someone else can concurrently consume it, right? But at the end of the day, what the broker does is reliably manage those cursors, know all the connected clients, and make sure that the messages are delivered. So, it's a fully managed enterprise message broker. It has transactions, it has pub/sub capabilities, it has SDKs across multiple languages. It is also AMQP compliant, which is the Advanced Message Queuing Protocol. It has support for JMS and many, many other things.
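The peek-lock cycle described above can be sketched in a few lines of C#. This is a minimal sketch only: the connection string and the "input-queue" name are placeholders, and it assumes the queue already exists.

```csharp
using Azure.Messaging.ServiceBus;

// Sketch: assumes an existing Service Bus namespace and an "input-queue" queue.
await using var client = new ServiceBusClient("<connection-string>");
await using ServiceBusReceiver receiver = client.CreateReceiver("input-queue");

// PeekLock is the default receive mode: the message stays in the queue,
// locked for this receiver, until it is completed or the lock expires.
ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync();
try
{
    // ... process the message ...
    await receiver.CompleteMessageAsync(message); // removes it from the queue
}
catch
{
    // Abandoning releases the lock so a competing consumer can retry it.
    await receiver.AbandonMessageAsync(message);
    throw;
}
```

If the lock expires before `CompleteMessageAsync` is called, the completion fails and the message becomes visible to other consumers, exactly as described above.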
- 00:05:26 Daniel Marbach
- What it also allows you to do is use it, for example, as sort of a layer-seven bridge. For example, when you have connected clients on premises and you want to make sure that you can send messages to something that is running in the cloud, that's also super nice when you have hybrid cloud solutions and you want to connect them together, but you don't want to expose the inner details of your system running in the cloud. So, that's also a great way to use Azure Service Bus. It has, I'm saying two, it's actually three, but I think the first one is not really usable, so it has sort of two pricing models. One is the standard namespace. That's a very cheap, consumption-based plan.
- 00:06:10 Daniel Marbach
- You get roughly around 500 messages a second of throughput across the entire namespace. That's what you get. It costs you roughly 12 cents per hour, or 10 US dollars a month, as a base price, and it has operation costs involved with it. You get some free amount of operations, but after that, you will be charged per operation. And then there is a capacity-based model. It's called the premium tier, and there you pay roughly around 90 cents per hour, or 700 US dollars per month, per messaging unit. What is a messaging unit? A messaging unit is basically just a way of saying: we're giving you a certain amount of CPU and memory that you pay for and can use.
- 00:06:54 Daniel Marbach
- There are no operation charges, nothing like that, and you can just use that until you reach the limit of that messaging unit, and then you're getting throttled or you need to scale out, and you can then scale out. Let's look at an example of what we can achieve. So, here I have this Swiss chocolate manufacturing enterprise application. What this does is use many of the features of Azure Service Bus. Here we have a sender, and it sends a SendSwissChocolateTo command into an input queue down here. And then we have an input queue processor on the other end that receives those commands and works on sending Swiss chocolate to nice people.
- 00:07:40 Daniel Marbach
- And then, when it has done that, we want to sort of say, "Hey, we are actually done. We delivered Swiss chocolate to all the nice people in the world." So, that's all of you here. Good. Then it publishes the event over this path to this topic over here. It's called MyTopic, and that's the SwissChocolateDelivered event. And on there, we have an input queue subscription that is sort of saying, "Hey, for everything that is published here, I'm interested here and I'm interested down here," and what this input queue subscription does is forward all the messages coming back to the input queue.
- 00:08:16 Daniel Marbach
- So, what this means is the input queue processor is also its own subscriber of the things that it published. And down here, this is using a SQL filter, because Azure Service Bus has rich filtering capabilities, and this is one of the most powerful filtering capabilities that Azure Service Bus supports. What this does is basically say: whenever the ce-type is LIKE processor.%, then I'm interested in this event. So, this is basically a wildcard filter that says: anything that comes from that namespace, I'm interested in. And then it also applies this action.
- 00:08:56 Daniel Marbach
- What this does is essentially say: whenever the filter was matched, automatically set the subject to the type. And then there's another filtering mechanism that is called the correlation filter. Correlation filters are much more runtime-efficient compared to SQL filters. Now, you cannot say they are 50 times more performant, even though the documentation kind of hints at that, because you can have 2,000 SQL filters per subscription and you can have 100,000 correlation filters per subscription. The actual performance ratio is not 50 times, it's different, but we can talk about this offline if you're interested in the performance of Service Bus.
- 00:09:45 Daniel Marbach
- So, here on this other path, we have the SwissChocolateDelivered event and we have another subscription. This one uses a correlation filter, and a correlation filter can only do an equals match. So, I'm saying: when the ce-type equals SwissChocolateDelivered, I want to have it forwarded to me, and that's what this subscription does there with the filter. So, that's the entire setup that I'm going to show you in the code, so that you have a visual representation of what I'm going to show you. This is how it looks on the portal. Once this is deployed, I have the Swiss chocolate queue, then the events down there, and then I have the Swiss chocolate delivery tracking subscription and the Swiss chocolate warehouse subscription.
- 00:10:29 Daniel Marbach
- And this is how it looks. On the other hand, we have the Swiss chocolate subscription; it has SwissChocolateDelivered. We see there is a correlation filter, and it has down there the ce-type matching the exact string of the type that I'm publishing. And as you can see here, with the SQL filter I basically have a SQL-expression-like capability. I can say, "Hey, it should be LIKE this processor.%, and that's what I'm interested in." As you can imagine, this is more powerful, because with the wildcard match I can basically say, "Hey, if just one of these characteristics matches, I want it to be delivered." With correlation filters,
- 00:11:10 Daniel Marbach
- I have to have multiple correlation filters for all the possibilities. Okay, long story short, let's get started with the demos. Good. Let me show you a bit of code and how this looks in action. So, first of all, how do we get started with this? What you need is the Azure.Messaging.ServiceBus SDK library. That's what you need. I also have a few more things: I have Azure.Identity, just to do the authentication, and I have Microsoft.Extensions.Azure, which just provides some service collection integration. That's all I need to get started. And then let's have a look at how we actually set things up.
- 00:12:00 Daniel Marbach
- So, what's really nice is that up here, as you can see, I'm getting the ServiceBusAdministrationClient. That is a client that has all the manage rights to actually deploy infrastructure. So, I can deploy infrastructure with C# code, or you can use something else, but you can only do it with C# code when your code actually has the manage rights to deploy things. And then I can declare those queues. What I'm doing here is creating the first input queue of the input processor, and what I'm already saying here is: this is my lock duration. What it means is that the message is reserved for a client for five seconds.
- 00:12:37 Daniel Marbach
- When it goes over five seconds and the client hasn't renewed the lock, the lock is lost and someone else can process the message. So, that's basically the reservation period. Then here, what I'm saying is: I want duplicate detection, a very powerful feature of Azure Service Bus. What it does is, for a period of, here, one minute, it actually looks at: oh, have I already seen this message ID? If it has already been seen, it basically rejects it, right? So, the client sends it, but only one message is ever visible. That's a very handy feature to essentially avoid duplicate messages.
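The queue declaration described here can be sketched with the administration client. A minimal sketch, assuming a connection string with manage rights; the entity names are this demo's, not anything the SDK prescribes.

```csharp
using System;
using Azure.Messaging.ServiceBus.Administration;

// Sketch: requires a connection string (or credential) with Manage rights.
var admin = new ServiceBusAdministrationClient("<manage-connection-string>");

await admin.CreateQueueAsync(new CreateQueueOptions("input-queue")
{
    // The peek-lock reservation period; deliberately short for the demo.
    LockDuration = TimeSpan.FromSeconds(5),
    // Duplicate detection: messages whose MessageId was already seen inside
    // the history window are dropped by the service.
    RequiresDuplicateDetection = true,
    DuplicateDetectionHistoryTimeWindow = TimeSpan.FromMinutes(1),
});

await admin.CreateTopicAsync(new CreateTopicOptions("my-topic"));
```

Note that duplicate detection must be enabled at queue creation time; it cannot be switched on later on an existing queue.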
- 00:13:12 Daniel Marbach
- Then down here, I'm creating a topic. That's the one I'm using to communicate and publish the events. And then down here I'm doing the subscription creation. What I'm using here, on the subscription, is the forward-to capability. By default, every subscription in Azure Service Bus is its own queue. So, what that means is: if you have a subscription, an event comes to the topic, goes into the subscription, it will stay there, and you have to receive from that subscription. That's nice, but the problem is you're then basically sharing the quota of the topic. The biggest topic can be five gigabytes.
- 00:13:51 Daniel Marbach
- And as you can imagine, if you have one topic that has high throughput and you have multiple subscriptions, all those subscriptions actually share the same five gigabytes in this example, right? When you use forward-to, what you can do is: a queue can be up to 80 gigabytes. So, what essentially happens is the service reliably forwards the message that came into that subscription directly to the destination queue, and there it can sit and use the 80 gigabytes of storage. So, that's a better way to do it. You can only have up to four forwarding hops, but you can chain things together so that it lands in the destination queue, and it's reliably done by the service.
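Setting up such a forwarding subscription is a one-liner on the subscription options. A sketch, again with placeholder names and a manage-rights connection string:

```csharp
using Azure.Messaging.ServiceBus.Administration;

var admin = new ServiceBusAdministrationClient("<manage-connection-string>");

// Forward everything that arrives on this subscription into the input queue,
// so it counts against the (larger) queue quota instead of the topic's.
await admin.CreateSubscriptionAsync(new CreateSubscriptionOptions(
    "my-topic", "input-queue-subscription")
{
    ForwardTo = "input-queue",
});
```

The destination queue must already exist when the subscription is created; the service performs the forwarding transactionally on its side.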
- 00:14:30 Daniel Marbach
- It's a really nice feature. Okay, let's have a look here. By default, a subscription has a default rule. What the default rule essentially means is: I want to match everything. It's a 1=1 rule, so it's a true filter, and because I'm using my own subscription rules, I want to remove that one. That's what I'm doing here. And then I'm setting up, down here, the rules, and this is a correlation filter rule. Here, I'm saying, "Yeah, the ce-type should match this SwissChocolateDelivered full name." You've already seen that in the picture.
- 00:15:11 Daniel Marbach
- And down here, I'm declaring the SQL filter rule, basically saying, "Hey, this is my filter rule with the complex expression, and here is my rule action that directly modifies stuff," right? You might be wondering: what is this user here? Azure Service Bus differentiates between system properties and user properties. System properties are things that are set by the service; you cannot really influence those. And then there are application properties that are in your own control. So, you can have your custom names, custom values, and everything like that. That's sort of the differentiation there. Okay, good. Now, let's have a look at how the sender code could look.
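The rule setup above can be sketched roughly as follows. This is an illustrative sketch: the rule names, the `ce_type` property name, and the exact quoting of properties and system-property names in the SQL grammar are assumptions of this demo, so check the filter-syntax documentation before copying.

```csharp
using Azure.Messaging.ServiceBus.Administration;

var admin = new ServiceBusAdministrationClient("<manage-connection-string>");

// Drop the default 1=1 (match-everything) rule so only our filters apply.
await admin.DeleteRuleAsync("my-topic", "input-queue-subscription",
    RuleProperties.DefaultRuleName);

// Correlation filter: a cheap, exact equality match on an application property.
await admin.CreateRuleAsync("my-topic", "tracking-subscription",
    new CreateRuleOptions("chocolate-delivered",
        new CorrelationRuleFilter
        {
            ApplicationProperties = { ["ce_type"] = "SwissChocolateDelivered" }
        }));

// SQL filter plus action: more expressive, but more expensive at runtime.
// The action below copies the matched type into the subject; the exact
// system property name in the action grammar may vary by API version.
await admin.CreateRuleAsync("my-topic", "input-queue-subscription",
    new CreateRuleOptions("from-processor",
        new SqlRuleFilter("ce_type LIKE 'processor.%'"))
    {
        Action = new SqlRuleAction("SET sys.Subject = ce_type")
    });
```

The `DeleteRuleAsync` call is what removes the default rule the talk mentions; without it, the subscription would receive everything regardless of the custom filters.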
- 00:16:01 Daniel Marbach
- So, here what I'm doing is creating a ServiceBusSender client for a specific input queue, and then I'm creating a bunch of SendSwissChocolateTo commands. And what I'm also doing, to show that the deduplication actually works, is adding more duplicate commands. So, every third command is a duplicate command that I'm sending to the service, and here's sort of the underlying infrastructure stuff. What Azure Service Bus provides is the capability of batching. This is super nice, because on the standard tier you can have message sizes of up to 256 KB, and on the premium tier from one megabyte up to a hundred megabytes.
- 00:16:45 Daniel Marbach
- So, what this allows you to do is, when your messages are smaller than 256 KB, instead of reaching out to the service for every message you send, saying, "Here is a message, here is a message, here is a message," you can basically say, "I'm packing all the messages into a single call to the cloud," and thereby reduce the latency towards the cloud. So, if you can, you should always be using the batching API. That's what this stuff, whoop. Okay, fat-fingered. Nice. Good. And then what I'm doing here is using CloudEvents. I'm not going into the details of CloudEvents, but essentially this is just a way of saying, "I want to have a standardized form for how my messages should look, at least from the body perspective and the headers around it."
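The batching API described here can be sketched like this; queue name and payloads are placeholders:

```csharp
using System;
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<connection-string>");
ServiceBusSender sender = client.CreateSender("input-queue");

// One batch = one network call, as long as the messages fit the size limit.
using ServiceBusMessageBatch batch = await sender.CreateMessageBatchAsync();
foreach (string payload in new[] { "one", "two", "three" })
{
    // TryAddMessage returns false once the batch is full; a real sender
    // would then send the current batch, create a new one, and re-add.
    if (!batch.TryAddMessage(new ServiceBusMessage(payload)))
    {
        throw new InvalidOperationException("Message does not fit the batch.");
    }
}
await sender.SendMessagesAsync(batch);
```

The batch enforces the tier's size limit client-side, so you never get a payload-too-large error back from the service for a batch you were allowed to build.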
- 00:17:38 Daniel Marbach
- Because every service has different characteristics or different underlying types that they support, right? For example, Service Bus has ServiceBusMessage, and it defines application properties, system properties, and a body. And Event Hubs has its own EventData; it defines properties and the payload. But here I'm saying, "Hey, I want to make sure that I'm interoperable across services, and that's why I'm using this cloud event." And then I'm essentially saying, "Hey, here is my actual payload. Here's the source that I'm sending from." And then I add it to the batch.
- 00:18:12 Daniel Marbach
- What this does is: if it's more than 256 KB, it will just return false, and then the next batch is created. Let me show you how this looks. Here, I'm creating the actual ServiceBusMessage. What I'm then doing is passing the JSON payload into this message, and here I'm defining the content type. This is a system property, a well-defined property on Azure Service Bus, and I'm saying, "Hey, this is application/cloudevents+json, so now it's standardized."
- 00:18:47 Daniel Marbach
- I'm sending cloud events in JSON format. And then, because I'm using the structured format, which is basically just a fancy way of saying that instead of mapping all the properties and the payload onto the native stuff, I'm adding the body, the actual payload, so here it's the SendSwissChocolateTo, to the payload of the message, and I'm also adding all the cloud event properties into the payload of the message. But then, in order for me to support filtering, because I can only filter on application properties, I cannot reach into the payload of the message.
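The structured-mode mapping plus the promotion of cloud-event attributes can be sketched as follows. The `ce_type` and `ce_subject` property names are this demo's convention, not something the SDK or the CloudEvents spec mandates for Service Bus:

```csharp
using System;
using Azure.Messaging;            // CloudEvent
using Azure.Messaging.ServiceBus;

// Structured mode: the whole CloudEvent (attributes + data) goes into the body.
var cloudEvent = new CloudEvent(
    source: "/swiss-chocolate/sender",          // illustrative source
    type: "SwissChocolateDelivered",
    jsonSerializableData: new { Recipient = "nice people" });

var message = new ServiceBusMessage(BinaryData.FromObjectAsJson(cloudEvent))
{
    ContentType = "application/cloudevents+json",
};

// Promote attributes into application properties so subscription filters
// can see them - filters cannot reach into the message body.
message.ApplicationProperties["ce_type"] = cloudEvent.Type;
if (cloudEvent.Subject is not null)
{
    message.ApplicationProperties["ce_subject"] = cloudEvent.Subject;
}
```

Without the promotion step, both the correlation filter and the SQL filter from the earlier setup would have nothing to match on.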
- 00:19:32 Daniel Marbach
- I have to promote stuff from the cloud event to the application properties of Azure Service Bus. That's what I'm doing down here. I'm basically saying, "Hey, here is the ce-type, and if the subject is also around, I'm adding that to the application properties as well." This allows me to actually do the correlation filter that I just showed you and the SQL filter. Otherwise, that would not be possible. Okay, that's what I'm doing here. Good. Let's have a quick look at the processor code. So, here we have the input queue processor, and what this does is set up a bunch of things. It sets up the processor that feeds from the queue.
- 00:20:13 Daniel Marbach
- I have set auto-complete to true, which just means that whenever the message handling was successful, it automatically marks the message as completed. And here, I'm saying I want to renew the lock for up to 10 seconds. Remember, I set the peek lock to five seconds. So, what the SDK does is reach out to the cloud and say, "Hey, I'm still processing this message. Please keep it locked for me," for up to 10 seconds. After 10 seconds, it gives up, and then the message becomes available for other competing consumers. I'm not going into the other details here in the interest of time. Then here's the message processing part.
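A processor configured along these lines looks roughly like this; queue name and durations mirror the demo's setup and are otherwise placeholders:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<connection-string>");

ServiceBusProcessor processor = client.CreateProcessor("input-queue",
    new ServiceBusProcessorOptions
    {
        // Complete the message automatically when the handler returns.
        AutoCompleteMessages = true,
        // Keep renewing the short peek lock for up to 10 seconds in total;
        // after that the lock is lost and a competing consumer may retry.
        MaxAutoLockRenewalDuration = TimeSpan.FromSeconds(10),
    });

processor.ProcessMessageAsync += args =>
{
    // args.CancellationToken is signalled when processing should stop;
    // long-running handlers should observe it.
    Console.WriteLine($"Handling {args.Message.MessageId}");
    return Task.CompletedTask;
};
processor.ProcessErrorAsync += args =>
{
    Console.WriteLine(args.Exception);
    return Task.CompletedTask;
};

await processor.StartProcessingAsync();
```

If the handler throws, auto-complete does not kick in; the message is abandoned and retried, which is exactly the failure path the demo provokes.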
- 00:20:51 Daniel Marbach
- What I'm doing here is basically saying: I want to be notified when the message lock was lost, right? Because if I process longer than 10 seconds, I want to get notified so that I cancel my actual consuming of the message. And then here I'm getting the cloud event back, and then I have handling logic for the different cases, right? Because I will get called for SendSwissChocolateTo, which is the command, and I will also get called because I have a subscription that subscribes to the events that I publish here as well. So, that's that. Good. Let's have a look at how the SendSwissChocolateTo handler looks.
- 00:21:32 Daniel Marbach
- What I'm doing here is using a very powerful feature of Azure Service Bus: it has transaction capability. What I have to do in order to opt into this is create a transaction scope. I know, ugly, but this is how it works. So, I create a transaction scope, and what this means is that down here I have a publisher, and that publisher automatically enlists in the transaction of the incoming message. Normally, if you wrote this line of code here and it failed down here, the message would still go out. But because I have a transaction scope surrounding it, the message only ever goes out when I have completed the incoming message.
- 00:22:16 Daniel Marbach
- So, the outgoing messages are attached to the incoming message. They're sitting there, basically stored in the broker, and the broker knows where they should go. And then I complete the transaction, saying, "Hey, I'm done," and then they get reliably transferred. That's a super powerful feature that avoids duplicates down the line. And in certain cases you don't need an outbox pattern, for example, if you can opt into this. Okay? That's that. And then I have to call scope.Complete in order to do that.
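The receive-plus-publish transaction can be sketched like this. Note the flag on the client options: both sender and receiver must come from the SAME client with cross-entity transactions enabled, otherwise the enlistment does not happen. Entity names are placeholders:

```csharp
using System.Transactions;
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<connection-string>",
    new ServiceBusClientOptions { EnableCrossEntityTransactions = true });

ServiceBusReceiver receiver = client.CreateReceiver("input-queue");
ServiceBusSender publisher = client.CreateSender("my-topic");

ServiceBusReceivedMessage incoming = await receiver.ReceiveMessageAsync();

using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    // This send is held back by the broker until the scope completes;
    // if anything below throws, the outgoing message never goes out.
    await publisher.SendMessageAsync(
        new ServiceBusMessage("SwissChocolateDelivered"));

    await receiver.CompleteMessageAsync(incoming);
    scope.Complete(); // commit: complete the incoming, release the outgoing
}
```

`TransactionScopeAsyncFlowOption.Enabled` matters: without it, the ambient transaction does not flow across the `await` points.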
- 00:22:50 Daniel Marbach
- Good. So, that's pretty much that. Let's have a look at the destination processor. That's pretty simple, nothing fancy here. Again, I'm setting up a receiver, I'm handling the SwissChocolateDelivered event, and here I'm basically just turning the SwissChocolateDelivered into some sort of counter and handling it, and that's pretty much it. Yeah. Good. Let me just quickly show you here, before I do the demo, how this is all set up. I'm using connection-string-based stuff here, which is not super nice, right? You probably want to use something else. For example, you want to use Azure Identity and things like that.
- 00:23:34 Daniel Marbach
- When we have time, I'll show that, but here, for simplicity reasons, I'm using connection strings. I'm actually using the connection string without manage rights for the regular stuff, because that means it's guaranteed that this Service Bus client can never actually manipulate queues and topics. It cannot create new queues, it cannot delete queues. That's what I'm doing here. And down here, I'm basically having the admin client, and that needs a connection string with manage rights, because otherwise I cannot do anything with it. Then here, in order for the transaction management to work, I need to use this. This is sort of a magic flag.
- 00:24:14 Daniel Marbach
- It's not enough to just have the transaction scope that I showed you. You also need this magic flag so that anything that comes in from the queue automatically enlists when you're actually sending those messages, but it's very important that you use the same client for the sender and the receiver. What does that mean? Just to show you again here at the top: I'm using keyed registrations with the service collection. As you can see here, I'm getting the transactional client that has this flag set up so that it automatically works. Good. Let me show you this here. I'm going to dotnet run this sample.
- 00:25:06 Daniel Marbach
- Okay, now it's starting, and what's happening is I'm getting those messages, and every 10th message will eventually fail, if the demo gods are with me. Now we had one. So, what happened is we were processing and I essentially lost the lock. I'm going to shut it down now. As we can see up here, we had this input queue processor, and the task was canceled because we essentially lost the lock; it took us longer than 10 seconds to process, and then we tried to complete it, and now we have to retry. And it's guaranteed that, I think it was this message here, so we could now search for it.
- 00:26:03 Daniel Marbach
- It's guaranteed that this message will never actually be duplicated, and the outgoing message will also not be duplicated, but you can reproduce that yourself. You have to trust me, it works here. I think I don't want to spend more time on this. Yeah, so that was that. What I can quickly show you is: I talked about connection strings, and connection strings are probably not what you want to be using in production, right? There are actually better ways to set things up in Azure. So, what you can do is, essentially, here I have my Service Bus, and I probably need to zoom in a little bit.
- 00:26:40 Daniel Marbach
- I have already set up an Entra ID application, an enterprise application, and let me show you what that enterprise application has, pretty quickly, for a definition of quickly with this internet, okay? Now, here, an application, come on. And here is my Service Bus application, and it already has all the necessary API permissions. It has the Service Bus API permission already set up, which is necessary. And once I have that, I can, on the level of the namespace or on the level of an entity, which can be a queue or a topic, go to that queue.
- 00:27:30 Daniel Marbach
- Here I have my RBAC queue, and under access control, I have a role assignment, as you can see here. Sorry, here I have my RBAC. I'm using the built-in role called Azure Service Bus Data Sender. What that means is this enterprise application only has rights to send to this queue, but it cannot receive from this queue. And this is how the code looks when we go back to the RBAC sample. It's a pretty simple example. Basically what I'm doing is using the SDK again: I'm sending a message, and then, once I've sent the message, I'm trying to receive the message. What will happen? Any guesses? Boom. Yes, exactly.
- 00:28:20 Daniel Marbach
- So, when we do dotnet run here, what's going to happen is it will send a message. But then, when it tries to receive, it'll just explode, telling me you don't have the necessary rights. Come on. Yeah, see, it takes a while, but now essentially we sent the message, but it says, "Hey, unauthorized access. You don't have the Listen claims; you're not allowed to receive messages from this." And if we go back to this RBAC queue here, and go to the Service Bus Explorer, we'll see there are actually messages in there. Four messages, I've already done this a few times, but there is a new message in there now from my attempt to actually send that message.
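The RBAC sample boils down to a credential-based client instead of a connection string. A sketch with a placeholder namespace and queue name:

```csharp
using Azure.Identity;
using Azure.Messaging.ServiceBus;

// No connection string: the client authenticates via Entra ID. With only the
// "Azure Service Bus Data Sender" role assigned on the queue, sending works
// but receiving fails with an unauthorized-access error (missing Listen claim).
await using var client = new ServiceBusClient(
    "<your-namespace>.servicebus.windows.net",
    new DefaultAzureCredential());

ServiceBusSender sender = client.CreateSender("rbac-queue");
await sender.SendMessageAsync(new ServiceBusMessage("hello")); // allowed

ServiceBusReceiver receiver = client.CreateReceiver("rbac-queue");
await receiver.ReceiveMessageAsync(); // throws: role grants Send, not Listen
```

`DefaultAzureCredential` tries a chain of sources (environment, managed identity, developer tooling), so the same code works locally and in Azure without embedding secrets.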
- 00:29:09 Daniel Marbach
- Okay. So, that's a really good way to have tight control and better security on your Azure Service Bus, instead of using a connection string that allows you to do everything with your namespace. Okay, so now let me go back to my slides pretty quickly. Here we go. One of the things that I want to show you with Azure Service Bus is that when you're using the premium namespace, certain people think, "Oh, but this is super expensive, because I'm getting charged 700 US dollars per month, and what if I have peak loads and stuff like that? Do I need to always pay n times 700 US dollars a month, or can I also adjust it on the fly?" And yes, that's possible.
- 00:30:00 Daniel Marbach
- On a premium namespace, you can opt into a feature that is called auto-scaling. You will be billed per hour that you use it, but you can essentially define rules. And here, what I've done is basically say, "Hey, when the namespace..." Here, I'm looking at the CPU standard metric, and I'm saying, "Well, when the CPU is greater than 60% for a duration of n minutes on average, then please increase," and it's down here: by one messaging unit on top of the default. That's what I'm doing.
- 00:30:44 Daniel Marbach
- You can also set it for memory and some other characteristics, and then it scales up, and you get charged for the extra messaging units for that one hour, or however long you're using them until the cool-down period. And I've done that. I basically created a Kubernetes cluster, deployed an entire large system to it, and ran it. And you can see what's happening. And this is me not knowing how much load it can actually tolerate. So, redeploy, redeploy, redeploy, right? And then here, as you can see, the CPU goes up, and then suddenly it stays above 60% for a specific period of time, and then here, it drops down.
- 00:31:24 Daniel Marbach
- And that is because at 5:02, when I did this, the scale-out rule kicked in and scaled out Azure Service Bus. And then I have two messaging units, and after a while, it goes down again. Okay? So, that's that. Okay, so now let me skip that quickly, because we've talked about that. I need to go forward to the slides here. Give me a second. Okay. Now, Azure Service Bus also offers something for problems where you need strict ordering. Many people forget that when someone says queues, they say, "Yeah, queues are ordered, right? It's first in, first out." But that's actually not the case, because, as I just showed you with peek lock, it's possible that you're using the peek lock and then a message gets retried.
- 00:32:25 Daniel Marbach
- That means your ordering is no longer guaranteed. But there is a feature in Azure Service Bus called Sessions that allows you to essentially implement ordering within the service. And that's what I'm doing here: I'm opting in for the session feature. The problem I have is that I want to observe temperatures in Swiss chocolate stores or warehouses, and when the temperature rises above a certain threshold of 25 degrees, I want to make sure that triggers an alarm, because that's too warm to store chocolate. And as you can imagine for sensor-type data, the order of events is important, because I want to make sure that I'm capturing the temperature over the stream of time.
- 00:33:08 Daniel Marbach
- And for that, sessions are an extremely handy feature. So, what I've done is, I have these storages here, they have these weird numbers, and I'm using such a storage or warehouse ID to essentially create a session. Then I'm sending these temperature change events to the session representing that storage, and Azure Service Bus will make sure I always have guaranteed ordering within a session. Conceptually, you can think about it as a queue that contains an unlimited number of sub-queues, one per session ID, and that's all abstracted away by Azure Service Bus. So, that's what I'm doing here.
- 00:33:49 Daniel Marbach
- So, I'm going to show you that the only thing you have to do is enable this "sessions enabled" flag over here, and then you're good to go. Let me show you that. Okay, so this is the session processor and it's down here. So, essentially the only difference now is, as you can see, I'm sending this storage temperature changed event. I'm adding a published time and a current temperature with random values, and then I'm basically batching. As you can see, it's pretty much the same code. The only thing I'm doing here is essentially adding the extension attribute.
- 00:34:37 Daniel Marbach
- I'm sort of marking the storage as, "Hey, this is my storage ID. That's what I want to add." And when I turn this into a Service Bus message, I'm going to set this magic property here, the session ID, and I'm setting the session ID to this storage ID. That guarantees that essentially a sub-queue is created within that queue, and then all the messages are delivered in order. The only other thing that I have to do from an input queue processing perspective is this part. So, when I'm bootstrapping, I have to say, "Hey, I want a session processor," so it's a different processor.
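As a rough sketch of the sending side (the queue name, connection string, and storage ID below are placeholders, not the exact demo code), setting the session ID on an outgoing message looks something like this:

```csharp
using Azure.Messaging.ServiceBus;

// Assumes a queue created with the "sessions enabled" flag;
// names and values here are illustrative placeholders.
string connectionString = "<service-bus-connection-string>";
string storageId = "warehouse-4711";

await using var client = new ServiceBusClient(connectionString);
ServiceBusSender sender = client.CreateSender("temperature-changes");

var message = new ServiceBusMessage(
    BinaryData.FromObjectAsJson(new { Temperature = 27.5, PublishedUtc = DateTimeOffset.UtcNow }))
{
    // All messages sharing a SessionId land in the same session
    // (conceptually a sub-queue) and are delivered in order.
    SessionId = storageId
};
await sender.SendMessageAsync(message);
```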
- 00:35:21 Daniel Marbach
- I say, "Hey, up to 10 concurrent sessions, but within a session, it's never concurrent, right? Otherwise, the ordering would not be guaranteed." I also have lock duration and stuff like that, exactly the same, and then I'm just processing this message. The rest is really, really just the same. There's nothing magical. There's one thing that I can opt into, which is pretty cool. Let me show you this. So, here is my processing logic down here. What I'm essentially doing is getting the message, and then here, I have this storage state provider load, and I'm loading the state of a specific storage, which represents the temperature points I've observed over the stream of time.
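Bootstrapping the session processor with those concurrency settings could be sketched like this (queue name and connection string are placeholders; the handler body is reduced to the bare minimum):

```csharp
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<service-bus-connection-string>");

ServiceBusSessionProcessor processor = client.CreateSessionProcessor(
    "temperature-changes",
    new ServiceBusSessionProcessorOptions
    {
        // Process up to 10 sessions in parallel...
        MaxConcurrentSessions = 10,
        // ...but never more than one message at a time per session,
        // so ordering within a session is preserved.
        MaxConcurrentCallsPerSession = 1
    });

processor.ProcessMessageAsync += async args =>
{
    // args.SessionId identifies the warehouse this message belongs to.
    await args.CompleteMessageAsync(args.Message);
};
processor.ProcessErrorAsync += args => Task.CompletedTask;

await processor.StartProcessingAsync();
```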
- 00:36:02 Daniel Marbach
- And I'm doing some calculation to determine whether it's above or below the threshold. That's what I'm doing here. And sessions have this cool feature called session state: every session can hold state up to the message size limit. So, on a standard namespace, a message can be 256 KB, which means I can actually store state up to 256 KB directly in Azure Service Bus, right? So, I'm doing that. I'm basically getting the storage state, loading it, and then I can capture the temperatures that I've observed directly within Azure Service Bus, and it's reliable and it's there.
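Reading and writing that session state could look roughly like this (the helper method and its shape are hypothetical; the `GetSessionStateAsync`/`SetSessionStateAsync` calls are the SDK's session state APIs):

```csharp
using Azure.Messaging.ServiceBus;

// Hypothetical helper invoked from the session processor's message handler.
static async Task RecordTemperatureAsync(
    ProcessSessionMessageEventArgs args, double currentTemperature)
{
    // Session state lives in Service Bus itself, up to the message
    // size limit (256 KB on a standard namespace). It is null until set.
    BinaryData state = await args.GetSessionStateAsync();
    List<double> temperatures = state is null
        ? new List<double>()
        : state.ToObjectFromJson<List<double>>();

    temperatures.Add(currentTemperature);

    // Persist the observed temperatures back into the session.
    await args.SetSessionStateAsync(BinaryData.FromObjectAsJson(temperatures));
}
```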
- 00:36:42 Daniel Marbach
- And when the session goes away, the state also goes away, right? So, that's what I'm essentially doing here. And now when we run this, let me show you that. So, what we can see is a fire-hose of information on the screen, I do apologize. Basically, I'm sending up the temperatures from multiple chocolate warehouses, and as you can see here on the left side, I have warn and info entries. What that does is it essentially says the temperature was still within the range or it was above the threshold, and if it's above the threshold for a specific period of time, it triggers a failure warning.
- 00:37:27 Daniel Marbach
- And now I could essentially raise another event into the system saying, "Hey, by the way, your chocolate warehouse has too high a temperature. Maybe you should send a mechanic over there to check whether something is wrong with the temperature there." So, that's a really neat way to handle state that requires strict ordering. That being said though, when you have state that requires strict ordering at scale, it's probably better to actually use Event Hubs, because Event Hubs is a streaming service that is built for high-volume ingestion of telemetry data and device data, and many customers out there use it.
- 00:38:14 Daniel Marbach
- For example, in the automotive industry, it's used to track car maintenance data. In Switzerland, there is a coffee machine vendor using it, for example, to monitor their machines out there in the wild for maintenance and warranty, things like that. Or people ingest lots and lots of log data. That's also something, it's pretty neat, right? Because essentially the difference with Event Hubs is, I told you before that with Azure Service Bus, with a message broker, the cursor is managed by the broker. With Event Hubs, with streaming services, it's completely different: the cursor is managed on the client side.
- 00:38:54 Daniel Marbach
- So, you can conceptualize it almost like a tape, for those who still know what a tape is. On a tape you can watch a video, and then once you're done, you can rewind it and you can watch it again. With streams, it's pretty much the same: you manage where you are on that video, and once you say, "Well, I watched this, I want to rewatch it," you rewind your cursor, but it's up to you. And if someone else watches the same movie, they can be at a different position. So, Event Hubs is extremely powerful for those types of problems where you need strict ordering and where you want to ingest high volumes of data.
- 00:39:35 Daniel Marbach
- One other case, for example: if you have legacy applications with large monolithic databases, you can apply something like change data capture, right? It's a really nice thing. So, for example, you hook into your tables, and whenever something changes, you publish those changes into Event Hubs. And because the ordering is guaranteed, you can consume it and ingest it into another database, where you already have microservices consuming that. So, that's really nice. Some people also use it, for example, to implement outbox patterns. That's a way to do it as well. So, that's where Event Hubs is really powerful.
- 00:40:16 Daniel Marbach
- So, here I have essentially the same problem as I showed you before, except my Swiss chocolate manufacturing company actually scaled out massively. They have lots and lots of warehouses, not just in Switzerland. They also produce outside of Switzerland, and they want to make sure that they still get all the temperatures from those warehouses in a strictly ordered fashion, but now they have multiple terabytes of data they want to ingest and do stream analytics on. That's where sessions are no longer a good idea, right?
- 00:40:53 Daniel Marbach
- That's the point in time where you should switch from using Azure Service Bus sessions as a stopgap and say, "We've reached the limits of Azure Service Bus, we should probably switch over to Event Hubs." That's what my Swiss chocolate manufacturing team did there. So, here, what we're doing is setting up this Event Hub, and then we send in these temperature change events. And as you can see, we also have these storage or warehouse IDs. What Event Hubs does by default is it takes the partition key, computes a hash of it, and then assigns the event to a partition.
- 00:41:35 Daniel Marbach
- Here, I have an Event Hub that has four partitions, and conceptually, you just have to think of these as boxes where the data is written into, and the order of data is only ever guaranteed within the same partition. So, when we want to keep the order of the temperature change data consistent for a warehouse, we have to make sure all the data for a specific warehouse or storage goes into the same partition. That's why I'm using the warehouse as the partition key. And then it's an append-only log: data just gets written to that stream and into those partitions.
- 00:42:14 Daniel Marbach
- And then what the processor does on this end is it receives events from Event Hubs. Because the state is managed on the client side, we need to store somewhere where we are within this stream, and that's what this blob storage over here does. With Event Hubs, you always need an external state provider, and here, I'm connecting it to Azure Blob storage, and that state is then stored there. So, when you connect to Event Hubs, you also say, "I'm part of a consumer group." A consumer group is basically just a logical name for a set of consumers that are connected to Event Hubs.
- 00:42:58 Daniel Marbach
- And within a consumer group, it's guaranteed that all those consumers are participating in reading across the same shared state. So, that means, for example, one consumer group can be at position 500 in the stream, and everyone within that consumer group reads the same data from all those partitions. But another consumer group can start at a different position. So, different consumer groups can read the data at different positions within the stream. And as you can imagine, your scaling factor here is the number of partitions that you have in Event Hubs. So, that's the important part that we have to think about.
- 00:43:44 Daniel Marbach
- Okay, let me show you a bit of Event Hubs. Okay, good. So, for Event Hubs, again, you need a connection string or some connection mechanism. Here, I've already set up an existing enterprise application; I'm not going to show you that. And then here, I need a blob container to connect to that stores the state. Here, I'm also specifying the consumer group; it's the default consumer group that I'm setting. That means all the consumers that are connected are participating within that consumer group on the stream of data that I'm sending. And I'm saying I want events to be batched by a hundred, that's the configuration that I'm adding.
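A minimal sketch of that processor setup, assuming placeholder connection strings and names (the container "checkpoints" and hub "temperature-changes" are illustrative, not the demo's exact values):

```csharp
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Consumer;
using Azure.Storage.Blobs;

// The blob container holds the checkpoints, i.e. the client-side cursor.
var checkpointStore = new BlobContainerClient(
    "<storage-connection-string>", "checkpoints");

var processor = new EventProcessorClient(
    checkpointStore,
    EventHubConsumerClient.DefaultConsumerGroupName, // "$Default"
    "<event-hubs-connection-string>",
    "temperature-changes");

processor.ProcessEventAsync += async args =>
{
    // ...handle args.Data here...
    // Persist the cursor so a restart resumes from this position.
    await args.UpdateCheckpointAsync();
};
processor.ProcessErrorAsync += args => Task.CompletedTask;

await processor.StartProcessingAsync();
```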
- 00:44:52 Daniel Marbach
- And I'm also setting an Avro schema serializer. That's just a fancy way of saying, "Well, I want to make sure that everything I'm producing is in a stable format, and the schema registry enforces that across the service when I'm sending data up to Event Hubs." That's all I need. Once I have that, I can send messages, and as you can see here, it's pretty much the same. I'm also using cloud events. I'm sending up the temperature data similar to what I did with the session processor, and then I'm again setting the storage as the extension attribute.
- 00:45:32 Daniel Marbach
- And when I'm turning this into the EventData, which is the underlying type, I'm basically setting that onto the properties of the underlying type. And then here I'm basically saying, "Hey, I'm creating a batch for this specific partition key with all the data for that storage," and then I'm sending that off to the service. That makes sure everything ends up within the same partition, as I explained visually on the slide. And then I can just hook up the processor. That's that one. What it does is it receives this event, and here, I'm getting the cloud event out. I'm deserializing it with the schema registry deserializer.
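The batching-per-partition-key step can be sketched like this (connection string, hub name, and warehouse ID are placeholders; the demo additionally wraps the payload in a cloud event with the Avro schema serializer, which is omitted here for brevity):

```csharp
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

string warehouseId = "warehouse-4711"; // placeholder partition key

await using var producer = new EventHubProducerClient(
    "<event-hubs-connection-string>", "temperature-changes");

// Every event in this batch shares the partition key, so they all
// hash to the same partition and their relative order is preserved.
using EventDataBatch batch = await producer.CreateBatchAsync(
    new CreateBatchOptions { PartitionKey = warehouseId });

batch.TryAdd(new EventData(BinaryData.FromObjectAsJson(
    new { Temperature = 27.5, PublishedUtc = DateTimeOffset.UtcNow })));

await producer.SendAsync(batch);
```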
- 00:46:22 Daniel Marbach
- And here, I'm just using a concurrent dictionary to add or update points in time. So, again, I have the same logic as we previously saw in the session processor: "Hey, when the temperature is above the threshold, tell me; when it's below the threshold, tell me as well." And here's an important distinction: as I said previously, with Azure Service Bus a single message is the information carrier; with Event Hubs, the series of events is the information carrier, right? That's what I'm getting out of the service. Yeah. And then what's also really nice with Event Hubs is that it is fully Kafka 3.5 compliant.
- 00:47:10 Daniel Marbach
- So, essentially you can set up Event Hubs and then you can basically just pull in librdkafka, as an example, right? And as you can see here, it's different code, but it's the librdkafka way of saying, "Please connect over there using Azure AD authentication," and stuff like that, right? And then you can say, "Well, I want to subscribe," and then you're subscribed. And then you get the messages in over the Kafka protocol, right? So, I don't have to change anything else. That's it. It's fully compliant. Let me show you how this all works. So, now I'm going to the Event Hubs demo. I'm going to run it, first of all, with just the regular Event Hubs client.
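A sketch of consuming from Event Hubs over the Kafka protocol with the Confluent.Kafka client (which wraps librdkafka). Note the talk's demo uses Azure AD authentication; this simpler sketch authenticates with the namespace connection string via SASL/PLAIN, and all names are placeholders:

```csharp
using Confluent.Kafka;

var config = new ConsumerConfig
{
    // Event Hubs exposes its Kafka endpoint on port 9093.
    BootstrapServers = "<namespace>.servicebus.windows.net:9093",
    SecurityProtocol = SecurityProtocol.SaslSsl,
    SaslMechanism = SaslMechanism.Plain,
    SaslUsername = "$ConnectionString",
    SaslPassword = "<event-hubs-connection-string>",
    GroupId = "$Default",
    AutoOffsetReset = AutoOffsetReset.Earliest
};

using var consumer = new ConsumerBuilder<string, byte[]>(config).Build();
consumer.Subscribe("temperature-changes"); // the event hub name is the topic

while (true)
{
    ConsumeResult<string, byte[]> result = consumer.Consume();
    // result.Message.Key carries the partition key; offsets are
    // committed inside the service, not in blob storage.
}
```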
- 00:48:13 Daniel Marbach
- So, now I sent data. I sent up, I think, a hundred thousand events or something like that. It was pretty fast, and now I'm streaming the data in from all the storages. I'm doing my analysis, and that's pretty much it. So, now I can easily switch the same thing over to Kafka. I just have a setting here, use Kafka true, and then I can just start it, and now I'm switching over to Kafka. Now the other processor using librdkafka is in place. I'm still sending with the Event Hubs client, sending it up to the service. And then, when the demo gods are with me, yeah, as you can see, now I'm getting it over the Kafka protocol, getting all the same data in, and that's all that's there.
- 00:49:04 Daniel Marbach
- Now the difference though is that previously, with Event Hubs, my consumer state is stored in blob storage, right? That's why I need the blob storage connection. With Kafka, the consumer state is stored within the service; it's abstracted away. Why is that important? Well, I can show you the difference. So, I'm going back here, switching back to Event Hubs, and I'm saying don't produce any data, so produce data is set to false. What will happen when I start with Event Hubs is that it doesn't create new events. Okay. Maybe I haven't read all of it. Interesting. Ah, well, I produced new data earlier, but now I'm at the end of the stream, right? So, now if I restart again, nothing should happen.
- 00:49:58 Daniel Marbach
- So, now it negotiated the ownership. Nothing happens, right? Because it knows, from the blob storage, that I've already consumed the stream. But now, let's say I want to rewind. I can do that with Event Hubs. I have here my storage account. I can go to my storage browser, go to the blob container, go to this checkpoint storage, and here, I have a folder for the temperature change store. And here, I have the ownership managed on this blob storage. If I delete this, then when the service starts, it essentially has to start from the beginning again because the offset is gone.
- 00:50:38 Daniel Marbach
- Now, even though I'm still not producing any data, when I start, what's going to happen is I've basically done a rewind of the tape, right? As you can see, now all the data flows in again. With the Kafka protocol, that state is abstracted away in the service, not in blob storage, and you have to use the Kafka-specific APIs to manage that state. So, that's not exposed in the portal. That's an important distinction. Okay? There is much more: with the schema registry, you can enforce schemas automatically, because the schema registry is part of the service here. When you go to Event Hubs, it is down here. Of course, I need to go to the right service here. We have the schema registry. It's a sub-node here.
- 00:51:29 Daniel Marbach
- As you can see, I have my storage temperature. Come on, load please. Thank you. This is my schema. This is the Avro schema that is managed by the service. The schema serializer is connected to this thing. It manages multiple versions. I'm not going to show you; you can try this out yourself. If you change the schema so that it's no longer compliant and you try to send messages, it'll reject them. So, that's a way to automatically enforce the right schema, but it's too much for today. I just wanted to give you a hint that this exists and properly works. It's all in the samples that I have on my GitHub repository. Good. So, that's all I wanted to show you with Event Hubs. Let me go to Event Grid. Give me a second.
- 00:52:20 Daniel Marbach
- Okay. So, Event Grid is also a highly scalable pub/sub message distribution broker. And here's where things are getting super complicated, because one of the things that happens with Event Grid, and especially Event Grid namespaces these days, is that Event Grid is also positioned as the sort of 80% messaging solution. What that means is Event Grid has topics built in, it has queues built in. And then you might already be asking, "But Daniel, you showed me that Azure Service Bus also has queues and topics. What the heck? Which one should I be using?"
- 00:52:57 Daniel Marbach
- The answer is really: if you just need the 80% cases, if, for example, you don't need transactions, you don't need duplicate detection support, you don't need sessions, then Event Grid might be enough for you. It'll also get AMQP support soon. So, if you have IoT devices that require the AMQP protocol to connect, then you can also use Event Grid. One of the most powerful things about Event Grid is that it is tightly integrated into Azure services. What that means is that many Azure services automatically raise events when something happens in those services, and then you can subscribe to those within Event Grid and make arbitrary decisions based on the information that comes in.
- 00:53:47 Daniel Marbach
- So, that's super, super powerful. Another thing that Event Grid is highly capable of: it has fully fledged MQTT support. So, if you're interested in these IoT scenarios with MQTT, it has MQTT 5.0 compliance as far as I recall. So, you can basically set up Event Grid, enable the MQTT support, and you have a fully managed MQTT broker. That's also nice, because then you don't have to manage that yourself, which is pretty complicated. It's very cheap: it's 4 cents per unit per hour, more or less around that, and 6 cents per million events that it brings to you.
- 00:54:30 Daniel Marbach
- One of the good things with Event Grid is that the namespace has pull support and push support. And depending on the use case, you can basically switch to whatever you need. So, this is one thing that I want to quickly show you here. You know what, let's go directly into code. It's probably better, because then we can actually see the demo, and in the interest of time as well. So, here is the pull delivery. What I'm doing here is I'm using Azure Event Grid to hook into blob storage changes. So, as you can see, I have a sender. The only thing it does is upload an arbitrary string to blob storage. That's all I'm doing.
- 00:55:29 Daniel Marbach
- And then I hooked up Event Grid and system topics on Event Grid so that whenever something is uploaded to blob storage, an event is raised, and then I consume that on the other end. This is the receiver code. So, let me show you: I have set up the Event Grid receiver client. I'm streaming the cloud events. And what I'm essentially doing is this try-get-system-event, which gives me a storage blob created event data. It also has peek lock; that's why I need to renew the lock while I'm downloading. And then, on the other end, I'm downloading from blob storage, and that's what I'm doing down here.
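A sketch of the pull-delivery receive loop with the Event Grid namespaces SDK. The endpoint, topic, subscription, and key are placeholders, and this simplified version acknowledges immediately instead of renewing locks during a long download:

```csharp
using Azure;
using Azure.Messaging;
using Azure.Messaging.EventGrid.Namespaces;

// Placeholder topic and subscription; the subscription's delivery
// mode "queue" is what makes this a pull, not a push.
var receiver = new EventGridReceiverClient(
    new Uri("https://<namespace>.<region>.eventgrid.azure.net"),
    "storage-events",
    "blob-created-subscription",
    new AzureKeyCredential("<namespace-key>"));

ReceiveResult result = await receiver.ReceiveAsync(maxEvents: 10);
foreach (ReceiveDetails detail in result.Details)
{
    CloudEvent cloudEvent = detail.Event;
    string lockToken = detail.BrokerProperties.LockToken;

    // ...download the blob, renewing the lock if the work takes long...
    await receiver.AcknowledgeAsync(new[] { lockToken });
}
```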
- 00:56:16 Daniel Marbach
- Essentially when I get this event, I'm getting the URL where the blob has been published, I'm creating a blob client, and then I'm downloading this string on the other end. What makes this possible? Let me show you this in the portal. So, there is really not a lot of magic. Essentially, I have the Event Grid storage account, and on the events, I have this system topic set up. That's this one here. And underneath that system topic, I have a subscription, this one. What the subscription does is it basically says, "Hey, filter for blob created and blob deleted." That's that. And I have some retention periods and a system-assigned identity. So, that's pretty secure.
- 00:57:11 Daniel Marbach
- And then here I'm saying, please deliver this event to the Event Grid namespace topic. And that's over here. And underneath there, I have another subscription which essentially says the delivery mode is queue. When you set the delivery mode to queue, you are doing a pull, not a push. And then, I'm getting that into that queue, and that's all I need to do. And now when I run this demo, let me go there, I don't need to write any more fancy logic. As you can see, it uploads to the blob. Here, I'm uploading a picture of people smiling with Swiss chocolate in their mouths. And then, on the other end, I'm essentially getting this back, right?
- 00:58:21 Daniel Marbach
- So, you can see there are the senders, and here's the receiver that does the lock renewal and downloads the blob. Oop, sorry for that. It's here: it downloads the blob, and then I'm actually getting what has been uploaded, the entire stream, without me doing anything else. And this goes even further. Essentially, any service that provides such events can be hooked up. So, for example, you can also hook up Azure Service Bus with Event Grid, and then you can automatically push information from Azure Service Bus to your code. For example, I've done this here. What I've done is I've basically set up an ASP.NET Core project.
- 00:59:09 Daniel Marbach
- And what this ASP.NET Core project does is, first of all, it handles a webhook callback when Event Grid reaches out, when I'm using the push delivery. And then down here, when this API gets called, it looks for this "Service Bus active messages available with no listeners" event data. What this means is that whenever there is a queue in Azure Service Bus that has messages in it, but no one is consuming those messages, Event Grid automatically tells me that. I'm then enqueuing this event locally, and I have a background worker that uses the Azure Service Bus SDK, reads this information from the worker queue, and then automatically consumes all those messages with a receive-and-delete.
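The webhook side of that could be sketched as a minimal API endpoint. The route is a placeholder, and the local enqueue/background worker is only hinted at; the validation handshake and the system event type are from the Event Grid SDK:

```csharp
using Azure.Messaging.EventGrid;
using Azure.Messaging.EventGrid.SystemEvents;

var app = WebApplication.Create(args);

// Placeholder endpoint; Event Grid posts subscription events here.
app.MapPost("/events", async (HttpRequest request) =>
{
    var events = EventGridEvent.ParseMany(
        await BinaryData.FromStreamAsync(request.Body));

    foreach (EventGridEvent egEvent in events)
    {
        if (!egEvent.TryGetSystemEventData(out object systemEvent)) continue;

        switch (systemEvent)
        {
            // Event Grid sends this once to verify we own the endpoint.
            case SubscriptionValidationEventData validation:
                return Results.Ok(new { validationResponse = validation.ValidationCode });

            // Raised when a Service Bus queue has messages but no receivers.
            case ServiceBusActiveMessagesAvailableWithNoListenersEventData noListeners:
                // ...enqueue locally so a background worker drains the queue...
                break;
        }
    }
    return Results.Ok();
});

app.Run();
```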
- 00:59:57 Daniel Marbach
- So, basically you can write something like Azure Functions with just a simple Event Grid hookup. That's what I've done here. I can quickly show you that, hopefully. Essentially, I go to the Event Grid push delivery sample and `dotnet run`. So, now it's running. Now it's set up, and I already have a token, and now I need to configure the URL on the push delivery here. This is my local tunnel URL, because it's going to reach out to me here if I have the right URL. And now I can go back to my Event Grid, go up here, and here is my system topic for the... no, sorry, that's the wrong one... for the Service Bus system topic. And now when I go to the subscription, I have configured it to be a webhook.
- 01:01:08 Daniel Marbach
- On the delivery properties, I've already set my tunnel token, and now I can change my URL here, deploy it, and if my tunnel is still working, we should be able to get it over here. No. Okay. My tunnel is no longer working, anyway. So, normally you would get the webhook call, and then I can send messages and they'll get automatically consumed. But I'm not going to waste your time any longer here, so I'm going to wrap things up. Yeah, so that's pretty much that. So, let me do a quick wrap-up. Essentially, my message here is: use all the services for the use cases they're built for. So, Event Grid is super awesome.
- 01:02:02 Daniel Marbach
- If you have the 80% messaging needs: it has queues, it has webhook delivery, right? Some people write their own webhook delivery; it's very complicated to write at scale. Just use Event Grid for that. You can opt in for push and pull depending on what you need. That means you don't have to write polling logic, which is complicated. It's all abstracted away for you. Super good. It's pay as you go. Of course, you don't get strict ordering and things like that, and instantaneous consistency is also not there. For all the streaming cases where you have massive amounts of telemetry data and device data, Event Hubs is the way to go.
- 01:02:41 Daniel Marbach
- I showed you how you can bridge the gap with Azure Service Bus as well, with session support, right? Until you've exhausted that functionality, and then you move over to Event Hubs. And finally, enterprise messaging: when you have large-scale distributed systems where you want rich filtering capabilities, transaction support, strict ordering, scheduling, and many more things, then Azure Service Bus is the right service for you. And really combine them all together, because as I think I showed you, every service has its strengths, and especially with Event Grid, you can mesh them together in really good ways. Yeah, that's pretty much it from my end.
- 01:03:22 Daniel Marbach
- If you're interested to hear how, for example, you can abstract away Azure Service Bus with NServiceBus, the company I work for, Particular Software, has this product. You can go to this URL. It points to Microsoft documentation that shows you how to use NServiceBus with Azure Service Bus. Then you don't have to write that SDK code that I showed you at the beginning. And last but not least, for all the samples, the Bicep templates, the code, the readmes, the handouts, go to github.com/danielmarbach/azuremessagingoverchoice. Yeah, that's it. Thank you very much.