
Cliff diving into async in .NET Core and C# 8

Cliff diving can be bafflingly complex and scary to watch. Mere mortals often stand there gaping and pointing. In this talk, we’ll jump like cliff divers into async in .NET Core 3.0 and C# 8 and learn why you wouldn’t recommend plunging headfirst into SyncOverAsync even to your worst enemy.

Armed with some best-practices we are ready to throw a triple backflip into the latest .NET Core 3.0 and C# 8 improvements for idiomatic async code. Ready? Let’s leap together, feet first!

🔗Transcription

00:00 Daniel Marbach
Hey, everyone. Can you hear me? Perfect. Did you have a good night's sleep, everyone? That's what I thought, because if anyone had said, "Yes," I would have said, "Liars." Because I'm pretty sure after yesterday you woke up during the night, had sweat all over your body and, "Allocations in my code. I need to vectorize everything. I need to benchmark everything."
00:32 Daniel Marbach
But we have a joke in Switzerland. We usually say, do you know how you can get 10X performance out of your applications and servers? Any guesses? Well, remove all the Thread.Sleep calls the previous developers introduced. That's what we do. That's how we do performance optimizations in Switzerland.
00:54 Daniel Marbach
Today, I'm going to talk about async/await, .NET Core, and C# 8.0 and how those things play together. Maybe you don't know yet, but C# 8.0 is basically only available with .NET Core 3.0. Or, let's say, that's the official statement from Microsoft, but if you dig a little bit deeper, it's actually not entirely true. You can use C# 8.0 outside .NET Core, you can use it even on .NET Standard 2.0, but there are caveats. Not all features are officially supported, and Microsoft says if you're willing to dig into the details of what is supported and what is not, you can use C# 8.0 with .NET Standard as well. I'm not sure if you want to do it, but you can. But the things that I'm showing today, I'm showing them on netcoreapp3.0 only, with a little bit of C# 8.0.
01:49 Daniel Marbach
One of the things that I do in particular and I've done for many years is I wrote a lot of async and TPL code, and I've seen a lot of bad code.
01:59 Daniel Marbach
We also support multiple thousands of customers worldwide, a lot of developers worldwide, and we get a lot of support requests from them as well, because as a vendor, it's usually when something goes wrong within the framework that they're using, they first contact the vendor and say, "What the hell is your framework doing?" And then you start digging deeper, and then you find all sorts of nasty code and stuff that is not really according to the standards.
02:24 Daniel Marbach
One of the things that we saw is, well, async/await is basically here to stay. It's going everywhere and it's extremely viral. It's like a highly infectious disease. Once you have it, it spreads all over your body and you're going to die. No, not that terrible, but it's actually almost like that.
02:46 Daniel Marbach
The thing is, many I/O-bound libraries ... What does I/O bound mean? It's when you have HTTP requests, sockets, TCP, and all that stuff. Those libraries, which in the olden days were synchronous, now have modern task-based implementations. So they can be used with async/await and, of course, if they use the latest and greatest features, they also leverage things like ValueTask, which I might or might not cover, depending on the time.
03:17 Daniel Marbach
It would be nice if we could write new code every day. Shiny, greenfield development, right? But unfortunately, that's not how we can work every day. We have things to maintain. But still, in those systems that we maintain, we pull in new libraries, and suddenly those libraries become async/await. They return tasks, and then we're kind of stuck with existing code that somehow needs to interoperate with task-returning APIs, and we're like, "Okay, what do we do now?" Right? So we get in contact with that stuff.
03:49 Daniel Marbach
Well, the good thing is, if you're using something like ASP.NET MVC, even the classic Web API, or the new ASP.NET Core MVC or Web API, we have an entry point, which is, for example, the MVC controller, and the good thing is, those frameworks already allow us to return a task, right? So we are in a good world that allows us to ripple the virality of async/await up through the call stack, and we are fine.
04:18 Daniel Marbach
But then we might have existing code that integrates into Windows services or other things, and then we might not be able to do that. And a topic that comes up, and it's the first topic, is this Sync over Async. Because people say, "Well, I have reasons why I don't want to change everything," or, "I'm lazy, I don't want to change everything," or whatever your reason is, they're saying, "I'm just going to bridge the AsyncAPI into a SyncAPI and everything will be fine."
04:49 Daniel Marbach
Let me show you some code before I get started. I have here, hopefully you can read it, a pretty simple AsyncAPI, and it has a Run() method, and I have here a little helper method, WrapInContext(), and I have here this BlocksAndPotentiallyDeadlocks() kind of method. What this method does is, it calls an AsyncAPI, DoAsyncOperation, and after that it does GetAwaiter().GetResult(). The developer who wrote this code was smart, because that person knew that with GetAwaiter().GetResult(), we actually get the underlying exception that was raised and not, for example, an AggregateException, so that's already good. It's a blocking call, and it synchronously waits for the async operation.
05:35 Daniel Marbach
Let's have a quick look into this operation, what it does. It does nothing more than a Console.WriteLine, an await Task.Delay(), another Console.WriteLine, and then it returns the "hello" string, and that's it. Then we do GetAwaiter().GetResult().
05:50 Daniel Marbach
Let's have a quick look at what happens when we execute this code. When we execute this code, this demo will forever run, right? I could now leave the stage and say, "I'm done with my presentation because my demos don't work." Well, what we have now is we have a deadlock.
06:06 Daniel Marbach
Why do we have a deadlock? Well, here's what happens with this code, and how the async stuff works: we have code that is executed under a thing called a SynchronizationContext, and the task scheduler wraps that SynchronizationContext. What's going to happen with this code is, we say, "Hey, please do this async operation," the thread enters here, runs until the await statement, and comes back up again, and then we say, "And please, thread, now you are blocked."
06:37 Daniel Marbach
You cannot do anything else. You have to synchronously wait for the result of this operation. And because async/await state machine by default does a thing called context capturing, what it does, it basically says, "Well, if you're exiting and executing a continuation of an asynchronous method..." What does that mean? The continuation of an async method in this example is the Console.WriteLine down here after the await. Then, "Please execute the stuff on the thread that entered this method," right? But the thread that entered this method is blocked because we tell it, "Hey, you have to wait for the operation." So now we have a deadlock.
07:21 Daniel Marbach
Of course, here in this example you can say, "Well, we can easily solve this problem." Any guesses how we could solve this problem? ConfigureAwait(false), exactly. We could add ConfigureAwait(false) to this method, here, and then it would no longer block, because we're telling the compiler-generated state machine, "Hey, we don't want you to do this silly context capturing, because we know we want to be context-free. Please stop doing that crazy shit." And then what can happen is, the continuation, which is the Console.WriteLine and return "hello", can be executed on any other thread pool thread, and then this code will no longer deadlock. Cool. Nice.
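A minimal reconstruction of the pattern being described might look like this (DoAsyncOperation follows the talk's naming; the delay and strings are illustrative, and the deadlock itself only manifests under a capturing SynchronizationContext, such as a UI or classic ASP.NET context):

```csharp
using System;
using System.Threading.Tasks;

static class ConfigureAwaitDemo
{
    public static async Task<string> DoAsyncOperation()
    {
        Console.WriteLine("before await");
        // Without ConfigureAwait(false), the continuation below must run on
        // the captured SynchronizationContext. If the thread that owns that
        // context is blocked in GetResult(), the program deadlocks.
        await Task.Delay(100).ConfigureAwait(false);
        Console.WriteLine("after await"); // the continuation
        return "hello";
    }

    public static void Main()
    {
        // Sync over async: blocks the calling thread until the task finishes.
        // Safe here only because ConfigureAwait(false) frees the continuation
        // to run on any thread-pool thread.
        string result = DoAsyncOperation().GetAwaiter().GetResult();
        Console.WriteLine(result);
    }
}
```

In a plain console app there is no SynchronizationContext, so this sketch completes either way; the ConfigureAwait(false) is what keeps it safe once a capturing context is present.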
08:11 Daniel Marbach
Let's have a quick look at another example. There is also another thing that people do. People get extremely creative; if you Google or look on Stack Overflow, for example, you see them writing code like this: they return a task, do a Wait of 1,000 milliseconds, and then call GetAwaiter().GetResult(). This code also deadlocks; although it's a really creative way of writing the same stuff, it has the same problems.
08:38 Daniel Marbach
Then you see people saying, "Well, if you don't want context capturing and you don't have control over ConfigureAwait(false), because maybe it's a third-party library, then you can just wrap it in a Task.Run, right?" What it means is, you're saying, "Please explicitly offload this to the worker thread pool." Well, that works. That will eliminate the problem of this context capturing, because inside the lambda of this Task.Run here, there is no context available. What's available there is TaskScheduler.Default, unless someone with evil intent messes with your TaskScheduler infrastructure. You can do that. You can ask Kevin Gosse, he has done that to his coworker, Christian. It will be executed on TaskScheduler.Default, which is not aware of the current synchronization context.
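The Task.Run wrapping trick can be sketched like this (an illustrative sketch, not the exact demo code; DoAsyncOperation stands in for any third-party async call you cannot change):

```csharp
using System;
using System.Threading.Tasks;

static class OffloadDemo
{
    public static async Task<string> DoAsyncOperation()
    {
        await Task.Delay(100); // context capture would happen here
        return "hello";
    }

    public static void Main()
    {
        // The lambda runs on TaskScheduler.Default, where no
        // SynchronizationContext is captured, so the blocking wait below
        // cannot deadlock. But it now ties up TWO threads per call:
        // the blocked caller plus the pool thread running the lambda.
        string result = Task.Run(() => DoAsyncOperation())
                            .GetAwaiter().GetResult();
        Console.WriteLine(result);
    }
}
```

This is exactly the pattern the next section warns about: it trades the deadlock for doubled thread usage.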
09:30 Daniel Marbach
So you could say, "Nice! Now we solved this problem; I can use this pattern in my code as well." Well, not really, because what you now introduced is a thread starvation problem. Why did you introduce a thread starvation problem? I can quickly show that in my demo, so let's unblock this one. Let's execute it: dotnet run. And let's see what happens.
10:02 Daniel Marbach
We can see here, we are in thread one. Here. And now if you execute this, Sync over Async, what's going to happen is we entered the sync context on thread four, we do the async operation on thread five, and then we come back on sync context thread four. What it means here is, essentially, the thread that enters into that method is blocked, right? So it has to wait. But then in order to kick off the continuation, we need another thread to wake up the continuation, and then, essentially, what we did here is we now are using two threads.
10:41 Daniel Marbach
A synchronous method that would have a thread sleeping there for two milliseconds would only block one thread. But the asynchronous thing that we introduced with the Task.Run now uses two threads. Now, of course, if we have this kind of code in our application and it gets executed hundreds and hundreds of times, millions of times in our server environments, we're essentially using double the amount of threads, and therefore we introduce thread starvation problems.
11:08 Daniel Marbach
Why are there thread starvation problems? Well, as you might have heard, the worker thread pool has a limited capacity. It runs a so-called hill-climbing algorithm and continuously ramps up its capacity: the more pressure you put on the worker thread pool, the more threads it allocates for you, and the more memory it uses. But it has a limited capacity, and that ramping up and that memory it uses all take time. At some point you're going to reach its limits, and that also depends on the bitness that you compile against, but you're going to hit that limit, and then your application basically stops working.
11:48 Daniel Marbach
And, of course, by doing that kind of pattern, you're twice as likely to get into that problem. The moral of this story is: whenever you can and you have AsyncAPIs, embrace the virality of those AsyncAPIs and ripple the async Task up to the highest level, to the entry point. If you get called by a framework, leverage the async nature of that framework if it has one, or only ever do the Task.Run at the highest level of your code, because that's where you are in control: you know how many times it's going to get called, you can bundle things together, and you have control over how many threads you're going to be using. That's one way to do it.
12:46 Daniel Marbach
But I encourage you, do not fall into the trap of going Sync over Async because it's going to hurt the performance of your software. Embrace the virality of the AsyncAPIs.
13:00 Daniel Marbach
Now, if you're in this world where we're saying, "We want to be good async citizens; we want to do it by the book, whatever Daniel, the Swiss guy, told you," then we actually have to think about what a good AsyncAPI actually is. Well, there are a number of things you have to consider when you implement a good AsyncAPI, or, let's say, that you should consider. If you don't want to consider those things, then at least talk about it and say, "We know the risk of not introducing a certain thing, but we make explicit decisions in our code." That's usually something I encourage you to do.
13:41 Daniel Marbach
The thing that I'm going to show you, the API that I'm going to show you, is an API where I'm trying to basically come up with a method that awaits a Task and provides an SLA to that Task. What it does, it will get a CancellationToken, and it will either be canceled when someone outside cancels that method or after the SLA of that method that awaits that task has been exceeded. That's what we're going to have a look at. I'm showing you now this code.
14:19 Daniel Marbach
This is the example here. What we have is, we have a CancellationTokenSource, and we have an await Task.Delay() from one day. Okay? So someone gave us a task that will basically wait for one day, and then we add this helper method that I'm going to explain to you that is in my eyes a good citizen of a good AsyncAPI, it's called .WithCancellation, we provide a CancellationToken that comes from the CancellationTokenSource, ignore this method, it's not important for us here. And then we have another one where we say, "We want to cancel in two seconds," we again have a task delay of one day, and we use the same method.
15:00 Daniel Marbach
How would we implement such a method? Let's have a quick look at what a good async citizen is. So first of all, what we do is, we return a Task, a Task&lt;TResult&gt;, a ValueTask, or a ValueTask&lt;TResult&gt;, okay?
15:15 Daniel Marbach
We're not going to return void. Why not? Sorry? Correct, yes. Async void is a kind of fire-and-forget way of doing things; as soon as the first await statement is reached in the implementation of this method, anything that happens afterward is not awaited, so from the caller's perspective it's over, and, for example, any exception is raised in the background and will pull down your app domain. Okay?
15:44 Daniel Marbach
As soon as you return a Task, a Task&lt;TResult&gt;, a ValueTask, or a ValueTask&lt;TResult&gt;, what's going to happen is the compiler and the state machine will make sure that exceptions are always properly wrapped in the Task, and in the worst case, if you're not awaiting the Task that is returned from this method, it will at least raise the TaskScheduler.UnobservedTaskException event that you can handle and make meaningful decisions.
16:16 Daniel Marbach
Be aware, this exception handler will only be raised when the GC kicks in, okay? So it's not like an exception has been thrown, it got caught, and the unobserved task exception handler immediately kicks in; it will take a while until the next garbage collection cycle comes, and only then will it raise that event. Just so that you know that.
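The exception semantics he describes can be sketched as follows (illustrative names; as noted above, the UnobservedTaskException handler only fires when the GC finalizes an unawaited faulted task, not immediately):

```csharp
using System;
using System.Threading.Tasks;

static class ExceptionSemanticsDemo
{
    // async Task: the exception is captured inside the returned task,
    // not thrown on the caller's stack.
    public static async Task Faulty()
    {
        await Task.Yield();
        throw new InvalidOperationException("boom");
    }

    public static async Task Run()
    {
        // If nobody ever awaits a faulted task, the exception eventually
        // surfaces here, at the next GC cycle -- not immediately.
        TaskScheduler.UnobservedTaskException += (sender, e) =>
        {
            Console.WriteLine($"Unobserved: {e.Exception.InnerException?.Message}");
            e.SetObserved();
        };

        try
        {
            await Faulty(); // awaiting rethrows the original exception
        }
        catch (InvalidOperationException ex)
        {
            Console.WriteLine($"Caught: {ex.Message}");
        }
    }
}
```

With async void, by contrast, there is no task to hold the exception, so it escapes straight onto the context and can tear down the process.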
16:40 Daniel Marbach
Okay. Now that we did it, we actually have proper exception semantics and proper expressed intent of this API, what we then do is, we accept the cancellation token. And now with the latest C# features, you can just say default, you no longer have to say default(CancellationToken), because .NET provides you the concept of cooperative cancellation. Cooperative cancellation means when you are accepting a cancellation token, you're kindly requested to implement cancellation where you see fit.
17:12 Daniel Marbach
What does that mean? Well, you are the implementer and you know exactly what this code is doing and where it's feasible to cancel and where it's not. Let me give you a concrete example. If you pass a cancellation token to an HttpClient, what it means is, if you cancel it, you don't know whether the HTTP call already happened, or where in the method implementation the call actually was, right? So it could be that the deserialization of the payload that you got from the server already happened, allocations already happened, and only after that does the implementation cancel. But that's up to you, the implementer of that method. That's what cooperative cancellation means.
17:53 Daniel Marbach
If you're writing APIs today that you're going to expose to users in terms of frameworks or libraries or whatever, even internally, and you're saying, "I don't know yet where I can cancel it," you can still already today put the cancellation token onto that interface or method, but you can choose to ignore it. Okay? That's totally fine. And then you can essentially evolve the implementation over time, once you learn more about the implementation details, and make meaningful decisions about where you want to cancel and where you don't want to cancel. But it's always good to do that.
18:28 Daniel Marbach
Now, because we are saying we want... We have essentially two cases in this method. We want to cancel when the cancellation token that was provided by the user is canceled, or we want to make sure we have an internal SLA here of 10 seconds. What you then do is you create linked cancellation token sources. A linked cancellation token source creates a cancellation token that is canceled either when you call Cancel on that linked cancellation token source, that's the one that is returned here, or when the externally provided cancellation token is canceled. So what we do here, for example, for our internal SLA for this method, is we say cancel after 10 seconds.
19:12 Daniel Marbach
And now let me show you the first C# 8.0 feature. We have the using declaration that we can just slap onto our linked token source, and the compiler will make sure that the scope of this linked token source is properly analyzed and that it injects the proper disposal of that linked token source at the right point in the method, so you don't have to worry about it anymore. That's pretty handy.
19:40 Daniel Marbach
You should always dispose CancellationTokenSources, okay? That's another rule. Why should you always dispose CancellationTokenSources? Any guesses? No guesses? Okay. When we call CancelAfter, the CancellationTokenSource internally manages timers. And timers are not an infinite resource on your system; they need to be properly managed by the timer pool, and you need to dispose the CancellationTokenSource when it manages timers. So the nuanced answer to this general rule is: when you call CancelAfter, or when you use the CancellationTokenSource constructor that accepts a TimeSpan, you always have to dispose it.
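Putting the last few points together, the linked token source with an internal SLA might be set up like this (a sketch; the 10-second SLA follows the talk, the method name is illustrative):

```csharp
using System;
using System.Threading;

static class LinkedTokenDemo
{
    public static bool IsAlreadyCancelled(CancellationToken externalToken)
    {
        // C# 8 using declaration: disposal is injected at the end of the
        // enclosing scope. Disposal matters here because CancelAfter makes
        // the source manage a timer internally.
        using var linked =
            CancellationTokenSource.CreateLinkedTokenSource(externalToken);
        linked.CancelAfter(TimeSpan.FromSeconds(10)); // internal SLA

        // linked.Token is cancelled when EITHER the external token is
        // cancelled OR the 10-second SLA elapses, whichever comes first.
        return linked.Token.IsCancellationRequested;
    }
}
```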
20:28 Daniel Marbach
Okay. Now, what we want to do is, we have a task that we got from a user. And we want to make sure that our own code waits either for the outcome of the user-provided task or for our internal SLA, and we want to do that without blocking. So what we can do is use a TaskCompletionSource, and a TaskCompletionSource is basically a task where we control the state of that task. So we can tell a TaskCompletionSource: you are now canceled, you are now in an exception state, or you are now completed.
21:07 Daniel Marbach
When we use a TaskCompletionSource and we are on .NET Framework 4.6.1 and higher, and here we are on .NET Core, then you should always provide the task creation option RunContinuationsAsynchronously. That's a mouthful. What it does is change the behavior of the TaskCompletionSource when it executes the continuation of your code. What does that mean? We have here the TaskCompletionSource that has the task here, down here, and normally, if you don't provide this option, the continuation of this code, which is the part here, is going to be executed on the thread that calls TrySetResult, TrySetCanceled, or TrySetException.
22:00 Daniel Marbach
And that behavior can lead to deadlocks. Unfortunately, the .NET Framework couldn't make breaking changes, so they introduced this enum, and it's not the standard behavior; you have to opt in to it. If you're not on those framework versions yet and you have TaskCompletionSources in your code, do a little bit of homework over the weekend, or maybe on Monday morning: go back to your code, search for TaskCompletionSources, and wrap the TrySetResult, TrySetException, or TrySetCanceled calls in a Task.Run. Then you get the same behavior as with this enum. When you are on a framework version that supports the enum, please add the enum.
22:46 Daniel Marbach
Of course, if you are in high-performance, highly critical code and you know exactly what this stuff is doing, there might be cases where you're saying, "I know it's more efficient to not use that enum," but, let's say, in many cases it's probably good to have this enum. Okay?
23:02 Daniel Marbach
And now, if you're a good citizen and we write high-performance, framework-like code, what we have to do when we use CancellationToken registrations... A CancellationToken registration means we can attach a delegate to the CancellationToken that gets called when the CancellationToken goes into the canceled state. Okay? That's the Register one. Then, again, we need to wrap it in a using, because it needs to be disposed, and we should make sure that we don't have closure allocations in this code. So ideally, if we can, we use the state-based overload of the Register method: we pass the state into the method, which is the TaskCompletionSource, then we cast the state back to the TaskCompletionSource, and then we call TrySetResult.
23:57 Daniel Marbach
The next thing, we already talked a little bit about it: we have to make sure that we opt out of context capturing, because we're writing library/framework type of code, so we use useSynchronizationContext: false. Okay?
24:13 Daniel Marbach
And then what we can do is, I usually advise people that are new to async/await and stuff don't go into concurrency land from the beginning. What does concurrency land mean? Well, let's say explicit concurrency land. Usually when you just write "await await await await," right, it's sequentially executed. So it's like this line of code, then the next line of code, if there is an exception, the next line of code will not be executed, and then your try-catches and everything comes into place.
24:40 Daniel Marbach
We can opt in for explicit concurrency if we know what we're doing. For example, WhenAll is such a thing, or WhenAny is such a thing, right? But once we start using those kinds of things, we are in concurrency land, and there are dragons. We all know that, right?
24:53 Daniel Marbach
So we have to make sure that the code that is executed under those concurrency mechanisms is essentially thread-safe, or let me say concurrency-safe. Because with async/await it can be executed by the same thread, but it might be concurrently executed. With any shared state, any mutable state that you share in there, you're going to have a hard time finding the problems that you're going to have in your production system. Okay?
25:22 Daniel Marbach
Also, when you're not doing explicit concurrency, as you can imagine, it's easier to debug your code, as simple as that. So I usually tell people, "Go with the normal async/await, and only when you have measured and then you know you need to fan out into concurrency things, go for concurrency."
25:39 Daniel Marbach
Here we need it, because what we want to do is say, "Please continue with this code when either the user-provided task is canceled or completed, or when the TaskCompletionSource that represents the cancellation case of the SLA is completed." And what's pretty handy is that Task.WhenAny returns the task that completed first. Okay?
26:08 Daniel Marbach
Here, if the user task is done, we get the user task back. If the TaskCompletionSource is done, we get the TaskCompletionSource task back. Now what we can do is, we can do equal comparison, and we can say, "Okay, in the case where the resolved task was the TaskCompletionSource task, we know that we got canceled, so the SLA is over, so the user-provided task took longer than the 10 seconds that we implemented." Then we throw an OperationCanceledException.
26:38 Daniel Marbach
What many people don't know is that OperationCanceledException has a constructor overload that allows you to provide a CancellationToken, indicating the token that triggered the cancellation. That's pretty handy. So you pass it in here; that is also something that I would call being a good citizen.
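Combining all of these ingredients, a WithCancellation helper in the spirit of the talk might look like this (a sketch under the stated assumptions; the actual demo code may differ in detail, and the 10-second SLA is the one from the talk):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

static class GoodCitizenExtensions
{
    public static async Task WithCancellation(
        this Task task, CancellationToken cancellationToken = default)
    {
        // Cancel when the caller's token fires OR the 10-second SLA elapses.
        using var linked = CancellationTokenSource
            .CreateLinkedTokenSource(cancellationToken);
        linked.CancelAfter(TimeSpan.FromSeconds(10));

        // RunContinuationsAsynchronously: continuations do not run inline on
        // the thread calling TrySetResult, avoiding subtle deadlocks.
        var tcs = new TaskCompletionSource<bool>(
            TaskCreationOptions.RunContinuationsAsynchronously);

        // State-based Register overload: no closure allocation;
        // useSynchronizationContext: false opts out of context capturing.
        using var registration = linked.Token.Register(
            state => ((TaskCompletionSource<bool>)state).TrySetResult(true),
            tcs,
            useSynchronizationContext: false);

        var resolved = await Task.WhenAny(task, tcs.Task).ConfigureAwait(false);
        if (resolved == tcs.Task)
        {
            // Pass the token that triggered the cancellation to the caller.
            throw new OperationCanceledException(linked.Token);
        }

        await task.ConfigureAwait(false); // propagate result or exception
    }
}
```

Usage would be something like `await Task.Delay(TimeSpan.FromDays(1)).WithCancellation(cts.Token);`, which throws an OperationCanceledException after ten seconds at the latest.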
26:59 Daniel Marbach
Then, of course, we have await task.ConfigureAwait(false). Again, the questions that you should be asking yourself should be, "Am I framework or library code?" Yes? Then probably most of the time the answer is, "I'm going to write ConfigureAwait(false)."
27:18 Daniel Marbach
Another question is, "Do I need access to environmental, context-aware stuff?" If the answer is no, then also write ConfigureAwait(false). If the answer is yes, for example, in WPF or WinForms, when you need to access elements on the UI thread, then you probably don't want to write ConfigureAwait(false); then the default is ConfigureAwait(true).
27:44 Daniel Marbach
That's what I would say is a proper, good async citizen. That hopefully gives you a good example of all the things that you should think about when you write a good and highly performant async code.
28:00 Daniel Marbach
Good. Let me quickly execute this, just so that you see. In the first case, the SLA kicks in: after 10 seconds, this method will be canceled. In the second example, just a quick reminder, I set the cancellation from the outside to two seconds, and then it's canceled. So the total execution time of this method is roughly 12 seconds.
28:24 Daniel Marbach
Cool. I forgot to mention one little thing. What I also sometimes hear is, "Yeah, but async/await doesn't make my DB queries faster." It's like, yeah, that's true. What do you expect? It's no silver bullet, right? If your DB query takes two seconds, even if you use async/await, it's still going to take two seconds. The only difference that you have, but it's a good difference to have, is that the thread that is executing that query is not blocked for those two seconds, so it can concurrently do hundreds, thousands, multiple thousands of other operations, which matters on your servers, in the cloud, and everywhere. It's much more efficient because you can get higher saturation of the resources that you have available on the servers, okay? Cool.
29:17 Daniel Marbach
Let's have another look. What I then sometimes hear is, especially with performance-related discussions, is, "Well, I know I've heard that I can shortcut the state machine by not using the async/await keywords." And I say, "Yeah, that's true."
29:36 Daniel Marbach
Let me explain that to you. So normally you would write this, right? You write this method that does not shortcut the state machine: "TaskCompletionSource, blah, blah, blah, await Task.Delay," some async stuff, "give it something," and then I execute this.
29:52 Daniel Marbach
This method will generate state machine stuff under the hood, and the state machine has fields and closures and crappy things that use memory, right? If you execute this code hundreds and hundreds and thousands of times concurrently, it might be more efficient to do this: instead of writing the async/await keywords, you just return the task. You can do that if the method has one exit point returning a task, right, where the exit point is the last statement, the return, or if you have two exit points, for example, if (condition) return ... else return .... Then you can do this kind of optimization.
30:35 Daniel Marbach
What then happens is, if you execute this code, the first version has a stack frame count of 32, and the method that does shortcut the async method builder has a frame count of 19 frames. So we're saving a bit of space, but I usually tell people, in most cases I would not encourage going down that path.
31:08 Daniel Marbach
Why would I not encourage going down that path? One problem this code has is that it's harder to evolve. Because once we start returning tasks directly, when we wrap, for example, this code in using statements and we're not aware of the behavior, what's going to happen is the task is returned and immediately the using statement disposes the resource that we are wrapping around that task. When the code that wants to access that disposable resource actually executes, the disposable resource is already disposed, even though the work has not completed yet. Those kinds of hard-to-find problems you can avoid when you always stick to the async/await keywords.
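The using pitfall being described can be illustrated like this (ProcessAsync is a hypothetical helper standing in for any task-returning call; the file handling is illustrative):

```csharp
using System.IO;
using System.Threading.Tasks;

static class ElisionPitfall
{
    static Task ProcessAsync(Stream stream) => stream.CopyToAsync(Stream.Null);

    // RISKY: the task is returned immediately, the method exits, and the
    // using disposes the stream while ProcessAsync may still be running,
    // leading to an ObjectDisposedException at some later point.
    public static Task Broken(string path)
    {
        using var stream = File.OpenRead(path);
        return ProcessAsync(stream);
    }

    // CORRECT: async/await keeps the stream alive until the awaited work
    // has actually completed, because disposal happens after the await.
    public static async Task Correct(string path)
    {
        using var stream = File.OpenRead(path);
        await ProcessAsync(stream);
    }
}
```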
31:51 Daniel Marbach
One other good thing is, when you have the async/await keyword is, you get more compiler warnings if you forget to await a method that returns a task and the compiler tells you, "Hey, look at this code, I found a task. I think most of the time it's probably good if you await it. Do you really do not want to await it?" And then you can say, "Yes, I know what I'm doing. I don't want to await it." And then you can ignore the compiler warning.
32:17 Daniel Marbach
One other good thing that the async/await keyword has is, when you add it, you do not get surprises. What are surprises? Let me show you this method here. The Surprise method returns a task and throws an InvalidOperationException. And the upper-layer code calls the method, which returns the task, and it adds an Ignore method. What the Ignore method internally does, let me show you quickly, is a try { await task.ConfigureAwait(false); } catch for any kind of exception.
32:58 Daniel Marbach
The problem is, when you're writing that kind of code, the exception is thrown on the synchronous path, so it's not wrapped in a task; it's directly bubbled up to the caller. So any await statements and stuff do not really kick in. Even though I wrapped this with a proper try-catch, which I did here, the Ignore method is never executed, so the exception bubbles directly up the call stack at this method level. That's another thing that is a bit tricky.
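Reconstructed, the surprise looks roughly like this (illustrative names following the talk):

```csharp
using System;
using System.Threading.Tasks;

static class SurpriseDemo
{
    // No async keyword: the throw happens synchronously on the caller's
    // stack, BEFORE any Task exists that could wrap the exception.
    public static Task Surprise()
    {
        throw new InvalidOperationException("surprise!");
    }

    public static async Task Ignore(Task task)
    {
        try { await task.ConfigureAwait(false); }
        catch { /* swallow everything */ }
    }

    public static async Task Run()
    {
        try
        {
            // Ignore never gets to run: evaluating Surprise() throws first,
            // so its try/catch cannot help.
            await Ignore(Surprise());
        }
        catch (InvalidOperationException)
        {
            Console.WriteLine("exception escaped despite Ignore");
        }
    }
}
```

Had Surprise been declared `async Task`, the exception would have been captured in the returned task and swallowed inside Ignore as intended.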
33:47 Daniel Marbach
One of the things the .NET team did is they realized they want to make the situation better for you when you stick to the async/await keywords. In .NET Core, already in .NET Core 2.2, they started optimizing those scenarios. What you can see is, previously it used 488 bytes when you awaited, and now, with .NET Core 3.0, it uses 456 bytes. Microsoft is continuously optimizing the state machine code that gets generated for you. So in most cases, for safety reasons, I suggest you stick to the async/await keywords. Only if you know what you're doing and have profiled it should you opt out of the keywords and return the task directly. Okay?
34:39 Daniel Marbach
Cool. And another thing that gave us a lot of headaches is that whenever we got support requests from customers, we got log4net or Serilog statements that were like, I don't know, 10 pages long, right? Because the async state machinery created stack traces from hell. Multiple pages of stack trace. Now, with the newer .NET Core versions, you get better stack traces: all the async state machinery is removed from the stack trace when an exception is thrown. We can see here we have a six-level-deep stack trace, and with the newer .NET Core versions, this was introduced in .NET Core 2.1 and later, we finally have readable stack traces, which is a really good thing.
35:29 Daniel Marbach
Now, also, when you're analyzing your log files, you no longer have to scroll through pages and pages of stack traces; the stack trace is almost as good as before. Or, let's say, it is as good as before.
35:43 Daniel Marbach
Okay. Cool. Let's dig a little bit deeper into some of the other nifty features that we get. With C# 8.0 and .NET Core 3.0, we can now use IAsyncDisposable.
36:01 Daniel Marbach
One of the problems the community had is, when you implement this IDisposable interface, Dispose returns void. If you need to dispose something in the Dispose method that is async, or you need to execute async code to clean up your resources, like closing connections asynchronously, the only choice you have is: you either do async void, which is the evil we talked about, right? Or you do the GetAwaiter().GetResult() stuff, which is Sync over Async, which is also evil. Okay?
36:28 Daniel Marbach
So now, finally, we have something: we have IAsyncDisposable. And here I'm showing you this: with the normal disposable, we had to call GetAwaiter().GetResult() if we wanted to, for example, flush a stream asynchronously. Now, with the new IAsyncDisposable, we can implement this interface, and its DisposeAsync returns a ValueTask. The benefit of the ValueTask is that it's basically a discriminated union of a task that only gets allocated on demand, if you're really on an asynchronous path. If we're completing synchronously, which we can here, there are no allocations, which is a really nifty thing they introduced. So now, if we implement IAsyncDisposable, we can properly, for example, FlushAsync the stream, DisposeAsync the streams if we have them, or execute any other kind of async operation here.
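A sketch of the pattern (the class name is illustrative, not from the talk): DisposeAsync returns a ValueTask, so we can flush and close the stream asynchronously instead of blocking with GetAwaiter().GetResult().

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

public sealed class FlushingDisposable : IAsyncDisposable
{
    readonly Stream stream = new MemoryStream();

    public async ValueTask DisposeAsync()
    {
        // Async cleanup without async void and without sync-over-async blocking.
        await stream.FlushAsync().ConfigureAwait(false);
        await stream.DisposeAsync().ConfigureAwait(false);
    }
}
```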
37:29 Daniel Marbach
The syntax is a little bit awkward. What we have to write here is "await using," and then we new up our CorrectDisposable, the class that implements IAsyncDisposable and wants to dispose async resources.
37:44 Daniel Marbach
Only when we implement IAsyncDisposable can we write the await keyword in front of the using; the compiler makes sure the interface is properly implemented. And then, of course, we have to write ConfigureAwait(false) somewhere. So we have to write it, and it's a bit awkward: after "new CorrectDisposable," we write ".ConfigureAwait(false)."
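The awkward shape looks roughly like this (a minimal sketch; the class body is mine, only the CorrectDisposable name comes from the talk). ConfigureAwait(false) is written on the using line because it configures the await the compiler emits for DisposeAsync:

```csharp
using System;
using System.Threading.Tasks;

public sealed class CorrectDisposable : IAsyncDisposable
{
    public bool Disposed { get; private set; }

    public ValueTask DisposeAsync()
    {
        Disposed = true;
        return default; // synchronous completion: no task allocation
    }
}

public static class AwaitUsingDemo
{
    public static async Task<bool> Run()
    {
        var disposable = new CorrectDisposable();
        // ConfigureAwait(false) goes after the expression on the using line.
        await using (disposable.ConfigureAwait(false))
        {
            // use disposable here
        }
        return disposable.Disposed;
    }
}
```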
38:10 Daniel Marbach
Why do we have to write it here? Any guesses? What is the using statement? Or, let's say, what does the using statement turn into if you express it with regular C# code? Try-finally, exactly.
38:29 Daniel Marbach
Who writes the try-finally? The compiler, right? Okay. But the code is written by the compiler, and I need to influence that code. Which code do I need to influence? Remember, try-finally. Inside the finally block, we do "await something async." The continuation of this code is whatever comes after the finally block, which is the code, so to speak, after this curly brace. Okay?
39:04 Daniel Marbach
But the code is written by the compiler. And that's why we need to tell it here, "Hey, please, when you write the code with the try-finally and you await the DisposeAsync of the IAsyncDisposable implementation, please disable the context capturing for that code." Okay?
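Roughly, and this is an approximation of the compiler's lowering, not its exact output, "await using" expands to a try-finally whose finally block awaits DisposeAsync; the ConfigureAwait(false) we wrote on the using line ends up on that compiler-written await:

```csharp
using System;
using System.Threading.Tasks;

public sealed class CorrectDisposable : IAsyncDisposable
{
    public ValueTask DisposeAsync() => default;
}

public static class LoweringDemo
{
    // Roughly what the compiler writes for
    // "await using (var d = new CorrectDisposable().ConfigureAwait(false)) { ... }":
    public static async Task Run()
    {
        var d = new CorrectDisposable();
        try
        {
            // body of the await using block
        }
        finally
        {
            // This await is compiler-generated, which is exactly why the
            // ConfigureAwait(false) has to be specified on the using line.
            await d.DisposeAsync().ConfigureAwait(false);
        }
    }
}
```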
39:28 Daniel Marbach
And, of course, you can implement both interfaces, IAsyncDisposable and IDisposable. You can do that, but the compiler-generated code will only execute, as you can see here, DisposeAsync. So if you write "await using," it will only call DisposeAsync. Only if you then remove the await from the using will it actually call the Dispose method of IDisposable.
40:00 Daniel Marbach
So don't implement both, because we return a ValueTask on the interface. It's totally valid to only do synchronous work in DisposeAsync, and you will not be hurt by task allocations, because the ValueTask already has that case basically internalized: you can execute stuff synchronously and you don't have the allocation problem. Cool.
40:28 Daniel Marbach
Let's... The timer's gone, how much... All right, five minutes. Cool. Now, let's have a look at another feature of C# 8.0: we are now getting async streams. Async streams is, for me personally, one of the coolest async features, because it enables so many super nice scenarios. Let me show you that.
40:51 Daniel Marbach
What we have here is a method, ReadDelays, and it returns an IAsyncEnumerable. Let's have a quick look. When we return an IAsyncEnumerable, we can now finally write "yield return" and "yield break." Previously, we couldn't do that in a body that executed async operations. Now we can. So I'm showing you here: I'm reading a file, I'm opening up a stream reader, and inside that file I have milliseconds, 100 milliseconds, 200 milliseconds, 400 milliseconds and so on. I read every line asynchronously from that file stream, and then I yield return those milliseconds converted into an int.
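A sketch of that ReadDelays idea (the file path and exact parsing are assumptions): an async body that can finally use yield return, streaming each parsed line to the caller as it is read.

```csharp
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;

public static class AsyncStreamDemo
{
    // Reads one line at a time asynchronously and streams each parsed
    // delay to the caller via yield return.
    public static async IAsyncEnumerable<int> ReadDelays(string path)
    {
        using var reader = new StreamReader(path);
        string line;
        while ((line = await reader.ReadLineAsync().ConfigureAwait(false)) != null)
        {
            yield return int.Parse(line);
        }
    }
}
```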
41:42 Daniel Marbach
What's going to happen? The compiler creates a state machine that is async-enabled, and we can use yield return to continuously stream results coming from that file to the caller. By doing so, we can essentially have await foreach, and for every milliseconds value that we get, we can execute more async code.
42:08 Daniel Marbach
So if we execute this one, what's going to happen is we're continuously getting milliseconds from the file, and as you can see, this code basically continuously slows down while it's foreaching. And because the generated state machine is async-enabled, we can also pass in CancellationTokens, and then the compiler is going to do a little bit of magic, which I'm going to explain briefly.
42:36 Daniel Marbach
So when we put in the CancellationToken, we can also call WithCancellation with a token source. What WithCancellation does is, when the compiler gets the async enumerator from the generated code, it provides the token that you passed into the WithCancellation extension method to that GetAsyncEnumerator call.
43:01 Daniel Marbach
If you don't call it, then it will pass in the default CancellationToken, which is basically an empty struct. Again, if we use the await foreach, we have to do ConfigureAwait(false). Quick reminder: a foreach loop is basically nothing other than a while loop, right? It gets turned into a while loop, or a state machine, so here too the compiler creates code for us, and we need to tell it, "I want to opt out from context capturing because I don't need it." Then we can properly cancel.
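The consumer side then looks roughly like this (a sketch with an illustrative in-memory producer standing in for the file-based one): WithCancellation hands the token to GetAsyncEnumerator, and ConfigureAwait(false) opts the compiler-generated loop out of context capturing.

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

public static class ConsumerDemo
{
    // Illustrative producer standing in for the file-based ReadDelays.
    static async IAsyncEnumerable<int> Delays()
    {
        foreach (var ms in new[] { 100, 200, 400 })
        {
            await Task.Yield();
            yield return ms;
        }
    }

    public static async Task<int> Run(CancellationToken token)
    {
        var sum = 0;
        await foreach (var ms in Delays().WithCancellation(token).ConfigureAwait(false))
        {
            sum += ms;
        }
        return sum;
    }
}
```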
43:34 Daniel Marbach
For example, as you can see here, I canceled after the 3,200 milliseconds; or if I cancel the other one, after the 2,400 milliseconds, it will also cancel. I'm not going to show this because it takes too much time, but this is enabled by passing this EnumeratorCancellation attribute. What the EnumeratorCancellation attribute does is tell the compiler to create a combined CancellationTokenSource out of the two tokens: one is the one that I pass into this method, and the other one is the one that I passed with WithCancellation. Okay?
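In code, the attribute sits on the token parameter of the async iterator (a hedged sketch; the doubling-delay body is my stand-in for the talk's example):

```csharp
using System.Collections.Generic;
using System.Runtime.CompilerServices;
using System.Threading;
using System.Threading.Tasks;

public static class EnumeratorCancellationDemo
{
    // [EnumeratorCancellation] tells the compiler to combine the token handed
    // to GetAsyncEnumerator (e.g. via WithCancellation) with the one passed
    // directly to this method, so either token can cancel the stream.
    public static async IAsyncEnumerable<int> Delays(
        [EnumeratorCancellation] CancellationToken token = default)
    {
        for (var ms = 100; ; ms *= 2)
        {
            await Task.Delay(ms, token).ConfigureAwait(false);
            yield return ms;
        }
    }
}
```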
44:19 Daniel Marbach
Why is this useful? Well, let me show you a real-world example, without going much more into the details of the await foreach, for timing reasons. For a project that we do, we need to parse NuGet metadata from the NuGet APIs in order to download packages. What you can do is combine the streaming nature with the async concurrency nature and create really, really cool code.
44:52 Daniel Marbach
For example, you can lock down the concurrency in the yielding code as well: you return an IAsyncEnumerable, you use a semaphore to lock down the concurrency, and you spin off all the GetMetadata calls to the NuGet server, per page. Let's say we get 1,000 NuGet packages from the server. We spin off, and this code does nothing more than an HttpClient get to that package; it downloads the metadata, and it limits the concurrency, because we've passed in the throttler with a maximum of 10 calls, so we only do 10 calls concurrently and don't overwhelm the NuGet server.
45:34 Daniel Marbach
Now what we can do is use a while loop here and say, "Well, wait for any of the tasks in this list to be completed, and once you have one, remove it from the list of currently executing tasks and yield return it." What that means is, for example, we spin off at most 10 HttpClient calls to the NuGet server, and if five of those happen to be completed in, let's say, a second, we return those five already-completed calls to the caller. Basically, the caller doesn't have to wait anymore for those packages to be downloaded; they just get the five pieces of metadata. Then we wait again, maybe 10 more are concurrently executed and done, and we immediately get 10 other NuGet packages.
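A hedged sketch of that shape (not the talk's actual NuGet code; the fetch delegate stands in for the real HTTP call): a SemaphoreSlim caps the concurrency, and Task.WhenAny lets us yield each result as soon as it completes, not in submission order.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

public static class ThrottleDemo
{
    public static async IAsyncEnumerable<string> GetMetadata(
        IEnumerable<string> packages,
        Func<string, Task<string>> fetch,
        int maxConcurrency = 10)
    {
        using var throttler = new SemaphoreSlim(maxConcurrency);

        // Spin off all calls; the semaphore ensures at most maxConcurrency
        // fetches are actually in flight at any moment.
        var inFlight = packages.Select(async package =>
        {
            await throttler.WaitAsync().ConfigureAwait(false);
            try
            {
                return await fetch(package).ConfigureAwait(false);
            }
            finally
            {
                throttler.Release();
            }
        }).ToList();

        // Stream each result to the caller as soon as it completes.
        while (inFlight.Count > 0)
        {
            var done = await Task.WhenAny(inFlight).ConfigureAwait(false);
            inFlight.Remove(done);
            yield return await done.ConfigureAwait(false);
        }
    }
}
```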
46:25 Daniel Marbach
What this means is that when we are building, for example, a client experience that needs to visualize the metadata from the NuGet servers, the results continuously stream into the client code as they're done. And briefly, hopefully the internet works, I can execute this on my machine, and then you can see I'm reading, and as the metadata is done within the concurrency constraints that I'm providing, the metadata packages are continuously flowing into the console. This is done by just leveraging, as you can see here, the magic of await foreach: read metadata packages, and we get as many as are concurrently complete.
47:13 Daniel Marbach
Okay. So, in any case, I have much more information and many more details, also about the await foreach. If you're interested to hear more, you can go to github.com/danielmarbach/Async.Netcore; there are also many more samples, pipelines, channels, and other things, because normally this presentation would run for two hours. There is also a README in there with all the explanations I gave you, should anything have been too quick for you. If you have other questions, if you're going home sitting in the train or, I don't know, public transportation and, "Oh, Daniel didn't talk about this one," I have business cards on this side and on this side of the stage. Feel free to grab a business card, send me emails over the weekend or next week or whenever you're ready, and I will kindly answer them. Of course, I'm also still here until maybe five, six p.m. If you have any other async questions, feel free to shoot them at me. Thank you very much.
48:10 Speaker 2
Daniel Marbach, ladies and gentlemen.