gRPC in .NET: Basics & More
About this video
This session was presented at GOTO Amsterdam 2024.
gRPC is Google’s implementation of RPC. With .NET Core 3.0, gRPC became a first-class citizen in .NET.
In my session, we will look at what gRPC is, and how to both create a gRPC service and consume it. We will also discuss the four modes of gRPC, versioning, and the options when it comes to hosting gRPC services.
🔗 Transcription
- 00:11 Poornima Nayar
- Thank you everyone for being here and thank you GOTO Amsterdam for having me as a speaker. This is my first time at this conference.
- 00:19 Poornima Nayar
- So as Frank said, feel free to ask any questions through the app. I can take some questions at the end of my session as well. That's all good. And of course, leave some feedback as well, because as speakers we cannot keep improving our content and the way we deliver our sessions without some feedback from the wonderful audience.
- 00:43 Poornima Nayar
- So for the next 15 minutes we'll focus on gRPC, specifically the gRPC for .NET implementation. We'll go through the gRPC modes, we'll speak about versioning, because at the end of the day gRPC is an API, and we'll speak about hosting. So the idea is that you go from learning about gRPC all the way to how we can host gRPC services and consume the API as well. And maybe, if we have time, we'll go through some bonus tips and other things at the end as well.
- 01:12 Poornima Nayar
- And why should you listen to me? Who am I? I'm Poornima Nayar. I'm a software engineer at Particular Software, where we build NServiceBus and the entire platform around it. I have come all the way from the UK; I live in Berkshire with my husband and ten-year-old daughter. I'm a Microsoft MVP for developer technologies. Outside of work, I read a lot, I spend a lot of time with my daughter, I speak at a lot of conferences and meetups, and I do have a day job. And as if all that was not enough, I'm also a student of Carnatic music vocals, which is a stream of Indian classical music. Should you wish to reach out to me via DMs, I am on LinkedIn @poornimanayar. That is my handle on Twitter as well.
- 01:54 Poornima Nayar
- So straight into the topic. gRPC is a modern, open-source, high-performance Remote Procedure Call framework. Straight out of the docs, that is how gRPC is described: it is open source, it's meant for high-performance Remote Procedure Calls, and there is an entire framework around it.
- 02:12 Poornima Nayar
- A little bit of history about gRPC. It's Google's implementation of RPC. Remote Procedure Call frameworks as such are not new; if you look back at RPC frameworks, you might date them all the way back to the 1970s or '80s, however you look at it. But gRPC, Google Remote Procedure Call, is pretty new. It started out as a project called Stubby, and it was made open source and named gRPC in March 2015. And with .NET we started getting official support for gRPC from .NET Core 3.0 onwards. So if you are looking at remote procedure call requirements for your project in modern .NET, this is the way to go, because we had WCF in the past, which was .NET Framework only, and we have CoreWCF, which is a community-supported project. But if you're after something more officially supported, gRPC is the way to go.
- 03:09 Poornima Nayar
- gRPC favors contract-based API development and is designed for HTTP/2 and beyond. Now, if you are migrating from the WCF world, you will actually see similarities. In fact, that was one of the ways Microsoft was pitching this, and that's how the transition happens as well. And where is gRPC used? Microservice-to-microservice communication: how can one service get data and information from another in the most efficient and performant way? That's where gRPC comes in.
- 03:40 Poornima Nayar
- IoT devices are another regular platform where gRPC is used. Polyglot environments, when you want to support multiple languages and frameworks. If you have streaming requirements for your project, gRPC is a good candidate as well. But it's not the ideal candidate for browser-based apps; we will talk about that briefly too.
- 04:04 Poornima Nayar
- So straight into some action with gRPC. Let us have a look at some code. Okey dokey. Can you read the code clearly? This is not what I want at this point of time. So I have this code available on my GitHub repos; it is a public repo if you want to go and have a look at it. In this basic demo I have a basic protos project, which is a class library with absolutely no dependencies as of now. But what I have in here is a proto file, a file with a .proto extension, and that is where you have the contract of your API. gRPC goes with contract-based API development, and this is the contract. This is the holy grail when it comes to gRPC.
- 04:51 Poornima Nayar
- And how you put together that contract is very specific in gRPC. It cannot be written in just anything you like; it is very specific to Google and it has a certain syntax. And at this point of time, what we are looking at is proto3.
- 05:06 Poornima Nayar
- So everything that you see in this file conforms to the proto3 syntax. With gRPC there's code generation, and the code that is generated, in my case, is C#. I can say, "Hey Google, generate the code for me and put it into the namespace that is suggested here," using the csharp_namespace option. So that is for the generated code.
- 05:29 Poornima Nayar
- Now, proto files themselves can be put into namespaces, and to avoid namespace clashes you can specify a namespace for a proto file using the package keyword. If you do not specify the csharp_namespace, what you specify as the package name here also doubles up as the C# namespace. In fact, it is considered a good practice to have a package name in place in every proto file that you have; at least the package name, even if you don't have the csharp_namespace.
- 06:03 Poornima Nayar
- And then you have your service definition. It is typically one service per proto file, and inside your service you can have as many RPC definitions as you want. Every RPC begins with the rpc keyword, then you have its name, then the request, and then the returns keyword with the response. With gRPC, the request as well as the response is always a single encapsulated object; you cannot have multiple parameters like you have in a typical function or method.
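To make the shape of such a contract concrete, here is a minimal proto3 sketch along the lines of the default greet.proto template the demo builds on; the exact contents of the repo's file may differ, and the csharp_namespace value is only illustrative.

```proto
// A sketch of a proto3 contract in the shape described above.
syntax = "proto3";

// Namespace for the generated C# code (illustrative value).
option csharp_namespace = "GrpcBasics";

// Package name: avoids clashes between proto files and doubles up
// as the C# namespace if csharp_namespace is not specified.
package greet;

// Typically one service per proto file.
service Greeter {
  // Every RPC: a single request message in, a single response message back.
  rpc SayHello (HelloRequest) returns (HelloReply);
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}
```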
- 06:37 Poornima Nayar
- And the requests and responses also form a part of the contract, and it is against this contract that you see here that everything is parsed, serialized and deserialized. This is language neutral and platform neutral, and everything is parsed against this particular contract that we put in place, which is the reason why you can have gRPC in polyglot environments.
- 07:04 Poornima Nayar
- So this is the contract. Now, how do I create a service out of it? I start with an ASP.NET Core project with this package in place, Grpc.AspNetCore, which is a metapackage that has references to the ASP.NET Core server-side library and the tooling which helps me generate the code that I want.
- 07:28 Poornima Nayar
- For the code generation to happen, I need to refer to my contract as an artifact. So sharing that contract file is very important, be it the server or the client. Here I am adding my proto file using the Protobuf element. The presence of a Protobuf element referencing a proto file is an indication that code must be generated, and the kind of code that gets generated depends on the GrpcServices attribute here. It can be client or server-side code, or both client and server-side code, that gets generated; it is the value here that drives it.
- 08:05 Poornima Nayar
- So we get some initial code generated, and you can see that in the obj folder here, where, for every proto file that you have, two different classes get created. In Greet.cs, if I have to follow the naming convention, it is basically the name of the proto file plus .cs, you have all the messages. So if you look here, you have the HelloRequest, and somewhere there you have the HelloReply as well. All the messages that you saw in the proto file get generated as POCOs. And there is a GreetGrpc.cs class, the proto file name plus Grpc.cs, where the actual service code generation happens. For the service I have in my proto file, an abstract class gets created here. So if I search for GreeterBase, that is the abstract class that gets created.
- 09:05 Poornima Nayar
- So the naming convention is, again, the service name followed by the word Base. And every RPC definition becomes a virtual method in that abstract class. At this point I do not have any implementation, so the minute I consume this service as such, I get an RpcException.
- 09:24 Poornima Nayar
- So we are following the language paradigms that we know here, abstract classes with virtual methods, which means that I can create a class which inherits from the GreeterBase class and override the virtual method to give it my own implementation. Now, the method signature varies with the gRPC mode, but one thing that is available to every gRPC service out of the box is a ServerCallContext object. This is where you can access the contextual information about the call: authentication information, the underlying HTTP context, information about the request headers, all of that is available in here. And at the end of the day, gRPC is middleware and it is a part of ASP.NET Core, which means your DI, authentication, authorization and logging functionality, all of that is available to you.
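A minimal sketch of what that override can look like in C#, assuming the generated Greeter.GreeterBase, HelloRequest and HelloReply types from a contract like the sketch above:

```csharp
using System.Threading.Tasks;
using Grpc.Core;

// Inherit from the generated abstract base class and override its virtual method.
public class GreeterService : Greeter.GreeterBase
{
    public override Task<HelloReply> SayHello(HelloRequest request, ServerCallContext context)
    {
        // ServerCallContext carries contextual information about the call:
        // request headers, deadline, cancellation token, peer address and so on.
        var peer = context.Peer;

        return Task.FromResult(new HelloReply { Message = $"Hello {request.Name}" });
    }
}
```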
- 10:17 Poornima Nayar
- And to tie it all up in ASP.NET Core, in Program.cs, we need to add the gRPC services to the DI container and also add the service that I have implemented to the request pipeline using the MapGrpcService method. So that is the server-side aspect covered for me.
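In minimal-hosting terms, that server-side wiring looks roughly like the following sketch, assuming a GreeterService class like the one above:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Register the gRPC framework services with the DI container.
builder.Services.AddGrpc();

var app = builder.Build();

// Add the implemented service to the request pipeline.
app.MapGrpcService<GreeterService>();

app.Run();
```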
- 10:36 Poornima Nayar
- And to consume this service in my client, I have a console app. Again, I have referred to the proto file using a Protobuf element, but this time the generated code is client-side code. And to support the code generation, I need these three packages: Google.Protobuf, which backs the generated C# message types; Grpc.Net.Client, the .NET client library; and Grpc.Tools, which is the actual tooling that generates the code for me.
- 11:03 Poornima Nayar
- Similar to the server-side code generation, there is client-side code generation happening. All the messages are POCOs, but there is something special that gets created on the client side: in GreetGrpc.cs there is a GreeterClient that gets generated. There we go. This generated client is what we use to communicate with the server.
- 11:27 Poornima Nayar
- So if I'm to see the implementation part of it, Program.cs, I start with creating what is known as a gRPC channel, which points to my localhost server. A gRPC channel is something like a long-standing connection between the client and the server. That is the part where the system knows where the server is; the rest of the system doesn't care about the server, it only invokes the method. A gRPC channel is quite an expensive object to create, so whenever you create a channel it's often reused across multiple clients, or even multiple types of clients, and all the requests can be simultaneously multiplexed over that same HTTP/2 connection.
- 12:14 Poornima Nayar
- Once we create a gRPC channel, we use that channel to create an instance of a GreeterClient, and using the client I can invoke my stub method. In my GreeterClient class there are also stub methods that get generated for me. The stub methods are nothing but a representation of the actual server-side method.
- 12:37 Poornima Nayar
- So in reality, this part looks like a local function call. I'm invoking a local function, but it actually goes and executes on the server, because the underlying gRPC channel takes care of it. That is location transparency: making a function call as though it were a local function call, when in reality it jumps the network and executes somewhere else. That is Remote Procedure Call for you.
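Put together, the client side can look something like this sketch; the localhost address and message contents are illustrative:

```csharp
using System;
using Grpc.Net.Client;

// A channel is a long-lived, relatively expensive object: create it once and reuse it.
using var channel = GrpcChannel.ForAddress("https://localhost:5001");

// The generated client and its stub methods hide the fact that the call crosses the network.
var client = new Greeter.GreeterClient(channel);

var reply = await client.SayHelloAsync(new HelloRequest { Name = "GOTO Amsterdam" });
Console.WriteLine(reply.Message);
```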
- 13:05 Poornima Nayar
- So let us run this program and see. The first thing to do is start the gRPC basic service. That is up and running now. Let's see where the demo code is. Yep, that's running. And the client: debug, start debugging. That has actually spoken to the server and got back some response for me. So the basic demo is working.
- 13:37 Poornima Nayar
- So let us move on. That is gRPC in a nutshell, but before moving on to other things, just reviewing what we just saw: we had the contract definition in the proto file, and the proto files must be shared between the server and the client as some kind of artifact for you to communicate with the gRPC service. There's code generated on the server as well as the client; we went through the abstract base class as well as the client stub methods.
- 14:04 Poornima Nayar
- You always require that generated gRPC client to communicate with the server, and the code generation that happens behind the scenes is done using a special compiler called the protoc compiler. The Grpc.Tools package that we just saw has this compiler baked in, along with the protoc plugin that understands C# and generates the C# code for me. All the code that is generated conforms to the Microsoft .NET standards laid out for C#, so that shouldn't be a worry there.
- 14:35 Poornima Nayar
- In fact, protoc understands a lot more languages than we think. All of these are languages which protoc natively understands and Google supports out of the box, and there are about 90 different third-party add-ons available as well if you want. So you have your contract, which is language neutral and platform neutral, you generate your code based on that, and you have everything parsed and validated and serialized and deserialized against that language-neutral contract, which is what makes gRPC a good candidate for polyglot frameworks and environments.
- 15:14 Poornima Nayar
- So, talking about one of the key terms with gRPC, that is protocol buffers. The name actually originates from a class that Google had, which used to be a buffer. But in reality, protocol buffers are Google's open-source mechanism to serialize structured data. And as I said, it is platform neutral and language neutral, and it is incredibly smart because you can actually shape it to be both backwards as well as forwards compatible.
- 15:40 Poornima Nayar
- And everything you saw in that proto file, that is the service alongside the messages, all of that together forms protobufs. This is an extensive topic in itself; you have an entire website of documentation just on protobuf. It's not something that you can read and digest over a cup of tea. It's quite advanced.
- 16:03 Poornima Nayar
- But for simplicity's sake, let us talk about it today as an interface definition language as well as the message exchange format. The requests and responses with gRPC are in the form of messages; this is what you saw in the proto file as message. An example is here. Each message is a record of key-value pairs, with each field having a type, a name and a number attached to it. That number must be unique per field, and once your messages are in use, that is, your service is alive and running, that number must not be changed, because breaking that leads to bad things happening. That is because messages are transmitted in binary format with gRPC, and unlike JSON, where the value is transmitted on the wire against the name of the field, here the value of the field is actually transmitted against the number. So that is how protocol buffer messages are transmitted across the wire.
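As a hedged illustration of such a message (the field names are invented for the example, and the temperature field echoes the one discussed next):

```proto
// Each field has a type, a name and a field number. On the wire, values are
// keyed by the number, not the name, so numbers must never change once in use.
message SendReadingRequest {
  string device_id   = 1;
  double temperature = 2;
  int32  humidity    = 3;
}
```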
- 17:04 Poornima Nayar
- The fun part is, as a client, if I don't set, say, the temperature field on the client side, then when I issue the request that field is not even transmitted across the wire. When it gets to the server side, the server has the contract, so it actually tries to parse the incoming request and deserialize it against the contract that is in place. It finds, aha, temperature is missing. So what it does, before handing the request over to the next step, that is the invocation and execution, is set that field to the default value for that type. In this case, temperature will be set to the default value for double. So with scalar types in protobuf there's no concept of null. That was something new to me as a developer as well. But there are cases where null is possible too.
- 17:59 Poornima Nayar
- Some other things. You can technically have field numbers ranging from 1 up to 536,870,911, although 19,000 to 19,999 are specifically reserved for the protocol buffers implementation. But of course, when you start having these kinds of field numbers, consider readability and maintainability as well as encoding efficiency, because when the message is encoded, field numbers 1 to 15 take up one byte and the rest take up more. So the higher the field number, the more space it takes up in the encoding as well. And that's not something that we want, because gRPC is meant for performant service-to-service communication.
- 18:48 Poornima Nayar
- And what are the types available for protobufs? These are the various different types and these are the corresponding C# types. With bytes, you can see that it maps to something that is not a type we have in the .NET space; it's a special class that comes with the protobuf library.
- 19:04 Poornima Nayar
- Another word about the first six types, the int32 through to sint64: they use what is known as variable-length encoding, so a value only takes up as much space as it needs. They are not your typical four-byte and eight-byte integers and numbers.
- 19:23 Poornima Nayar
- If you have a HelloReply message in that format, you can represent it in C# like so. That is plain old C# and .NET for us. You can also have enums in protobuf, and they get generated as C# enums. You can have nested types; in this example I have a message called NestedMessage, which actually forms a field of the HelloReply. That is also possible. In this instance, you have to create a new instance of NestedMessage when I form a HelloReply object, otherwise it will be null. So you have to instantiate it before you use it. Again, plain old .NET for us.
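In the generated C#, that plays out roughly like this sketch; the Nested and Detail member names are hypothetical and only stand in for whatever the slide's example uses:

```csharp
// Message-typed fields default to null in the generated C#,
// so nested messages must be instantiated before use.
var reply = new HelloReply
{
    Message = "Hello",
    Nested = new NestedMessage { Detail = "extra info" } // hypothetical members
};
```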
- 20:03 Poornima Nayar
- You can have collections as well in your messages. The first type is repeated; repeated represents a list or an array. And to make sure that, again, no null values are possible, the way that is achieved is that it doesn't have any public setter, there's only a getter. Which means that you can only append to the list; you cannot replace the list, you cannot set it to null, you can only ever append to it. But any scalar type or nested message can be repeated.
- 20:34 Poornima Nayar
- You also have dictionaries, or dictionary-like things, with protobuf, using the map keyword. And again, it doesn't have a setter, only a getter, which means you can only add to it. And of course the key always needs to be an integer or string type, but the value can be pretty much anything. You might have noticed there is no date-time type supported. For that, we need to bring in what are known as the Google well-known types. This is just one of the Google well-known types; there are many others.
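In the generated C# types, that append-only behavior looks roughly like this; ReadingsBatch, Reading and the field names are hypothetical, invented purely for illustration:

```csharp
// Hypothetical message with 'repeated Reading readings = 1;' and
// 'map<string, string> attributes = 2;' fields.
var batch = new ReadingsBatch();

// 'repeated' fields are generated as get-only RepeatedField<T>:
// you can add to them, but you cannot replace the collection or set it to null.
batch.Readings.Add(new Reading { Temperature = 21.5 });

// 'map' fields are generated as get-only MapField<TKey, TValue>, same idea.
batch.Attributes["location"] = "Amsterdam";
```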
- 21:09 Poornima Nayar
- In this example I'm importing what is known as timestamp.proto, using the import keyword and the path; that is how you import a well-known type into your project. The timestamp.proto and duration.proto imports give you access to the Timestamp and Duration protobuf types, and you can convert any .NET DateTime into a Timestamp, or a TimeSpan into a Duration. The underlying classes are, again, Google protobuf types, not the .NET types.
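A small sketch of using those well-known types from C#; note that Timestamp.FromDateTime expects a UTC DateTime:

```csharp
using System;
using Google.Protobuf.WellKnownTypes;

// .NET DateTime -> protobuf Timestamp (the DateTime must be UTC) and back again.
Timestamp takenAt = Timestamp.FromDateTime(DateTime.UtcNow);
DateTime roundTripped = takenAt.ToDateTime();

// TimeSpan <-> protobuf Duration works the same way.
Duration interval = Duration.FromTimeSpan(TimeSpan.FromMinutes(5));
TimeSpan back = interval.ToTimeSpan();
```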
- 21:40 Poornima Nayar
- So that is messages and protobufs in a nutshell, and now we can talk about gRPC modes. With gRPC, of course, there's a client and a server, but beyond that there are multiple ways in which the client and server can talk. The first one is unary, the typical request-response that we just spoke about: the client sends a request message to the server and the server responds back with a response message. And the RPC definition can look something like this: it's a single object that goes in and a single message that comes back.
- 22:21 Poornima Nayar
- The idea that I'm trying to build around the demo is some kind of a device sending a reading to the service for processing. That is the idea that I'm building here. So the call starts with the client sending the message, the server processes it, sends back some kind of a response, and that is the call done, the call complete. And with the .NET gRPC implementation on the client side, when the stub methods are generated you get access to two different stub methods: one is the asynchronous, non-blocking call and the other is the synchronous, blocking call. If you want access to the response headers, you need to make use of the async version. I'll go through all the modes and then show you the behind the scenes.
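For a unary call, the two generated stub flavors look roughly like this sketch; the Readings service and SendReading names are assumptions standing in for the demo's actual contract:

```csharp
using Grpc.Core;
using Grpc.Net.Client;

using var channel = GrpcChannel.ForAddress("https://localhost:5001");
var client = new Readings.ReadingsClient(channel); // hypothetical generated client

// Synchronous, blocking stub: returns the response message directly.
SendReadingResponse r1 = client.SendReading(new SendReadingRequest { Temperature = 21.5 });

// Asynchronous, non-blocking stub: returns a call object that also exposes
// the response headers, status and trailers.
using var call = client.SendReadingAsync(new SendReadingRequest { Temperature = 21.5 });
Metadata headers = await call.ResponseHeadersAsync;
SendReadingResponse r2 = await call.ResponseAsync;
```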
- 23:04 Poornima Nayar
- The next mode is server streaming. This is where the streaming capabilities come in. The client sends a request message and the server sends back a stream of responses. So this is perceived performance: breaking big chunks of data into small pieces and sending them back to the client.
- 23:20 Poornima Nayar
- The RPC definition remains the same except for the fact that there's a stream keyword in front of the response message here. Here, the idea is that maybe the device, or some kind of a dashboard, is asking for all the readings on a particular date. There might be a huge number of readings, so the server streams that information back to the client.
- 23:44 Poornima Nayar
- Some notes. With this scenario, that is server streaming, the client sends in a request message to the server, and at that point the stream is open. The server can then place messages into the stream, at which point they are immediately available to the client. Once the streaming has started, the client cannot send in any further messages, which means it can turn into a scenario where there is a long-running server process.
- 24:14 Poornima Nayar
- So in order to tackle that, what clients can do is specify either a deadline or send in a cancellation token. A deadline is saying: I will wait for this amount of time, and after that I won't wait for you. At that point a cancellation is signalled on the server via its cancellation token, and you can cancel all the child operations; or the client explicitly cancels the call using a cancellation token.
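A sketch of both options on the client, using the deadline and cancellationToken parameters that the generated call methods accept; the request shape is hypothetical:

```csharp
using System;
using System.Threading;
using Grpc.Net.Client;

using var channel = GrpcChannel.ForAddress("https://localhost:5001");
var client = new Readings.ReadingsClient(channel); // hypothetical generated client

// Option 1: a deadline. If it expires, the server sees context.CancellationToken
// being signalled and can cancel its child operations.
using var call = client.GetReadings(
    new GetReadingsRequest { Date = "2024-06-10" },   // hypothetical request shape
    deadline: DateTime.UtcNow.AddSeconds(5));

// Option 2: explicit client-side cancellation via a CancellationToken.
using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(5));
using var cancellableCall = client.GetReadings(
    new GetReadingsRequest { Date = "2024-06-10" },
    cancellationToken: cts.Token);
```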
- 24:42 Poornima Nayar
- So that is server streaming. Then you can have client streaming, where the client streams information to the server, and the server processes it and then sends some kind of response message back. Here, the stream keyword is in front of the request message. The idea here is that I am streaming readings to the server for processing. I don't know for what reason I thought of such a scenario, but hey, that's fine for the day, I guess.
- 25:12 Poornima Nayar
- So with this, there's no actual message sent to the server to start the streaming; the method is simply invoked and then the client keeps placing messages into the stream. And then, when the client is done with all the streaming, it tells the server, "Hey, I am done with all the streaming from my end. You can go do whatever you need to do now." At that point the server processes the messages and sends back the response to the client, along with the status and maybe some trailers as well.
- 25:47 Poornima Nayar
- So this is ideal if you want to, say, upload a large file or do video encoding, because you can send data in chunks to the server. And then there is a mixture of these two scenarios, that is bidirectional streaming, where the client streams as well as the server. Here you have the stream keyword in front of both the request and the response message. It's probably something like processing a stream of readings from the client, with the server saying, "Yep, I have acknowledged your message," or something of that sort.
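The four shapes of RPC definition differ only in where the stream keyword appears. A hedged sketch of what a readings contract could declare; the names are illustrative and the demo repo's protos may differ (message definitions omitted):

```proto
service Readings {
  // Unary: single request, single response.
  rpc SendReading (SendReadingRequest) returns (SendReadingResponse);

  // Server streaming: single request, stream of responses.
  rpc GetReadings (GetReadingsRequest) returns (stream GetReadingsResponse);

  // Client streaming: stream of requests, single response.
  rpc UploadReadings (stream UploadReadingRequest) returns (UploadReadingsResponse);

  // Bidirectional streaming: both sides stream independently.
  rpc ProcessReadings (stream ProcessReadingRequest) returns (stream ProcessReadingResponse);
}
```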
- 26:19 Poornima Nayar
- So again, the client invokes the method here. You can have independent client and server streams in this process: either the client and server can stream simultaneously, which is a more complex scenario, or the client can send in a stream of information, or a single message, and for every packet of information received the server can send some response back as well. The call is complete when the server has also completed its streaming and all the responses and the metadata, the headers, everything has been sent back.
- 26:54 Poornima Nayar
- So let's have a look at some code here. That is in my modes two project. It's the same kind of proto setup that I had, except that I have broken it down into four different proto files. Starting with readings.proto, this is where I have the unary one, where I'm sending in a reading and then accepting a response. I have the server streaming one, where we are getting readings for a particular date. We have the client streaming one, where the client is sending some data to the server for it to be processed. And then you have bidirectional streaming, some kind of readings being processed in both directions with streaming happening.
- 27:42 Poornima Nayar
- So it is consumed in the same way in my ASP.NET Core application, the way I showed it to you before. So in my service, starting with the unary one, it is nothing big in implementation; all I'm sending is a response back. But as you have seen in my basic demo, there is a simple request and the ServerCallContext object available to me, and what I'm responding with is a sent-reading response. And on the client, that is the unary client here, I create a channel based on an address. I have already deployed my service to Azure, so this is where I have deployed it to. I create a channel and a unary client and send my reading in.
- 28:33 Poornima Nayar
- And let's have a quick look at the demo and see whether that works. gRPC unary. It should send and give me a response. It has. If you want the asynchronous, non-blocking call, you have to implement something like this. Ctrl K, U. So I store the call in a variable using the async method, and then I can await that call to get the response. If I want access to the headers, again, I can await a call to ResponseHeadersAsync and then write that out to the console as well. So if I comment this out, okay, see. Oh, what did I do? Let's delete that. I deleted the wrong bit. There we go. So that should print out. Did that run? Yeah. So it has given me some response, and these are the headers that are coming back, along with my response header. So that is also getting printed out.
- 29:58 Poornima Nayar
- Now, going back to the server streaming method on the server: here you can see that the method signature is slightly different. I am returning a Task, but I have access to the request message that comes in, I also have a server stream writer, that is the response stream to which the server places the messages, and also the ServerCallContext object, which is available in every mode of gRPC for me.
- 30:24 Poornima Nayar
- I have a little bit more code here because I'm trying to add something to the request headers as well. Find me after the talk if you want me to go through that code, but I'll keep it to just the streaming call for now. So here, what I'm doing is checking whether a cancellation has been requested, as part of deadlines. If there's no cancellation requested, then I am creating a GetReadingResponse instance and placing it directly onto the response stream. And when it is placed there, it is immediately available to my client, which is here. Where is it? Server streaming.
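A trimmed-down sketch of such a server-side server-streaming method, assuming a GetReadings RPC and generated ReadingsBase class along the lines of the contract sketch above:

```csharp
using System.Threading.Tasks;
using Grpc.Core;

public class ReadingsService : Readings.ReadingsBase // hypothetical generated base class
{
    public override async Task GetReadings(
        GetReadingsRequest request,
        IServerStreamWriter<GetReadingsResponse> responseStream,
        ServerCallContext context)
    {
        for (var i = 0; i < 10 && !context.CancellationToken.IsCancellationRequested; i++)
        {
            // Each message written here is immediately available to the client.
            await responseStream.WriteAsync(new GetReadingsResponse { Temperature = 20 + i });
            await Task.Delay(500);
        }
    }
}
```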
- 31:03 Poornima Nayar
- So in the Program.cs, again, it starts with creating a channel and creating a call, and I can await the call to get the response headers, and here, where is it, using the ReadAllAsync method, which is on the streaming call's response stream, that is where the streamed data appears from the server. I can keep iterating till the end of the stream and write the responses out to my console. So if I have to see that in action, it should get me that information back quite steadily; it should, I think, go through some 10 readings or something. So that is the gRPC call, and that is the metadata that is coming back with the trailers and the status code. So that is server streaming.
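And a sketch of the corresponding client side, using ReadAllAsync to iterate the response stream; the address and request shape are illustrative:

```csharp
using System;
using Grpc.Core;
using Grpc.Net.Client;

using var channel = GrpcChannel.ForAddress("https://localhost:5001");
var client = new Readings.ReadingsClient(channel); // hypothetical generated client

using var call = client.GetReadings(new GetReadingsRequest { Date = "2024-06-10" });

// Response headers are available as soon as the server sends them.
Metadata headers = await call.ResponseHeadersAsync;

// ReadAllAsync iterates the response stream until the server completes it.
await foreach (var reading in call.ResponseStream.ReadAllAsync())
{
    Console.WriteLine($"Received reading: {reading.Temperature}");
}

// Status and trailers are only available once the stream has completed.
Console.WriteLine(call.GetStatus().StatusCode);
```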
- 31:56 Poornima Nayar
- Similarly, you have client streaming, where I have access to the incoming request stream and I send back a readings response. Again, the main part of the code here is looping through that request stream; I'm using the MoveNext method here. With client streaming, the client actually notifies the server when it is done with the streaming, at which point this MoveNext will return false. Till then, this will continue looping, and when it turns false, this response is sent back to the client. So to see that in action, the client code, Program.cs: again, there is a gRPC channel and invoking the call.
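A hedged sketch of what such a client-streaming server method can look like, again assuming hypothetical UploadReadings names:

```csharp
using System.Threading.Tasks;
using Grpc.Core;

public class ReadingsService : Readings.ReadingsBase // hypothetical generated base class
{
    public override async Task<UploadReadingsResponse> UploadReadings(
        IAsyncStreamReader<UploadReadingRequest> requestStream,
        ServerCallContext context)
    {
        var count = 0;

        // MoveNext returns false once the client signals it has completed its stream.
        while (await requestStream.MoveNext(context.CancellationToken))
        {
            var reading = requestStream.Current;
            count++;
        }

        // Only now does the single response go back to the client.
        return new UploadReadingsResponse { ProcessedCount = count };
    }
}
```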
- 32:43 Poornima Nayar
- So here is where I am sending the readings, and to send the readings and place the data into the request stream, I simply loop from one to ten and keep writing some data into the request stream of the call. And to notify the server that I'm done with streaming, this is the line of code that does it: I have completed the streaming from my end, go do whatever you need to do. So to see this in action, the client streaming is here. It is sending the messages; it should send up to ten messages, I think, and then send me back some information. That's it. And that's the response metadata that is received from the server.
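And the matching client side, writing to the request stream and then completing it; names and values are illustrative:

```csharp
using System;
using Grpc.Net.Client;

using var channel = GrpcChannel.ForAddress("https://localhost:5001");
var client = new Readings.ReadingsClient(channel); // hypothetical generated client

using var call = client.UploadReadings();

// Place messages onto the request stream one by one.
for (var i = 1; i <= 10; i++)
{
    await call.RequestStream.WriteAsync(new UploadReadingRequest { Temperature = 20 + i });
}

// Tell the server we are done streaming; only then does it send its response.
await call.RequestStream.CompleteAsync();

var response = await call.ResponseAsync;
Console.WriteLine($"Server processed {response.ProcessedCount} readings");
```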
- 33:32 Poornima Nayar
- Finally, bidirectional streaming. Here, the method signature has a stream reader as well as a stream writer. And the scenario that I'm sticking to is very simple: for every input message that comes in from the request stream, I'm writing back to the response stream. So I'm using requestStream.MoveNext to ensure that I haven't reached the end of the stream, and every time I loop, I'm writing something to the response stream. The code on the client is a little bit more involved, which is here, in Program.cs. I invoke the method and I immediately start listening to the server, and anything that is placed on the response stream I'm directly reading and writing out to the console. Because we never know: depending on your scenario, the server might have started streaming even before the client has started streaming.
- 34:25 Poornima Nayar
- And to stream from the client, again, I have a loop, placing something onto the request stream here. In this case too, I need to inform the server that the client has ended the request stream. So that is the bidirectional streaming code for you. And if I need to show you that in action: sending messages; that is client as well as server streaming, one after another. You can have simultaneous, completely independent streams as well. But regardless of the gRPC mode, because we are using HTTP/2, the order of messages, even in a stream, is preserved for you.
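A sketch of that bidirectional client pattern: start reading immediately, write independently, then complete the request stream; all the names are hypothetical:

```csharp
using System;
using System.Threading.Tasks;
using Grpc.Core;
using Grpc.Net.Client;

using var channel = GrpcChannel.ForAddress("https://localhost:5001");
var client = new Readings.ReadingsClient(channel); // hypothetical generated client

using var call = client.ProcessReadings();

// Start listening straight away: the server may stream before the client does.
var readTask = Task.Run(async () =>
{
    await foreach (var ack in call.ResponseStream.ReadAllAsync())
    {
        Console.WriteLine($"Server acknowledged: {ack.Message}"); // hypothetical field
    }
});

// Stream requests independently of the reads.
for (var i = 1; i <= 5; i++)
{
    await call.RequestStream.WriteAsync(new ProcessReadingRequest { Temperature = 20 + i });
}

// Signal that the client side of the stream is complete, then wait for the server to finish.
await call.RequestStream.CompleteAsync();
await readTask;
```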
- 35:11 Poornima Nayar
- Talking about metadata: just like you have request and response headers in an HTTP API call, you have access to those in gRPC as well. There are response headers that are sent alongside the response, but there is more information that you can have as metadata with gRPC, because gRPC makes use of HTTP/2 trailers. Trailers are a part of the HTTP toolkit, HTTP/1.1 has them as well, but it is in gRPC that they are properly made use of.
- 35:47 Poornima Nayar
- There's a difference between headers and trailers when it comes to the response, because response trailers are served only once the response is fully complete. They are always served after the response has gone back, that is, probably once the server has finished streaming in server streaming and bidirectional streaming. Response headers contain information about the call itself; if you looked at one of the demos that I had, it is information like the date and the host that goes back in the response headers. You can have more, but response trailers are about the response itself. Perhaps during the streaming process the server timed out or had some kind of an error; all such exceptions can go into the response trailers. That is where you have that information. So the code is in the GitHub repo if you want to have a look, or we can talk after the session.
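On the .NET client, that difference shows up in how you read them; a sketch using the async unary call object (on the server side, custom trailers can be added via context.ResponseTrailers):

```csharp
using System;
using Grpc.Core;
using Grpc.Net.Client;

using var channel = GrpcChannel.ForAddress("https://localhost:5001");
var client = new Readings.ReadingsClient(channel); // hypothetical generated client

using var call = client.SendReadingAsync(new SendReadingRequest { Temperature = 21.5 });

// Response headers: sent before/alongside the response, describe the call itself.
Metadata headers = await call.ResponseHeadersAsync;

var response = await call.ResponseAsync;

// Trailers and status: only available after the response has fully completed.
Metadata trailers = call.GetTrailers();
Status status = call.GetStatus();
Console.WriteLine($"{status.StatusCode}, {trailers.Count} trailer(s)");
```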
- 36:42 Poornima Nayar
- Talking about versioning: of course gRPC is an API, so we need versioning to make sure that we do not explicitly force the client to update. It should be a choice given to the client, and that is a good practice when it comes to any kind of API development. With versioning of services, the client and server can iterate at their own pace; the clients are not forced to update, and we can gradually introduce breaking changes at a pace the clients can digest and update to. It's not, "Hey, drop everything that you're doing, update your client, because I'm going to introduce breaking changes." And the best thing about gRPC is that versioning is something baked into the whole API contract itself. It is a little thing that you need to take care of, and that versioning can go a long way for you.
- 37:32 Poornima Nayar
- Before we look at the code, some things to know about what contributes to breaking as well as non-breaking changes. The non-breaking changes: of course, if you add a new field to the request or the response, that's not a breaking change. If the server adds something to the request message, the client might not be setting it because the client has not updated, so when it gets to the server it is set to the default value for that particular field type before it is moved on to execution.
- 38:00 Poornima Nayar
- Similarly, if the server adds a new field to the response, the client side has a contract as well, so it tries to deserialize the message that it has received. It doesn't find the field, it cannot understand it, so it is marked as an unknown field and then just discarded.
- 38:18 Poornima Nayar
- So generally speaking, adding a new field, a method or a service is considered a non-breaking change. I think this is the same with regards to any other API out there. But you can have binary breaking changes, which means that you need your clients to update if they get the latest proto file. For example, removing a field. Removing a field means introducing a default value, but what happens when you introduce a default value? There could be a behavior change. For example, if you're using a boolean field, an explicitly set value of false might actually mean something different from the default value for a boolean, which is also false. So based on that, you might have behavioral changes, and you need to be careful in such cases. Removing a field is a binary breaking change.
- 39:06 Poornima Nayar
- If you rename a field or a message, in certain instances it can be a binary breaking change. With protobuf, you can actually have a catch-all kind of field: I can mark a field with a special well-known type called Any, and I can pass in any type as the value of that field. And that's probably the only instance where, when the protocol buffer is transmitted over the wire, the name of the type is also transmitted. So in such cases, renaming a field or a message can be a binary breaking change and needs the client to update.
- 39:44 Poornima Nayar
- Changing the C# namespace, of course, changes all the code layout, so again, a binary breaking change. Nesting or un-nesting a message can be a binary breaking change because it changes the message name. If you have a message that is explicitly declared as its own message and then referred to, like I showed you in my example, that is fine. But there are instances where you can have a message as a part of another message, where you have the message definition inside another message definition.
- 40:14 Poornima Nayar
- In such cases, if you nest or un-nest something, the message name changes, so you need to update the client. And what constitutes completely breaking changes? Changing the field data type: certain field data types are compatible, but if you have non-compatible ones, then things are going to break. Changing the field number, because the field numbers are transmitted over the wire, constitutes a breaking change. Renaming a package, service or method is a breaking change; it completely breaks things. So is removing a service or a method, or even, as I said, renaming a package.
- 40:51 Poornima Nayar
- So why is this package keyword important? Let's have a look. Show taskbar, and into my versioning demo, which is here. So here I have got two protos, in V1 and V2. For simplicity's sake, I have kept everything in a single project. The greet.proto is the out-of-the-box Visual Studio template; the most important thing is the package name that I have got here. I don't have a csharp_namespace, so greet.v1 is going to double up as my C# namespace as well.
- 41:24 Poornima Nayar
- So this is my V1 proto, and suppose I go live with this and I have a V1 implementation of it, which is a very simple implementation, nothing special about it, a unary call. And of course I have got it all mapped into my gRPC service middleware as well. Let's run this and then have a look at the client, which is also based on this particular proto file, which is here. So in here I'm consuming the same V1 proto, I have some generated code, all of that, and I have a unary call making a call to the V1 version of the server.
- 42:08 Poornima Nayar
- So if I now try and run it, it's gone and spoken to my server. But what happens behind the scenes is that it is doing an HTTP/2 POST to a path that it has worked out: it is the base URL of my server, followed by greet.v1, which is nothing but the package name that you see here, then Greeter, which is the service name that you see here, and finally SayHello, which is the RPC name. So the package name, the service name and the RPC name are all a part of the URL that is worked out by the middleware, and changing any of that breaks the system.
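So the package name effectively version-stamps the route. A hedged sketch of what the V1 contract and the derived path look like:

```proto
syntax = "proto3";

// The package both namespaces the contract and becomes part of the route
// the middleware maps: POST https://<base-url>/greet.v1.Greeter/SayHello
package greet.v1;

service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
}

message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}
```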
- 42:56 Poornima Nayar
- So how does this package keyword help? Now, suppose we have another greet.proto with a V2 package name. I have my own implementation, GreetService V2, which is again very simple, and I've got it all mapped. And suppose I have another client consuming the V2 version of greet.proto. Is it still running? This. Okay, and let us start the V1 client as well as the V2 client.
- 43:32 Poornima Nayar
- So both of the clients have communicated with the server: in the first case with the V1 version, and in the second with the V2 version. So there are two different clients, each with a reference to a different proto file, consuming two different versions of the proto, and the middleware is making sure that they are talking to two different services. So just with the presence of the package keyword, we have managed to get that extensibility and versioning baked into our services.
- 44:05 Poornima Nayar
- So by the nature of gRPC, it is impossible for a V2-based client to communicate with the V1 version of the service, or for a V1-based client to communicate with the V2 version. So having that package keyword is a best practice to have on your service right from day one, and it is the kind of foundation stone for your versioning of services. As I said, versioning is something that gRPC takes care of for you, and it is a simple way to achieve it. Of course, understand where you actually need to implement versioning, and of course sunset your versions when you don't need them anymore.
- 44:47 Poornima Nayar
- So finally, hosting gRPC services. I have played around with hosting gRPC services on Azure. I have got it working on Azure Container Apps; of course, you need to go through a publish action with the .NET support that we have for Docker and get it out into Container Apps. But you also have Azure App Service, with the Linux plans completely capable of hosting gRPC-based services; that is generally available at this point. With the Windows plans, it is still in preview, so it is something that we can look forward to in the future.
- 45:25 Poornima Nayar
- I haven't played around with Azure Kubernetes Service and gRPC, but the docs say it is possible, so I'm going to leave it at that. But as you could see in my gRPC modes demo, when I explained all of that, all my client apps were talking to a deployed version of the service. And there are three very simple things that I had to do to get gRPC up and running on a Linux-based App Service. I actually deployed to Azure directly using Visual Studio, but for the gRPC Linux App Service there are three things. In the configuration, I need to make sure that the HTTP version is set to 2.0. Come on. Load.
- 46:12 Poornima Nayar
- And there's a special app setting that you need to set so that all the incoming HTTP/2 traffic listens on that port. Oh, there we go. So the HTTP version is set to 2.0, and there should be... sorry, this is not the one that I was looking for. This is the one I was looking for, sorry for that. Configuration. Load, load, load. Yeah, so this is the one where the HTTP 2.0 proxy should be set to gRPC only, and in the environment variables I need to have an HTTP/2-only port. It can be set to any arbitrary value, but that sets the port to listen on for HTTP/2 requests.
- 47:03 Poornima Nayar
- How are we doing for time?
- 47:04 Speaker 2
- Over.
- 47:06 Poornima Nayar
- Over? Okay. There was some bonus information, but if you want to have a look at the resources for the day, it's here. There's a lot more stuff in my demo for the day, which covers things like gRPC-Web and gRPC JSON transcoding, all of that. But all of the information that I have is available at this URL for you. If you have any questions, I can speak to you now, or if you want anything more from the demo to be shown, I'm happy to show that as well.