
Webinar recording

Decomposing .NET Monoliths with NServiceBus and Docker

Monoliths are hard work. They’re difficult to understand, brittle to change, time-consuming to test and risky to deploy. And you’re stuck with the monolith’s tech stack so you can’t use any modern architectures or technologies. Decomposing monoliths is easy if you take the right approach, and it results in a distributed solution with many small components. Those can be independently updated, tested and deployed, which means you release better software, more quickly.

In this webinar you'll learn about:

  • The right approach to decomposition - the feature-driven approach, powered by Docker and NServiceBus
  • How to run your monolith in a container
  • How to extract features into new components
  • How to plug components together with NServiceBus and run the whole stack in Docker containers

Transcription

00:00:00 Kyle Baley
Welcome everybody to Decomposing .NET Monoliths. I'm really happy to have presenting today Elton Stoneman. Elton's a developer advocate for Docker, an eight-time Microsoft MVP, and a book and Pluralsight author, and he's going to be telling us about his experience decomposing .NET applications with the focus on, of course, Docker and NServiceBus. He spends much of his time doing this sort of thing, traveling around the world, helping people understand what containers, DevOps and microservices can do for them with their existing applications.
00:00:42 Kyle Baley
So before I pass it off to Elton, just a quick note on the Q and A. There's a Q and A panel where you can ask questions. We're generally going to keep the answers to the end; in the last five or 10 minutes we'll answer the questions, unless there's something that's very relevant to what Elton's talking about. So don't be afraid to use the Q and A, and everybody who's registered will get a recording of the webinar that we'll send over, I believe, tomorrow.
00:01:17 Kyle Baley
So with that, I will turn it over to Elton. Welcome, Elton.
00:01:22 Elton Stoneman
Thank you very much. Thank you everyone for joining. My name's Elton. You've already had the introduction. So I work for Docker. Before I joined Docker I was a .NET consultant for a dozen years, building big .NET projects that we thought were top-of-the-range designs at the time, which it turns out were just big, ugly monoliths. And more recently I've been working on how you can break apart those monoliths, take your existing applications, and bring them into the modern world with modern architectures and modern technologies, without doing a big, huge rewrite that takes two years and delivers almost no business value.
00:01:56 Elton Stoneman
And that's what this session is about, but specifically focusing on NServiceBus and Docker. So there are three areas I'm going to cover. Firstly, I'm going to talk about how you run .NET applications in Docker containers. This is not about Linux or .NET Core; this is about full .NET Framework apps running in Docker containers on Windows. I'm going to show you how you take an existing application, package it to run in a Docker container, and how you can start that application up and run with it. So that's going to be just an existing monolith.
00:02:26 Elton Stoneman
The second part of the session, which is the bulk of it, is breaking down that existing monolith: breaking features out into separate components, running those components in their own containers, and then plugging everything together with NServiceBus. So we'll see how we can use all the typical messaging patterns that you're used to, how it all works with NServiceBus and Docker, and how the containers can all talk to each other nicely.
00:02:48 Elton Stoneman
And then the last thing I'll do, which is just a quick overview really, is look at why NServiceBus and Docker work so nicely together. With NServiceBus having pluggable transports, I can use different physical message queues without changing any of my code. With just a few lines of config in my Docker application manifests, I can switch to a completely different transport, whether that's for evaluating one or for having different transports in different environments. That makes it super easy, so that when I'm shipping to production it's the same code, the exact same binaries that I've tested in the other environments, but I can be using different transports with confidence.
00:03:23 Elton Stoneman
Okay, so firstly, Docker migration. Starting with my sample application: it's a really simple .NET application, a .NET web forms app. Originally it started life as a .NET 3.5 app, but I've upgraded it to 4.7.2 just so I can use cool stuff like .NET Standard libraries and some of the new C# features. I haven't changed any code; it would still compile as a 3.5 app. It uses a SQL Server database. So you'll see it's a really simple web application: it talks to SQL Server, loads some reference data, shows it on the webpage, and then it saves data to the database.
00:03:56 Elton Stoneman
Now, if I want to run this application, this monolith, I've got a whole bunch of prerequisites. Whether I'm a developer joining the team or an ops person who has to deploy it to a new environment, I've got a whole bunch of things I need to make the application work. I need the runtime, the .NET Framework. If I'm a developer, or if I'm setting up a build server, I need a tool chain: MSBuild and NuGet and all the targeting packs and all that sort of stuff. I need the host that's going to run it, so I need some flavor of web server, which is probably IIS, and then for the database I need something that's going to be compatible, so a SQL Server.
00:04:30 Elton Stoneman
Now, there's a huge array of options and different versions of all that stuff. There are at least six different versions of .NET, and the tool chain, the targeting packs, all those things have different versions. If I'm a developer I might be using IIS Express and SQL Server Express; in a test environment I might be using the developer editions; and in production it's SQL Server in a replicated environment. There's a whole lot of variety in getting the right versions of these tools in place. And actually, the bigger the team is, the more important it is that everyone's got the same versions, because if someone upgrades their Visual Studio and accidentally checks in a change to the solution file, then maybe the build is going to break, and you're going to have to hunt that down and get everyone upgrading. It's just an incredibly difficult matrix of things to keep synchronized just to get this thing working.
00:05:17 Elton Stoneman
So the first thing we're going to do is just move all these things to Docker. I'm going to run my web application in a container, I'm going to run SQL Server in a container, and then suddenly the only thing I need is Docker. So you'll see in the first demo that I'm about to move on to, I can use Docker containers to compile my application, so I don't need Visual Studio on my machine; I don't even need the .NET Framework on my machine. Inside the Docker container are all the build tools that I need, the full SDK to compile my application from source, and then also everything I need to run the application: ASP.NET, IIS, all that stuff is available in my web container.
00:05:54 Elton Stoneman
And I'm just going to run SQL Server in a container too, because what I'm going to show you to start with is just a development environment or a test environment. I don't need a replicated, highly available SQL Server; I just want to be able to run the same version of SQL Server that we've got in our production environments. So suddenly the only prereq I've got is Docker, and that makes things super easy. That's what I'm going to show you now.
00:06:13 Elton Stoneman
So, switching to my machine. What I've got here is a Windows Server 2016 virtual machine. Everything I'm going to show you is on GitHub; at the end of the session (you have to wait to the end) I'm going to show you a link to all the demos, so you can do this all yourself. You just need Windows 10 with Docker installed, or Windows Server 2016. Those are the minimum versions.
00:06:32 Elton Stoneman
So I've got Docker installed and running in here, and when you run Docker commands, the Docker command lines that you see everywhere, docker image build and docker container run, the command line is actually talking to the Docker API, which on Windows is a Windows service sitting in the background. You can point your local command line at some remote Docker server, and that could be Windows or Linux. But right now my client is running on Windows and my server is running on Windows too. So this is just a full Windows install.
00:06:59 Elton Stoneman
So the first thing I'm going to do is take my existing application and compile it so that I can run it in a container, package it up to run in Docker. And to do that I need a Dockerfile. The Dockerfile (let me zoom in on this a little bit) is just a really simple script that packages up an application. So think of your existing deployment document, which is probably a Word document with a bunch of pictures saying, "Click this button here and then type this word here and then click this button here." The Dockerfile automates all that in a really simple script.
00:07:29 Elton Stoneman
So there are two stages to this. The first stage is compiling from source. Now, you don't need to do this; you don't have to use your source code to put stuff in Docker. You can have an MSI that's output by your current CI process, and all you would do in your Dockerfile is script deploying the MSI with msiexec. If you've got a ZIP file that you're using with Web Deploy or something like that, all you do is copy the ZIP file in and then expand it. But I'm building from source, because that makes it easier to expand the demo as we go on.
00:07:55 Elton Stoneman
So the first part of this Dockerfile is using an image. A Docker image is just a package with a bunch of software in it that's publicly available. This image is owned by Microsoft; it's got the .NET Framework 4.7.2 and it's got the SDK. So this long name here is the unique name for this Docker image, and when I build from this Dockerfile it'll start a container from that image and then start processing the rest of the instructions in this script.
00:08:19 Elton Stoneman
So what I'm doing here is I'm setting up a directory where all my source code is going to live and then I'm copying in the source from wherever I run the command, so that could be my local machine or it could be the build server or whatever, and then I'm running my existing build file. So I've got an existing batch file that I use. There's no need for me to try and break this up and do it in Docker, although in a second I'll show you how you would do that. I can take all my existing stuff, write a really simple Dockerfile and see how my application looks in a container.
00:08:48 Elton Stoneman
So when the first stage completes, I will have my published web folder, my published Web Deploy folder, in this container, and then the second stage is packaging that up so I can run my application in a container. This first stage has got all the SDK, it's got MSBuild and NuGet and all that sort of stuff, so I can build my application.
00:09:06 Elton Stoneman
The second stage is another Microsoft image; it's got ASP.NET 4.7.2 installed on top of Windows Server Core 2016. So that gives me everything I need to run my application: it's got IIS installed, it's got .NET 4.7.2, it's got ASP.NET already installed and configured. So this is everything I need, all my prerequisites, to run my app. It doesn't have any of the SDK, because I don't need that anymore; that's all finished with from the first stage of the Dockerfile.
00:09:34 Elton Stoneman
So now I'm switching to PowerShell, because by default Docker will run commands using the ordinary command shell, and if I switch to PowerShell I can do anything I need to in here. So I'm creating a directory where my web application is going to live, and then I'm running some PowerShell just to set this up as a web app in the default website. The default website, I know it's there because it's part of Microsoft's base image, so all I'm doing is creating, effectively, a virtual directory inside the default website where my application is going to live. And then I copy in, from the builder stage, which is what I've called the first stage of my Dockerfile, the published website that comes out of the build. And that's literally it.
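A minimal sketch of the two-stage Dockerfile being described here; the image tags, paths and site names are assumptions rather than the exact demo source:

```dockerfile
# Stage 1: compile from source using Microsoft's SDK image
FROM microsoft/dotnet-framework:4.7.2-sdk AS builder
WORKDIR C:\src
COPY src .
RUN build.bat

# Stage 2: package the published site on the ASP.NET runtime image
FROM microsoft/aspnet:4.7.2-windowsservercore-ltsc2016
SHELL ["powershell", "-Command"]
RUN New-Item -Path C:\web-app -Type Directory; New-WebApplication -Name SignUp -Site 'Default Web Site' -PhysicalPath C:\web-app
COPY --from=builder C:\src\SignUpWeb\bin\_PublishedWebsites\SignUpWeb C:\web-app
```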
00:10:10 Elton Stoneman
So now I don't need a build server with Visual Studio installed to build my application. I just need something with Docker and the source code, because my Dockerfile lives in source control along with everything else. So to join the project now, I need Docker on Windows 10, I pull the Git repo or download it from VSTS or whatever I'm using, and then I run the Docker command I'm just about to show you. Now, if I build from this, it's really simple, because my batch file has got all the logic to compile my application, which could be a whole bunch of MSBuild scripts and targets and random stuff in there. And it works fine, but my Dockerfile really should be explicit about exactly what's happening.
00:10:49 Elton Stoneman
So I've got a version two of my Dockerfile. It's going to build the same source code, but it's expanded the first part of the build to do some more clever stuff. What I'm doing is copying in all the individual project files and packages.config files for NuGet, and then I do a NuGet restore. So in the first stage I'm still using the SDK image from Microsoft that's got all the stuff I need. I copy in all the project and config files that define all the dependencies for my whole application, and then I run NuGet restore. And then I copy in the source code and I run MSBuild to build my project. So rather than have this batch file where I can't see what's happening, I'm explicitly specifying all this stuff.
00:11:29 Elton Stoneman
And I won't go into a huge amount of detail here, but by splitting out the NuGet and MSBuild steps, Docker has a really nice, efficient caching mechanism, so when I rebuild this stuff it won't redo things that haven't changed; it won't need to run the NuGet restore unless I've changed some dependencies. So I'm just showing you this as a more realistic example. And the second half is pretty much the same, but I've got a few more things going on in here: I copy in my web application, and then I've got some stuff to send logs back out to Docker so I can read them from the container.
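The first stage of that version-two Dockerfile would look something like this; a sketch only, with hypothetical file names, but the restore-then-build split is the point:

```dockerfile
FROM microsoft/dotnet-framework:4.7.2-sdk AS builder
WORKDIR C:\src

# Copy only the files that define dependencies, so the restore
# layer stays cached until a dependency actually changes
COPY SignUp.sln .
COPY src\SignUp.Web\packages.config src\SignUp.Web\
COPY src\SignUp.Web\SignUp.Web.csproj src\SignUp.Web\
RUN nuget restore SignUp.sln

# Source changes only invalidate the cache from this point on
COPY src src
RUN msbuild src\SignUp.Web\SignUp.Web.csproj /p:DeployOnBuild=true /p:OutDir=C:\out
```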
00:11:58 Elton Stoneman
So I'm not going to spend a whole bunch of time on this, but just bear in mind this is all on GitHub and you can check it out, and there are only 30-odd lines here in a Dockerfile which packages my .NET application, which could literally be a 15-year-old application, and lets me run it in a container. So I can move it from ... maybe I'm running it on Windows Server 2003 in the data center and I want to run it on 2016, or I want to move to the cloud. Putting it in a container first makes that super easy.
00:12:23 Elton Stoneman
Okay, so I've got my Dockerfile. To compile my application and package it, I'm going to run a Docker command, which I'm going to copy in here so you don't have to watch me type. So, docker image build: I'm building an image, packaging up my application. This is my unique name. So just like Microsoft namespaces their image names, I'm calling mine dwwx, which is a short form for Docker Windows Workshop, because the demo I'm showing you is something I do as a workshop, then the web application name, and then I'm giving it a tag. And that's just like a version number. So this is a version that reminds me this is for the NServiceBus webinar.
00:12:58 Elton Stoneman
Then I tell Docker where to find my Dockerfile, and that's it. So I'm going to hit enter and it's going to send all my stuff over to Docker... and it's finished. It finished in like a second, because I've already built this image before and I haven't changed anything since I last built it. So you can see all the steps in my Dockerfile, and they all get shown as using the cache. So that's the efficiency I was talking about. I've got my application now; it's packaged up in this image, which I can share with the rest of the world.
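For reference, the command just run looks roughly like this; the tag and Dockerfile path are illustrative, since the transcript only tells us the name starts with dwwx and the tag marks the NServiceBus webinar:

```
docker image build --tag dwwx/signup-web:nsb --file .\docker\web\Dockerfile .
```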
00:13:29 Elton Stoneman
Logically it's just one thing that I talk about, but physically it's lots of small layers, and each of those small layers can come from the cache, which is why this was super fast. And the reason I'm focusing on this is because, as part of your CI build, if you're using Docker to compile and package your application, you should be able to build an image with every single check-in, with every single push to Git, and it will work really efficiently in terms of the build time and in terms of the disk space that you use to store these things.
00:13:56 Elton Stoneman
Okay, so I've got my application now, and what I'm going to do is run it. I could just do a docker container run, but I can't run the application on its own, because it needs SQL Server. So what I'm going to do is run an application that has two Docker images, and in order to do that I have another type of file. My Dockerfile is how I package one component of my application, and what I have on the screen here is a Docker Compose file, which is how you define a distributed application. This distributed application consists of two container images: one is my signup website that I've just packaged, and the other one is SQL Server.
00:14:37 Elton Stoneman
So I'm running my database in a container. And again, this is an image that Microsoft own, they keep it up to date, and it's publicly available. And I've just got a bunch of configuration in here. So I could do docker container run and pass all these values in, but by writing it in my Compose file, which also lives in source control, I can share it, I can version it, I can have it as part of my ordinary release process. So all I've got here are some environment variables to set up SQL Server and to set up the connection strings for my web application. I'm publishing some ports so I can get traffic into the container, and I'm mounting some volumes so that I can see data outside of the container. I'm not going to go into too much detail on that stuff, but this is all fairly simple: I'm just defining two separate services, one database, one web application, and they're going to run in containers.
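As a sketch, the Compose file being walked through here might look like this; the service names, environment variable names, password and port are assumptions, not the exact demo values:

```yaml
version: '3.3'

services:
  signup-db:
    image: microsoft/mssql-server-windows-developer
    environment:
      - ACCEPT_EULA=Y
      - sa_password=SignUp123!        # hypothetical dev-only password
    volumes:
      - db-data:C:\data               # keep data outside the container

  signup-web:
    image: dwwx/signup-web:nsb
    environment:
      - DB_CONNECTION_STRING=Server=signup-db;Database=SignUp;User Id=sa;Password=SignUp123!
    ports:
      - "8020:80"                     # publish the site on the host
    depends_on:
      - signup-db

volumes:
  db-data:
```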
00:15:21 Elton Stoneman
Okay, so to make this work I'm going to run another command. I've already got those images on my machine: one I've just built and one is from Microsoft. I'm going to run docker-compose up, and what that's going to do is start all those containers. So the way Docker Compose works is that the application manifest, the file that I've just shown you, is the desired state, and when you run your command to start up your application, Docker compares what's in the desired state to what's currently running, and it will create or stop or update whatever it needs to to get to your desired state.
00:15:55 Elton Stoneman
Okay, so it's told me it's created those two things. So if I do a container list here, I'll see I've got my signup web application and Microsoft SQL Server. They're both running, both on my machine. Now, if I browse to my web application container, there are two ways I can get traffic into this container. Externally, I can browse to my machine, which is .157, on port 8020, which is how I mapped it, and Docker will send traffic into my container. If I'm on the machine, I can look at the container. If I do docker container inspect with a container name, I'll see a whole bunch of detail about what this container is set up to do and what parameters were passed when it got created.
00:16:44 Elton Stoneman
This all comes out in JSON, which is really useful when you're automating stuff. It's also quite handy just to see what's running in your container. I've got an IP address here. If you're not familiar with Docker: inside the container, the application thinks it's got its own server. It's got its own C drive, it's got its own registry, it's got an IP address and a host name, but in reality it's just a container; all the processes are running on my server directly. So the process that runs ASP.NET, the IIS worker process, is running directly on my server, but it's got this thin boundary around it, which is what Docker provides, called a container. So it can't see other containers, and everything is isolated, unless you connect them using a Docker network.
00:17:24 Elton Stoneman
Okay, so I'm going to browse to this, and you'll see it takes a while to start. And the reason it takes a while to start is that this is a big, old ASP.NET monolith. It's written to run on IIS, and IIS was written to run on servers or virtual machines, in the days before we had containers. So these aren't modern platforms that are designed to start up in a second as part of a dynamic platform where a container can start and be running in a few seconds. IIS expects to run when your machine starts, so it can afford a bit of a cold start time, because it expects the machine to keep running for months and years.
00:18:05 Elton Stoneman
That gap you saw while I was waiting for this to load: when the container starts, IIS gets started, but it doesn't start a worker process until the first request comes in. When the first request comes in, the worker process starts. Entity Framework is being used in here, so it connects to SQL Server. It's an empty SQL Server database, because I'm just using Microsoft's image, so it deploys the schema, deploys the reference data, and then finally I get my page.
00:18:28 Elton Stoneman
Okay, so here's my demo application. It's not a particularly interesting application. It talks about this thing called Play with Docker, which is a real app, so you can browse to the Play with Docker website; it's an online Docker environment, and there are a bunch of tutorials you can follow. You don't need to install Docker on your machine. It's Linux-based, but actually the Docker terminology is exactly the same for Windows. The newsletter, which is this application, is fake. So when I click this button I've got some dropdowns here. The data comes from reference data in the database, so I know this must be connecting, and this is my application now running in Docker containers.
00:19:02 Elton Stoneman
So if I just put some random data in here, select something here and click on go, my web container is adding data to my database container. So this is just how the application works. I've got my whole monolith running in a container, and there are advantages just in doing that. So I haven't moved to a microservices architecture and broken up my application yet, but just by putting it in Docker with this really simple Dockerfile, I've already got something that is portable, that I can run anywhere.
00:19:33 Elton Stoneman
Okay, so I've got data in SQL Server, but my SQL Server container isn't publicly available, because of the way I started it; it's only a private thing that I can get to from my machine. If I want to see that the data is really there, I can run this command, which executes a command inside an existing container. So this is the name of my database container, and the command to run is PowerShell: Invoke-Sqlcmd, select star from Prospects, which is the name of the table where the data gets saved. So if I run that, it's going to run that command inside my existing container and just show me the output back out here.
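Roughly, that command looks like this; the container and table names here are assumptions:

```
docker container exec signup-db powershell "Invoke-Sqlcmd -Database SignUp -Query 'SELECT * FROM Prospects'"
```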
00:20:08 Elton Stoneman
I know that the Invoke-Sqlcmd PowerShell command is inside the SQL Server image, because Microsoft put it there, and I know I can rely on it being there. And this just shows me the output of what I've typed in so far. So those are the codes that relate to the country and the role that I selected from the dropdowns.
00:20:25 Elton Stoneman
So the last thing I want to show you before I move on from the first iteration of this app is that I've also got a bunch of end-to-end tests. If I'm doing any kind of major restructuring, an end-to-end test that goes in at the top, at the website, actually executes the functionality and then checks the result at the end is a really useful thing to have. But these tests have typically not been very well-liked in the industry, because they're expensive to run and they can be brittle. So if I'm going to write data to a real database, I need to know that when my test runs, all the required data is already set up, and that when the test completes it can be run again without failing because the data is already there.
00:21:06 Elton Stoneman
So I'm using SpecFlow here, which just lets me define my tests in simple language. So I'm saying: as a prospect interested in da, da, da, I will sign up for notifications. And then each of these sections, when I browse to the signup page and enter details and press go, has some code behind it that actually executes the test. It uses a headless web browser, so it pretends there's a browser connecting to the page, and then it enters all the details from this table down here, checks that it says thank you afterwards, and then goes and checks SQL Server to make sure the data's in there.
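A SpecFlow scenario along the lines being described; this is a paraphrase, not the demo's actual feature file:

```gherkin
Feature: Newsletter sign-up

Scenario: Prospect signs up for notifications
    When I browse to the sign-up page
    And I enter my details and press Go
        | FirstName | LastName | Email            | Country | Role           |
        | Test      | User     | test@example.com | Denmark | Decision Maker |
    Then I should see the thank-you message
    And the prospect should be saved in SQL Server
```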
00:21:35 Elton Stoneman
So this is really useful, because I've also put this inside a container. I've got a Dockerfile for this, and I'm just using NUnit inside my tests. So when this container runs, it's configured to expect to find a web container with the name signup-web, which is how I'm running my web container. So it will find the container, the headless web browser will go and connect and do all this stuff, and I've got 26 tests and 26 passes. So before I start breaking this thing up, I'm confident that if I run this again, even if I completely split out the architecture, because I'm going in at the top and testing my webpage and then checking the results at the bottom in SQL Server, then if this carries on working I can be confident that my re-architecture has worked.
00:22:20 Elton Stoneman
And because this is in a container, I can spin up the SQL Server container, spin up my web container, run all my tests and then kill them, and I'm confident that I can just keep doing that cycle over and over again, because the SQL Server container will start in a second, I can execute all my tests in a second, and I can just keep repeating that, because I know that the data I need is going to be there and I know that when I repeat the test I'll have a fresh SQL Server, without having to manage all that data.
00:22:46 Elton Stoneman
Okay, so that was the first set of demos. We've got to the point now where I've containerized my application. It's still a monolith, and that in itself is valuable: now it's a monolith running in a container. So that gives us a whole bunch of useful things. Firstly, it didn't take very much effort. I didn't have to change any code or adopt any new ways of doing things; I can take a Dockerfile and just put it at the end of my existing CI process. And what it gives me, and I haven't dwelled too much on this, is that these containers are super efficient. So on my server there, which is a VM that's got eight gig of RAM and one CPU, I'm running a SQL Server container and an ASP.NET container.
00:23:29 Elton Stoneman
By the end of today's session I'll have a dozen containers running. They won't use any CPU unless they're actually doing work; they don't have any memory requirements unless the application is doing something with memory. So you can cram as many containers onto your server as you possibly can, and unless they all peak at the same time, they'll all sit there nicely. And they're portable. So this application package, that's my Docker image: I built my web application into a Docker image, and I can share that on the public Docker Hub or on a private registry, which is just a server that stores all these things inside my organization. Then whoever has access can pull that image down, run my application in a container, and be confident it will be exactly the same.
00:24:11 Elton Stoneman
So you can copy and paste the commands that I've shown you so far on a completely new Windows server, and Docker will download the images that it needs. They're all public, so it will download them and you will see the exact same results that I've got here. And I'm upgrading my entire stack when I do this. I don't have to upgrade .NET or the application code, but when I'm using the Microsoft base images, I'm using Windows Server 2016 under the hood. So if I've got legacy applications that are running on 2003 or 2008, I can do a really quick upgrade and be sure that everything still works, without changing code.
00:24:49 Elton Stoneman
You can cut out an awful lot of your infrastructure just by moving your existing things to containers. You can run in the data center or in any cloud, and when you run on a laptop it'll be exactly the same as in the data center, and you're getting the latest operating system and technologies. So if you want to do a platform upgrade to the latest version of .NET, you can do that fairly safely: package your application, verify that it all runs, and then you've skipped past all the inefficiencies and the potential security flaws in Windows Server 2003 with .NET 3.5.
00:25:21 Elton Stoneman
Okay, so now I've got my monolith in a container. That's fine. There are benefits there, and I've just walked through them; those are only some of the benefits. But this is a really good starting point for modernizing my architecture, for breaking up this monolith, because I now have a platform underneath that lets me run these different components really efficiently. I can take features out, run them in different containers, and then plug everything together using whatever paradigm I like to join these things up. Obviously today I'm going to focus on messaging with NServiceBus.
00:25:53 Elton Stoneman
For the second version of my application I'm going to focus on a feature which is potentially a performance bottleneck. When I click that go button on my website and it saves the data to SQL Server, that is a synchronous operation: my web application uses Entity Framework, it opens a connection to SQL Server, it does some look-ups to find the reference data that it's using, and then it inserts a new row into the table.
00:26:20 Elton Stoneman
That's a synchronous operation. While that's happening, which might only be a tenth of a second, that thread has an exclusive lock on one connection from the SQL Server connection pool, and the connection pool is finite. So if I have a huge amount of traffic (and I need an awful lot of traffic, because the requests have to be exactly concurrent), then eventually I'm going to starve the connection pool, and the next person who tries to click go will see a big, ugly, yellow error screen. It doesn't scale. I can scale my web application, because I'm running in containers: I could have a cluster of a dozen servers all running Docker and run 100 or 200 web containers fairly easily, but eventually I'm going to hit a bottleneck where SQL Server needs to be scaled up as well.
00:27:03 Elton Stoneman
So instead of doing that, in version two I'm going to use messaging. I'm going to take the save feature out of the web container and put it into a separate component which will run in its own container. It's just going to be a .NET console application that runs in a container and sits there waiting for messages to come in. The web application, instead of saving directly to SQL Server, will send a command message through NServiceBus, and then the message handler on the other end is listening for those messages and will do the save to SQL Server. And this does scale, because now the thing that's writing to SQL Server is my message handler component, and I can scale that independently of my web component.
00:27:43 Elton Stoneman
So I can have 100 web containers running and maybe just 10 message handler containers. If there's a huge amount of load coming in, then messages will start filling up the queue, but that doesn't matter; that's what the queue is for. So I can smooth out the requests that come in, and for this particular workflow, when I click the button the data doesn't need to get to the database within a tenth of a second. If it takes a second or 10 seconds or an hour for it to work its way through the queue, that's fine.
00:28:09 Elton Stoneman
The other thing I'll do when I come to version two is, in the message handler, when I've dealt with the message and handled the command, I'm going to publish an event to say a new prospect has been created. Right now nothing is going to happen with that event at the end of version two; it's just there, because it's a useful business event to record. When I come to version three I'm going to make use of that event, and I'm going to put a new feature into my application with a completely new data store: a separate data store and a separate analytics web UI. So what I'm doing at the bottom here with this new set of containers is listening for that published message from the message handler saying that a new prospect has been created. I'm going to save that data into a separate data store, which is just a reporting database; I'm going to use Elasticsearch for that. And Elasticsearch has a companion product called Kibana, which is a web UI that lets you do your own analytics over Elasticsearch.
00:29:04 Elton Stoneman
So by subscribing to that message, I can put the data into Elasticsearch in a slightly different, more user-friendly format, and I can give end users their own analytics. And this is a zero-downtime update. From version two to version three, I'm just adding new containers; I don't need to change any of the existing stuff, and I get a huge amount of extra functionality by running these open source components in containers and just subscribing to an event that's already there. And the last thing I'll do is take out the reference data component of my application. So inside my web application right now, looking up the list of roles that you can select and the list of countries that you can select, all that stuff is baked into my monolith.
00:29:48 Elton Stoneman
But those are potentially things that other parts of the business could find useful. So instead of having them baked into my monolith, I'm going to take that feature out and run it in a separate container, and this is going to use request-response messaging. So now when the web container starts up, it's going to send a request through NServiceBus to get the list of roles and the list of countries, and this message handler will send a response back with the list. And this is something I could use from other components if I wanted to. So by the time we get to version four, this is an architecture diagram you can actually throw up on a big screen and talk about, and talk about how you've actually moved your architecture forward without doing a big rewrite, because each of these steps is taking a small component and moving it to a different place, or adding a very simple new component, or adding open source software, and running it all in containers.
00:30:39 Elton Stoneman
Okay, so let's get on and show you how that looks. So let's clear this. For version two, I've cheated a little bit here, just because otherwise this would be a 10-hour webinar instead of a one-hour webinar: I've already written the code that uses NServiceBus to send the command to save data rather than saving it directly. So in my web application here, this is just a .NET web forms app, so it's a fairly old technology. I've got a folder here for how the data gets saved, and to start with I'm using a synchronous data save. I'm using some dependency injection here just to get the component that saves data, because that makes it easier to swap it out in the demos.
00:31:23 Elton Stoneman
So currently, what it does is use the Entity Framework context, which opens the connection. So for the duration of this, the thread has exclusive use of one connection from the pool. It looks up reference data, it adds the prospect, and it saves the changes. When I move to version two I'm going to use this component instead, which just creates a new prospect command, passing in all the existing data (the interface for prospect save just takes a prospect and does something to make it be saved), and then it makes a call to my NServiceBus endpoint to send the command, and that's it. So this is the bit that scales, because I can send as many of these as I want per second, whereas with SQL Server I'm limited to 100 concurrent calls.
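The message-sending save component being described probably looks something like this; the type names are assumptions, but the shape (build a command, send it through the shared endpoint instance) is what the transcript describes:

```csharp
using System.Threading.Tasks;
using NServiceBus;

// Swapped in via dependency injection in place of the synchronous EF save
public class MessagingProspectSave : IProspectSave
{
    private readonly IEndpointInstance _endpoint;

    public MessagingProspectSave(IEndpointInstance endpoint)
    {
        _endpoint = endpoint;
    }

    public Task SaveProspect(Prospect prospect)
    {
        // No SQL connection is held here; the web thread just
        // fires a command onto the queue and returns
        return _endpoint.Send(new SaveProspectCommand { Prospect = prospect });
    }
}
```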
00:32:07 Elton Stoneman
So I'm going to be switching this component in. Now, the other side of this is listening for those commands and handling them, and I have a save prospect message handler. This is just a .NET Framework console application, and all there is in here is a little bit of boilerplate code to connect up to NServiceBus: I get my transport configuration, I start listening, and then I wait for messages to come in. The message handler inside here, again, has some boilerplate stuff, but what it actually does is the exact same code that used to be in the website. So I've taken this code out. In a production scenario this would be way more complicated, but I'm literally taking this method, putting it into a console application, and adding the boilerplate stuff to connect to NServiceBus and get the messages through. So I've got one new component to run, which is my message handler, and I need to change my web application so that instead of running the SQL command itself, it sends the command message.
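A sketch of that handler; the names are assumptions again, but it's the old web-tier save code wrapped in NServiceBus's IHandleMessages boilerplate, plus the event publish that version three will use:

```csharp
using System.Linq;
using System.Threading.Tasks;
using NServiceBus;

public class SaveProspectHandler : IHandleMessages<SaveProspectCommand>
{
    public async Task Handle(SaveProspectCommand message, IMessageHandlerContext context)
    {
        using (var db = new SignUpContext())   // the existing EF model, unchanged
        {
            var prospect = message.Prospect;
            // Same look-ups the web app used to do synchronously
            prospect.Country = db.Countries.Single(c => c.Code == prospect.Country.Code);
            prospect.Role = db.Roles.Single(r => r.Code == prospect.Role.Code);
            db.Prospects.Add(prospect);
            await db.SaveChangesAsync();

            // Version two publishes this event; nothing subscribes to it yet
            await context.Publish(new ProspectCreatedEvent { Prospect = prospect });
        }
    }
}
```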
00:33:12 Elton Stoneman
So my message handler also has a Dockerfile, in a very similar format. If I look at the basic messaging save handler, it's very similar to the web application one I showed you: it copies in all the packages.config files and the csproj files and does a NuGet restore, it starts from the same SDK image, and then it runs MSBuild to build just that project. And the second half of the Dockerfile is even simpler, because it doesn't need ASP.NET. So when you're moving things to Docker, you start from the image that gives you the bare minimum requirements that you need. Because this is a console application, I don't need IIS or ASP.NET.
00:33:49 Elton Stoneman
If I had those things, I would potentially have more surface area for updates and for potential security flaws. So I use the smallest image that gives me everything I need, and then my command is just to start my .exe. I'm sending in some environment variables for where it's going to find these various things, and then I copy in, from the builder stage, the prospect handler that I've just built, my console application. Again, I've already built that to save us some time, but I built it with the same kind of docker image build command, so I'm not doing anything that I haven't already shown you.
00:34:21 Elton Stoneman
So version two of my application is going to be defined with this application manifest, my Compose file. It starts in pretty much the same way: my database definition is exactly the same, so when I run this version it's going to leave my existing SQL Server container in place, because it's the same definition. My web application is the same version of the image; I haven't changed the build here, because I already have those different ways of saving data in there. But in my environment variables I'm saying the dependency for prospect save should use the NServiceBus prospect save, and I'm saying my NServiceBus transport should be the Learning Transport, the ordinary, simple transport that you use to get up and running in development.
00:35:03 Elton Stoneman
And then the extra component is my new save handler. This is the component that listens for those save commands and writes the data to SQL Server. And again, I've got some environment variables here for my configuration. There are lots of ways to do config in Docker; this is a really simple one. So I've got my connection string for the database, and I've got my NServiceBus transport configured to use the Learning Transport. Now, these are separate containers that are going to run with their own file systems, but the way the Learning Transport works, they need to share the same file space to be able to transfer messages around.
00:35:36 Elton Stoneman
So Docker has a way of separating storage from a container: you can take a bit of storage that's on your server, or on your working machine, and surface it inside the container. So what I've got here is I'm saying a folder on my C drive is going to be where my Learning Transport data gets written, and my save handler is going to use the same folder on my C drive that my web application is using. So when they go and write the Learning Transport data, they're using the same folder. That's just a nice way of still using all the NServiceBus stuff, but using it with containers.
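In the Compose file that shows up as the same host folder mounted into both services. A sketch of that services-section excerpt, with hypothetical variable names and paths:

```yaml
  signup-web:
    image: dwwx/signup-web:nsb
    environment:
      - PROSPECT_SAVE=NServiceBusProspectSave
      - NSB_TRANSPORT=LearningTransport
    volumes:
      - C:\nsb-learning:C:\nsb        # same host folder...

  signup-save-handler:
    image: dwwx/save-handler:nsb
    environment:
      - DB_CONNECTION_STRING=Server=signup-db;Database=SignUp;User Id=sa;Password=SignUp123!
      - NSB_TRANSPORT=LearningTransport
    volumes:
      - C:\nsb-learning:C:\nsb        # ...mounted into both containers
```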
00:36:12 Elton Stoneman
Okay, so the way I run this is I just do docker-compose up. So, docker-compose up. Let's actually copy the right command in. The advantage of copying commands, of course, is that you don't have to see me make lots of typos; the disadvantage is I might copy the wrong thing. So, version two, up. Oh, let's see. Oh, I haven't created my folder. That's actually a good thing to show, because this folder here needs to exist. On my C drive, on the actual server where I'm running this stuff, because I'm using the Learning Transport I need to create the directory where that data is going to live.
00:36:49 Elton Stoneman
Okay, so let's try again with my up command. What it's going to do is look at my Compose file for version two; that's the desired state. It's going to look at what's currently running. My signup database is up to date (my SQL Server definition hasn't changed, so the current state is the desired state), so it leaves it there. The save handler is a new thing, so it gets created, and my web application definition has changed, because the configuration has changed, so that becomes a new web application container.
00:37:19 Elton Stoneman
So if I have a look at the logs for my save handler, which is just a console app... I'm going to do docker container... see, this is why I copy and paste things... docker container logs for my save handler, and all I've got is some default logs coming out here: it's using the Learning Transport and it's just listening for messages. Now, because I have a new web application container, I need to get the new IP address. All this business about getting the container's IP address when you're logged onto the machine: you don't have this issue with Windows 10 and Docker Desktop, because it does all that stuff for you, and in Windows Server 2019 you don't have to use the container IP address, you can use localhost. But on 2016 you have to do it this way.
00:38:04 Elton Stoneman
So I'm going to inspect my container, and that's going to give me the new IP address. It's a completely new container; again, it starts in a few seconds and it's ready to go. And I'm going to browse here, and now we get the cold start that I told you about previously. I haven't done anything in my Dockerfile to make this old application start faster or really behave like a modern application. There are ways of doing that, which I won't cover today, but you can do it, again, without changing code: by putting some extra smarts into your Dockerfile, you can make your monolith behave very much like a small microservice.
00:38:39 Elton Stoneman
Okay, so I'm here. I'm going to click sign up. It's the same user experience: I'm going to say V2, put some stuff in here, say Denmark, and my role, decision maker, and click go. Same user experience, but what's happening now is that the save process is asynchronous. It's going through NServiceBus as a command that gets handled. So if I repeat my logs command and look at the save handler, I'll see it's received a create new prospect message (that's my command message) with a message ID, and then I've got some output about the prospect being saved to SQL Server, and that's the prospect ID.
00:39:14 Elton Stoneman
So if I repeat my command to query SQL Server, I'll see my original entry that I put in for version one, I'll see all the test data that went in when I ran my end-to-end tests, because they used the same SQL Server instance, and here's the new one that I've just put in. So I've moved my save to a different component, which I can scale independently and update independently, but I've still got all the same functionality that I previously had. And it's a pretty simple thing to do, because I've literally taken the code that was in my web application, put it in a console app and plugged everything together.
00:39:49 Elton Stoneman
Okay, so version two has fixed a potential performance problem. What I'm also doing in my save message handler is publishing the event to say that a prospect has been created. So in version three I'm going to add the extra functionality to get my user-facing analytics. So what I've got in here... let's just go and check how that's working. My save message handler, after it's done the save, publishes this prospect created message back through the same NServiceBus endpoint that it's using. Right now nothing is listening for that event; it's just sitting there. I've got a new handler component, which is going to run in a separate container and listen for those messages; it's going to subscribe to my prospect created message. It's called index prospect, because when you save data in Elasticsearch that's called indexing, and this is a .NET Core application.
00:40:45 Elton Stoneman
So right up until now I've been using .NET Framework. The reason I still use .NET Framework inside my original message handler is that I want to use my Entity Framework model. My ORM model could be old and have a lot of extra logic in there; I just want to move it to a different component. I don't want to do an ORM upgrade to EF Core, because that's a completely separate project. But for this component, the message is coming through as an event, and that's just a plain .NET object with some information in it. This doesn't need to be a full .NET Framework application, so it's written as a nice, simple .NET Core app, my index prospect handler.
00:41:22 Elton Stoneman
My program in here, again, has a bunch of boilerplate stuff to set up my dependencies and to connect to my NServiceBus endpoint, and then there's the handler. It listens for my... where are we here? It listens for my prospect created event, and when it receives that event, it takes the prospect that's inside the message and enriches it a little bit for Elasticsearch. So the things that it does: it creates a full name field, joining the two name fields together, and it uses the names of the reference data, so the country name and the role name rather than the short codes that are stored in SQL Server.
00:42:04 Elton Stoneman
So SQL Server is still holding the transactional data, but for the reporting database, if you've done any kind of BI work, a lot of the effort involved is in surfacing the data in a way that makes sense to the business. There's no point having a country code of UK or US or whatever; it doesn't necessarily match anything that the business expects. So I'm enriching the data here to add the names, and because this is happening out of band, in a separate container handling a separate message, completely separate from the user-facing component, I could do anything I need to in here without slowing down the rest of the application. So it sets up this document to write to Elasticsearch, creates it in Elasticsearch, and that's about it. There's some other stuff in here about metrics, but I'm not going to go into that today.
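Put together, the index handler might look like this. I'm assuming the NEST client for Elasticsearch and inventing the type names; the enrichment steps (full name, country and role names) are the ones described above:

```csharp
using System.Threading.Tasks;
using Nest;
using NServiceBus;

public class ProspectCreatedHandler : IHandleMessages<ProspectCreatedEvent>
{
    // "elasticsearch" resolves to the Elasticsearch container on the Docker network
    private static readonly ElasticClient Client = new ElasticClient(
        new ConnectionSettings(new System.Uri("http://elasticsearch:9200"))
            .DefaultIndex("prospects"));

    public async Task Handle(ProspectCreatedEvent message, IMessageHandlerContext context)
    {
        var p = message.Prospect;
        var document = new ProspectDocument
        {
            // Enriched, user-friendly fields for the reporting store
            FullName = $"{p.FirstName} {p.LastName}",
            CountryName = p.Country.Name,   // "United Kingdom" rather than "UK"
            RoleName = p.Role.Name
        };
        await Client.IndexDocumentAsync(document);
    }
}
```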
00:42:46 Elton Stoneman
So this component, although it's .NET Core, is going to be packaged and run in exactly the same way. For my backend analytics I have a Dockerfile, and the pattern is the same: I'm starting with the SDK image and I end up packaging the app in the runtime image. But the SDK is .NET Core, the .NET Core 2.1 SDK. Again, Microsoft have these images with everything, all the build tools, already in them; I don't need to set anything up myself, I'm using the things that Microsoft maintain. I'm creating a directory for my source code, I copy in the csproj file that contains all the dependencies, and I run dotnet restore. So, exactly the same pattern. Totally different technology stack, same pattern.
00:43:26 Elton Stoneman
And it's the same for anything, really. If you have a Java application or a Go application or a Node.js app, you can follow exactly the same patterns, so your Dockerfiles all look very similar: copy in the rest of the source code and then publish the application. So this is going to give me my executable application. And in the second stage I start from the runtime image, which has got everything I need to run my app but none of the extra SDK stuff. I create the directory, tell Docker what to do when you start a container, and then copy in the published output. So this index handler is now a totally different tech stack, focused exactly on the problem that I want to solve, using the latest technology, and I can do that without having to do a big upgrade of my existing application.
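The .NET Core Dockerfile being described follows the same restore-then-publish pattern; a sketch with an assumed project name:

```dockerfile
FROM microsoft/dotnet:2.1-sdk AS builder
WORKDIR /src
COPY SignUp.Analytics.csproj .
RUN dotnet restore

COPY . .
RUN dotnet publish -c Release -o /out

# Runtime image: no SDK, just enough to run the console app
FROM microsoft/dotnet:2.1-runtime
WORKDIR /app
ENTRYPOINT ["dotnet", "SignUp.Analytics.dll"]
COPY --from=builder /out .
```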
00:44:08 Elton Stoneman
And to make this run I've got version three here. Again, this is just a Compose file, pretty much the same as version two. My database definition, my web application definition, my save handler: none of that's changed; those are the exact same specifications as in version two. But version three now has my new index handler. This will run in its own container, and I've got the same configuration in here, saying use the Learning Transport and share the same folder, so everything will use the same Learning Transport for messaging. And I've got two extra components in here: Elasticsearch, which is the document database I'm using for my reporting data store, and Kibana, which is the front end that connects to Elasticsearch and lets me visualize the data.
00:44:55 Elton Stoneman
So these are components that I packaged myself (sixeyed is my Docker user ID), so again, they're public images: this is Elasticsearch packaged to run on Windows and Kibana packaged to run on Windows. The company who makes this stuff, Elastic, has its own images for Linux. So just like Microsoft look after their images for the Windows stuff, a lot of companies maintain their own Linux images, but there isn't a Windows one for these yet. That's fine, though: it's all open source software, so I can package my own image.
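In the Compose file those two services need very little. A sketch of that services-section excerpt (the exact sixeyed image tags are assumptions):

```yaml
  elasticsearch:
    image: sixeyed/elasticsearch:windowsservercore    # hypothetical tag

  kibana:
    image: sixeyed/kibana:windowsservercore           # hypothetical tag
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch    # Kibana finds it by this hostname on the Docker network
```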
00:45:24 Elton Stoneman
Okay, so when I run this, same deal: I'm going to run Docker Compose for version three. No, no, no, I haven't copied it; let's try that again. So, version three. Again, same thing: the desired state is what's in my Docker Compose file, and the current state gets compared. These are all fine, so they all stay the same; it's going to create Elasticsearch. Inside your Docker Compose file you can specify dependencies, and Elasticsearch is a dependency for the others, so that gets started first, then the index handler gets created, and then Kibana. So don't worry too much if you're not familiar, excuse me, with Elasticsearch and Kibana.
00:46:05 Elton Stoneman
I'm just using this as an example of how I can plug in what is effectively enterprise-grade open source software without having to worry about how I install it or maintain it or configure it. Because actually, this is all I need to run Elasticsearch: this part of my Docker Compose file. This is my image, but if I was using an official image it would be the same. And again, Kibana. I don't need to know anything about this application; I just add it to my own solution to get the features that it's got.
00:46:32 Elton Stoneman
Okay, so that's all running. Now, my application here is at the same IP address, so I can browse back here. But first I should go and check my new component to make sure it's listening on the same transport. So I'm going to do docker container logs for my new index handler, and I'll see the same kind of boilerplate stuff coming out: it's using the Learning Transport, here's the stuff from the NServiceBus logs, and then it's listening. Actually, while I'm here, if I want to look at Elasticsearch, which is a Java application (I know nothing about it except that it's Java, and I wouldn't even necessarily need to know that), I can do docker container logs elasticsearch, and this will show me all the output coming from Elasticsearch. This is using Log4j, as it happens. It's telling me that things are starting up and it's all up and running.
00:47:20 Elton Stoneman
Similarly, I can do the same with Kibana. Kibana is a Node.js application, so I'm going to do docker container logs again. Because when these applications are running in containers, they all have the same shape; they all have the same interfaces to Docker. You use the Docker command line and the Docker API to work with them, and they all work in the same way. I don't need to know what's under the hood: I can get logs out, I can go and connect to the container just like I did with SQL Server, I can go and look at the processes that are running, whatever I need to do. It's the same whether it's a Windows container running .NET or a Linux container running Java; I work with them all in the same way.
00:47:57 Elton Stoneman
Okay, so now when I go in and do this here... Ooh, a new update for Firefox; I don't need that right now. So I'm going to move to version three, put some stuff in here, let's go to India, and click developer advocate. Here we go. Okay, same user experience. So when I look at my save handler logs, I've got my new prospect in here, number 29; that's the data that I've just put in. If I look at my index handler logs, I should also see that it has now seen a new message. So this is the prospect created message, with a different message ID from the create prospect message, because this is the event that gets published by my message handler. So it's created the data in Elasticsearch, and the logs are showing me that this is ready to go.
00:48:44 Elton Stoneman
So we're doing fine in terms of getting our application data into Elasticsearch, and now I've also got Kibana, which I can use to go and visualize it. So if I clear this down and do the same thing... Kibana is a web application, so again, I need to get the container's IP address on my machine so I can connect to it. There's my IP address, and Kibana has this weird port, 5601, so I'm going to browse there.
00:49:17 Elton Stoneman
Okay, so this is Kibana, and this is completely stock; I haven't had to customize it or configure it or anything like that. It expects to find Elasticsearch with the same name that my container is using, so that's all fine. This is the data that I've already inserted, because my message handler has picked it up and put it in there. And if I click on discover and zoom in a little bit, it's going to show me the data from my version three entry that I've just put in, and it's got the country name and the role name in there. I don't want to go deep into Kibana, but I can set up my own dashboard really easily; I don't need to get IT involved to get some data out. If I've got access to this, I can do whatever I need to as a power user.
00:49:55 Elton Stoneman
Okay, so now then, the final thing I'm going to show you for my modernization piece is the reference data. So currently, when I go and get these lists of roles and countries, this is logic that's baked into the application. What I want to do is get that from a separate component which I can share around the business. So just like before, I've cheated, and inside my web application I have two ways of getting the reference data. Down here under reference data I've got a database reference data loader. That's what I'm currently using: I go to my database using Entity Framework and I load everything up. And I also have an NServiceBus reference data loader, and what this does is send request-response messages to my endpoint: a get-countries request with a countries response, and a get-roles request with a roles response.
00:50:48 Elton Stoneman
So I'm using the same entity definitions for my country and my role, the same object classes. So this is a really simple way of doing this. I'm just changing to using the endpoint instance that I've already set up, sending some request messages and waiting for the responses to come back in.
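A minimal sketch of what such a loader might look like, assuming the NServiceBus.Callbacks package and hypothetical message, DTO and endpoint names (the actual demo code is in the GitHub repo linked later):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using NServiceBus;

public class NServiceBusReferenceDataLoader : IReferenceDataLoader
{
    private readonly IEndpointInstance endpoint;

    public NServiceBusReferenceDataLoader(IEndpointInstance endpoint)
    {
        this.endpoint = endpoint;
    }

    public async Task<List<Country>> GetCountries()
    {
        // Request/response: send the request and await the correlated response
        var options = new SendOptions();
        options.SetDestination("ReferenceDataHandler"); // hypothetical endpoint name
        var response = await endpoint.Request<GetCountriesResponse>(new GetCountriesRequest(), options);
        return response.Countries;
    }

    public async Task<List<Role>> GetRoles()
    {
        var options = new SendOptions();
        options.SetDestination("ReferenceDataHandler");
        var response = await endpoint.Request<GetRolesResponse>(new GetRolesRequest(), options);
        return response.Roles;
    }
}
```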
00:51:06 Elton Stoneman
Now on the other side of this, listening, I've got a new component which is a new message handler, a reference data handler. This is also .NET Core, so again, I don't need to use the same technology stack. As it happens, it's using .NET Core and Dapper as the ORM rather than Entity Framework. It's the same boilerplate code to connect and listen for messages, and then the handlers are really simple. I get my request to get countries and I just fire some SQL over to my repository, which gets me everything I need to send the data back. So it's the same format of data, just swapping in a different handler.
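On the handler side, a sketch in the same spirit, assuming Dapper and the same hypothetical message types; Dapper maps the rows straight onto the shared Country class:

```csharp
using System;
using System.Data.SqlClient;
using System.Linq;
using System.Threading.Tasks;
using Dapper;
using NServiceBus;

public class GetCountriesHandler : IHandleMessages<GetCountriesRequest>
{
    public async Task Handle(GetCountriesRequest message, IMessageHandlerContext context)
    {
        // The connection string comes in as config; the variable name is illustrative
        using (var connection = new SqlConnection(Environment.GetEnvironmentVariable("DB_CONNECTION_STRING")))
        {
            var countries = await connection.QueryAsync<Country>(
                "SELECT CountryCode, CountryName FROM Countries");

            // Reply routes the response back to the endpoint that sent the request
            await context.Reply(new GetCountriesResponse { Countries = countries.ToList() });
        }
    }
}
```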
00:51:48 Elton Stoneman
And then version four of my deployment. This is the standard one. So I've got exactly the same definitions of everything, except for my web application: I'm now using my NServiceBus reference data loader. So that's going to switch to using the asynchronous request-response messaging, and then I have a new component, which is my reference data handler, that's going to listen for those messages. And again, it's configured in the same way: it's using the connection string for the database, it's using the Learning Transport and it's sharing the Learning Transport data.
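The shape of that version four change, sketched as Compose YAML with illustrative service and variable names:

```yaml
version: '3.7'
services:
  web:
    image: prospects-web
    environment:
      - REFERENCE_DATA_SOURCE=NServiceBus   # hypothetical switch between the two loaders

  reference-data-handler:
    image: prospects-reference-data-handler
    environment:
      - TRANSPORT=LearningTransport
    volumes:
      - learning-transport:C:\learning-transport   # shared Learning Transport directory

volumes:
  learning-transport:
```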
00:52:20 Elton Stoneman
Okay, and the way I do this update, the way I do my deployment, is the same as before: I do a docker-compose up. So let's close this down. Fit screen. docker-compose up with version four, and it's going to do the same thing. It's going to see that my web application definition has changed, so that gets updated because I'm using my different config, my different environment variable, and it's actually creating my reference data handler, so I'll have this new component there. Everything else stays the same because the rest of the stack hasn't changed.
00:52:48 Elton Stoneman
When I do this kind of deployment, and it's the same principle whether I'm running on my machine or on a cluster up in the cloud, I would do the same type of deployment and Docker would work out what needs to change and what doesn't. So I'm only going to update the things that have actually changed.
00:53:02 Elton Stoneman
So I have a new container now for my web application, so I'll do a docker container inspect, get the new IP address, and before I go and browse to it, I will look at the logs for my new reference data handler. And that says the same thing: listening on the Learning Transport, and it's just listening. So now when I browse ... Oh, let's go and get my IP address again because I've overwritten it. Going to browse here. So now it's going to be, again, the same user experience. The UI and the UX are all the same. But when the page loads and it shows me my list of options in the dropdown, that's going to have come from my reference data component. So if I double-check this and look at the logs again ...
00:53:59 Elton Stoneman
So I see now that it's received a request and sent the response. So by the time I browse back here my application is still loading up, so my web application is still getting the responses here, so here we go. Exactly the same data, same format, same order, all the rest of it, but now it's coming from my request-response messaging. So I've broken my monolith up into lots of components now.
00:54:20 Elton Stoneman
Okay, so this is where I am now. I've taken all the components that I wanted to address, put them in different containers, and I've done it the feature-driven way. So rather than just saying I'm going to move some of this stuff to .NET Core without any real business value, I focused on the important things. And these are the things to look for if you're doing this kind of feature-driven decomposition. Anything that changes more frequently than other components: put that in its own container and then you can release it on its own release cadence.
00:54:52 Elton Stoneman
Then there are things that don't change very often but are perhaps brittle. Maybe you integrate with a finance system that only changes once a year when they change their API, but it needs a lot of testing. So if I put that in its own component I don't need to do an update of that component except once a year. Then anything with issues. So I looked at my performance issue, but if I've got security concerns or something that's very buggy, put it in its own container and I can zone in on that to fix those issues. And anything that promotes reuse. So getting APIs out, getting event-driven stuff in there, so other business components can use it, or even business partners if you want to make that stuff publicly available. So I've broken down my monolith, I'm running these different containers and I'm plugging them all together with NServiceBus.
00:55:33 Elton Stoneman
Now, what I haven't particularly focused on (I'm just going to go through this quickly towards the end of the session) is why this is such a nice fit: having these things running in containers that are really lightweight, that I can spin up easily, and plugging them together with NServiceBus. The way these containers talk to each other is just the ordinary networking stack. They reach each other by their container names, they use TCP/IP or UDP or whatever they need to use. I've deliberately shown the message queue and NServiceBus in my diagrams as an abstraction; I don't really need to worry too much about the transport. But because I'm running these things in containers it makes it really easy to experiment.
00:56:10 Elton Stoneman
So far I've been using the Learning Transport. I've shown you how to share the directory on my server, or on my development machine, so all the containers have the same volume to write to. But I can easily switch that out to RabbitMQ, which I can run in containers, or if I'm going to run in the cloud and I want a managed service I can switch to the Azure Service Bus transport or AWS or whatever I'm using. So just very quickly, I will show you what that looks like in terms of the things that I need to change.
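A sketch of what the swap looks like in the endpoint setup code, assuming a hypothetical TRANSPORT environment variable and the NServiceBus.RabbitMQ transport package of that era; only this startup code changes, the handlers stay the same:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using NServiceBus;

class Program
{
    static async Task Main()
    {
        var endpointConfiguration = new EndpointConfiguration("IndexHandler");

        // Pick the transport from config; none of the handler code changes
        switch (Environment.GetEnvironmentVariable("TRANSPORT"))
        {
            case "RabbitMQ":
                var rabbit = endpointConfiguration.UseTransport<RabbitMQTransport>();
                rabbit.ConnectionString("host=message-queue"); // container name doubles as hostname
                rabbit.UseConventionalRoutingTopology();
                break;
            default:
                var learning = endpointConfiguration.UseTransport<LearningTransport>();
                learning.StorageDirectory(@"C:\learning-transport"); // shared volume
                break;
        }

        var endpoint = await Endpoint.Start(endpointConfiguration);
        await Task.Delay(Timeout.Infinite); // keep the container's process alive
    }
}
```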
00:56:39 Elton Stoneman
So back to my demo environment. I've got a new Docker Compose file. My version four Compose file specifies everything that I need, and you can join more than one Compose file together to pick out just the changes between different environments. So I've got a RabbitMQ variant of my application that I want to run. In my RabbitMQ file I'm running RabbitMQ itself as a container, and then I'm changing just the bits of the configuration that I care about to switch to RabbitMQ. So when I run this update, I'm going to be starting RabbitMQ as my transport and I'm switching all my other components to use the RabbitMQ transport. All the rest of the configuration stays the same.
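The override file is tiny because it only states the differences; something like this, with illustrative names (the demo ran Windows containers, so its actual RabbitMQ image differs from the official Linux one shown here):

```yaml
# RabbitMQ override file: just the deltas from the main Compose file
version: '3.7'
services:
  message-queue:
    image: rabbitmq:3-management   # management plugin exposes the web UI on 15672

  index-handler:
    environment:
      - TRANSPORT=RabbitMQ

  reference-data-handler:
    environment:
      - TRANSPORT=RabbitMQ
```

It gets applied by listing both files: docker-compose -f docker-compose.yml -f docker-compose-rabbitmq.yml up -d.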
00:57:22 Elton Stoneman
What I'm going to do first of all, to show you how NServiceBus takes care of some of the details for me, is clear the screen here. I'm going to join those two files together, so version four and version four with Rabbit, but I'm just going to start RabbitMQ for now. I'm just going to start the message queue. RabbitMQ is running in a container, and it has a web interface so I can connect to that. In a second I'll start the rest of the application, but what I want to show you is when I get the IP address for Rabbit ... So this will show me the IP address. I can browse to the management interface on port 15672. So again this will take a few seconds to start up, but when it's here ... This is just plain RabbitMQ. If you're used to RabbitMQ, you've seen it before; this is what you would normally get to after about two hours of installing dependencies and configuration.
00:58:24 Elton Stoneman
So this is the basic overview. Again, I'm not going to go into the details. But my list of queues is empty; there are no queues in there. And my list of exchanges, which is how published messages get routed, just has the defaults in there. Now I'll start the rest of my application, so I'm going to use docker-compose up. Oh, no. I'm not. Not unless I copy it correctly. And let's copy this in here. So docker-compose up with my two files to start the rest of the application. Rabbit is already running. Most of these things are up-to-date, apart from anything that uses NServiceBus, because the configuration has changed to use the RabbitMQ transport. So they're all going to get restarted, because configuration is fixed for the whole life of a container: if I change the configuration, that means there's going to be a new container.
00:59:12 Elton Stoneman
As they start up, I'll connect to one of them once we're ready and we'll just see the logs. So docker container logs on my index handler. And what we'll see is that it's using the RabbitMQ transport. So that's the only difference from the logging point of view. Go back to RabbitMQ and you'll see all these new things have been created, all these new exchanges, all these queues; the whole thing has been populated for me just by the startup of the application.
00:59:37 Elton Stoneman
So I'm just going to quickly show you that it still works. Let's go and get the IP address of my web container. I appreciate we're probably running a little bit low on time for the Q and A, but we'll get to what we can, and the rest of it we can always handle in a different way of communicating with you guys. So here's my IP address. It's going to be exactly the same. It'll work in the same way. So actually, as demos go, this is a pretty dull demo because it's going to be the same as the previous one. But everything now is going through RabbitMQ.
01:00:11 Elton Stoneman
So when I browse to this it's going to send my request to get the reference data, my reference data handler will pick that up and send the response and send everything back, but now it's using RabbitMQ. So let's check those logs for my reference data handler. Oh, I've got an exception. So that means this container started before RabbitMQ started up. In a production environment my Dockerfile would capture this, the container would end and Docker would start up a replacement, but for now you're just going to have to take my word for it.
01:00:43 Elton Stoneman
So let's switch back to the slides a second. Okay, so the transport stuff there. It had to be a failed demo, otherwise you wouldn't believe it was really a live webinar. So the transport discussion is all about being able to swap out different transports without changing your code, just by changing some configuration, and supporting all the different things that you need. So in dev you can use your Learning Transport. In your test environments you can use whatever works for you: RabbitMQ, as in my excellent example, SQL Server, MSMQ. And then you can support wherever you need to run these things. And the key thing there is that the only thing you're maintaining is your code to set up the transport, your code that says, "I'm using RabbitMQ and these are the transports I support." The rest of that is all handled by the platform for you.
01:01:29 Elton Stoneman
Okay, so this is how you get in touch with me to say, "You went over, and where is the Q and A?" I'm on Twitter @EltonStoneman. If you want to email me I'm just elton@docker.com. Feel free to get in touch with me. And that last link, the is.gd link, is the demos for today's session, so you can try it yourself and see if you can get the demos to work at the end. It's all on GitHub: the source code and the steps that I took and went through.
01:01:52 Elton Stoneman
So I appreciate we've gone a little bit over, and I hope you still found that useful.
01:01:57 Kyle Baley
Thank you Elton, and we'll take a few minutes for Q and A. But keep in mind, for everybody that's registered, if you do have to drop off we'll get the questions answered and sent out to you in some format, whether that's a blog post, email, a discussion group or something else. The Q and A questions, we'll post those answers and make them available for you.
01:02:22 Kyle Baley
A couple of them had to do with whether the code is available, and it looks like you've answered that. We'll send out that link to everybody, of course. And the presentation has been recorded and the recording will be sent out to everybody.
01:02:39 Kyle Baley
So let me go through the Q and A questions here fairly quickly. In the demo, the version two demo, in the Docker Compose file, the YAML, you passed in some environment parameters. How are those passed to the console application?
01:02:55 Elton Stoneman
Oh, okay. Yeah. Good question. So yeah, I glossed over a lot of that stuff. So environment variables that you pass to a container get surfaced as system environment variables, just like the ones you'd set with $env: in PowerShell or setx at the command prompt. So when you start your container you can pass in a bunch of environment variables. Now what happens there depends on your code. So in the examples where I'm using ASP.NET Core and .NET Core console apps, they have a really nice configuration model that lets you read from a JSON file that's packaged with the app, but then override that from environment variables, so they're very aware that an environment variable might be used for config.
01:03:32 Elton Stoneman
So when you start your console application in a container, you do docker container run and you can use -e to pass an environment variable. Docker Compose does that for you when you specify those things there. So inside the application, the value you pass in is the value of the environment variable, and then your app does whatever it does.
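A sketch of that configuration pattern, assuming the Microsoft.Extensions.Configuration packages and an illustrative setting name:

```csharp
using Microsoft.Extensions.Configuration;

// The JSON file packaged with the app provides defaults; environment
// variables added later in the chain override any matching keys.
var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: true)
    .AddEnvironmentVariables()
    .Build();

// Reads "ConnectionStrings:ProspectsDb" from JSON unless the container was
// started with an override, e.g.:
//   docker container run -e "ConnectionStrings__ProspectsDb=..." my-app
// (a double underscore in the variable name maps to the ":" separator)
var connectionString = config["ConnectionStrings:ProspectsDb"];
```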
01:03:54 Kyle Baley
Okay, thank you. Another question here. When you have the SQL transport in a microservices solution, do you recommend having a common database for the NServiceBus messages and subscriptions and the business data, or do you often set that up as separate databases and/or separate containers?
01:04:18 Elton Stoneman
Good question. The last time that I used ... I have used the SQL Server transport in production but not with containers. This was before Docker. We kept them separate. So we had a dedicated SQL Server instance for messaging and then our transactional data was stored elsewhere. I think I would probably be inclined to use the same pattern.
01:04:38 Elton Stoneman
And the interesting thing is when you're running in containers, your containers have the same amount of availability as your cluster. So if I'm running on a single server and I'm running all my containers, if I lose my server I've lost my application. But if I'm running in a cluster with a bunch of different servers, then I get high availability by default. Because I could run this exact same application on a cluster running Docker Swarm which is the orchestration technology and it would start up all the containers on different servers. I wouldn't know where they were running, I wouldn't care. They still talk to each other using DNS and all that's transparent.
01:05:15 Elton Stoneman
If a server goes down and takes a bunch of containers with it, then Docker will start up replacement containers elsewhere. So that leaves a question about persistent state, which I'll put to one side, but it means you can say: I'm going to run my message queue in a container, and I don't need to worry about it having a different service level from my web app, because they're all going to run on the same platform and they'll have the same service level.
01:05:38 Kyle Baley
Great. Okay. Thank you. And I'll just add that if you do keep them in separate databases, you do have to consider the transactionality of that. It's still possible, but it's not as automatic as if they were all, say, still in the same database.
01:05:58 Kyle Baley
So next question. In the last Elasticsearch sample, assuming the existing monolith has been running for a while with a lot of prospects, how would you handle retroactively populating all the existing prospects to Elasticsearch? Would you create them through retrospective NSB messages?
01:06:21 Elton Stoneman
Yeah, good question. So those are two separate concerns: I'm adding my new feature to capture data and store it in my reporting database, plus I need to retroactively fit all the existing data. So you've got two options, and you're going to have to run a one-off job either way, but you want to reuse as much of your code as possible. When I've done this sort of thing in production, we've done a one-off job that just published the messages to NServiceBus. So rather than have an ETL process, an extract-transform-load that reads from the source data, transforms it and then writes it directly to the target database in Elasticsearch, it's much nicer to work out which things you need to populate and publish a bunch of messages, and then let your message handler put them in Elasticsearch, because you're using the same production workflow, the same set of code.
01:07:13 Elton Stoneman
That's how I would do it. It's going to be a separate load process but I would run that in a container using as much of the existing code as possible by publishing messages.
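A sketch of that kind of one-off backfill job, reusing the production event so the existing handler does the indexing; the names here are hypothetical:

```csharp
using System.Data.SqlClient;
using System.Threading.Tasks;
using Dapper;
using NServiceBus;

public class BackfillJob
{
    public static async Task Run(IEndpointInstance endpoint, string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            // Work out which prospects need to be populated
            var prospectIds = await connection.QueryAsync<int>("SELECT ProspectId FROM Prospects");

            foreach (var id in prospectIds)
            {
                // Publishing the same event the monolith raises means the normal
                // index handler writes each prospect to Elasticsearch
                await endpoint.Publish(new ProspectCreated { ProspectId = id });
            }
        }
    }
}
```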
01:07:22 Kyle Baley
Okay.
01:07:24 Elton Stoneman
Someone's told me my link doesn't work. Let me update my link. The link to my demos. Here we go. No, that is the right link. Sorry, carry on there Kyle.
01:07:37 Kyle Baley
The next question. We saw with RabbitMQ and Kibana that you could connect to the web interface for them. Can you still use SQL Server Management Studio to connect to SQL Server running in the container?
01:07:52 Elton Stoneman
Yeah, absolutely. Absolutely. Let me just show you because I've got it here. I've got a bunch of containers running. Container list. SQL Server running in a container is just SQL Server. So my SQL Server container here, when you refer to these things you can either use their name or their ID, which is the random thing that Docker generates. If I inspect this one, container inspect, it's just a remote instance of SQL Server. It just happens to be in a container. It's got an IP address so I can grab that and if I had SQL Server Management Studio on here, I could connect to it just like a remote SQL database.
01:08:27 Elton Stoneman
Actually, what I've got here is Sqlectron, which is a really lightweight SQL client. With that I can take my IP address (I'll give it a second to start up) and paste in my container IP address. Make sure I've got the right set of passwords. I always use the same password for everything, obviously, so that's all working. So I can connect to this just like a remote database. I've got all the default stuff that comes with SQL Server, plus I've got my own database that Entity Framework created for me. So yeah, it's just SQL Server; it just happens to be in a container.
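As an aside, the IP address can be pulled straight out of docker container inspect with a Go-template format string (the container name here is illustrative):

```
docker container inspect --format "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" sql-server
```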
01:09:01 Kyle Baley
Great. The next question. Have you tried the same setup with minikube or Kubernetes? I guess that I've got a similar question for this, is how would you orchestrate deploying the application that you eventually ended up with?
01:09:23 Elton Stoneman
Firstly, you can't run this stuff on Kubernetes right now because it uses Windows containers, and Kubernetes support for Windows Nodes is currently in beta. So it should go GA by the end of the year, in which case you can use the exact same Docker image. Oh, quick rewind. For anyone who doesn't know what this stuff is, Kubernetes is the orchestrator. It's one of the clustering options for having this stuff running across a set of machines. So it's what you do for production.
01:09:53 Elton Stoneman
So I'd have a whole bunch of machines running, some running Linux, some running Windows. I install Docker on all of them and then I join them together as a cluster. And the choices of cluster really are Docker Swarm, which is built into Docker, or Kubernetes, which is the most popular orchestrator. Kubernetes doesn't support Windows right now, so you can't run this application on Kubernetes.
01:10:12 Elton Stoneman
When it does, which is going to be by the end of the year, and it's probably only going to be on Windows Server 2019, then yes, I would have to write a different description for my application manifest, because Kubernetes doesn't use Docker Compose. Kubernetes has its own application manifest format. So the description of how all these services plug together would be in a different ... It's still YAML, but it's a different format.
01:10:39 Elton Stoneman
I would still use my same images, and what it would let me do, and the same with Docker Swarm actually, is this: if I was running on a cluster with a mixture of Linux and Windows servers, I could run the open source and Linux stuff in Linux containers. So my .NET Core components can run in Linux, Elasticsearch and Kibana can all run in Linux, and I'm only running the Windows-specific .NET Framework stuff in Windows. And again, it's completely transparent. The components just work exactly the same way; they just happen to be running on Linux or Windows. So it's a really nice way to get that.
01:11:13 Elton Stoneman
Can't do it in kube yet. We'll be able to soon. Right now you can use Docker Swarm.
01:11:20 Kyle Baley
Great. Actually I'm going to skip over to a related question. What are the options to run both Windows and Linux containers in the same network?
01:11:30 Elton Stoneman
Oh, okay. Yeah, so that's cool. So right now you're going to have to use Docker Swarm. Docker Swarm is a really simple clustering technology. You install Docker on all the servers you want to use, then you run one command on one of the servers, which is going to be the manager: you run docker swarm init, which initializes the swarm. The output of that gives you a secret token, and on your other servers you run docker swarm join, excuse me, and you give it the secret token, and then they're all in the swarm. What that gives you is a cluster where you can run basically whatever you like, and it just comes down to the definition of whether you want to run things on Linux or Windows, and you have that as part of your Compose file.
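The whole setup is the two commands described there; the addresses and the token are placeholders:

```
# On the manager node: initialize the swarm (prints a join token)
docker swarm init --advertise-addr 10.0.0.1

# On each of the other servers: join using that token
docker swarm join --token <token-from-init-output> 10.0.0.1:2377
```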
01:12:12 Elton Stoneman
So what I didn't particularly focus on: I showed you that you can join several Compose files together, and you can do that to add in production things. So I've got one Compose file that defines the structure of my application, and I've got another Compose file that adds the extra bits I need for production, and that might say that in production I'm going to use Elasticsearch on Linux, so I've got a different image and a hint to tell Docker to run it on Linux.
01:12:36 Elton Stoneman
But yeah, what you end up with is a cluster that you operate as a single unit. So I'm still running Docker commands, but Docker is going to distribute all this stuff across the whole cluster.
01:12:50 Kyle Baley
Okay. Another question has to do with licensing for NServiceBus on Docker and Chuck, if you're still there I would encourage you to reach out to us. I believe the short answer is it's not one license per Docker container. The longer answer is out of scope here, but because it has to do with licensing I would encourage Chuck to reach out to us.
01:13:16 Elton Stoneman
And there's a wider question on licensing there actually. For Windows containers you don't pay a Windows license per container. What I'm showing you on the screen here, incidentally, is my cluster where I've got a mixture of Linux and Windows nodes. So this is Docker Enterprise, which gives you a fancy UI, but underneath it's Docker Swarm, the same Swarm component.
01:13:34 Elton Stoneman
Yeah, so I've turned off my Windows servers for that, because this is running in VMs. All you get is one cluster with a whole bunch of different things. And actually I've got Linux and Windows here. I could have Linux, Windows, Raspberry Pi and IBM Z mainframes all in the same cluster and just distribute the right workloads to the right ones.
01:13:50 Elton Stoneman
Yeah, sorry, the licensing question. You don't pay a Windows license for each container. You pay a Windows license for the server where the containers are running, the thing that's running Docker. If I run a hundred Windows containers, I'm still going to pay one Windows license. The software inside the container depends on your vendor, the vendor of that software, and I guarantee they are all being asked how they support containers, because this is how people want to run things now: for their own components, for open source components and for off-the-shelf software. So yeah, talk to your vendor, because it won't be the first time they've heard the question.
01:14:26 Kyle Baley
Right. And I believe NServiceBus is similar, but I don't want to speak to that, so let's move on to the next one. Is it possible to run the MSMQ transport with Windows containers? I kind of know the answer to this one, but I'll let you.
01:14:41 Elton Stoneman
Yeah, so if I go back here, there's an asterisk next to MSMQ. You can do it from Windows Server 2019. So-
01:14:50 Elton Stoneman
Yes. And right now the demos I've got were on 2016. 2019 is out actually, by the way, and so I'm gradually transitioning my demos over to 2019. So the next iteration of this presentation will all be 2019 and I'll have an MSMQ demo in there. A tiny bit of background for that: the stuff that makes software run in containers has been in Linux for 20 years. When Microsoft started looking at containerization, they started working with Docker back in 2014 to get the same sort of functionality into Windows Server.
01:15:23 Elton Stoneman
They looked at all the kinds of workloads that Windows Server currently supports and thought about which ones people were going to want to use in containers. And in the first cut they didn't think MSMQ was going to be something that people wanted, and then all the NServiceBus users, and anybody else who was using MSMQ, said, "Well yeah, absolutely. If I can run my MSMQ in a container and have it run at the same service level as the rest of my application, which all runs in containers, then that would be cool." So Windows Server 2019 supports that.
01:15:58 Kyle Baley
Mm-hmm (affirmative). And I believe the last question here is: you've got a relatively simple monolith application, but it's usually a lot more complex. In your experience, have you run into any issues migrating monoliths to containers? For example, are there any restrictions on the registry? What if it's a Windows Forms application? Things like that.
01:16:21 Elton Stoneman
Yeah, absolutely. So two parts there. The first part is taking the monolith and making it run in a container. I've just thrown up on the screen here the slide about breaking the monolith down, just for reference. So the examples I've shown are quite simple, but basically anything you can script, you can put into a Dockerfile. So as long as you currently deploy your application through scripts, it should work in Docker, with the caveat that certain quirks like MSMQ aren't supported in containers. But that doesn't tend to happen with end user applications.
01:16:54 Elton Stoneman
So anything you need to do, like writing into the registry, doing weird stuff with appcmd, setting up some strange links that you need or some paths that need to exist, you can do all that stuff. I've shown you some simple PowerShell in the setup scripts in my Dockerfile, but you can do whatever you need. You know, setting permissions, pretty much anything: as long as you can write a batch file or a PowerShell command to make it happen, you put that in your Dockerfile and you should get there.
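A hypothetical sketch of that kind of scripted setup in a Dockerfile, in the spirit of the demos; the base image tag uses the current naming and the registry keys are invented:

```dockerfile
# escape=`
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8
SHELL ["powershell", "-Command"]

# Anything you can script can run at build time: registry keys, folders, permissions
RUN New-Item -Path 'HKLM:\SOFTWARE\MyLegacyApp' -Force; `
    Set-ItemProperty -Path 'HKLM:\SOFTWARE\MyLegacyApp' -Name 'LogPath' -Value 'C:\logs'; `
    New-Item -Path 'C:\logs' -ItemType Directory

COPY MyLegacyApp C:\inetpub\wwwroot
```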
01:17:22 Elton Stoneman
The one thing which is slightly awkward in 2016, and again is improved in 2019, is Active Directory integration. When I'm running inside a container, the container isn't directly connected to the domain. So if I'm using things like Active Directory authentication for SQL Server, inside my container, the container itself is not domain-joined. The way that works is the server is joined to the domain, and then you pass credentials through. So when you start a container, you start it with some special configuration and it inherits the permissions that the server has in Active Directory.
01:18:00 Elton Stoneman
People are doing it in production. It's fine, but it's slightly clunky, and it's better in 2019. But actually that's the only thing people find issues with, because you have to set up some specific types of principal and flow those principals through when you create your containers. It's something that causes people a headache, but once you've done it, you've done it, and it does get easier. So in terms of real applications, I've literally taken a .NET 2.0 Web Forms app that ran against SQL Server, written a Dockerfile for it very similar to the ones I've shown you, and had it running.
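For reference, the mechanism being described is Windows' group Managed Service Accounts: the host is domain-joined and the container borrows its identity through a credential spec passed at startup (the file name here is a placeholder):

```
docker container run --security-opt "credentialspec=file://my-app-gmsa.json" my-legacy-app
```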
01:18:37 Elton Stoneman
The interesting thing is what you will probably find, and in a longer version of this session I start off by running my application in a container with a simple Dockerfile, is that the first thing it does is yellow-screen: I get an error, and that might be the first thing you see when you move your application to Docker.
01:18:53 Elton Stoneman
But the error you see will be your error. It's going to be an error from your application, saying that it's missing a DLL or that it can't connect to some dependency that you've forgotten about. So when you first run these things in Docker, provided your application runs (and there's no reason why it shouldn't, as long as you can script it), you're immediately into your own domain. If you see an error, it's probably an error you've seen a hundred times deploying to other environments. So it's something that you should be able to deal with.
01:19:22 Kyle Baley
Okay, and now the last question ... If there are any others that come after this we'll answer them via email or some other channel. This one, we talked about before we started. Does it make sense to work with containers and NServiceBus without moving to .NET Core first? And they specifically mention the image size. The .NET Framework images are quite large, so does it make sense to move to .NET Core, where the images are quite a bit smaller, or do the image sizes hamper your development or your deployments?
01:19:59 Elton Stoneman
It's a good question. So on the image size: when you package your application into a Docker image, I talked briefly about the fact that it's layered and those layers can be reused in the build cache. The layers also get reused across images. So what you will find is, if you go and look at the size of some of your images ... Let me switch back to my VM and I'll show you this. So let's clear this down. I've got millions of images on here, so I'll just quickly switch to something towards the end here.
01:20:25 Elton Stoneman
So these are images, and it tells you the size of each one. You will see that anything that's based on Windows Server is 10 gigabytes plus. Now, that's a virtual size. If I only had this one image on my machine, it would take up 13 and a half gig of disk space. But of that 13 and a half gigabytes, maybe 50 megabytes is my own application, maybe two gigabytes is .NET and IIS, and the other 11 gigabytes is Windows Server Core. And they all get shared, so every single one of these images is using the same 10 gigabyte base of Windows Server Core, and all the ASP.NET images are sharing the same two gigabytes of ASP.NET.
01:21:04 Elton Stoneman
If I build several versions of my own application, they'll all show up as being 13 and a half gig but actually the difference between each version will be 10 megabytes or one megabyte depending on what's actually changed between those versions. So you see these huge numbers but actually because of the way Docker does this stuff, it's actually very efficient. So I'm seeing hundreds and hundreds of gigabytes here, but actually my disk is only using 20 gigabytes or whatever it's going to be. So that's the first thing.
01:21:29 Elton Stoneman
Don't be put off by these big numbers, because that's the virtual size. So when you first set up a brand new machine, yes, it will download four gigabytes for the Windows Server image and it will expand to 10 gigabytes. But then that gets reused for every single image you use. So don't let that put you off, is the first thing.
01:21:49 Elton Stoneman
The second thing is, in Windows Server 2019 one of the things they've actively been doing is reducing these sizes considerably. So the ASP.NET-based image comes down from 12 gig to something like five gig. So it's still big, but the payoff is that it will run all your software, because it's Windows Server. It's got 32-bit support, it's got support for .NET back to 2.0. You can literally take your 15 year old application, write a Dockerfile and have it running in a container, and move it off your Windows Server 2003 machine into the cloud in a day. So the payback for the fact that these images are much bigger than the Linux equivalents is that they've got everything you need to run your existing applications.
01:22:30 Elton Stoneman
As to the move to .NET Core, I don't think it's a technical decision; I would treat it as a business-driven decision. Moving to .NET Core is not as simple as just changing the target framework of your application. Most of the companies who come to Docker with an existing landscape of applications that they want to modernize, either by running them on new infrastructure or in the cloud, or because they want to modernize the architecture, these are old apps. These are maybe five or 10 year old applications, pre-NuGet. So you're not taking that application and making it run on .NET Core, because the chances are it uses some dependency that hasn't been updated for 10 years, so you're not going to be able to simply migrate it.
01:23:13 Elton Stoneman
And even for things that are more recent, take Entity Framework as an example: if you were using EF5 in your .NET Framework application, migrating that to .NET Core means upgrading to EF Core, which behaves differently from Entity Framework in certain ways and doesn't have a matching API. So suddenly, what I want to do is run in a container and run in the cloud, but to make that happen I've taken an ORM upgrade into my project, which has a huge amount of risk and potentially a huge amount of work.
01:23:47 Elton Stoneman
So I would say, "What am I trying to achieve?" Don't be put off by the image size, because that's a technicality which you can very much live with, and look at what you're trying to get to. So unless you desperately need to run .NET Core, because you want to run on Linux maybe. If you want to move off your Windows estate and move things gradually to Linux and then run those in Azure, because Linux VMs can be run more cheaply, that's a discussion worth having. But moving to .NET Core just because the image sizes are too big, I don't think that's something you need to do.
01:24:21 Kyle Baley
Fair enough. All right. I think we've tested everybody's patience enough here. Thank you everybody who's still listening, and a big thanks to you, Elton, for presenting this today. If anybody wants more information, by all means reach out to us. Elton has a book on Docker called Docker on Windows, I believe, and he's working on the second edition, and he has at least one Pluralsight course. Maybe you can say a few words about that?
01:24:54 Elton Stoneman
Yeah, by all means. If you're interested in this stuff, I blog about it and tweet about it all the time. This is my blog. My book, Docker on Windows, is all about Windows Server 2016 to start with, but the second edition, which will be out as soon as I finish writing it, will be about 2019. On Pluralsight there's a whole bunch of stuff: there's .NET stuff, and I'm an Azure MVP so there's Azure stuff in there.
01:25:18 Elton Stoneman
If you're interested in today's session, the one you'll want is Modernizing .NET Apps with Docker, which takes some of the stuff that I zipped through super fast today and does it over, I don't know, three and a half or four hours, and goes through a lot of detail. So if you're a Pluralsight person, that's probably the most useful one for you.
01:25:38 Kyle Baley
Thank you Elton, and thank you everybody and again, we'll be in contact with you about the recording and any questions that have lingered that we haven't covered. So everybody, have a good day and enjoy your adventures in Docker.

About Elton Stoneman

Elton is an 8-time Microsoft MVP, author, trainer and speaker. He works for Docker and travels the world helping people understand what containers, DevOps and microservices can do for them and their existing applications.