
Webinar recording

Tales from the trenches: creating complex distributed systems

Join Neel Shah and Mauro Servienti for war stories about the challenges of building real-world distributed systems.

Why watch?

Developing complex distributed systems is hard. There are many challenges. How do we identify and tackle them? How do others do it?

In this interview, Mauro Servienti is joined by Neel Shah, VP of Development at Strasz Assessment Systems, to learn about their journey through this complex domain and how they successfully apply ADSD principles using NServiceBus.

In this webinar you’ll learn about:

  • Finding service boundaries in a complex domain
  • UI and ViewModel composition
  • Focusing on the business problems with NServiceBus

Transcription

00:00 Hadi Eskandari
All right. Welcome everyone. Thank you for joining us today. My name is Hadi and I'm a software engineer at Particular Software. Today, my colleague Mauro will interview Neel from Strasz Assessment Systems, and they'll talk about the challenges of building a real-world distributed system. You'll hear some horror stories, so it should be very interesting. Just some quick housekeeping before we get things started: if you have any questions, please don't use the chat; use the Q&A section, so we notice your questions and can make sure we address and answer them toward the end. Without further ado, let me ask Mauro and Neel to start.
01:03 Mauro Servienti
Thanks Hadi, and welcome Neel. Good day everyone, and welcome to another Particular webinar. It's interesting, because I've been helping organizations design their systems for the last, more or less, 20 years now. After having been involved with the fashion and clothing domain and industry, with all the craziness around sizes, colors, attributes of various types, and multiple SKUs for each item, because reasons, right, I thought that nothing could be more complex. Then I met Neel and the team at Strasz, and they changed my mind, because things can be more complex. This is Mauro, as Hadi said before, and today I have the pleasure of hosting Neel, the Vice President of Development at Strasz Assessment Systems. Welcome Neel, tell us something about you.
01:56 Neel Shah
Thanks, Mauro. I'll talk a little bit about my company, Strasz Assessment Systems. For around 20 years, we have provided solutions for the assessment industry. Typically, our clients offer education, certification, or licensure programs, and the system that we'll be talking about today is the one that we've built for one of our largest clients, the AICPA. The AICPA produce, deliver, and score the Uniform CPA Exam, and our enterprise solution for them includes content authoring of test questions, automated assembly of those questions into an exam, delivery into a test center, scoring, reporting, and invoicing. Some of our other clients include AAMC, who offer the MCAT exam; Inteleos, who offer sonography and physician certifications; and NCCPA, who certify physician assistants.
03:01 Mauro Servienti
Yeah. Interesting. And even more interesting is that I've probably been one of your customers, because I had a few tests at different test centers, and I know that one of those is one of your customers. So I used your system, and that's very cool. So, that was a brief overview of what Strasz does and what the domain is about. Can you now give us a brief overview of what technologies are at play in the system you're building?
03:36 Neel Shah
So, currently we are in the middle of a cloud migration effort, and the latest technology stack is obviously NServiceBus, plus .NET Core, Azure Functions, Azure Cosmos DB, Azure Service Bus, Azure Cognitive Search, and Angular with TypeScript in a monorepo with Nx. On the other hand, the CPA exam was computerized, I think, in the early 2000s, so there are a lot of legacy technologies also in play. Our on-prem stack currently has NServiceBus, RabbitMQ, RavenDB, and SQL Server. We also use IBM's CPLEX engine, which uses linear programming and user-defined constraints to automate the creation of the exam. So the CPA exam is actually assembled by this linear-programming CPLEX engine. Over the years, we've had two or three opportunities to make deep design or technology changes for the AICPA. One is when we overhauled the exam to be web-based instead of using desktop technologies. A second one is when we got rid of testing windows, so that candidates can test throughout the year; there are no blackout dates at this point with the CPA exam. And obviously the more recent one is the cloud migration effort that I was referring to. So I guess that gives you a sense of all the technologies currently in play.
05:12 Mauro Servienti
That's interesting, because there are a lot of technologies at play in this case, which makes things even more challenging, I guess. It's not only that the technology is complex, but also the overall amount of technology involved. What were the reasons, or if there was just one, what was the reason for following SOA principles and implementing a distributed architecture in your domain?
05:38 Neel Shah
Okay. So let me share my screen for this. In order to discuss the need for SOA, I think it's best to briefly look at all the systems that are involved. What we'll do is, using some examples, we'll look at specific issues that we were facing, and we'll also look at one of the first services we extracted. I added this slide to give a high-level view of how the CPA exam is actually administered by three separate organizations. While the AICPA produces the exam and scores the results, as shown by these red arrows, the candidates actually register with state boards. So if you look at this first green arrow, that's the candidate registering for the exam. And when they take the exam, they do it in physical test centers that are owned by another company called Prometric.
06:36 Neel Shah
So our exam software actually runs in 400 Prometric test centers. What we do is create a small, 5 to 10 MB RavenDB database per candidate. This state file is then sent to our backend system, where we score the candidate's responses, and then we report their scores back to the state boards. Looking at it from an AICPA perspective, these are the systems that we built specifically for the AICPA. From left to right, we have the CMS system, which is for content authoring; then there's the assembly system, which creates the exam; the test delivery system, which candidates actually interact with; and the backend system, which scores the candidates' responses. I'm also including a sample item. In our industry, a test question is called an item, and an item can be as simple as a multiple-choice question, but it can also be more complex.
07:40 Neel Shah
And so it can have exhibits, reference materials, and a number of different ways to collect candidate responses. In this example, these Excel-like grids that you're seeing actually capture the candidates' responses. So we can talk about the need for SOA from a CMS system perspective. Like I mentioned earlier, the CPA exam was computerized around 20 years ago, and if you look at the implementation, there is heavy use of Entity Framework, and business rules are typically in the presentation layer or in stored procedures. However, if you just look at it from an item perspective, the CMS system is responsible for more than the test question. It is also responsible for authoring of scoring rules, for managing the inventory, managing statistics, references, and classifications, amongst other things. We don't have the time to get into each one of these, but I'll briefly describe what statistics are.
08:39 Neel Shah
So candidates who appear for the CPA exam don't actually get a raw score. Instead, they get a scaled score that traces their performance on a bell curve, and this scaled score is derived from stats associated with an item. So from a system perspective, while these are very separate business capabilities on their own, technically they're coupled for several reasons: one being the system UI, another being reporting, and then there are database constraints, views, and stored procedures. This kind of technical complexity increases the cost of change, and you cannot optimize one business capability without affecting all the others.
09:23 Neel Shah
Another example is that for an item to be included in an exam, it needs to be promoted from its initial state to a pretest state, to an operational state. And as it is being promoted, several reviews need to occur. These reviews were implemented globally across all the business capabilities, and validation was performed globally across all the business capabilities as part of one large workflow. As you can imagine, this can get fairly complex from an implementation perspective. It would be nice if each business area had its own smaller review process and its own smaller workflow. Another example with the CMS system is that candidates may request a rescore for an exam they took in the past if they believe that there was an error in their result processing. So, when we rescore, the database must appear frozen in time.
10:18 Neel Shah
And our monolithic view of an item meant that we snapshot everything related to an item across all the business capabilities that I was talking about. As you can imagine, creating the snapshot is of high technical complexity. Only when we looked at this problem through a SOA lens did we realize that very little data actually needed to be snapshotted. So, overall, what I'm saying is that there is some technical complexity that is unnecessary, which only became clear as we started implementing SOA. Another common issue that we had was duplication, and I'll try to explain it through a feature called adaptive routing. Simply stated, adaptive routing is when a candidate is given a harder set of questions or an easier set of questions based on how they did with the previous set. And the scenario gets a little more complicated when candidates take exams on multiple days.
11:18 Neel Shah
And while we don't have to get into the specifics, it can hopefully demonstrate the complexity involved. These adaptive routing rules originate from the automated assembly process. They then need to be imported into our CMS system, because that is the system of record. From there they're exported and included in a content package that is sent to test centers, and the delivery system executes these rules for the live exams. And when the results come to our backend system, the backend system again executes these rules to ensure that the candidate had a fair experience. So you can see that the data schema and logic for adaptive routing is in every system, and every system will need to change if we make a change to adaptive routing. From a SOA perspective, this would be simplified, because only one service would own the data schema as well as the behavior.
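To make the "one service owns schema and behavior" point concrete, here is a minimal sketch of what an adaptive routing rule could look like if a single service owned it outright. All names and shapes are invented for illustration; this is not Strasz's actual code.

```csharp
using System;

// Hypothetical sketch: one service owns both the rule schema and the single
// implementation of the routing behavior. Other systems would reference rules
// by ID instead of carrying their own copy of the schema and logic.
public record AdaptiveRoutingRule(
    Guid RuleId,
    decimal Threshold,           // performance cutoff on the previous testlet
    Guid HarderNextTestletId,
    Guid EasierNextTestletId)
{
    // Route the candidate to a harder or easier testlet based on how they
    // did on the previous one.
    public Guid NextTestlet(decimal previousTestletScore) =>
        previousTestletScore >= Threshold ? HarderNextTestletId : EasierNextTestletId;
}
```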
12:14 Mauro Servienti
Let me interrupt you for a second. This is an interesting logical view. So these are four main building blocks that represent logical blocks in the system. Going back for a second to the technologies at play that we briefly touched on before, I imagine, well, I know, but I imagine that all these logical blocks, when deployed in production, run in very different systems and very different environments. Can you briefly describe those?
12:54 Neel Shah
Yeah. So let's take test delivery. Test delivery runs in a VM in the Prometric test center, on IIS. For that system, we don't have the infrastructure to run NServiceBus. However, what we've done is identify the different services that are involved, and then we've created components from those services that get deployed into the IIS application as regular .NET NuGet packages. So there is messaging involved, but that messaging is really... it's a poor man's service bus that we use in test delivery. There are plans to possibly move the driver to be cloud-based, and when that happens, you will see that it is more in line with how we are doing the rest of the enterprise. The backend system is a good example of where there is currently a full push toward cloud-native technologies, and CMS is, again, where cloud-native technologies are being used. But currently, both CMS and the backend, on-prem, use RabbitMQ, NServiceBus, .NET Core, RavenDB, those sorts of technologies.
14:34 Mauro Servienti
Okay, good. Thanks.
14:37 Neel Shah
So, just moving on, I wanted to show the kind of advantage you get once you extract a service from a monolith. The first service we extracted, in our industry it's called the rubric service, but just to simplify, I'm calling it the scoring rule service. Before the service was extracted, the scoring rules were expressed as steps, in this format that is kind of a low-level language. In this example, the correct answer is if you had 10,000 in the debit column here and 10,000 in the credit column over here, but as you can see, it takes around eight steps for the author to express the scoring rule, and in a less readable format. You really have to know this style of authoring the scoring rule in order to be able to specify the rubric. I was reminded of this quote from Eric Evans, where this would be an example of software doing something very useful without explaining itself. So I added that quote for that reason over here.
16:00 Neel Shah
Once the service was extracted, we had a smaller boundary, and having a smaller boundary allows a more expressive model to emerge. Now the scoring rules can be authored in a very human-readable format. You can simply assert specific values in the debit and credit columns without having to write that many steps. And since this was a separate boundary, we were also able to use a NoSQL database so that persistence could be frictionless. These are the kinds of things you couldn't do inside a monolith.
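As a rough sketch of the "after" style Neel describes, a scoring rule reduced to plain assertions against spreadsheet-style cells might look like the following. The types and names here are invented for illustration, not Strasz's actual rubric model.

```csharp
using System.Collections.Generic;

// Hypothetical sketch: the author asserts expected values in named cells
// instead of writing an eight-step, low-level script.
public record CellAssertion(string Cell, decimal ExpectedValue);

public class ScoringRule
{
    public List<CellAssertion> Assertions { get; } = new();

    public ScoringRule Assert(string cell, decimal expectedValue)
    {
        Assertions.Add(new CellAssertion(cell, expectedValue));
        return this;
    }

    // Full credit only if every asserted cell in the candidate's response
    // grid matches the expected value.
    public bool IsCorrect(IReadOnlyDictionary<string, decimal> responses) =>
        Assertions.TrueForAll(a =>
            responses.TryGetValue(a.Cell, out var actual) && actual == a.ExpectedValue);
}

// Usage, mirroring the journal-entry example from the talk:
// var rule = new ScoringRule().Assert("Debit:B2", 10_000m).Assert("Credit:C2", 10_000m);
```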
16:37 Mauro Servienti
Got it. That's very interesting. And I guess that allows me to ask another question: what were the challenges you and the team faced in applying these SOA principles? For example, was finding service boundaries a problem, even if I imagine that everyone on the team mastered the entire domain?
17:10 Neel Shah
Yeah. Actually, SOA, I think you could say, has been a multiyear initiative, and the first few years were just to get the idea socialized in a way that the team was on board. We had challenges in terms of getting business buy-in. Even though we have an excellent product team, we really needed to show them that we were not further complicating the system. To that end, we also arranged for an external review of our code base and our SOA proposal by Paul Rayner, who, as you probably know, is one of the leading voices in the domain-driven design community. Getting developer buy-in was also challenging, but we have a strong culture of continuous learning, where we spend at least an hour a week getting together, either learning a new topic or looking at the existing systems and so on.
18:09 Neel Shah
So, slowly but surely, we were able to get the dev team on board as well. And finally, doing the work is obviously challenging, and more so with ADSD. I had a lot of background in domain-driven design, and with the DDD tactical patterns you can find so many reference architectures; I think the cargo shipping example has probably been replicated in every programming language that exists today. It is a little difficult to find reference architectures for ADSD. But we did trust our team's collective background in design, and we engaged you, Mauro, to help us during the implementation. Mauro used to spend, I think, a couple of hours with us every week while we were implementing, and we got a lot of helpful guidance from you from that perspective.
19:15 Mauro Servienti
Thanks. One of the things we provide is the ADSD course, the Advanced Distributed Systems Design course, and currently the free Distributed Systems Design Fundamentals course. Have you or your team done either of the two? And if yes, was it helpful in facing the mentioned challenges?
19:40 Neel Shah
Yes. I think the ADSD course was really helpful. I've watched a lot of Udi's presentations over the years, and it did help connect a lot of the concepts he was talking about. By the time we ended the course, we had already extracted the first, you could say, bounded context, the scoring rule service, out of the monolith. However, we were having significant issues even imagining how composite UI would work. When we went through the course and learned about technical services like ITOps and branding, that actually helped us not only with composite UI; it also helped us eliminate a lot of the data duplication issue that we were facing. Our architects attended the course, and another benefit of having them attend was that it was easier to bring the rest of the team along.
20:45 Mauro Servienti
That's an interesting point, because one of the things I learned over the years in helping organizations like yours is that buy-in is one of the biggest challenges. You have to build trust in the new architectural design and choices and technology, and slowly bring people on board. That's one of the challenges, and using courses is very helpful in that case. So, going back a bit to technology: we see a lot of developers with the mindset of reinventing the wheel. So write your own abstraction layer, write your own logging, write your own messaging system. And then there are others that use the client SDK. For example, they are on Azure in the cloud, they want to use Azure Service Bus, so they just use the Azure client SDK shipped by Microsoft. Why did you end up choosing NServiceBus instead of rolling your own solution, or solely relying on the SDKs provided by the cloud vendors?
21:55 Neel Shah
Yeah, so if you just look at our history, we started with MSMQ, and MSMQ is a very, very elegant piece of technology; we didn't think we were ever going to move away from it. Then along came .NET Core, with no support for distributed transactions, and so we had to switch to RabbitMQ. And as we were doing the cloud migration, we had a similar experience, where we thought Azure Storage queues were really inexpensive and maybe better than using Azure Service Bus. And again, we started with storage queues and had to switch to Service Bus. All of this was really possible because we were sitting behind an NServiceBus abstraction. Otherwise... especially these transport technologies, they offer very different delivery guarantees.
22:51 Neel Shah
So you cannot even just take... you build the code once and just copy it and apply it to the other one. From that perspective, NServiceBus was a big help. NServiceBus also implements a lot of messaging design patterns, like sagas, process managers, and the outbox, which gives you an exactly-once processing guarantee. And even within sagas you can reply back to a message, or you can reply back to the originator. I think these are extremely sophisticated behaviors to implement on your own. So if you do end up spending a lot of time writing infrastructure code, you have to realize that that is time you are not spending writing business logic, and that's really what you want to do. You want to solve problems for the business rather than write plumbing code. So I think it's best to source infrastructure code from the community. And sometimes even the community has a hard time keeping up. log4net, which is the most dominant library in .NET for logging, doesn't have support for structured logging yet; they're still working on it. You're writing business applications, and that's where you want to spend most of your time, rather than writing infrastructure code.
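For readers who haven't seen it, here is a minimal sketch of the kind of endpoint configuration Neel is alluding to. Endpoint and connection names are invented, and the exact calls vary by NServiceBus transport and persistence version; the point is that swapping RabbitMQ for Azure Service Bus largely means changing these few lines while handlers stay untouched, and that EnableOutbox turns on the outbox without distributed transactions.

```csharp
using System.Threading.Tasks;
using NServiceBus;

class Program
{
    static async Task Main()
    {
        var endpointConfiguration = new EndpointConfiguration("CMS.Import");

        // RabbitMQ on-prem today; the cloud-hosted endpoints would substitute
        // an Azure Service Bus transport here instead.
        var transport = endpointConfiguration.UseTransport<RabbitMQTransport>();
        transport.ConnectionString("host=localhost");
        transport.UseConventionalRoutingTopology();

        endpointConfiguration.UsePersistence<RavenDBPersistence>();

        // Outbox: stores outgoing messages with the business data, so
        // processing is effectively exactly-once even without MSDTC.
        endpointConfiguration.EnableOutbox();

        var endpoint = await Endpoint.Start(endpointConfiguration);
        await endpoint.Stop();
    }
}
```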
24:25 Mauro Servienti
Yeah. Interesting.
24:27 Neel Shah
So, Mauro, you also asked me earlier a question about finding service boundaries.
24:32 Mauro Servienti
Oh, yes. You're right.
24:35 Neel Shah
It's an interesting one that a lot of folks want to hear about, so I'll take a few minutes to at least discuss what we did. Udi's approach is really helpful. Again, like Udi says, there is no one right way to find service boundaries, but there are rules of thumb, and this one really helped us: you take an entity and you slice it into properties that will participate in a transaction together and properties that will never participate in a transaction together. So you end up with properties that are more cohesive with each other and loosely coupled from the others. That's a good rule of thumb to follow for someone like me who comes from a domain-driven design background.
25:27 Neel Shah
I also pay attention to language. For example, in the context of questions, an item has exhibits and response cells, whereas when it comes to scoring, an item has measurement opportunities, keys, and assertions. That's clearly separate language being used, which is indicative of a separate bounded context, or a separate service in that sense. As for other techniques we used: obviously, our system had grown to hundreds of relational tables in the database, so we also looked at the tables that had more connections between them, as they're likely to be more cohesive. And I think what Udi says about sharing nothing, that you can only share IDs between services, is critical to ending up with extremely crisp service boundaries. For example, with the data duplication issue I was referring to earlier, following this specific rule would mean that one service would be forced to own the entire feature of adaptive routing, including the schema and the logic. All of these were very helpful when we were trying to find service boundaries.
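To illustrate the slicing heuristic Neel describes, here is a hypothetical picture of the same "item" living in several services, each owning only the properties that change together in its own transactions, with nothing shared but the ID. Names are taken from the language used in the talk; the shapes are invented.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of "share nothing but IDs" across service boundaries.
namespace Questions
{
    class Item
    {
        public Guid ItemId;            // the only thing shared across services
        public string Stem;
        public List<string> Exhibits;  // content-authoring concerns
    }
}

namespace Scoring
{
    class Item
    {
        public Guid ItemId;
        public List<string> MeasurementOpportunities;  // scoring concerns
        public List<string> Keys;
    }
}

namespace Inventory
{
    class Item
    {
        public Guid ItemId;
        public string State;           // e.g. initial, pretest, operational
    }
}
```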
26:58 Mauro Servienti
Yeah. We'll get back to the data duplication thing and splitting entities, looking at the single properties and grouping things based on "what changes together stays together," in a second, because it makes me think of an interesting question. But before moving to that, let's talk about the technology once again: what particular problems does NServiceBus solve for you?
27:28 Neel Shah
Okay. So, I'll share my screen again. We basically use NServiceBus for message handling, as process managers, as well as to organize business processes. I don't think you really need to see an example of a message handler, but we can discuss a process manager a little bit. I like how the CQRS Journey explains the pattern. The use case here in this diagram is a customer who is trying to reserve seats for a conference. In this "before" implementation, you can see that multiple aggregates need to interact in order to complete the transaction. The customer places the order, and the order aggregate sends a command to make a reservation, and eventually the order aggregate sends another command to make the payment. So you can see that it is doing a little bit too much.
28:26 Neel Shah
When you have something like a process manager in the middle, the order aggregate no longer needs to know what the next step in the process is. The process manager receives the order created event, then it sends a command to make a reservation, and then it sends another command to eventually make a payment. And since we've been going through examples in this discussion, let me give you an example of a process manager that we've implemented. It's a very straightforward one if you think of it: we are batch importing items from a third party. The way we implement it here is we import one item at a time, and when that item says it's complete, we import the next item. However, when the item import command is sent, it is actually storing the data in multiple services.
29:34 Neel Shah
So multiple services have to confirm that the data is stored before you receive the import completed message. With this pattern, you have more granular control over the import of each item, and it also helps to engage domain experts in case you have to discuss compensating actions. In our case, if there is a file that has, let's say, 100 items to be imported and 99 of them worked fine, what do we do with the last item? You don't want to make it an all-or-nothing transaction; you just want to decide what to do with the one item that failed. The compensating action we decided on was to put the item in an inactive status from an inventory service perspective. Once that happens, it's really not visible to the rest of the system.
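A simplified sketch of such an import process manager as an NServiceBus saga follows. Message names, properties, and the compensating command are invented for illustration; the shape matches what Neel describes: import one item at a time, move on only when the previous one reports completion, and compensate for a failed item instead of rolling back the batch.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using NServiceBus;

public class ItemBatchImportPolicy :
    Saga<ItemBatchImportPolicy.ImportState>,
    IAmStartedByMessages<StartBatchImport>,
    IHandleMessages<ItemImportCompleted>,
    IHandleMessages<ItemImportFailed>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<ImportState> mapper)
    {
        mapper.ConfigureMapping<StartBatchImport>(m => m.BatchId).ToSaga(s => s.BatchId);
        mapper.ConfigureMapping<ItemImportCompleted>(m => m.BatchId).ToSaga(s => s.BatchId);
        mapper.ConfigureMapping<ItemImportFailed>(m => m.BatchId).ToSaga(s => s.BatchId);
    }

    public Task Handle(StartBatchImport message, IMessageHandlerContext context)
    {
        Data.PendingItemIds = new List<string>(message.ItemIds);
        return ImportNextOrComplete(context);
    }

    public Task Handle(ItemImportCompleted message, IMessageHandlerContext context) =>
        ImportNextOrComplete(context);

    public async Task Handle(ItemImportFailed message, IMessageHandlerContext context)
    {
        // Compensating action: deactivate the one failed item; the batch goes on.
        await context.Send(new DeactivateItem { ItemId = message.ItemId });
        await ImportNextOrComplete(context);
    }

    async Task ImportNextOrComplete(IMessageHandlerContext context)
    {
        if (Data.PendingItemIds.Count == 0)
        {
            MarkAsComplete();
            return;
        }
        var next = Data.PendingItemIds[0];
        Data.PendingItemIds.RemoveAt(0);
        await context.Send(new ImportItem { BatchId = Data.BatchId, ItemId = next });
    }

    public class ImportState : ContainSagaData
    {
        public string BatchId { get; set; }
        public List<string> PendingItemIds { get; set; }
    }
}

// Invented message contracts for the sketch.
public class StartBatchImport : ICommand { public string BatchId { get; set; } public List<string> ItemIds { get; set; } }
public class ImportItem : ICommand { public string BatchId { get; set; } public string ItemId { get; set; } }
public class ItemImportCompleted : IMessage { public string BatchId { get; set; } public string ItemId { get; set; } }
public class ItemImportFailed : IMessage { public string BatchId { get; set; } public string ItemId { get; set; } }
public class DeactivateItem : ICommand { public string ItemId { get; set; } }
```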
30:34 Neel Shah
So the process manager treats the business process as a first-class citizen, and in that way it's really helpful. We've also used sagas to implement what Udi calls the domain model. So unlike the process manager, which manages process state, here you are actually doing business logic. For a low-stakes exam, we do percent-correct scoring, and you can see that the actual scoring happens somewhere else; this is for that score order. Here you are sending the candidate how many answers they got correct, using the percent-correct scoring policy. This has its own set of advantages. I think you'll see it very clearly when I show you the next slide, where we have a service in which all the domain logic is organized as sagas. So our scoring service has all of its domain logic as sagas. What you also see in this case is that there aren't too many architectural layers, because NServiceBus takes care of persistence and retrieval of sagas, and it also provides the consistency and concurrency boundary that an aggregate is really used for. So we've used NServiceBus extensively, from basic message handling, command handling, or event handling, to process managers, to using it as the domain model.
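As a sketch of the "saga as domain model" idea, a percent-correct scoring policy could look like the following. Message names and properties are invented; the notable part is ReplyToOriginator, the NServiceBus mechanism Neel mentioned for replying to whoever started the saga once the business logic completes.

```csharp
using System.Threading.Tasks;
using NServiceBus;

public class PercentCorrectScoringPolicy :
    Saga<PercentCorrectScoringPolicy.State>,
    IAmStartedByMessages<ScoreExam>,
    IHandleMessages<ResponseScored>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<State> mapper)
    {
        mapper.ConfigureMapping<ScoreExam>(m => m.ExamId).ToSaga(s => s.ExamId);
        mapper.ConfigureMapping<ResponseScored>(m => m.ExamId).ToSaga(s => s.ExamId);
    }

    public Task Handle(ScoreExam message, IMessageHandlerContext context)
    {
        Data.ExpectedResponses = message.ResponseCount;
        return Task.CompletedTask;
    }

    public async Task Handle(ResponseScored message, IMessageHandlerContext context)
    {
        // Domain logic lives inside the saga: accumulate results as they arrive.
        Data.Scored++;
        if (message.IsCorrect) Data.Correct++;

        if (Data.Scored == Data.ExpectedResponses)
        {
            var percent = 100m * Data.Correct / Data.ExpectedResponses;
            await ReplyToOriginator(context, new ExamScored
            {
                ExamId = Data.ExamId,
                PercentCorrect = percent
            });
            MarkAsComplete();
        }
    }

    public class State : ContainSagaData
    {
        public string ExamId { get; set; }
        public int ExpectedResponses { get; set; }
        public int Scored { get; set; }
        public int Correct { get; set; }
    }
}

// Invented message contracts for the sketch.
public class ScoreExam : ICommand { public string ExamId { get; set; } public int ResponseCount { get; set; } }
public class ResponseScored : IMessage { public string ExamId { get; set; } public bool IsCorrect { get; set; } }
public class ExamScored : IMessage { public string ExamId { get; set; } public decimal PercentCorrect { get; set; } }
```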
32:15 Mauro Servienti
Yeah, that's very interesting. And I really like the sagas-as-aggregates pattern a lot, because it's a way to encapsulate and protect the domain model from outside interaction: essentially, the only way to get to that aggregate is by using commands or events, messages in general. In that case, it's a sort of very strict hexagonal-architecture kind of implementation, where the ports and adapters are just messages, and the core domain is protected inside the saga. So that's very nice. Let's go back for a second to the data problem. You said that, obviously, when finding service boundaries you end up splitting what is one entity from the user's mental-model perspective, let's say a product, looking at single attributes or properties on that entity. And then you realize that, "Oh, but this one belongs to service A and this other one belongs to service B." So now the product is split into two different services. But obviously I, as a user, don't want to see price and description on two different pages, or my name and the score of the exam on another page. I just want to view my exam results. So I guess that's a technical challenge. How do you face that and solve the problem?
33:45 Neel Shah
So I think it depends. Generally speaking, you will assign one service that creates an entity. Let's say, for example, in our domain, in the CMS system, the authors want to create an item, and the first thing they want to do is start adding content to it. So our question service just raises an event saying that a question item was created. What happens is that the other services are listening for that message, and the rubric service will say, "Okay, I know that an item was created, I'm going to create a shell of an item on my side, and over here we will eventually have measurement opportunities and keys and assertions." The same happens on the inventory service side.
34:43 Neel Shah
It says, "Okay, this item is an initial state at this point..." And so on. So basically, the item gets created one place, and then other services subscribe to the event, do the work, and then obviously when you have to show everything on the same page, you have to use composite UI techniques. So they're, again, extremely interesting, and they're actually not very difficult to implement. So if you want I can show you an example of a composite UI, but in this case it's not a single entity split into multiple, but it is still data that is coming from multiple services. So let me go to that part of the slides. So this is an actual exam. This is a candidate who has taken a break. They attempted the first set of questions, and now they're on a break.
35:46 Neel Shah
On this break page, you see data that is coming from two different services. The testlet compass tells the candidate that you've finished the first set of questions and this many are remaining, whereas this is just instructional content that candidates need to see when they're on a break page. So this comes from the assessments service, whereas this comes from the delivery service. I've laid the two sections side by side: on the left-hand side, just pay attention to the namespace, it's assessments, and on the right-hand side it's delivery. These are actually NuGet packages, or components, that were built in two separate services and deployed to the same system, and they're both intercepting the break controller and the GET method in the break controller. So delivery is doing that, and assessments is also doing that: it's intercepting break and the GET method. And in the view model that goes down to the client, the assessments service puts the testlet content information, and the delivery service puts the break content information in there.
37:03 Neel Shah
And we do the reverse operation on the client. If you look at the client, they are both intercepting the response from the same URL. You'll see the same happening here: they're both intercepting the break GET, and when the response comes back, the assessments service takes the testlet content from the view model, and the delivery service takes the break content from the view model. And that's how we make composite UI work.
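A minimal sketch of the server-side half of this idea follows. The interface and names are invented (the production framework Neel mentions is a hardened version of composition work Mauro has published on GitHub): each service ships a handler that matches the same route and contributes only its own keys to a shared view model, and neither handler knows the other exists. The composing host runs every matching handler and serializes the merged dictionary; the client does the mirror-image split.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical composition contract: handlers intercept a route and append
// their slice of data to a shared view model.
public interface ICompositionHandler
{
    bool Matches(string httpMethod, string route);
    Task Handle(IDictionary<string, object> viewModel);
}

// Shipped by the assessments service: testlet progress for the break page.
class BreakTestletProgressHandler : ICompositionHandler
{
    public bool Matches(string method, string route) => method == "GET" && route == "/break";
    public Task Handle(IDictionary<string, object> viewModel)
    {
        viewModel["testletsCompleted"] = 1;
        viewModel["testletsRemaining"] = 4;
        return Task.CompletedTask;
    }
}

// Shipped by the delivery service: instructional content for the same page.
class BreakContentHandler : ICompositionHandler
{
    public bool Matches(string method, string route) => method == "GET" && route == "/break";
    public Task Handle(IDictionary<string, object> viewModel)
    {
        viewModel["breakInstructions"] = "You may leave your seat. Do not discuss exam content.";
        return Task.CompletedTask;
    }
}
```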
37:40 Mauro Servienti
Yeah. That's very cool. So let me try to see if I understood the approach correctly. Essentially, the first part of the code that we saw, the C# one, is a sort of reverse proxy. It intercepts an HTTP request, essentially, and then it dispatches that request to those two handlers that come from different services. Those two handlers, given that they logically belong to two different services, can access different data in a single request and compose that data to be returned to the client. And on the client, the opposite process happens: two different components, in this case in Angular, extract their own set of data from the HTTP response, ignoring everything else because they just don't know what it is, and then do whatever they need to do with their own part of the results.
38:36 Neel Shah
Yeah. And I think, again, we have to credit you, Mauro, for the initial framework. I think you have it in your GitHub repository also. We took that and then productionized it to fit our needs. But yeah, it works really well. It is not as complicated as you would think, and it just works for us really well. I wanted to show you something else interesting on the client side. We have a messaging service on the client side, and the handlers are very NServiceBus-style. Let me just show it to you over here. This is TypeScript, and it is intercepting the map-and-store-item command, and it even has the isCorrelatedBy method, and then it does whatever it needs to do in terms of handling it. So we had to do a little bit of work on the UI composition side, but it is really worth the effort in terms of eliminating a lot of the unnecessary duplication of data across systems.
39:58 Mauro Servienti
Multiple services. Yeah. Let me ask the final question. If I recall correctly, you are also using Azure Functions on Azure. Is there any specific use case you're using Azure Functions for, or are you using Azure Functions in general when deploying to the cloud?
40:22 Neel Shah
So I think Udi talks about an autonomous component as independently deployable, and our ACs are autonomous components. For a system like the driver, which runs in a VM in a test center, they are NuGet packages that we deploy. Whereas now that we are working with a cloud-native approach, each aggregate can be deployed as a function. That's the approach that we are currently using: we are using functions as the deployment unit.
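A rough sketch of "one aggregate per function" might look like the following. Function, queue, and type names are invented; in the real system an NServiceBus endpoint would be hosted inside the function and dispatch to the aggregate's handlers, but the sketch is kept self-contained.

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

// Hypothetical sketch: all commands for one aggregate arrive on its own
// Azure Service Bus queue, and this function is that aggregate's
// independently deployable unit.
public static class ScoreOrderFunction
{
    [FunctionName("ScoreOrder")]
    public static void Run(
        [ServiceBusTrigger("score-order", Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        // Here the message would be handed to the aggregate's message
        // handlers; we just log it to keep the sketch minimal.
        log.LogInformation("ScoreOrder received: {Message}", message);
    }
}
```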
41:06 Mauro Servienti
Oh, nice. That's very interesting, and that's one of the use cases where I see the serverless world being very handy. Okay. So we have a question from one of the attendees, Stephan, who is asking: in the case you mentioned with the import of items, where multiple services have to confirm the item is imported, how does the process manager determine that all parts have succeeded? For example, how does it know all services have actually done their job, without introducing coupling?
41:37 Neel Shah
It's about the level of abstraction that the import is sitting at. It's an ITOps kind of function: I want to pump data into the system. So when it is doing that, multiple services are intercepting that command, and when they're done, they reply back to wherever the request came from. So all ITOps is doing is saying, "Well, have you services finished what you were supposed to do?" In that case, that's the role of ITOps. It's not doing something inappropriate when you're talking about sending a composed document through the system; it's verifying that the composed document did end up in the system.
42:51 Mauro Servienti
Yeah. So it's like saying the import process manager loads a blob of data, which it knows nothing about; it just knows that it's a set of rows, and the schema of the rows is unknown to the import process manager. Then it essentially publishes a sort of event saying, "Hey folks, import data. Import row ID 1, 2, 3, 4, 5," whatever. And then it simply waits for replies: imported row one, imported row two. It doesn't really need to know what job was done by the importers, or how many steps importing that specific row required, because it just needs to know that someone, later in time, will reply, "Imported." And that's the thing. Which is interesting, because it also opens up the possibility of a pipeline of importers, or let's say translators. So you can have a mechanism where you dynamically load implementations of importers from different services in the ITOps thing, and simply blindly invoke them and say, "Okay, do your own job, and when it's finished, I'm done."
44:06 Neel Shah
Yes, exactly.
44:08 Mauro Servienti
Okay. And Stephan comments, "When you said ITOps, I understood." Okay, cool, we answered the question. So I think that, if there are no more questions, we're done. I'd like to thank you very much for your time. It was an incredible pleasure to have you with us today. Back to you, Hadi.
44:33 Neel Shah
Likewise, thank you so much. I appreciate the opportunity to discuss the work that we've done for the last many years. It takes a long time to see something like this grow.
44:48 Hadi Eskandari
Thanks Neel, and thanks Mauro. That was very interesting. And thanks everyone for being here with us today. The recorded webinar will be shared with you and with anyone who could not attend this morning or at this time. Thanks again for joining us, and we'll see you at the next one.

About Mauro Servienti

Mauro Servienti is a Solution Architect at Particular Software. He helps developers build better .NET systems using Service-Oriented Architecture (SOA) and asynchronous messaging.
