Fallacy #3: Bandwidth is infinite
This post is part of the Fallacies of Distributed Computing series.
Everyone old enough to remember the sound of connecting to the Internet with a dial-up modem, or of AOL announcing that “You’ve got mail,” is acutely aware that there is an upper limit to how fast something can be downloaded, and it never seems to be as fast as we would like.
The availability of bandwidth increases at a staggering rate, but we’re never happy. We now live in an age when it’s possible to stream high-definition TV, and yet we are not satisfied. We become annoyed when we run a speed test on our broadband provider only to find that, on a good day, we are getting maybe half of the rated download speed we are paying for, and the upload speed is likely much worse. We amaze ourselves with our ability to have a real-time video conversation with someone on the other side of the world, but then react with extreme frustration when the connection quality starts to dip and we must ask “are you there?” to a frozen face.
Today, we have DSL and cable modems; tomorrow, fiber may be widespread. But although bandwidth keeps growing, the amount of data and our need for it grow even faster. We’ll never be satisfied.
🔗Problem of scale
The real problem with bandwidth is not one of absolute speed but one of scale. It’s not a problem if I want to download a really big movie. The problem comes when everybody else wants to download really big movies too.
When lots of data is transferred in a given period of time, network congestion can easily occur and affect the absolute speed at which we’re able to download. Even within a LAN, network congestion and its effect on bandwidth can have a noticeable impact on the applications we develop. This is especially true because we tend not to notice these problems during development, when there is very little load and congestion is low.
This problem commonly surfaces through the use of O/RM libraries, some of which have a habit of accidentally fetching too much data. That data must then be transferred across the network, even if only a fraction of it is used. Some tech stacks exacerbate the problem. In early ASP.NET Web Forms applications, for example, the default method for paging a DataGrid component was to load all of the data and then perform the paging in memory, which could mean loading millions of rows just to display ten.
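To see the difference concretely, here is a minimal sketch of both approaches using Entity Framework Core; the `Order` entity and `AppDbContext` below are hypothetical stand-ins for illustration, not types from the original DataGrid example:

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.EntityFrameworkCore;

// Hypothetical entity and context, for illustration only.
public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public class AppDbContext : DbContext
{
    public DbSet<Order> Orders { get; set; }
}

public static class OrderPaging
{
    // Anti-pattern: materialize the whole table, then page in memory.
    // Every row crosses the network just to display one page.
    public static List<Order> PageInMemory(AppDbContext db, int page, int pageSize) =>
        db.Orders
            .ToList()               // pulls ALL rows across the wire
            .Skip(page * pageSize)
            .Take(pageSize)
            .ToList();

    // Better: let the database do the paging, so only one page of rows
    // is ever transferred.
    public static List<Order> PageInDatabase(AppDbContext db, int page, int pageSize) =>
        db.Orders
            .OrderBy(o => o.Id)     // paging requires a stable ordering
            .Skip(page * pageSize)  // translated to OFFSET/FETCH in SQL
            .Take(pageSize)
            .ToList();
}
```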
Additionally, we have to be mindful that bandwidth isn’t only a concern at the network level. Disks, including those used by our databases, have bandwidth limitations as well. A complex join query may run quickly in development with a limited amount of test data. Run the same join query in a production scenario with millions of rows per table at the same time as dozens of other users, and you’ve got yourself a problem.
🔗Solutions
To combat the 3rd fallacy of distributed computing, we can use a few strategies.
🔗“Goldilocks” sizing
The first strategy is to realize that we can’t eagerly fetch all the data. We have to impose limits and, to an extent, only download what we need. The big challenge is that we must strike a balance. To prevent running afoul of bandwidth limitations, we cannot download too much. But to prevent running afoul of latency (the 2nd fallacy), we must also be careful not to download too little!
Like Goldilocks (who probably would have made a wonderful systems architect but has a thing or two to learn about trespassing), we must carefully analyze the use cases in our system and make sure that the amount of data we download is not too big or too small but just right.
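To make that balance concrete, here is a minimal sketch of the three outcomes, reusing the hypothetical `AppDbContext` from above and assuming it also exposes `Customer` and `OrderLine` entities with the usual navigation properties:

```csharp
using System.Linq;
using Microsoft.EntityFrameworkCore;

public static class GoldilocksQueries
{
    public static void Compare(AppDbContext db)
    {
        // Too little per request: an extra round trip for every order,
        // running afoul of latency (the 2nd fallacy).
        foreach (var order in db.Orders.ToList())
        {
            var customerName = db.Customers.Find(order.CustomerId)?.Name; // N+1 queries
        }

        // Too much at once: the entire object graph crosses the wire,
        // running afoul of bandwidth (the 3rd fallacy).
        var everything = db.Orders
            .Include(o => o.Customer)
            .Include(o => o.Lines)
            .ToList();

        // Just right: a single round trip carrying only the columns
        // this particular screen needs.
        var rows = db.Orders
            .Select(o => new { o.Id, CustomerName = o.Customer.Name, o.Total })
            .ToList();
    }
}
```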
It might even be necessary to have more than one domain model to resolve the forces of bandwidth and latency. One set of objects can be tightly coupled to the structure of individual database tables to be used for updating data, while another set of classes deals with read concerns only, combining data from different tables and transforming it into exactly what is needed for that use case.
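That split might look something like this, continuing with the same hypothetical types: one class tied to the table for writes, and a separate read-side class shaped for a single screen:

```csharp
// Write side: mirrors the Orders table one-to-one; used only when updating.
public class Order
{
    public int Id { get; set; }
    public int CustomerId { get; set; }
    public decimal Total { get; set; }
}

// Read side: shaped for one specific use case, combining data from
// several tables into exactly the fields that screen needs, and no more.
public class OrderSummary
{
    public int OrderId { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}
```

The read model is then populated with a projection like the one in the previous sketch, so only those columns ever leave the database.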
🔗Sidestepping
Another thing to keep in mind is that bandwidth limitations and congestion in general will slow down delivery, which we can counteract by moving time-critical data to separate networks.
One strategy is to make use of the claim check pattern, in which large payloads are segregated into different channels and then referenced by URI or another identifier. Clients that are interested in the data can then choose to incur the download penalty for the large payload, but other parties can ignore it entirely.
This is especially useful in messaging systems, which work best when messages are small and can be transferred quickly. A good service bus technology will include an implementation of the claim check pattern to make dealing with large payloads more convenient.
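As a rough sketch of the claim check pattern itself (the `IBlobStore` and `IMessageBus` interfaces below are hypothetical abstractions, not any particular product's API):

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

// The message stays small: it carries a claim check (a URI) rather than
// the large payload itself.
public class VideoUploaded
{
    public Guid VideoId { get; set; }
    public Uri PayloadUri { get; set; }   // the claim check
}

// Hypothetical stand-ins for your storage and service bus technology.
public interface IBlobStore
{
    Task<Uri> UploadAsync(string key, Stream content);
    Task<Stream> DownloadAsync(Uri location);
}

public interface IMessageBus
{
    Task PublishAsync(object message);
}

public class VideoPublisher
{
    readonly IBlobStore blobStore;
    readonly IMessageBus bus;

    public VideoPublisher(IBlobStore blobStore, IMessageBus bus)
    {
        this.blobStore = blobStore;
        this.bus = bus;
    }

    public async Task PublishAsync(Guid videoId, Stream largePayload)
    {
        // 1. Move the heavy payload out-of-band, into a separate channel...
        var uri = await blobStore.UploadAsync($"videos/{videoId}", largePayload);

        // 2. ...and publish only a small message that references it.
        await bus.PublishAsync(new VideoUploaded { VideoId = videoId, PayloadUri = uri });
    }
}
```

Subscribers that need the payload call `DownloadAsync` with the URI and pay the transfer cost; every other party handles the small message and ignores the claim check entirely.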
🔗Summary
Whether we’re designing distributed systems or just watching Netflix, the effect of bandwidth is so pervasive that it’s almost like a second currency. No matter how much we have, we’ll always want more, and because it’s such a valuable commodity, we need to be careful with how we use it.
If we have time-critical data, we may be able to move it to a separate network or cleave off the weight of a large payload using the claim check pattern. Otherwise, we’re left with hard choices: balancing the limitation of bandwidth against the limitations imposed by latency. This underscores the need for experience in analyzing these use cases.
We’ve come a long way since dial-up modems, but we still perceive bandwidth as limited. And even though we know it’s limited, we continue to act (when coding or architecting) as if it’s infinite. Our need for bandwidth will never be satisfied, but we need to stop acting as though there will ever be enough.