Evolving code and adding new technologies - Part 1

This is the start of a series that begins with some simple, old-fashioned code and evolves it into something more modern. I will not use buzzwords like microservices or containers; those are beyond the scope of this article. What I will try to show is how to approach past and future design decisions step by step, by way of refactoring and introducing new technologies. TL;DR: there is none, read it all.


The idea for this series is taken from a real-life "war story". I am, however, not the least bit inclined to disclose the source, nor does any part of the code reflect the full, actual system from which the idea for this post stemmed.

Parts 2-5 are here: Part 2, Part 3, Part 4 and Part 5

Part 1 - Old School

The old school code is pretty straightforward and resides on the master branch of this GitHub repository.

Initial architecture

The initial architecture of the system looks like this:

Monolithic application diagram

Consider an application with two major threads. One thread reads data from some input, does some pre-processing of the data, and then raises an event that is handled on the other thread (incidentally a UI thread) to consume the data. It does so on a message-by-message basis. Upon receiving each message, the handling thread does some more processing in the context of the event, synchronously, meaning the thread that raised the event is stuck until the handling thread is done.
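The actual code in the repository is C#, where this hand-off is typically a synchronous `Control.Invoke` on the UI thread. As a rough, hypothetical sketch of the same pattern in Java (the class and method names here are mine, not from the repository), a single-threaded executor stands in for the UI thread's message loop, and `submit(...).get()` plays the role of the blocking cross-thread call:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class SyncEventDemo {
    // Runs the synchronous hand-off for `count` messages and returns how
    // many messages the "UI" thread ended up processing.
    static int runPipeline(int count) throws Exception {
        // A single-threaded executor stands in for the UI thread's message loop.
        ExecutorService uiThread = Executors.newSingleThreadExecutor();
        AtomicInteger processed = new AtomicInteger();

        // The reader thread: read a message, pre-process it, then hand it
        // over and BLOCK until the other thread has finished with it.
        Thread reader = new Thread(() -> {
            for (int i = 0; i < count; i++) {
                String message = "msg-" + i;                 // simulated input
                String preProcessed = message.toUpperCase(); // pre-processing step
                try {
                    // submit(...).get() blocks the reader until the handler
                    // has run on the other thread -- the synchronous event
                    // call described above (analogous to Control.Invoke).
                    uiThread.submit(() -> processed.incrementAndGet()).get();
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            }
        });
        reader.start();
        reader.join();
        uiThread.shutdown();
        return processed.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("processed = " + runPipeline(5)); // processed = 5
    }
}
```

The important property to notice is that the reader makes no progress while the handler runs; the two threads take turns rather than working in parallel.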

If the volume of messages is not too big (say 100 per minute), this architecture works pretty well. Things get messy when the volume of inputs increases, when the processing on the called thread becomes slower and more cumbersome, or when more calling threads are added to handle different inputs yet ride on the same event.
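The break-even point is easy to estimate: because the hand-off is synchronous, the reader can sustain at most one message per handler execution, so the pipeline keeps up only while arrival rate × handler time stays below 1. A tiny illustrative check (the handler time of 50 ms is an assumed number, not taken from the real system):

```java
public class ThroughputCheck {
    // A synchronous pipeline keeps up only if each message is fully
    // handled before the next one arrives, i.e. utilization < 1.
    static boolean keepsUp(double messagesPerSecond, double handlerSeconds) {
        return messagesPerSecond * handlerSeconds < 1.0;
    }

    public static void main(String[] args) {
        // ~100 messages per minute with an assumed 50 ms handler:
        System.out.println(keepsUp(100.0 / 60.0, 0.05)); // true  -> keeps up
        // 100 messages per second with the same handler:
        System.out.println(keepsUp(100.0, 0.05));        // false -> backlog grows
    }
}
```

Once utilization crosses 1, messages arrive faster than they can be handled and the backlog grows without bound, regardless of how much buffering sits in front of the event.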

When volumes grow, this architecture is no longer good enough. The diagram will stay the same, but something inside the code will have to change: the synchronous, event-based paradigm. Some would argue that instead of a synchronous Invoke, the Asynchronous Programming Model (APM) should be used. I will, however, skip that step in order to advance faster to the preferred solution.

So until the next part, I leave it to you to think about what the next step could be to handle larger volumes of data (without APM or TPL for now).

