Evolving code and adding new technologies - Part 5

The implementation of the proposed architecture is at hand. We dive into the change and see how Redis (or any other message queue or pub/sub broker) makes our lives easier. TL;DR: time to make the change. Read it all!

For a recap, here are Part 1, Part 2, Part 3 and Part 4.

Components of our example


Our monolithic example will be divided into the following components:
  1. Producer
  2. Redis (3rd party broker)
  3. Consuming Processor
  4. Thin client
After we are done with the change, the server will look like this:


On to the code

The Dataflow library from the previous part will be removed from the consumer. Now that the processes are separated, the consumer is super fast up until the actual processing, which still takes around 1 second.

The different components will work directly with Redis, which will in fact be the backbone of our pipeline.

Ten producers send messages into 10 Redis lists. Each list has a key named {Producer_i}, where i is the number of the producer. The consumer knows there are 10 producers from a configuration file; in an actual system this would be communicated to the consumer via an API (be it REST, WCF, or Redis Pub/Sub).
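
As a rough sketch of that naming scheme (the configuration key and the helper below are hypothetical, assuming StackExchange.Redis and Microsoft.Extensions.Configuration), the consumer could derive the list keys it polls like so:

using System.Linq;
using Microsoft.Extensions.Configuration;
using StackExchange.Redis;

public static class ProducerKeys
{
    // Builds the Redis list key for producer number i, e.g. "{Producer_3}".
    public static RedisKey ForProducer(int i) => $"{{Producer_{i}}}";

    // Reads the producer count from configuration and returns every list key to poll.
    public static RedisKey[] FromConfiguration(IConfiguration configuration)
    {
        // Hypothetical setting, e.g. appsettings.json: { "ProducerCount": 10 }
        int producerCount = configuration.GetValue<int>("ProducerCount");
        return Enumerable.Range(1, producerCount).Select(ForProducer).ToArray();
    }
}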

And so the producer will now look like this:
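
A minimal sketch of such a producer, assuming the StackExchange.Redis client and System.Text.Json for serialization (the Message type and the exact member names are illustrative, not the post's actual code):

using System;
using System.Text.Json;
using System.Threading.Tasks;
using StackExchange.Redis;

public class Producer
{
    private readonly IDatabase _db;
    private readonly RedisKey _listKey;

    public Producer(ConnectionMultiplexer connection, int producerNumber)
    {
        _db = connection.GetDatabase();
        // Each producer owns a single list keyed "{Producer_i}".
        _listKey = $"{{Producer_{producerNumber}}}";
    }

    // Pushes a chunk of messages as one batch: the commands are pipelined to the
    // server together, but without the isolation of a transaction.
    public async Task PushAsync(Message[] messages)
    {
        IBatch batch = _db.CreateBatch();
        var pushes = new Task[messages.Length];
        for (int i = 0; i < messages.Length; i++)
        {
            // RPUSH keeps insertion order; the consumer reads from the head of the list.
            pushes[i] = batch.ListRightPushAsync(_listKey, JsonSerializer.Serialize(messages[i]));
        }
        batch.Execute();            // flush all queued commands to Redis at once
        await Task.WhenAll(pushes); // then await the individual push results
    }
}

// Illustrative payload; the real message shape depends on the pipeline.
public record Message(long Id, DateTime CreatedAt, string Payload);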


The messages are pushed as a batch, not as a transaction. The difference is that a batch is sent to the server at once, but other commands may slip in between its operations; in a transaction, however, isolation is guaranteed, at the expense of speed of course. Since each producer sends data to its own queue, a batch is good enough.
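
For comparison, a transactional variant of the same push (sketched as an extra method on the Producer above, and not what is actually used here) would swap IBatch for ITransaction:

// Transactional variant: the same pushes wrapped in MULTI/EXEC, so no other
// client's command can run between them. Isolation comes at the cost of speed.
public async Task<bool> PushTransactionalAsync(Message[] messages)
{
    ITransaction transaction = _db.CreateTransaction();
    foreach (Message message in messages)
    {
        _ = transaction.ListRightPushAsync(_listKey, JsonSerializer.Serialize(message));
    }
    return await transaction.ExecuteAsync(); // false if the transaction was aborted
}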

The consumer will now look like this:
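
Again as a hedged sketch rather than the actual listing: a consumer along these lines, using StackExchange.Redis (LRANGE/LTRIM to take up to 100 items per list, System.Text.Json to deserialize, and Pub/Sub to push the grouped result to the client), could look roughly like this. The channel name and the grouping key are assumptions:

using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Text.Json;
using System.Threading.Tasks;
using StackExchange.Redis;

public class Consumer
{
    private readonly IDatabase _db;
    private readonly ISubscriber _publisher;
    private readonly RedisKey[] _producerKeys;

    // Message counts grouped by the millisecond in which they entered the system.
    private readonly ConcurrentDictionary<long, int> _countsPerMillisecond = new();

    public Consumer(ConnectionMultiplexer connection, int producerCount)
    {
        _db = connection.GetDatabase();
        _publisher = connection.GetSubscriber();
        _producerKeys = Enumerable.Range(1, producerCount)
                                  .Select(i => (RedisKey)$"{{Producer_{i}}}")
                                  .ToArray();
    }

    // One polling cycle: read up to 100 items from each producer list in turn,
    // deserialize them, group by millisecond and publish the groups to the client.
    public async Task RunCycleAsync()
    {
        foreach (RedisKey key in _producerKeys)
        {
            RedisValue[] raw = await _db.ListRangeAsync(key, 0, 99); // peek a batch of 100
            if (raw.Length == 0) continue;
            // Drop what was just read; safe here because producers append to the tail.
            await _db.ListTrimAsync(key, raw.Length, -1);

            foreach (RedisValue value in raw)
            {
                // Message is the illustrative record from the producer sketch above.
                var message = JsonSerializer.Deserialize<Message>(value.ToString());
                if (message is null) continue;
                long millisecond = message.CreatedAt.Ticks / TimeSpan.TicksPerMillisecond;
                _countsPerMillisecond.AddOrUpdate(millisecond, 1, (_, count) => count + 1);
            }
        }

        // Hand the grouped counts to the thin client over Pub/Sub (channel name assumed).
        await _publisher.PublishAsync(
            new RedisChannel("grouped-messages", RedisChannel.PatternMode.Literal),
            JsonSerializer.Serialize(_countsPerMillisecond));
    }
}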



Data is read serially from all producer queues in batches of 100 at a time. It is then deserialized (between 0 and 10 milliseconds per value), grouped, and sent to the client. Data is kept in the grouped ConcurrentDictionary for no more than a minute, after which it is marked for deletion and removed on every cycle. To emphasize how fast the system works, milliseconds are used to display the number of messages that entered the system at a given millisecond. The lag ranges from practically non-existent to acceptable.
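
The minute-long retention in the ConcurrentDictionary could be sketched like this, continuing the Consumer sketch above (the exact bookkeeping in the real code may well differ):

// Called once per cycle: entries older than a minute are marked on one pass and
// removed on the next, mirroring the mark-then-delete behaviour described above.
private readonly ConcurrentDictionary<long, byte> _markedForDeletion = new();

private void EvictOldGroups()
{
    long cutoff = (DateTime.UtcNow.Ticks / TimeSpan.TicksPerMillisecond)
                  - (long)TimeSpan.FromMinutes(1).TotalMilliseconds;

    // Delete whatever was marked on the previous cycle.
    foreach (long millisecond in _markedForDeletion.Keys)
    {
        _countsPerMillisecond.TryRemove(millisecond, out _);
        _markedForDeletion.TryRemove(millisecond, out _);
    }

    // Mark everything that is now older than a minute for deletion on the next cycle.
    foreach (long millisecond in _countsPerMillisecond.Keys)
    {
        if (millisecond < cutoff)
        {
            _markedForDeletion.TryAdd(millisecond, 0);
        }
    }
}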

What is next

We could stop the series here, but there is one bit that still bothers me... The client is quite passive and has only one handler to deal with the messages produced by the consumer. How can the client be made less coupled to the pub/sub message handler and more versatile in the messages it displays, or even in the way it receives and reads them? Well, stay tuned... After the holidays, the final chapter of the series will answer just this question!
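
For context, that single client-side handler amounts to something like the sketch below, subscribing to the channel assumed in the consumer sketch above (again illustrative, not the post's actual client):

using System;
using System.Collections.Generic;
using System.Text.Json;
using StackExchange.Redis;

public static class ThinClient
{
    public static void Main()
    {
        var connection = ConnectionMultiplexer.Connect("localhost");
        ISubscriber subscriber = connection.GetSubscriber();

        // The one and only handler: deserialize the grouped counts and display them.
        subscriber.Subscribe(
            new RedisChannel("grouped-messages", RedisChannel.PatternMode.Literal),
            (_, payload) =>
            {
                var counts = JsonSerializer.Deserialize<Dictionary<long, int>>(payload.ToString())
                             ?? new Dictionary<long, int>();
                foreach (var (millisecond, count) in counts)
                {
                    Console.WriteLine($"{count} messages entered the system at millisecond {millisecond}");
                }
            });

        Console.ReadLine(); // keep the process alive while subscribed
    }
}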
