Evolving code and adding new technologies - Part 2
In Part 1 of this series we saw a basic, naive implementation of a monolithic desktop client that draws input from an input producer and displays the data. Now imagine that this is your system. You built and deployed it a while ago, and time went by. On a quiet morning, while you sit at your desk sipping coffee and reading your favorite news outlet, your PO hands you the new requirements for the next version.

TL;DR: it will get messy before it gets sweet.

Parts 3-5 are here: Part 3, Part 4 and Part 5

Version 2

GitHub branch

According to the requirements you now have more than one producer, and they produce data faster. However, your processing of that data just got slower: you have to fetch some older data from the database, do some correlations, and then save the results back to the database. Whatever the exact processing is, it takes anywhere from 600 ms to 1000 ms. Yes, you read that right: up to a full second by the time the data is correlated, stored in the database and displayed.
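To make the new workload concrete, here is a minimal sketch of what such a processing step might look like. This is not the code from the GitHub branch; the class and method names (SlowProcessor, fetchOlderData, correlate, save) are hypothetical, and the two database round trips are simulated with random delays that add up to the 600-1000 ms range described above.

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

/**
 * Hypothetical sketch of the Version 2 processing step:
 * fetch older data, correlate it with the new sample,
 * and persist the result. The database round trips are
 * simulated with sleeps totalling 600-1000 ms.
 */
public class SlowProcessor {

    public String process(String newSample) throws InterruptedException {
        List<String> older = fetchOlderData();            // DB read
        String correlated = correlate(older, newSample);  // CPU work
        save(correlated);                                 // DB write
        return correlated;
    }

    private List<String> fetchOlderData() throws InterruptedException {
        simulateDbLatency();
        return List.of("older-1", "older-2"); // stand-in for real rows
    }

    private String correlate(List<String> older, String sample) {
        return sample + " correlated with " + older.size() + " older records";
    }

    private void save(String result) throws InterruptedException {
        simulateDbLatency();
    }

    // Each of the two DB round trips costs 300-500 ms,
    // so one call to process() takes 600-1000 ms overall.
    private void simulateDbLatency() throws InterruptedException {
        Thread.sleep(ThreadLocalRandom.current().nextLong(300, 501));
    }
}
```

With multiple producers pushing data faster than one second per item, a step like this quickly becomes the bottleneck of the whole client.

Database opti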