Posts

Showing posts from November, 2017

.NET Core - a quick observation

.NET Core is:

A substantial rewrite of .NET in the making
A better .NET
A multi-platform .NET
A way to leverage an amazing language like C# over Linux and IoT (based on Linux)

.NET Core is not:

A one-stop shop - yet
Only for web
Only for cloud development

Evolving code and adding new technologies - Part 4

In this part we will focus more on the process of changing a system's architecture and design than on code. After trying, without success, different approaches that would avoid changing the architecture entirely, there is only one thing left to do: change the architecture entirely. TL;DR Brace yourselves, major changes are coming. And they are good!

For a recap, here are Part 1, Part 2 and Part 3
What this change is not
The system at hand is an on-premises system. It does not run in the cloud. I dare say that it does not even run on the same network as the customer's other services. It is an island on a closed network with no access to the outside world, including the client's kitchen. This goes so far that remote debugging is not possible; you cannot even plug in a USB key after installation (if there is a USB port on the machine at all). I plan to address debugging such systems in a post after the series is over, so stay tuned ;) About the process of changing a system…

Evolving code and adding new technologies - Part 5

The implementation of the proposed architecture is at hand. We dive into the change and see how Redis (or any other message queue or pub/sub broker) makes our life easier. TL;DR Time to make the change. Read it all!

For a recap, here are Part 1, Part 2, Part 3 and Part 4
Components of our example
GitHub branch
Our monolithic example will be divided into three processes:

Producer
Redis (3rd-party broker)
Consuming Processor
Thin client

After we are done with the change, the server will look like this:

On to the code
The Dataflow library from the previous part will be removed in the consumer. Now that the processes are separated, the consumer is super fast up until the actual processing, which takes around 1 second.

The different components will work directly with Redis which will in fact be the backbone of our pipeline.

Ten producers are used to send messages into ten Redis lists. Each list has a key named {Producer_i}, where i is the number of the producer. The consumer is aware that there a…
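The per-producer lists described above can be sketched with the StackExchange.Redis client. This is a minimal illustration, not the repository's code: it assumes a Redis server on localhost, and the message format, class name and exact key naming are my own placeholders around the post's {Producer_i} pattern.

```csharp
using System;
using StackExchange.Redis;

class ProducerSketch
{
    static void Main()
    {
        // Assumes a Redis server reachable on localhost:6379.
        var redis = ConnectionMultiplexer.Connect("localhost:6379");
        IDatabase db = redis.GetDatabase();

        // Ten producers, each pushing into its own list; the post names
        // the keys with the pattern {Producer_i}.
        for (int i = 0; i < 10; i++)
        {
            db.ListRightPush($"Producer_{i}", $"message from producer {i}");
        }

        // The consumer pops from the head of each list in turn.
        for (int i = 0; i < 10; i++)
        {
            RedisValue msg = db.ListLeftPop($"Producer_{i}");
            Console.WriteLine(msg);
        }
    }
}
```

The list doubles as a durable buffer: if the consumer is down, messages simply accumulate in Redis until it comes back.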

Evolving code and adding new technologies - Part 3

In this part of the series we will see how our code, having evolved from a simple event call into a concurrent mess, will first and foremost be whipped into shape and then throttled to produce the best results. TL;DR The light at the end of the tunnel is not oncoming traffic.

For a recap here is Part 1 and Part 2

Before we start, I urge you to go read through the documentation. These are the links that helped me write this post and learn how to use the Dataflow library:
Dataflow (Task Parallel Library)
Walkthrough: Creating a Dataflow pipeline
Walkthrough: Using Dataflow in a Windows Forms application
How to: Write messages and read messages from a Dataflow block
Update on Post and SendAsync: On Post and SendAsync in TPL Dataflow
Let the data flow
GitHub branch

This may be "old" tech for some or all of you, but it was new to me when I decided to pop my head out of the "company man" role and into the consultant role in my career. So let's evolve and enrich that old syste…
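Since the excerpt cuts off before any code, here is a minimal, hypothetical taste of what the linked Dataflow docs cover: a two-block pipeline where BoundedCapacity provides the throttling this part is about. The block contents, names and capacities are illustrative only, not taken from the post.

```csharp
using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

class DataflowSketch
{
    static async Task Main()
    {
        // BoundedCapacity throttles the pipeline: SendAsync awaits
        // whenever a block's input queue is full (backpressure).
        var process = new TransformBlock<int, string>(
            n => $"processed {n}",
            new ExecutionDataflowBlockOptions { BoundedCapacity = 4 });

        var display = new ActionBlock<string>(
            s => Console.WriteLine(s),
            new ExecutionDataflowBlockOptions { BoundedCapacity = 4 });

        process.LinkTo(display,
            new DataflowLinkOptions { PropagateCompletion = true });

        for (int i = 0; i < 10; i++)
            await process.SendAsync(i); // waits instead of queueing unboundedly

        process.Complete();
        await display.Completion;
    }
}
```

Post, by contrast, returns false immediately when a bounded block is full, which is exactly the Post-versus-SendAsync distinction the linked "Update" article digs into.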

Evolving code and adding new technologies - Part 2

In Part 1 of this series we saw a basic, naive implementation of a monolithic desktop client that draws input from some input producer and displays data. Now imagine that this is your system. You built and deployed it a while ago, and time went by. On a quiet morning, while you sit at your desk sipping coffee and reading your favorite news outlet, your PO hands you the new requirements for the next version. TL;DR it will get messy before it gets sweet
Version 2
GitHub branch

According to the requirements, you now have more than one producer, and they produce data faster. However, your processing of the data just got slower: you have to fetch some older data from the DB, do some correlations and then save the data back to the DB. Whatever the processing is, it takes anywhere from 600ms to 1000ms. Yes, you read that right; 1 sec by the time the data is correlated, stored in the database and displayed. Database optimization and more threads are out of the scope of this serie…
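To make the arithmetic above concrete: with roughly one second of processing per item, even two producers emitting one item per second each will leave a single consumer about one item further behind every second. A hypothetical sketch of that backlog, with names and timings that are mine, not the post's:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class BacklogSketch
{
    static void Main()
    {
        var queue = new BlockingCollection<int>();

        // Two producers, each emitting one item per second.
        Task producers = Task.WhenAll(
            Task.Run(() => Produce(queue)),
            Task.Run(() => Produce(queue)));
        producers.ContinueWith(_ => queue.CompleteAdding());

        // One consumer needing ~1 second per item: intake is 2 items/s,
        // throughput is 1 item/s, so the backlog grows until producers stop.
        foreach (int item in queue.GetConsumingEnumerable())
        {
            Thread.Sleep(1000); // stands in for the 600ms-1000ms processing
            Console.WriteLine($"processed {item}, backlog: {queue.Count}");
        }
    }

    static void Produce(BlockingCollection<int> queue)
    {
        for (int i = 0; i < 5; i++)
        {
            queue.Add(i);
            Thread.Sleep(1000);
        }
    }
}
```

Running it shows the backlog count climbing while the producers are alive, which is the "it will get messy" part the post warns about.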

Evolving code and adding new technologies - Part 1

This is the start of a series that begins with some simple, old-fashioned code and evolves it into something more modern. I will not use buzzwords like microservices or containers; those are beyond the scope of this series. What I will try to show is how to approach past and future decisions step by step, by way of refactoring and introducing new technologies. TL;DR There is none, read it all
Disclaimer
The idea for this series is taken from a real-life "war story". I am, however, not the least bit inclined to disclose the source, nor does any part of the code reflect the full, actual system from which the idea for this post stemmed.
Part 1 - Old School
The old-school code is pretty straightforward and resides on the master branch of this GitHub repository.
Initial architecture
The initial architecture of the system looks like this:

Consider an application with two major threads: one thread reads data from some input, does some pre-processing of the data read and th…
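The two-thread shape described above can be sketched, very roughly, as a reader thread raising an event toward a display handler. The event, class and method names here are illustrative placeholders, not the repository's actual code.

```csharp
using System;
using System.Threading;

class NaiveSketch
{
    // The input thread raises an event with pre-processed data...
    public static event Action<string> DataReady;

    static void Main()
    {
        // ...and the subscriber displays it.
        DataReady += data => Console.WriteLine($"displaying: {data}");

        var reader = new Thread(() =>
        {
            for (int i = 0; i < 3; i++)
            {
                // Stand-in for reading input and pre-processing it.
                string preProcessed = $"item {i}".ToUpperInvariant();
                DataReady?.Invoke(preProcessed);
            }
        });
        reader.Start();
        reader.Join();
    }
}
```

This coupling of reading, processing and display through a single event chain is exactly what the later parts of the series pull apart.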