Friday, August 26, 2011

How much memory do you need to fit the world in it - Lessons Learned from Implementing Complex Domain Model – part 3

This one is going to be hard for some people to accept, but sometimes, just sometimes, business applications don't necessarily have to use a relational database. This post is a continuation of the series based on my experiences from a project for a customer in the oil&gas industry.

So here is what we knew when starting the project:

  • The usage pattern of the application is going to be the following: an administrator sets up a simulation, a limited number of users interact with the system for about a week, the results are archived (or simply discarded), and the cycle repeats.
  • We are going to work with a large and complex domain.
  • We are going to develop iteratively and are expecting a lot of changes to the code already written (from refactoring to larger structural changes).
  • We want the software to be model-based (Domain Model pattern), as we felt this was the only sensible way to tackle the complexity.

A lot of people expected this to be backed by a SQL Server database. Indeed, NHibernate with a relational database was one of the options we considered. Another was to use an object database (for example db4o), but we ended up doing something quite different.

First, why we decided not to use a relational database. It just seemed that the effort required to keep the schema in sync with all the continuous changes in the structure of the model would become overkill. Also, while I know how flexible NHibernate is and how granular the mapped object model can be, I also know it comes at a cost (custom types, exotic mappings). In addition, we did not really need a relational database. Our model was far from relational. We feared that the mismatch would slow us down too much.

Then we seriously considered db4o. I think that could have been a reasonable choice. Object databases seem to be pretty flexible and don't put too many constraints on the model (though they still tend to feel a bit like ORMs, and rumour has it they are not speed demons), but we found something even less limiting than that. Memory – yes – we decided to keep the whole model in memory.

Of course now you are asking yourself: what if the system crashes, do we lose the data, what about transactions, rollbacks, etc.? To address all of that we started with a pretty naive approach, which we later upgraded to something smarter but still very simple.

We divided all operations into queries and commands. Queries can access the model at any time (concurrently) and don't need transactions (they cannot change state). Commands, on the other hand, can only access the model sequentially (in the simple implementation), when no other command or query is executing. As soon as a command was executed (and the state of the model changed) we would serialize the whole model to a file. If a command threw for any reason, we would deserialize the previously saved object graph and replace the potentially corrupted in-memory state (a rough sketch of this loop follows the list below). This worked quite well for a while. Before I continue, let's look at the benefits:

  • finally we can write truly object-oriented code (polymorphism, design patterns, etc.) – everything is in memory
  • finally we can utilize the power of data structures in our model (hash tables, queues, lists, trees, etc.) – everything is in memory
  • it got so much faster (no I/O) that we found new bottlenecks (e.g. the performance problems with our initial Quantity implementation)
  • because we knew the simulations would last only a week or so, we could afford to simply ignore any schema migration strategy. If I needed to add/rename/move a class/field/property/method, I just did it (no mappings, no schema update scripts)
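
To make the above more concrete, here is a minimal sketch of what our naive engine boiled down to. The names (ModelEngine, ICommand, the snapshot path) are made up for this post, and BinaryFormatter stands in for whatever serializer you prefer; this illustrates the idea, it is not our actual code.

using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;
using System.Threading;

public interface ICommand<TModel>
{
    void Execute(TModel model);
}

public class ModelEngine<TModel>
{
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
    private readonly string _snapshotPath;
    private TModel _model;

    public ModelEngine(TModel model, string snapshotPath)
    {
        _model = model;
        _snapshotPath = snapshotPath;
    }

    // Queries run concurrently and never change the state of the model.
    public TResult Query<TResult>(Func<TModel, TResult> query)
    {
        _lock.EnterReadLock();
        try { return query(_model); }
        finally { _lock.ExitReadLock(); }
    }

    // Commands run one at a time; the whole graph is snapshotted after a
    // successful command and restored from the last snapshot if it throws.
    public void Execute(ICommand<TModel> command)
    {
        _lock.EnterWriteLock();
        try
        {
            try
            {
                command.Execute(_model);
                SaveSnapshot();
            }
            catch
            {
                _model = LoadSnapshot(); // discard the potentially corrupted state
                throw;
            }
        }
        finally { _lock.ExitWriteLock(); }
    }

    private void SaveSnapshot()
    {
        using (var stream = File.Create(_snapshotPath))
            new BinaryFormatter().Serialize(stream, _model);
    }

    private TModel LoadSnapshot()
    {
        using (var stream = File.OpenRead(_snapshotPath))
            return (TModel)new BinaryFormatter().Deserialize(stream);
    }
}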

Our naive implementation worked relatively well for a while – until our model got bigger and serialization was no longer instant. That affected command execution times, which in turn affected system responsiveness in general: queries waited for access to the system until a command was done, commands would pile up, and disaster was unavoidable.

Now I have to confess that when we decided to go for the “everything in memory” option, we knew the naive implementation would not take us very far, but we did it anyway. First, because we wanted to work on the model and limit the initial investment in the infrastructure. Second, we already knew how to upgrade to something more scalable – object prevalence.

The basic idea is still the same: keep the model in memory, have the queries access the model concurrently, and only allow changes to the model through the commands. The difference is that instead of taking a snapshot of the whole graph after each command, you only serialize the command itself to a “command log”. Later, if you need to restore the state of the system (after a power failure, say), you just “replay” all the commands from the log file. You may still want to take full snapshots every now and then and use them as starting points for system recovery (just replay the commands executed/logged since the last snapshot).
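
Again, a rough sketch to illustrate the mechanics, not the API of BambooPrevalence or our real code; the Command and CommandLog types below are invented for this example and BinaryFormatter is just one possible serializer.

using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

[Serializable]
public abstract class Command<TModel>
{
    public abstract void Execute(TModel model);
}

public class CommandLog<TModel>
{
    private readonly string _logPath;
    private readonly BinaryFormatter _formatter = new BinaryFormatter();

    public CommandLog(string logPath)
    {
        _logPath = logPath;
    }

    // Called after a command has executed successfully against the in-memory model.
    public void Append(Command<TModel> command)
    {
        using (var stream = new FileStream(_logPath, FileMode.Append))
            _formatter.Serialize(stream, command);
    }

    // Called on start-up (or after a crash): rebuild the state by replaying
    // every logged command against the last snapshot of the model.
    public TModel Replay(TModel snapshot)
    {
        if (!File.Exists(_logPath))
            return snapshot;

        using (var stream = File.OpenRead(_logPath))
        {
            while (stream.Position < stream.Length)
            {
                var command = (Command<TModel>)_formatter.Deserialize(stream);
                command.Execute(snapshot);
            }
        }
        return snapshot;
    }
}

The snapshot here is still the serialized model graph; loading the last snapshot and then replaying the log brings the system back to its last consistent state.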

This was by no means our invention. The above is an implementation of the Event Sourcing pattern. The term object prevalence, and an implementation of event sourcing as a persistence mechanism, came from the people behind Prevayler for Java (their site seems to be down as I write this, but here is a webarchive version of the FAQ). I'm not sure whether that project has been discontinued (the last commit to the git repository was in 2009). Unfortunately, the .NET port called BambooPrevalence does not seem to be maintained anymore either. Initially we were reluctant to base our solution on a library not supported or maintained by anyone, but the idea behind prevalence is so simple that we decided we would be able to fix problems ourselves if needed. We based our code on a slightly customized version of BambooPrevalence and have not had any problems related to it.

End of part 3

Using object prevalence was the best decision we made in the whole project. To be honest, I cannot imagine us finishing the project on time without all the freedom and flexibility of the in-memory model. I'm not sure I can recommend this approach to everyone, but the ideas behind it are becoming more and more popular in the form of CQRS. Also, not so long ago Martin Fowler published an article on The LMAX Architecture, where a similar approach worked extremely well in a retail financial trading platform (keeping everything in memory, a single update thread, extreme throughput).

Sunday, August 21, 2011

Quantity Pattern - Lessons Learned from Implementing Complex Domain Model – part 2

Analysis Patterns: Reusable Object Models is a book by Martin Fowler, first published in 1996. It is not new, nor is it an easy read. As much as I like Fowler's style of writing, I struggled through some of the chapters. Do I recommend the book – yes. Why? Because each time I am involved in a project with above-average complexity, dealing with real-world business cases, I find the patterns described in the book helpful.

The project I mentioned in part 1 is all about real-world business cases, and it is serious business – the oil&gas business. In this post I will focus on the Quantity pattern, which played a key role in the success of the project.

In our domain we had to deal with values expressed using various units and their derivatives. The formulas used for calculations were pretty complex and involved unit arithmetic. For example, you want to be sure that the calculated daily oil production rate is expressed in mbbl/day (thousand oil barrels per day) or in any unit that can be converted to OilVolume/Duration.

The Quantity pattern tells you to explicitly declare a unit for every dimensioned value instead of just assuming the unit (and representing the value as a bare number). This part is easy:

public class Quantity
{
    public decimal Amount { get; private set; }
    public Unit Unit { get; private set; }
}

Next, there is parsing quantities and units from strings and converting them back to strings. This is not the hardest part, but there are things you have to watch out for. There are prefix units ($) and suffix units (most of them). Some of them require a space after the amount, some look better concatenated with the amount. Examples could be “$10.5m” and “42 km”.
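
Just to illustrate the placement rules, here is a tiny sketch; the parameter names below are assumptions made for this post, not our real Unit API.

public static class QuantityFormatter
{
    // Prefix units go before the amount ("$10.5"); suffix units go after it,
    // either separated by a space ("42 km") or concatenated with the amount.
    public static string Format(decimal amount, string symbol, bool isPrefix, bool spaceBeforeSymbol)
    {
        if (isPrefix)
            return symbol + amount;

        return spaceBeforeSymbol
            ? amount + " " + symbol
            : amount + symbol;
    }
}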

The hardest part was implementing arithmetic on Quantities with support for all operators, compound units, conversions, reductions, etc. But it was worth it. Now we can write code like this:

var oilVolume = Quantity.Parse("100 mmbbl");
var duration = new Quantity(1, Unit.Year);

var productionRate = (oilVolume / duration)
    .ConvertTo(Unit.ThousandOilBarrels / Unit.Day);

Console.WriteLine(productionRate.Round(2)); // Gives us "273.97 mbbl/d"

When we thought we were done with the implementation, we discovered that the performance of Quantity*Quantity was far worse than decimal*decimal. Profiling showed that operations on units (mostly reductions and conversions) caused the Unit.Equals method to be called so many times (especially when looking for conversions between compound units) that, even though a single Unit.Equals execution took just 1 ms, the final result was not acceptable. We were crushed. Of course the first thought was to go back to using decimals, but we really did not want to give up all the Quantity goodness.

It took us a while to come up with a solution, which depended on making sure we only ever had a single instance of any given unit. That allowed us to compare any two units (including compound units) for equality using Object.ReferenceEquals.

This was easy for base units – we just made them singletons, e.g.:

public class Unit
{
    public static readonly Unit OilBarrel = new BaseUnit("bbl", ... );

    // ...
}

All other units were the problem. There are many ways one can create an instance of a unit; some examples:

var a = Unit.Parse("100 mbbl/d");

var b = (Unit)BinarySerializer.Deserialize(BinarySerializer.Serialize(a));

var c = Unit.ThousandOilBarrels/Unit.Day;

In the end we covered all of them using lookups, a unit operation cache, a conversion cache, implementing IObjectReference, and so on. The result was surprisingly good: we were able to achieve performance close to that of operations on pure decimals (once all the caches got populated). What made us really happy was that we were able to solve the performance problem just by changing the implementation of the Unit and Quantity classes. The public interfaces used by all the code already written remained unchanged.
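
To give you an idea of the shape of that solution, here is a simplified sketch of unit interning combined with IObjectReference, so that all three creation paths above end up with the same canonical instance per unit. The member names and the single lookup keyed by symbol are simplifications invented for this post; the real implementation was more involved (separate operation and conversion caches, proper compound unit parsing, and so on).

using System;
using System.Collections.Generic;
using System.Runtime.Serialization;

[Serializable]
public class Unit : IObjectReference
{
    private static readonly Dictionary<string, Unit> Interned = new Dictionary<string, Unit>();
    private static readonly object SyncRoot = new object();

    private readonly string _symbol;

    protected Unit(string symbol)
    {
        _symbol = symbol;
    }

    public string Symbol
    {
        get { return _symbol; }
    }

    // Parse and the operators both go through the lookup, so e.g. "mbbl/d"
    // always resolves to the very same instance.
    public static Unit Parse(string symbol)
    {
        lock (SyncRoot)
        {
            Unit unit;
            if (!Interned.TryGetValue(symbol, out unit))
            {
                unit = new Unit(symbol);
                Interned.Add(symbol, unit);
            }
            return unit;
        }
    }

    public static Unit operator /(Unit numerator, Unit denominator)
    {
        return Parse(numerator.Symbol + "/" + denominator.Symbol);
    }

    // Called by the serialization infrastructure after deserialization;
    // returning the interned instance keeps reference equality intact.
    public object GetRealObject(StreamingContext context)
    {
        return Parse(_symbol);
    }

    public override bool Equals(object obj)
    {
        return ReferenceEquals(this, obj); // safe once every unit is interned
    }

    public override int GetHashCode()
    {
        return System.Runtime.CompilerServices.RuntimeHelpers.GetHashCode(this);
    }
}

With every unit interned, equality collapses to a reference comparison, which is what made the compound-unit reductions and conversions cheap again.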

To summarize: if you are going to work with a domain that deals with a lot of dimensioned values using a number of base units and their derivatives, implementing even the simplest form of the Quantity pattern will make your life much easier. Implementing a full-featured Quantity class will take time and, depending on your performance requirements, may or may not be worth it.

End of part 2

I strongly recommend reading Analysis Patterns. Even if you don't remember the details of the patterns after the first pass, don't worry – just keep the book around; you will read it again when the time comes. Other patterns from the book that we used in the project included many of the Accounting patterns.

Saturday, August 20, 2011

Lessons Learned from Implementing Complex Domain Model – part 1

It took us over a year to implement this application for a company in the oil&gas sector. The customer seems happy, we are happy – let's call it a success. It was a Silverlight 4 front-end connecting to WCF services in the back-end. We encountered quite a few technological challenges, but the biggest challenge was the domain, which none of us had ever worked with before.

We had to learn a lot about licences, bidding, drilling, platform development, oil production, pipelines, gas contracts, taxation regimes, accounting, corporate finance… I could continue for a while, but the bottom line is that it is complex stuff. Now, after a year, I can say that the model we built is not perfect and I would gladly rewrite some parts, but overall I think we did pretty well.

Whenever anyone says Domain Model, everyone thinks of DDD. I cannot say we followed DDD by the (blue) book, but let's say we were inspired by its ideas.

Ubiquitous language and knowledge crunching

This part worked really well. The scope of the application and the complexity of the domain made it hard to just explain (or rather understand) how it all should work. To get us started we would talk to the customer in a room with a whiteboard and draw UML-like diagrams trying to illustrate a story told by an expert. BTW – we were lucky – the customer was a real expert (an infinite source of domain knowledge). We initially wanted to redraw the UML using a computer tool, but we ended up only taking pictures. In the end we did not even use the pictures much. The diagrams were simply very helpful for organizing the information as we received it. They allowed us to create high-level mental models of the domain and learn the vocabulary used by the experts.

An interesting bit is that we ourselves created a lot of concepts that the experts found useful when describing the functionality. In some cases they were just names for things the experts did not even care to name. In other cases we needed more precise names to avoid ambiguity. When creating new concepts, be very careful: if an expert feels a concept is artificial and the word does not describe anything they are familiar with, it is probably not what you were looking for.

Most of the concepts we put in the diagrams ended up as C# classes. We created many classes we had not anticipated when drawing the diagrams, but the high-level structure of the classes was pretty close to what we had discussed with the customer. Of course we did not get it right the first time and had to change the structure many times as we discussed new features. The good thing was that when discussing changes we all used words that meant the same thing to the expert and to us – the developers.

Now you may wonder whether it is possible to teach the customer enough UML to use it as a communication tool. My experience shows that in most cases it is. After all, you need a smart person as an expert to be able to learn from them. If they are smart, they will understand that a rectangle with a name represents a concept. Then use a few examples like Pet, Dog, Cat or Vehicle, Car, Bus to illustrate generalization and specialization. Associations are also easy to explain (a Car “has” wheels, a Pet “has” an Owner, etc.). Just don't try to explain composition vs. aggregation – a simple arrow is enough at the level of detail you want to capture at this stage.

End of part 1

In the next parts (if I ever write them) I want to tell you about:

  • why Analysis Patterns is a book worth reading,
  • why the customer was surprised that they would not need a SQL Server licence,
  • why you need to be very careful when deciding to use the State design pattern,
  • why contracts are the best thing since sliced bread.