How will you make it if you never even try?

February 26, 2010

Using CQRS!

Filed under: C# — charlieflowers @ 2:10 am

Man, I’m tired. I’m hoping the little cup of coffee I just had will give me one more short burst of energy.

Why am I so tired? Because the project I’m working on is so freakin’ awesome that I’m working night and day on it.

Really, I’m so excited about the architecture we’re using and the technology stack that I think this may be the most fun I’ve had (at work) in 8 years.

So what is all this great stuff, you ask. Well, glad you asked. We’re moving to an architecture called “Command / Query Responsibility Segregation” (CQRS). In an oversimplified nutshell, the idea is this: You break your system into 2 “channels”, one for Writes (aka, “Commands”) and one for Reads (aka, “Queries”). You have Domain Objects (and we’re following DDD), but they are Write-Only. That will sound very strange if you’ve never heard of CQRS. But it does a ton of good things for you.

So if your Domain Objects are write-only, then how do you populate a screen with existing data? Well, you use the Query “channel” for that.

When a write occurs, your Domain Objects do it. They apply all of the intelligent behavior and business logic that you’ve so carefully built into them. They won’t let the write occur if anything is wrong with it.

If the Domain Objects do decide to let the write occur, then they will also fire an event saying that something has happened (as in, “A payment has been taken” or “A new account has been created”).

These events are listened to by many possible “subscribers”, one of which is the “Query channel”. The Query channel then records or updates projections of the relevant state.

This means that you can represent the data that is your “true state” in many different ways (aka, various read-only projections). For example, you might write some data into an OLAP star schema. You might also make a separate projection tailored to the “Customer Edit” screen. And some of that same data might be pushed into a projection tailored for your “Nightly Financial Report”. Since these representations are read-only, they can be denormalized and tailored to the needs of the task that is doing the reading.

Then, when someone comes to the “Customer Edit” screen, you do not use the Domain Object to populate that screen. Instead, you read from the Query channel, from the particular read-only projection that was written there for the “Customer Edit” screen.

You see, in a sense this data still comes from the Domain Objects … just not directly. The Domain Objects still control every bit of the data used by the system … but you don’t “call them” to get the info. They put the info somewhere, and you get it from there.
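
To make that a little more concrete, here is a tiny sketch of the two sides in C#. Every name in it (CustomerAccount, PaymentTaken, the projection updater) is hypothetical, and I’m glossing over how events actually get dispatched; it’s just the shape of the idea, not the way our project literally does it:

// The event: a simple message saying "a payment has been taken".
public class PaymentTaken
{
   public Guid AccountId { get; set; }
   public decimal Amount { get; set; }
}

// Write side: the Domain Object enforces the business rules, applies the change,
// and announces what happened. Nobody reads state directly off of it.
public class CustomerAccount
{
   public Guid Id { get; private set; }
   private decimal balance;

   public event Action<PaymentTaken> PaymentTakenEvent = delegate { };

   public CustomerAccount(Guid id)
   {
      Id = id;
   }

   public void TakePayment(decimal amount)
   {
      if (amount <= 0)
         throw new ArgumentException("Payment must be positive.", "amount");

      balance -= amount;
      PaymentTakenEvent(new PaymentTaken { AccountId = Id, Amount = amount });
   }
}

// Read side: one of possibly many subscribers. This one keeps the denormalized
// projection behind the "Customer Edit" screen up to date.
public class CustomerEditProjectionUpdater
{
   public void Handle(PaymentTaken paymentTaken)
   {
      // e.g. update a CustomerEditView row so the screen can read it directly,
      // with no transformation work at read time.
   }
}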

I’m not going to go into too much more detail right now. But in a nutshell, here are some of the main benefits:

1. Your Domain Model is free to emerge and evolve into the right way to express your complicated domain logic. It is no longer encumbered with the additional job of providing thousands of little pieces of data on demand. No other parts of your system even need dependencies on your Domain Objects, and they certainly do not place any “demands” on the shape of those Domain Objects. (It is this benefit, more so than scalability, that led me to CQRS in the first place.)

2. The ability to produce many different representations of the state on the “Read channel” is surprisingly powerful. I mean that literally: it turns out to be even more powerful than it sounds at first. Your Domain Objects (and some other “helpers” on that end of the system) can custom-tailor specialized “summaries” and representations of the data … one for each of the various “Read tasks” you have. This means all your “Read operations” get much simpler … there is little or no “transformation” work left to do, because it was already done when the events were fired from the “Command channel”. Granted, this is merely moving work around: you used to do that transformation during the Read, whereas now you do it as an indirect consequence of the Write. But there are significant advantages to doing that work “back there” as a consequence of the Write … a lot of that transformation work is quasi-business logic, and it is very nice and clean to do it back there. And it’s wonderful to not have to do it when rendering an Edit screen or showing a report.

3. Scalability. Duh. This is the most highly touted benefit, and it is a big one. Now you can optimize your Write channel for writes and your Read channel for reads. You can have many instances of your Read database and load balance across them. Most systems have many times more reads than writes … so now your “Write channel” won’t be burdened with serving all those reads.

4. Real-time OLAP. (This sort of fits under #2, but I feel it deserves its own bullet point.) Star schemas are so fantastic for presenting information about what has happened in your business, and yet so many businesses that would benefit greatly from them don’t have them. OLAP is often approached as a nice-to-have after the OLTP system itself is working. But CQRS lets you have a real-time OLAP schema, merely because those events from the Command channel can be captured in an out-of-band manner and recorded into a star schema. I say “real-time” because, even though there will be a lag of a few seconds between the Command channel write and the OLAP update, “seconds” is still very much real-time to the business world. Plus, you don’t have to justify the OLAP schema as a separate project. It can be the foundation of your Read channel, and therefore it is necessary (and powerful) for the system you are building right now. And you can do bits and pieces of it, rather than the whole Data Warehouse enchilada.
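
To give a flavor of what that out-of-band capture could look like, here is a hypothetical subscriber that records the PaymentTaken event from the sketch above into a made-up FactPayments table. The table and column names are invented, it needs “using System.Data.SqlClient;”, and a real implementation would resolve surrogate keys and so on:

public class PaymentFactRecorder
{
   private readonly string connectionString;

   public PaymentFactRecorder(string connectionString)
   {
      this.connectionString = connectionString;
   }

   public void Handle(PaymentTaken paymentTaken)
   {
      using (var connection = new SqlConnection(connectionString))
      using (var command = connection.CreateCommand())
      {
         // Hypothetical fact table, for illustration only.
         command.CommandText =
            "INSERT INTO FactPayments (AccountId, PaymentDate, Amount) " +
            "VALUES (@accountId, @paymentDate, @amount)";
         command.Parameters.AddWithValue("@accountId", paymentTaken.AccountId);
         command.Parameters.AddWithValue("@paymentDate", DateTime.Today);
         command.Parameters.AddWithValue("@amount", paymentTaken.Amount);

         connection.Open();
         command.ExecuteNonQuery();
      }
   }
}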

So I’m fired up. A co-worker told me the other day that he couldn’t sleep the other night because he was so excited about it. I’m fired up too, but I have been able to sleep, at least 🙂

If you want to learn more about CQRS, here are some great links:

http://www.infoq.com/interviews/Architecture-Eric-Evans-Interviews-Greg-Young

http://www.infoq.com/presentations/greg-young-unshackle-qcon08

http://codebetter.com/blogs/gregyoung/archive/2010/02/13/cqrs-and-event-sourcing.aspx

http://www.udidahan.com/2009/12/09/clarified-cqrs/

February 11, 2010

Important new Term: “Prefactoring”

Filed under: C# — charlieflowers @ 5:37 am

“Prefactoring” is like “refactoring”, but much more proactive. It is when you tell a co-worker, “Hey, you know that code you were going to write this afternoon? Yeah, well … uh … why don’t you go ahead and let me write that instead? Why don’t you take some time to run a few errands, or get some fresh air?”

(Update: Or, perhaps, it is when a co-worker does that to you. I know it can happen to me. After all, I always presume myself “ignorant until proven guilty.”)

April 3, 2009

C# Event handlers: a good idea immediately superseded by a BETTER one

Filed under: C# — charlieflowers @ 12:39 am

When you have events in C#, you need to check to make sure they’re not null before firing them. They will be null if no one has ever registered for the event before. So you gotta do this:

namespace ConsoleApplication1
{
  class Program
  {
    public event EventHandler<EventArgs> someEvent;

    public void DoSomething()
    {
      // Imagine something happened; we need to fire the event.
      if (someEvent != null) // Don't forget this vital null check!
        someEvent(this, EventArgs.Empty);
    }
  }
}

If you forget the null check, you will get a null reference exception if no one has registered for your event.

So here’s the “only good idea” (to be superseded below with the “great idea”)

The “only good idea” is to add an empty delegate to the event immediately when you declare it. Like this:

public event EventHandler<EventArgs> someEvent = delegate {};

I used to think that was a fabulous idea. After all, it has a lot of benefits. Now, you can freely just fire the event. The only overhead is that it will always call your empty delegate, which is the overhead of one unnecessary method call (not usually a big deal).
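
With the empty delegate in place, firing the event is just a straight call, with no null check:

someEvent(this, EventArgs.Empty); // safe: the empty delegate guarantees it is never null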

But here’s the better idea: Define an Extension Method on EventHandler that does the null check for you!

This is better because it is more readable and because it gets rid of that slight performance overhead of the empty delegate.

Here’s how:

public static class EventHandlerExtensions
{
   public static void Fire<T>(this EventHandler<T> self, object sender, T args) where T : EventArgs
   {
      if (self != null) self(sender, args);
   }
}

Then, when it is time to fire an event, you do this:

someEvent.Fire(this, EventArgs.Empty);

I love it. You see, what is happening here is that, even though much of the community knows the new C# 3.0 language features pretty well, we continue to find new, powerful, delightful ways to use them.

By the way, the first place I learned this technique was from this question on StackOverflow.

April 2, 2009

Nice C# idiom for parameterless lambdas

Filed under: C# — charlieflowers @ 9:35 pm

The C# syntax for a lambda with no parameters is kind of ugly:

public static void SomeMethod(Person person)
{
   Func<string> getFirstName = () => person.FirstName;
   Console.WriteLine(getFirstName());
   // The ugly syntax is: () => person.FirstName
}

However, there is a nice, relatively new idiom springing up that makes it a little better. The idiom is to use underscore (“_”) as the parameter. Underscore is a valid name for an identifier in C#.

The convention is that, as the idiom becomes more widespread, people reading your code will know that you would never give a meaningful parameter the name underscore, so they know you’re going to ignore the parameter. In effect, they understand that the intent is a parameterless lambda.

As a result, you wind up with this:

public static void SomeMethod(Person person)
{
   // The lambda really does take one parameter; the underscore just says we ignore it.
   Func<object, string> getFirstName = _ => person.FirstName;
   Console.WriteLine(getFirstName(null));
}

There’s another reason this idiom is appealing: its harmony with F#. F# is a language on the rise, and we’re likely to see more and more projects which have a mix of F# and C#. In F# (and many other functional languages as well), there is a key language feature called pattern matching. At an over-simplified level, it is like a case statement. And there is syntax for a “wild card” pattern that matches everything. That syntax happens to be underscore! More specifically, when used in an F# pattern match, the underscore means, “Match whatever input I’m being compared to, and I don’t plan to use the value of that input in the expression I’m about to execute.” Which is almost exactly the same meaning we’re trying to express here.

Of course, you’ll have to decide for yourself whether you like the underscore or the original ()=> syntax better. Clearly that’s a subjective matter.

Also, beware that sometimes the two aren’t interchangeable, depending on what you’re doing with the lambda. If you use the underscore idiom, then of course you really do have a single-parameter lambda. If you’re passing the lambda to a method that is going to require a zero-parameter lambda (because it takes an expression tree from the lambda or something like that), then the underscore idiom is not going to work.
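
For example, here is a hypothetical Log method (needing System.Linq.Expressions), just to show the shape of the problem:

// Hypothetical method that wants a genuinely parameterless lambda, because it
// takes an expression tree of one.
public static void Log(Expression<Func<string>> messageExpression)
{
   Console.WriteLine(messageExpression.Compile()());
}

// Log(() => person.FirstName);   // compiles
// Log(_ => person.FirstName);    // does not compile: wrong delegate shape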

You can find more info here.

Some more good ideas about parameter validation in C#

Filed under: C# — charlieflowers @ 6:20 am

As you can tell from several recent posts, I’m very interested in good syntax for parameter validation. The new features in C# 3.0 make so many things possible. I found another excellent post, from John Gilliland. He “amplifies” each plain old argument value into an ArgumentEx<T> instance, and then hangs extension methods such as “NotNull” and “InRange” off of ArgumentEx<T>. He uses an implicit conversion operator to make it easy to treat an ArgumentEx<T> as the plain old argument value.

Very nice, very thorough. Check it out. I’d like to combine elements of his approach with the lambda expression idea that allows you to avoid specifying both the parameter and the parameter name as a string.
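
Just to capture the shape of the idea, here is a rough sketch of that kind of API. The names and details are mine, not John’s actual code, so treat it as a starting point rather than a faithful copy:

// The wrapper that "amplifies" a plain argument value with its name.
public struct ArgumentEx<T>
{
   public readonly T Value;
   public readonly string Name;

   public ArgumentEx(T value, string name)
   {
      Value = value;
      Name = name;
   }

   // Lets an ArgumentEx<T> be used anywhere the plain value is expected.
   public static implicit operator T(ArgumentEx<T> argument)
   {
      return argument.Value;
   }
}

public static class ArgumentExtensions
{
   public static ArgumentEx<T> AsArg<T>(this T value, string name)
   {
      return new ArgumentEx<T>(value, name);
   }

   public static ArgumentEx<T> NotNull<T>(this ArgumentEx<T> argument) where T : class
   {
      if (argument.Value == null)
         throw new ArgumentNullException(argument.Name);
      return argument;
   }

   public static ArgumentEx<int> InRange(this ArgumentEx<int> argument, int min, int max)
   {
      if (argument.Value < min || argument.Value > max)
         throw new ArgumentOutOfRangeException(argument.Name);
      return argument;
   }
}

// Usage (the implicit conversion hands the raw string back at the end):
// string name = customerName.AsArg("customerName").NotNull();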

Example where calling Extension Methods on null references is useful: Parameter Validation

Filed under: C# — charlieflowers @ 2:56 am

In a recent post, I pointed out that extension methods can be called on null references. For example, this works perfectly fine:

// Extension methods must live in a static class.
public static class StringExtensions
{
   public static void PrintToConsole(this string self)
   {
      if (self != null)
         Console.WriteLine("The string is: " + self);
      else
         Console.WriteLine("The string is NULL.");
   }
}

// Elsewhere in the code
string myString = null;
myString.PrintToConsole();

I said I’d give some examples of where this would actually be useful (not just a gimmick, as it might appear at first blush).

One such case is parameter validation. Rick Brewster has come up with a fantastic approach for parameter validation, which lets your code look something like this:

public static void Copy<T>(T[] dst, long dstOffset, T[] src, long srcOffset, long length)
{
    Validate.Begin()
        .IsNotNull(dst, "dst")
        .IsNotNull(src, "src")
        .Check()
        .IsPositive(length)
        .IsIndexInRange(dst, dstOffset, "dstOffset")
        .IsIndexInRange(dst, dstOffset + length, "dstOffset + length")
        .IsIndexInRange(src, srcOffset, "srcOffset")
        .IsIndexInRange(src, srcOffset + length, "srcOffset + length")
        .Check();

    // Further code snipped.
}

No doubt that’s beautiful syntax. But one of Rick’s main goals was this: Incur the least possible overhead if the parameters are all correct. In particular, don’t instantiate any additional objects if the parameters are correct.

And the way this is achieved depends on the fact that extension methods can be called on null references. If all the parameters are OK, the Begin() method, the IsNotNull() method, and so on, all return null. However, they still have a return type, and that return type has extension methods on it called “Begin”, “IsNotNull”, “IsPositive” and so forth.
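
Here is a stripped-down sketch of how that trick can work. To be clear, this is my own simplification to show the mechanism, not Rick’s actual implementation (his handles multiple errors, many more kinds of checks, and so on):

public class Validation
{
   public Exception FirstError;
}

public static class Validate
{
   // Returns null: nothing is allocated on the happy path.
   public static Validation Begin()
   {
      return null;
   }
}

public static class ValidationExtensions
{
   public static Validation IsNotNull(this Validation self, object value, string paramName)
   {
      if (value != null)
         return self; // still null if nothing has failed yet

      if (self == null)
         self = new Validation(); // first failure: only now do we allocate

      if (self.FirstError == null)
         self.FirstError = new ArgumentNullException(paramName);

      return self;
   }

   public static Validation Check(this Validation self)
   {
      if (self != null)
         throw self.FirstError; // a fuller version would aggregate all the errors

      return null;
   }
}

When every argument is valid, that whole fluent chain is just a series of extension method calls on a null reference, and nothing ever gets newed up.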

You can learn more about Rick’s approach here and here.

C# Delights: Extension methods can be called on null references (and that’s extremely useful)

Filed under: C# — charlieflowers @ 12:42 am

Did you know that C# extension methods can be called on null references?? Yes, they can. For example, the following method …

// (Declared inside a static class, as extension methods must be.)
public static class StringExtensions
{
   public static void PrintToConsole(this string self)
   {
      if (self != null)
         Console.WriteLine("The string is: " + self);
      else
         Console.WriteLine("The string is NULL.");
   }
}

… can be called as follows:

string someString = null;

someString.PrintToConsole();

The output would be:
The string is NULL.

Is this just a gimmick? You might think so at first blush, but it is actually remarkably useful. I’ll write another post soon giving some examples of when it is useful. Here’s one hint: imagine a case where you don’t want the overhead of instantiating objects unless you’re in an unusual situation. (OK, here’s another hint … what if you want to write code that applies business logic to non-null values, but seamlessly ignores nulls, so that the code doesn’t have to be all cluttered up with null checks).
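
Here is a taste of what that second hint could look like. This is just a hypothetical helper of my own, not anything from the framework:

public static class MaybeExtensions
{
   // Applies the projection only when the value is non-null; otherwise returns
   // the default (null for reference types), so call sites need no null checks.
   public static TResult IfNotNull<T, TResult>(this T self, Func<T, TResult> projection)
      where T : class
   {
      return self == null ? default(TResult) : projection(self);
   }
}

// Usage: no NullReferenceException even if customer or customer.Address is null.
// string city = customer.IfNotNull(c => c.Address).IfNotNull(a => a.City);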

April 1, 2009

NHibernate and FluentNHibernate Rock!

Filed under: C# — charlieflowers @ 9:18 pm

Saw a very interesting presentation last night by Brendan Erwin, on how he uses NHibernate and FluentNHibernate. FluentNHibernate is a tool that lets you specify your mappings between the database and your domain objects in C# code, instead of in XML. This is awesome for several reasons:

  1. You can refactor common bits of mappings into helper methods, thus keeping your mappings more DRY.
  2. It is much more refactoring friendly. It uses lambda expressions and avoids strings for property names, so if you rename a property with your IDE’s refactoring tools or ReSharper, the update applies everywhere.
  3. You get intellisense as you create and edit mappings. You’d be surprised how greatly this improves the process.
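
To give a flavor of what that looks like in practice, here is a minimal sketch of a fluent mapping. The Customer entity is hypothetical, it assumes a reference to FluentNHibernate (and “using FluentNHibernate.Mapping;”), and you should check the docs for the exact API of whatever version you’re on:

// Hypothetical entity (NHibernate wants virtual members so it can proxy them).
public class Customer
{
   public virtual int Id { get; set; }
   public virtual string Name { get; set; }
}

// The mapping lives in code, with lambdas instead of strings, so renames refactor cleanly.
public class CustomerMap : ClassMap<Customer>
{
   public CustomerMap()
   {
      Table("Customers");
      Id(x => x.Id);
      Map(x => x.Name);
   }
}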

Git on Windows and behind Firewalls

Filed under: C# — charlieflowers @ 9:11 pm

I do .NET development on my Mac, and for source code management I use Git from the Mac Terminal window. But once I’ve developed the code, I have to get it onto the company’s development server, which is a Windows machine behind a firewall. I do this using git on Windows.

There are multiple choices for Git on Windows, but I chose the MSysGit route, and I used PuTTY to get around the firewall issues. I also had to set the whitespace/line-ending setting in the MSysGit repo to match the Unix default. Without that, things still worked, but Git would convert Windows-style line endings to Unix line endings, causing a diff to think every single line had changed.

Once I worked through all that, everything worked beautifully (and it has for months now). I code on my machine, push to a GitHub repo, and then pull from GitHub to the Development Server (using MSysGit / PuTTY).

I culled through hundreds of articles while getting this set up. Here are the hand-picked articles that were the most helpful for me.

C# Delights: You can put Extension Methods onto Enums!

Filed under: C# — charlieflowers @ 8:04 am

Often, you need to associate other information with the members of an Enum. For example, say you have the following enum:

public enum DaysOfWeek : int
{
   Sunday = 1,
   Monday = 2,
   Tuesday = 3,
   Wednesday = 4,
   Thursday = 5,
   Friday = 6,
   Saturday = 7
}

That’s all well and good. But say you need to associate additional information with each enum member. For example, say your legacy database represents the days of the week with the following two-letter codes: "Sn", "Mo", "Te", "Wn", "Tr", "Fi", "St". Notice I picked codes that are not intuitive and are not always the first two characters. Crazy, but we all know legacy databases can be crazy.

Also, let’s imagine that your company needs to associate a decimal hourly rate with each day, representing the fact that you charge different rates for different days.

C# has a very nice, new way you can do this. Before C# 3.0, the best way I knew of to handle this was to not use an Enum at all. Rather, I would make a class that was much like a singleton, but with more than one instance. It would have a private constructor, so that no other classes outside of it could make instances of it. However, it would expose static properties with the exact set of instances that were allowed (7 in our case, and the properties would be named “Sunday”, “Monday”, etc.). Each instance would have properties for “DatabaseCode”, “Name” and “HourlyRate”. That’s not bad, but the new way is better in many cases.
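
For reference, that older approach would have looked roughly like this (a sketch only; I’m using public readonly fields for brevity where real code would probably expose properties, only showing two of the days, and reusing the same made-up codes and rates as the extension-method version below):

// The pre-C# 3.0 workaround: a private constructor and a fixed set of static
// instances (a "multi-instance singleton").
public sealed class DayOfWeekInfo
{
   public static readonly DayOfWeekInfo Sunday = new DayOfWeekInfo("Sunday", "Sn", 2.5m);
   public static readonly DayOfWeekInfo Monday = new DayOfWeekInfo("Monday", "Mo", 3.6m);
   // ... Tuesday through Saturday omitted.

   public readonly string Name;
   public readonly string DatabaseCode;
   public readonly decimal HourlyRate;

   private DayOfWeekInfo(string name, string databaseCode, decimal hourlyRate)
   {
      Name = name;
      DatabaseCode = databaseCode;
      HourlyRate = hourlyRate;
   }
}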

The new way is this: You can place extension methods onto Enums! So, in our case, we would do the following:

public static class DaysOfWeekExtensions
{
   private static string[] databaseCodes = new string[] { "Sn", "Mo", "Te",
      "Wn", "Tr", "Fi", "St" };

   private static decimal[] rates = new decimal[] { 2.5m, 3.6m, 0m, 1.2m,
      8.8m, 42m, 3.6m };

   public static string DatabaseCode(this DaysOfWeek self)
   {
      int index = (int)self - 1;
      return databaseCodes[index];
   }

   public static decimal HourlyRate(this DaysOfWeek self)
   {
      int index = (int)self - 1;
      return rates[index];
   }
}

And then you’d use it like this:

Console.WriteLine("For Tuesday, the database code is " + 
   DaysOfWeek.Tuesday.DatabaseCode() + " and we charge " +
   DaysOfWeek.Tuesday.HourlyRate() + ".");

Click here for another example.

