How will you make it if you never even try?

June 23, 2011

Analogies for 1000 Mediocre Programmers vs. 5 Great Programmers

Filed under: C# — charlieflowers @ 5:30 pm

Recently, there was an article and a corresponding Hacker News discussion on whether you’d rather have 1000 mediocre programmers, or 5 great programmers. To me, the choice is so blindingly obvious that I can’t believe there’s a debate.

To help illustrate why, here are some fun (and pretty darn accurate) analogies:

1. You’re on trial, facing the death penalty, for a crime you didn’t commit, but with a lot of circumstantial evidence that makes you look guilty. You are given a choice — have a team of 1000 mediocre lawyers, or a team of only 5 great lawyers.

2. Would you rather buy a Sports Illustrated Swimsuit edition with 5 incredibly gorgeous models, or one with 1000 mediocre models?

3. You’re in a chess tournament. You don’t get to choose your moves directly … rather, you must select a team to choose your moves. All moves must be made in 30 minutes or less, and your team must reach consensus on each move. You may have either 5 brilliant chess players, or 1000 mediocre ones.

4. You’re going on an 18-hour road trip. You must drive straight through, with no more than four 30-minute breaks. You must listen to music from an iPod the entire way. Do you want the iPod filled with music from 5 distinct, excellent musicians, or do you want it filled with music from 1000 distinct, mediocre musicians?

5. You’re going to read a novel that is over 1500 pages and then take an extensive reading comprehension quiz. Do you want the novel that is written by a collaboration of 1000 mediocre authors? Or would you prefer the novel written by a collaboration of 5 excellent authors?

6. Your wife is in the hospital, giving birth, but there are complications. You can’t be there for some strange reason, and you’re a nervous wreck. Which would you rather hear from the hospital:
a. “Unfortunately, sir, we have only mediocre doctors available right now. However, don’t worry, we have *1000* of them on the case!”
b. “Don’t worry sir, we have 5 outstanding doctors on her case.”

Think about it! (And let me hear your analogies too!)

March 9, 2011

Joy and Software Development

Filed under: C# — charlieflowers @ 2:44 am

I was just hit by a Deep Thought.

I was thinking back to the way I used to think about programming when I was in high school and college. Everywhere I looked, I could envision a new software creation that would make things better for people. I’d stop at a convenience store for a Coke, and I’d think, “Wow, a software system could automate their cash register, track their sales, and know when to order more inventory.” (That was before every convenience store had such a system, and in fact, I wrote one and sold it to a few).

I’d look at movie theaters, restaurants, and bookstores the same way. I imagined a little computer you could carry around in your pocket that could contain all your favorite songs, letting you hear them whenever you want (waaaaaay before Apple finally did it right and made it pervasive).

The point I want to focus on is this: Every time such a “vision” comes, it comes with one overwhelming, predominant emotion — Joy. You are seeing an opportunity to create something new that will solve problems and make things better, and just seeing that is *thrilling*. Excitement, passion, joy, borderline euphoria.

My Deep Thought is that Software Development, in its essence, is built upon and inseparable from this Joy. It is the joy of seeing in your mind that new thing which could exist that leads you to decide that the thing damn well should exist and further, leads you to decide that, so help you God, that thing is going to exist — and that drives software development.

I’m saying the two cannot be separated. Is that really true?

Imagine a great painter, who can create works that inspire and move people. Now, put him on a 20-hour-a-day schedule and *force* him to produce new paintings non-stop. Could you rationally expect the same kind of results? No, because you have removed the joy from his creative process and in essence turned him into a machine. And for all Watson can do on Jeopardy, I don’t think he can make world-class paintings.

Can you take a great writer of love songs, and demand that she produce 30 new songs in the next week? Yes, but the songs will suck. I realize my logic here is not air-tight. I’m not exactly proving my case, but hopefully I’m clumsily articulating it, which is enough for now.

It comes down to this: Certain human acts of creation are driven by Joy. If you separate the creation process from the Joy, what you get is only a shadow of what could have been. And Software Development is one of those acts.

Yet, most Software Development jobs are for corporations, and most such jobs don’t allow any place for Joy. Why? Not for any pre-meditated, evil reasons. Simply because a corporation’s role is to continually increase its revenues and profits. A corporation is not human and cannot experience joy. But it does demand relentlessly improving quarterly numbers.

But the corporation cannot get the software it wants effectively and efficiently if it inhibits the engine that produces that software. And that engine is joy. There’s no way around it. Some corporations get this, some don’t. But it remains a fact of life.

Hell, sometimes, while working in the salt mines of some joy-sucking corporation, I have forgotten it myself. I have operated as a machine for some pretty long stretches. But I’m glad I remember it now, and I’m going to try to keep it ever-present in my mind. I’ll have a hell of a lot more fun, and it’s going to allow me to produce more software, faster, and that software will be a better fit for its users, and it will make things better for the humans who work with it.

February 11, 2011

An improvement to the Javascript Module pattern

Filed under: C# — charlieflowers @ 2:59 pm

Wow, a nice idea just hit me while I was in the shower. It is an improvement to the Javascript Module pattern. (It’s not earthshaking, but it offers a nice additional bit of protection).

The general idea of the module pattern is that you take some script that would otherwise execute in global scope and wrap it in a function, which you then execute immediately, like this:

(function(GLOBAL) {
    var x = 42;
    // more code here
}(window));

Note that right after the function has been declared, we have followed it with (window), which causes it to execute immediately and passes the global object in as the GLOBAL parameter. This gives you a scope within which you can put your code. You can return objects out of here that contain closures, and hence create effects such as private internal state.

It’s fantastic, and though it can be overused (like any good thing), it should be used a lot. You’ll see it used often in the source code of jQuery, JavascriptMVC, and many other well-factored JavaScript code bases.

The Improvement: Use call() to control the “this” pointer

So here’s the improvement. We’re going to take control of the “this” pointer inside our module (aka, the “context”), and make sure that it is not the global scope. We can still pass the global scope in to our module, so that we can have global effects if we want to. But this way, all our global effects will be made obvious. Plus, there are some common mistakes that can accidentally pollute the scope pointed to by your “this” pointer. By making sure our “this” pointer is not the global scope, we prevent those mistakes from accidentally vomiting directly into the global scope.

Here’s what we do:

(function(GLOBAL) {
    var x = 42;
    // more code here
}.call({}, window));

Notice we have called the module function using “call()”, which allows us to specify the “this” pointer. And we’ve passed an empty object for the “this” pointer.

Oftentimes when you’re doing heavy JavaScript development, your particular implementation of the Module pattern is boilerplate that you roll into a snippet that automatically gets inserted whenever you create a new file. Incorporating this improvement just introduces yet one more level of protection from some of the kinds of mistakes you’re trying to watch out for.

October 6, 2010

Going Beyond ASP.NET MVC and JQuery

Filed under: ASP.NET MVC, C#, Javascript, JQuery, SproutCore — charlieflowers @ 11:14 pm

I’ve been having a blast for the past year building an app in ASP.NET MVC (I’ll call it “MVC” through the rest of this post) and JQuery. And I’m very fond of both.

But an interesting thing happens when you build something complex with these 2 technologies. You’re very likely to find yourself facing a conundrum regarding how far to go with Javascript.

Here’s how it happened for us…. We started out trying to keep our logic on the server as much as possible. We can utilize the full power of C# and the .NET framework there. Even our controllers are unit testable, thanks to MVC and IoC. We have mature tools and patterns for unit testing that code. We can refactor it with Resharper. Etc, etc.

But of course, the beauty of JQuery and MVC in tandem is that you can make your web apps more interactive and responsive. So of course, we did some JavaScript on the client. Matter of fact, we didn’t shy away from any UI request our business users had. “You want a grid that lists Personal References, and the ability to select one from the list and Edit it? You want to be able to add new and delete from the list? Cool, we can do that without a single postback.”

And we have done that. And it is nice. A responsive web app with some nice usability features. But here’s where the conundrum comes in.

To make it nicer, snappier, more responsive and even more usable, we’d like to add more JavaScript. And we’d like to use Javascript in more places than we do currently. Sometimes, we wonder why we’re even building HTML on the server … why not return JSON from the server, and have some Javascript code on the client generate a DOM from it? That would certainly make better use of bandwidth.
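
For illustration, here’s a minimal sketch of that JSON-returning approach in MVC. The controller, action, and field names are hypothetical, not from our actual app:

using System.Web.Mvc;

// Hypothetical sketch: return JSON for the Personal References grid instead of
// rendering HTML on the server. The client-side script would take this payload
// and build the grid's DOM from it.
public class PersonalReferencesController : Controller
{
    public ActionResult List(int applicantId)
    {
        // Imagine this coming from the repository / service layer.
        var references = new[]
        {
            new { Id = 1, Name = "Jane Doe", Phone = "555-0100" },
            new { Id = 2, Name = "John Smith", Phone = "555-0101" }
        };

        // AllowGet is needed when serving JSON to a GET request in MVC 2 and later.
        return Json(references, JsonRequestBehavior.AllowGet);
    }
}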

But … the JavaScript we have is already complicated. And when you’re building a UI with Javascript, you don’t have a lot of the benefits that you’re accustomed to in other UI technologies, such as change notifications, widgets, controllers, mature unit testing capabilities that are built in to your build process, etc. So the grid for Personal References, for example, took a while to get right. We were working at a primitive level, with individual DOM elements, click events, grid rows, etc.

The key realization was this: You either need to keep the amount and complexity of your Javascript very, very limited, or you need to really jump all in and do almost all of your UI work in Javascript. Anything in-between is a no-man’s land.

If you keep the Javascript very limited (more limited than what we needed for that Personal References screen, for example), then MVC and JQuery are enough for you to keep matters well in hand. But when you approach moderate or greater complexity, you need to make a quantum jump to a very Javascript-centric approach. And when you do that, you need Javascript Framework capabilities, for things like change notifications, controllers, widgets, unit testing, etc.

At that point, you have grown beyond just ASP.NET MVC and JQuery. Of course, both of those may still play a role in your solution, but you need more. It’s at this point that you should consider things like SproutCore, Cappuccino, JavascriptMVC, etc.

Which are some of the things we’re thinking about now.

May 24, 2010

Your most limited resource

Filed under: C# — charlieflowers @ 12:58 am

The most limited resource you have is not money, it is time. One Year of your life is a HUGE piece of your life. If life expectancy is around 70, and you don’t really get full control of your life until you’re around 20, then you have 50 years of life that you can direct as you will. One year is a whopping 2% of that. So wasting even a year being stuck in a crappy job or otherwise undesirable situation is a bad bad deal, even if you’re getting paid big money. You need to be doing whatever it is that you really want to be doing, whatever it is that you’re innately wired for and drawn to … and you need to be doing it right now. If not, then you need to be on a road that will get you there, and that road needs to have a realistic chance of getting you there very soon. Because remember, 50 years is the optimistic number. You might only have 5, or 1. Stop chasing the dollar or whatever else leads you astray, and start doing whatever it is you are “meant” to be doing. ASAP.

March 21, 2010

Some nice Nullable extensions

Filed under: C# — charlieflowers @ 7:11 am

Hey, I just banged out something nice and thought I’d share it. Some convenient extensions on Nullable.

I ran into a case where I have an object with a property of type Nullable<DateTime>. If the value is not null, I want to call ToShortDateString() on it. But if it *is* null, then I merely want to return empty string.

It was really a pain in the ass to do it before the extension methods, because it looked something like this:

string x;

if(theObject.TheDateProperty.HasValue)
{
   x = theObject.TheDateProperty.Value.ToShortDateString();
}
else
{
   x = string.Empty;
}

With the extension methods I wrote, it can now be much nicer:

string x = theObject.TheDateProperty.Safe(d => d.ToShortDateString());

I love it when my language comes through for me. Anders is the man.

Here are the extension methods

public static class NullableExtensions
{
    public static TReturn Safe<TType, TReturn>(this Nullable<TType> nullableValue, TReturn defaultValue, Func<TType, TReturn> func) where TType : struct
    {
        if (!nullableValue.HasValue) return defaultValue;

        return func(nullableValue.Value);
    }

    public static TReturn Safe<TType, TReturn>(this Nullable<TType> nullableValue, Func<TType, TReturn> func) where TType : struct
    {
        return nullableValue.Safe<TType, TReturn>(default(TReturn), func);
    }

    public static string Safe<TType>(this Nullable<TType> nullableValue, Func<TType, string> func) where TType : struct
    {
        return Safe<TType, string>(nullableValue, string.Empty, func);
    }
}

Notice the specific overload for a return type of String, which lets you avoid explicitly stating that the default should be “” all the time. I could definitely envision other specific Safe() methods being added for other types down the road.
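
For example, the two-argument overload lets you pick your own default (the “(no date)” placeholder below is just illustrative), and it works for non-string results too:

// Supply your own default instead of string.Empty (the placeholder text is illustrative):
string formatted = theObject.TheDateProperty.Safe("(no date)", d => d.ToShortDateString());

// Non-string results work the same way; 0 comes back when the date is null:
int year = theObject.TheDateProperty.Safe(0, d => d.Year);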

February 27, 2010

The CQRS Light Bulb Moment

Filed under: ASP.NET MVC, C#, CQRS, Domain Driven Design (DDD), nhibernate, nServiceBus, OLAP — charlieflowers @ 11:17 pm

As I recently blogged, the project I’m on has decided to move to CQRS (Command Query Responsibility Segregation). We’re going to use nServiceBus as a message bus that lets us separate our system into 2 “channels”: the “Read side” and the “Write side” (aka, the “Query side” and the “Command side”).

This decision has been the result of several “Light Bulb Moments”, in which different ones of us had a flash of insight that helped us see how an architecture that at first sounded weird and unorthodox would actually solve a number of problems and help us tremendously.

I’ve decided to share here one of those Light Bulb Moments in raw form. Here’s the text of an email I sent to two other architects on our team (over the weekend, from my own account … we talk about this stuff all the time because we love it). It expresses well many of the reasons we made the move (although I understand more about CQRS at this point and would tweak a few details). (Note: Names changed to protect the guilty).

The Email…

Guys,

I’m seeing the opportunity to do something truly awesome here. It is based on the CQS reading I’ve been doing while thinking about what our “dto’s” or “commands” or etc. should look like.

I have created, worked with, and seen first hand the power of an OLAP database for read operations. It really is unbelievable in terms of the freedom it gives someone looking at the data. And it lets reads be very fast. But a lot of projects I’ve been on have said, “Let’s build the transactional system first. It is so obviously core to our business that we need it, and we need it yesterday. Once we get that done, we can think about maybe doing OLAP.”

But the way people are approaching CQS as an architectural concept these days, we have the opportunity to do both at the same time. It should help us get to the finish line faster, with screaming fast software and high scalability.

And it’s not that big of a change from what we’re doing now. It boils down to this:

1. We make the “flat view models” you guys are working on. They are designed to serve the view that they populate, and nothing else.

2. We express our edits to the domain in terms of “Commands”. These are merely declarative … you look at one and it intuitively makes sense. (A rough sketch of one follows at the end of the email.)

3. Our Domain Objects accept those Commands and process them. Our Domain Objects apply rules to decide whether or not a Command is valid. The Domain Objects have complete authority over accepting or rejecting an Edit Command.

4. Once the Edit Command is accepted by the Domain Objects, it is “applied”.

Now, right now, you’re both saying, “No shit, that’s what I said on Friday.” Yes, but let’s take stock of where this puts us, and see what else it allows us to do.

5. Since those “flat view models” don’t enforce any important business rules, they don’t have to come from our Domain Objects. (They can STILL come from NHibernate if that’s important or helpful, but they don’t have to come from our Domain Objects). Remember, our Domain Objects are in charge of *writing* all updates. Therefore, the written data can include calculated fields and anything else necessary to ensure that what comes back in on the read side is valid and complete. Complete domain integrity is maintained by the Domain Objects, so Reading is simplified. Needing a bunch of business logic on read has some challenges to it, plus I don’t think we have very many (maybe not any) kinds of calculated fields that would really require a full domain object.

6. We *are* still talking about NHibernate pulling the data that ultimately goes into our View Models. So there are probably some “DTO’s” that are *also* mapped to the same NHibernate tables that our Domain Objects write to. But those DTO’s can be “screen-shaped” (more accurately, “task-shaped”, since we want to include web services and other users of our system besides just the web-based human interface).

7. Now, the domain no longer needs many (possibly any) getters or setters.

8. Every single Edit Command can cause 2 things to happen: 1) our normalized OLTP database can get updated by our Domain Objects with the new data. 2) The very same Edit Command can get queued somewhere else to cause an update to our OLAP database for read access. We can essentially get an OLAP database that doesn’t need ETL … it gets updated from our Edit Commands and only lags a few seconds behind our OLTP database.

9. The Edit Commands also make it easy for us to have *MANY* copies of the readable OLAP database. We can update 3 databases as easily as one. Now we can load balance between them, and they’re equivalent.

10. We don’t actually need the fix Billy added to submit disabled controls. After all, we *know* those values didn’t change. Why should we need them on a Post? Our Edit Commands can be as sparse as what the user actually changed. (This is a minor thing, but still worth mentioning).

11. Here’s one of the main benefits of the whole thing: Once we get to this point, when we make a new screen, we make a DTO for it and a View Model for it. *Both* are custom designed to fit the screen itself. They will be coupled to the purposes of that screen because they need to be … this is good coupling. However, that screen will not exert *any pressure whatsoever* on our Domain Model. Our Domain Model will simply be exactly what it needs to be to express the logic of the domain. Think about how much easier things will be for us than they are right now. Now, we have to have a Domain Object Graph, and a parallel DTO object graph, and (soon) a View Model that gets mapped from the DTO object graph. Keeping the parallel Domain Object and DTO in sync has proven to be something we invest a lot of time in. They were drifting apart before I added the ApplyEdits() stuff. I then added the Interfaces that sometimes have 4 or 5 generic types riding along. Sam went further with it and has cases with 8 or 12 generic types, including multiple levels of nesting. We’re working too hard here. *IF you LOVE THE SMELL OF DELETED BITS IN THE MORNING*, then you are going to get enjoyment out of moving to an approach like this.

Normally, doing something to fix problems you are having requires some extra work you hadn’t anticipated, and is a bit of a setback, though necessary. In this case, the fix for some of the problems we’re running into actually opens up whole new vistas of possibility, and these opportunities basically come for free after applying the correct fix.
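
To make the “Edit Command” idea from points 2 through 4 a bit more concrete, here is a rough sketch. These are invented, illustrative names, not our actual types:

using System;

// A declarative Edit Command: you look at it and it intuitively makes sense.
public class ChangeCustomerMailingAddressCommand
{
    public Guid CustomerId { get; set; }
    public string NewStreet { get; set; }
    public string NewCity { get; set; }
    public string NewPostalCode { get; set; }
}

// The Domain Object has complete authority over accepting or rejecting the command.
public class Customer
{
    private string street, city, postalCode;

    public void Apply(ChangeCustomerMailingAddressCommand command)
    {
        if (string.IsNullOrEmpty(command.NewPostalCode))
            throw new InvalidOperationException("A mailing address must include a postal code.");

        street = command.NewStreet;
        city = command.NewCity;
        postalCode = command.NewPostalCode;

        // Once accepted and applied here, the same command can also be queued to
        // update the read-side / OLAP copies, as described in points 8 and 9.
    }
}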

February 26, 2010

Using CQRS!

Filed under: C# — charlieflowers @ 2:10 am

Man, I’m tired. I’m hoping the little cup of coffee I just had will give me one more short burst of energy.

Why am I so tired? Because the project I’m working on is so freakin’ awesome that I’m working night and day on it.

Really, I’m so excited about the architecture we’re using and the technology stack that I think this may be the most fun I’ve had (at work) in 8 years.

So what is all this great stuff, you ask. Well, glad you asked. We’re moving to an architecture called “Command / Query Responsibility Segregation” (CQRS). In an oversimplified nutshell, the idea is this: You break your system into 2 “channels”, one for Writes (aka, “Commands”) and one for Reads (aka, “Queries”). You have Domain Objects (and we’re following DDD), but they are Write-Only. That will sound very strange if you’ve never heard of CQRS. But it does a ton of good things for you.

So if your Domain Objects are write-only, then how do you populate a screen with existing data? Well, you use the Query “channel” for that.

When a write occurs, your Domain Objects do it. They apply all of the intelligent behavior and business logic that you’ve so carefully built into them. They won’t let the write occur if anything is wrong with it.

If the Domain Objects do decide to let the write occur, then they will also fire an event saying that something has happened (as in, “A payment has been taken” or “A new account has been created”).

These events are listened to by many possible “subscribers”, one of which is the “Query channel”. It then records or updates projections of the relevant state.

This means that you can represent the data that is your “true state” in many different ways (aka, various read-only projections). For example, you might write some data into an OLAP star schema. You might also make a separate projection that is tailored to the “Customer Edit” screen. And some of that same data might be pushed into a projection tailored for your “Nightly Financial Report”. Since these representations are read-only, they can be denormalized and tailored to the needs of the task that is doing the reading.

Then, when someone comes to the “Customer Edit” screen, you do not use the Domain Object to populate that screen. Instead, you read from the Query channel, from the particular read-only projection that was written there for the “Customer Edit” screen.

You see, in a sense this data still comes from the Domain Objects … just not directly. The Domain Objects still control every bit of the data used by the system … but you don’t “call them” to get the info. They put the info somewhere, and you get it from there.
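
As a rough sketch of that flow (invented names, and a plain .NET event standing in for the message bus we’d really use, such as nServiceBus):

using System;

// Event describing something that has happened on the write side.
public class PaymentTaken
{
    public Guid AccountId { get; set; }
    public decimal Amount { get; set; }
    public DateTime TakenAt { get; set; }
}

// Write-only Domain Object: it enforces the rules, performs the write, and
// announces what happened. It exposes no getters for the read side to query.
public class Account
{
    private readonly Guid id;
    private decimal balance;

    public event Action<PaymentTaken> PaymentTakenEvent = delegate { };

    public Account(Guid id) { this.id = id; }

    public void TakePayment(decimal amount)
    {
        if (amount <= 0)
            throw new InvalidOperationException("Payment amount must be positive.");

        balance -= amount;

        // In a real system this would go out over the bus; subscribers on the
        // Query channel pick it up and update their projections.
        PaymentTakenEvent(new PaymentTaken { AccountId = id, Amount = amount, TakenAt = DateTime.UtcNow });
    }
}

// One of possibly many subscribers: keeps the denormalized, screen-shaped
// projection behind the "Customer Edit" screen up to date.
public class CustomerEditProjectionUpdater
{
    public void Handle(PaymentTaken e)
    {
        // Write or update the read-only projection row here (data access omitted).
    }
}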

I’m not going to go into too much more detail right now. But in a nutshell, here are some of the main benefits:

1. Your Domain Model is free to emerge and evolve into the right way to express your complicated domain logic. It is no longer encumbered with the additional job of providing thousands of little pieces of data on demand. No other parts of your system even need dependencies on your Domain Objects, and they certainly do not place any “demands” on the shape of those Domain Objects. (It is this benefit, more so than scalability, that led me in particular to CQRS).

2. The ability to produce many different representations of the state on the “Read channel” is surprisingly powerful. By that I mean that, even though it sounds powerful, it turns out to be surprisingly more powerful than it sounds at first. Your Domain Objects (and some other “helpers” on that end of the system) can custom-tailor specialized “summaries” and representations of the data … one each for the various “Read tasks” that you have. This means that all your “Read operations” get much simpler … there is very little or no “transformation” work to do, because it was already done for you when the events were fired from the “Command channel”. Granted, this is merely moving work around, because you used to do that “transformation” work during the Read, whereas now you do it as an indirect consequence of the Write. But there are significant advantages of doing that work “back there” as a consequence of the Write … a lot of that transformation work is quasi-business logic, and it is very nice and clean to do it back there. And it’s wonderful to not have to do it when rendering an Edit screen or showing a report.

3. Scalability. Duh. This is the most highly touted benefit and it is a big one. Now you can optimize your Write channel for writes, and your Read channel for Reads. You can have many instances of your Read database, which you load balance between. Most systems have many times more Reads than Writes … so now your “Write channel” won’t be burdened with serving all those Reads.

4. Real-time OLAP. (This sort of fits under #2, but I feel it deserves its own bullet point). Star schemas are so fantastic for presenting information about what has happened in your business. So many businesses that would benefit greatly from them don’t have them. It’s often approached as a nice-to-have after the OLTP system itself is working. But CQRS lets you have a real-time OLAP schema, merely because those events from the Command channel can be captured in an out-of-band manner and recorded into a star schema. I say “real-time” because, even though there will be a lag time in seconds between the Command channel write and the OLAP update, “seconds” is still very much real-time to the business world. Plus, you don’t have to justify the OLAP schema as a separate project. It can be the foundation of your Read channel, and therefore it is necessary (and powerful) for the system you are building right now. But you can also do bits and pieces, rather than the whole Data Warehouse enchilada.

So I’m fired up. A co-worker told me the other day that he couldn’t sleep the other night because he was so excited about it. I’m fired up, but I have been able to sleep at least 🙂

If you want to learn more about CQRS, here are some great links:

http://www.infoq.com/interviews/Architecture-Eric-Evans-Interviews-Greg-Young

http://www.infoq.com/presentations/greg-young-unshackle-qcon08

http://codebetter.com/blogs/gregyoung/archive/2010/02/13/cqrs-and-event-sourcing.aspx

http://www.udidahan.com/2009/12/09/clarified-cqrs/

February 11, 2010

Important new Term: “Prefactoring”

Filed under: C# — charlieflowers @ 5:37 am

“Prefactoring” is like “refactoring”, but much more proactive. It is when you tell a co-worker, “Hey, you know that code you were going to write this afternoon? Yeah, well … uh … why don’t you go ahead and let me write that instead? Why don’t you take some time to run a few errands, or get some fresh air?”

(Update: Or, perhaps, it is when a co-worker does that to you. I know it can happen to me. After all, I always presume myself “ignorant until proven guilty.”)

April 3, 2009

C# Event handlers: a good idea immediately superseded by a BETTER one

Filed under: C# — charlieflowers @ 12:39 am

When you have events in C#, you need to check to make sure they’re not null before firing them. They will be null if no one has ever registered for the event before. So you gotta do this:

namespace ConsoleApplication1
{
  class Program
  {
    public event EventHandler<EventArgs> someEvent;

    void DoSomething()
    {
      // Imagine something happened, need to fire event
      if (someEvent != null) // Don't forget this vital null check!
        someEvent(this, EventArgs.Empty);
    }
  }
}

If you forget the null check, you will get a null reference exception if no one has registered for your event.

So here’s the “only good idea” (to be superseded below with the better idea)

The “only good idea” is to add an empty delegate to the event immediately when you declare it. Like this:

public event EventHandler<EventArgs> someEvent = delegate {};

I used to think that was a fabulous idea. After all, it has a lot of benefits. Now, you can freely just fire the event. The only overhead is that it will always call your empty delegate, which is the overhead of one unnecessary method call (not usually a big deal).

But here’s the better idea: Define an Extension Method on EventHandler that does the null check for you!

This is better because it is more readable and because it gets rid of that slight performance overhead of the empty delegate.

Here’s how:

public static class EventHandlerExtensions
{
   public static void Fire<T>(this EventHandler<T> self, object sender, T args) where T : EventArgs
   {
      if (self != null) self(sender, args);
   }
}

Then, when it is time to fire an event, you do this:

someEvent.Fire(this, EventArgs.Empty);

I love it. You see, what is happening here is that, even though much of the community knows the new C# 3.0 language features pretty well, we continue to find new, powerful, delightful ways to use them.

By the way, the first place I learned this technique was from this question on StackOverflow.
