How will you make it if you never even try?

October 23, 2014

The Stack is a MUCH Deeper Concept than “GOSUB”

Filed under: functional-programming — charlieflowers @ 11:21 am


(I just had an AHA! moment. I’m excited about it. It’s valuable and I want to share it. Warning: I USE CAPS a lot when I’m having an aha moment. It’s the most direct way to capture my enthusiasm on the page, and I like it. If you don’t, this post is not for you. OTOH, if you appreciate a powerful insight communicated enthusiastically, then read on!)

I learned BASIC before any other language. So I’ve always thought the stack was “what it takes to allow for gosub.” And ever since then, as I learned many different languages, there have always been features that use the stack (mostly, function calls).

When I encountered functional programming, I thought, “Interesting, these folks decided to see what would happen if you rely on the stack for everything.” They got rid of mutable state. When they wanted to change a variable, they instead created a new stack frame with the new value. (And don’t worry about Tail Call Optimization … it’s an optimization, not a concept).
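Here’s a tiny sketch of what I mean (JavaScript, and a made-up example of mine, not anyone’s canonical formulation):

```javascript
// Imperative: one variable, mutated in place on every step.
function sumMutable(numbers) {
  var total = 0;
  for (var i = 0; i < numbers.length; i++) {
    total = total + numbers[i];
  }
  return total;
}

// Functional: nothing is ever reassigned. Each "change" to the
// accumulator is a brand-new call -- a new stack frame carrying
// the new value.
function sumRecursive(numbers, acc) {
  if (numbers.length === 0) return acc;
  return sumRecursive(numbers.slice(1), acc + numbers[0]);
}

sumRecursive([1, 2, 3, 4], 0); // 10, with no mutation anywhere
```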

I like functional programming and see the benefits of avoiding mutation. But I always admired the “hacky creativity” of the inventors of functional programming in that they were using the stack to achieve things it was not originally meant for. How resourceful!


I was wrong, because functional programming existed LOOONG BEFORE structured programming! Structured programming (which GOSUB is a big part of) came about in the 1960’s, partly propelled by Dijkstra’s famous “GOTO Considered Harmful” paper. But functional programming WAAAY predates it, going back AT LEAST as early as Alonzo Church and Lambda Calculus in the 1930’s! Yes, the THIRTIES.

Functional programming relies on the stack because Alonzo Church figured out a way of REASONING, BEFORE THE FIRST COMPUTER WAS EVER INVENTED! The way of reasoning that Church created relies on step-by-step transformation of an initial “formula” into an “answer” using concepts of “function application” and “substitution”.
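To make that concrete, here’s a worked (made-up) example of reasoning by substitution, traced in JavaScript terms:

```javascript
// Church-style reduction is just repeated substitution: replace a
// function application with the function's body, with the argument
// substituted in.
var addOne = function (x) { return x + 1; };
var square = function (x) { return x * x; };

// square(addOne(2))
//   => square(2 + 1)   (substitute 2 for x in addOne's body)
//   => square(3)
//   => 3 * 3           (substitute 3 for x in square's body)
//   => 9
var answer = square(addOne(2)); // 9
```

Each step is a pure rewrite of the formula; no state anywhere ever changes.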

He did not invent it in order to take advantage of the stack (kind of like how we build parsers based on recursive descent to take clever advantage of the stack). THERE WAS NO STACK YET! THERE WAS NO COMPUTER YET!

He invented it because it was a POWERFUL, PRECISE WAY OF THINKING. It fell under the umbrella of FORMAL MATHEMATICS, which is really just a collection of powerful, precise ways of thinking that help people reason out certain difficult problems.

He had never seen spaghetti code, or multiple threads clobbering a mutable variable. He was free to reason through the problems however he wanted to. But he came to prefer an approach that he felt was PARTICULARLY POWERFUL, which involved a “stack-like” series of transformations (aka, “function application”). It wasn’t a compromise to him. It was empowering.

When you come to functional programming via structured/imperative programming (as I did and as probably most programmers did), it’s easy to feel like the claim is, “If you stop using some of the core features you’re used to, you’ll be better off.” If you’re open-minded, you’re willing to try it.

But the real claim is much stronger: “This is a more powerful way to think. You will be able to handle more complexity, and reach solutions faster, and maintain them more easily, because this is a more powerful way to think.” And it was this misguided notion of where the idea for a stack came from that was blocking me from seeing that.

(Edit: By the way, I was working through “The Little Schemer” in Clojure when this insight hit me.)


June 23, 2011

Analogies for 1000 Mediocre Programmers vs. 5 Great Programmers

Filed under: C# — charlieflowers @ 5:30 pm

Recently, there was an article and a corresponding Hacker News discussion on whether you’d rather have 1000 mediocre programmers, or 5 great programmers. To me, the choice is so blindingly obvious that I can’t believe there’s a debate.

To help illustrate why, here are some fun (and pretty darn accurate) analogies:

1. You’re on trial, facing the death penalty, for a crime you didn’t commit, but with a lot of circumstantial evidence that makes you look guilty. You are given a choice — have a team of 1000 mediocre lawyers, or a team of only 5 great lawyers.

2. Would you rather buy a Sports Illustrated Swimsuit edition with 5 incredibly gorgeous models, or one with 1000 mediocre models?

3. You’re in a chess tournament. You don’t get to choose your moves directly … rather, you must select a team to choose your moves. All moves must be made in 30 minutes or less, and your team must reach consensus on each move. You may have either 5 brilliant chess players, or 1000 mediocre ones.

4. You’re going on an 18-hour road trip. You must drive straight through, with no more than four 30-minute breaks. You must listen to music from an iPod the entire way. Do you want the iPod filled with music from 5 distinct, excellent musicians, or do you want it filled with music from 1000 distinct, mediocre musicians?

5. You’re going to read a novel that is over 1500 pages and then take an extensive reading comprehension quiz. Do you want the novel that is written by a collaboration of 1000 mediocre authors? Or would you prefer the novel written by a collaboration of 5 excellent authors?

6. Your wife is in the hospital, giving birth, but there are complications. You can’t be there for some strange reason, and you’re a nervous wreck. Which would you rather hear from the hospital:
a. “Unfortunately, sir, we have only mediocre doctors available right now. However, don’t worry, we have *1000* of them on the case!”
b. “Don’t worry sir, we have 5 outstanding doctors on her case.”

Think about it! (And let me hear your analogies too!)

March 9, 2011

Joy and Software Development

Filed under: C# — charlieflowers @ 2:44 am

I was just hit by a Deep Thought.

I was thinking back to the way I used to think about programming when I was in high school and college. Everywhere I looked, I could envision a new software creation that would make things better for people. I’d stop at a convenience store for a Coke, and I’d think, “Wow, a software system could automate their cash register, track their sales, and know when to order more inventory.” (That was before every convenience store had such a system, and in fact, I wrote one and sold it to a few).

I’d look at movie theaters, restaurants, and bookstores the same way. I imagined a little computer you could carry around in your pocket that could contain all your favorite songs, letting you hear them whenever you want (waaaaaay before Apple finally did it right and made it pervasive).

The point I want to focus on is this: Every time such a “vision” comes, it comes with one overwhelming, predominant emotion — Joy. You are seeing an opportunity to create something new that will solve problems and make things better, and just seeing that is *thrilling*. Excitement, passion, joy, borderline euphoria.

My Deep Thought is that Software Development, in its essence, is built upon and inseparable from this Joy. It is the joy of seeing in your mind that new thing which could exist that leads you to decide that the thing damn well should exist and further, leads you to decide that, so help you God, that thing is going to exist — and that drives software development.

I’m saying the two cannot be separated. Is that really true?

Imagine a great painter, who can create works that inspire and move people. Now, put him on a 20-hour-a-day schedule and *force* him to produce new paintings non-stop. Could you rationally expect the same kind of results? No, because you have removed the joy from his creative process and in essence turned him into a machine. And for all Watson can do on Jeopardy, I don’t think he can make world-class paintings.

Can you take a great writer of love songs and demand that she produce 30 new songs in the next week? Yes, but the songs will suck. I realize my logic here is not air-tight. I’m not exactly proving my case, but hopefully I’m clumsily articulating it, which is enough for now.

It comes down to this: Certain human acts of creation are driven by Joy. If you separate the creation process from the Joy, what you get is only a shadow of what could have been. And Software Development is one of those acts.

Yet, most Software Development jobs are for corporations, and most such jobs don’t allow any place for Joy. Why? Not for any pre-meditated, evil reasons. Simply because a corporation’s role is to continually increase its revenues and profits. A corporation is not human and cannot experience joy. But it does demand relentlessly improving quarterly numbers.

But the corporation cannot get the software it wants effectively and efficiently if it inhibits the engine that produces that software. And that engine is joy. There’s no way around it. Some corporations get this, some don’t. But it remains a fact of life.

Hell, sometimes, while working in the salt mines of some joy-sucking corporation, I have forgotten it myself. I have operated as a machine for some pretty long stretches. But I’m glad I remember it now, and I’m going to try to keep it ever-present in my mind. I’ll have a hell of a lot more fun, and it’s going to allow me to produce more software, faster, and that software will be a better fit for its users, and it will make things better for the humans who work with it.

February 11, 2011

An improvement to the Javascript Module pattern

Filed under: C# — charlieflowers @ 2:59 pm

Wow, a nice idea just hit me while I was in the shower. It is an improvement to the Javascript Module pattern. (It’s not earthshaking, but it offers a nice additional bit of protection).

The general idea of the module pattern is that you take some script that would otherwise execute in global scope and wrap it in a function, which you then execute immediately, like this:

(function(GLOBAL) {
    var x = 42;
    // more code here
})(window);

Note that right after the function has been declared, we have followed it with (), which causes it to execute. This gives you a scope within which you can put your code. You can return objects out of here that contain closures, and hence create effects such as private internal state.

It’s fantastic, and though it can be overused (like any good thing), it should be used a lot. You’ll see it used often in the source code of jQuery, JavascriptMVC, and many other well-factored JavaScript code bases.
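For instance, here’s a made-up counter module whose state is private because it lives only in the module function’s closure:

```javascript
var counter = (function () {
  var count = 0; // private: nothing outside the module can see this

  return {
    increment: function () { count = count + 1; return count; },
    current: function () { return count; }
  };
})();

counter.increment();
counter.increment();
counter.current(); // 2, and `count` itself is unreachable from outside
```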

The Improvement: Use call() to control the “this” pointer

So here’s the improvement. We’re going to take control of the “this” pointer inside our module (aka, the “context”), and make sure that it is not the global scope. We can still pass the global scope in to our module, so that we can have global effects if we want to. But this way, all our global effects will be made obvious. Plus, there are some common mistakes that can accidentally pollute the scope pointed to by your “this” pointer. By making sure our “this” pointer is not the global scope, we prevent those mistakes from vomiting directly into the global scope.

Here’s what we do:

(function(GLOBAL) {
    var x = 42;
    // more code here
}).call({}, window);

Notice we have called the module function using “call()”, which allows us to specify the “this” pointer. And we’ve passed an empty object for the “this” pointer.
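To see what that buys us, here’s a sketch of the failure mode it guards against (non-strict code, and `leaked` is a made-up name): a stray write to “this” inside the module lands in the sacrificial empty object instead of becoming a global.

```javascript
var globalScope = this; // stand-in for `window` in a browser

(function (GLOBAL) {
  // A common slip: treating `this` as a handy module-local namespace.
  // Without .call({}), `this` here would be the global object
  // (in non-strict code), and `leaked` would become a new global.
  this.leaked = "oops";
}).call({}, globalScope);

// The stray write landed in the throwaway object instead:
typeof leaked; // "undefined"
```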

Oftentimes when you’re doing heavy JavaScript development, your particular implementation of the Module pattern is boilerplate that you roll into a snippet that automatically gets inserted whenever you create a new file. Incorporating this improvement just adds one more level of protection against some of the kinds of mistakes you’re trying to watch out for.

October 13, 2010

Hello, JavascriptMVC!

Filed under: ASP.NET MVC, Javascript, JavascriptMVC, JQuery, SproutCore — charlieflowers @ 11:37 pm

I am *very* excited about the fact that the project I’m working on has just decided to use a powerful framework called JavascriptMVC.

JavascriptMVC is a framework that helps you build and maintain robust Javascript client-side applications. It gives you Models, Views and Controllers inside of Javascript. It gives you dependency management between Javascript files. It gives you unit tests that you can run in the browser, OR from a command line in a browser-simulating Javascript environment as part of your Continuous Integration build. It gives you a lot of fantastic tools for going further with your Javascript than just adding some bling to your server-driven web pages.

I mentioned in my last post that I believe SproutCore is far and away the best framework for building interactive web apps in Javascript. That’s true. However, JavascriptMVC is also an excellent choice, and I’m very much looking forward to working with it.

SproutCore Rocks, and you will be hearing about it

Filed under: ASP.NET MVC, Javascript, JQuery, SproutCore — charlieflowers @ 11:24 pm

SproutCore is an AWESOME framework for building rich internet applications in JavaScript. In a nutshell, it’s like having Cocoa inside of JavaScript (but better) … and the world is just starting to realize that is the “Right” way to be building web apps.

I have to admit, though, it sounded crazy and weird to me 2 weeks ago. But that was before I and others on my team started to be able to articulate and understand some of the problems we were running into and some opportunities for fixing them (as I described in my last post).

Look, Javascript is now a solid, reliable, powerful language for serious development. “I know,” you reply, “and that’s why I’m using JQuery and Ajax.” Right, that’s “level 1” awareness that Javascript is now a real, reliable language. At that level, your MO is to build the same server-driven web apps we’ve been building (in Rails, ASP.NET, or whatever), but to sprinkle in some JavaScript for some “bling”.

But after you do that for a while, you may find yourself staring at “level 2” awareness that Javascript is a real, reliable language. That’s when you say, “Hey, why don’t we build a full-blown dynamic GUI in JavaScript, that is in control of its own “flow”, that *can* choose to pull some data from a server, or send some data to a server, if and when it wants to. After all, JavaScript is a *real* language suitable for real development.”

This has been dubbed the “thin server architecture”. Your server sends and receives JSON, and does not *at all* involve itself with html or any presentation concerns. And your client doesn’t even have to follow a “page” model (although you’re likely to be running in a browser and want the back button to feel normal, so you might “map onto” a page model). Your client can be rich, and can pull great ideas from GUI toolkits such as Swing, AWT, Cocoa, etc.
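A minimal sketch of the thin-server idea (the JSON shape and function names here are invented for illustration): the server sends only data, and the client owns the presentation.

```javascript
// Hypothetical JSON the thin server might send -- data only, no HTML:
var payload = { references: [{ name: "Ada" }, { name: "Grace" }] };

// The client turns JSON into markup: a pure function, easy to test.
// (A real app would then hand this string to the DOM, e.g. via jQuery.)
function renderReferences(data) {
  var items = [];
  for (var i = 0; i < data.references.length; i++) {
    items.push("<li>" + data.references[i].name + "</li>");
  }
  return "<ul>" + items.join("") + "</ul>";
}
```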

And I really believe the best framework out there for doing this is SproutCore. It severely lacks documentation, but that doesn’t matter as much as you might think, because the source code for it is absolutely beautiful. It is elegant, succinct, and well-factored. That plus some of the excellent tutorials out there really put a tool of enormous power in your hands. Once you make an initial investment in deeply understanding SproutCore, I really believe you gain the kind of power that lets one person do the amount of work that previously required 10 people.

Check it out. Unfortunately, my current project just decided not to use it. It was just too weird to them. There were a lot of developers on the team who balked at learning something so “different”. What they couldn’t see was that the tool is so powerful that only 1 or 2 people would need to learn it, and our UI would simply become a “solved problem.” Oh well, I can understand … the world hasn’t yet caught up to the “level 2” awareness that I mentioned above, and so SproutCore seems weird.

But you wait 6-12 months, and I bet you everybody and their brother will have heard of SproutCore, and more than just the “cool kids” will be using it. It is going to leave its mark on the web development world.

October 6, 2010

Going Beyond ASP.NET MVC and JQuery

Filed under: ASP.NET MVC, C#, Javascript, JQuery, SproutCore — charlieflowers @ 11:14 pm

I’ve been having a blast for the past year building an app in ASP.NET MVC (I’ll call it “MVC” through the rest of this post) and JQuery. And I’m very fond of both.

But an interesting thing happens when you build something complex with these 2 technologies. You’re very likely to find yourself facing a conundrum regarding how far to go with Javascript.

Here’s how it happened for us…. We started out trying to keep our logic on the server as much as possible. We can utilize the full power of C# and the .NET framework there. Even our controllers are unit testable, thanks to MVC and IoC. We have mature tools and patterns for unit testing that code. We can refactor it with Resharper. Etc, etc.

But of course, the beauty of JQuery and MVC in tandem is that you can make your web apps more interactive and responsive. So of course, we did some JavaScript on the client. Matter of fact, we didn’t shy away from any UI request our business users had. “You want a grid that lists Personal References, and the ability to select one from the list and Edit it? You want to be able to add new and delete from the list? Cool, we can do that without a single postback.”

And we have done that. And it is nice. A responsive web app with some nice usability features. But here’s where the conundrum comes in.

To make it nicer, snappier, more responsive and even more usable, we’d like to add more JavaScript. And we’d like to use Javascript in more places than we do currently. Sometimes, we wonder why we’re even building HTML on the server … why not return JSON from the server, and have some Javascript code on the client generate a DOM from it? That would certainly make better use of bandwidth.

But … the JavaScript we have is already complicated. And when you’re building a UI with Javascript, you don’t have a lot of the benefits that you’re accustomed to in other UI technologies, such as change notifications, widgets, controllers, mature unit testing capabilities that are built in to your build process, etc. So the grid for Personal References, for example, took a while to get right. We were working at a primitive level, with individual DOM elements, click events, grid rows, etc.

The key realization was this: You either need to keep the amount and complexity of your Javascript very, very limited, or you need to really jump all in and do almost all of your UI work in Javascript. Anything in-between is a no-man’s land.

If you keep the Javascript very limited (more limited than what we needed for that Personal References screen, for example), then MVC and JQuery are enough for you to keep matters well in hand. But when you approach moderate or greater complexity, you need to make a quantum jump to a very Javascript-centric approach. And when you do that, you need Javascript Framework capabilities, for things like change notifications, controllers, widgets, unit testing, etc.
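To make “change notifications” concrete, here’s a toy observable (a sketch of the idea only, not any particular framework’s actual API):

```javascript
function observable(initial) {
  var value = initial;
  var listeners = [];
  return {
    get: function () { return value; },
    set: function (next) {
      value = next;
      // Notify every subscriber -- e.g. a grid row re-rendering
      // itself -- instead of UI code polling for changes.
      for (var i = 0; i < listeners.length; i++) listeners[i](next);
    },
    subscribe: function (fn) { listeners.push(fn); }
  };
}

var customerName = observable("Ada");
var log = [];
customerName.subscribe(function (v) { log.push(v); });
customerName.set("Grace"); // log is now ["Grace"]
```

A framework gives you this (and widgets, controllers, testing) so you stop hand-wiring individual DOM elements and click events for every screen.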

At that point, you have grown beyond just ASP.NET MVC and JQuery. Of course, both of those may still play a role in your solution, but you need more. It’s at this point that you should consider things like SproutCore, Cappuccino, JavascriptMVC, etc.

Which are some of the things we’re thinking about now.

May 24, 2010

Your most limited resource

Filed under: C# — charlieflowers @ 12:58 am

The most limited resource you have is not money, it is time. One Year of your life is a HUGE piece of your life. If life expectancy is around 70, and you don’t really get full control of your life until you’re around 20, then you have 50 years of life that you can direct as you will. One year is a whopping 2% of that. So wasting even a year being stuck in a crappy job or otherwise undesirable situation is a bad bad deal, even if you’re getting paid big money. You need to be doing whatever it is that you really want to be doing, whatever it is that you’re innately wired for and drawn to … and you need to be doing it right now. If not, then you need to be on a road that will get you there, and that road needs to have a realistic chance of getting you there very soon. Because remember, 50 years is the optimistic number. You might only have 5, or 1. Stop chasing the dollar or whatever else leads you astray, and start doing whatever it is you are “meant” to be doing. ASAP.

March 21, 2010

Some nice Nullable extensions

Filed under: C# — charlieflowers @ 7:11 am

Hey, I just banged out something nice and thought I’d share it. Some convenient extensions on Nullable.

I ran into a case where I have an object with a property of type Nullable&lt;DateTime&gt; (aka DateTime?). If the value is not null, I want to call ToShortDateString() on it. But if it *is* null, then I merely want to return an empty string.

It was really a pain in the ass to do it before the extension methods, because it looked something like this:

string x;

if (theObject.TheDateProperty.HasValue)
   x = theObject.TheDateProperty.Value.ToShortDateString();
else
   x = string.Empty;

With the extension methods I wrote, it can now be much nicer:

string x = theObject.TheDateProperty.Safe(d => d.ToShortDateString());

I love it when my language comes through for me. Anders is the man.

Here are the extension methods

public static class NullableExtensions
{
    public static TReturn Safe<TType, TReturn>(this Nullable<TType> nullableValue, TReturn defaultValue, Func<TType, TReturn> func) where TType : struct
    {
        if (!nullableValue.HasValue) return defaultValue;

        return func(nullableValue.Value);
    }

    public static TReturn Safe<TType, TReturn>(this Nullable<TType> nullableValue, Func<TType, TReturn> func) where TType : struct
    {
        return nullableValue.Safe<TType, TReturn>(default(TReturn), func);
    }

    public static string Safe<TType>(this Nullable<TType> nullableValue, Func<TType, string> func) where TType : struct
    {
        return Safe<TType, string>(nullableValue, string.Empty, func);
    }
}

Notice the specific one for a return type of String, that lets you avoid explicitly stating that the default should be “” all the time. I could definitely envision other specific Safe() methods being added for other types down the road.

February 27, 2010

The CQRS Light Bulb Moment

Filed under: ASP.NET MVC, C#, CQRS, Domain Driven Design (DDD), nhibernate, nServiceBus, OLAP — charlieflowers @ 11:17 pm

As I recently blogged, the project I’m on has recently decided to move to CQRS (Command Query Responsibility Segregation). We’re going to use nServiceBus as a message bus that lets us separate our system into 2 “channels”: the “Read side” and the “Write side” (aka, the “Query side” and the “Command side”).

This decision has been the result of several “Light Bulb Moments”, in which various members of the team had a flash of insight that helped us see how an architecture that at first sounded weird and unorthodox would actually solve a number of problems and help us tremendously.

I’ve decided to share here one of those Light Bulb Moments in raw form. Here’s the text of an email I sent to two other architects on our team (over the weekend, from my own account … we talk about this stuff all the time because we love it). It expresses well many of the reasons we made the move (although I understand more about CQRS at this point and would tweak a few details). (Note: Names changed to protect the guilty).

The Email…


I’m seeing the opportunity to do something truly awesome here. It is based on the CQS reading I’ve been doing while thinking about what our “dto’s” or “commands” or etc. should look like.

I have created, worked with, and seen first hand the power of an OLAP database for read operations. It really is unbelievable in terms of the freedom it gives someone looking at the data. And it lets reads be very fast. But a lot of projects I’ve been on have said, “Let’s build the transactional system first. It is so obviously core to our business that we need it, and we need it yesterday. Once we get that done, we can think about maybe doing OLAP.”

But the way people are approaching CQS as an architectural concept these days, we have the opportunity to do both at the same time. It should help us get to the finish line faster, with screaming-fast software and high scalability.

And it’s not that big of a change from what we’re doing now. It boils down to this:

1. We make the “flat view models” you guys are working on. They are designed to serve the view that they populate, and nothing else.

2. We express our edits to the domain in terms of “Commands”. These are merely Declarative … you look at one and it intuitively makes sense.

3. Our Domain Objects accept those Commands and process them. Our Domain Objects apply rules to decide whether or not a Command is valid. The Domain Objects have complete authority over accepting or rejecting an Edit Command.

4. Once the Edit Command is accepted by the Domain Objects, it is “applied”.

Now, right now, you’re both saying, “No shit, that’s what I said on Friday.” Yes, but let’s take stock of where this puts us, and see what else it allows us to do.

5. Since those “flat view models” don’t enforce any important business rules, they don’t have to come from our Domain Objects. (They can STILL come from NHibernate if that’s important or helpful, but they don’t have to come from our Domain Objects). Remember, our Domain Objects are in charge of *writing* all updates. Therefore, the written data can include calculated fields and anything else necessary to ensure that what comes back in on the read side is valid and complete. Complete domain integrity is maintained by the Domain Objects, so Reading is simplified. Needing a bunch of business logic on read has some challenges to it, plus I don’t think we have very many (maybe not any) kinds of calculated fields that would really require a full domain object.

6. We *are* still talking about NHibernate pulling the data that ultimately goes into our View Models. So there are probably some “DTO’s” that are *also* mapped to the same NHibernate tables that our Domain Objects write to. But those DTO’s can be “screen-shaped” (more accurately, “task-shaped”, since we want to include web services and other users of our system besides just the web-based human interface).

7. Now, the domain no longer needs many (possibly any) getters or setters.

8. Every single Edit Command can cause 2 things to happen: 1) our normalized OLTP database can get updated by our Domain Objects with the new data. 2) The very same Edit Command can get queued somewhere else to cause an update to our OLAP database for read access. We can essentially get an OLAP database that doesn’t need ETL … it gets updated from our Edit Commands and only lags a few seconds behind our OLTP database.

9. The Edit Commands also make it easy for us to have *MANY* copies of the readable OLAP database. We can update 3 databases as easily as one. Now we can load balance between them, and they’re equivalent.

10. We don’t actually need the fix Billy added to submit disabled controls. After all, we *know* those values didn’t change. Why should we need them on a Post? Our Edit Commands can be as sparse as what the user actually changed. (This is a minor thing, but still worth mentioning).

11. Here’s one of the main benefits of the whole thing: Once we get to this point, when we make a new screen, we make a DTO for it and a View Model for it. *Both* are custom designed to fit the screen itself. They will be coupled to the purposes of that screen because they need to be … this is good coupling. However, that screen will not exert *any pressure whatsoever* on our Domain Model. Our Domain Model will simply be exactly what it needs to be to express the logic of the domain. Think about how much easier things will be for us than they are right now. Now, we have to have a Domain Object Graph, and a parallel DTO object graph, and (soon) a View Model that gets mapped from the DTO object graph. Keeping the parallel Domain Object and DTO in sync has proven to be something we invest a lot of time in. They were drifting apart before I added the ApplyEdits() stuff. I then added the Interfaces that sometimes have 4 or 5 generic types riding along. Sam went further with it and has cases with 8 or 12 generic types, including multiple levels of nesting. We’re working too hard here. *IF you LOVE THE SMELL OF DELETED BITS IN THE MORNING*, then you are going to get enjoyment out of moving to an approach like this.
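The command flow in steps 2 through 4 and step 8 could be sketched like this (JavaScript purely for brevity — every name below is invented, and the real thing rides on C# and nServiceBus):

```javascript
// Step 2: an Edit Command -- declarative, reads intuitively.
var command = {
  type: "ChangeCustomerAddress",
  customerId: 42,
  newAddress: "12 Elm St"
};

// Steps 3-4: the domain has sole authority to accept or reject; once
// accepted, the command is applied to the write side AND queued for
// the read side (step 8), so OLAP lags OLTP by only seconds.
function handle(domain, cmd, oltpWrites, olapQueue) {
  if (!domain.isValidAddress(cmd.newAddress)) {
    return { accepted: false };
  }
  oltpWrites.push(cmd);
  olapQueue.push(cmd);
  return { accepted: true };
}

var oltpWrites = [];
var olapQueue = [];
var domain = { isValidAddress: function (a) { return a.length > 0; } };
var result = handle(domain, command, oltpWrites, olapQueue);
// Both sides saw the exact same command -- no ETL needed.
```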

Normally, doing something to fix problems you are having requires some extra work you hadn’t anticipated, and is a bit of a setback, though necessary. In this case, the fix for some of the problems we’re running into actually opens up whole new vistas of possibility, and these opportunities basically come for free after applying the correct fix.
