The Second System Effect
More than 40 years ago, in his influential book *The Mythical Man-Month*, Frederick Brooks described what he called the second-system effect:
The second-system effect proposes that, when an architect designs a second system, it is the most dangerous system they will ever design, because they will tend to incorporate all of the additions they originally did not add to the first system due to inherent time constraints. Thus, when embarking on a second system, an engineer should be mindful that they are susceptible to over-engineering it.
Unfortunately, I learned this lesson the hard way, and I keep seeing developers repeat my mistake again and again!
So why do developers tend to reinvent every wheel they work on?
Mess and Myth
Joel Spolsky, one of the most influential figures in the industry (and, by the way, the co-founder of Stack Overflow and CEO of Stack Exchange), answered this question in one of his most popular articles:
There's a subtle reason that programmers always want to throw away the code and start over. The reason is that they think the old code is a mess. And here is the interesting observation: they are probably wrong. The reason that they think the old code is a mess is because of a cardinal, fundamental law of programming:
It’s harder to read code than to write it.
Legacy code may contain messy sections, lengthy methods, and so many conditionals that you can't figure out why on earth they are there. But the answer is very simple: they are bug fixes accumulated over years! When you rewrite the application from scratch, you throw away all the knowledge and experience embedded in that code.
So why is the belief that rewriting messy code from scratch will lead to a better application simply a myth?
Let's continue with Spolsky...
When programmers say that their code is a holy mess (as they always do), there are three kinds of things that are wrong with it.
- First, there are architectural problems. The code is not factored correctly. The networking code is popping up its own dialog boxes from the middle of nowhere; this should have been handled in the UI code. These problems can be solved, one at a time, by carefully moving code, refactoring, changing interfaces. They can be done by one programmer working carefully and checking in his changes all at once, so that nobody else is disrupted. Even fairly major architectural changes can be done without throwing away the code. On the Juno project we spent several months rearchitecting at one point: just moving things around, cleaning them up, creating base classes that made sense, and creating sharp interfaces between the modules. But we did it carefully, with our existing code base, and we didn't introduce new bugs or throw away working code.
- A second reason programmers think that their code is a mess is that it is inefficient. The rendering code in Netscape was rumored to be slow. But this only affects a small part of the project, which you can optimize or even rewrite. You don't have to rewrite the whole thing. When optimizing for speed, 1% of the work gets you 99% of the bang.
- Third, the code may be doggone ugly. One project I worked on actually had a data type called a FuckedString. Another project had started out using the convention of starting member variables with an underscore, but later switched to the more standard "m_". So half the functions started with "_" and half with "m_", which looked ugly. Frankly, this is the kind of thing you solve in five minutes with a macro in Emacs, not by starting from scratch.
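Spolsky's last point — the mixed `_`/`m_` naming — is exactly the kind of cleanup that takes a mechanical pass, not a rewrite. Here is a minimal sketch of that idea (the `Account` class and its fields are hypothetical, and a Python regex stands in for his Emacs macro):

```python
import re

# Hypothetical legacy snippet: half the members use the old "_" prefix,
# half use the newer "m_" convention Spolsky mentions.
code = """
class Account:
    def __init__(self):
        self._balance = 0      # old convention
        self.m_owner = "none"  # new convention
"""

# Rename "self._name" to "self.m_name". The lookahead skips dunder
# names; a real pass would also need to skip string literals.
cleaned = re.sub(r"self\._(?!_)(\w+)", r"self.m_\1", code)
print(cleaned)
```

A five-minute mechanical fix like this removes the "ugly" signal without discarding any of the bug fixes the surrounding code contains.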
Resume Driven Development
I have noticed another hidden reason why developers tend to rewrite working software: they want to try a new technology so they can list it on their resumes, even if it is not mature enough for production code!
This behavior becomes even more dangerous when they try these immature technologies inside working software, which is one of the main causes of software failures!
This post is aimed mainly at rewriting working/legacy applications from scratch. If an application works, you shouldn't replace it; instead, refactor the painful parts gradually. Sure, there are cases where rewriting a legacy application is the correct decision, but most of the time it is not, and it will cost you a lot.
That said, there are some exceptional cases where reinventing the wheel is worth doing (as long as it does not mean throwing away a working application):
- You may reinvent parts of a working application, but not the whole application.
- You may follow the DIY (Do It Yourself) principle when alternatives are not feasible; for example, if your application depends on an expensive library your budget can't afford, then you can build it yourself.
- When you are starting a new application (not refactoring an existing one)
- When you want to learn something new.
Jeff Atwood, the other co-founder of Stack Overflow, wrote a nice article about such exceptional cases; check it out!