Here are five reasons you should wait before moving on to the next phase of your software development process:
- Get the requirements right. It's almost a cliché to say that requirements errors cost 10x or more to fix during coding or testing, but it's true, so I'll risk repeating it here. Spend a little more time getting the requirements right, and you'll get every second back, with interest, during the testing phase. The risk here is overanalysis, but if you tend to be the impatient type -- jumping into design or coding with very thin requirements -- then you're unlikely to suffer paralysis.
- Get the design right. It doesn't have to be an elaborate 500-page document. One of the best designs for a complex system that I've ever seen was a state machine described in a two-page document with a big matrix on the first page. Remember, a written design is a communication tool. Just like the rest of your software, it can be the simplest thing that will work. Marker on a whiteboard works, if your team decides that's an effective way to communicate. An extra hour spent preparing and reviewing the design will save at least two hours in testing. Really. I've seen too many designs that were rushed (and clearly broken upon review) because of an inexplicable urge to get a document into review so that coding can begin. The state machine I just mentioned went through probably 20 man-hours of reviews in maybe three passes, and we must have found at least 50 defects in it. Those defects took a few seconds each to fix (some may have taken a few minutes), but if there had been code and tests based on this broken design, some of them would have taken hours to fix.
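To make the matrix idea concrete, here's a minimal sketch of a table-driven state machine in Python. The states, events, and transitions are hypothetical -- they're not from the design described above -- but the shape is the same: the whole behavior fits in one small table that a reviewer can take in at a glance.

```python
# Hypothetical states and events, for illustration only.
# The entire behavior is one reviewable table: (state, event) -> next state.
TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "stop"): "idle",
    ("paused", "stop"): "idle",
}

def next_state(state, event):
    """Look up the next state; stay put on an unrecognized event."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["start", "pause", "start", "stop"]:
    state = next_state(state, event)
print(state)  # -> idle
```

A reviewer can walk the table row by row and ask "is this transition right?" -- which is exactly the kind of review that finds dozens of cheap-to-fix defects before any code exists.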
- Get the code right. I used an Extreme Programming mantra above ("the simplest thing that will work"), but I think XP overall often leads to bad practices -- primarily an overemphasis on testing. I agree that automated regression tests are a valuable tool. However, it is important to recognize that testing has limits. It takes an immense effort to achieve full branch coverage, and even that can't be considered "complete" testing. Testing must be used in conjunction with other tools: code review and static analysis are two such tools for removing defects from code. And before your code even reaches the compiler, let alone review, take the time to consult a reference when you're unsure how to use an API. I don't know how many bugs I've fixed (including some of my own!) that were a few lines down from a /* TBD: how is this supposed to work? */ comment. Instead of writing that comment, take the time to figure out how it is supposed to work.
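As an illustration of the kind of bug that hides behind a TBD comment, here's a hypothetical Python example: guessing that `list.sort()` returns the sorted list, when it actually sorts in place and returns `None`. A minute with the reference settles it.

```python
def top_three(scores):
    # A tempting guess is `return scores.sort(reverse=True)[:3]`, but
    # list.sort() sorts in place and returns None, so slicing its
    # result raises a TypeError. sorted() returns a new list instead.
    return sorted(scores, reverse=True)[:3]

print(top_three([10, 50, 30, 20]))  # -> [50, 30, 20]
```

The buggy version would pass a compile (or, in Python, an import) just fine; only looking it up -- or a test, or a reviewer who knows the API -- catches it.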
- Get the tests right. Ever waste time on a bogus bug report? Testers telling you a feature is broken when they're just doing the wrong thing? It's not their fault if you failed to communicate how the feature is supposed to work. In the same way that your test team should be insisting on some written communication from you about the feature set, you should insist on some written communication from them about what they're going to test. Again, this "test plan" doesn't have to conform to an ISO/IEEE standard that requires it to be 1,200 pages long and six inches thick. As with the design, a whiteboard works just fine. Whatever communication method you use, if it helps head off problems, then it's working. If you're finding too many problems late in the game, then maybe you need something a little more formal -- try posting short documents to an internal wiki.
- Are you keeping score? A little bit of measurement can do great things for you. I track time, lines of code, and defects. I was reviewing one of my designs this morning and this data helped me decide: (a) how many defects I should expect to find in my design and (b) how long it should take me to find those defects. All because I have historical data that tells me how many defects I find per hour of design review, and how many defects I make per hour while designing. I also know how much testing time this is likely to save... so I don't feel any particular rush to start coding from a buggy design. I'd rather get the design right first.
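Here's a sketch of that arithmetic, with made-up rates -- substitute your own historical numbers. The point is just that two measured rates turn "how long should this review take?" from a guess into an estimate.

```python
# Hypothetical historical rates -- not real data; use your own measurements.
defects_injected_per_design_hour = 2.0  # defects I make per hour of designing
defects_found_per_review_hour = 4.0     # defects I find per hour of review

design_hours = 6.0  # time spent on the design under review

# (a) How many defects should I expect to find in this design?
expected_defects = design_hours * defects_injected_per_design_hour  # 12.0

# (b) How long should it take to find them, at the historical rate?
review_hours = expected_defects / defects_found_per_review_hour  # 3.0

print(expected_defects, review_hours)  # -> 12.0 3.0
```

If the review ends early with far fewer defects found than expected, that's a signal to keep looking rather than a reason to celebrate.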
As I mentioned above, the risk of getting into the habit of waiting is that you spend too much time analyzing the current phase instead of just getting on with it. But if you start from good requirements, it should be obvious when you're done with design and coding. And if you're keeping score, you'll know how much time to devote to reviews to keep them efficient.