Removing Defects From Django Apps

I'm working on a set of Django apps right now. Here are some of the things I'm doing to filter out defects as I work:

Design Review

I do a handwritten (paper) design. I like to design offline because it forces me to do two things: first, to slow down and get it right; second, to stay at about the right level of abstraction instead of drilling down into too much detail. When I'm done with the paper design, I review it against a checklist that I build from mistakes found in downstream phases. My design reviews catch about 70% of the mistakes I make while designing. I'd like that to be higher, but it is definitely worth the time. This isn't specific to Django or Python; I do this for all projects.

Written Test Plan

After the design review is done, I write an informal unit test plan on paper, based on the design. This is only possible when the design is at a low enough level that you can see the general flow of control through each method. I aim for full branch coverage in my unit tests. That doesn't mean the code will be "fully tested", but at least I'll have exercised most of it. I usually find (and fix) a couple of defects while writing the test plan. This is also something I do for all projects.
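As a contrived illustration (this function and its tests are hypothetical, just to show the idea), full branch coverage means the plan has at least one test case for each outcome of every branch:

import unittest

def discount(total, is_member):
    # Two outcomes to cover: the discount path and the fall-through.
    if is_member and total > 100:
        return total * 0.9
    return total

class DiscountBranchTests(unittest.TestCase):
    def test_member_over_threshold_gets_discount(self):
        self.assertEqual(discount(200, True), 180.0)

    def test_non_member_pays_full_price(self):
        self.assertEqual(discount(200, False), 200)

Both branches are exercised even though plenty of input combinations (a member with a small total, say) go untested. That's the sense in which branch coverage exercises most of the code without "fully testing" it.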

Code Review

After I'm done coding, and before I run any of the code, I print it out and review it offline. The following bash function is helpful (it prints the code 2-up):

# usage: py2pdf OUTPUTFILE INPUTFILE(s)
py2pdf ()
{
    out="$1";
    if [ -f "$out" ]; then
        # Refuse to clobber an existing output file.
        echo "$out exists, aborting";
    else
        shift;
        echo "$@";
        # Pretty-print 2-up with a2ps, then convert to PDF.
        a2ps -A fill -E -2 -C -o - "$@" | ps2pdf - > "$out";
    fi
}

It's controversial, but I like the advice from Watts Humphrey in the PSP: review code before compiling (or in this case, before running any unit tests, since there isn't really a compile phase). It forces me to be much more thorough in looking for defects, since I know that Python hasn't filtered anything out yet.

To review the code, I use a checklist built the same way as the design review checklist. I also use the test plan, walking through the code with each test case to see if there are any defects that testing will expose. It's better to fix these now instead of waiting for the unit test to find them, for two reasons. First, it's faster, because you can see exactly what the problem is. Second, you may notice something that the unit test won't catch; maybe the test doesn't include the right data pattern to trigger the defect. Code review is a generic defect removal practice.

Unit Test - XHTML Validation

Again, this seems to be controversial, though I don't really understand why. Some people don't think valid XHTML is worth much. I validate because I want to make sure that my generated content isn't horribly broken, and, just like compiler warnings, you either validate cleanly or you don't validate at all. As soon as you start ignoring "a couple of silly errors", everything goes downhill.

I wrote an XHTML validator middleware for Django, but I haven't yet used it on this project. Why not? Well, I was trying to use Dojo as a JavaScript library, and it's not possible (or at least not easy) to write valid XHTML while using Dojo widgets. So I had to write some special code in my test harness to strip out all of the Dojo garbage and then validate the result.

I'm using jQuery now and I'm much happier; I suspect I'll be able to validate using the middleware on my next iteration (which involves ripping out the Dojo false starts).

Validation is generic for any web project. The middleware is specific to Django.
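For the curious, here is a minimal sketch of the middleware idea. This is not my actual middleware: it only checks well-formedness with ElementTree (a real validator checks against the XHTML DTD), and it assumes the old-style process_response() middleware hook:

from xml.etree import ElementTree

class XHTMLWellFormednessMiddleware(object):
    """Fail loudly when a generated page is not even well-formed."""

    def process_response(self, request, response):
        if response['Content-Type'].startswith(
                ('application/xhtml+xml', 'text/html')):
            try:
                ElementTree.fromstring(response.content)
            except Exception as err:
                # The parse-error type varies across Python versions,
                # so catch broadly and re-raise with context.
                raise AssertionError('Not well-formed: %s' % err)
        return response

Hooking something like this into settings.MIDDLEWARE_CLASSES during test runs means every page the test suite touches gets checked for free.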

Unit Test - Test Plan

I implement the written test plan using PyUnit (unittest), Django's test extensions built on top of it, and Selenium for browser-based testing.

My test plan usually generates a lot of test cases (a recent app had over 100 test cases for about 800 lines of Python, 700 of JavaScript, and 200 of XHTML), so I keep them in separate files in a "tests" directory in each app.

One trick I use is to implement a suite() function in my APP/tests/__init__.py. The suite function globs "APP_tc_*.py" and sorts the files by timestamp, most recent first. This way, if a test is failing, I can make sure it runs first when I fix the problem and rerun the suite. The other part of the trick is an optional argument to the globbing/sorting function that limits the number of test cases returned. So when I need to rerun a test a few times to fix a tricky problem, I don't have to wait for the entire suite to run to get the results of the test-under-development. This is all highly specific to Django development.
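Here is a rough sketch of that suite() function. The APP naming is a placeholder and the details differ from my real implementation, but the shape is right:

import glob
import os
import unittest

def suite(limit=None):
    """Build a suite from APP_tc_*.py files, most recently modified first.

    limit, if given, caps how many test case files are included, so a
    test-under-development can be rerun without waiting on the rest.
    """
    here = os.path.dirname(__file__)
    paths = glob.glob(os.path.join(here, 'APP_tc_*.py'))
    # Most recently modified first: a failing test that was just
    # edited runs at the front of the suite.
    paths.sort(key=os.path.getmtime, reverse=True)
    if limit is not None:
        paths = paths[:limit]

    loader = unittest.TestLoader()
    result = unittest.TestSuite()
    for path in paths:
        name = os.path.splitext(os.path.basename(path))[0]
        module = __import__('APP.tests.' + name, fromlist=[name])
        result.addTests(loader.loadTestsFromModule(module))
    return result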

Unit Test - Pylint

This is still experimental for me. Pylint generates a lot of noise. Most of what it reports is legitimate, but about 90% of it is minor (i.e. not user-impacting) defects. I've had to spend a fair amount of time tuning the report parameters to stifle the noise. The most time-consuming part so far has been filtering out results from the test code. I'd like to figure out a way to make Pylint skip the test files, but there doesn't seem to be a setting for this. On the bright side, it nagged me into making some of my test code more maintainable. Obviously Pylint is specific to Python; I use other lint-like tools for other projects when the signal-to-noise ratio is high enough.
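One workaround (sketched here, with APP as a placeholder for the app directory) is to pass Pylint an explicit file list that skips the tests directory, rather than pointing it at the whole package:

import os
import subprocess

sources = []
for root, dirs, files in os.walk('APP'):
    if 'tests' in dirs:
        dirs.remove('tests')  # don't descend into the tests directory
    sources.extend(os.path.join(root, f)
                   for f in files if f.endswith('.py'))

subprocess.call(['pylint'] + sources)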

Unit Test - JSLint

I'm a JavaScript novice, so JSLint has been helpful in ferreting out problems that I would not have caught through testing. As the feedback from the tool and from testing builds better design and coding habits, the signal-to-noise ratio may drop and this step may become less effective.

Posted on 2009-01-02 by brian in process.

Comments

You may be interested in http://chris-lamb.co.uk/projects/django-lint/

lamby
2009-03-01 01:59:57

Thanks, I'll check it out.

Brian St. Pierre
2009-03-01 19:48:17