Visions of Tool Integration

It's an amazing thing when your development tools work together.

In the bad old days, we thought it was cool when the editor could work in concert with source control. It blew my mind when we first had source control with integrated bug tracking. (Wow, you can see the changes that go along with each bug fix!) In some ways we've come a long way, but I still see a lot of integration that is within reach yet doesn't exist.

Look at something like Trac, which is open source and widely used. It is described as a "Project management and bug/issue tracking system. Provides an interface to Subversion and an integrated wiki." If you've seen it in action, you know it is an effective combination of the tools in that description. However, it is missing a few tools that are key pieces of an effective development process:

  • Integration with code review tools like codestriker or CodeCollaborator from Smartbear.
  • Integration with test automation.
  • Integration with static analysis tools, from lint-like tools to (at the high end) Coverity (a rough sketch of this one follows the list).
  • Integration with runtime analysis tools like valgrind.
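
Here is a minimal sketch of what the static analysis item could look like as glue code: run a lint-like checker over the files a changeset touched and attach the findings to the matching ticket. The checker name (some_lint), the tracker URL, and post_ticket_comment() are placeholders for this sketch, not any real tool's API.

    import subprocess

    TRACKER_URL = "http://tracker.example.com"  # hypothetical tracker endpoint

    def changed_files(repo, revision):
        """List the files a revision touched (svnlook is a real Subversion
        tool; the repository path is whatever yours happens to be)."""
        out = subprocess.check_output(
            ["svnlook", "changed", repo, "-r", str(revision)], text=True)
        return [line.split()[-1] for line in out.splitlines() if line.strip()]

    def run_lint(path):
        """Run a lint-like checker; 'some_lint' is a placeholder name."""
        proc = subprocess.run(["some_lint", path], capture_output=True, text=True)
        return proc.stdout

    def post_ticket_comment(ticket_id, text):
        """Placeholder for whatever the tracker exposes (XML-RPC, REST, email)."""
        print("would post to ticket %s at %s:\n%s" % (ticket_id, TRACKER_URL, text))

    def analyze_changeset(repo, revision, ticket_id):
        # Only bother linting the C sources touched by this change.
        reports = [run_lint(f) for f in changed_files(repo, revision)
                   if f.endswith((".c", ".h"))]
        findings = "\n".join(r for r in reports if r.strip())
        if findings:
            post_ticket_comment(ticket_id, "Static analysis findings:\n" + findings)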

Imagine the following scenario with all of the above features:

Alice finishes coding and testing the fix to bug 1257 and checks it in, all from within her favorite editor. The source control backend notifies the build server and a new build is launched. When the build completes, it automatically kicks off a regression test. Whoops! Alice reintroduced bug 1142 with this fix. She checks in a new patch and this time all tests pass. The backend automatically adds notes to bugs 1257 and 1142 both when the regression occurred and when it was fixed.
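
Most of that loop is glue code. As a sketch only: a Subversion post-commit hook could pull the bug numbers out of the commit message, run the build and regression suite, and push a note to each ticket with the result. The make targets and add_ticket_note() here are assumptions; in practice the hook would more likely just notify a build server (Buildbot, CruiseControl, and friends) and let it do the slow work.

    import re
    import subprocess
    import sys

    def commit_message(repo, revision):
        """Fetch the log message for a revision (svnlook is a real tool)."""
        return subprocess.check_output(
            ["svnlook", "log", repo, "-r", revision], text=True)

    def add_ticket_note(ticket_id, note):
        """Placeholder: push a note to the bug tracker however it allows."""
        print("ticket %s: %s" % (ticket_id, note))

    def main(repo, revision):
        # Find ticket references like "bug 1257" in the commit message.
        tickets = re.findall(r"bug\s+#?(\d+)", commit_message(repo, revision),
                             re.IGNORECASE)

        # Build and run the regression suite; both make targets are assumptions.
        build_ok = subprocess.call(["make", "all"]) == 0
        tests_ok = build_ok and subprocess.call(["make", "regression"]) == 0

        status = "all tests passed" if tests_ok else "build or regression FAILED"
        for ticket in tickets:
            add_ticket_note(ticket, "r%s: %s" % (revision, status))
        return 0 if tests_ok else 1

    if __name__ == "__main__":
        # Subversion calls post-commit hooks with the repository path and revision.
        sys.exit(main(sys.argv[1], sys.argv[2]))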

Bob was already selected as the reviewer for this fix within the defect tracker, so once the regression tests pass he gets an email letting him know that the patch is ready for review. He opens his browser to see the changeset diffed against the previous version. Alice added a couple of new function calls, and because the source display includes clickable cross-references he clicks through to their definitions. Whoops! Alice forgot to include a check for a NULL return from one of the functions. Bob clicks the call site to add a note and rejects the patch.

Alice gets a notification about the rejection and sees the note Bob left. She adds a new test case, sees a crash from the NULL return, and then checks in another fix. When all tests pass, Bob gets another request for review. This time it all looks good, so he approves the patch and it is applied to the development trunk. The backend makes notes in the appropriate issue tickets along the way.

Bob's review approval triggered a merge to the trunk, so the backend system automatically kicks off a trunk build and regression test. It also launches a full reanalysis using a commercial static analysis tool, and re-runs the regression test using valgrind. Because this is all automated and human intervention is only needed when something goes wrong, developers are free to focus on fixing reported bugs and adding features.
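
That last hop might look something like the sketch below: a script fired by the review approval that merges the change to trunk, rebuilds, re-runs the regression binary under valgrind, and queues the slow commercial analysis rather than blocking on it. The paths, make invocation, and queue_analysis script are made up for illustration.

    import subprocess

    def run(cmd):
        """Run a command, returning True on success."""
        return subprocess.call(cmd) == 0

    def notify(message):
        """Placeholder for mailing the team / annotating the ticket."""
        print(message)

    def on_review_approved(branch_url, trunk_dir="/build/trunk"):
        steps = [
            ["svn", "merge", branch_url, trunk_dir],   # apply the approved change
            ["make", "-C", trunk_dir, "all"],          # trunk build
            # Re-run the regression binary under valgrind; the test binary
            # path is made up for this sketch.
            ["valgrind", "--error-exitcode=1", "--leak-check=full",
             trunk_dir + "/tests/run_regression"],
        ]
        for cmd in steps:
            if not run(cmd):
                notify("Post-merge step failed: %s" % " ".join(cmd))
                return False

        # Static analysis is slow, so queue it rather than blocking on it;
        # "queue_analysis" is a hypothetical wrapper for the commercial tool.
        subprocess.Popen(["queue_analysis", trunk_dir])
        notify("Trunk build, valgrind regression run, and analysis queued")
        return True

The point is less the specific commands than that nothing in the chain needs a person in the loop unless a step fails.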

We aren't quite there today, but the individual pieces do exist, and fortunately there's no rocket science involved in putting the functionality together.

Posted on 2008-11-21 by brian in tools.

Comments

This scenario sounds very nice and I wonder if or when we will ever get there.

Going even further, what about some more integration where, once you tag a major release, some software automatically kick-starts the process of burning an installer to CD/DVDs for distribution, prints shipping labels, and leaves them by the door for the shipping company to pick up?

And it should deliver a reward whenever I commit a bugfix that is approved on the first try. A small danish would be nice!

Kyle Tolle
2009-01-14 16:43:12
Comments on this post are closed. If you have something to share, please send me email.