The art and science of software testing

  • Jon Udell (InfoWorld)
  • 02 February, 2004 13:59

The Greek roots of the word "technology" suggest the translation "systematic treatment of an art or craft." Dictionaries define the English word as "practical application of knowledge" or "manner of accomplishing a task." TDD (test-driven development), as it has been popularized recently, is a technology that harkens back to those original meanings.

TDD's premise is simple but radical: Write tests that validate the behavior of a module before writing the module. Tests create a safety net so programmers can "refactor with confidence." As the pace of outsourcing quickens, they become part of the glue that holds a distributed project together.
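In Java, the language most closely associated with TDD's rise, the discipline is usually practiced with the JUnit framework. Here's a minimal sketch of the test-first rhythm; the Counter class and its methods are hypothetical, invented purely for illustration, and the test is written before the code it validates:

    import junit.framework.TestCase;

    // The test comes first: it pins down the behavior of a Counter
    // class that does not exist yet.
    public class CounterTest extends TestCase {
        public void testNewCounterStartsAtZero() {
            assertEquals(0, new Counter().value());
        }

        public void testIncrementRaisesValueByOne() {
            Counter c = new Counter();
            c.increment();
            assertEquals(1, c.value());
        }
    }

    // The module comes second, written with no more code than the
    // tests demand.
    class Counter {
        private int value = 0;

        void increment() { value++; }

        int value() { return value; }
    }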

The activity of test writing can also subtly transform the craft of programming. Some find it not merely a chore but a useful way to explore a problem domain, one complementary to spec-writing and congenial to people who think better in code than in prose. Others cherish the psychological lift they get from TDD. Milestones in a software project are few and far between, and there's rarely much positive reinforcement along the way. TDD aficionados report that the green bars displayed when their tests pass are powerful motivators.

TDD does require a lot of time and effort, which means something's got to give. One Java developer, Sue Spielman, sent a Dear John letter to her debugger by way of her Weblog. "It seems over the last year or two we are spending less and less time with each other," she wrote. "How should I tell you this? My time is now spent with my test cases."

Clearly that's a better use of time, but when up to half of the output of a full-blown TDD-style project can be test code, we're going to want to find ways to automate and streamline the effort. Agitar Software Inc.'s forthcoming Java analyzer, Agitator, which was demonstrated to me recently and is due out this quarter, takes on that challenge. Agitator starts by exercising a class and making observations -- for example, that the size of an array increased by one after a method call. After 100 trips through the code, it might report that the array grew 97 times and failed to grow three times. The 3 percent case doesn't necessarily signify a bug. But a developer who determines that it does can convert the observation that the array didn't grow into the assertion that it should. Thus, a new test is born.
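What does such a promoted test look like? However Agitator expresses it internally, the end product amounts to an ordinary JUnit assertion. In this sketch the Roster class, its addMember method, and the duplicate check that explains the 3 percent case are all hypothetical stand-ins:

    import java.util.ArrayList;
    import java.util.List;
    import junit.framework.TestCase;

    // Hypothetical class under test: adding a member usually grows the
    // list by one, but a duplicate is silently ignored -- the kind of
    // path that produces the 3 percent case.
    class Roster {
        private final List members = new ArrayList();

        void addMember(String name) {
            if (!members.contains(name)) {
                members.add(name);
            }
        }

        int size() { return members.size(); }
    }

    // The observation "size increased by one after addMember" promoted
    // into an explicit, mechanically checkable assertion.
    public class RosterTest extends TestCase {
        public void testAddMemberGrowsRosterByOne() {
            Roster roster = new Roster();
            int before = roster.size();
            roster.addMember("Ada");
            assertEquals(before + 1, roster.size());
        }
    }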

Let's suppose the method should grow the array. Passing along a wrong value from its caller is one way it could fail. So as the Agitator repeatedly exercises the method; it tries different combinations of input values, just as a human test-writer would do. But as Director of Product Development Kent Mitchell points out, constructing the objects that drive tests can be nontrivial. "I may know that I can call your Employee class and ask if the person is an active employee," he says, "but I may not know how to construct an Employee that is active because, through the wonders of OO (object-oriented) design, I haven't had to know that until now." To write the test, then, a developer has to unpack a lot of carefully packaged abstractions. Agitar's unarguably good idea is that we let the computer do that for us. TDD, of course, has always been about writing down assumptions so they can be mechanically checked. Think of Agitator as a power tool for that kind of writing.