© Conrad Weisert, Information Disciplines, Inc.
NOTE: This document may be circulated or quoted from freely, as long as the copyright credit is included.
Recent articles and presentations have presented some awfully strange views of unit testing and how it is practiced. Reputable sources have agreed for more than two decades on what unit testing is:
    ". . . unit testing . . . is a process of testing the individual subprograms,
    subroutines, or procedures in a program."
        -- Glenford Myers, The Art of Software Testing (1979)

    "Testing of a single module (or small group of related modules),
    usually done with or immediately following coding."
        -- The Que Programmers' Dictionary (1993)

    "A test of individual programs or modules in order to ensure that there are
    no analysis or programming errors."
        -- IBM Dictionary of Computing (1993)
A 1996 paper, "Why Bother to Unit Test", from IPL Information Processing, Ltd. adds a vital point:

    "The unit test is the lowest level of testing performed during software
    development, where individual units of software are tested in isolation
    from other parts of a program."
The emphasis is on the individual module (or other unit of program structure) and on its isolation from other parts of the program. Later stages of software testing are:
    Testing stage        What's being tested
    ------------------   ----------------------------------------------------
    Integration          a complete executable program
    System               the complete suite of software and manual procedures
    Volume (or stress)   capacity and performance under extreme load
    Acceptance           developers' contractual obligations

If the software itself is to be a product we may also have:

    Alpha test           actual use by internal users
    Beta test            actual use by friendly external users
We won't discuss those later stages of testing, but will concentrate on the surprising confusion surrounding unit testing. Let's examine what unit testing is not.
Several recent presentations and panel discussions on Extreme Programming (XP) or so-called "agile" methods have described unit testing as if it were an XP innovation! In the ensuing discussions it was clear that many audience members accepted that claim.
Some OOP textbooks and newer dictionaries define unit testing as the testing of a single object-oriented class.
Of course unit testing didn't originate with either agile methods or object-oriented programming. Fortran programmers were doing it 40 years ago. It's not specific to any particular life-cycle, design paradigm, programming language, or sequence of development.
An unfortunate exception: COBOL
You can't do unit testing with a monolithic program structure, since it would be prohibitively difficult to isolate an individual module and test it outside its eventual context.
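The point can be illustrated with a toy sketch (the names here are hypothetical, invented for illustration). A module whose collaborator is passed in as a parameter can be exercised outside its eventual context, with a stub standing in for the real dependency; a monolithic program offers no such seam:

```python
# Hypothetical module under test: computes an order total.
# The tax-rate lookup is a collaborator; because it is injected,
# the module can be tested in isolation from the rest of the program.

def order_total(subtotal, tax_rate_lookup):
    """Return subtotal plus tax, using an injected rate-lookup function."""
    rate = tax_rate_lookup()
    return round(subtotal * (1 + rate), 2)

# Unit test: a stub replaces the real lookup (which might read a
# database in the finished system), isolating the module under test.
def stub_lookup():
    return 0.05  # fixed, predictable rate for the test

assert order_total(100.00, stub_lookup) == 105.00
assert order_total(0.00, stub_lookup) == 0.00
print("order_total unit tests passed")
```

In a monolithic program the equivalent logic would be tangled with its callers and with I/O, and there would be no way to invoke it separately with controlled inputs.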
Monolithic organization characterized most COBOL programming. Early COBOL had no facility for linking separate subroutines and passing parameters and results among them. By the time those features were added (awkwardly and with serious restrictions) to the language, the monolithic style had become solidly established, and even many later COBOL textbooks and courses with "structured" in the title continued to promote monolithic program organization.
Not surprisingly, then, many of those who believe that unit testing is something new are either young entry-level programmers or former COBOL programmers.
In several decades I have never[1] worked on a software development project where the programmers didn't do reasonably thorough unit testing. While walking out after one of those extremo panels, I made that observation to a colleague. That amazed him, and he amazed me right back: he reported that in his organization, a huge multinational manufacturing company, application-system projects were often under such intense pressure that most programmers felt they just didn't have time to test their modules individually.
Apparently it has become common practice in some organizations to toss one's untested modules into a "nightly build" in the expectation that any bugs will be found in the integration test. As you'd expect, those organizations are finding their projects in deep trouble, and are seeking some dramatic breakthrough, like extreme programming or some tool for "automatic unit testing"[2], to bail them out of chronic chaos.
Fortunately, such organizations are in the minority. My clients and most colleagues report that unit testing is so routine in their organizations that they haven't thought about whether to do it. They may argue about whether to unit-test top-down or bottom-up, whether to write the test driver before or after coding the module, or when to hold a walkthrough peer review, but it never enters their minds to bypass unit testing altogether.
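Whatever the ordering, the test driver itself need be nothing elaborate: a small program that feeds the module known inputs and checks the outputs. A minimal sketch, assuming a hypothetical string-truncation module (the function and its test cases are invented for illustration):

```python
# Hypothetical module under test
def truncate(text, limit):
    """Return text cut to at most `limit` characters, ending in an ellipsis."""
    if len(text) <= limit:
        return text
    return text[:limit - 3] + "..."

# Test driver: exercises the module with known inputs before any
# real caller exists (the bottom-up style mentioned above).
cases = [
    ("hello", 10, "hello"),          # shorter than the limit: unchanged
    ("hello world", 8, "hello..."),  # cut to 8 characters incl. ellipsis
    ("abc", 3, "abc"),               # exactly at the limit: unchanged
]
for text, limit, expected in cases:
    actual = truncate(text, limit)
    assert actual == expected, f"truncate({text!r}, {limit}) -> {actual!r}"
print("all truncate cases passed")
```

The driver and its cases are part of the module's deliverable; they cost little to write and can be rerun whenever the module changes.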
Readers of this web site know that I keep urging more granularity in a project plan. Projects fail when the tasks are too big and too few. It might therefore seem prudent to specify three tasks on the project plan for each identified module, e.g.:

    1. Code module X.
    2. Write the test driver and test data for module X.
    3. Unit-test module X.
But in practice that's rarely advisable. Note that each of those tasks is the sole prerequisite to the next one and the sole successor of the previous one. Note also that they're normally assigned to the same individual. Finally, note that in today's interactive programming environments the three activities are so closely intertwined that we can hardly expect a competent professional to finish doing one activity before starting to do one of the others.
A sound project plan should contain a task to develop Module X. The task deliverable is a fully tested module with associated drivers and test cases. We trust the competent programmer to produce that deliverable. (In making such task assignments, of course, we match the expected difficulty with the individual's skill level.)
Earlier tasks, which might have been assigned to different project team members, would identify the modules and then specify module X, i.e. define its interface and results.
Last modified 7 February 2004