May 23, 2014
NOTE: This article may be reproduced and circulated freely, as long as the copyright credit is included.
I've recently heard or read several more claims that unit testing is an important component of agile methodologies and a significant recent contribution to enlightened programming practice. That's jarring to experienced programmers who have been doing unit testing for decades, long before anyone heard of the so-called "agile" practices. So, how common was unit testing twenty or thirty years ago?
Looking back to the early days of Fortran (ca 1960) I can't recall encountering any serious programmer who did not routinely test any non-trivial program module, with the exception of one group that I'll discuss below. We called testing "checkout". We wrote throw-away test driver programs to exercise the module under test. Then we learned not to throw the driver programs away. Later we debated the merits of traditional bottom-up versus creative top-down strategies, but no responsible programmer or programming manager questioned the need for or the value of thorough unit testing.
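The driver-program practice described above can be sketched in modern terms. This is an illustrative example, not code from any real 1960s system: `median` stands in for the module under test, and `drive` plays the role of the reusable test driver that exercises it against known cases.

```python
# Hypothetical module under test: a simple median routine.
def median(values):
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# The "driver": it feeds the module inputs with known expected
# results and reports any discrepancies. Keeping it around (rather
# than throwing it away) lets it be rerun after every change.
def drive():
    cases = [
        ([3, 1, 2], 2),
        ([4, 1, 3, 2], 2.5),
        ([7], 7),
    ]
    failures = 0
    for inputs, expected in cases:
        actual = median(inputs)
        if actual != expected:
            failures += 1
            print(f"FAIL: median({inputs}) = {actual}, expected {expected}")
    print(f"{len(cases) - failures} of {len(cases)} cases passed")
    return failures

drive()
```

The same pattern, dressed up with automatic discovery and reporting, is what today's unit-testing frameworks provide.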
That one exception, however, was a big one: the culture of COBOL programming. Unlike Fortran, Algol, Pascal, PL/I, and even the advanced macro-assemblers, early COBOL encouraged (some would say forced) a monolithic program structure. Parameterless routines within a mammoth PROCEDURE DIVISION were tightly bound to specific items in an equally mammoth DATA DIVISION. It would have been a daunting and error-prone challenge to isolate one of them for testing and then to reintegrate it back into the monolithic whole.

COBOL programmers therefore waited until a complete program was ready to execute before running their tests. Many of them retained that habit even after bona fide subroutines were introduced into COBOL in the 1970s. Veteran programmers, managers, and users recall how expensive and unpredictable testing COBOL software was in those days.
Meanwhile, programmers using other languages, eventually those with object-oriented extensions, continued to practice unit testing more or less as we still know it. The routine practice of unit testing has very little to do with the popularization of so-called "agile" approaches, except perhaps for the sequence in which the test driver and the module under test are developed. Some "agile" advocates favor a "test-first" or "test-driven" sequence, which has its pluses and minuses, but the result is still centered on traditional thorough unit testing, done the way the agilists' parents did it.
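The only real novelty in the "test-first" sequence is the ordering: the checks are written before the module exists. A minimal sketch of that ordering, with an invented function name (`parse_price`) used purely for illustration:

```python
# Test-first sketch: the test is written before the implementation.
# The name parse_price and its behavior are illustrative assumptions,
# not taken from any real codebase.

def test_parse_price():
    assert parse_price("$1,234.50") == 1234.50
    assert parse_price("$0.99") == 0.99

# Only after the test exists is the module written to satisfy it.
def parse_price(text):
    """Convert a price string like '$1,234.50' to a float."""
    return float(text.lstrip("$").replace(",", ""))

test_parse_price()
print("all test-first checks passed")
```

Whether the driver comes first or second, the end state is the same: a module accompanied by a repeatable test that exercises it.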
The height of confusion was attained this month in one of those on-line forums that posed this absurd inquiry:
|Tough question: between software development & software testing/debuging, which one is the more costly?|
That elicited dozens of responses, hardly any of which pointed out that "testing/debugging" is an inseparable component of "software development"! How could anyone develop software without testing it (even in COBOL)? What would be the value of such "developed" but untested software?
It's possible that some of the forum participants were thinking only of the late stages of software testing: system testing, volume (stress) testing, and final acceptance testing. Those activities may be deferred to a later project phase and performed by testing specialists or user representatives rather than by the original programmer. But any "programmer" who submits raw untested modules as candidates for integration into a production application doesn't understand what programming is about.
Young programmers soon learn that the complexity and unpredictability of debugging is closely tied to the quality of the original code. A module that's loaded with unnecessary repetition, convoluted logic flow, and hard-coded constants is more likely to have flaws—subtle hard-to-find ones—than an elegant simple "clean" module. If they just toss their untested modules into a pile for others to deal with, how will they ever learn those lessons?
Testing strategy begins with program design and coding.
Last modified May 24, 2014