Conrad Weisert, October 22, 2007
©2007, Information Disciplines, Inc.
NOTE: This article may be reproduced and circulated freely, as long as the copyright credit is included.
Consider this situation, which isn't unusual:
Three months ago we needed to make some minor changes to a critical application in order to cater to competitive business pressure and to comply with a new government regulation. The maintenance programming manager estimated that the modified system could, after thorough testing, be put into production in six weeks.
Unfortunately that schedule has been slipping by two-week increments, and the maintenance programmers are still struggling. Although they report that their new features are working just fine, several seemingly unrelated parts of the system no longer work reliably.
The head of the maintenance team complains that the programs are a nightmare. Documentation is incomplete and misleading. Modules have extremely poor cohesion and there are instances of cheating in the coupling between them, so it's very hard to find the right place to apply a change. Source code is loaded with repetition, so that some changes have to be made in multiple places, and we're never sure if we found them all.
The application development manager from three years ago, who is now the manager of another part of the organization, assures us that the in-house development was done by a highly competent team following the latest agile methodology. The manager of Quality Control recalls that the end product was subjected to extremely rigorous testing, for which documentation is still available.
What a mess! As the manager responsible for the current project, you have to face the following questions:
What's clear is that the people now working on the problem didn't cause it. They're probably doing their best to cope with a difficult and unforeseen situation.
Of course the original development team bears a large share of the blame. They produced application software of extremely poor quality. But we mustn't place 100% of the blame on a team of bright programmers who put in long hours and believed they were carrying out their assignments well.
The real problem most likely stemmed from a lack of supporting infrastructure; in particular, inadequate or non-existent:
But who is to blame for the weak infrastructure? Let's look at another situation before we try to answer:
Eighteen months ago our company solicited proposals for a new Order Processing and Accounts Receivable system. After comparing the presentations and proposals from eight vendors and carefully checking with other customers, we signed a contract with one of the vendors, calling for their assistance with training and limited customization leading to operational start-up fifteen months later.
We have twice been on the verge of start-up when key users have raised an objection, citing the new system's inability to handle special cases that are considered essential to our relationship with very large customers. We then negotiated additional customization with the vendor to support the required special cases.
Now the vendor has announced a much-improved new release of the system; at the end of next year the old version will no longer be supported. They propose a follow-on contract to implement the customization changes in the new version. We have already far exceeded the budget upon which the project was justified, and the availability of the new system for the critical Christmas selling season is already doubtful.
Although this one appears very different from the first situation, the causes are organizationally the same. Hoping to avoid the high risks in custom application development the user department decided to buy an existing software product. But management naïvely oversimplified the process, assuming that they didn't have to specify detailed rigorous requirements. Are those naïve managers to blame for the mess we're in now?
Not really. We can't expect user managers to understand the process of specifying detailed requirements and writing a foolproof specification. Again the organization has paid a high price for the lack of necessary I.T. infrastructure, this time:
The difficulty, common in large organizations, lies in the long interval of time between the organization's choosing (deliberately or by omission) not to establish the necessary I.T. infrastructure and the manifestation of unpleasant consequences of that choice. It's unlikely that upper management will associate the effects with their cause. They're more likely to blame the difficulties on the unmanageable nature of information technology, and perhaps overreact by indiscriminately outsourcing future projects (with equally disastrous results).
Fortunately, the infrastructure needed to assure quality in information systems development, for both in-house projects and purchased products, is neither difficult nor costly to establish:
See my 1992 paper on methodology development and administration; today's web technology has made it even easier and cheaper than it was then.
Furthermore, in some circles there's an automatic presumption that support-staff activities are red-tape bureaucracy, organizational fat. A newly appointed manager impresses his boss by eliminating such overhead, leftovers from the discredited previous regime. Faced with a difficult burden of justification and the pressure of the current quarter's bottom line, many middle managers will elect to ignore the issue and just hope for the best.
Obviously, upper management must understand and firmly support the need for I.T. infrastructure. If they don't already, then it's up to knowledgeable professionals to explain and sell the concepts to them. We must persuade top management that the infrastructure is absolutely essential to success and also assure them that it won't cost much. I've gotten good response to a 90-minute presentation to the decision makers.
The benefits of and the urgent need for I.T. infrastructure are now recognized by international standards and accreditation frameworks, most prominently the ISO 9000 series and the Capability Maturity Model (CMM). Obtaining the blessing of one of those bodies can bolster an organization's confidence in its ability to manage I.T. projects.
Certification can also be a requirement for bidding on some contracts from government or other organizations. Therefore, companies that plan to pursue such business are motivated to secure it.
But that must never be the sole motivation. An organization should establish methodology and other supporting infrastructure in order to improve its performance, not just to satisfy contracts. We've seen too many situations where an I.T. development organization has struggled to secure certification and then ignored the resulting infrastructure except as a selling point in contract bidding. And never engage the certifying/auditing organization to develop your infrastructure; that's a clear conflict of interest.