Change Impact Analysis

It is my personal opinion that these primitive metrics are of only minimal utility: they're provided because they're easy to compute, not because they're particularly useful.

In my opinion, the kind of metric that is actually a useful measure of progress and software quality is what I'll call a change impact analysis.

Extensive changes are indicators of poor software quality: if a programmer is changing many files, the system is unstable and poorly structured.

If, on the other hand, changes are intensive and isolated, the system is likely to be robust and well structured. The intense work being done indicates new functionality, improved performance, or correction of a bug, rather than tedious re-engineering of many interfaces as a consequence of a minor change to one module.
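The distinction between extensive and intensive changes can be made concrete with a small sketch. The record format (a mapping from file name to lines touched) and the threshold are illustrative assumptions, not a prescribed measurement:

```python
def classify_change(change, file_threshold=3):
    """Label a change by its spread across files.

    A change touching many files is 'extensive'; a change concentrated
    in a few files is 'intensive'. The threshold is an assumption for
    illustration and would be tuned per project.
    """
    files_touched = len(change)
    lines_touched = sum(change.values())
    kind = "extensive" if files_touched > file_threshold else "intensive"
    return kind, files_touched, lines_touched

# A localised bug fix versus a sweeping interface change.
fix = {"parser.py": 40}
sweep = {"parser.py": 2, "lexer.py": 3, "ast.py": 1,
         "codegen.py": 2, "main.py": 1}

print(classify_change(fix)[0])    # intensive
print(classify_change(sweep)[0])  # extensive
```

Note that the sweeping change touches far more files while touching fewer lines in each: it is the spread, not the volume, that signals instability.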

The theory behind this kind of metric is based on notions of coupling and modular dependencies: if a program is well modularised, according to the principles of loose coupling espoused by Bertrand Meyer in Object Oriented Software Construction, then changes to a module should have a limited impact on other modules. Indeed, a fundamental tenet of the Open/Closed principle (again from Meyer), and of information hiding, is that variations in the implementation of an interface should have no impact on other modules at all.

These change indicators include documentation. Traditional development processes are therefore certain to get a lousy rating from a change impact analysis: requirements are written first, then the system is designed, then implemented, then tested, and finally documented (if any time is left).

This is a woeful strategy. I subscribe to the philosophy most strongly emphasised by Robert Martin: that all aspects of software development should be done together, because they feed back into each other.

Interscript is especially designed to support this superior strategy by providing integrated documentation, design, programming, and testing facilities, along with progress indicators: it is designed to support ongoing simultaneous development of requirements, analysis, design, implementation, testing, and management. As such, there is no notion of 'maintenance': the same strategy is used throughout the lifetime of the software, although the investment of resources will likely vary over this period.

Change impact analysis is best based on interscript sources, because target programming languages provide dubious support for proper localisation. For example, C and C++ require separate header and body files: parallel maintenance of function signatures is an impediment to quality imposed by the language. An interscript programmer will write generating tools to unify the declaration and definition of a C function, so the parallel maintenance is handled by the computer system.
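A minimal sketch of such a generating tool follows. The spec format is an assumption for illustration, not interscript's actual mechanism: the signature is written once, and both the header declaration and the body file text are derived from it.

```python
def c_function(ret, name, params, body):
    """From a single signature spec, return the pair
    (header declaration, body definition) for one C function,
    so the two can never drift apart."""
    sig = "%s %s(%s)" % (ret, name, ", ".join(params))
    return sig + ";", sig + "\n{\n" + body + "\n}\n"

# Write the signature once; emit text for both the .h and .c files.
decl, defn = c_function(
    "int", "clamp",
    ["int x", "int lo", "int hi"],
    "    return x < lo ? lo : (x > hi ? hi : x);")

print(decl)  # destined for the header file
print(defn)  # destined for the body file
```

Because both outputs are generated from one source, any change to the signature is automatically consistent in both files, which is precisely the localisation the target language fails to provide.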

It would be entirely wrong to interpret extensive changes as bad practice, however. On the contrary, a historical change impact analysis ought to show alternating phases of intensive and extensive changes. The reason is that while extensive changes indicate poor software quality and lack of robustness, they are a good sign when followed by intensive changes, because that sequence suggests the programmer has recognised poor structure, devised a solution, and implemented it.

On multi-programmer projects, it is essential to coordinate the extensive change phase; whereas during times of intensive development, programmers can largely be left to work independently.

Any sizeable project which fails to go through at least one complete reorganisation is likely to be of very dubious quality. For example: interscript went through a complete reorganisation before 1.0a7 was released; the original single file was restructured into a package of separate modules, and the single global scope was partitioned into several distinct frames.