True to its title, this article argues for the use of automated code inspection as an addition to regular testing. Chou starts off by stating that software testing is one of the largest time and resource drains in the development cycle of a product, taking up as much as 50% of the effort. Additionally, there are always more test cases to be tried, so "testing is never finished, only abandoned."
The premise is that software fails for two main reasons: logic/algorithmic errors and exception failures (the latter accounting for up to two-thirds of all system crashes, according to CMU's Roy Maxion).
Chou cites an unnamed study finding that programmers routinely introduce three errors with every 100 modifications they make to production software. These errors include omissions, altering code that should not be changed, and making incorrect changes. In this respect there is no significant difference between experienced and inexperienced programmers (the difference lies in overall productivity).
An HP report places the bug-finding efficiency of manual code inspection at 60%, while Chou's experience shows a 30% bug-finding efficiency for automated regression testing. He works out an example in which testing alone finds 30% of the errors (as expected), while code inspection followed by testing finds 72%: inspection catches 60% of the errors first, and testing then catches 30% of the remaining 40%.
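Chou's combined-defense arithmetic can be sketched in a few lines; the two rates are taken from the article, and the variable names are illustrative only:

```python
# Detection rates quoted in the article (assumptions for this sketch):
inspection_rate = 0.60  # manual code inspection (HP report)
testing_rate = 0.30     # automated regression testing (Chou's experience)

# Testing alone catches 30% of all errors.
testing_only = testing_rate

# Inspection first removes 60% of the errors; testing then catches
# 30% of the remaining 40%, for a combined 72%.
combined = inspection_rate + (1 - inspection_rate) * testing_rate

print(f"testing alone:        {testing_only:.0%}")
print(f"inspection + testing: {combined:.0%}")
```

The same composition rule extends to any sequence of independent defenses: each stage catches its rate of whatever the previous stages let through.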
One of the biggest hindrances to deploying software inspection is its lack of automation, which is why today it is mostly applied selectively to mission-critical sections of code. Once inspection can be done automatically, Chou sees it as an additional, inexpensive line of defense, not as a replacement for the rigor of regular testing.
Some interesting tidbits: