The final production build of the software is ready, and the last full run of testing begins. After two weeks, a potentially serious problem is found, and the product manager decides it has to be fixed. The fix is rushed in, unit-tested, and built into a new production build.
Now only two days remain before the product must be shipped to customers. The three-week final test phase goes “out the window”, and the test team does the best it can to verify that the product still works without any serious regressions.
This is an example of “Targeted Testing” being applied. Every test team has to do it at some stage; there is simply never enough time and resources to do an ideal job. The effectiveness of that final test of the shipped product depends on the skills of everyone involved in the decisions affecting every aspect of the life cycle of that last change, and probably on a great deal of luck.
“What I need is a list of specific unknown problems we will encounter.”
(“Dilbertism” from Lykes Lines Shipping)
Time constraints like this occur throughout the whole product development cycle. All the tests in the official plan may be run across several test phases, but by the end of each phase the product has moved on, and in an ideal world all those tests (and some not even dreamed of yet) would need to be re-run.
Wouldn’t it be great if there were tools and techniques to help do Targeted Testing, so that at any time in the lifecycle of the product we could know exactly which test is most likely to find a problem? All the available tests would be dynamically ordered based on this ever-changing likelihood of finding problems. That way, we could be sure that the best possible testing was being done at any point, in whatever time is available.
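To make the idea concrete, here is a minimal sketch of such dynamic ordering. It is purely illustrative, not a description of any real tool: the risk heuristic (historical failure rate weighted by recent code churn), the field names, and the greedy selection within a time budget are all assumptions chosen for simplicity.

```python
from dataclasses import dataclass

@dataclass
class Test:
    name: str
    duration_min: float   # estimated run time in minutes
    failure_rate: float   # historical fraction of runs that failed
    churn: int            # recent changes in the code this test covers

def risk_score(t: Test) -> float:
    # Illustrative heuristic: tests covering recently changed code
    # with a history of failures are most likely to find problems.
    return t.failure_rate * (1 + t.churn)

def targeted_plan(tests: list[Test], budget_min: float) -> list[Test]:
    """Order tests by descending risk, keeping those that fit the budget."""
    ranked = sorted(tests, key=risk_score, reverse=True)
    plan, used = [], 0.0
    for t in ranked:
        if used + t.duration_min <= budget_min:
            plan.append(t)
            used += t.duration_min
    return plan

tests = [
    Test("login_smoke", 5, 0.02, 0),
    Test("payment_regression", 30, 0.10, 4),
    Test("report_export", 20, 0.05, 1),
]
for t in targeted_plan(tests, budget_min=40):
    print(t.name)  # highest-risk tests that fit in 40 minutes
```

As the product changes, the failure rates and churn figures would be updated continuously, so re-running the planner at any moment yields the best test order for the time remaining. (A greedy fill like this is a simplification; picking the optimal subset is a knapsack problem.)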