I’ve had a really exciting couple of weeks working with one of our System Test teams to define a better way of measuring test progress and product quality. For too long I’ve been fed up with the traditional test tracking metrics, where we measure passes and fails or effort remaining. Historically, these measures seem to be used simply because they are easy to gather: all you have to do is define what test cases need to be run, then track them until they all pass. There are two major flaws in this. Firstly, it’s a big assumption that the original test plan contains everything it needs to. Secondly, it is rare for any test plan to execute smoothly, and at some stage in the project the project manager realises that the passes and fails aren’t telling them anything and starts asking questions like “Just tell me what works and what doesn’t”. Invariably this is either impossible to determine or requires a lot of effort from the test team. At which point the simple solution is ‘Test team, work harder!’
I’ve failed miserably so far at trying to convince project teams that they should be looking at the outstanding risk in a project rather than at test case results. But I think I have finally realised why: people don’t like talking about risk. It sounds like something bad, and most project teams don’t want to be associated with something bad.
The breakthrough we had this week came when my colleague Russell Finn came up with the idea of measuring the ‘confidence’ we have in the product or system, rather than the outstanding risk. Now, you could argue that confidence is just the inverse of risk in this case, but I think it puts a much more positive spin on things.
We had been challenged by our lead engineer, Brian Cope, to redefine how we represented our status, and with the help of the system test leaders, Eileen Dreyer and Chris Osbourn, we set about rethinking everything we do in terms of status reporting.
What we decided to show was effectively two columns of data: one showing areas of the product that we had high confidence in, and one showing the backlog of areas we currently have low confidence in (or, if you like, the risky areas). Now, from a very simplistic view, we can answer the question ‘What works and what doesn’t?’, or at least have a good stab at it.
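To make that concrete, here’s a minimal sketch in Python of how the two-column view might be produced from per-area confidence scores. The area names, the scores and the 0.8 ‘high confidence’ threshold are all invented for illustration; we never formalised it this precisely.

```python
# Hypothetical per-area confidence scores: 0.0 = no confidence, 1.0 = full confidence.
confidence = {
    "Recovery": 0.9,
    "Performance, Load and Stress": 0.3,
    "Connectivity": 0.85,
    "Install and Upgrade": 0.4,
}

HIGH_CONFIDENCE = 0.8  # assumed threshold for the 'high confidence' column

high = [area for area, score in confidence.items() if score >= HIGH_CONFIDENCE]
backlog = [area for area, score in confidence.items() if score < HIGH_CONFIDENCE]

print("High confidence (what works):", ", ".join(high))
print("Backlog (low confidence / risky):", ", ".join(backlog))
```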
The next step was to work out a way of quantifying the ‘confidence’. Fortunately, this was relatively simple, as we piggybacked on a piece of work that Russell had already done, where he had defined a ‘taxonomy’ for the system under test. This taxonomy split the system into its important parts from a capability viewpoint. With this taxonomy we were able to prioritise and apply relative weightings to each area using ‘Planning Poker’ (http://www.planningpoker.com/). A quick Friday afternoon game involving Jon Isaac, Russell, Brian and me, and we had a pretty good view of the system, with each area given a number of ‘story points’. (We have since done a sanity check with other members of our department, and so far our estimates are holding up.)
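For illustration, the output of such a session might look something like this. The areas and story-point values below are invented, not our real numbers:

```python
# Hypothetical taxonomy with relative weightings in 'story points', as they
# might come out of a Planning Poker session. Areas and numbers are invented.
taxonomy_weights = {
    "Recovery": 40,                      # recovering from failures and outages
    "Performance, Load and Stress": 20,
    "Connectivity": 13,
    "Install and Upgrade": 8,
    "Usability": 5,
}

total = sum(taxonomy_weights.values())
for area, points in sorted(taxonomy_weights.items(), key=lambda kv: -kv[1]):
    print(f"{area:30s} {points:3d} pts ({points / total:.0%} of the system)")
```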
We could then chart the confidence in the system using a couple of pictures: the first showing the confidence in the different areas (and their relative weightings), the second showing the overall system. We decided to add a third ‘state’ to show areas of risk that we would be mitigating in the current iteration.
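Here’s a sketch of how such a per-area picture might be drawn with matplotlib. The split of each area’s story points across the three states is made up; the point is only the shape of the chart:

```python
import matplotlib.pyplot as plt

# Invented data: each area's story points split across the three states.
areas = ["Recovery", "Performance,\nLoad and Stress", "Connectivity"]
high_conf = [15, 5, 10]   # points we already have high confidence in
mitigating = [10, 5, 0]   # points being mitigated this iteration
backlog = [15, 10, 3]     # points still sitting in the low-confidence backlog

fig, ax = plt.subplots()
ax.bar(areas, high_conf, label="High confidence", color="tab:green")
ax.bar(areas, mitigating, bottom=high_conf, label="Mitigating this iteration",
       color="tab:orange")
bottoms = [h + m for h, m in zip(high_conf, mitigating)]
ax.bar(areas, backlog, bottom=bottoms, label="Backlog (low confidence)",
       color="tab:red")
ax.set_ylabel("Story points")
ax.set_title("Confidence by area (weighted by story points)")
ax.legend()
plt.show()
```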
N.B. The data shown here is for a fictitious system, but imagine that it is a system that is highly valued for its ability to recover from failures and outages, and that carries high expectations for performance.
Once we have this picture, we can view automated test cases as tools that help us build our confidence in the system. Other tools include manual testing, ad-hoc testing, code reviews, code coverage metrics and ‘tester gut feel’. These other tools are not used in traditional tracking, yet they can be a valuable source of information for determining the quality of the product. If these things feel a bit hokey, then spend a second or two thinking about what a traditional test status showing a 54% pass rate actually means.
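If you wanted to fold those extra sources into the numbers, one entirely hypothetical approach is a weighted blend per area. The sources, weights and scores here are mine, purely to show the idea that automated passes are just one input:

```python
# Assumed weights for each source of evidence (they sum to 1.0).
evidence_weights = {
    "automated tests": 0.5,
    "manual/ad-hoc testing": 0.2,
    "code reviews": 0.1,
    "code coverage": 0.1,
    "tester gut feel": 0.1,
}

# Scores (0.0-1.0) a tester might assign each source for the Recovery area.
recovery_scores = {
    "automated tests": 0.9,
    "manual/ad-hoc testing": 0.8,
    "code reviews": 1.0,
    "code coverage": 0.85,
    "tester gut feel": 0.7,
}

recovery_confidence = sum(w * recovery_scores[src] for src, w in evidence_weights.items())
print(f"Recovery confidence: {recovery_confidence:.1%}")
```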
Riding on the back of another piece of work, in which all the existing test cases had been ‘tagged’ to show which areas of the taxonomy they exercised, we held a review with the test team to weight each test case by area. Note that a test case can cover more than one area and is weighted independently for each area. (For instance, a test case might be rated very highly in the recovery area but exercise only a small amount of connectivity; it would then be weighted appropriately in both areas.)
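A tiny sketch of what the tagging and roll-up could look like in code; the test case names, weights and results are invented:

```python
from collections import defaultdict

# Hypothetical tagging: each test case carries a weight per taxonomy area it
# exercises. The same case can appear under several areas with different weights.
test_case_weights = {
    "tc_failover_under_load":    {"Recovery": 8, "Connectivity": 1},
    "tc_reconnect_after_outage": {"Recovery": 5, "Connectivity": 3},
    "tc_basic_link_check":       {"Connectivity": 2},
}

# Latest results (True = passed), again invented for illustration.
results = {
    "tc_failover_under_load": True,
    "tc_reconnect_after_outage": False,
    "tc_basic_link_check": True,
}

# Roll up: per area, how many of the weighted points are covered by passes?
earned, possible = defaultdict(int), defaultdict(int)
for case, weights in test_case_weights.items():
    for area, weight in weights.items():
        possible[area] += weight
        if results[case]:
            earned[area] += weight

for area in possible:
    print(f"{area:14s} {earned[area]}/{possible[area]} weighted points passing")
```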
The following charts show the quality of the system during the early iterations. The highly weighted (and therefore most important) areas are being mitigated first (in true agile fashion), and we can see that a portion of recovery is now showing high confidence, a portion is being mitigated in this iteration, and the rest is still outstanding in the backlog. Clearly the system is not suitable for shipping at this point.
As the iterations proceed we can see the backlog reduce and the confidence rise.
Finally we reach the last iteration, and a decision must be made on whether we can ship or not. It looks like we have a small amount of risk remaining in recovery and in performance, load and stress, and high confidence in everything else.
So, do we ship it or not?
The decision is still a tough one, but I’m sure that this sort of information will be far more useful than the traditional method, where at this point we would be claiming 98% attempted and 94% successful!
I think this is a radical new way of thinking about product quality and will make a huge difference in how we do business.
I’d appreciate any thoughts and ideas on how this could be improved.