- Posted on October 2, 2007 6:50 PM by Earl Beede to Practicing Earl
- Testing & QA, humor, testing, quality
At Construx I teach both the Estimation seminar and the Advanced Quality seminar. One question I usually get during the Estimation seminar goes something like this, "How can I estimate how long quality will take?"
Now this is a fascinating question in that it is so wrong and yet so important to the people asking it. Let's look at the "so wrong" part first. The question as stated assumes that quality can be added on during some activity, like icing a cake. "We created the software," a swaggering project lead may say, "and now all we have to do is give it quality!"
"How long will that take?" asks the confused but kindly business partner.
"Well, that is why we took the estimation class. We estimate between two and five quarters." At which point the kindly business partner outsources the entire staff.
Perhaps the question is so wrong because it is the kind of question that is not designed to be answered by a direct answer but by another question. Perhaps a good response is, "How poorly do you plan to do requirements, design, and code?"
It is also so wrong since nobody has actually defined what quality means on the project. Do they want to know when key use scenarios work 90% of the time? 95%? 99.99999%? When we have reached a defined level of brokenness? (Um, I mean number of outstanding defects.) Maybe the questioner wants to know when we have spent enough time doing "quality"? Perhaps the questioner wants to be able to defend themselves later. "We spent this much time on quality, how could you possibly complain?"
It is the need to plan, however, that makes the question so important. My seminar attendees are in charge of testing, and they need to submit their staffing needs and timeline to the primary planners. Will they need four weeks with eight testers or ten weeks with fourteen testers?
The trick, I think, is to change the question. A better question, one that might be answered, is "How long, given our past performance, will it take to find 95% of the defects we insert?" Let's break that down.
Past performance. We cannot even begin to estimate "quality" unless we have some idea of how many defects we create. Most organizations have some sort of defect tracking system and a management infatuation with defect numbers, so this shouldn't be too hard to get. And even if you don't, you can bootstrap this with estimation techniques.
Defects we insert. We also need to have some idea about our defect detection rate. Let's say that the project will insert, based on past projects, about 500 defects. The test group finds, on average, 2 defects per staff hour. Simple math tells us the project needs about 240 staff hours to find 95% of the defects.
If the project has better-than-average data, it can step this up and say that the 500 defects break down into 50 requirements defects, 150 design defects, and 300 code defects. Now the quality lead can ask the development lead how long it takes to correct, on average, a requirements/design/code defect. This is a great trick for the quality lead, as it puts the "how long does quality take" problem back on the development team!
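The phase breakdown extends the same arithmetic. The defect counts (50/150/300) are the ones above; the hours-to-correct figures are purely illustrative placeholders, the sort of numbers the development lead would supply:

```python
# Defect counts by phase, from the breakdown above.
defects_by_phase = {"requirements": 50, "design": 150, "code": 300}

# Hypothetical correction costs (hours per defect) -- the development
# lead owns these numbers, not the quality lead.
hours_to_correct = {"requirements": 8, "design": 4, "code": 2}

correction_hours = sum(
    count * hours_to_correct[phase]
    for phase, count in defects_by_phase.items()
)
print(correction_hours)  # 1600 with these made-up correction costs
```

The point of the trick is visible in the code: the quality lead supplies only the defect counts, while the correction-cost dictionary belongs entirely to development.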
Finally, if the project has that better-than-average data, the quality lead can start saying, "I plan to use peer reviews (or collaboration or formal methods ...) at this point to find 45 of the 50 requirements defects." This takes a lot of pressure off the end-game testing and is a much better way to "do quality".
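A rough sketch of why early reviews take that pressure off: if peer reviews catch 45 of the 50 requirements defects before test (the figures above), those 45 come off the test team's workload. The 2 defects per staff hour test rate is from the earlier example; the variable names are mine:

```python
total_inserted = 500
target_fraction = 0.95
test_find_rate = 2.0  # defects found per staff hour in test, from above

found_before_test = 45  # requirements defects caught in peer reviews

must_find = total_inserted * target_fraction     # 475 defects overall
left_for_test = must_find - found_before_test    # 430 left for the test team
print(left_for_test / test_find_rate)            # 215.0 staff hours, down from 237.5
```

Every defect found in a review is a staff hour (or half of one, at this rate) that never shows up in the end-game testing estimate.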
To me, the "how long does quality take" question is of the same ilk as "what is the sound of one hand clapping?" The primary purpose may be to lead to a better question, not to come up with an answer.