Quality Time

Posted on October 2, 2007 6:50 PM by Earl Beede to Practicing Earl
Tags: Testing & QA, humor, testing, quality

At Construx I teach both the Estimation seminar and the Advanced Quality seminar. One question I usually get during the Estimation seminar goes something like this: "How can I estimate how long quality will take?"

Now this is a fascinating question in that it is so wrong and yet so important to the people asking it. Let's look at the "so wrong" part first. The question as stated assumes that quality can be added on during some activity, like icing a cake. "We created the software," a swaggering project lead may say, "and now all we have to do is give it quality!"

"How long will that take?" asks the confused but kindly business partner.

"Well, that is why we took the estimation class. We estimate between two to five quarters." At which point the kindly business partner outsources the entire staff.

Perhaps the question is so wrong because it is the kind of question that is not meant to be met with a direct answer but with another question. Perhaps a good response is, "How poorly do you plan to do requirements, design, and code?"

It is also so wrong because nobody has actually defined what quality means on the project. Do they want to know when key use scenarios work 90% of the time? 95%? 99.99999%? When we have reached a defined level of brokenness? (Um, I mean number of outstanding defects.) Maybe the questioner wants to know when we have spent enough time doing "quality"? Perhaps the questioner wants to be able to defend themselves later: "We spent this much time on quality; how could you possibly complain?"

It is the need to plan, however, that makes the question so important. My seminar attendees are in charge of testing, and they need to submit their staffing needs and timeline to the primary planners. Will they need four weeks with eight testers or ten weeks with fourteen testers?

The trick, I think, is to change the question. A better question, one that might be answered, is "How long, given our past performance, will it take to find 95% of the defects we insert?" Let's break that down.

Past performance. We cannot even begin to estimate "quality" unless we have some idea of how many defects we create. Most organizations have some sort of defect tracking system and a management infatuation with defect numbers, so this shouldn't be too hard to get. And even if you don't, you can bootstrap this with estimation techniques.

Defects we insert. We also need to have some idea of our defect detection rate. Let's say that the project will insert, based on past projects, about 500 defects. The test group finds, on the average, 2 defects per staff hour. Simple math tells us the project needs about 240 staff hours to find 95% of the defects.
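For anyone who wants to see that simple math spelled out, here is a minimal sketch in Python. The numbers are the example figures from this post, not data from any real project.

    # Back-of-the-envelope test-effort estimate (example figures only).
    defects_inserted = 500   # expected defects, based on past projects
    target_fraction = 0.95   # we want to find 95% of what we insert
    find_rate = 2.0          # defects found per staff hour, from history

    defects_to_find = defects_inserted * target_fraction  # 475 defects
    staff_hours = defects_to_find / find_rate             # 237.5 hours
    print(f"about {round(staff_hours, -1):.0f} staff hours")  # about 240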

If the project has better-than-average data, it can step this up and say that the 500 defects break into 50 requirements defects, 150 design defects, and 300 code defects. Now the quality lead can ask the development lead how long it takes to correct, on the average, a requirements/design/code defect. This is a great trick for the quality lead, as it puts the "how long does quality take" problem back on the development team! A sketch of that breakdown follows.
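Extending the sketch above with the per-phase breakdown: the hours-to-correct figures here are hypothetical placeholders, since in practice they would come from the development lead.

    # Per-phase defect counts with placeholder correction costs.
    phases = {
        # phase: (defects inserted, assumed staff hours to correct one defect)
        "requirements": (50, 4.0),
        "design":       (150, 2.0),
        "code":         (300, 1.0),
    }
    for phase, (count, hours_each) in phases.items():
        print(f"{phase}: {count} defects, ~{count * hours_each:.0f} hours to correct")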

Finally, if the project has that better-than-average data, the quality lead can start saying, "I plan to use peer reviews (or collaboration or formal methods ...) at this point to find 45 of the 50 requirements defects." This takes a lot of pressure off the end-game testing and is a much better way to "do quality".
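One last sketch shows why catching defects upstream eases the end game. The 45-of-50 yield is the example from the paragraph above; the test find rate is carried over from the earlier sketch.

    # Effect of catching requirements defects in peer review instead of test.
    requirements_defects = 50
    found_in_review = 45                  # planned peer-review yield (example)
    left_for_test = requirements_defects - found_in_review
    test_find_rate = 2.0                  # defects found per staff hour in test
    test_hours_avoided = found_in_review / test_find_rate
    print(f"{left_for_test} requirements defects left for test; "
          f"~{test_hours_avoided:.0f} test staff hours avoided")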

To me, the "how long does quality take" question is of the same ilk as "what is the sound of one hand clapping?" The primary purpose may be to lead to a better question, not to come up with an answer.

Stephen said:

October 3, 2007 2:41 PM

Quality can be defined much more broadly than by defects. For example, an application that is user-entry intensive may be higher quality if a mouse click can be omitted through the use of a default entry box. It worked without it. There weren't any defects in the code. The design was OK. There may be perceived increased quality if more users can use the same server at a time, or more applications can use the same database server, etc. These are efficiency quality issues. And there may be optimization vs. maintainability trades that must be made.

Often, when the programmer is given a fixed, arbitrary deadline, the programmer, in the absence of other input, makes many of these decisions. And rightly so. All nontrivial programs can be improved. You might get an architect to make such decisions, but it would take longer to communicate these 'requirements' to the developer, and they'd be wrong anyway.

Fred Brooks (in the 20th-anniversary edition of The Mythical Man-Month) advocates development by starting with a running skeleton and adding functionality as the application grows. The advantage is that one can ship the product at nearly any time, and it won't hurt anyone. But one could easily say that the more effort that is expended, the better the product. For many, there is a psychological boost in constantly modifying something that works.


Earl Beede

Earl Beede, CSDP is a Senior Fellow at Construx Software, where he designs and leads seminars and provides consulting services on early project-lifecycle practices, estimation, requirements, quality assurance, contract management, and software methodologies.

With more than 20 years of experience as a quality assurance representative, systems analyst, process architect, and manager, Earl has designed and written software development processes for companies across a wide variety of industries. Prior to joining Construx, he held quality assurance and systems analyst positions at organizations including the Department of Defense, Boeing Computer Services, and Verizon Wireless.

Earl has a Bachelor's degree from the University of Washington. He is a member of the IEEE Computer Society and a past coordinator of the Seattle Area SPIN (Software Process Improvement Network).
