- Posted on September 8, 2008 1:32 PM by Earl Beede to Practicing Earl
- Testing & QA, Technique, humor, quality, done
In software development, as in many other areas of life, we need to decide when some item of work is done. The decision of "doneness" has wide impact: under-done work creates defects, downstream rework, and lost opportunity costs, while over-done work wastes time and resources and incurs its own lost opportunities.
To be even more critical, in my review of documents from hundreds of clients I find that work items are often under-done in important areas and over-done in trivial ones. That is, the document cover, table of contents, document purpose statement, and sign-off areas have been vetted to precision. However, the requirement, design, test plan, or code contained within has defects both minor and major.
This may be explained by human nature, as the trivial parts can easily be checked and confirmed. Committees or teams chartered with creating common processes and practices occasionally find that the only place where they can garner agreement and claim success is in the trivial. One instructor from my past called these the blah-blah pages; they just seem to go blah, blah, blah and not say anything really important.
What about that other part, the important part? Why can't the committees or teams that gain agreement on the trivial parts garner the same agreement on these? Well, I think the answer here lies not so much in human nature as in the nature of the problem. The issue is that the important stuff in software development, as in many parts of life, is contextual. What is going on in the project, the team, and the organization at the moment the work artifact is completed all has an effect on the decision of done. You can't really spell out in advance what done looks like.
For example, let's look at the requirement written on a story card: "Make it faster". If I were to consult my requirements books, articles, and heck, even the class I teach, all would proclaim "Make it faster" a woefully inadequate and completely NOT done requirement. Way too much ambiguity. No scale identified. Not tagged adequately. Not testable as it stands. And the list could go on. This requirement is doomed to cause a lot of defects and angst.
However, on my imaginary project where this story card has been written, the small team has been together for six years and through four releases. The story is written shortly after the entire team has witnessed the prototype of the fifth version perform reasonably well but much slower than the fourth release did. Having a well-defined target customer understood by every team member, the entire team knows what it would take to make that customer happy. For this team, the requirement "Make it faster" is in fact done. It is "good enough" to get the team to focus on the right work to the right level. There will be no defects or angst.
So we can't come up with a clear, complete, consistent definition of done for the parts of software development that really matter. Faced with this challenge, our committees and teams often take one of two paths. The first path is to create the "mother of all templates", put in everything and every practice they can think of, and give direction (often in small print, with dire consequences if actually attempted) that the template may be tailored. This offer to tailor is seen as the compromise to the reality of contextuality. Unfortunately, the compromise is rarely exercised, as most implementers of templates know that if they do it all—make it over-done—then the process police will give them their blessing and all will be right in their world.
The other path that committees and teams often take to deal with not being able to define done is to slip into a "father knows best" syndrome. The person with the most experience in a given area (even if that means the recent hire who is the only one who claims to know the new technology) gets to define "done". So the entire team starts to do what the most experienced—or the loudest—person on the team does. Occasionally, like in any flock or pack, there is a fight for dominance or pecking order. Most of the time everybody does it the same way, which, by definition, fails the contextuality test.
Given the two paths I have seen, what is a committee or team to do? Contextuality demands that doneness can't be defined ahead of time, but the costs of not being done are so high. The answer, I believe, is not in defining "done" but in defining how to determine "doneness" within a context. I call the process I use my "good enough" criteria. That is, I have four criteria I use to help me decide whether the work artifact is done to a level that is good enough for what the project needs.
The four criteria are:
- Sufficient to Proceed. Is the work at a level where the next person who must take it up has what they need to do their job?
- Appropriate for the Environment. Are the people who take up the work likely to understand it?
- Sanity Checks. Does the work commit a classic mistake that can easily be detected by reviewing a short checklist of critical attributes?
- Feedback from Stakeholders. Do the critical stakeholders tell me that it is OK?
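For readers who think in code, the four criteria above could be sketched as a simple checklist. This is purely my illustration of the idea—the names, structure, and all-four-must-pass rule are hypothetical assumptions, not a tool the author prescribes:

```python
# A minimal sketch of the four "good enough" criteria as a checklist.
# All names and the pass/fail logic are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class DonenessCheck:
    sufficient_to_proceed: bool        # next person has what they need
    appropriate_for_environment: bool  # likely to be understood by those who take it up
    passes_sanity_checks: bool         # no classic mistakes per a short checklist
    stakeholder_feedback_ok: bool      # critical stakeholders say it is OK

    def good_enough(self) -> bool:
        # Treat the artifact as "done" for this context only when
        # all four criteria hold.
        return all([
            self.sufficient_to_proceed,
            self.appropriate_for_environment,
            self.passes_sanity_checks,
            self.stakeholder_feedback_ok,
        ])


# Example: the "Make it faster" story card on the veteran team
story = DonenessCheck(True, True, True, True)
print(story.good_enough())  # prints True for that team's context
```

Note that the Boolean answers themselves are contextual judgments, not fixed rules—the same story card would fail every check on a newly formed team.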
I find that using the combination of these four criteria gives me insight into how done the work artifact is while remaining fully contextual. Process standardization zealots can take heart in the sanity checks, and experience anarchists can rejoice in the feedback.
In future entries, I will explore each of these four criteria. Until then, I am eager to hear how you define "done".
That's it, I'm done... for now.