Software Project Archaeology

A colleague asked me the following question:

Assume you were asked to assess a software development team from outside of the organization (as might occur in due diligence or some other context), and you had full access to all internal artifacts of the organization, but you were not allowed to talk directly with anyone inside it. To what degree could you evaluate the quality and effectiveness of the software team just from reviewing their work, without knowing anything else about them?

This is a wonderful question, and it isn’t just theoretical. We do consulting engagements in which we review project artifacts before we talk to team members, and we use those reviews to target the questions we will ask in the in-person interviews. By “artifacts” we mean code, test cases, documents, drawings, post-it notes, emails, wiki pages, graphs, database contents, digital whiteboard photos: basically any repository for project data.

We look at the following kinds of questions:

What artifacts exist, and what is their scope? Does the project have artifacts that at least attempt to cover all project activities including requirements, design, construction standards, code documentation, general planning, test planning, defect reporting, etc.? If artifacts are not comprehensive, is there any logic behind what is covered and what isn’t?
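When the artifacts live in a single repository, even a crude script can give a first answer to the coverage question. The sketch below counts files whose names suggest each kind of activity; the categories and keywords are illustrative assumptions, not a standard, and any real review adapts them to the organization’s own naming conventions and storage locations.

```python
# A crude inventory of which project activities have any artifacts at all in
# a repository. The keyword lists are illustrative assumptions, not a
# standard; a real review adapts them to the organization's own conventions.
from pathlib import Path

CATEGORIES = {
    "requirements":    ("requirement", "spec", "user-story"),
    "design":          ("design", "architecture"),
    "test planning":   ("test-plan", "test_plan", "testplan"),
    "automated tests": ("test_", "_test."),
    "planning":        ("roadmap", "schedule", "milestone"),
    "defect tracking": ("bug", "defect", "issue"),
}

def inventory(repo_root: str) -> dict:
    """Count files whose names suggest each category of artifact."""
    counts = {name: 0 for name in CATEGORIES}
    for path in Path(repo_root).rglob("*"):
        if not path.is_file():
            continue
        name = path.name.lower()
        for category, keywords in CATEGORIES.items():
            if any(keyword in name for keyword in keywords):
                counts[category] += 1
    return counts

if __name__ == "__main__":
    for category, count in inventory(".").items():
        flag = "" if count else "   <-- nothing found; ask why"
        print(f"{category:16s} {count:5d}{flag}")
```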

What is the depth of coverage of the artifacts? Do the artifacts try to document every detail, or are they more general? Is the level of detail appropriate to the kind of work the company does?

Are the artifacts substantive? We often see artifacts that are so generic that they are useless to the project. Sometimes we see unmodified boilerplate presented as project documentation. Related: do the people creating the artifacts appear to understand why they are creating them, or does it look more like they’re “going through the motions” without understanding why they’re doing what they’re doing?

What is the quality of the work in the artifacts? For example, are requirements statements well formed? Is there evidence that customers have been involved in formulating requirements? Is there evidence that work is getting reviewed? Do the plans look realistic and achievable? Does the design go beyond just drawing boxes and lines and appear to contain some thought?

How long does it take the organization to produce the artifacts? It isn’t unheard of for organizations to generate artifacts for the first time when they receive our request to show us their work. These organizations know at some level that they should be creating certain artifacts, but they haven’t been.

How recently have the artifacts been updated? This gives one indication of whether the artifacts are actually being used. We assume that if no artifacts have been updated for the past 6 months, they are most likely being ignored (or were never relevant in the first place).
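When the artifacts are kept under revision control, the commit history answers this question directly. The sketch below (assuming git, a docs/ directory, and the six-month threshold mentioned above) flags files that haven’t been touched recently; all three of those specifics are assumptions to adapt, not rules.

```python
# Flag artifacts that have not been touched for roughly six months, using the
# last commit date recorded by git. The docs/ location and the six-month
# threshold are assumptions carried over from the heuristic described above.
import subprocess
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=182)  # roughly six months

def last_commit_date(path: str):
    """Return the author date of the most recent commit touching `path`, or None."""
    out = subprocess.run(
        ["git", "log", "-1", "--format=%aI", "--", path],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return datetime.fromisoformat(out) if out else None

def stale_artifacts(paths):
    """Return (path, last_commit_date) pairs for files untouched beyond STALE_AFTER."""
    now = datetime.now(timezone.utc)
    stale = []
    for path in paths:
        when = last_commit_date(path)
        if when is not None and now - when > STALE_AFTER:
            stale.append((path, when))
    return stale

if __name__ == "__main__":
    # Assumes the documentation artifacts live under docs/ in a git repository.
    tracked = subprocess.run(
        ["git", "ls-files", "docs/"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    for path, when in stale_artifacts(tracked):
        print(f"{when.date()}  {path}")
```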

What evidence do we see that the artifacts are being used? In other words, is the team creating “write-only” documentation that isn’t really serving any useful purpose on the project, or are the artifacts being used?

Are the artifacts readily accessible to the project team via a revision control system, wiki pages, or some other means? If team members don’t have ready access to materials, that calls into question the degree to which they can actually be using the materials.
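A related quick check, again assuming git is the revision control system in use, is to look for documentation files that sit in the project tree but were never committed, and so aren’t actually available to the rest of the team. The file extensions below are illustrative assumptions.

```python
# List documentation files that sit in the working tree but are not tracked by
# the revision control system, and therefore are not readily shared with the
# team. Assumes git; the file extensions below are illustrative assumptions.
import subprocess
from pathlib import Path

DOC_EXTENSIONS = {".md", ".rst", ".docx", ".xlsx", ".pdf", ".drawio"}

def untracked_docs(repo_root: str = "."):
    # `git ls-files --others --exclude-standard` lists files in the working
    # tree that git does not track, skipping anything covered by .gitignore.
    others = subprocess.run(
        ["git", "-C", repo_root, "ls-files", "--others", "--exclude-standard"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return [p for p in others if Path(p).suffix.lower() in DOC_EXTENSIONS]

if __name__ == "__main__":
    for path in untracked_docs():
        print(path)
```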

We’ve worked with so many different companies in so many different industries that we no longer have many preconceived notions of what specific artifacts need to look like. We’ve seen good organizations with minimal documentation, and we’ve seen bad organizations with extensive documentation. What we are looking for is this: Do the artifacts, considered as a set, show us a project that is being run in an organized, deliberate way, one that is paying attention and learning from its experience? Or do the artifacts show a project that is chaotic, constantly in crisis mode, and mostly working reactively rather than proactively?

When we do assessments with organizations, occasionally we’re surprised to find an organization that is more effective than we would have thought based on our document reviews, but that’s the exception, and usually we can draw numerous valid conclusions just by doing software archaeology.