Primary References

**McConnell, Steve. Software Estimation: Demystifying the Black Art,** Redmond, WA: Microsoft Press, 2006. The companion book to this class; it focuses on art-based estimation techniques. Full details are available at www.stevemcconnell.com/est.htm.

**Stutzke, Richard D. Estimating Software-Intensive Systems,** Upper Saddle River, NJ: Addison-Wesley, 2005. Stutzke provides several additional methods for estimating schedule, most of which are more mathematically intensive than the techniques described in this class. Chapter 5 of Stutzke’s book discusses judgment-based estimation techniques and provides background on some of the math described in this chapter. Chapters 8 and 9 describe additional size estimation techniques, including Use Case Points, Application Points, Web Objects, and simplified function point techniques. Stutzke also discusses size estimation on COTS (commercial off-the-shelf) projects. Chapter 12 describes approaches to effort allocation that are based on Cocomo 81 and Cocomo II. Chapters 15 and 23 focus on detailed cost estimation issues. Cost estimation and other cost-related issues are a major focus of Stutzke’s book, and various cost-related tips are sprinkled throughout. Sections 12.1 and 12.2 discuss the relationships among effort, duration, and staff availability. Chapter 20 describes how to create a WBS. Appendix C contains a summary of measures of estimation accuracy.

**Tockey, Steve. Return on Software,** Boston, MA: Addison-Wesley, 2005. Chapter 15 of Tockey’s book contains a good discussion of determining unit cost, including methods of allocating overhead using different costing methods and hazards associated with some of those methods. Chapters 21-23 discuss basic estimation concepts, general estimation techniques, and allowing for inaccuracy in estimates. Tockey includes a detailed discussion of how to build your own Cone of Uncertainty. Chapter 22 discusses estimation with multiple methods, Chapter 23 discusses how to account for inaccuracy in estimates, and Chapters 24-25 discuss how to make decisions under risk and uncertainty.

Secondary References

**Armstrong, J. Scott, ed. Principles of Forecasting: A Handbook for Researchers and Practitioners,** Boston: Kluwer Academic Publishers, 2001. Armstrong is one of the leading researchers in forecasting in a marketing context. Many of the observations in this book are relevant to software estimation. Armstrong has been a leading critic of overly complex estimation models.

**Boehm, Barry W. Software Engineering Economics,** Englewood Cliffs, NJ: Prentice-Hall, 1981. Although this edition has been largely superseded by Software Cost Estimation with Cocomo II (below), it contains interesting, detailed reference tables for effort and schedule breakdowns across activities. Chapter 21 describes a seven-step approach to estimating software projects. Section 22.2 describes the original Delphi method and Boehm’s creation of Wideband Delphi.

**Boehm, Barry, et al. Software Cost Estimation with Cocomo II,** Reading, MA: Addison-Wesley, 2000. This book is the definitive description of Cocomo II. The book’s size is daunting, but it describes the basic Cocomo model within the first 80 pages, including detailed definitions of the effort multipliers and scaling factors discussed in this chapter and how Cocomo II accounts for diseconomies of scale. The rest of the book describes extensions of the model. Boehm was the first to popularize the Cone of Uncertainty (he calls it a funnel curve), and this book contains his most current description of the phenomenon. Appendix A describes effort and schedule breakdowns for waterfall projects, MBASE projects, and Rational Unified Process projects. Table A.10 (which is actually six tables) provides detailed breakdowns of effort and schedule across different activities. Appendix E contains a checklist that’s useful for precisely defining what constitutes a "line of code."

**Cohn, Mike. Agile Estimating and Planning,** Upper Saddle River, NJ: Prentice Hall Professional Technical Reference, 2006. Cohn’s book contains an extended discussion of story points, including planning considerations as well as estimation techniques. Chapter 5 contains a nice description of the difference between ideal effort and planned effort.

**Conte, S. D., H. E. Dunsmore, and V. Y. Shen. Software Engineering Metrics and Models,** Menlo Park, CA: Benjamin/Cummings, 1986. Conte, Dunsmore, and Shen’s book contains the definitive discussion of evaluating estimation models. It discusses the "within 25% of actual 75% of the time" criterion, as well as many other evaluation criteria.

**DeMarco, Tom and Timothy Lister. Waltzing with Bears: Managing Risk on Software Projects,** New York: Dorset House, 2003. This book presents a readable introduction to software risk management and introduces the "nano-probability" term.

**DeMarco, Tom. Controlling Software Projects,** New York: Yourdon Press, 1982. DeMarco discusses the probabilistic nature of software projects.

**Fenton, Norman E. and Shari Lawrence Pfleeger. Software Metrics: A Rigorous and Practical Approach,** Boston, MA: PWS Publishing Company, 1997. Chapter 10 contains a detailed discussion of estimating software reliability. If you don’t like equations with lots of Greek symbols, then this is not the book for you.

**Fisher, Roger, William Ury, and Bruce Patton. Getting to Yes, 2nd Ed.,** New York: Penguin Books, 1991. This book lays out the details of the principled negotiation strategy described in this chapter. The book is packed with memorable anecdotes and makes for interesting reading even if you’re not very interested in negotiation.

**Garmus, David and David Herron. Function Point Analysis: Measurement Practices for Successful Software Projects,** Boston, MA: Addison-Wesley, 2001. This book describes function point counting and presents some simplified counting techniques.

**Gilb, Tom. Principles of Software Engineering Management,** Wokingham, England: Addison-Wesley, 1988. Section 7.14 of Gilb’s book describes using project-specific data to refine estimates. The description of evolutionary delivery throughout the book is based on the expectation that projects will build feedback loops that allow them to be estimated, planned, and managed in a way that allows the project to become self-correcting.

**Goldratt, Eliyahu M. Critical Chain,** Great Barrington, MA: The North River Press, 1997. Goldratt describes an approach to dealing with student syndrome as well as an approach to buffer management that addresses Parkinson’s Law.

**Grady, Robert B. and Deborah L. Caswell. Software Metrics: Establishing a Company-Wide Program,** Englewood Cliffs, NJ: Prentice Hall, 1987.

**Grady, Robert B. Practical Software Metrics for Project Management and Process Improvement,** Englewood Cliffs, NJ: PTR Prentice Hall, 1992. These two books describe Grady’s experience setting up a measurement program at Hewlett-Packard. The books contain many hard-won insights into the pitfalls of setting up a measurement program and some interesting examples of the useful data you can ultimately obtain.

**Humphrey, Watts S. A Discipline for Software Engineering,** Reading, MA: Addison-Wesley, 1995. Humphrey lays out a detailed methodology by which developers can collect personal productivity data, compare their planned results to their actual results, and improve over time. Chapter 5 of Humphrey’s book discusses proxy-based estimation, which he calls the PROBE method, and goes into detail on some supporting statistical techniques. Chapter 5 also discusses Fuzzy Logic. Appendix A contains a short, readable summary of statistical techniques that are useful for software estimation.

**ISBSG. Practical Project Estimation, 2nd Edition: A Toolkit for Estimating Software Development Effort and Duration,** Australia: International Software Benchmarking Standards Group, February 2005. This book contains numerous useful formulas for computing effort estimates from size estimates. The book is refreshingly candid about the accuracy of its formulas; it includes sample size and r-squared values you can use to assess the validity of its formulas.

**Jones, Capers. Applied Software Measurement: Assuring Productivity and Quality, 2nd Ed.,** New York: McGraw-Hill, 1997. Jones discusses the history of function points in detail and presents the arguments against lines-of-code measures. Chapter 3 presents an excellent discussion of the sources of error in size, effort, and quality measurements.

**Jones, Capers. Estimating Software Costs,** New York: McGraw-Hill, 1998. Chapter 14 of Jones’s book contains a detailed discussion and examples of how cost buildups can vary between different kinds of organizations. Chapter 21 explains how unpaid overtime affects cost estimates.

**Jones, Capers. Software Assessments, Benchmarks, and Best Practices,** Reading, MA: Addison-Wesley, 2000. Jones’s book provides some data that is updated or expanded from the data he presents in Estimating Software Costs.

**Jørgensen, M. "A Review of Studies on Expert Estimation of Software Development Effort,"** 2002. This paper presents a comprehensive review of the research on expert estimation approaches. The author draws numerous conclusions from the common research threads and presents 12 tips for achieving accurate expert estimates.

**Laranjeira, Luiz. "Software Size Estimation of Object-Oriented Systems,"** *IEEE Transactions on Software Engineering*, May 1990. This paper provided a theoretical research foundation for the empirical observation of the Cone of Uncertainty.

**Larsen, Richard J. and Morris L. Marx. An Introduction to Mathematical Statistics and Its Applications, 3rd Ed.,** Upper Saddle River, NJ: Prentice Hall, 2001. This book is a fairly readable introduction to mathematical statistics, at least when you consider the subject matter. It’s an unavoidable fact that if you want to use statistical techniques, sooner or later you’ll have to do some math!

**Lorenz, Mark and Jeff Kidd. Object-Oriented Software Metrics,** Upper Saddle River, NJ: PTR Prentice Hall, 1994. Lorenz and Kidd present numerous suggestions of quantities that can be counted in object-oriented programs.

**McGarry, John, et al. Practical Software Measurement: Objective Information for Decision Makers,** Boston, MA: Addison-Wesley, 2002. Section 5.1 discusses considerations to include in an estimation procedure.

**NASA SEL. Manager’s Handbook for Software Development, Revision 1,** Document number SEL-84-101, Greenbelt, MD: Goddard Space Flight Center, NASA, 1990. This document describes the NASA SEL’s estimation approach in more detail.

**NASA. "ISD Wideband Delphi Estimation,"** Number 580-PROGRAMMER-016-01, September 1, 2004, http://software.gsfc.nasa.gov/AssetsApproved/PA1.2.1.2.pdf. This document describes a Wideband Delphi technique used by the NASA Goddard Space Flight Center.

**Putnam, Lawrence H. and Ware Myers. Five Core Metrics.** New York: Dorset House, 2003. This book presents a compelling argument for collecting data on the five core metrics of size, productivity, time, effort, and reliability. Chapter 4 contains an extended discussion of the importance of predictability compared to other project objectives. Chapter 11 goes into detail about the efficiency penalty for exceeding seven people on a medium-sized business systems project.

**Putnam, Lawrence H. and Ware Myers. Measures for Excellence: Reliable Software On Time, Within Budget,** Englewood Cliffs, NJ: Yourdon Press, 1992. This book describes Putnam’s estimation method, including how it addresses diseconomies of scale. I like Putnam’s model because it contains few control knobs and works best when it is calibrated with historical data. The overall context of the book is a detailed, mathematical explanation of Putnam’s estimation model, so it can be slow going, but Putnam and Myers also provide numerous useful rules of thumb for planning.

**Software Engineering Institute’s Software Engineering Measurement and Analysis (SEMA) website,** www.sei.cmu.edu/sema/. This comprehensive website helps organizations create data collection (measurement) practices, as well as practices for using the data they collect.

**Wiegers, Karl. "Stop Promising Miracles,"** *Software Development*, February 2000. Wiegers’ paper describes a variation on the Wideband Delphi technique described in this chapter.

**Wiegers, Karl. Software Requirements, 2nd Ed.,** Redmond, WA: Microsoft Press, 2003. Wiegers describes numerous practices that help elicit good requirements in the first place, which substantially reduces requirements volatility later in a project.

**www.construx.com/resources/surveyor/.** This site provides a free code-counting tool called *Construx Surveyor*.

**www.ifpug.org.** The International Function Point Users Group is the definitive source for current function point counting rules.