Software Estimation Lessons Learned from Covid-19 Forecasting

What software estimation lessons can be learned from forecasting the pandemic?

Jan. 28 at 12:00 pm Pacific Time

Code Complete author Steve McConnell is an active contributor to the Ensemble model, the CDC’s coronavirus forecast of record. In addition to Steve’s forecasts, the Ensemble model receives contributions from teams at Johns Hopkins University, MIT, Harvard, USC, IHME, Google, Microsoft, Los Alamos National Labs, and other well-known organizations. In all, more than 40 institutions contribute to the model.

The forecast methods used by these teams are, for the most part, public information. Teams have submitted more than 3,000 forecast sets, which include more than 150,000 individual forecasts. This data set of forecast methods, forecasts, and actual performance provides an unmatched opportunity to learn which forecasting techniques are effective and which are not.

Steve is not an epidemiologist, but his forecast model, CovidComplete, has consistently ranked among the five most accurate forecast models, and it is frequently the single best-performing model. His coronavirus forecasting has benefited from past lessons learned in software estimation.

Now join Steve to see just how much the world of software estimation can benefit from lessons learned in forecasting the pandemic.

In this webinar, you’ll learn:

  • The relationship between forecasts and estimates
  • How much difference exists between good and bad forecasts
  • The factors that make some forecast models more accurate than others
  • How to express the right amount of uncertainty in an estimate
  • Specific techniques to improve your software estimates

Presented by

Steve McConnell
Construx CEO & Estimation Expert

Sign up today
