We know more today than we knew yesterday, and tomorrow we’ll know even more.
The key to giving a more reliable and accurate estimate is simple: empirical knowledge. Not only knowledge of the tech specifics, but also of the information available and the level of detail at each stage of the project process.
One of the central themes in McConnell's Software Estimation: Demystifying the Black Art is the concept of the Cone of Uncertainty. As the book says: The Cone of Uncertainty shows how estimates become more accurate as a project progresses. So, how does it work? Web development projects involve making an enormous number of decisions, and uncertainty in the project comes from uncertainty about how those decisions will be resolved. As you make progress, accumulate knowledge, and dive deeper into the details, you reduce the uncertainty and can give an estimate with a higher confidence level.
The graphic is taken from the same book by McConnell and is a simple visualization of the Cone. The horizontal axis shows common project stages. Terminology varies from one team to another; it's up to every team to set up the project flow they find useful for their needs. We refer to the agreed-upon vision that applies to web development services, internal business systems, and most other kinds of software projects. The vertical axis represents the degree of error found in estimates. The estimates could be for how much a particular feature set will cost and how much effort will be required to deliver it, or for how many features can be delivered for a given amount of effort; either way, early estimates are subject to a high degree of error.
Estimates created at Initial Concept time can be inaccurate by a factor of 4x on the high side or 4x on the low side (also expressed as 0.25x, which is just 1 divided by 4). The total range from high estimate to low estimate is 4x divided by 0.25x, or 16x!
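The arithmetic above is easy to check. A minimal sketch, assuming a hypothetical nominal estimate of 100 hours (the 100-hour figure is ours for illustration; the 4x and 0.25x multipliers come from the text):

```python
# Initial Concept error range: actual effort can be up to 4x the estimate
# on the high side, or as little as 0.25x (1/4) on the low side.
nominal_estimate = 100       # hypothetical nominal estimate, in hours

high_factor = 4.0            # high-side multiplier from the text
low_factor = 1 / 4.0         # low-side multiplier, i.e. 0.25x

high = nominal_estimate * high_factor   # worst-case actual effort
low = nominal_estimate * low_factor     # best-case actual effort

# Total spread from lowest to highest possible outcome.
total_range = high_factor / low_factor  # 4 / 0.25 = 16

print(f"Actual effort could fall anywhere between {low:.0f} and {high:.0f} hours")
print(f"High-to-low ratio: {total_range:.0f}x")
```

For a 100-hour estimate, the true effort could plausibly be anywhere from 25 to 400 hours, which is what a 16x spread means in practice.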
A mistake that any client or manager could make is to ask developers: "If you spend another week, could you rework your estimate so it contains less uncertainty?" A fair question, but unfortunately not a feasible demand.
Research by Luiz Laranjeira suggests that the accuracy of the software estimate depends on the level of refinement of the software’s definition (Laranjeira 1990). The more refined the definition, the more accurate the estimate.
Since the project itself contains variability, it might seem logical that the only way to reduce the variability is to get to the end of the project, which could take forever. Fortunately, this is misleading. In reality, the milestones listed tend to be front-loaded in the project's schedule. If we redraw the Cone on a calendar-time basis, it would look like this:
As you can see from this version of the Cone, estimation accuracy improves rapidly during the first 30% of the project, from ±4x to ±1.25x.
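The narrowing can be sketched stage by stage. The ±4x and ±1.25x endpoints come from the text; the intermediate multipliers are the commonly cited values from McConnell's Cone and should be treated as illustrative, not exact. The 100-hour nominal estimate is hypothetical:

```python
# Illustrative Cone of Uncertainty multipliers per project stage.
# Endpoints (0.25x-4.0x and 0.8x-1.25x) are quoted in the text;
# intermediate rows are the commonly cited Cone values.
stages = [
    ("Initial Concept",             0.25, 4.0),
    ("Approved Product Definition", 0.5,  2.0),
    ("Requirements Complete",       0.67, 1.5),
    ("UI Design Complete",          0.8,  1.25),
    ("Detailed Design Complete",    0.9,  1.1),
]

nominal = 100  # hypothetical nominal estimate, in hours

for name, lo, hi in stages:
    # Spread between the optimistic and pessimistic outcomes at this stage.
    spread = hi / lo
    print(f"{name:28s} {nominal * lo:6.0f}..{nominal * hi:4.0f} hours "
          f"(range {spread:.1f}x)")
```

Running this shows the range collapsing from 16x at Initial Concept to roughly 1.6x once the UI design is complete, which is why refining the software's definition, not simply waiting, is what narrows the Cone.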
The Cone has several ramifications, the most important of which is that early project estimates are inherently inaccurate. Furthermore, the Cone doesn't narrow itself; it narrows only as you make decisions that remove sources of variability. If a project is not well controlled or well estimated, you can end up with a Cloud of Uncertainty that contains even more estimation error than the Cone represents.
No matter what methodology is chosen for the project, the flow of information is usually the same: we know the least at the beginning and the most when we close the project and reach its goal. That means the concept of the Cone is helpful for project managers, development teams, and entrepreneurs involved in the digital production process.