This page lists the primary sources underpinning this wiki, curated to the top 20% of references: those with the highest evidentiary weight and broadest applicability across project types and sectors.
The most commonly cited figures on project failure come from the Standish Group's CHAOS Report. These are widely recognised but also legitimately challenged. Critics point to real methodological concerns: the sample skews toward IT projects, the definition of success has shifted across editions, and the underlying data is proprietary and not independently replicable.
The appropriate response is not to drop the claim — it is to widen the evidentiary base. The pattern of poor project outcomes is not a Standish invention. It is replicated across independent datasets, sectors, and methodological traditions spanning 70 years.
The historical record is blunt. IT project failure has been documented since the beginning of the field (Caminer, 1958). As early as 1963, Garrity observed that only 1 in 3 computer systems efforts produced tangible benefits, while the remaining 2 in 3 produced little or none. There is no credible evidence that success rates have improved significantly in the seven decades since (Brynjolfsson & Hitt, 1998; Clegg et al., 1997; Sauer, 1999; Willcocks & Margetts, 1994; Schmidt, 2023).
Critically, poor outcomes are not an IT-sector anomaly. The same pattern is found across mega-projects, engineering and construction, business transformation, and organisational change (Flyvbjerg & Gardner, 2023; Radujković et al., 2021).
Part of the difficulty is definitional: there is currently no consensus on what project success or failure means, which has made it impossible for the profession to make systematic progress on the factors that drive improvement (Ika & Pinto, 2022). This wiki addresses that directly — see Why This Wiki Exists.
For a practitioner-level account of this 70-year pattern and why the profession has been too slow to confront it, see: Raymond Young, 70 Years of Poor Project Success Rates (LinkedIn).
How Big Things Get Done
Bent Flyvbjerg & Dan Gardner, Penguin Random House, 2023
Publisher page | Original 2014 peer-reviewed paper, Project Management Journal
The most rigorous cross-sector empirical analysis of project outcomes in the literature. Flyvbjerg's database spans 16,000+ projects across 136 countries and 20+ sectors, going back to 1910. Peer-reviewed findings show that roughly 9 in 10 major projects exceed budget or schedule, and fewer than 1% deliver on time, on budget, and with the promised benefits. The pattern holds across 70+ years of data and shows no sign of improvement over time.
This is not Standish data. It is independently assembled, peer-reviewed, and spans infrastructure, IT, defence, and urban development across public and private sectors. McKinsey has independently cited and built on Flyvbjerg's methodology. When critics challenge Standish, Flyvbjerg is the answer.
Delivering Large-Scale IT Projects on Time, on Budget, and on Value