Essay

The Incentives Are Wrong: Why We Can Predict Baseball But Not Bridges

Every NBA possession is modelled. Every esports round is priced. Every Elon Musk tweet has a probability attached to it before he sends it. We cannot tell you, within a factor of two, what a $30 billion rail line will cost.

26 April 2026 · 5 min read

Somewhere along the way, forecasting split into two worlds. In one, predictions are precise, continuously updated, and ruthlessly evaluated. In the other, they are vague, infrequent, and largely consequence-free.

And yet both are called “forecasting.”


The illusion of forecasting

Forecasts are everywhere. Governments publish them. Consultants produce them. Ministers cite them at press conferences.

They are formatted, footnoted, and caveated. They look rigorous.

But most share three properties:

  • No one is forced to take a position
  • No one is continuously scored
  • No one is meaningfully penalised for being wrong

A forecast without a commitment is just an opinion with formatting.


Where forecasting actually works

If you want to see forecasting done properly, look at sports.

Not because it is simple — it is not — but because the system forces accuracy.

Every game resolves quickly. Predictions are tested within hours. Models are updated constantly. Performance is tracked through markets. If your probabilities are wrong, you lose money. If they are right, you gain it. There is no hiding.

Prices aggregate information in real time. Disagreement becomes trades. Trades become better forecasts.

This is forecasting under pressure. Being wrong is expensive.
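To make “continuously scored” concrete, here is a minimal sketch in Python. The forecasters and numbers are invented for illustration; the point is the mechanism. Each stated probability is checked against the outcome as soon as the game resolves, and the error accumulates against whoever stated it. One standard tool for this is the Brier score, where lower means better calibrated.

```python
# Minimal sketch of continuous scoring (illustrative; all names and
# numbers are made up). Each forecast is a probability, each game
# resolves to 0 or 1 within hours, and every forecaster carries a
# running Brier score. Lower is better.

def brier(prob: float, outcome: int) -> float:
    """Squared error between a stated probability and what happened."""
    return (prob - outcome) ** 2

# (stated probability that the home team wins, actual outcome)
history = {
    "careful":  [(0.72, 1), (0.35, 0), (0.60, 1), (0.15, 0)],
    "careless": [(0.95, 1), (0.05, 0), (0.99, 0), (0.50, 0)],
}

for name, record in history.items():
    avg = sum(brier(p, o) for p, o in record) / len(record)
    print(f"{name}: average Brier score = {avg:.3f}")
```

In a betting market the score is denominated in money rather than Brier points, but the structure is the same: a persistent, cumulative record that separates good probabilities from bad ones.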


Infrastructure: the opposite world

Now look at infrastructure.

The Sydney Opera House was budgeted at AUD $7 million in 1957 and opened in 1973 at a final cost of $102 million — roughly fourteen times its original estimate. California’s high-speed rail was approved by voters in 2008 at $33 billion; the latest official estimates put the full system above $100 billion, with the opening date repeatedly pushed back. The UK’s HS2 began life as a £37 billion programme and is now estimated above £100 billion before a single passenger has travelled on it.

These are not outliers. Bent Flyvbjerg’s database of more than 250 large transport projects across 20 countries finds cost overruns on roughly nine in ten — averaging 45% on rail and 34% on bridges and tunnels. The errors are not symmetric. Projects almost never come in cheaper than promised.
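For anyone who wants to check the arithmetic, the overrun calculation is just the ratio of the outturn (or latest estimate) to the figure used to win approval. The snippet below uses the rounded numbers cited above; the California and HS2 values are lower bounds, so the true ratios are higher.

```python
# Overrun arithmetic for the figures cited above: latest estimate or
# final cost divided by the number used to secure approval. Rounded
# and illustrative only.
projects = {
    "Sydney Opera House (A$m)":          (7, 102),
    "California high-speed rail ($bn)":  (33, 100),  # latest estimates exceed this
    "HS2 (£bn)":                         (37, 100),  # latest estimates exceed this
}

for name, (approved, latest) in projects.items():
    ratio = latest / approved
    overrun = (latest - approved) / approved
    print(f"{name}: {ratio:.1f}x the approved figure ({overrun:+.0%} overrun)")
```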

This is usually framed as a modelling failure.

It isn’t.

It is an incentive success.

Projects take 5 to 15 years. By the time outcomes are known, the people who made the forecasts have moved on. There is no continuous scoring. No leaderboard. No persistent record of accuracy.

Consultants are paid for producing forecasts, not for being right. Politicians are rewarded for getting projects approved, not for delivering them on budget. Costs are underestimated to secure approval. Benefits are overstated to justify investment.

The system is not failing. It is doing exactly what it is designed to do.

The forecasts are accurate — just not about the bridge. They are accurate forecasts of what it takes to get a bridge funded.


The real constraint

Forecasting quality is not primarily a function of intelligence, data, or models.

It is a function of structure.

Good forecasting systems share four properties:

  • Fast feedback loops
  • Continuous scoring
  • Skin in the game
  • Market-based aggregation

Sports has all four. Infrastructure has none.

Being wrong about a bridge is staggeringly expensive — to taxpayers, to commuters, to the next generation that inherits the debt. The cost is real. It just doesn’t connect back to the forecast — and neither does the credit when someone gets it right. Until that connection exists, the careful forecaster looks the same as the careless one.
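None of this requires exotic machinery. As a purely hypothetical sketch (every name, field, and figure below is invented), a forecast registry only needs to record the estimate used at approval, attach it to whoever produced it, and score it when the project finally resolves, however many years later.

```python
# Hypothetical forecast registry (all names and figures invented):
# record the estimate at approval, fill in the outturn when it is
# finally known, and keep the score attached to the forecaster.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CostForecast:
    forecaster: str
    project: str
    estimate_bn: float                  # figure used to secure approval
    outturn_bn: Optional[float] = None  # filled in when the project resolves

    def error_ratio(self) -> Optional[float]:
        if self.outturn_bn is None:
            return None                 # still unresolved; no score yet
        return self.outturn_bn / self.estimate_bn

registry = [
    CostForecast("careful consultancy",  "metro extension",  8.0, 9.1),
    CostForecast("careless consultancy", "new rail line",   30.0, 74.0),
    CostForecast("careless consultancy", "harbour bridge",   4.0),  # unresolved
]

for f in registry:
    r = f.error_ratio()
    score = f"{r:.2f}x" if r is not None else "pending"
    print(f"{f.forecaster:22s} {f.project:16s} outturn / estimate = {score}")
```

The hard part is not the data structure. It is making anyone's pay, reputation, or next contract depend on what it prints.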

We don’t have a forecasting problem. We have an incentives problem.