
Can Prediction Markets Really Predict the Future?

By Prediction Circle Team · Updated March 2026 · 14 min read

There's a moment, usually during an election or a major geopolitical event, when someone shares a screenshot from Polymarket or Kalshi and says: the market thinks there's a 73% chance of X. And the implication is clear: this is smarter than polls, smarter than the experts on TV, smarter than your gut. Maybe it is. But the answer is more conditional than most headlines suggest, and the conditions matter enormously.

"Smarter than the experts on TV" is a low bar. The real question is how accurate are prediction markets at what they claim to do, and the research gives a genuinely interesting answer: sometimes yes, sometimes no, and the difference is almost never about the market itself.

Prediction markets are platforms where people buy and sell contracts that pay out based on real outcomes, and the trading price is often interpreted as the crowd's probability estimate for an event. Research finds these prices are often informative, especially for short-horizon events with clear resolution and adequate participation, but accuracy drops sharply at longer time horizons, in thin markets, and when settlement rules are ambiguous. For more on how those contracts actually settle, see how prediction market contracts resolve.

What Prediction Markets Are Actually Claiming

A prediction market is a platform where people trade contracts that pay out based on what actually happens. In the most basic version, a "Yes" share pays $1 if an event occurs and $0 if it doesn't, so if a contract trades at $0.65, that's often interpreted as the crowd's estimate that there's a 65% chance the event happens.
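
To make that shortcut concrete, here's a minimal Python sketch (ignoring fees, capital costs, and risk appetite, all of which the next paragraph complicates): buying a Yes share has positive expected value exactly when your own probability estimate exceeds the price.

```python
def expected_profit_per_share(price: float, belief: float) -> float:
    """Expected profit from buying one Yes share at `price`, given a
    subjective probability `belief` that the event occurs. The share
    pays $1 on Yes, $0 on No; fees and capital costs are ignored."""
    return belief * (1.0 - price) + (1.0 - belief) * (0.0 - price)
    # algebraically this is just belief - price:
    # positive exactly when belief > price

print(expected_profit_per_share(0.65, 0.70))  # +0.05: worth buying
print(expected_profit_per_share(0.65, 0.60))  # -0.05: worth selling
```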

That "price equals probability" shortcut is useful. It's also an approximation. Whether it holds depends on who can trade, how the market is structured, and whether traders are acting purely on their beliefs or also on risk appetite, capital constraints, and platform fees.

Researchers who study this distinguish two separate questions that get collapsed in popular coverage:

Informativeness: does the price move in the right direction as real information arrives? Does it contain signal, not just noise?

Calibration: when a market says 70%, do events of that type actually happen roughly 70% of the time, across many comparable cases?
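
The calibration question is easy to express in code. Here's a minimal sketch, assuming a hypothetical dataset of final market prices and resolved 0/1 outcomes (the variable names and bucket count are illustrative):

```python
import numpy as np

def calibration_table(prices, outcomes, n_bins=10):
    """Bucket forecasts by price and compare each bucket's average
    price with the observed frequency of the event. In a well-
    calibrated market the two columns roughly match."""
    prices = np.asarray(prices, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)  # 1.0 = event happened
    bins = np.minimum((prices * n_bins).astype(int), n_bins - 1)
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            print(f"priced ~{prices[mask].mean():.2f} -> "
                  f"happened {outcomes[mask].mean():.2f} (n={mask.sum()})")
```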

Economist Charles Manski of Northwestern University made an influential version of this point: a market price doesn't tell you exactly what the crowd believes. It gives you a range. Two people can trade at the same price while holding very different underlying beliefs, and you can't tell from the price alone which belief the crowd actually holds. A market can be informative without being well-calibrated. It can tell you something meaningful about direction without telling you precise probabilities. Keeping that distinction alive is the difference between using prediction markets intelligently and trusting them blindly.

How Accurate Are Prediction Markets? The Evidence for When They Work

The research case for prediction markets is real, and it's strongest under specific conditions. How accurate prediction markets are comes down to three factors: short time horizons, clearly resolvable questions, and adequate participation.

Scientific replication. A study published in Science used prediction markets to forecast which psychology experiments would replicate. Markets correctly predicted outcomes in about 71% of cases, well above the 50% you'd get from random guessing, and better than a survey-based baseline run alongside it. When the crowd has dispersed information and a clean binary outcome, it performs.

Disease surveillance. A pilot program deploying prediction markets for influenza forecasting across multiple US states reported high accuracy up to four weeks ahead, with performance improving as contracts approached their resolution date. Clinicians with different patient populations were deliberately mixed in the participant pool. The design explicitly tried to prevent a monoculture of shared blind spots.

Awards markets. When Kalshi and Polymarket traded Oscars contracts this year, one outlet tracked results across 24 categories: one platform got 18 right, the other 19. Clean resolution dates, massive public information, strong financial incentives. These are conditions prediction markets like. The hit rate showed it.

Three conditions made each of these work: the crowd had real, dispersed information to contribute; the outcome resolved unambiguously; enough people were trading to prevent thin-market noise from dominating. Change any of those three, and the picture changes.

Where Prediction Markets Fail: Accuracy Breaks Down

The failures aren't random. They follow patterns.

Long-horizon bias. A major study by researchers Page and Clemen at Duke University analyzed thousands of markets and hundreds of thousands of transactions. They found that prediction markets are reasonably well-calibrated for near-term events but become systematically biased for events further in the future. Prices drift toward 50%, underpricing strong favorites and overpricing longshots. Part of it is economic: holding capital in a contract for months has a cost, which distorts prices away from true probabilities. This isn't a footnote. It's a recurring empirical finding that limits how much you should trust a prediction market pricing something six months out.
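
The capital-cost mechanism can be illustrated with a stylized sketch (my simplification, not Page and Clemen's model): if money locked up until resolution must clear some return hurdle, there's a band of prices around the true probability that no rational trader is paid to correct, and that band is lopsided toward 50 cents.

```python
def no_trade_band(q: float, hurdle: float) -> tuple[float, float]:
    """Stylized illustration of long-horizon drift. If capital tied up
    until resolution must earn a gross return of `hurdle` (e.g. 1.10
    for a 10% opportunity cost), buying Yes at price p is attractive
    only while q / p > hurdle, and buying No only while
    (1 - q) / (1 - p) > hurdle. Inside the resulting band, mispricing
    attracts no correcting trades."""
    lower = q / hurdle                  # below this, Yes-buyers step in
    upper = 1.0 - (1.0 - q) / hurdle    # above this, No-buyers step in
    return lower, upper

for q in (0.05, 0.50, 0.95):
    lo, hi = no_trade_band(q, hurdle=1.10)
    print(f"true prob {q:.2f}: uncorrected prices span [{lo:.3f}, {hi:.3f}]")
# true prob 0.05: uncorrected prices span [0.045, 0.136]
# true prob 0.95: uncorrected prices span [0.864, 0.955]
# A 95% favorite can sit at 86 cents and a 5% longshot at 14 cents
# without anyone being paid to fix it -- both pulled toward 50.
```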

Attention shocks. A detailed transaction-level study found that when major news breaks, a wave of relatively inexperienced traders floods in. Measured market efficiency decreases immediately afterward, before experienced traders correct the mispricing. The crowd is most useful when it's composed of people who know what they're doing. Flash attention from casual observers degrades the signal before it recovers.

The hardest failure: deciding what happened. The most visible breakdowns in crypto-based prediction markets aren't cases where the crowd guessed wrong about an event. They're cases where the contract couldn't cleanly map reality into a binary outcome, and the market ended up pricing not the event's probability, but traders' beliefs about how the platform's resolution mechanism would rule. A high-volume Polymarket contract about whether Ukrainian President Zelensky wore "a suit" at a particular meeting is the clearest recent example: large sums of money collided with definitional ambiguity, a token-voting dispute process, and allegations that concentrated voting power could game the outcome. The crowd wasn't wrong about what happened. The market broke because defining what happened was harder than anyone anticipated.

The Famous Misses: When Prediction Markets Got It Wrong

Two misses come up every time prediction markets are criticized, and they deserve honest treatment rather than either dismissal or overclaiming.

The 2016 US presidential election. Major prediction markets implied roughly an 80% probability of a Clinton victory the day before the election. She lost. This is real, and it's also not evidence that markets are broken. An 80% probability means the other thing happens 1 in 5 times. What the miss actually illustrates is that high probability isn't a guarantee, that correlated errors can happen even with money at stake, and that tail events are the ones that get remembered precisely because they're surprising. It's also worth noting that the long-horizon bias documented in the academic literature may have kept Trump's odds systematically underpriced for months. The miss wasn't only bad luck.

Brexit. Bookmakers priced Remain as the clear favorite. Leave won. Same lesson, higher stakes: even well-functioning prediction markets can converge on a narrative that fails when late shifts, misread turnout dynamics, or correlated errors among participants all point in the same wrong direction.

Neither of these means prediction markets are useless. They mean prediction markets produce probabilities, not prophecies, and that distinction matters enormously for how you use them.

Can Prediction Markets Be Manipulated?

The manipulation question breaks into two separate sub-questions, and conflating them produces confused conclusions. Can prediction markets be manipulated at all? And if so, can a single actor sustain a false signal long enough to matter?

Can someone move prices temporarily? Yes. Evidence from the Iowa Electronic Markets, one of the longest-running political prediction platforms, shows manipulation attempts can cause large immediate price moves. Lab experiments confirm the same.

Can someone sustain a false signal? This is where it gets more complicated. In relatively liquid settings, sustained manipulation is difficult and expensive. Other traders tend to trade against distortions, correcting mispriced contracts. One experimental study found that manipulators couldn't distort accuracy because other participants adjusted their behavior in response to biased orders.

But that's not the whole story. A more recent large-scale field experiment found that the effects of randomized price shocks can remain visible up to 60 days later. Prediction markets with fewer traders and lower volume are much harder to defend against sustained pressure. Thin markets are meaningfully more vulnerable than liquid ones.

Then there's the modern crypto-platform version of the problem. A study by Columbia University researchers, reported by Fortune in November 2025, found evidence that a large fraction of trading on Polymarket may have been artificial wash trading, where the same actor buys and sells to simulate volume without genuine belief behind the trades. The researchers acknowledged methodology caveats, but even if the final odds weren't corrupted, inflated activity figures corrupt your sense of how many independent people are actually forming a collective judgment.
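
The researchers' actual methodology is more sophisticated than anything shown here, but a first-pass screen for wash-like activity is easy to sketch. This is a purely illustrative heuristic with made-up field names, not the Columbia study's method:

```python
def flag_round_trips(trades, window_s=3600):
    """Flag wallets that take both sides of the same contract within
    `window_s` seconds -- a crude signature of volume-inflating wash
    trades. Each trade is (wallet, contract, side, unix_timestamp),
    where side is 'buy' or 'sell'."""
    last_seen = {}  # (wallet, contract) -> (side, timestamp)
    flagged = set()
    for wallet, contract, side, ts in sorted(trades, key=lambda t: t[3]):
        key = (wallet, contract)
        if key in last_seen:
            prev_side, prev_ts = last_seen[key]
            if prev_side != side and ts - prev_ts <= window_s:
                flagged.add(wallet)
        last_seen[key] = (side, ts)
    return flagged
```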

Most recently, a March 2026 Guardian report described trading patterns in war-related Polymarket contracts that market structure analysts and academic researchers characterized as consistent with insider-informed trading. Positions moved ahead of geopolitical developments in ways that looked less like crowd wisdom and more like a narrow set of people who knew something first. Attribution is hard with pseudonymous wallets. But if that's what's happening in sensitive geopolitical markets, the "aggregated public information" story has a serious crack in it.

Are Superforecasters More Accurate Than Prediction Markets?

A persistent myth is that prediction markets are always superior to trained human forecasters. The research tells a more conditional story.

A team led by Wharton's Pavel Atanasov ran one of the most rigorous tests of this question using data from a multi-year geopolitical forecasting tournament. Their finding: well-run prediction markets and well-run human forecasting teams can be statistically tied in accuracy. The margin between them, when measured by Brier scores (think of it as a scoring system where zero is perfect and two is the worst possible), can disappear entirely. What actually drove accuracy in both systems wasn't the format. It was the quality of the forecasters. Small groups of skilled, experienced predictors outperformed larger, less selective crowds whether they were trading in a market or filling out a survey. This is the accuracy edge that separates casual market-watching from informed trading.
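
For concreteness, the two-category Brier score those tournaments used can be computed like this (a standard formula, not anything specific to the Atanasov study):

```python
def brier_score(prob_yes: float, occurred: bool) -> float:
    """Two-category Brier score: the sum of squared errors across both
    outcome classes. 0.0 is a perfect forecast; 2.0 is the worst
    possible (full confidence in the wrong outcome)."""
    o = 1.0 if occurred else 0.0
    return (prob_yes - o) ** 2 + ((1.0 - prob_yes) - (1.0 - o)) ** 2

print(brier_score(0.80, True))   # 0.08: confident and right
print(brier_score(0.80, False))  # 1.28: confident and punished
```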

A study by Yale SOM's Jason Dana pushed further: in contexts with highly informed participants, correctly aggregated self-reported probabilities were sometimes just as accurate as market prices, and sometimes significantly more accurate. Prediction markets don't automatically extract all available information even when incentives exist. Which raises the fair question: if a well-run survey can match a market, why accept the downsides of prediction markets, including manipulation risk, capital requirements, and participation barriers? The honest answer is that markets scale better across diverse participants and topics than curated expert panels do. But the scaling advantage only holds when the market is actually well-designed and liquid.

The bottom line isn't "markets good, superforecasters better" or the reverse. Aggregation quality and participant quality often matter more than whether you call the system a market. A prediction market with engaged, informed, diverse traders and a robust design can be excellent. One with thin participation, homogenous viewpoints, and ambiguous resolution rules can be worse than a well-run survey.

When Do Prediction Markets Work? A Practical Guide

Knowing when prediction markets work, and when they don't, is more useful than blanket trust or blanket distrust. For a broader introduction to how these platforms operate, see what is a prediction market. The two lists below (and the checklist sketch after them) summarize the conditions. Prediction markets tend to be most reliable when:

  • Many independent traders can participate
  • Information is widely available but dispersed across many people
  • The event resolves unambiguously, and soon
  • Liquidity is sufficient to prevent noise from dominating

They tend to underperform when:

  • The event is far in the future (long-horizon bias)
  • Participation is thin, restricted, or homogenous
  • "What happened" is contestable and resolution can be gamed or disputed
  • The market is small enough that a motivated actor can move prices and keep them moved
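
Those two lists translate naturally into a pre-flight checklist. The thresholds below are rough assumptions of mine, not numbers from the cited research, but they show the shape of the exercise:

```python
def market_reliability_flags(horizon_days: int, n_active_traders: int,
                             resolution_ambiguous: bool,
                             daily_volume_usd: float) -> list[str]:
    """Illustrative checklist (thresholds are rough assumptions, not
    from the cited studies) translating the conditions above into
    warnings to weigh before treating a price as a probability."""
    warnings = []
    if horizon_days > 90:
        warnings.append("long horizon: expect drift toward 50 cents")
    if n_active_traders < 100:
        warnings.append("thin participation: noise and manipulation risk")
    if resolution_ambiguous:
        warnings.append("contestable resolution: price may reflect the "
                        "dispute mechanism, not the event")
    if daily_volume_usd < 10_000:
        warnings.append("low liquidity: a single actor can move the price")
    return warnings
```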

Here's where the evidence actually lands: prediction markets are one of the better tools we have for aggregating dispersed beliefs about uncertain futures. They're not oracles. A 70% probability isn't a prediction. It's a probability. The crowd's confidence is a data point, not a decision.

Frequently Asked Questions

Are prediction markets more accurate than polls?
Sometimes. For near-term, liquid, cleanly resolvable events the record is strong, but the research above shows that well-run forecaster surveys can match or beat market prices. Aggregation quality and participant quality matter more than the format.

Can prediction markets be manipulated?
Temporarily, yes. Sustaining a false signal is difficult and expensive in liquid markets, but thin markets are meaningfully more vulnerable, and field experiments have found price shocks persisting up to 60 days.

Why were prediction markets wrong about the 2016 election?
An 80% price means the other outcome happens one time in five, and the long-horizon bias documented in the academic literature may also have kept the underdog systematically underpriced for months.

What's the difference between informativeness and calibration in prediction markets?
Informativeness asks whether prices move in the right direction as real information arrives; calibration asks whether events priced at 70% actually happen about 70% of the time across many comparable cases.

Are Polymarket and Kalshi reliable?
For short-horizon, high-participation contracts with clear resolution rules, their track record is good. Long-dated, thin, or ambiguously worded contracts deserve more skepticism, and reported wash trading complicates raw volume figures.