Preseason vs. In-Season Projections: How Accuracy Changes Over Time

Projection accuracy is not a fixed property — it shifts, sometimes dramatically, as a season progresses. The gap between what a preseason model predicted and what an in-season model predicts for the same player can be enormous, and understanding why that gap exists is one of the more practically useful things a fantasy manager can internalize. This page examines how projection inputs change between the offseason and Week 10, what drives those changes, and where the accuracy curve actually matters for decisions like drafts, waiver pickups, and trades.


Definition and scope

A preseason projection is a statistical estimate of a player's expected fantasy output for an entire season, built before a single regular-season snap has been played. An in-season projection is the same kind of estimate — points per game, total yards, touchdowns — but recalculated using live data: snap counts, target share, injury reports, depth-chart changes, and opponent matchups.

The distinction matters because the two types of projections are answering slightly different questions. Preseason models ask: given everything known before the season, what should this player produce? In-season models ask: given everything that has actually happened, what should this player produce from here? Those questions sound similar. They are not.
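
Expressed as code, the two questions differ in both their conditioning and the quantity they estimate. The sketch below is illustrative: the function names and the simplified math are assumptions, not any provider's API (the 17-game NFL season is the one grounded constant).

```python
# Illustrative only: the names and simplifying math are assumptions,
# not a real projection system's API.

def preseason_projection(prior_ppg: float) -> float:
    """Given everything known before the season: expected full-season points."""
    return 17 * prior_ppg  # assumes health and a stable role all year

def in_season_projection(observed_ppg: float, games_played: int) -> float:
    """Given what has actually happened: expected points from here.
    (Real systems blend this with a preseason prior; see later sketches.)"""
    return (17 - games_played) * observed_ppg

print(preseason_projection(12.0))       # 204.0 expected season points
print(in_season_projection(15.0, 6))    # 165.0 expected rest-of-season points
```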

At a mechanical level, the inputs and weights that feed each model type differ substantially, and that difference is precisely what creates the accuracy gap this page addresses.


How it works

Preseason projections are built primarily from historical performance data, aging curves, positional baselines, scheme tendencies, and market signals like ADP (Average Draft Position). They rely on reasonable assumptions: that a player will stay healthy, that a team's offensive scheme will resemble prior years, that a depth chart will hold. Those assumptions are wrong with surprising frequency.
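
A rough sketch of how those inputs might combine, assuming a toy aging curve and regression weight; none of the values below come from a published model.

```python
# Minimal preseason sketch. The aging curve, regression weight, and the
# example numbers are illustrative assumptions, not published model values.

def aging_multiplier(age: int, peak_age: int = 26) -> float:
    """Toy aging curve: small decline for each year past an assumed peak."""
    return max(0.70, 1.0 - 0.03 * max(0, age - peak_age))

def preseason_ppg(history_ppg: float, age: int, positional_baseline: float,
                  regression_weight: float = 0.35) -> float:
    """Blend historical scoring with a positional baseline, then age-adjust."""
    blended = (1 - regression_weight) * history_ppg + regression_weight * positional_baseline
    return blended * aging_multiplier(age)

# A 29-year-old RB averaging 15.0 ppg, with a positional baseline of 10.0:
print(round(preseason_ppg(15.0, 29, 10.0), 2))  # 12.06 under these toy weights
```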

Research published by FantasyPros in their annual projection accuracy roundups — which track mean absolute error (MAE) across projection providers — consistently shows that preseason MAE for skill-position players runs materially higher than mid-season MAE for the same players. The pattern holds across NFL, NBA, and MLB.
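
MAE itself is simple to compute; the sketch below shows the metric those roundups track, applied to toy numbers rather than any provider's actual data.

```python
def mean_absolute_error(projected: list[float], actual: list[float]) -> float:
    """Average absolute gap between projected and actual fantasy points."""
    return sum(abs(p - a) for p, a in zip(projected, actual)) / len(projected)

# Toy numbers, not FantasyPros data; they only illustrate the metric.
preseason = [14.0, 9.5, 17.0, 11.0]
midseason = [12.5, 11.0, 15.5, 10.0]
actual    = [12.0, 12.0, 15.0, 9.0]
print(mean_absolute_error(preseason, actual))  # 2.125
print(mean_absolute_error(midseason, actual))  # 0.75
```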

In-season projections benefit from a fundamentally different data environment:

  1. Observed usage rates — Target share and snap count data from real games replace preseason estimates. A wide receiver projected for a 22% target share in August might be running at 31% through Week 6, which is a meaningful recalibration (see the sketch after this list).
  2. Confirmed health status — Injury designations (IR, questionable, out) replace injury probability curves. Injury adjustments in projections become more precise because the injury is either present or it isn't.
  3. Matchup specificity — Week-level opponent data replaces season-average defensive rankings. A running back facing a defense allowing 148 rushing yards per game over the last 4 weeks is a different projection than one built on last year's defensive stats.
  4. Vegas line updates — Game totals and spreads update daily, and Vegas lines have a documented relationship to fantasy scoring environments; mid-season lines reflect more current team-quality signals than preseason lines do.
  5. Role consolidation — Depth charts settle. The receiver who was "battling for snaps" in August either won or lost that battle by October.
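
The target-share recalibration in item 1 can be sketched as a shrinkage estimate: weight the observed rate against the preseason prior, trusting the observation more as games accumulate. The shrinkage constant below is an illustrative assumption.

```python
def projected_target_share(prior_share: float, observed_share: float,
                           games_played: int, shrink: float = 4.0) -> float:
    """Weight observed usage against the preseason prior; more games
    observed means more weight on what actually happened."""
    w = games_played / (games_played + shrink)  # shrink=4 is an assumption
    return w * observed_share + (1 - w) * prior_share

# The WR from item 1: 22% preseason prior, 31% observed through Week 6.
print(round(projected_target_share(0.22, 0.31, 6), 3))  # 0.274
```

Shrinkage like this is why in-season projections move toward, but rarely all the way to, the observed rate.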

The accuracy improvement from preseason to in-season is not linear. It follows roughly an asymptotic curve: the largest single accuracy gain occurs between Week 1 and Week 4, when usage patterns and role clarity emerge. After Week 8, incremental gains from additional games slow considerably, because sample size and projection reliability plateau for most stat categories around 6–8 observed games.
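
One simple way to express that shape is an exponential approach toward a floor; the parameter values below are illustrative assumptions, not fitted to any published accuracy data.

```python
import math

def modeled_mae(week: int, mae_preseason: float = 5.0,
                mae_floor: float = 2.5, tau: float = 3.0) -> float:
    """Toy asymptotic curve: error decays quickly early, then plateaus.
    All three parameters are illustrative assumptions."""
    return mae_floor + (mae_preseason - mae_floor) * math.exp(-week / tau)

for week in (0, 1, 4, 8, 12):
    print(week, round(modeled_mae(week), 2))
# 0 5.0 / 1 4.29 / 4 3.16 / 8 2.67 / 12 2.55 — the largest drop comes
# before week 4, and gains after week 8 are small.
```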


Common scenarios

Breakout players illustrate the gap most vividly. A receiver who enters the preseason ranked 48th at his position but emerges as a true WR1 by Week 5 will show a massive preseason-to-in-season projection divergence. Preseason models couldn't price in an injury to the player ahead of him, or a coaching change that shifted the team to a pass-heavy scheme. In-season models can observe both.

Aging veterans present the mirror case. A running back projected for 240 carries in August might show declining efficiency and a growing committee split by Week 3. Preseason models built on 3-year historical averages will not capture the early-season signal; in-season models weighted toward recent performance adjust faster, though regression to the mean remains relevant: a few bad weeks can overstate a true decline.
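
The weighting difference is easy to see side by side; the decay factor in the recency-weighted version is an illustrative assumption, not a value from any published model.

```python
def flat_average(values: list[float]) -> float:
    """What a long-horizon historical average effectively does."""
    return sum(values) / len(values)

def recency_weighted(values: list[float], decay: float = 0.7) -> float:
    """Exponentially weight recent games; decay=0.7 is an assumption."""
    weights = [decay ** i for i in range(len(values) - 1, -1, -1)]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# An RB's weekly carries trending toward a committee split:
carries = [21, 18, 14, 11]  # weeks 1-4, most recent last
print(flat_average(carries))               # 16.0
print(round(recency_weighted(carries), 1)) # 14.5 — picks up the decline faster
```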

Injured players returning from offseason surgery are a category where preseason projections carry especially high uncertainty. The assumed recovery timeline may hold, or it may not. By Week 6 of the season, actual on-field workload data replaces the estimate.


Decision boundaries

Knowing when to trust preseason projections versus in-season updates changes how the two types of projections should be applied:

  1. Drafts — Preseason projections are the only tool available; treat them as wide-interval estimates rather than point predictions.
  2. Waiver pickups — Observed usage (target share, snap counts) should dominate, especially once role clarity emerges around Week 4.
  3. Trades — Blend the two: preseason priors guard against overreacting to small samples, while in-season signals capture role and scheme changes no preseason model could see.

The practical implication is that projection accuracy is not a static number to evaluate once and accept. It is a moving property, and the update schedule of any serious projection system reflects that reality: models that update only weekly during the season leave accuracy on the table relative to systems that incorporate daily injury reports and snap count data.
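
A minimal sketch of that cadence difference, assuming a simple recompute-on-event design; the event names and the weekly-batch convention are hypothetical.

```python
# Hypothetical sketch: recompute on every data event rather than on a
# weekly batch. Event names and the cadence labels are assumptions.

DAILY_EVENTS = ("injury_report", "snap_counts", "vegas_line_move")

def should_recompute(event_type: str, cadence: str) -> bool:
    """Weekly systems ignore intraweek signals; daily systems act on them."""
    if cadence == "weekly":
        return event_type == "weekly_batch"
    return event_type in DAILY_EVENTS or event_type == "weekly_batch"

print(should_recompute("injury_report", "weekly"))  # False: signal ignored
print(should_recompute("injury_report", "daily"))   # True: projection updates
```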

