Fantasy Projection Lab: Frequently Asked Questions

Fantasy projections sit at the intersection of statistics, football intuition, and probability math — which means they're frequently misunderstood, occasionally misapplied, and almost always more useful than people give them credit for. This page addresses the questions that come up most often about how projection systems work, what goes into them, what they can't do, and how to read them without getting burned. The answers here are grounded in the same methodology documented across the full Fantasy Projection Lab reference library.


What are the most common issues encountered?

The single most consistent complaint is variance — a projection says 18.4 points, a player scores 4.2, and suddenly the model looks broken. It isn't. Projections are probability distributions compressed into a single number, and any individual game outcome tells you almost nothing about projection quality. The real signal lives in aggregate accuracy across 500 or 1,000 player-weeks, which is why backtesting projection accuracy matters more than any one week's result.
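
The gap between a single game and aggregate accuracy can be shown with a toy mean-absolute-error calculation. Every player-week number below is invented for illustration; it is a sketch of the idea, not the Lab's actual backtest.

```python
def mean_absolute_error(projected, actual):
    """Average absolute miss across many player-weeks."""
    return sum(abs(p - a) for p, a in zip(projected, actual)) / len(projected)

# A single week can miss badly...
one_week_miss = abs(18.4 - 4.2)  # 14.2 points off

# ...while the same model is well calibrated in aggregate.
projected = [18.4, 12.1, 9.8, 15.5, 7.2, 21.0]
actual    = [4.2, 14.0, 11.1, 13.9, 8.0, 23.5]
mae = mean_absolute_error(projected, actual)  # far smaller than 14.2
```

A real evaluation would run this over hundreds or thousands of player-weeks; the point is only that one outlier game barely moves the aggregate error.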

A close second: users treating projections as rankings. A projection is a point estimate (or range) based on inputs. A ranking is an ordering based on those estimates filtered through a specific scoring system. They're related but meaningfully different, a distinction covered in detail at projection vs ranking difference.


How does classification work in practice?

Projection systems classify player output along at least 3 axes simultaneously: position group, scoring format, and game context. A wide receiver in a 0.5 PPR league playing in a dome against a zone-heavy secondary gets a different model weight than that same receiver in a standard scoring outdoor game against man coverage.

Scoring format impact on projections explains why PPR, half-PPR, and standard formats can shift a player's projected value by 15–25% without a single underlying stat changing. The classification layer is what makes position-specific methodology pages — like wide receiver projection methodology or tight end projection methodology — necessary rather than redundant.
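
The format shift is easy to see with the standard receiving-scoring conventions (0.1 points per yard, 6 per touchdown). The stat line below is hypothetical:

```python
def receiver_points(rec_yards, receptions, tds, ppr=0.0):
    """Fantasy points for a receiving line under a given PPR weight
    (0.0 = standard, 0.5 = half-PPR, 1.0 = full PPR)."""
    return rec_yards / 10 + receptions * ppr + tds * 6

# Hypothetical line: 6 catches, 80 yards, 1 TD
standard = receiver_points(80, 6, 1, ppr=0.0)  # 14.0
half_ppr = receiver_points(80, 6, 1, ppr=0.5)  # 17.0
full_ppr = receiver_points(80, 6, 1, ppr=1.0)  # 20.0
```

Here the jump from standard to half-PPR is about 21% with no underlying stat changing, squarely in the 15–25% range described above.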


What is typically involved in the process?

A standard projection cycle involves 4 distinct stages:

  1. Data ingestion — pulling box scores, snap counts, target share, efficiency metrics, and Vegas lines from verified sources (documented at data sources used in projections)
  2. Baseline modeling — establishing expected output from historical role and volume, adjusted for sample size (see sample size and projection reliability)
  3. Contextual adjustment — layering in matchup quality, weather, injury status, and game script probability (see matchup-based projection adjustments and weather impact on fantasy projections)
  4. Output formatting — expressing results as a point estimate alongside confidence intervals, documented at projection confidence intervals
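
The four stages above can be sketched as a simple pipeline. Every function, weight, and number here is an illustrative placeholder, not the Lab's actual model:

```python
def ingest():
    # Stage 1: in practice, pulled from verified data sources.
    return {"snap_share": 0.85, "targets_per_game": 7.5, "yards_per_target": 8.2}

def baseline(inputs):
    # Stage 2: expected receiving yards from role and volume.
    return inputs["targets_per_game"] * inputs["yards_per_target"]

def adjust(yards, matchup_factor=0.95, weather_factor=1.0):
    # Stage 3: contextual multipliers for matchup and weather.
    return yards * matchup_factor * weather_factor

def format_output(yards, interval_width=0.35):
    # Stage 4: point estimate with a symmetric confidence band.
    points = yards / 10  # 0.1 points per receiving yard
    return {"estimate": round(points, 1),
            "floor": round(points * (1 - interval_width), 1),
            "ceiling": round(points * (1 + interval_width), 1)}

projection = format_output(adjust(baseline(ingest())))
```

The design point is separation of concerns: each stage can be refreshed on its own schedule without touching the others.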

Update timing matters too. The projection update schedule shows exactly when inputs refresh relative to game time.


What are the most common misconceptions?

The most durable misconception: a higher projection means a safer player. It doesn't. A player projected at 22 points with a wide confidence interval (floor of 4, ceiling of 38) is far riskier than one projected at 14 with a tight band. Floor and ceiling projections exist precisely because that distinction drives different lineup decisions.
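
Using the numbers from that example (and a hypothetical tight band of 11–17 for the second player), band width works as a crude risk proxy:

```python
def band_width(floor, ceiling):
    """Confidence-band width as a crude proxy for risk."""
    return ceiling - floor

# Player A: higher projection, wide band (22 points, floor 4, ceiling 38).
a_risk = band_width(4, 38)    # 34-point spread
# Player B: lower projection, tight band (14 points, hypothetical 11-17).
b_risk = band_width(11, 17)   # 6-point spread

riskier = "A" if a_risk > b_risk else "B"  # the higher-projected player
```

A fuller treatment would compare full distributions rather than endpoints, but the ordering is the same: the higher projection carries the greater risk.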

Second misconception: that injury-adjusted projections are guesswork. In practice they are probability-weighted estimates built from historical recovery curves, snap count trajectories, and publicly available practice participation data, as detailed in injury adjustments in projections.
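
Probability weighting reduces to an expected-value calculation over playing-time scenarios. The scenario probabilities and point values below are invented for illustration:

```python
scenarios = [
    # (probability, projected points under that scenario)
    (0.60, 16.0),  # plays a full snap share
    (0.30, 9.0),   # plays limited snaps
    (0.10, 0.0),   # inactive
]

# Injury-adjusted projection = probability-weighted average of scenarios.
injury_adjusted = sum(p * pts for p, pts in scenarios)  # 12.3
```

The probabilities themselves are where the real modeling work lives; the arithmetic on top of them is deliberately simple.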

Third: that machine learning models are inherently superior to regression-based ones. Both have legitimate use cases. Machine learning in fantasy projections covers where each approach outperforms the other.


Where can authoritative references be found?

The primary methodological documentation lives across the Lab's own reference pages — projection models explained and statistical inputs for fantasy projections are the two foundational pieces. For external data, Football Outsiders publishes DVOA (Defense-adjusted Value Over Average) metrics that inform matchup adjustments; Pro Football Reference maintains career and game-level box scores used in baseline construction; and Statcast (via Baseball Savant) underpins MLB fantasy projections at the batted-ball level.

The glossary of projection terms handles terminology questions that come up when reading any of these sources.


How do requirements vary by jurisdiction or context?

Fantasy sports aren't regulated at the federal level in the United States (the 2006 Unlawful Internet Gambling Enforcement Act explicitly carved them out), but daily fantasy sports (DFS) operators face state-by-state legal frameworks — 30+ states have either explicitly legalized or implicitly permitted DFS as of the mid-2020s, while a handful maintain restrictions. That legal patchwork doesn't directly change projection math, but it does affect which platforms publish projections and what formats they support.

Within the fantasy game itself, context variation is enormous. Dynasty vs redraft projection differences documents why a 26-year-old running back with declining efficiency looks different in a redraft context than a dynasty one. Superflex and two-QB projection adjustments and keeper league projection considerations address the two other major format divergences.


What triggers a formal review or action?

In projection systems, a "formal review" means recalibration — and 3 conditions typically trigger it: a player's underlying role changes sharply (snap count or target share moving more than 15 percentage points from baseline), a model's week-over-week mean absolute error exceeds its historical tolerance, or a significant input source changes its data structure or publication timing.

Snap count and target share data and usage rate adjustments in projections document how those role signals feed into real-time model corrections. Vegas line movement is a secondary trigger — sharp line movement of 3+ points often signals game-script assumptions that require projection updates, explained in Vegas lines and fantasy projections.
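
The recalibration triggers described above can be expressed as a simple check. Thresholds mirror the ones stated in the text; the function name and sample inputs are hypothetical:

```python
def needs_recalibration(role_shift_pts, current_mae, mae_tolerance,
                        schema_changed, line_move_pts=0.0):
    """Return the list of triggered recalibration conditions."""
    triggers = []
    if abs(role_shift_pts) > 15:       # snap/target share vs baseline
        triggers.append("role change")
    if current_mae > mae_tolerance:    # week-over-week accuracy drift
        triggers.append("error tolerance exceeded")
    if schema_changed:                 # input source restructured
        triggers.append("data source change")
    if abs(line_move_pts) >= 3:        # sharp Vegas line movement
        triggers.append("line movement")
    return triggers

flags = needs_recalibration(role_shift_pts=18, current_mae=4.1,
                            mae_tolerance=4.5, schema_changed=False,
                            line_move_pts=3.5)
```

With those sample inputs, only the role-change and line-movement conditions fire; the model's error is still inside tolerance.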


How do qualified professionals approach this?

Analysts who build projection systems for a living — at outlets like The Athletic, ESPN, or independent shops like FantasyPros — treat what makes a projection accurate as an empirical question, not an aesthetic one. They run backtests over multi-season samples, track correlation coefficients between projected and actual outputs by position, and weight recent performance against regression-to-mean pressure rather than defaulting to either extreme.
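
Tracking projected-versus-actual correlation is a Pearson-r calculation over a backtest sample. The five-game sample below is invented and far too small to mean anything; real backtests span multi-season samples:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

projected = [18.4, 12.1, 9.8, 15.5, 7.2]
actual    = [14.0, 13.2, 8.5, 17.1, 6.0]
r = pearson_r(projected, actual)
```

In a real workflow this would be computed per position per season, since correlation strength varies sharply by position group.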

The practical output of that discipline shows up in decisions like lineup optimization with projections, applying projections to draft strategy, and using projections for waiver wire decisions. The methodology is only as good as its application — which is why reading and interpreting projection outputs is treated here as a skill worth building, not a step to skip.