Mean Absolute Error in Fantasy Projections: Measuring Model Performance

Mean Absolute Error (MAE) is one of the clearest tools available for measuring how far a fantasy projection system strays from reality — and understanding it turns projection consumers from passive readers into informed critics. This page explains what MAE measures, how it behaves differently from related metrics, and where its usefulness ends. Anyone evaluating a projection source, building a model, or backtesting projection accuracy will encounter MAE repeatedly, and it rewards a precise understanding.

Definition and scope

A projection for Patrick Mahomes says 28.4 fantasy points. He scores 21.2. The absolute error on that prediction is 7.2 points — the raw distance between forecast and outcome, stripped of sign. MAE is simply the arithmetic mean of those unsigned distances across an entire sample of predictions.

Formally: MAE = (1/n) × Σ|predicted − actual|

The unsigned part matters. If a model over-projects a wide receiver by 6 points in Week 4 and under-projects a running back by 6 points in Week 5, those errors don't cancel each other out under MAE. Both count as 6-point misses. That property makes MAE an honest accounting of typical error magnitude.
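That cancellation property is easy to see in a minimal sketch with made-up error values — a +6 over-projection and a −6 under-projection average to zero under a signed mean but to 6 under MAE:

```python
# Two misses of equal size but opposite sign: +6 (over-projection)
# and -6 (under-projection). Hypothetical values for illustration.
errors = [6.0, -6.0]

mean_error = sum(errors) / len(errors)           # signed errors cancel
mae = sum(abs(e) for e in errors) / len(errors)  # unsigned errors do not

print(mean_error)  # 0.0
print(mae)         # 6.0
```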

Scope-wise, MAE applies wherever there is a numerical prediction and a numerical outcome — per-game projections, season totals, DFS salary-based targets, rest-of-season estimates. It requires no distributional assumptions about errors, which makes it robust across player archetypes and scoring formats. Scoring format can shift baseline MAE levels substantially: PPR formats produce higher raw point totals and therefore tolerate larger absolute errors before those errors become decision-relevant.

How it works

Running MAE on a projection set follows four steps:

  1. Collect a paired dataset — each record contains one projected value and the corresponding actual fantasy output from the same game or time period.
  2. Compute the residual for each record — subtract actual from projected (order doesn't matter since the absolute value is taken).
  3. Take the absolute value of every residual, converting all negative errors to positive.
  4. Average the absolute values across all records in the sample.
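The four steps above can be sketched in a few lines; the paired dataset here is invented purely for illustration:

```python
# Step 1: paired dataset of (projected, actual) fantasy points (made up).
pairs = [(28.4, 21.2), (14.1, 17.6), (9.8, 9.1)]

# Steps 2-3: residual per record, then absolute value.
abs_errors = [abs(projected - actual) for projected, actual in pairs]

# Step 4: average across all records.
mae = sum(abs_errors) / len(abs_errors)
print(round(mae, 2))  # 3.8
```

The result, 3.8, is in fantasy points — the same units as the projections themselves.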

The result is expressed in the same units as the original projections — fantasy points, not a dimensionless ratio. A quarterback projection model with an MAE of 6.8 points is missing by an average of 6.8 fantasy points per prediction. That concreteness is one of MAE's most practical virtues.

MAE vs. Root Mean Squared Error (RMSE): RMSE squares the residuals before averaging, then takes the square root. Squaring penalizes large errors disproportionately. A model with occasional catastrophic misses — a projected starter who gets scratched — will show an RMSE well above its MAE, while a model with consistent moderate errors will show the two metrics close together, even if both models share the same MAE. For fantasy applications where a single lineup-destroying miss is particularly costly, RMSE captures something MAE does not. Public-facing projection accuracy reports from sources like FantasyPros often publish both metrics side by side for exactly this reason.
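The divergence between the two metrics shows up clearly with hypothetical error profiles — one model misses steadily, the other carries a single catastrophic miss, and both share the same MAE:

```python
import math

# Two hypothetical models with identical MAE but different error profiles.
consistent = [4.0, 4.0, 4.0, 4.0]   # steady moderate misses
spiky = [1.0, 1.0, 1.0, 13.0]       # one catastrophic miss (e.g., a scratch)

def mae(errors):
    return sum(abs(e) for e in errors) / len(errors)

def rmse(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

print(mae(consistent), mae(spiky))    # 4.0 4.0  — identical
print(rmse(consistent), rmse(spiky))  # 4.0 vs. ~6.56 — RMSE flags the spike
```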

MAE vs. Mean Error (ME): ME preserves the sign of residuals and allows over- and under-projections to offset. A systematically biased model — one that over-projects running backs by 4 points every week — could show an ME near zero if it simultaneously under-projects quarterbacks by 4 points. MAE would correctly identify both problems. ME is useful for detecting directional bias; MAE is the primary accuracy gauge.
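The offsetting behavior is easy to demonstrate with illustrative signed residuals matching the biased-model scenario above:

```python
# Signed residuals (projected - actual) for a hypothetical model that
# over-projects RBs by 4 points and under-projects QBs by 4 points.
residuals = [4.0, -4.0, 4.0, -4.0]

me = sum(residuals) / len(residuals)               # offsets to zero
mae = sum(abs(r) for r in residuals) / len(residuals)

print(me)   # 0.0 — hides both biases
print(mae)  # 4.0 — reports the true typical miss size
```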

Common scenarios

Preseason projections tend to carry higher MAEs than in-season projections because the uncertainty pool is deeper — role changes, training camp injuries, and scheme shifts haven't resolved. The concept of sample size and projection reliability applies directly here: a four-week in-season sample with stable usage data will generally produce lower MAE than a preseason projection built on offseason assumptions.

Position-level MAE differs materially across positions. Quarterbacks in standard scoring formats project with higher raw point totals (22–32 points per game is a typical elite range), and their MAEs tend to be proportionally larger in absolute terms. Wide receivers in deep PPR leagues face high target-share volatility, which inflates MAE independent of model quality. Comparing MAE across positions without normalizing for scoring scale leads to misleading conclusions about relative model quality.
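One plausible way to normalize for scoring scale — this is a sketch, not a standard from the text — is to divide each position's MAE by its mean actual output, a relative-MAE style comparison. All numbers below are illustrative:

```python
# Hypothetical position-level figures: raw MAE and mean actual points.
positions = {
    "QB": {"mae": 6.8, "mean_actual": 19.5},
    "WR": {"mae": 5.2, "mean_actual": 11.0},
}

# Relative MAE: error as a fraction of the position's typical output.
for pos, stats in positions.items():
    relative = stats["mae"] / stats["mean_actual"]
    print(pos, round(relative, 3))
```

Here the WR model's lower raw MAE (5.2 vs. 6.8) is actually worse in relative terms, illustrating why raw cross-position comparisons mislead.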

DFS contexts compress the useful MAE threshold. In daily fantasy, a 5-point miss on a $7,800 running back can swing an entire lineup's viability in a tournament. The same 5-point miss on a season-long roster carries lower stakes because the decision window spans 17 weeks. Lineup optimization tools often weight projected variance alongside MAE for this reason.

Decision boundaries

MAE becomes actionable when benchmarked against a meaningful baseline — typically the MAE produced by using the season average as every prediction (the "naive forecast"). Any projection model that fails to beat naive MAE should be treated with skepticism regardless of how sophisticated its inputs appear.
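A naive-baseline comparison can be sketched directly — the weekly scores and model predictions below are invented for illustration:

```python
# Benchmark a model's MAE against the naive forecast that predicts the
# player's season average every week. All values are made up.
actuals = [18.0, 9.5, 22.0, 14.5]       # weekly fantasy points
model_preds = [16.0, 12.0, 19.0, 15.0]  # the model under evaluation

season_avg = sum(actuals) / len(actuals)      # 16.0
naive_preds = [season_avg] * len(actuals)

def mae(preds, actuals):
    return sum(abs(p - a) for p, a in zip(preds, actuals)) / len(actuals)

print(mae(model_preds, actuals))  # 2.0 — model MAE
print(mae(naive_preds, actuals))  # 4.0 — naive baseline MAE
```

In this toy case the model halves the naive MAE and clears the bar; a model that landed at or above 4.0 would deserve the skepticism described above.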

Practical thresholds observed in public accuracy tracking suggest that quarterback MAEs below 7.0 points per game and skill-position MAEs below 5.5 points indicate competitive projection quality, though these figures are position- and format-specific. When comparing projection systems, MAE should be evaluated on matched sample periods and identical scoring settings — otherwise the comparison is measuring format effects, not model quality.

MAE cannot detect whether errors are random or systematic. A model with strong MAE can still carry exploitable biases that ME or directional residual analysis would reveal. Pairing MAE with projection confidence intervals gives a fuller picture: a tight interval that consistently misses signals miscalibration, while a wide interval that reliably contains actual outcomes signals honest uncertainty quantification, not necessarily pinpoint accuracy.
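A simple calibration diagnostic in this spirit is empirical interval coverage: the fraction of actual outcomes that land inside the stated intervals. The intervals and outcomes below are hypothetical:

```python
# Coverage check for projection intervals: how often do actual outcomes
# fall inside the stated (low, high) range? Values are illustrative.
intervals = [(10, 20), (5, 12), (18, 30), (8, 16)]
actuals = [22.0, 9.5, 21.0, 15.0]

covered = sum(low <= a <= high for (low, high), a in zip(intervals, actuals))
coverage = covered / len(actuals)
print(coverage)  # 0.75 — 3 of 4 outcomes inside their intervals
```

If these were nominal 90% intervals, 75% empirical coverage would indicate overconfident (too-tight) intervals regardless of the model's MAE.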

The Fantasy Projection Lab applies these measurement principles across all projection outputs, treating MAE not as a marketing number but as a calibration tool.