About The Heat Sheet

The essential reference for every competitive race in American politics.

In competitive swimming, a heat sheet is the document handed out before a meet that lists every race, every competitor, their seed times, and their lane assignments. It's the essential reference — a compact, data-rich summary of who's racing, where they stand, and what to expect.

That's exactly what we aim to be for American politics: the essential reference for every competitive race, who's running, where they stand, and what to watch for.

The name also works on its own terms — “heat” implies intensity, competition, and pressure; “sheet” implies a reference document, a data source, a ratings page. You don't need to know anything about swimming for the name to land.

What We Are

The Heat Sheet is a nonpartisan political analysis publication built by young analysts who believe election forecasting should be transparent, calibrated, and accountable. We combine the qualitative race-rating tradition of the Cook Political Report with the quantitative rigor of model-based forecasting and the market-informed thinking of the prediction market ecosystem.

We are not just another ratings site. We rate races, but we also grade the raters. We track prediction markets, but we tell you which ones to trust. Every claim we make is backed by data, and every projection we publish is scored after the fact.

Our Thesis

The current political forecasting landscape has a gap. Qualitative raters like Cook, Sabato, and Inside Elections produce expert judgments but refuse to attach probabilities to their ratings and optimize for reputational safety over calibration. Quantitative outlets like the late FiveThirtyEight and Split Ticket build models but don't systematically hold other forecasters accountable. Prediction markets offer real-time pricing but suffer from illiquidity, wide bid-ask spreads, and a lack of independent quality assessment.

The Heat Sheet sits at the intersection of all three. We publish race ratings with explicit probability estimates. We grade decision desks and forecasters on accuracy and calibration. And we evaluate prediction markets on health and reliability.

Core Principles

Calibration Over Accuracy

A race rated “Lean R” that ends in a 25-point win is a worse prediction than a “Toss Up” decided by 1 point, even though both ratings “called it right.” We optimize so that each rating category actually means what it says. After every election, we publish a full calibration report grading our own performance.
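The idea can be made concrete with a standard calibration metric. The sketch below (illustrative only, with made-up races, not The Heat Sheet's actual scoring code) uses the Brier score: the mean squared error between forecast probabilities and 0/1 outcomes. Two forecasters can "call" the same races correctly while one earns a much worse score for being overconfident.

```python
def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities (for the favorite
    winning) and actual 0/1 outcomes. Lower is better; it rewards
    calibration, not just picking the right side."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Five hypothetical races; the favorite won the first four and lost the fifth.
outcomes = [1, 1, 1, 1, 0]

# Both forecasters favored the same candidate in every race, but one
# slapped 95% on blowouts and nail-biters alike, while the other
# distinguished safe seats from genuine toss-ups.
overconfident = [0.95, 0.95, 0.95, 0.95, 0.95]
calibrated    = [0.95, 0.90, 0.80, 0.60, 0.45]

print(brier_score(overconfident, outcomes))  # the toss-up loss is punished hard
print(brier_score(calibrated, outcomes))     # lower (better) score overall
```

The calibrated forecaster scores better even though it looked "less decisive," which is exactly the distinction between calibration and raw accuracy.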

Radical Transparency

Every rating we publish includes our reasoning. Every model we build has its methodology documented. We show our work because we believe forecasting without transparency is just punditry with a spreadsheet.

Accountability for Everyone

If we grade decision desks on their calls, we grade ourselves too. If we critique a prediction market's liquidity, we disclose our own positions. The political forecasting world needs more accountability, and that starts with us.

Nonpartisan Analysis

Our team members have their own political views. We do not pretend otherwise. But our ratings, models, and analysis are built to be as free of partisan bias as possible. The diversity of viewpoints on our team acts as a check on any one perspective dominating our output.

What We Publish

Race Ratings

Every competitive House, Senate, and gubernatorial race rated with explicit probability estimates and margin ranges.

The Spread

When prediction markets, expert ratings, and fundamentals disagree on the same race, we break down why they diverge and who we think is right.

Decision Desk Scorecards

Grading AP, DDHQ, Fox, CNN, and NBC on election night speed, accuracy, and the tradeoff between the two.

Prediction Market Health Grades

Not all markets are created equal. We grade them on liquidity, volume, spreads, and convergence.
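As a rough illustration of one of those inputs (a sketch under assumed example prices, not our actual grading formula), a market's bid-ask spread is often expressed relative to the midpoint price, so a 4-cent spread on a 50-cent contract reads very differently from a 4-cent spread on a 10-cent contract:

```python
def relative_spread(bid, ask):
    """Bid-ask spread as a fraction of the midpoint price.
    Wider relative spreads signal thinner, less reliable markets."""
    mid = (bid + ask) / 2
    return (ask - bid) / mid

# Hypothetical contracts priced in cents (0-100 scale):
print(relative_spread(48, 52))  # tight, liquid market
print(relative_spread(30, 50))  # wide, thin market
```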