Console Game Review Sources and Metacritic: How Games Are Scored
A console game's review score can determine whether a studio survives its next budget cycle or whether a publisher greenlights a sequel. Metacritic sits at the center of that pressure — aggregating scores from professional critics into a single number that shapes purchasing decisions, bonus payouts, and game-of-the-year conversations. This page explains how review aggregation works, where scores come from, what the numbers actually represent, and where the system breaks down.
Definition and scope
Metacritic's Metascore is a weighted average of reviews from approved professional publications and outlets, converted to a 0–100 scale. The site, owned by Fandom (which acquired it from Red Ventures in 2022; Red Ventures had purchased it from CBS Interactive in 2020), maintains a curated list of publications, each assigned a weight that Metacritic does not publicly disclose. A review from IGN or Eurogamer carries a different mathematical influence than one from a smaller regional outlet, though the exact coefficients remain proprietary.
The Metascore is separate from the User Score, a 0–10 rating submitted by registered accounts. Both appear on the same page but measure entirely different phenomena — one reflects editorial consensus from paid critics, the other reflects self-selected audience reaction, which is frequently distorted by coordinated review-bombing campaigns.
For a broader look at how games are formally categorized and evaluated across the industry, Console Game Authority's home reference provides a useful orientation to the full landscape.
How it works
Metacritic's conversion process standardizes letter grades and descriptive scores into numerical values before averaging. A review that awards an "A" becomes 100; a "B+" becomes roughly 88. Star ratings are converted proportionally, so a 4-out-of-5 review becomes 80. This translation layer introduces imprecision: a critic who means "very good, not great" with 4 stars is recorded identically to one who writes a glowing 4-star review.
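A rough sketch of that translation layer in Python. Only the A = 100, B+ ≈ 88, and 4-of-5-stars = 80 mappings come from the text above; the remaining table entries are hypothetical placeholders, since Metacritic's full rubric is not public:

```python
# Hypothetical conversion table -- only "A", "B+", and the 4/5-star case
# reflect values cited in this article; the rest are illustrative.
GRADE_TO_SCORE = {
    "A": 100, "A-": 91, "B+": 88, "B": 83, "B-": 75,
    "C+": 67, "C": 58, "C-": 50, "D": 33, "F": 0,
}

def convert_review(raw, scale=None):
    """Map a raw review score to the 0-100 scale.

    raw: a letter-grade string, or a numeric score.
    scale: the maximum for numeric scores (e.g. 5 for star ratings).
    """
    if isinstance(raw, str):
        return GRADE_TO_SCORE[raw.upper()]
    return round(raw / scale * 100)

print(convert_review("A"))         # 100
print(convert_review("B+"))        # 88
print(convert_review(4, scale=5))  # 80 -- a 4-out-of-5-star review
```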
The scoring pipeline follows this structure:
- Review publication — an approved outlet posts a scored review after embargo lifts.
- Score capture — Metacritic staff or automated systems log the raw score.
- Conversion — the raw score is mapped to the 0–100 scale using Metacritic's internal rubric.
- Weighting — the converted score is multiplied by the outlet's undisclosed weight coefficient.
- Aggregation — all weighted scores are summed and divided to produce the Metascore.
- Threshold display — scores below 40 display in red, 40–60 in yellow, 61–100 in green.
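The pipeline above can be condensed into a few lines. This is a minimal sketch, assuming three made-up outlets and invented weight coefficients (the real coefficients are undisclosed); only the threshold colors come from the list above:

```python
# Illustrative reviews -- outlet names and weights are invented.
reviews = [
    {"outlet": "BigOutlet",   "score": 90, "weight": 1.5},
    {"outlet": "MidOutlet",   "score": 80, "weight": 1.0},
    {"outlet": "SmallOutlet", "score": 70, "weight": 0.5},
]

def metascore(reviews):
    """Weighted average of converted scores, rounded to an integer."""
    total_weight = sum(r["weight"] for r in reviews)
    weighted_sum = sum(r["score"] * r["weight"] for r in reviews)
    return round(weighted_sum / total_weight)

def color(score):
    """Display thresholds: below 40 red, 40-60 yellow, 61-100 green."""
    if score < 40:
        return "red"
    if score <= 60:
        return "yellow"
    return "green"

s = metascore(reviews)
print(s, color(s))  # 83 green
```

Note how the weighting matters: the unweighted average of these three scores would be 80, but the heavier outlet pulls the result to 83.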
The Metascore updates in real time as new reviews are indexed, which means a score can shift noticeably during the first 72 hours after a major release when review embargoes lift in waves.
Critical scores also interact with console game awards and recognition: longlists for The Game Awards and the BAFTA Games Awards are assembled partly by editorial boards that use aggregated critical reception as a preliminary filter, so the relationship between Metascores and awards eligibility is more direct than most players realize.
Common scenarios
The day-one score gap is the most familiar scenario: a game launches with a Metascore of 91 based on 34 reviews, then patches arrive, server issues emerge, and the User Score collapses to 4.2 within 48 hours. The Metascore and User Score are telling different stories about different products at different moments in time.
The review embargo effect shapes scores in a subtler way. When publishers restrict reviews until launch day, or until after launch, it often signals that the product may not hold up to extended pre-release scrutiny. Games reviewed under tight embargo windows sometimes carry Metascores based on fewer than 10 reviews, which inflates variance considerably.
Bonus contract clauses have made Metacritic scores a labor issue. Kotaku and other outlets have reported that developer bonuses at studios including Obsidian Entertainment (on Fallout: New Vegas) were historically tied to hitting specific Metascore thresholds — 85 being a commonly cited target in contract disputes. This creates a direct financial pressure on review aggregation that extends well beyond consumer information.
OpenCritic operates as the primary alternative to Metacritic, founded in 2015 with a policy of full transparency about which outlets are included and how scores are weighted (all outlets are weighted equally). OpenCritic also tracks the percentage of critics who recommend a game, which is sometimes more informative than an averaged number.
Decision boundaries
Scores of 75, 85, and 90 function as informal industry benchmarks that influence stock analyst reports, retail purchase orders, and awards consideration.
Metacritic vs. OpenCritic is the clearest structural contrast: Metacritic uses undisclosed weighting and a curated outlet list; OpenCritic uses equal weighting and a published outlet registry. A game can score 82 on Metacritic and 79 on OpenCritic, or vice versa, depending on which outlets reviewed it and how Metacritic's weights skew the average.
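The structural contrast can be made concrete by averaging the same reviews two ways: equally weighted (OpenCritic-style) and unevenly weighted (Metacritic-style). All scores and weights here are invented for illustration:

```python
# Five hypothetical reviews of the same game. The weights are made up;
# only OpenCritic's equal weighting is publicly documented.
scores  = [88, 84, 60, 90, 72]
weights = [1.4, 1.2, 0.6, 1.0, 0.8]

# OpenCritic-style: every outlet counts the same.
equal_avg = round(sum(scores) / len(scores))

# Metacritic-style: each score is scaled by its outlet's coefficient.
weighted_avg = round(sum(s * w for s, w in zip(scores, weights))
                     / sum(weights))

print(equal_avg, weighted_avg)  # 79 82
```

The three-point gap comes entirely from the weighting, not from which critics reviewed the game, which is one reason the two sites can disagree on the same release.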
Score vs. recommendation rate is a meaningful distinction OpenCritic introduced: a game with a 74 average but a 91% recommendation rate tells a different story than a 74 average with a 55% recommendation rate. The former suggests most critics liked it despite some reservations; the latter suggests deep disagreement.
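To see why the distinction matters, here is a hypothetical pair of score sets with the same 74 average but very different recommendation rates. Treating "score of 70 or above" as a recommendation is an assumption made for this sketch; OpenCritic actually collects recommendations directly from critics:

```python
# Two invented sets of critic scores, both averaging 74.
consensus   = [72, 74, 75, 73, 76, 74, 75, 73, 74, 74]  # broad mild approval
split_field = [95, 92, 50, 94, 48, 93, 52, 91, 55, 70]  # sharp disagreement

def average(scores):
    return round(sum(scores) / len(scores))

def recommend_rate(scores, threshold=70):
    """Percentage of critics at or above the threshold (an assumed proxy)."""
    return round(100 * sum(s >= threshold for s in scores) / len(scores))

print(average(consensus), recommend_rate(consensus))      # 74 100
print(average(split_field), recommend_rate(split_field))  # 74 60
```

Identical averages, but the first game pleased nearly everyone mildly while the second divided the field, which is exactly the signal the recommendation rate surfaces.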
Where scores become genuinely misleading is in games with fewer than 15 reviews: the statistical noise is too high for the average to carry much signal. The console game release calendar context matters here, too. Games releasing in crowded November windows often receive fewer reviews than comparable titles releasing in February, simply because critics have less capacity.
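The small-sample problem can be illustrated with a toy simulation: draw hypothetical critic scores from an assumed distribution and watch how much the average wobbles at different review counts. The distribution parameters below are invented, not real review data:

```python
import random

# Assume "true" critical opinion is roughly normal around 78 with
# standard deviation 8 -- illustrative numbers only.
random.seed(1)

def sample_metascore(n_reviews, mu=78, sigma=8):
    """Average of n_reviews simulated critic scores, clamped to 0-100."""
    scores = [min(100, max(0, random.gauss(mu, sigma)))
              for _ in range(n_reviews)]
    return sum(scores) / n_reviews

for n in (8, 15, 40):
    runs = [sample_metascore(n) for _ in range(2000)]
    spread = max(runs) - min(runs)
    print(f"n={n:2d}: simulated averages ranged over ~{spread:.1f} points")
```

With 8 reviews the simulated average swings over a far wider band than with 40, which is why a launch-week Metascore built on a handful of embargoed reviews should be read loosely.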
The Metascore is best understood as a rough industry consensus signal, not a precision instrument — useful for identifying clear critical failures (sub-50) and exceptional achievements (above 90), and noisy in the wide middle band where most games actually live.