These trends are also reflected in popular culture, such as movies and video games. In recent years, there seems to have been a growing gulf between the ratings given by professional reviewers and those given by ordinary users. Cultural conservatives have claimed that reviewers give bad products good ratings both out of venal motives (e.g. maintaining positive relations with directors and producers; retaining early review access to video games) and as a result of living in a cultural bubble (e.g. privileging things such as forced diversity and woke signalling over minor elements like story, immersion, gameplay, and even graphics).
Mass Effect: Andromeda (America) vs. The Witcher 3 (Poland).
(Perhaps the ur-example of this, and the one familiar to the greatest number of people, would be the drama around Star Wars: The Last Jedi, which got an 8.5 rating from critics on Metacritic vs. a 4.5 from users, a discrepancy that those same journalists ascribed to Russian trolls.)
In the video game sphere, this led to that energetic but ultimately ineffectual gamer uprising known as Gamergate, which incidentally also kicked off in 2014. But were the gamers right to revolt in the name of “ethics in video games journalism” in the first place?
We may finally have a definitive answer thanks to video game blogger and programmer Shamus Young, who scraped Metacritic for the user and critic review scores of every video game since 1995.
What he found is that user and critic scores tended to track each other until around 2014: the difference between the two broke past 5 points (out of 100) just twice in the period from 2001, when Metacritic was launched, to 2014.
But then this indicator exploded, rising above 10 points in 2018 and 2019.
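Young’s exact pipeline isn’t reproduced here, but the indicator itself is simple arithmetic. Here is a minimal sketch of how it could be computed from scraped data, assuming a hypothetical table with one row per game and user scores already rescaled from Metacritic’s 0–10 to 0–100 (the titles and numbers are just the examples discussed below):

```python
import pandas as pd

# Hypothetical dataset: one row per game. Critic scores are 0-100 on
# Metacritic; user scores (0-10 on the site) are assumed to be rescaled
# to 0-100 here so the two are directly comparable.
scores = pd.DataFrame({
    "title":  ["Wolfenstein II", "Far Cry 5", "Kingdom Come: Deliverance"],
    "year":   [2017, 2018, 2018],
    "critic": [86, 78, 76],
    "user":   [67, 61, 81],
})

# Young's indicator, as described above: for each year, the average critic
# score minus the average user score across all games released that year.
yearly = scores.groupby("year")[["critic", "user"]].mean()
yearly["gap"] = yearly["critic"] - yearly["user"]
print(yearly)
```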
And here are the combined absolute scores for users and critics in one chart:
This does make intuitive sense looking through some major titles of the past few years. E.g., Wolfenstein II: The New Colossus (critics: 86, users: 67) and Far Cry 5 (critics: 78, users: 61) both featured more or less overt political attacks on Trump’s redneck (racist, backwards, religious nutjob) base while apparently offering mediocre gameplay (I can’t personally confirm, not having played either). Deus Ex: Mankind Divided (critics: 83, users: 67) paid prominent homage to Black Lives Matter, but otherwise featured an uninspired and unfinished story and was infested with microtransactions. Fallout 4 (critics: 84, users: 55) was much weaker in both content and gameplay than its immediate predecessor, Fallout: New Vegas, but it did loudly riff on an Underground Railroad theme. While all of this might have left a glowing impression on journalists, it would not have made a difference to ordinary gamers who just wanted to have fun playing a game.
Note that Young suggests caution in interpreting these results:
This chart is basically the green chart minus the red one. Or, how many points higher are critic scores than user scores? We could see this as an indication of the diverging opinions between the people and the press, but this spike at the end could also be the result of games that didn’t turn on their aggressive monetization systems until after the review scores were set. This pissed people off and led to review bombing. So maybe it’s not a measure of the difference between critics and users, but the difference in quality between launch day, and the day the cash shop opens.
OTOH, do also note that this graph merely shows the difference between the average user score and the average critic score, whereas a true measure of polarization would first compute the critic-user difference for each individual game, take its absolute value, and only then average those differences within each year. After all, there are also games that are the opposite of those mentioned above, ones that users like more than the critics do. For instance, Kingdom Come: Deliverance, a Czech game that happened to receive some wrath from professional SJWs for refusing to include Blacks in medieval Bohemia, got 76 points from critics but 81 points from users (in a year when critics gave the average game 11 more points than users did). Under the current metric, KCD would have had the effect of reducing the 2018 gap between critics and users, even though it reflects exactly the same kind of critic/user disagreement.
I’d imagine that a true measure of polarization, derived as suggested above, would show an even sharper divergence between professional critics and users from around 2014.
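For what it’s worth, that alternative measure is a one-line change on the same kind of data as the sketch above: compute each game’s gap first, take its absolute value (so that a game like KCD widens the measure instead of cancelling out games the critics liked more), and then average within each year. Again a rough sketch with hypothetical column names:

```python
import pandas as pd

# Same hypothetical table as in the earlier sketch (user scores rescaled
# to 0-100).
scores = pd.DataFrame({
    "title":  ["Wolfenstein II", "Far Cry 5", "Kingdom Come: Deliverance"],
    "year":   [2017, 2018, 2018],
    "critic": [86, 78, 76],
    "user":   [67, 61, 81],
})

# Per-game disagreement first, then the yearly average. The absolute value
# makes a game users liked more than critics (KCD: 81 vs. 76) add to the
# polarization measure rather than offset a game critics liked more.
scores["abs_gap"] = (scores["critic"] - scores["user"]).abs()
polarization = scores.groupby("year")["abs_gap"].mean()
print(polarization)
```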