Would this change mitigate the problem of clickbait scores on Metacritic?


Angelus said:

Do they? How needlessly convoluted. Do they at least tell you which outlets are given more weight?

I'm not sure. Their methodology is known to involve weighting, but I don't know if there is a comprehensive list of which review outlets are weighted more heavily.
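
Just as an illustration, the weighting idea works something like the sketch below. The outlets and weights are completely made up, since Metacritic doesn't publish the real ones:

```python
# Toy weighted average. Outlet names and weights are invented for
# illustration; Metacritic does not publish its actual weights.
reviews = {
    "Outlet A": (90, 1.5),  # (score, weight) -- a heavily weighted outlet
    "Outlet B": (85, 1.0),
    "Outlet C": (40, 0.5),  # a low score from a lightly weighted outlet
}

weighted_sum = sum(score * weight for score, weight in reviews.values())
total_weight = sum(weight for _, weight in reviews.values())

print(weighted_sum / total_weight)  # 80.0 (a plain mean would give ~71.7)
```

So a low score still counts, just less than one from a heavily weighted outlet.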



Legend11 correctly predicted that GTA IV (360+PS3) would outsell SSBB. I was wrong.

A Biased Review Reloaded / Open Your Eyes / Switch Gamers Club

Maybe, for everyone's sanity, sites should stop doing scores and let their reviews speak for themselves, because that number they tack on at the end means absolutely nothing. If they want to summarize their reviews, they could just add a "buy this" or "don't buy this" at the end. But they won't, because the scores add clicks and page upon page of fanboy fights.

Those people should just stop caring about Metacritic and use something like OpenCritic instead. I've never used it myself, but I believe you can personally choose which reviewers are taken into account for the score you see, so you can easily remove "troll" reviewers if you don't want them.
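
In code terms, that kind of filtering is just averaging over a whitelist; here's a toy sketch with made-up reviewers and scores:

```python
# Made-up review scores; keep only the outlets you trust, then average.
scores = {
    "Trusted Site": 88,
    "Another Trusted Site": 91,
    "Suspected Troll Outlet": 40,
}
trusted = {"Trusted Site", "Another Trusted Site"}

my_scores = [s for name, s in scores.items() if name in trusted]
print(sum(my_scores) / len(my_scores))  # 89.5, versus ~73.0 with everyone included
```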

Bet Shiken that COD would outsell Battlefield in 2018. http://gamrconnect.vgchartz.com/post.php?id=8749702

Hiku said:

The importance of Metacritic scores varies from person to person, but to some people they are very important.
One issue people often perceive (whether it really happens can be hard to prove in some cases) is that some publications will give a game or a movie a significantly higher, or more often than not lower, score to get clicks for their site.

Even if they do this, it may not be their intention to affect the overall Metacritic score, but it can nonetheless have a noticeable effect on it. This seems to be why a lot of people get angry at so-called clickbait reviews. They'd be easier to ignore if they didn't just lower the metascore of your favorite game from a fantastic 92 to a terrible/unplayable/disastrous/I-Will-Cancel-My-Preorder 89.

The other day it occurred to me that there may be a rather simple, and commonly used, fix for this problem, or at least a way to mitigate the issue.

Sometimes when calculating an average, a few of the highest and lowest values are removed from the calculation (a so-called trimmed mean) to filter out results that stray too far from the norm and get a better representation of the typical score.

I imagine that if you removed a portion of the top and bottom review scores for games or movies, so-called clickbait reviews would be much less of a problem, considering that only a small portion of reviews tend to be suspected of this.

What do you guys think?

For stuff like this, a median should be used instead of an average. With an average, extreme values have more influence on the result than moderate ones (no matter whether they are extremely high or extremely low). A median solves all of this.

https://en.wikipedia.org/wiki/Median
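
Here's a quick comparison with made-up scores, where a single 30 stands in for the suspected clickbait review:

```python
from statistics import mean, median

# Made-up review scores for one game; the lone 30 plays the
# role of a suspected clickbait review.
scores = [95, 93, 92, 91, 90, 90, 89, 88, 30]

def trimmed_mean(values, trim=1):
    """Drop the `trim` highest and lowest values, then average the rest."""
    ordered = sorted(values)
    return mean(ordered[trim:len(ordered) - trim])

print(round(mean(scores), 1))          # 84.2 -- the outlier drags it down
print(round(trimmed_mean(scores), 1))  # 90.4 -- the trimmed mean shrugs it off
print(median(scores))                  # 90   -- the median barely notices it
```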



3DS-FC: 4511-1768-7903 (Mii-Name: Mnementh), Nintendo-Network-ID: Mnementh, Switch: SW-7706-3819-9381 (Mnementh)

Why you will not convince me I have chosen bad consoles. / awesome Miiverse art / my greatest games list

Bet with platformmaster918 and ethomaz about PS3 overtaking Wii in total sales.

Predictions: Switch / Switch / Switch / MHWorld / GOW > BOTW / Switch vs. XB1 in the US

A better way to mitigate the problem would be to ignore Metacritic altogether.

Mnementh said:
Hiku said:

The importance of Metacritic scores varies from person to person, but to some people they are very important.
One issue people often perceive (whether it really happens can be hard to prove in some cases) is that some publications will give a game or a movie a significantly higher, or more often than not lower, score to get clicks for their site.

Even if they do this, it may not be their intention to affect the overall Metacritic score, but it can nonetheless have a noticeable effect on it. This seems to be why a lot of people get angry at so-called clickbait reviews. They'd be easier to ignore if they didn't just lower the metascore of your favorite game from a fantastic 92 to a terrible/unplayable/disastrous/I-Will-Cancel-My-Preorder 89.

The other day it occurred to me that there may be a rather simple, and commonly used, fix for this problem, or at least a way to mitigate the issue.

Sometimes when calculating an average, a few of the highest and lowest values are removed from the calculation (a so-called trimmed mean) to filter out results that stray too far from the norm and get a better representation of the typical score.

I imagine that if you removed a portion of the top and bottom review scores for games or movies, so-called clickbait reviews would be much less of a problem, considering that only a small portion of reviews tend to be suspected of this.

What do you guys think?

For stuff like this, a median should be used instead of an average. With an average, extreme values have more influence on the result than moderate ones (no matter whether they are extremely high or extremely low). A median solves all of this.

https://en.wikipedia.org/wiki/Median

The median is an average, though.



Bet Shiken that COD would outsell Battlefield in 2018. http://gamrconnect.vgchartz.com/post.php?id=8749702

Go to OpenCritic, choose the reviewers best suited to your tastes, and BOOM, no more problems.

Last edited by Dadrik - on 16 April 2018

Just because you have an opinion doesn't mean you are necessarily right.

When I go to Metacritic, I just sort everything by user scores instead of reviewer scores. I find the users are a better representation of which games are good and which aren't.

The_Liquid_Laser said:
When I go to Metacritic, I just sort everything by user scores instead of reviewer scores. I find the users are a better representation of which games are good and which aren't.

It really depends, tbh, but it can be more helpful.



Just because you have an opinion doesn't mean you are necessarily right.

I'm not sure how good a solution that would be. It sounds like it could work, but I'd want a statistician's view on how it would affect scores before reaching a conclusion.

Also, it's pretty ironic that this thread has sort of a clickbait title.