
Forums - Sony Discussion - Creative freedom, bravery, and risk in games development/publishing

Jaicee said:
JWeinCom said:

Lol. I actually looked into review scores and their comparison to MC scores as part of an unrelated post about the review system in general that I never wound up posting. So I had like a 5 month head start.

I had generally the same thoughts as you regarding Nintendo games being underrated and started looking into it and found that the review scores are just low in general.  I think people are just more likely to take note of it when it happens to games/companies they feel strongly about.

I believe when they were looking for new writers they mentioned their review scores being lower than the MC average as a point of pride, so their applicants are more likely to score things negatively. So, they just sort of have more negative slant by design, although not against any particular company. I think that's a bad policy in general, but not my website. 

Well actually, in a bid to salvage a sliver of my ego, I've just done some further math and will point out that...

1) You missed the Death Stranding review, which is the single most extreme case (-42 compared to MC!). Replacing the score for God of War (the oldest Sony title on your list) with Death Stranding's changes the average VGC score for the last 10 Sony-published and PlayStation-exclusive games from -11.1 compared to Metacritic averages to -14.9, which indeed suggests that Sony titles fare worse than Nintendo titles in VGC's scoring these days, though granted not by a landslide margin.

...AND...

2) While 10 is a nice, round number that it's understandable to go with, the issue I've perceived here is new, as in confined to the last two years or so, seeming to have begun right around the time that Jim Ryan became President and CEO of Sony Interactive Entertainment in April of 2019. As such, an applicable metric would exclude games released in 2018 like God of War, Detroit: Become Human, and the first Spider-Man, as I didn't see the treatment of Sony games here as being generally unfair before 2019. I also wasn't mentally including third-party releases like Persona 5 Royal, but only first-party titles. If we make these changes, the marginal difference expands quite a bit further in Nintendo's favor. Nintendo's average remains unchanged at -11.2 compared to Metacritic averages because no Nintendo games you listed are thus eliminated (indicating that Nintendo games are more often reviewed by VGC in the first place), while Sony's becomes -20.1 compared to MC scores, indicating that a fairly large and clear scoring discrepancy favoring Nintendo has indeed emerged in the last couple of years, be it intentional or otherwise. So I'm not crazy after all!
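The arithmetic of the swap in point 1, for anyone checking my work (note that God of War's -4 is back-solved from the two averages, not taken from the review itself):

```python
# Quick check of the swap in point 1. God of War's delta of -4 is implied
# by the before/after averages rather than quoted directly, so treat it
# as back-solved.
old_avg = -11.1           # mean delta over 10 Sony games, God of War included
gow, ds = -4.0, -42.0     # God of War (implied), Death Stranding (quoted)
new_avg = old_avg + (ds - gow) / 10
print(round(new_avg, 1))  # -14.9
```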

I'll add that I think the main reason I've perceived the gap as super massive has to do with the fact that my favorites among these have been the titles that fared the worst relative to, say, their average scores on Metacritic.

Returnal: -21

The Last of Us Part II: -23

Death Stranding: -42

Whereas, in contrast, I didn't even read some of the more negative reviews afforded to Nintendo games. So on top of what I've just pointed out, just the psychology resulting from this finite exposure has also been a factor in my mind magnifying and inflating the reality, making it seem even more extreme than it actually is.

I didn't include Death Stranding because I wasn't counting console/timed exclusives that were available on PC (although I accidentally did include Too Human). I chose not to include PC games because first off I think they're less prone to anti-manufacturer bias (at least without considering Microsoft, which gets sticky), and also because there are a lot of smaller indie games which I don't know which platforms they released on. I'm not sure what else would be added if you wanted to do it that way, aside from Shenmue, which is the game I got to when I decided not to include them. For the record, Shenmue's VGC review was the same as its MC. You could include Death Stranding if you wanted, but then you get a situation where the entire difference hinges on one game that was pretty divisive. Not a great argument to show there is widespread anti-Sony bias.

As for whether or not to include games beyond 2 years, we're already dealing with a fairly small sample. If you want to take out everything pre-2019 and Persona 5, we're talking about 6 games. And honestly, that feels a bit arbitrary. For that to be the case, they would have had to have seen Jim Ryan take over, instantly say "fuck that guy," and start shitting on Sony games despite having no anti-Sony sentiment beforehand. I just don't see the whole review team turning on a dime because of the executive in control. Also, keep in mind, this is a mainly volunteer job that, based on experience (not with this site but a couple of others), pays relatively little. If I were to become a writer for this site, and they said "take 10 points off whatever you want to give Nintendo games," I'd say fuck you and quit. Can't speak for the writers, but VGC has very little leverage. They could try to subtly cultivate anti-Sony people through the hiring process, but that'd be pretty difficult.

As for Persona 5, I don't know why you wouldn't want to count it, especially if you want to count Death Stranding. Why would anti-Sony bias be limited strictly to games that are published by Sony even when they're available on Windows? Why wouldn't it extend to a game that is truly exclusive to Sony home consoles? I'd say, admittedly based on anecdotal evidence, that diehard fans (fanboys if you will) of a particular console tend to place a greater amount of emphasis on exclusive content than on other third-party offerings, even if not the same amount as they do for first-party exclusives. I would say, again based on anecdotal evidence, that exclusivity is a more important factor than publisher.

I feel that taking the ten most recent exclusive titles was the best way to go about things. Taking titles released after February of 2019 and excluding exclusives that weren't published by Sony/Nintendo feels like massaging the data. And even if it's not intentionally leading the data to the desired conclusion, it leaves us with a very small sample. It also leaves us with way more Nintendo titles, which means outliers will skew the data more for Sony than they would for Nintendo. The fewer the data points, the less reliable the data. If we looked at 100 games, it would be very unlikely that Nintendo games would score significantly higher without intentional bias. With 6 games, it's pretty plausible that a couple of reviewers just didn't like a couple of games and dragged down the average with no bias. 
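To illustrate that last point with an entirely hypothetical simulation: even if per-review deltas were pure noise with zero bias (both companies drawn from the same distribution), a 6-game sample would routinely show bigger apparent gaps between the two than a 100-game sample would. The spread and trial count here are made-up numbers, just for illustration.

```python
# Hypothetical simulation: with ZERO bias (both companies' review deltas
# drawn from the same noise distribution), small samples still produce
# larger apparent gaps between two companies than large samples do.
import random

random.seed(0)

def mean_apparent_gap(n, trials=2000, spread=10):
    """Average |mean(A) - mean(B)| over many unbiased n-game samples."""
    gaps = []
    for _ in range(trials):
        a = [random.gauss(0, spread) for _ in range(n)]
        b = [random.gauss(0, spread) for _ in range(n)]
        gaps.append(abs(sum(a) / n - sum(b) / n))
    return sum(gaps) / trials

print(mean_apparent_gap(6) > mean_apparent_gap(100))  # True
```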

As for why Nintendo products are reviewed more often, that's simply because there are more of them, at least over the time period in question. In 2018 (according to Wikipedia), Nintendo released 18 retail games compared to 8 for Sony. In 2019 it was 16-5 in Nintendo's favor. This would also tie into why you might perceive the gap as bigger than it is. I believe that Death Stranding was literally the only Sony retail release from 2019 that was reviewed, so if that score was low, it's going to stick out more.

At any rate, I think my methodology is pretty fair. But, even if you want to argue about whether or not a couple of games should have been included, I don't think there's nearly enough to demonstrate a calculated anti-Sony bias.




I like the message of your post Jaicee, especially the last paragraph, and I understand your thinking but for me personally I don't think it applies to a lot of big game companies, which includes Sony. It's just that Sony has so much capital that making a few games that are commercial failures won't bankrupt them and because of this powerful position they really aren't taking risks are they? I think your argument would work better if it was talking about indie devs no?? They have the most to lose and they take risks anyway, recently the most heartfelt games I've played haven't been from AAA developers but from indie devs.

Last edited by tsogud - on 07 July 2021

 

twintail said:

It depends what you mean by risks.

Returnal, for example, is a risk imho. It's an AAA roguelike and a third-person shooter from a studio that has never made one, exclusive to the PS5 (no PS4 version), and the protagonist is a middle-aged woman. 

Is the studio big and has a lot of capital? If so then it's not a risk, it won't matter if it doesn't sell well or gets picked up into another installment.

In terms of capitalistic investment it's not really a risk if you can afford it 10 times over. Just another game developed in your portfolio.

Last edited by tsogud - on 07 July 2021

 

Machina said:
JWeinCom said:

*snip*

Just wanted to say your last few posts in this thread were excellent. You researched our scores way back because you felt there was a bias and wanted to see if the data would confirm it; when the data instead showed no bias against any particular manufacturer, just that we're ~10% below the Metacritic average on the whole, you changed your conclusion. Props for that.

And you're correct - we, or at least I, have always aimed for VGC reviews to have a reputation for being tough on scoring and hard-to-please when it comes to our reviews and their scores. That's something I've cultivated over the last 10 years or so by explicitly stating it during the recruitment process, by having a firm review methodology, and by having a peer review process for all reviews. We definitely do not have a 'you must score this below the Meta' rule or attitude, but we have descriptors for each of our scores that the text of the review needs to reflect, and those descriptors set a high bar (they were the result of discussion and compromise amongst all review staff). If the reviewer genuinely thinks the game deserves an 8 on that scale and the text of the review matches the criteria for an 8, then that's ultimately what the game will get from us, even if the Metacritic average for it is say 6.5. And vice versa of course.

I noticed in one of your posts you disagreed with my pride in this tough approach, which is fine and I understand why people feel that way. Why do I like our approach though? Well, firstly because I'm quite a cynical and hard-to-please guy by nature. Another major reason is that I've always wanted our scores to actually mean something, especially on the upper end of the scale. If you give out 9-10s like candy to every hyped AAA game then, to me, your scores have no meaning or weight (and you're easily pleased). What use is that then to an audience really? And doubly so to people who want an honest assessment of the game they might be thinking of purchasing. But if your average over the last 8 years is 6.3 and you give a game 9.5 then I'm inclined to take notice and at least find out more about it, and if you give something a 10 then, well, it must be really fucking good.

A third and more minor point is I also feel like more of the ten point scale should be used. Granted, the process to getting a game to market is arduous and self-filtering, so there are very few games in the 1-3 range (unless you're mostly reviewing all of Steam's new releases), but what's the point in having a ten point scale and then only ever using five points on it (6 - and even that one rarely - then 7, 8, 9, and 10)?

Those are just my thoughts though, and while I ultimately make the final call on site policy I'm not some sort of dictator. Others on the team have and continue to contribute to our overall approach to reviewing, and I'll always take on board their feedback and try to reach a consensus where possible. Evan (aka Veknoid), for example, wrote most of the review methodology text (and did a great job imo). Lee's (coolbeans) input directly resulted in several word tweaks and a complete change to our method for giving out a 10. And during recent discussions, which eventually resulted in the methodology text being altered and scores for remasters being dropped, most of the team added their own views and we ultimately reached a majority decision on the changes.

You're in charge and you can do things how you want to. That being said, I strongly disagree with the review system.

The 1-10 scale is essentially a language. The purpose of language is to communicate clearly and effectively. And, when you are using language in a different way than everyone else, that's going to lead to confusion. Which is kind of what we see here. I obviously don't think Jaicee's accusation of bias was justified, but it's not hard to see how she came to it. Like I said, I got that impression as well. And most people probably have more active social lives and would not spend all that time on actually testing things out.  

I get how the review methodology works, but I think it's flawed. According to the methodology, anything above an 8 is a potential GOTY nominee, at least for some category. That means that about 1/4 of the possible scores are reserved for the handful of GOTY nominees. Meanwhile, 6.5 or below is considered "decent" with anything below a 6 being classified as an unsatisfying or incomplete product. That's 12 of the possible values. 

So if anything 8 or above is reserved for GOTY candidates and anything below 7 is, at best, decent, where does that leave games that are good but not quite great? Well, somewhere in the 7 range.

Which is what I found when I looked into it (at the time there were no half values, which would help a little bit). There were zero games that scored a 0, zero that scored a 1, one that scored a 2, zero that scored a 3, four that scored a 4, five that scored a 5, seventeen that scored a 6, thirty-nine that scored a 7, fourteen that scored an 8, five that scored a 9, and zero that scored a 10. Nearly half (43.8% to be exact) of all the games scored a 7. 57% scored either a 7 or 6. 71% scored between a 6 and an 8. Maybe that has changed since I looked into it (I believe that the most recent game I looked at was Xenoblade Chronicles HD), but as I see it the review methodology funnels everything towards a 7. Of the games I looked at, the Resident Evil 3 remake, DBZ Kakarot, Iron Man VR, Minecraft Dungeons, Shenmue 3, Trials of Mana, Hatsune Miku: Project Diva, Pokemon Sword/Shield, Retro Brawler Bundle, and The Last of Us 2 all received a 7. Are all those games really of equal quality? I know that there are half points now, but still, the review methodology leaves almost nowhere to put "good not great" games.
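For what it's worth, tabulating that distribution is trivial to script. The counts below are the ones I just listed; they sum to 85, a touch fewer than the total my quoted percentages imply, so treat the exact shares here as approximate rather than a re-derivation of those figures.

```python
# Sketch of the tally above: score -> number of games, counts as listed.
counts = {0: 0, 1: 0, 2: 1, 3: 0, 4: 4, 5: 5,
          6: 17, 7: 39, 8: 14, 9: 5, 10: 0}

total = sum(counts.values())
shares = {score: 100 * n / total for score, n in counts.items()}

mode = max(counts, key=counts.get)  # most common score
print(mode)                          # 7
print(round(shares[mode], 1))        # share of games landing on the mode
```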

And honestly, most games should be in that "good but not great" range. Games get reviewed either a) because the publisher sent in a copy or b) because the reviewer bought it themselves and wanted to review it. Companies generally aren't going to send out many games that are genuinely bad (which is why the data I have shows only 10 reviews under 5 out of nearly 100 games), and very few will be GOTY worthy. So, having so few options for "good" scores is a major problem. 

If you think the typical 1-10 scale is flawed, then using it in an idiosyncratic way is not the answer. Again, whether justified or not, people are going to think you're speaking the same language as other sites that use 1-10, and that's just going to lead to confusion. A much better solution is to use an entirely different system to score games. Gamexplain, for instance, uses a system that goes from something like "hated it" to "loved it," which is a really clear way to express how the reviewer felt about the game. An F to A+ system also works really well IMO because it's something that's familiar to people, and it allows a wide range of scores. You have 12 different "passing" grades, so you can still reserve As and A+s for the cream of the crop while also having a nice range of possible scores for games that are above average but fall short of greatness.
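To show what I mean about the letter scale's range, here's a hypothetical mapping from a 10-point score to letter grades; the pass mark and cutoffs are invented for the example, just to illustrate the twelve passing grades.

```python
# Hypothetical mapping from a 10-point review score to letter grades,
# illustrating how an F-to-A+ scale gives twelve distinct "passing" grades.
# The pass mark and cutoffs are invented for the example.
GRADES = ["D-", "D", "D+", "C-", "C", "C+", "B-", "B", "B+", "A-", "A", "A+"]

def letter(score, pass_mark=4.0, top=10.0):
    """Map a score in [0, top] to F below pass_mark, else one of 12 grades."""
    if score < pass_mark:
        return "F"
    idx = int((score - pass_mark) / (top - pass_mark) * len(GRADES))
    return GRADES[min(idx, len(GRADES) - 1)]

print(letter(7))   # B-
print(letter(10))  # A+
```

On a scale like that, a "good not great" game has the whole B range to land in without touching the A tier.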

Again, it's not my site, and you could do things how you want, but since I did moderate the comments of reviews for a while, I can say that there are a lot of people who take reviews the wrong way (which to be fair is maybe unavoidable on the internet). If your goal is for reviewers to be able to convey their thoughts on a game as clearly as possible, I don't think the current system accomplishes that. By using a completely different system you get to do things differently than other sites with less risk of being misinterpreted. Everyone's on the same page, which is the whole point of communication.

Last edited by JWeinCom - on 07 July 2021

tsogud said:

I like the message of your post Jaicee, especially the last paragraph, and I understand your thinking but for me personally I don't think it applies to a lot of big game companies, which includes Sony. It's just that Sony has so much capital that making a few games that are commercial failures won't bankrupt them and because of this powerful position they really aren't taking risks are they? I think your argument would work better if it was talking about indie devs no?? They have the most to lose and they take risks anyway, recently the most heartfelt games I've played haven't been from AAA developers but from indie devs.

First off, welcome back!!

Secondly, yeah for sure. But I also like to see those sorts of games get proper budgets and more visibility for a change, and unfortunately first-party publishers are a main pathway to that. To that end, I definitely think about the amount of creative freedom that developers get under various publishers. Many publishers view their role as essentially one of mediating between what the people making games want to do on the one hand and what they perceive will sell and make them a profit on the other, and I'm just against that.

No, a company like Sony is obviously at no serious risk of collapse or anything like that. But a company of that scale also tends to have a different, greedier definition of risk than you or I can even fathom. Risk to them means like the possibility of losing capital, not of folding, and in that sense there is real risk involved here in some of these projects. Why do you think their Japan Studio is being consolidated right now, with most of its workers being laid off? Because losses from the Japan Studio could cause Sony, the multinational conglomerate, to collapse? No, it's because losses in any one area are commercially inefficient. I'm just hoping that that soulless mentality doesn't get applied to more projects going forward. That was really my point before.

Anyway, I'll conclude this reply by shamelessly hawking another thread of mine 'cause I'd be interested to know what your favorite newer games have been. (An expanded version of my list can be found on the second page.)



coolbeans said:
Jaicee said:

It would be funny if like I were to do an official game review, I think. I don't believe the world's ready for that.

Could be fun!  I'm all for drifting away from the overly-generous avg. scoring mindset from the 7th-gen.

[EDIT: Should you really go for a writer/critic position: make sure your personal scoring criteria falls in line with the site's review methodology too.  Having that harmony helps so the only conversation that's occurring is how well you're expressing your viewpoint, rather than dueling viewpoints.]

Well let's put it this way: there would be a whole lot of 9s and 2s because I just tend to either fall in love with a game or else hate it. I don't think I'm really capable of objectivity when it comes to judging art. For me, it's about the subjectivity of what it does for me, and things in that sense tend to either work well or not at all, so I think I'd make a terrible critic.

Last edited by Jaicee - on 08 July 2021

I think it's fair to use the data from before half scores, provided that I acknowledged the data may be slightly out of date. If things are different now cool. But, it's a bit too time consuming to look into for me atm.

I think all review scales have certain ranges that are going to be used more, because, like I mentioned, most games that get reviewed are going to be on the good side due to natural selection bias (i.e. publishers are less likely to send in shit games to be reviewed). But most sites would consider 9.0 and above GOTY range, which leaves 7.0-8.9 to cover the range of good games. So, there's more room for those games. And I think it's more important to be able to differentiate between the varying levels of "good" games than it is to be able to differentiate between exactly how good games in the GOTY range are.

As someone who watches Gamexplain, I had no idea that there were any numbers that correlated to their reviews (and I'm not sure if they mean for that or if it's something OpenCritic did of its own volition). When people see a number, I think they're naturally going to view it in relation to other sites that use numbers. When I see "liked it a lot," all I really think is "oh, that reviewer liked it a lot." So there's no baggage there.

But that's just my two cents, and at the end of the day it's your/the writing team's decision. But thanks for listening.

Last edited by JWeinCom - on 08 July 2021

Jaicee said:
tsogud said:

I like the message of your post Jaicee, especially the last paragraph, and I understand your thinking but for me personally I don't think it applies to a lot of big game companies, which includes Sony. It's just that Sony has so much capital that making a few games that are commercial failures won't bankrupt them and because of this powerful position they really aren't taking risks are they? I think your argument would work better if it was talking about indie devs no?? They have the most to lose and they take risks anyway, recently the most heartfelt games I've played haven't been from AAA developers but from indie devs.

First off, welcome back!!

Secondly, yeah for sure. But I also like to see those sorts of games get proper budgets and more visibility for a change, and unfortunately first-party publishers are a main pathway to that. To that end, I definitely think about the amount of creative freedom that developers get under various publishers. Many publishers view their role as essentially one of mediating between what the people making games want to do on the one hand and what they perceive will sell and make them a profit on the other, and I'm just against that.

No, a company like Sony is obviously at no serious risk of collapse or anything like that. But a company of that scale also tends to have a different, greedier definition of risk than you or I can even fathom. Risk to them means like the possibility of losing capital, not of folding, and in that sense there is real risk involved here in some of these projects. Why do you think their Japan Studio is being consolidated right now, with most of its workers being laid off? Because losses from the Japan Studio could cause Sony, the multinational conglomerate, to collapse? No, it's because losses in any one area are commercially inefficient. I'm just hoping that that soulless mentality doesn't get applied to more projects going forward. That was really my point before.

Anyway, I'll conclude this reply by shamelessly hawking another thread of mine 'cause I'd be interested to know what your favorite newer games have been. (An expanded version of my list can be found on the second page.)

Ahh I see what exactly you're getting at. Maybe I was just reading it with too jaded a view of Sony since I'm not partial to big corporations lmaooo, but yeah, I'd agree, and my hopes for the gaming industry going forward are the same. Heartfelt indie games that take risks are great, but it's a shame, and it's true like you said, that they don't get proper budgets or visibility like AAA first-party games.

And thanks for the welcome! I'll be sure to def check out that thread!