
Creative freedom, bravery, and risk in games development/publishing

Jaicee said:
coolbeans said:

Definitely not.  My intent was to further probe the Metacritic/VGC connection you made since I didn't have much context; plus, I've recently had other spats around Metacritic & contrarian opinions, so I anticipated the challenge you brought up would go in a similar direction.

I appreciate it, and likewise regarding commenting & challenging.  I'll also say I didn't expect this line of reasoning.  It is a valid consideration I hadn't thought of regarding potential or perceived site bias.  My overarching response to this portion is going to be based on memory (I'll correct anything I ought to later on).

[2] As for who gets picked for *x game*, it comes down to 1.) who's interested and 2.) where it lands on their hierarchy of wants for the month.  Some PS exclusives I've covered here were the result of me being the only person to sign up -- or at least put it high up on my hierarchy.  I'm not sure how confident any other writers were that they were getting their PS5 at launch.  Since my package seemed secure, I figured I'd give Astro's Playroom a look.  I remember Death Stranding being a similar case: I showed the most interest out of the crew and went from there.  I was initially going to review Returnal, but the tea leaves warned me I'd not get to it for a long time.  I can't remember the particular details, but it was between Paul and someone else to see if they could finish by the site's deadline.  Miles Morales was my "first dibs" win: the losers of previous tiebreakers get first dibs in the event of the next tiebreaker.

[3] I'm going to provide a bit of pushback against the Nintendo counter-example.  There's been more nuance to it since I've been here.  Just look at some of Paul's output to see what I mean: Monster Hunter Rise, Hyrule Warriors: Age of Calamity, Xenoblade Chronicles: Definitive Edition.  Hilariously enough, the most popular Age of Calamity comment was pointing out how it's lower than the lowest on MC!  Obviously this doesn't cover the whole breadth of Nintendo exclusive reviews, but I thought it was worth inserting.

Perhaps part of your suspicion stems from there being less churn among potential Nintendo reviewers than PlayStation ones.  Had I started a bit sooner, I would've likely been the one guy to cover all the key 2019 PS exclusives (if I had the time): Days Gone, Death Stranding, Concrete Genie, and the MediEvil remaster.  It's like I'm at the intersection of available reviewers who stick around & those most interested in tackling a given title.  Since a few more writers have secured PS5s and have some more free time, I don't see it staying that way.

I'll do the numbering thing 'cause I think it just makes it clearer what I'm responding to at any given time.

[1] I'm just using MC as an arbitrary gauge simply to have a gauge. Ain't meaning to imply you gotta agree with the average critic. I know I disagree with the critical consensus fairly often. I mean, the fact is that I do look at what a game's average review score is and consider that before I buy it. It's not the only thing I consider by any means, but it is something, and I suspect that's the case for most people. That, I think, does reveal that most of us do indeed tend to place at least some credence in these things.

[2] Well that makes sense.

[3] I didn't even read those reviews because I wasn't interested in any of those games myself. Didn't realize that Paul had reviewed any Nintendo games since giving Luigi's Mansion 3 a score of 6! And yes, I've noticed that the two of you have a penchant for scoring first-party games, and indeed many video games in general, worse than most would, and conversely some other, typically more obscure titles much higher than most people would. Just, like you say, contrarian views of things, I guess. Sometimes I feel like that too. Prolly why I prioritized getting a Series X before a PS5 (and now kinda regret that decision). Or for example, whereas most people buy Nintendo systems primarily for Nintendo games, by contrast, when the Switch initially launched in early 2017 to immense commercial success, I wasn't interested. I bought mine only after the announcement of Metroid Prime 4 at that year's E3, mainly for the promise of that game. To this day, I own but a handful of Nintendo games for it and use it almost exclusively to play indie games. (It's become my default indie machine, and it sure looks like it's gonna remain so, I might add.) Conversely, I like PlayStation systems these days for the first-party libraries and largely ignore mainstream third-party publishers altogether (simply no interest in the latest Assassin's Creeds and such). So yeah, in some respects, I think I understand and even share this strong sense of independence.

Anyway, getting back to the subject of VGC's official reviews, I was thinking of more high-profile Nintendo games, like Paper Mario: The Origami King and Animal Crossing: New Horizons, which fall within the same window of time. Both of these titles were reviewed by others and scored an 8, which is higher than VGC has scored any Sony title in years now, to my knowledge. In fact, as I recall, the former was the leading VGC staff pick for Game of the Year! I believe these facts by themselves serve as an adequate counterpoint. (Nothing against either, mind you. I've enjoyed them both (and Luigi's Mansion 3) plenty myself. Just pointing out a contrast.)

It would be funny if like I were to do an official game review, I think. I don't believe the world's ready for that.

[1]  And that's a much more nuanced approach.  Like I mentioned before, it certainly felt less arbitrary after you placed it in that "tasked by site" context.  And while it's still an understandable gauge to see where a site lands compared to avg. critics, it really sucks how often it's been poisoned for more unsavory purposes.

[3]  I follow you.  And that score comparison is quite interesting when assessing over such a timespan.  Like the "who gets to review x" point, it's something that flies past my head until it's brought up.  Although I would add another intersection to my previous comment: whether you're able to squeeze it in by the deadline.  There are two notable examples that fell through the cracks because of this: Dreams & Demon's Souls (2020).  There was special interest in Demon's Souls from one person who wanted to be thorough, but I believe he had some IRL things that disrupted his time, and it just fell through since it seemed like everyone else wanted to try it out later on.  Maybe they'd grab it after a price cut.  I think both of these had strong 9-score potential too.

Could be fun!  I'm all for drifting away from the overly-generous avg. scoring mindset from the 7th-gen.

[EDIT: Should you really go for a writer/critic position, make sure your personal scoring criteria fall in line with the site's review methodology too.  Having that harmony helps so that the only conversation happening is about how well you're expressing your viewpoint, rather than dueling viewpoints.]

Last edited by coolbeans - on 07 July 2021

JWeinCom said:

*snip*

Just wanted to say your last few posts in this thread were excellent. I'm also glad that when you researched our scores a while back - because you felt there was a bias and wanted to see if the data would confirm it - and the data instead showed no bias against any particular manufacturer, just that we're ~10% below the Metacritic average on the whole, you changed your conclusion. Props for that.

And you're correct - we, or at least I, have always aimed for VGC reviews to have a reputation for being tough on scoring and hard-to-please when it comes to our reviews and their scores. That's something I've cultivated over the last 10 years or so by explicitly stating it during the recruitment process, by having a firm review methodology, and by having a peer review process for all reviews. We definitely do not have a 'you must score this below the Meta' rule or attitude, but we have descriptors for each of our scores that the text of the review needs to reflect, and those descriptors set a high bar (they were the result of discussion and compromise amongst all review staff). If the reviewer genuinely thinks the game deserves an 8 on that scale and the text of the review matches the criteria for an 8, then that's ultimately what the game will get from us, even if the Metacritic average for it is say 6.5. And vice versa of course.

I noticed in one of your posts you disagreed with my pride in this tough approach, which is fine and I understand why people feel that way. Why do I like our approach though? Well, firstly because I'm quite a cynical and hard-to-please guy by nature. Another major reason is that I've always wanted our scores to actually mean something, especially on the upper end of the scale. If you give out 9-10s like candy to every hyped AAA game then, to me, your scores have no meaning or weight (and you're easily pleased). What use is that then to an audience really? And doubly so to people who want an honest assessment of the game they might be thinking of purchasing. But if your average over the last 8 years is 6.3 and you give a game 9.5 then I'm inclined to take notice and at least find out more about it, and if you give something a 10 then, well, it must be really fucking good.

A third and more minor point is I also feel like more of the ten-point scale should be used. Granted, the process of getting a game to market is arduous and self-filtering, so there are very few games in the 1-3 range (unless you're mostly reviewing all of Steam's new releases), but what's the point in having a ten-point scale and then only ever using five points on it (6 - and even that one rarely - then 7, 8, 9, and 10)?

Those are just my thoughts though, and while I ultimately make the final call on site policy I'm not some sort of dictator. Others on the team have and continue to contribute to our overall approach to reviewing, and I'll always take on board their feedback and try to reach a consensus where possible. Evan (aka Veknoid), for example, wrote most of the review methodology text (and did a great job imo). Lee's (coolbeans) input directly resulted in several word tweaks and a complete change to our method for giving out a 10. And during recent discussions, which eventually resulted in the methodology text being altered and scores for remasters being dropped, most of the team added their own views and we ultimately reached a majority decision on the changes.



Machina said:
*snip*

You're in charge and you can do things how you want to. That being said, I strongly disagree with the review system.

The 1-10 scale is essentially a language. The purpose of language is to communicate clearly and effectively, and when you use language differently from everyone else, that's going to lead to confusion. Which is kind of what we see here. I obviously don't think Jaicee's accusation of bias was justified, but it's not hard to see how she came to it. Like I said, I got that impression as well. And most people probably have more active social lives and wouldn't spend all that time actually testing things out.

I get how the review methodology works, but I think it's flawed. According to the methodology, anything above an 8 is a potential GOTY nominee, at least for some category. That means that about 1/4 of the possible scores are reserved for the handful of GOTY nominees. Meanwhile, a 6 or 6.5 is considered merely "decent", and anything below a 6 is classified as an unsatisfying or incomplete product - that's another 12 of the possible values.
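To make the arithmetic concrete, here's a quick sketch of how that carves up a 0-10 scale in half-point steps (the band cut-offs are my reading of the methodology, so treat them as approximate):

# Counting how the methodology's bands carve up a 0-10 scale in half-point steps.
# The band boundaries here are my interpretation of the methodology, not official definitions.
scores = [x / 2 for x in range(0, 21)]             # 0.0, 0.5, ..., 10.0 -> 21 values
goty   = [s for s in scores if s >= 8.0]           # potential GOTY nominees
decent = [s for s in scores if 6.0 <= s <= 6.5]    # merely "decent"
below  = [s for s in scores if s < 6.0]            # unsatisfying/incomplete
middle = [s for s in scores if 7.0 <= s <= 7.5]    # what's left for "good not great"
print(len(scores), len(goty), len(decent), len(below), len(middle))
# -> 21 total: 5 reserved for the GOTY range (~1/4), 2 "decent", 12 below that, only 2 in the 7s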

So if anything 8 or above is reserved for GOTY candidates and anything below 7 is, at best, decent, where does that leave games that are good but not quite great? Well, somewhere in the 7 range.

Which is what I found when I looked into it (at the time there were no half values, which now help a little bit). There were zero games that scored a 0, zero that scored a 1, one that scored a 2, zero that scored a 3, four that scored a 4, five that scored a 5, seventeen that scored a 6, thirty-nine that scored a 7, fourteen that scored an 8, five that scored a 9, and zero that scored a 10. Nearly half (43.8% to be exact) of all the games scored a 7. 57% scored either a 7 or a 6. 71% scored between a 6 and an 8. Maybe that has changed since I looked into it (I believe the most recent game I looked at was Xenoblade Chronicles HD), but as I see it the review methodology funnels everything towards a 7. Of the games I looked at, the Resident Evil 3 remake, DBZ Kakarot, Iron Man VR, Minecraft Dungeons, Shenmue 3, Trials of Mana, Hatsune Miku: Project Diva, Pokemon Sword/Shield, Retro Brawler Bundle, and The Last of Us 2 all received a 7. Are all those games really of equal quality? I know that there are half points now, but still, the review methodology leaves almost nowhere to put "good not great" games.
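If anyone wants to redo that tally on more current data, the arithmetic is nothing fancy; here's a quick sketch using the bucket counts listed above (which are from memory, so treat the exact output as approximate):

# Tally of scores by integer value, using the counts listed above (recalled from memory).
counts = {0: 0, 1: 0, 2: 1, 3: 0, 4: 4, 5: 5, 6: 17, 7: 39, 8: 14, 9: 5, 10: 0}
total = sum(counts.values())

def share(score_values):
    # Percentage of all tallied games whose score falls in score_values.
    return 100 * sum(counts[s] for s in score_values) / total

print(f"games tallied:     {total}")
print(f"scored exactly 7:  {share([7]):.1f}%")
print(f"scored a 6 or 7:   {share([6, 7]):.1f}%")
print(f"scored 6 to 8:     {share([6, 7, 8]):.1f}%")
print(f"scored 5 or under: {share(range(0, 6)):.1f}%")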

And honestly, most games should be in that "good but not great" range. Games get reviewed either a) because the publisher sent in a copy or b) because the reviewer bought it themselves and wanted to review it. Companies generally aren't going to send out many games that are genuinely bad (which is why the data I have shows only 10 reviews scoring 5 or under out of nearly 100 games), and very few will be GOTY worthy. So, having so few options for "good" scores is a major problem.

If you think the typical 1-10 scale is flawed, then using it in an idiosyncratic way is not the answer. Again, whether justified or not, people are going to think you're speaking the same language as other sites that use 1-10, and that's just going to lead to confusion. A much better solution is to use an entirely different system to score games. Gamexplain, for instance, uses a system that goes from something like "hated it" to "loved it", which is a really clear way to express how the reviewer felt about the game. An F to A+ system also works really well IMO because it's something that's familiar to people, and it allows a wide range of scores. You have 12 different "passing" grades, so you can still reserve As and A+s for the cream of the crop while also having a nice range of possible scores for games that are above average but fall short of greatness.
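To spell out what I mean by range, here's a rough sketch (the numeric equivalents are a made-up mapping purely for illustration, not anything any site actually uses):

# Letter-grade scale from F up to A+. Everything from D- upward counts as "passing",
# which gives 12 passing grades to spread "okay" through "great" across.
grades = ["F", "D-", "D", "D+", "C-", "C", "C+", "B-", "B", "B+", "A-", "A", "A+"]
passing = grades[1:]                 # D- and up
print(len(passing))                  # -> 12

# One hypothetical way an aggregator might convert those grades to a 0-100 score:
to_score = {g: round(100 * i / (len(grades) - 1)) for i, g in enumerate(grades)}
print(to_score["B"], to_score["A+"]) # -> 67 100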

Again, it's not my site, and you could do things how you want, but since I did moderate the comments of reviews for a while, I can say that there are a lot of people who take reviews the wrong way (which to be fair is maybe unavoidable on the internet). If your goal is for reviewers to be able to convey their thoughts on a game as clearly as possible, I don't think the current system accomplishes that. By using a completely different system you get to do things differently than other sites with less risk of being misinterpreted. Everyone's on the same page, which is the whole point of communication.

Last edited by JWeinCom - on 07 July 2021

tsogud said:

I like the message of your post, Jaicee, especially the last paragraph, and I understand your thinking, but personally I don't think it applies to a lot of big game companies, Sony included. Sony has so much capital that making a few games that are commercial failures won't bankrupt them, and because of that powerful position they aren't really taking risks, are they? I think your argument would work better if it were about indie devs, no?? They have the most to lose and they take risks anyway; recently the most heartfelt games I've played haven't been from AAA developers but from indie devs.

First off, welcome back!!

Secondly, yeah, for sure. But I also like to see those sorts of games get proper budgets and more visibility for a change, and unfortunately first-party publishers are a main pathway to that. To that end, I definitely think about the amount of creative freedom that developers get under various publishers. Many publishers view their role as essentially one of mediating between what the people making games want to do on the one hand and what they perceive will sell and make them a profit on the other, and I'm just against that.

No, a company like Sony is obviously at no serious risk of collapse or anything like that. But a company of that scale also tends to have a different, greedier definition of risk than you or I can even fathom. Risk to them means the possibility of losing capital, not of folding, and in that sense there is real risk involved in some of these projects. Why do you think their Japan Studio is being consolidated right now, with most of the workers being laid off? Because losses from Japan Studio could cause Sony, the multinational conglomerate, to collapse? No, it's because losses in any one area are commercially inefficient. I'm just hoping that that soulless mentality doesn't get applied to more projects going forward. That was really my point before.

Anyway, I'll conclude this reply by shamelessly hawking another thread of mine 'cause I'd be interested to know what your favorite newer games have been. (An expanded version of my list can be found on the second page.)



coolbeans said:
*snip*

Well, let's put it this way: there would be a whole lot of 9s and 2s, because I just tend to either fall in love with a game or else hate it, whereas a proper critic would be viewing it more objectively. I don't think I'm really capable of objectivity when it comes to judging art. For me, it's about the subjectivity of what it does for me, and things in that sense tend to either work well or not at all, so I think I'd make a terrible critic.

Last edited by Jaicee - on 08 July 2021

JWeinCom said:

*snip*

I'm not a mathematician, but some bunching around the middle is to be expected for game reviews; it's just that the middle for most outlets tends to be 7-8, and for us it's more like 6-7.

When we didn't have half scores it definitely felt like scores were funnelled into 7 and that became a serious problem. I think the introduction of half scores has really helped and that's nowhere near as big an issue as it was.

By the way, I don't think it's fair to continue to use stats from before the changes. I can't remember exactly when we switched, but from our first half score onwards only 17% are now straight 7s, which goes up to 24% if you include 7.5s. The introduction of half scores hasn't just given us more options to work with around that middle sweet spot you're talking about between meh and great, it also seems to have loosened up our overall range. It's early days, so the sample is still small (~113 reviews), but a third of all scores are currently falling at 5 or under.
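In raw numbers, that's roughly the following (a quick back-of-the-envelope sketch, assuming the ~113-review sample mentioned above):

# Approximate review counts behind those percentages, assuming ~113 reviews since half scores arrived.
reviews = 113
print(round(0.17 * reviews))   # straight 7s    -> about 19 reviews
print(round(0.24 * reviews))   # 7s plus 7.5s   -> about 27 reviews
print(round(reviews / 3))      # 5 or under     -> about 38 reviews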

---

On your broader criticisms of the gaming review industry's 1-10 scale itself being... highly flawed, I actually agree with many of your points. I'm not a fan of the grade system as an alternative - we used that for a while when I first started writing reviews for the site and it didn't feel great; everyone just roughly converted them into scores anyway, including most review site aggregators. After that we moved to a 100 point scale (too many options, but I preferred it to grades), then no scores at all just pros & cons (felt lacking and people missed the scores), then a 10 point scale (not enough options and too much bunching on 7), and now basically a 20 point scale. I think that this is probably the best one for us so far, but like I said - early days.

Gamexplain's system does sound good from a reader and potential buyer perspective, although even their system converts into a score. Basically, the scores are hidden from readers and only their equivalent of our descriptors are shown. We could do that and change the descriptors from Awful, Decent, Outstanding, etc. into something like Avoid, Liked, Highly Recommend, etc., but ultimately you're still working with something that 1) gets converted into a score by necessity, and 2) has fewer options along the scale. The OpenCritic form for the Liked, Like-a-lot (ew), Hated, etc. system, for example, is basically an 8-point scale. But maybe because it's just a word (and not a score) that's shown to viewers, it's not as noticeable or problematic?
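For illustration, the kind of conversion I mean would look something like this (the labels and numbers here are made up for the example, not our actual descriptors or OpenCritic's real mapping):

# Hypothetical mapping from reader-facing descriptors to an underlying numeric scale.
# Labels and values are illustrative only - not VGC's or OpenCritic's actual system.
descriptor_to_score = {
    "Avoid": 1,
    "Disliked": 3,
    "Mixed": 4,
    "Liked": 6,
    "Liked a lot": 7,
    "Loved": 8,
}
# Readers would only ever see the label; aggregators would quietly use the number.
print(descriptor_to_score["Liked a lot"])   # -> 7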

Anyway, I'd probably argue for sticking with our current system because I'm broadly happy with it and I definitely think it's better than all of the ones we've tried out in the past. But if a majority of the team wanted to change to something like Gamexplain's then I would roll with it. We would probably have trouble implementing it right now though, because Talon's not around to help code unfortunately.



I think it's fair to use the data from before half scores, provided that I acknowledged the data may be slightly out of date. If things are different now, cool. But it's a bit too time-consuming for me to look into atm.

I think all review scales have certain ranges that get used more, because, like I mentioned, most games that get reviewed are going to be on the good side due to a natural selection bias (i.e. publishers are less likely to send in shit games to be reviewed). But most sites would consider 9.0 and above GOTY range, which leaves 7.0-8.9 to cover the range of good games, so there's more room for those games. And I think it's more important to be able to differentiate between the varying levels of "good" games than it is to differentiate between exactly how good games in the GOTY range are.

As someone who watches Gamexplain, I had no idea that there were any numbers that correlated to their reviews (and I'm not sure if they intended that or it's something OpenCritic did of its own volition). When people see a number, I think they're naturally going to view it in relation to other sites that use numbers. When I see "liked it a lot", all I really think is "oh, that reviewer liked it a lot". So there's no baggage there.

But that's just my two cents, and at the end of the day it's your/the writing team's decision. Thanks for listening.

Last edited by JWeinCom - on 08 July 2021

Jaicee said:
*snip*

Ahh, I see what you're getting at now. Maybe I was just reading it with too jaded a view of Sony, as I'm not partial to big corporations lmaooo, but yeah, I'd agree, and my hopes for the gaming industry going forward are the same. Heartfelt indie games that take risks are great, but it's a shame that, like you said, they don't get proper budgets or visibility like AAA first-party games.

And thanks for the welcome! I'll be sure to def check out that thread!