ManUtdFan said:
All this extra power will be wasted on raytracing, which won't ever deliver photorealistic graphics, unlike path tracing. I don't want shiny bricks or cartoonish, sterile environments...

And so we will have to wait another 10 years for discernible upgrade to graphics quality.

Path Tracing is Ray Tracing. Not all Ray Tracing is created equal... Games have been dipping their toes into the Ray Tracing waters for over a decade now, to varying degrees... It's being popularized now because of nVidia and what next-gen hardware might potentially bring to the table.

The biggest limitation in games currently is certainly lighting though; over the years developers have tried their best to 'fake' ray tracing with baked lighting and the like, which was especially prevalent during the 7th gen.

I say reserve judgement until we see the games and hardware in action.

HollyGamer said:

You'd better buy a 1080p, 120Hz PC monitor for next gen if you plan to buy either of the two, because I bet they will let us choose performance (frame rate over resolution) like with the PS4 Pro and Xbox One X. And the PS5/Scarlett will have 120fps capability with HDMI 2.1 and FreeSync. It's cheaper too, especially a FreeSync monitor.

I don't think that really happened often on the Xbox One X though? I know a few titles offered it... But it was far from the norm really.

HollyGamer said:

It can be a tool to measure and compare capability on compute workloads between similar architectures. Let's say the RTX 2080 and 2070.

Well. No it can't... I have been over this a million times on this forum... But I'll go over it again.

There are a ton of different types of compute workloads... Again we have 8-bit, 16-bit, 32-bit, 64-bit floating point and so on.
Then we have 8-bit, 16-bit, 24-bit, 32-bit, 64-bit integers as well. - For example most GPUs don't have native 64-bit integer support, so they emulate that functionality on the 32-bit blocks... Which comes with a corresponding hit to performance. (See the sketch below.)
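To make that concrete, here's a minimal sketch (Python, purely illustrative, not actual GPU ISA code) of the kind of extra work emulation involves: a 64-bit add built out of 32-bit pieces with a manual carry.

```python
# Illustrative only: emulating a 64-bit add with 32-bit operations plus a carry,
# similar in spirit to what a GPU without native INT64 hardware has to do.
MASK32 = 0xFFFFFFFF

def add64_via_32bit(a: int, b: int) -> int:
    lo = (a & MASK32) + (b & MASK32)               # add the low 32-bit halves
    carry = lo >> 32                               # did the low half overflow?
    hi = ((a >> 32) + (b >> 32) + carry) & MASK32  # high halves plus the carry
    return (hi << 32) | (lo & MASK32)

print(hex(add64_via_32bit(0xFFFFFFFF, 1)))         # 0x100000000
```

One 64-bit add becomes several 32-bit operations... Hence the performance hit.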

And FLOPS only refers to one of those: 32-bit floating point, aka single precision.
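For reference, here's the back-of-envelope formula all those spec-sheet FLOPS figures come from (a sketch; the factor of 2 assumes one fused multiply-add counts as two operations):

```python
# Peak FP32 throughput as quoted on spec sheets:
# cores * clock * 2 (one fused multiply-add = two floating point ops).
def peak_fp32_tflops(cores: int, clock_mhz: float) -> float:
    return cores * clock_mhz * 1e6 * 2 / 1e12

# Example with the public PS4 specs: 1152 shaders @ 800MHz.
print(peak_fp32_tflops(1152, 800))  # ~1.84 TFLOPS
```

Note it says nothing about integer throughput, half or double precision, or anything else on the chip.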

Not to mention that other parts of the chip come into play during pure compute workloads as well... For example the GeForce GT 1030 has DDR4 and GDDR5 variants... In terms of compute there is only a difference of 6.5% (1152MHz vs 1227MHz, both @ 384 CUDA cores).
But the real world performance of the DDR4 variant is often 50% or more slower.
https://www.gamersnexus.net/hwreviews/3330-gt-1030-ddr4-vs-gt-1030-gddr5-benchmark-worst-graphics-card-2018
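The culprit is memory bandwidth. A quick sketch using the two cards' public memory specs (my numbers, not taken from the linked review) shows the gulf:

```python
# Peak memory bandwidth = (bus width in bytes) * effective data rate (GT/s).
def bandwidth_gb_s(bus_bits: int, data_rate_gt_s: float) -> float:
    return bus_bits / 8 * data_rate_gt_s

print(bandwidth_gb_s(64, 6.0))  # GDDR5 GT 1030: ~48 GB/s
print(bandwidth_gb_s(64, 2.1))  # DDR4 GT 1030: ~16.8 GB/s, roughly a third
```

A ~6.5% compute gap, but roughly a 3x bandwidth gap... Hence the real-world results.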

Ethereum mining for example, which tends to be treated as a pure compute workload, also sees significant gains from more memory bandwidth on the graphics card... Which just further emphasizes that FLOPS is not a relevant capability measurement tool, even between GPUs of the same architecture.

HollyGamer said:

The possibility of using 7nm+ is still there, especially since Sony/Microsoft can let other fabrication manufacturers produce the PS5/Scarlett chip, because AMD doesn't fabricate chips, it only designs them. Also the problem is not Sony's/Microsoft's money, because they have the money. They can choose either TSMC or Samsung, differing from AMD's own GPU lineup for PC, just like how the PS4 Pro used 16nm fabrication while the Polaris RX 480 used 14nm.

Also, Samsung has already stated they can mass-produce 7nm+, and some smartphones already use it for their parts. So 7nm+ is still in the realm of possibility.

GlobalFoundries has stepped away from the leading-edge fabrication race... So they are out.
TSMC's 7nm+ is going to be leveraging EUV, so existing designs won't just automatically translate over.

TSMC's 7nm+ is likely to be only a marginal improvement over 7nm anyway, with a 20% density improvement and a 10% performance increase... Probably not worth gambling on that process for the next-gen consoles' monolithic chips. - Yields are stupidly important. (See the sketch below.)
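To illustrate why die size makes yields so critical, here's a sketch using the classic Poisson yield model, Y = exp(-D * A), with a made-up defect density (these are not TSMC's real numbers):

```python
import math

# Simple Poisson yield model: the fraction of dies with zero defects.
# D = defect density (defects per cm^2, assumed), A = die area (cm^2).
def die_yield(defect_density: float, die_area_cm2: float) -> float:
    return math.exp(-defect_density * die_area_cm2)

D = 0.2  # assumed defects/cm^2 for a young process; illustrative only
print(die_yield(D, 3.5))  # ~350mm^2 console-class APU -> ~50% good dies
print(die_yield(D, 1.0))  # ~100mm^2 smartphone chip   -> ~82% good dies
```

A big monolithic console APU pays a far steeper yield penalty on an immature node than the small mobile chips currently shipping on it.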

TSMC's 7nm+ capacity is also only going to be 1,000~ wafers per day... That can't all be reserved for the next-gen consoles; other partners building ARM processors, GPUs and other pieces of logic will be jumping at it as well.

Samsung's 7nm process however will be employing EUV... But let's keep in mind that it's not the same as TSMC's 7nm+ process; don't fall into the trap of their marketing shenanigans... You can't compare Samsung's 7nm to TSMC's 7nm, marketing has made the comparisons useless on a number-to-number basis.
Both TSMC and Samsung will have up to quad patterning for their fabs.

Is it possible that the next-gen consoles could use 7nm+? Yes. It's possible, it's just highly unlikely at this stage.

And yes, money is important for Microsoft and Sony; the more you spend on chips and the production thereof, the higher your costs become for designing and building a console, which flows on to the consumer.
Long gone are the days when it was financially feasible for console manufacturers to dump billions on designing chips for their devices... Sony and Microsoft have limits, you know, generally governed by shareholder expectations.

HollyGamer said:

I know it's slower than the 1080 Ti, but the Gonzalo benchmark shows roughly the same performance as the RTX 2070 or RX 5700 XT; that has always been the target spec for the PS5/Scarlett, and it's enough for a console.

I think myself and many others had hoped that the next-gen consoles would target high-end GPU performance, rather than the mid-range that the Xbox One and Playstation 4 eventually settled on, to see a bigger leap in general fidelity.
Don't get me wrong, we will see a big leap; it's just not going to be as impressive as it could be... Partly that is down to AMD not being able to keep pace with nVidia's performance cadence.

Is it enough for a console? I would argue more is better.

HollyGamer said:

Even 1080 Ti performance can be achieved by a GTX 1080 using optimization, if a game is made specifically for the GTX 1080.

But if you optimize for a GeForce 1080 Ti, then the same performance gap between the GTX 1080 and the 1080 Ti will continue to exist.

Optimization isn't some magical construct that makes hardware more capable and excludes all other pieces of hardware.

HollyGamer said:

The problem is the PC always needs more raw power to run games, because the PC struggles with optimization (most PC game demos and trailers use a 1080 Ti to avoid bugs and trouble). Also, the PS4 uses a slower, modified 7870 (2 fewer CUs and a lower GPU clock speed) that is still better than a 7850.

The PC gets optimizations... I think console gamers forget this.
Whenever AMD and nVidia roll out a driver update, it's not just to make things look prettier or fix bugs... But to introduce optimizations that improve performance...
For example, here we can see where optimizations improved performance by 15%:
https://www.tomshardware.com/reviews/amd-nvidia-driver-updates-performance-tested,5707.html

Microsoft does a similar thing with Windows, which will often increase performance. For example:
https://www.techpowerup.com/255843/windows-10-may-2019-update-1903-gaming-performance-tested-in-21-titles-with-rtx-2080-ti-and-radeon-vii

And of course we have improvements at the API level:
https://www.redgamingtech.com/how-much-better-is-performance-with-modern-apis-directx-12-vs-directx-11-opengl-vs-vulkan/

And often game developers will roll out updates that also improve the performance of their title.
https://www.techspot.com/review/1759-ray-tracing-benchmarks-vol-2/

So obviously "optimizations" aren't a console-only thing. - The evidence is simply undeniable at this point.

In short, there is absolutely no game that runs on a Playstation 4 that can't run on a Radeon 7870... Often, games will run better on a Radeon 7870 at the same visual settings as the Playstation 4 too... Like Overwatch, Battlefield 1, Grand Theft Auto 5 and so on.

Plus you get to choose your settings on the PC... Game doesn't run at full 1080P on the Playstation 4? Well, on a Radeon 7870 it can, just lower a couple of settings.

HollyGamer said:
Please read again, I said "the bottom line". I agree with you that the PS5 will just be comparable to an RTX 3060/RTX 2070, or close enough. I am a realist, but at the same time I am also a dreamer. Everybody can have a dream, right? Because we also don't know the final price of the PS5/Scarlett; if they want, they can just increase the price to get a better GPU and a larger die size, even though it will sacrifice TDP/TBP, price and size.

They could, but... People will whinge.
Price is a stupidly important factor for a lot of people, especially for those who sit lower on the socio-economic ladder.
Remember the Xbox One at $500, remember the Playstation 3 at $600... Both were contentious price points.

HollyGamer said:
Ray tracing is the holy grail for every 3D games developer; it's a very expensive technique that requires expensive hardware, which is why it could be a selling point, at least for gamers. I know it will cripple performance, especially on a console, but developers can just use it as a marketing tool to hype the console. Probably even on the PS5/Scarlett we will not see many games using RT; probably some first-party IPs, or light RT in some triple-A games. It will just be a combination of RT and rasterization.

We are still a long way away from a fully ray-traced gaming world; it might take a few more console generations for rasterization to fall away to the side.

Ray Tracing will be used on the 9th gen hardware, just like it is being used in some 8th gen games; it's the extent of its use that is up for debate.

HollyGamer said:
I am not saying backward compatibility "alone can achieve that", but it's enough to sell the PS5, because there are so many factors that made all the consoles you mentioned fail in the market, or at least miss their targets, even though they had backward compatibility. The PlayStation brand, on the other hand, after the PS4's success, already has the name, credibility, backward compatibility and a bigger brand than Xbox, even in the US alone (not counting across the globe). If the PS5 doesn't repeat the same problems the PS3 had (expensive with no power advantage over its competitor that normal people could see, late to the market, a bad controller) then the PS5 will sell like hot cakes to every PS4 owner.

It's a value-added incentive. Not the be-all, end-all selling point.

Scarlett will be rolling out Xbox 360 and Original Xbox games in its backwards compatibility efforts as well, hence why Microsoft pulled that team away from the Xbox One. Will Sony do the same with Playstation 1, 2 and 3 backwards compatibility on the Playstation 5? Or would you deem that unimportant?

I am not willing to place bets on how well any console is going to sell, my tastes don't align with the average consumer.

HollyGamer said:
Indeed it's a minority, but in the first year or early years, PS4 owners will easily adopt the PS5 because they have a big backlog of games that can be replayed with enhanced options on the PS5. The PS3/Xbox 360 transition in its early years was different, because the PS4 and early Xbox One hardware were not able to run old games. If the Xbox One had had this feature in its early days, it would be a different story.

Good thing I provided a fairly comprehensive list of successive consoles that launched with full backwards compatibility (in hardware!) yet didn't sell as well as their predecessors.

And as the evidence I provided earlier shows, backwards compatibility isn't actually a big selling point for most gamers. It's a value-added incentive, sure, but it's far from being the most important aspect... Otherwise everyone would be a PC gamer, as you can run your PC games from 30~ years ago.

DonFerrari said:

Yes certainly we would have better graphics if games were focused on X1X and Pro instead of the base model.

And my fear for MS is that they keep the X1X "current" as the new base model (which would be a double issue, as it has more expensive components, so they wouldn't gain much on price) and limit what they can achieve with Scarlett. That could also be a problem given the rumor that Anaconda (or whatever each version's name is) would be a 1080p version of Scarlett, with the same games and otherwise roughly equal performance.

Yeah. I would dislike for the Xbox One X to be retained as any kind of model for the next gen.
I don't see it happening though; the cooling set-up, power delivery and so on mean it's not the most cost-effective device to build. It would need a significant cost-reduction revision if Microsoft intended to keep it as a low-end alternative to next-gen.

EricHiggin said:

While it's possible that Cerny was playing it super safe, based on where GCN was headed in terms of TF-calculated performance, 8TF seems unbelievably low for the PS5, unless you know the Navi GPU you're going to be using is likely to land around that calculated performance with RDNA. This was probably a hint way back, but also PR to smear the XB1X, even though it wouldn't technically be lying. Smart PR though, because he could have said 12TF based on old GCN, while potentially causing PlayStation a headache later on if the PS5 launched with less than that, which is very well possible, if not likely, at this point.

I don't think 8 Teraflops is low at all.
I think Teraflops wasn't mentioned earlier, before RDNA became a "thing", for this very reason: flops is irrelevant, and they can't use it for marketing (like bits!) forever.

EricHiggin said:

From more of a tech perspective, yes, FLOPS in general mean very little. It's just a ballpark figure, typically used to compare models within a series, or gaming performance for most casuals. It really only matters in an extremely direct comparison, which is almost never the case, be it from one iteration to the next or between brands. Even worse when considerable changes are finally made to the architecture. While this message is being pushed more, to your typical casual gamer it's meaningless for the most part. The best sellers and the price matter way more, which should come from the best balance of tech and games.

It's not even a ballpark figure; it's a theoretical peak that is simply unachievable in the real world... Otherwise there wouldn't be a constant race to make chip designs more efficient every year...
I mean, GPUs with more flops can end up slower than GPUs with fewer flops.
For example the Radeon 5870 @ 2.72 Teraflops is slower than the Radeon 7850 @ 1.76 Teraflops. - The 7850 has almost 1 Teraflop less, but is sometimes faster by almost 50%... And they have the same amount of bandwidth too. (153GB/s)
https://www.anandtech.com/bench/product/1062?vs=1076
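Running the same spec-sheet arithmetic on those two cards' public specs shows how identical math hides completely different architectures (VLIW5 vs GCN):

```python
# Same spec-sheet formula as before: shaders * clock * 2 ops per FMA.
def tflops(shaders: int, clock_mhz: float) -> float:
    return shaders * clock_mhz * 1e6 * 2 / 1e12

print(tflops(1600, 850))  # Radeon HD 5870 (VLIW5): ~2.72 TFLOPS
print(tflops(1024, 860))  # Radeon HD 7850 (GCN):   ~1.76 TFLOPS
# The formula can't see scheduling, occupancy or architectural efficiency,
# which is exactly why the "weaker" 7850 wins real benchmarks.
```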

Not to mention, the majority of people have absolutely no idea how FLOPS even pertains to the rendering of a game's world anyway.

EricHiggin said:

It's getting much tougher to sell someone on your hardware based on the games themselves visually. Trying to prove it through video is extremely tough today for so many reasons. Like for one, how do you prove your 4k box is better than their 4k box, when your 4k video can only be viewed by many at 1080p online? A bigger TF number is a much easier and simpler way of 'proving' that, even though it doesn't mean all that much. For a consumer who doesn't have the time or knowledge or ability to know the difference, specs matter more and more, especially if you can't actually outsell your cheaper 'inferior' competition.

Downsampling/supersampling means that 4K content can look better on a 1080P display than native 1080P content does.
And we are far from the point of photorealism in gaming, even at 720P... Which means it's still possible to showcase differences.

The real crux is that the Xbox and Playstation consoles are getting so close in capability that the difference is really unimportant unless you are an enthusiast... And let's face it, if you gave that much of a shit about hardware, chances are that you are part of the PC Gaming Master Race anyway.



--::{PC Gaming Master Race}::--