Pemalite said:
Bonzinga said:

I am only going with what's stated in the marketing. Xbox claims "hardware accelerated" while Sony claims "hardware-based", as seen below.

Maybe Sony has figured out a solution, but let's remember that the PS5 is only about twice as fast as the XSX; it's not 10x faster, so it's not as though the PS5 can do things the XSX cannot. Also, games will be made with RAM limitations in mind, so expect next-gen games to be designed around 8 to 10GB of RAM. Sure, maybe Sony can use more resources, but that's going to come down to 1st party techniques and paid exclusives. We all know that when it comes to 1st party games, all platform holders have great-looking ones, and it will make no difference to the end user because 1st party games don't appear on rival platforms, making comparisons impossible.

Hardware accelerated and hardware based are highly likely to mean the exact same thing in this instance.
Don't put much stock in what marketing teams say.

DonFerrari said:

He was fighting before entering the portal and after. I don't really see a reason for him to be fighting inside the portal (since the jump takes seconds), but sure, it could happen; the fact that it didn't doesn't prove it is scripted.

To be fair... It's nothing new.
I was fighting Ultimecia in Final Fantasy 8 just a few days ago and we were warping from place to place during the fight.
That's a Playstation 1 game with 300KB/s of optical disc bandwidth and RAM measured in mere megabytes. (Obviously I was playing it on Switch, though.)

DonFerrari said:

You want to count it twice? It has a RAM advantage and a bandwidth advantage? The RAM amount is the same, while the PS5 has a single speed; the XSX's 10GB pool is a little faster, but its 6GB pool (which won't all be for the OS) is slower. The difference in the 10GB pool's speed is close to the difference in GPU, so it is basically just enough to keep it fed.

There are going to be some memory operations which will show significant advantages in the faster memory space on the Xbox Series X, but like you alluded to, the Xbox Series X also needs that extra bandwidth due to its higher levels of processing capability.

We don't yet know how much RAM will be reserved for OS/background duties; hopefully it's the same as the 8th gen or even less, because this new generation will likely end up memory limited by the time it ends.

DonFerrari said:

You are assuming one or the other throttles. You didn't really understand Cerny's explanation. The GPU and CPU speeds can be sustained for as long as necessary, that's it. And if, under load, they need to hold the thermal level, they can achieve over a 10% saving in power with only a 2% decrease in clock.

Cerny's explanation isn't really elaborating on every single possible scenario though.

The fact is, even when you have a GPU's compute pegged at 100%, there are often parts of the GPU (I.E. fixed-function units) which are being underutilized. That represents spare TDP, which can then be funneled into the CPU's or GPU's clockrate rather than letting it go to waste.

For example, there is the very real possibility that not all games will leverage the Playstation 5's Ray Tracing cores, but will still use the GPU to its fullest extent, like the Unreal Engine 5 demonstration... That's a lot of spare compute and energy left on the table, so in those instances we might as well use the energy that would have gone to those Ray Tracing cores to bolster CPU and GPU clockrates.

It's a more efficient use of limited resources essentially.

But it does add some variability in the Playstation 5's hardware design and there is the very real possibility that when the hardware is pegged at 100% across the entire system, that clockrates will be reduced by a set amount. - But we will need to wait and see what that amount is.

SmartShift, though, essentially works within a Thermal Design Power (TDP) budget that the entire console needs to adhere to, so if any component isn't being 100% utilized, energy can be shifted to another area to increase overall performance.

If the Playstation 5 were able to maintain its clockrates and performance constantly, irrespective of TDP or utilization, then SmartShift would be a redundant technology; but because it's a front-and-center feature... Well. You get the idea.
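If it helps, the budget-shifting idea can be sketched in a few lines of Python. To be clear, the wattage numbers and the "give spare power to the busier side" policy here are entirely invented for illustration; this is not how AMD's SmartShift actually arbitrates.

```python
# Toy sketch of a SmartShift-style shared power budget (all numbers invented).
# The idea: CPU and GPU draw from one fixed TDP pool; whatever one side
# doesn't use can be handed to the other as extra clock headroom.

TOTAL_TDP_W = 200.0  # hypothetical total console power budget, in watts

def split_budget(cpu_demand_w, gpu_demand_w, total_w=TOTAL_TDP_W):
    """Return (cpu_w, gpu_w) after shifting unused headroom.

    If combined demand fits the budget, the busier component absorbs the
    leftover (a deliberately simplified policy). If it doesn't fit, both
    are scaled down proportionally - i.e. clocks drop a few percent to
    bring power back under the cap.
    """
    demand = cpu_demand_w + gpu_demand_w
    if demand <= total_w:
        spare = total_w - demand
        if gpu_demand_w >= cpu_demand_w:
            return cpu_demand_w, gpu_demand_w + spare
        return cpu_demand_w + spare, gpu_demand_w
    scale = total_w / demand
    return cpu_demand_w * scale, gpu_demand_w * scale

# GPU busy, CPU idle-ish: the GPU absorbs the CPU's unused headroom.
cpu_w, gpu_w = split_budget(40.0, 140.0)
```

The point of the sketch is only that the cap is on the *sum*, not on each component individually, which is why an underutilized fixed-function block can translate into higher clocks elsewhere.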

DonFerrari said:

Actually, the SSD does do that. When it streams better-quality assets (including textures), it helps make better-quality pixels, though it doesn't help with computational capability. By having twice the speed, it can stream much larger (higher-quality) textures.

The SSD in the Playstation 5 is a sizable advantage... Especially over time.

In 30 seconds worth of streaming the Xbox Series X's storage can transfer a maximum of 72GB of uncompressed data.
In 30 seconds worth of streaming the Playstation 5's storage can transfer a maximum of 165GB of uncompressed data.

That's an advantage of 93GB of extra data that can be brought on-screen in a 30-second block for the Playstation 5, which is a massive amount of data... Streaming isn't just reading for a few moments and stopping, it's a constant, especially when you are optimizing extensively for that aspect. (I.E. Open world games.)

Obviously other factors will come into play, like compression, the types of data, random reads, OS and I/O overheads and more, which will skew the results for either hardware platform.

So whilst comparing 2.4GB/s to 5.5GB/s doesn't seem significant, it's actually a really significant number... When you account for it over time.
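The arithmetic above is easy to check. A quick Python sketch using the raw throughput figures (with compression, random reads and overheads deliberately ignored, as noted):

```python
# Back-of-the-envelope streaming totals from the raw (uncompressed)
# throughput figures quoted above.

XSX_RAW_GBPS = 2.4   # Xbox Series X raw SSD throughput, GB/s
PS5_RAW_GBPS = 5.5   # Playstation 5 raw SSD throughput, GB/s

def streamed_gb(rate_gbps, seconds):
    """Total data streamed at a sustained rate over a time window."""
    return rate_gbps * seconds

window = 30  # seconds
xsx = streamed_gb(XSX_RAW_GBPS, window)   # 72.0 GB
ps5 = streamed_gb(PS5_RAW_GBPS, window)   # 165.0 GB
advantage = ps5 - xsx                     # 93.0 GB
```

Run the window out to minutes of constant streaming and the absolute gap keeps growing, which is the "significant over time" point.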

But then again, the ~20% extra compute performance of the Xbox Series X is also a significant advantage... And that too can be compounded over multiple frames by deferring operations, so that a 2-Teraflop compute advantage can turn into a 6-Teraflop advantage over three frames, for instance.
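The same accumulate-over-time logic applies to compute. As a toy illustration (the numbers simply follow the post, roughly 2 Teraflops of per-frame edge accumulated over three frames; nothing here models real GPU scheduling):

```python
# Toy illustration: a per-frame compute edge, accumulated across frames
# when work is deferred. Numbers are the post's approximations.

per_frame_tflops_edge = 2.0   # approximate XSX per-frame compute advantage
frames_deferred = 3           # work spread across three frames

accumulated_edge = per_frame_tflops_edge * frames_deferred  # 6.0 Teraflops
```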

Both consoles have advantages and disadvantages over each other and that will drive competition and innovation for the first party/exclusives to push innovative rendering approaches.

I.E. Halo vs Uncharted from the 7th gen all over again.

Exciting times.

Bonzinga said:

Google: Hardware acceleration makes a big difference. But the real distinction isn't between hardware and software, but between GPU acceleration with and without dedicated RT Cores. You don't need specialized hardware to do ray tracing, but you want it.

We have basically come full circle in the GPU space.
Originally the best approach was to have all units (Vertex+Pixel+Texture+ROP) separate; then, around the Geforce 8/Radeon HD 2000 series, AMD and nVidia essentially agreed that the best approach was to combine the Vertex+Pixel operations into the same unit to consolidate compute resources.

With the advent of the Radeon 9000 series, AMD essentially rolled TnL into the shader pipelines... nVidia invested resources into maintaining that as a dedicated fixed-function block on the Geforce FX at the expense of shader resources. (A mistake they rectified with the Geforce 6.)

And now we are seeing the bifurcation of processing resources again.

Over time shader resources have become more plentiful and more programmable, so I am going to go out on a limb and assert that at some point nVidia and AMD will reach a point where it simply makes sense to roll the RT processing into the shader cores... The shader cores can already do it; they just don't have the appropriate level of specialization to do it efficiently, not while they are still tasked with handling regular rasterization as well.

Can't disagree with your explanation. Just wanted to point out that, with both architectures being balanced (which is what I expect them to be), texture quality would be a PS5 advantage due to its capacity to stream larger (better) textures, while geometry is uncertain: as you explained, the XSX likely has more capacity to draw geometry, but a Nanite-like feature is dependent on SSD and I/O speed.

The great question, which I believe you agree depends on the games revealed along the gen, is how much the I/O interface can help the result. As said, if the SSD can send better-quality assets due to its speed advantage, then perhaps the GPU and CPU can do a little less work; could that alleviate some of the disparity? I do understand the SSD doesn't do any computation, so the performance gap will always exist, but could optimization by devs take care of some of it? Probably not, and for the whole gen we will see some advantage in pixel count and framerate consistency; but for games where both can achieve 4K30fps, where would the 10-20% difference go? What types of effects could be leveraged with this kind of GPU difference on PC?


