
Forums - Gaming Discussion - PS5 vs XSeX: Understanding the Gap

goopy20 said:

I think many are focusing on the SSD as that's basically what Sony has built their whole console around and what they believe will be the key to the next generation. It's also something that's a bit harder to grasp than just TFlops numbers and a lot harder to sell to the public.

That pretty much sums it up. When Cerny made the rounds with developers, he must have heard a lot of "We absolutely need a very fast SSD because a) b) c)....", so he built the hardware around a proprietary SSD setup. The same goes for the improved sound hardware (not losing sight of VR).

My guess is that at the early stages (5-7 years ago), AMD was still struggling with CU scaling (the more CUs, the less efficient the increase in performance), so he went fast and small instead of wide and slow.

Pure speculation: my guess is the last change happened very late (after the 12TF announcement). The A0 stepping of the chip in the dev unit had the full 40 CUs running at 2GHz (giving the 10.3TF). This was a safe clock at the cost of losing redundancy, making the chips expensive (simply no redundancy in the biggest die area of the chip). This was changed to 2.23GHz and 36 CUs (RDNA2 does seem to have significantly better power usage than RDNA1), making the chips cheaper as there is now redundancy. Could they have gone with 40 CUs and 2.23GHz for 11.5TF? Maybe, but we'll never know.
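
For reference, the teraflops figures being thrown around come from a simple formula: CUs × 64 shader cores per CU × 2 ops per clock (FMA) × clock speed. A quick Python sketch of the configurations described above; the 52 CU / 1.825GHz numbers are the announced Series X specs, the 40 CU configurations are the speculation from this post:

```python
# Peak FP32 throughput: CUs * 64 shaders/CU * 2 ops/clock (FMA) * clock (GHz) -> TFLOPS
def teraflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000.0

print(f"40 CUs @ 2.000 GHz: {teraflops(40, 2.000):.2f} TF")  # ~10.24 TF (the speculated A0 config)
print(f"36 CUs @ 2.230 GHz: {teraflops(36, 2.230):.2f} TF")  # ~10.28 TF (announced PS5)
print(f"40 CUs @ 2.230 GHz: {teraflops(40, 2.230):.2f} TF")  # ~11.42 TF (the hypothetical 'what if')
print(f"52 CUs @ 1.825 GHz: {teraflops(52, 1.825):.2f} TF")  # ~12.15 TF (announced Series X)
```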



sales2099 said:
Evilms said:

https://www.3dcenter.org/news/rohleistungsvergleich-xbox-series-x-vs-playstation-5

Maybe I’m not seeing that chart properly but where’s the GPU? You know the biggest advantage the Series X has on PS5?

That's the Rasterizer + Shader Processors (tflops) + TMUs + ROPs = the GPU.

You can see the PS5 has advantages with its Rasterizer & ROP performance (this part of its GPU is better than the XSX's),
while the Xbox Series X has advantages with its shader count (tflops) + TMUs.

Actually, the performance difference could be less or more than ~15%, varying from game to game.

If the PlayStation 5 is $100 cheaper, even if it's a bit weaker, that's a win in my book.

People often overlook the fact that the PS5 GPU will actually have areas where it's faster at certain things than the XSX's.
All they see is the shader count and go "12 > 10".

*edit:
This was what Cerny said when he mentioned there were advantages to running at higher clock speeds on a smaller chip.
He was referring to the Raster + ROP performance.

*edit2:
"ROPs handle anti-aliasing, Z and color compression, and the actual writing of the pixel to the output buffer."

So maybe the PS5 takes less of a performance hit for running anti-aliasing than the XSX does?
That said, apparently they aren't as important to today's games as I imagined, so any advantage there will probably be really minor in real-world effects.

ROPs = Render Output Units, and TMUs = Texture Mapping Units, for those curious :)
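
To put rough numbers on the rasterizer/ROP point, here is a Python sketch. It assumes both GPUs carry 64 ROPs and 4 TMUs per CU; neither count had been officially confirmed at the time, so treat this purely as an illustration of how clock speed and unit count trade off:

```python
# Rough theoretical fill rates: ROPs * clock = pixel fill, TMUs * clock = texel fill.
# The ROP and TMU counts here are assumptions (64 ROPs each, 4 TMUs per CU), not confirmed specs.
def fillrates(rops, tmus, clock_ghz):
    return rops * clock_ghz, tmus * clock_ghz  # Gpixels/s, Gtexels/s

ps5_pix, ps5_tex = fillrates(rops=64, tmus=36 * 4, clock_ghz=2.23)
xsx_pix, xsx_tex = fillrates(rops=64, tmus=52 * 4, clock_ghz=1.825)

print(f"PS5: {ps5_pix:.1f} Gpix/s, {ps5_tex:.1f} Gtex/s")   # ~142.7 Gpix/s, ~321.1 Gtex/s
print(f"XSX: {xsx_pix:.1f} Gpix/s, {xsx_tex:.1f} Gtex/s")   # ~116.8 Gpix/s, ~379.6 Gtex/s
```

If those counts hold, the PS5 wins pixel fill while the XSX wins texel fill and shader throughput, which is exactly the split described above.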

Last edited by JRPGfan - on 24 March 2020

Intrinsic said:

You shouldn't build your console with one major strength, talk about said major strength for 15 min, and still have people saying they don't know what it's for lol. And that Spiderman demo would have done in 20 seconds what Cerny couldn't seem to drive home in 15 mins. It would show how fast the game loads, and would show the benefits of game data asset streaming.

This was a talk for GDC. Everyone sitting in the audience would have known the Spiderman demo, but not the technology behind it. It was a dry, technical talk about the technology behind it. (From a didactic perspective, Cerny made almost every mistake a good speaker must avoid at all costs, so that didn't help either.) Unfortunately, throwing it to the gamerzzzz crowd was always going to get the backlash we see now. The guy who introduced Cerny should have made it very clear, but in the video he seemed profoundly unprepared to set the stage. I hope this doofus is not responsible for the PR that will come eventually.



JRPGfan said:

That's the Rasterizer + Shader Processors (tflops) + TMUs + ROPs = the GPU.

Minus the command processor, minus the geometry engine... The first one has the clock speed advantage; the second one is probably the same in both chips, so again a clock speed advantage.



sales2099 said:
Evilms said:

https://www.3dcenter.org/news/rohleistungsvergleich-xbox-series-x-vs-playstation-5

Maybe I’m not seeing that chart properly but where’s the GPU? You know the biggest advantage the Series X has on PS5?

That's the common mistake a lot of people are making. You are looking at the GPU, or at least most of what is in the GPU. There are even a few things that are in the GPU that aren't in that chart, like the geometry engine and the command processor, as has been pointed out already. Sony calls it the geometry engine; MS and AMD probably call it something else, but it's pretty much the same thing that deals with actually drawing the triangles that make up the geometry, and stuff like culling. This is another area where the PS5's GPU would have an advantage, since it's also tied to the clock speed of the chip and AMD GPUs all have the same number of them. So the PS5 would either draw more polygons in the same amount of time, or draw the same number of polygons in less time.

There are a fair number of things that make a GPU a "GPU", but all people talk about are TFs, which only measure one part of the GPU: its pixel shaders. There are a lot of things that need to happen before a pixel even needs to be shaded.

These things were never worth talking about before because the devices in question were already heavily different in key areas to a degree where none of the other components of the GPU could make up the difference. But now they are important. 

Also why I say that, as far as the GPU goes, the gap is not going to be as obvious as a lot of people think. It's funny, but the GPU in these next-gen consoles is actually the area (right after the CPU) that is the least different.
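
A rough Python sketch of that front-end point, assuming (purely for illustration) that both chips keep RDNA's four primitives per clock in the geometry front end; the real figure for either console hadn't been confirmed:

```python
# Theoretical primitive throughput: primitives per clock * clock speed.
# The 4 prims/clock figure is an assumption carried over from RDNA 1 (Navi 10), not a confirmed spec.
PRIMS_PER_CLOCK = 4

ps5_prims = PRIMS_PER_CLOCK * 2.23   # ~8.9 Gprims/s
xsx_prims = PRIMS_PER_CLOCK * 1.825  # ~7.3 Gprims/s

print(f"PS5 front end: {ps5_prims:.1f} Gprims/s vs XSX: {xsx_prims:.1f} Gprims/s "
      f"({(ps5_prims / xsx_prims - 1) * 100:.0f}% more per second)")
```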



Evilms said:

https://www.3dcenter.org/news/rohleistungsvergleich-xbox-series-x-vs-playstation-5

This is not wrong but it's the best-case scenario for the PS5. We know that the clocks of the Xbox are fixed, but the PS5's clocks will throttle. So the CPU won't be at 3.5 GHz most of the time when the GPU is at 2.23 GHz. Also, the GPU uses much more power than the CPU. That means a slowdown of the CPU doesn't guarantee that the GPU will be able to keep its 2.23 GHz clock. My guess here is that the PS5 was going to be clocked at 9.2 teraflops as many rumors said, but as Microsoft raised the bar with the 12.15-teraflop Xbox, they decided to go for a variable frequency that gives the idea of more than 10 teraflops and shows that the difference is not that big. My guess is that the real-world difference is going to be 20%. That's a big difference. It's more than a whole PS4 (old architecture, 1.84 teraflops).

Now, will that 20% be that noticeable? Maybe. 60 to 50 fps, or the same fps with some downgraded shadows, fewer effects or less motion blur here and there, or even more drops in variable resolution. I don't think it will be a game changer. When we see games running on Xbox One X and PS4 Pro, the difference is greater in every way, but the PS4 Pro still gives us a good experience.
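
For what the raw numbers say, here is a naive Python sketch that assumes frame rate scales linearly with teraflops; it rarely does in practice, so treat these as illustrations rather than predictions:

```python
# Naive scaling: fps_ps5 = fps_xsx * (PS5 TF / XSX TF). Real games won't scale this cleanly.
xsx_tf, ps5_tf = 12.15, 10.28
for xsx_fps in (30, 60, 120):
    print(f"{xsx_fps} fps on XSX -> ~{xsx_fps * ps5_tf / xsx_tf:.0f} fps on PS5")
# 30 -> ~25, 60 -> ~51, 120 -> ~102
```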

In my opinion, Microsoft needs to show me more good games and new IPs to make me feel that I'm losing something. Otherwise I'll just accept that my machine is 20% slower and play the awesome games that companies like Naughty Dog or Santa Monica and others have for me.



CrazyGPU said:
Evilms said:

https://www.3dcenter.org/news/rohleistungsvergleich-xbox-series-x-vs-playstation-5

This is not wrong but it's the best-case scenario for the PS5. We know that the clocks of the Xbox are fixed, but the PS5's clocks will throttle. So the CPU won't be at 3.5 GHz most of the time when the GPU is at 2.23 GHz. Also, the GPU uses much more power than the CPU. That means a slowdown of the CPU doesn't guarantee that the GPU will be able to keep its 2.23 GHz clock. My guess here is that the PS5 was going to be clocked at 9.2 teraflops as many rumors said, but as Microsoft raised the bar with the 12.15-teraflop Xbox, they decided to go for a variable frequency that gives the idea of more than 10 teraflops and shows that the difference is not that big. My guess is that the real-world difference is going to be 20%. That's a big difference. It's more than a whole PS4 (old architecture, 1.84 teraflops).

So Sony was able to design the entire SmartShift setup, the cooling solution, the decision to control by frequency with fixed power consumption and everything else in a couple of days after MS revealed the specs? Or do we give them a couple of months, for when the rumours were more trustworthy?

The best case you can make for "reactionary" is that Sony was expecting MS to go very high on CU count and have a high TF number, and chose the cheaper route of higher frequency; but that was decided something like two years ago.




drkohler said:

a) How do you know?

Because of the hardware reveals.
And unlike console gamers... I have been playing around with SSDs for over a decade.

drkohler said:

b) "it's still stupidly fast however." What is stupidly faster? The ssd itself is half the speed of the PS4's. No decompressor can make that go away. So in the end, picking some data of whatever kind from the ssd is faster on the PS5, all the time. Or are you once again throwing the Teraflops of the cus around in an argument about the sdd technology inveolved?

I think you meant PS5, not PS4.
Either way, the Xbox Series X having an SSD that is half the speed of the PS5's is for all intents and purposes still stupidly fast.
The PS5's is just twice as fast.

Let's not downplay anything here, let's be realistic of what both consoles offer.

There are some possible edge-case scenarios where the Xbox Series X could potentially close the bandwidth gap due to compression by a few percentage points, but we will need to see the real-world implications of that... Because, as I have alluded to before, possibly even in another thread... Many data formats are already compressed and thus don't actually gain many advantages from being compressed again. (Sometimes it can have the opposite effect and increase sizes.)
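
Putting the publicly quoted figures side by side in a small Python sketch (the compressed numbers are the "typical" figures Sony and Microsoft themselves gave, so treat them as best-case marketing rather than guaranteed throughput; 8.5 GB/s is used here as a midpoint of Sony's quoted 8-9 GB/s range):

```python
# Quoted sequential throughput in GB/s (raw and "typical" compressed) from the hardware reveals.
drives = {
    "PS5 (raw)": 5.5,
    "PS5 (Kraken, typical)": 8.5,   # Sony quoted 8-9 GB/s; 8.5 is an assumed midpoint
    "XSX (raw)": 2.4,
    "XSX (BCPack, typical)": 4.8,
}
payload_gb = 10  # e.g. streaming ~10 GB of assets for a level
for name, gbps in drives.items():
    print(f"{name:24s}: {payload_gb / gbps:.1f} s to read {payload_gb} GB")
```

Either way, both are an order of magnitude beyond last generation's mechanical hard drives; the PS5 simply gets there roughly twice as fast.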

drkohler said:

c) Yes the XSX has a hardware compander too, apparently God's gift to mankind when it comes to texture decompression. What it does to other things is anyone's guess. The big question is: What does the XSX do once the data is decompressed? In his GDC talk, Cerny spends an awful lot of time trying to explain what the problems are once you have your data decompressed. Pieces of the solution are Sony property, so not in XSX hardware. At this time, we do not know if there is additional hardware in the XSX (not unlikely, as MS engineers faced the same problems as Sony so they must have had ideas, too). Anything the PS5 backend does in hardware can always be done in software should MS have chosen that path. This software requires (some of the) Zen2 cores, obviously (and you better hope there is something that avoids flooding the cpu caches with data they absolutely don't need).

I am not religious. You seem to be getting upset over the specifications of certain machines? Might be a good idea to take a step back?

Microsoft has similar proprietary technology to the PlayStation 5 on the compression front, and the Xbox Series X also includes a decompression block. That much we do know; it was in the reveal.
https://news.xbox.com/en-us/2020/03/16/xbox-series-x-tech/

Microsoft completely removes the burden from the Zen 2 cores.

drkohler said:

d) Because we have two arguments that are consistently brought up by people who consistently show they have essentially no clue about what the real problems are.

The first argument is "The 2.23GHz is a boost clock". No it isn't. Not going any further there.

The second argument is "The ssd is only to make load times disappear". No it isn't. That is an added bonus but is way short of what the whole hardware/software chain has to do.

No. It is because people are enamored with their particular brand choice and cannot see where they potentially fall short or provide constructive criticism... Or just generally treading on the logical fallacy of hypothesis contrary to fact.

The 2.23GHz -is- a boost clock. Sony/Cerny specifically mentioned SmartShift. - Unless you are calling Cerny a liar?

https://www.anandtech.com/show/15624/amd-details-renoir-the-ryzen-mobile-4000-series-7nm-apu-uncovered/4

And I quote Digital Foundry which quoted Cerny:
""Rather than look at the actual temperature of the silicon die, we look at the activities that the GPU and CPU are performing and set the frequencies on that basis - which makes everything deterministic and repeatable," Cerny explains in his presentation. "While we're at it, we also use AMD's SmartShift technology and send any unused power from the CPU to the GPU so it can squeeze out a few more pixels.""

https://www.eurogamer.net/articles/digitalfoundry-2020-playstation-5-specs-and-tech-that-deliver-sonys-next-gen-vision

It is shifting TDP from one part of the chip to the other to boost clockrates. It's a boost clock.

If the PlayStation 5 cannot maintain a 2.23GHz GPU clock in conjunction with a 3.5GHz CPU clock, whilst pegging the I/O, then by extension... that 2.23GHz GPU clock is not the base clock; it is a boost clock, a best-case scenario.
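
Conceptually the argument is about a fixed power budget with workload-driven clocks. Here is a toy Python sketch of that principle only; the power figures, the scaling curve and the clocks() function are all made up for illustration, and this is not Sony's actual algorithm:

```python
# Toy model of a fixed power budget split between CPU and GPU, with clocks derived from
# workload activity rather than temperature. All numbers here are illustrative, not real PS5 values.
TOTAL_POWER_W = 200                     # hypothetical fixed SoC power budget
CPU_MAX_GHZ, GPU_MAX_GHZ = 3.5, 2.23
CPU_FULL_W, GPU_FULL_W = 60, 160        # hypothetical draw at full activity and full clock

def clocks(cpu_activity, gpu_activity):
    """Activity in [0, 1]; unused CPU headroom is 'shifted' to the GPU, SmartShift-style."""
    gpu_budget = TOTAL_POWER_W - CPU_FULL_W * cpu_activity
    gpu_needed = GPU_FULL_W * gpu_activity
    if gpu_needed <= gpu_budget:
        return CPU_MAX_GHZ, GPU_MAX_GHZ                  # both hold their caps
    # Otherwise shave the GPU clock a little to stay inside the budget
    # (power falls faster than clock, hence the cube root in this toy model).
    return CPU_MAX_GHZ, round(GPU_MAX_GHZ * (gpu_budget / gpu_needed) ** (1 / 3), 2)

print(clocks(cpu_activity=0.4, gpu_activity=1.0))   # light CPU load -> (3.5, 2.23)
print(clocks(cpu_activity=1.0, gpu_activity=1.0))   # both pegged    -> (3.5, ~2.13)
```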

drkohler said:

The second argument is "The ssd is only to make load times disappear". No it isn't. That is an added bonus but is way short of what the whole hardware/software chain has to do.

Pretty sure that is not my exact statement and you are taking it right out of context.


e) Who? I'd assume each and every one of Sony's first-party studios will. Those games will look at least as good as MS showpieces.

Who will not? Small developers without the manpower. Bigger studios who don't care about the additional work required to bring their games up to Sony's in-house standards. No doubt those (mostly multiplat) games will look (a little? noticeably?) better on the XSX thanks to the brute force advantage. Again there are a lot of what-ifs involved. If those developers are too careless, the PS5 will edge out because of the higher clock rates in every stage of the GPU. There simply is no telling without seeing the games.

The Xbox Series X has the CPU, GPU and memory bandwidth advantages, so it is likely to show an advantage more often than not... Just like the Xbox One X compared to the PlayStation 4 Pro.

In simpler titles which won't use 100% of either console's capabilities... Those games will look identical. - And that happens every console generation; there are base Xbox One and PlayStation 4 games with visual parity, right down to resolution and framerates.

Big AAA exclusives are another kettle of fish and could make things interesting... But by and large, if graphics is the most important aspect, the Xbox Series X holds the technical edge due to the sheer number of additional functional units baked into the chip design.


When the ssd technology is used properly, we'll see that old game design ideas can finally be realised (on either console). The times of brute forcing your way through a game might be over (or not, we'll see in a year or two or three).

Brute forcing and using design tricks to get around hardware limitations will continue to exist because the SSD is still a limitation until it actually matches the RAM's bandwidth.

Remember... We went from optical discs that could be measured in kilobytes per second to mechanical hard drives that could be measured in megabytes per second... With an accompanying reduction in seek times... Did game design change massively? For the most part, not really... And we are seeing a similar jump in storage capability now, going from mechanical drives measured in megabytes per second to SSDs measured in gigabytes per second, with an equally dramatic decrease in seek times.

People tend to gravitate towards games that they like... Developers then design games around that, hence why something like Battle Royale happened and then every developer and its pet dog jumped onto the bandwagon to make their own variant.

Everyone copied Gears of War's "Horde mode" as well at one point.

That's not to say that SSDs won't provide benefits, far from it.

Intrinsic said:

GPU & Resolution
Aka 12.1TF vs 10.3TF. This is the single most stupid thing to argue about. Why? Not because it's just a 14%/17% performance difference, or even because the PS5's higher GPU clock means there are things in the GPU that it would actually be faster at doing, or be able to do more of, than the XSX GPU; but because a lot of people are using an antiquated way of measuring performance. A way that devs no longer use.

I would not be making such a claim just yet.

The Xbox Series X is a chip with dramatically more functional units... It is only in scenarios where the PS5 has the same number of ROPs, TMUs, geometry units and so forth that it will be faster than the Xbox Series X due to its higher clockrates... And usually those units are tied somewhat to the number of shader groupings.

The Xbox Series X could have the advantage on the GPU side across the board... The point I am making is that until we get the complete specs set, we just don't know yet.

We do know the PS5's SSD is twice as fast as the Xbox Series X's... Which is what people are clinging to at the moment, as it's the only guaranteed superior metric.

HollyGamer said:

So, which computer would give the highest FPS in a game: one with a 2.0 teraflop processor in its GPU, or one with a 2.2 teraflop processor in its GPU? Now we need to look at different clock speeds, bus bandwidth, cache sizes, register counts, main and video RAM sizes, access latencies, core counts, northbridge implementation, firmwares, drivers, operating systems, graphics APIs, shader languages, compiler optimisations, overall engine architecture, data formats, and hundreds, maybe thousands of other factors. Each game team will utilise and tune the engine differently, constantly profiling and optimising based on conditions."

Precisely, we do need to account for everything. You can have the same Teraflops, but half the performance if the rest of the system isn't up to snuff.

Only focusing on Teraflops or only focusing on the SSD is extremely two-dimensional... And it does a disservice to the amount of engineering, research and development that Microsoft, Sony and AMD have put into these consumer electronic devices.

LudicrousSpeed said:
Barring shoddy optimization, XSX games should look better and run better. PS5 games should load a couple seconds faster. Not a large difference, though I want to see how devs use the variance in power on PS5. For all we know, the small difference on paper between 12 and 10.3 can become larger once PS5 is running demanding games or depending on how devs utilize the setup.

It's more than just initial loading.

Evilms said:

https://www.3dcenter.org/news/rohleistungsvergleich-xbox-series-x-vs-playstation-5

Pretty sure the TMUs and ROPs haven't been revealed yet; don't count the chickens before the eggs have hatched.

And rumor has it that the Xbox Series X could have 80 ROPs versus the PS5's 64 ROPs...
https://www.techpowerup.com/gpu-specs/xbox-series-x-gpu.c3482

In RDNA, AMD groups one rasterizer with every four render backends; obviously that can change with RDNA 2, but just some food for thought.
https://www.amd.com/system/files/documents/rdna-whitepaper.pdf

Which means we could be looking at 20 rasterizers versus 16.
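
If that rumour holds, the pixel-fill picture changes. A quick sketch (the 80-ROP Series X figure is a rumour and the 64-ROP PS5 figure an assumption, so take the result accordingly):

```python
# Pixel fill rate = ROPs * clock. The 80-ROP XSX figure is a rumour, the 64-ROP PS5 figure an assumption.
xsx_fill = 80 * 1.825   # ~146.0 Gpixels/s
ps5_fill = 64 * 2.23    # ~142.7 Gpixels/s
print(f"XSX: {xsx_fill:.1f} Gpix/s vs PS5: {ps5_fill:.1f} Gpix/s")
```

In other words, 80 ROPs at the lower clock would slightly out-fill 64 ROPs at the higher clock, flipping the comparison that holds when both are assumed to have 64.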

sales2099 said:

Maybe I’m not seeing that chart properly but where’s the GPU? You know the biggest advantage the Series X has on PS5?

I think you are looking for the shader processors. They have tried to take every aspect of the GPU into account rather than focusing purely on flops.

drkohler said:

My guess is that at the early stages (5-7 years ago), AMD was still struggling with CU scaling (the more CUs, the less efficient the increase in performance), so he went fast and small instead of wide and slow.

AMD has "claimed" (Salts, grains, kittens and all that) that RDNA 2.0 is 50% more efficient than RDNA 1... Which was the same jump we saw between Vega and RDNA 1.

Graphics tasks are highly parallel... AMD was struggling with CU scaling because GCN had intrinsic hardware limits; it was an architectural limitation. We need to remember that when AMD debuted GCN we were working with 32 CUs. AMD then stalled as the company's profits plummeted and it had to make cutbacks everywhere in order not to go bankrupt, so they kept milking GCN longer than anticipated in order to keep R&D and engineering costs as low as possible.

DonFerrari said:

So Sony was able to design the entire SmartShift setup, the cooling solution, the decision to control by frequency with fixed power consumption and everything else in a couple of days after MS revealed the specs? Or do we give them a couple of months, for when the rumours were more trustworthy?

The best case you can make for "reactionary" is that Sony was expecting MS to go very high on CU count and have a high TF number, and chose the cheaper route of higher frequency; but that was decided something like two years ago.

Higher frequency isn't always cheaper.
The higher in frequency you go, the more voltage you need to dump into the design... And one aspect of chip yields is that not all chips can hit a certain clock frequency at a certain voltage due to leakage and so forth, which means the number of usable chips decreases and the cost per-chip increases.

It's actually a careful balancing act of chip size vs chip frequency. If you can get all the right cards in a row... You can pull off an nVidia Pascal and drive up clockrates significantly, however nVidia still had to spend a ton of transistors to reduce leakage and remove clockrate limiting bottlenecks from their design, but it paid off.




Pemalite said:

Higher frequency isn't always cheaper.
The higher in frequency you go, the more voltage you need to dump into the design... And one aspect of chip yields is that not all chips can hit a certain clock frequency at a certain voltage due to leakage and so forth, which means the number of usable chips decreases and the cost per-chip increases.

It's actually a careful balancing act of chip size vs chip frequency. If you can get all the right cards in a row... You can pull off an nVidia Pascal and drive up clockrates significantly, however nVidia still had to spend a ton of transistors to reduce leakage and remove clockrate limiting bottlenecks from their design, but it paid off.

Yep, I understand it. But it would be quite asinine to have the chip cost the same as the Xbox's and deliver 20% less, don't you agree?

And my point was more that, since a clockrate increase isn't something so simple to do (even when the Xbox One was reacting to being weaker than the PS4, after both had revealed their specs, the increase was quite small and only on the CPU; and the Xbox had a much bigger box, so the cooling was likely easier to tweak for the CPU clock bump), the whole use of SmartShift, the two controllers for TDP and everything else wasn't something reactionary to MS.

It was Sony thinking it was the best solution for them on the budget they had.




DonFerrari said:

Yep, I understand it. But it would be quite asinine to have the chip cost the same as the Xbox's and deliver 20% less, don't you agree?

And my point was more that, since a clockrate increase isn't something so simple to do (even when the Xbox One was reacting to being weaker than the PS4, after both had revealed their specs, the increase was quite small and only on the CPU; and the Xbox had a much bigger box, so the cooling was likely easier to tweak for the CPU clock bump), the whole use of SmartShift, the two controllers for TDP and everything else wasn't something reactionary to MS.

It was Sony thinking it was the best solution for them on the budget they had.

Exactly... on both points. First off, anyone that is saying that Sony just upped clocks to 2.23GHz as a reaction to MS announcing 12TF is trolling. Anyone that actually believes that... well, I wouldn't even know where to begin.

As for the price, I just believe the target was always to be at $399. The price of the hardware is the first thing that is pegged down in the design process. You basically pick a price and then go out to build the best thing you can fit at that price point. Sony would have known that their best bet at hitting that price was making the smallest chip they could make. They would also have known that if they went that route, they would have to be able to clock that chip as high as they could. And just like that, narrow and fast becomes one of the key design pillars of the hardware.

Surely, if early on in their design process they found that going narrow and fast would put them in a position where they would have to sell at $499, they would have just scrapped that design and gone for something easier, more conventional and, as the XSX has shown, even more powerful. Unless they somehow spent a small fortune on their SSD solution, which would mean they believe it makes more of a difference than just building something more powerful.

I believe Sony just set out to make the most powerful box they could for an MSRP target of $399.