
Let's talk about Specs

 

You like Specs?

I love Specs! 16 (40.00%)
I kinda like Specs. 13 (32.50%)
Specs are for nerds! 2 (5.00%)
I don't care either way, ... 9 (22.50%)

Total: 40
DonFerrari said:

Also, the other aspect that needs more detail on the architecture is how much the extra frequency will help the PS5 against the choice of fewer CUs.

First and most important, the 2.23 vs 1.825 GHz difference applies to everything in the GPU, not just the CUs. This includes the command processor, raster operations, geometry engines, all the cache and register memory, hardware companders, and whatever else is implemented in the GPU (like the Tempest unit, which makes the PS5 the clear winner in the VR department right out of the gate). So if your entire life revolved around vector processing, you'd be comfortably ahead with the XSX's CU advantage. That includes ray tracing: the XSX can calculate roughly 380 billion intersections/s versus the PS5's 320 billion (the PS5 might gain some ground thanks to its faster caches, and boy does RT depend on caches!). This winning scenario assumes that the XSX can keep all 52 CUs busy at all times, whereas the PS5 only has to keep 36 CUs busy. Now if we look at AMD's history of keeping all its CUs busy all the time, that track record is somewhat checkered, to put it mildly. You might remember all the talk about AMD not being able to have more than 64 CUs in its cards. Of course there never was a hard hardware limit (except space constraints, obviously), but more than 64 CUs simply showed such a lousy increase in performance that it was basically money thrown down the gutter. If we assume (big IF) that this has changed with RDNA2, then more CUs is better, obviously. Keep in mind that keeping more CUs busy means more threads, which means more command processor activity, which means more cache activity (at a lower clock in the XSX compared to the PS5). If not, then fewer CUs is better. I hope you are beginning to see that merely shouting "12.1TF vs 10.3TF" sounds very good to the gamerzzz and PR crowd; shouting it at the developers who actually have to use the GPUs would make you look a little foolish.
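A quick back-of-the-envelope check of the figures quoted above (a minimal sketch; the 64 shader ALUs per CU, 2 FLOPs per ALU per clock, and 4 intersection tests per CU per clock are the commonly cited RDNA 2 values, assumed here, and the helper names are mine):

```python
# Rough peak-throughput check for the numbers quoted in the post above.
# Assumptions: 64 shader ALUs per CU, 2 FLOPs per ALU per clock (FMA),
# and 4 ray/box intersection tests per CU per clock (commonly cited for RDNA 2).

def peak_tflops(cus, clock_ghz, alus_per_cu=64, flops_per_alu=2):
    return cus * alus_per_cu * flops_per_alu * clock_ghz / 1000  # TFLOPS

def peak_intersections(cus, clock_ghz, tests_per_cu=4):
    return cus * tests_per_cu * clock_ghz  # billions of intersections/s

print(peak_tflops(36, 2.23))          # PS5 -> ~10.3 TFLOPS
print(peak_tflops(52, 1.825))         # XSX -> ~12.1 TFLOPS
print(peak_intersections(36, 2.23))   # PS5 -> ~321 G intersections/s
print(peak_intersections(52, 1.825))  # XSX -> ~380 G intersections/s
```

The clock difference scales every per-clock figure the same way, which is why the frequency advantage shows up in both the TFLOPS and the intersection numbers.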

Games will eventually tell who is right. Probably there won't be a right or wrong at all; there's always a way to hide a 10% deficit in a particular area. One drawback of faster clocks should be mentioned, too: if you have to go to main memory, you waste a few more cycles at fast clocks than at slower ones. But if you have to go down that road, you've lost anyway. As for the RT advantage, nobody knows yet whether simply casting a few fewer rays on the PS5 will nullify the XSX's advantage.

In the end, the games will show who is "right" (and there really isn't a clear right or wrong to start with). Sony with its first party galore still has a clear advantage in the race, if they can show it soon after/with the launch.

Last edited by drkohler - on 21 March 2020

Pemalite said:

I never said "You actually know nothing."

Hey, you are right, that was an error; I already fixed it. I was trying to quote some poor arguments from you.

Pemalite said:

Teraflops isn't FP16/32 Integers. It's floating point, not integers. Wow.

You see "FP" stands for Floating Point... 16 is half precision, 32 is full. Integers means whole number of precision and not a fraction of it.  You are trying to debate things out of your understanding. State facts not your rants.

Teraflops are not a standard measure, not even within the same architecture. The figure depends on how many shader cores a unit has and how many operations each executes per cycle. More cores or a higher clock gives you greater theoretical performance, but everything still hinges on how many floating-point operations per cycle the processing unit can achieve. The only time the calculation is directly comparable is within the same architecture, meaning equal floating-point operations per cycle; given the clock and core counts, you can easily compare performance across variants of the same architecture. But once you change the GPU architecture, the brand, or the generation, every variable behind the peak theoretical figure changes. Even a PS5 cut down to 1.8 TFLOPS would run games faster than the PS4 could, because new features built into the architecture enhance processing. Another example: adding dedicated ray-tracing hardware to a GPU improves frame rates without touching the TFLOPS calculation at all. So TFLOPS figures across devices ARE NOT THE SAME. ++++For Christ's sake, READ.++++
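To make that arithmetic concrete, here is a minimal sketch of the calculation being described (the PS4 figures of 18 CUs at 0.8 GHz are the well-known GCN numbers; the function name and layout are mine, for illustration only):

```python
# Peak theoretical throughput = shader ALUs x FLOPs per ALU per clock x clock.
# The formula is the same for every GPU; what the resulting number means in
# real games changes with the architecture behind it.

def peak_gflops(cus, alus_per_cu, flops_per_clock, clock_ghz):
    return cus * alus_per_cu * flops_per_clock * clock_ghz

ps4_gcn   = peak_gflops(18, 64, 2, 0.800)  # ~1843 GFLOPS (GCN)
ps5_rdna2 = peak_gflops(36, 64, 2, 2.23)   # ~10276 GFLOPS (RDNA 2)

# Same formula, but an RDNA 2 "teraflop" does more real work than a GCN one
# (better caches, better geometry culling, dedicated RT hardware, etc.), so
# the raw numbers are not directly comparable across generations.
print(ps4_gcn, ps5_rdna2)
```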

And to my understanding, "kid" is not a bad word. I used it in that context to denote poor judgement, not as an offensive term.

Pemalite said:

Apparently you don't have an understanding of how backwards compatibility is achieved on the Xbox One.
If you think Microsoft is doing pure emulation... You are highly mistaken.

The Xbox One GPU was made with some legacy features, but not all of them. That was the sole reason backwards compatibility wasn't available at launch; it was a half-cooked idea. It took years for Microsoft to emulate in software the remaining features that weren't available in the legacy hardware. In fact, they had to embed the whole 360 operating system into the Xbox One to do it. Where are your facts?

360 emulation on PC is poor because there are no good programmers interested in it, not even for the OG Xbox. True, some people have been working for years on the libraries and instruction sets for those systems, but without much luck beyond that. The PS3 emulator is great simply because it has a very good programmer devoted to achieving true PS3 emulation on PC, simple as that. The PS3 is a very complex system and its CPU is not an ordinary one; technically speaking it is still more advanced than the PS4 or Xbox One CPUs. Notice I said advanced, not faster. The architecture was so much better that its audio processing beats what the PS4 is capable of; in addition, the CPU could even be used to assist the GPU with graphics workloads. Neither the PS4 nor the Xbox One has these features. Where are your facts?

About the audio: this is a very subjective matter, but I don't want the PS5 to cost $100 more just to pay for the research behind an audio module that may or may not deliver a more immersive experience. I do enjoy good audio, and audiophiles may have something here for them, but I'm not one of them; I have my doubts. The goal for the PS5 should be a better product (feature- and performance-wise) while being the cheaper solution. The research put into this could have gone toward BC for PS1/PS2 games, and PS3 if possible. The PS5 specs only bring back PS3 memories.

The truth is the PS5 will have 10.28 TFLOPS in boost mode versus the Xbox Series X's 12.1 TFLOPS at a fixed clock. PS5 standard performance (with no boost) should be around 9 TFLOPS according to early leaks. So in the end the PS5 will have roughly 18% (best case) to 25% (worst case) less graphics capability (vector processing) than the Series X. Boost mode may work differently than on PC, but it will affect performance all the same. This will translate into fewer graphically intensive features such as ray tracing. I wonder how many rays the PS5 can support at 4K60 versus the Series X; I'd put all my money on Microsoft ending up with an advantage if the current PS5 is the final product. The only thing that benefits Sony is the SSD technology behind the system. True, textures and maps will be almost immediately available to be cached by the GPU, but processing them is an entirely different story. How much the GPU can handle before it chokes and downclocks is key to a better experience. Take a cheap laptop with an SSD and try to run an old game against an old laptop with good specs and a slow hard drive: you will find that no matter how fast textures load, if the GPU can't handle the workload performance will stall, and the old laptop will be better and more stable. In the end, multi-platform games will not differ that much, but Series X exclusives will outmatch PS5 exclusives more easily due to brute performance and more available CPU/GPU resources. It will depend on the programming magic and support that Sony can give its first-party studios to stay on par with or better than the Series X. Facts.
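For reference, a quick sketch of where those deficit percentages come from, assuming the 10.28, 12.1, and 9 TFLOPS figures quoted above; whether you read the gap as ~15% or ~18% depends on which console you take as the baseline:

```python
# Relative vector-throughput gap implied by the quoted TFLOPS figures.
xsx, ps5_boost, ps5_leak = 12.1, 10.28, 9.0

print(1 - ps5_boost / xsx)   # ~0.15 -> PS5 ~15% below XSX at full boost
print(xsx / ps5_boost - 1)   # ~0.18 -> XSX ~18% above PS5 at full boost
print(1 - ps5_leak / xsx)    # ~0.26 -> PS5 ~26% below XSX at the leaked 9 TF
```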

Pemalite said:

Your love for any company is irrelevant... And with all due respect... I honestly don't care.

Pemalite, your preferences are respected but not shared, not by me nor anyone else. Preferences are an individual matter; other people's preferences should not bother you, so I don't understand your attention to them. So... whatever.

Last edited by alexxonne - on 21 March 2020

EricHiggin said:

I don't know if I would say he was super clear, but he definitely implied games would be smaller due to not having to duplicate data in the presentation. This might be a bigger deal than it seems with only an 825GB raw storage space.

Duplicating data on a mechanical drive generally doesn't happen... It became a necessity on an Optical drive because of how much slower seek times were... Plus optical disks have a myriad of performance characteristics depending on where data is physically located on the optical disk.

Such issues tend to be lessened on a mechanical disk.

Fact is, games on consoles installed to a mechanical disk are generally a match for the PC's install size as well. And I can assure you, there is no duplication of data going on there.

EricHiggin said:

Yes, I would assume AMD does the overwhelming majority of the engineering and design. We'll have to see if SmartShift is a feature in big Navi cards that are supposed to be coming later this year. Cerny seemed to be hinting there's something worthy in RDNA 2 thanks to SNY.

Smartshift is not going to be on Big Navi or RDNA2.
The technology requires a CPU, GPU, and other logic in a single package to actually function.

What HSA does for memory coherency... Smartshift does for power coherency.

EricHiggin said:

The leaks have existed for some time that 2.0GHz was supposed to be the peak. There was a rumor/leak not all that long ago that SNY had upgraded to, or decided to go with a pricey high end cooling solution. It's possible they got fairly solid info on XBSX and decided to push PS5 clocks slightly beyond where they were initially planned to be capped. Could also be why the talk was promoted, since they may not have final hardware yet. The shell may need to be slightly redesigned or increased in size. They would want to test plenty with new shells to make sure the heat dissipation and sound levels are acceptable before showing it off. Could also just be marketing tactics, since PS4 wasn't shown early.

The GPU peak is 2.23GHz. It can sustain that provided the CPU and I/O aren't pegged at 100%.

Hynad said:
CGI-Quality said:

Outside of in-house optimization, it cannot sustain itself like the Series X, not at a constant 2.23GHz all the time.

That’s not what Cerny said.

So Cerny stated that when the CPU, GPU, I/O, and all the other components are pegged at 100%, it can sustain boost clocks indefinitely?

Then why have the boost clocks at all and why not just claim them as base clocks? AFAIK Cerny never made that claim.

DonFerrari said:

The source for some assets being duplicated hundreds of times on the HDD was given during the GDC talk itself, but I got one for you: https://www.forbes.com/sites/kevinmurnane/2019/10/19/the-super-fast-ssds-in-the-next-gen-game-consoles-may-turn-out-to-be-a-mixed-blessing/

Not that the exact number is relevant.

On the BC logic on the chip, theoretically that is what the PS5 did for the PS4.

Fair call. Thank you.
I think you will find that the majority of data is from uncompressed audio... Rather than duplicated data.

alexxonne said:
Pemalite said:

Teraflops isn't FP16/32 Integers. It's floating point, not integers. Wow.

You see "FP" stands for Floating Point... 16 is half precision, 32 is full. Integers means whole number of precision and not a fraction of it.  You are trying to debate things out of your understanding. State facts not your rants.

The hilarious part I find in all of this is how oblivious you are to the fact that Integer and Floating Point are different... At least this time you have attempted to make the distinction; perhaps you were confused or didn't convey yourself clearly earlier? Who knows.

Integers can also have different degrees of precision, like INT4, INT8, INT16 and so forth.

Either way... Educate yourself on the difference between Integer and Floating Point before we take this discussion any further; otherwise it's pointless if you cannot grasp basic computing fundamentals.

https://en.wikipedia.org/wiki/Floating-point_arithmetic

https://en.wikipedia.org/wiki/Integer

https://www.dummies.com/programming/c/the-real-difference-between-integers-and-floating-point-values/

https://processing.org/examples/integersfloats.html

https://www.anandtech.com/show/13282/nvidia-turing-architecture-deep-dive/6

The more you know.
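Since the integer-versus-float distinction keeps coming up, here is a minimal, purely illustrative sketch (Python with NumPy; the specific values are just examples I chose) of what the precision labels actually mean:

```python
import numpy as np

# Floating point trades exactness for range: past a certain magnitude,
# consecutive integers can no longer be represented exactly.
print(np.float16(2049))      # 2048.0 -- FP16 runs out of exact integers at 2^11
print(np.float32(16777217))  # 16777216.0 -- FP32 runs out at 2^24

# Fixed-width integers store whole numbers exactly, but only within a range.
print(np.iinfo(np.int8))     # -128..127
print(np.iinfo(np.int16))    # -32768..32767

# Python's built-in int is arbitrary precision and always exact.
print(16777217)              # 16777217
```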

alexxonne said:

Teraflops are not a standard measure, not even within the same architecture. The figure depends on how many shader cores a unit has and how many operations each executes per cycle. More cores or a higher clock gives you greater theoretical performance, but everything still hinges on how many floating-point operations per cycle the processing unit can achieve.

Teraflops is a standard. It's a standard that represents single-precision floating point in its basic form.

It's generally a theoretical rather than a real-world figure, but benchmarks exist that provide a real-world extrapolation.

alexxonne said:

The only time the calculation is directly comparable is within the same architecture, meaning equal floating-point operations per cycle; given the clock and core counts, you can easily compare performance across variants of the same architecture. But once you change the GPU architecture, the brand, or the generation, every variable behind the peak theoretical figure changes.

Except that idea falls flat on its face when you start throwing other aspects into the equation.

Take the GeForce GT 1030... DDR4 and GDDR5 variants exist, and even in floating-point tasks the DDR4 model will sometimes perform only half as fast.
We can replicate this with the Radeon HD 7750 DDR3 vs GDDR5, and so on and so forth.
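A simple roofline-style sketch shows why that happens: once a workload's arithmetic intensity is low enough, attainable throughput is capped by memory bandwidth rather than by the TFLOPS figure. The ~48 GB/s and ~17 GB/s bandwidths for the two GT 1030 variants are approximate, and the 30 FLOPs-per-byte workload is an assumption chosen for illustration:

```python
# Roofline model: attainable GFLOP/s = min(peak compute, bandwidth x arithmetic intensity).
def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
    return min(peak_gflops, bandwidth_gbs * flops_per_byte)

peak = 1100  # both GT 1030 variants have roughly the same ~1.1 TFLOPS of compute

for name, bw in [("GT 1030 GDDR5", 48), ("GT 1030 DDR4", 17)]:
    # A shader doing ~30 FLOPs per byte fetched from memory:
    print(name, attainable_gflops(peak, bw, 30))
# GDDR5 -> compute-bound at ~1100 GFLOP/s; DDR4 -> bandwidth-bound at ~510 GFLOP/s.
# Same "TFLOPS" on the box, roughly half the delivered rate.
```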

alexxonne said:

This is because new features built into the architecture enhance processing. Another example: adding dedicated ray-tracing hardware to a GPU improves frame rates without touching the TFLOPS calculation at all. So TFLOPS figures across devices ARE NOT THE SAME. ++++For Christ's sake, READ.++++

And to my understanding, "kid" is not a bad word. I used it in that context to denote poor judgement, not as an offensive term.

Ray tracing requires some form of compute. If it uses floating point, it uses teraflops.
If it uses integers, then it uses integers... Because you seem to be an "expert" in the field who is all-knowing, tell me. What is it?

Teraflops is the same regardless of device. It's theoretical.

And using "kid" in the context you were using it in, is a derogatory term, either way, don't argue that point, move on.

alexxonne said:

Pemalite said:

Apparently you don't have an understanding of how backwards compatibility is achieved on the Xbox One.
If you think Microsoft is doing pure emulation... You are highly mistaken.

The Xbox One GPU was made with some legacy features, but not all of them. That was the sole reason backwards compatibility wasn't available at launch; it was a half-cooked idea. It took years for Microsoft to emulate in software the remaining features that weren't available in the legacy hardware. In fact, they had to embed the whole 360 operating system into the Xbox One to do it. Where are your facts?

I have had these debates before and provided the evidence.

So where are your facts?

Microsoft isn't using pure emulation to get Xbox 360 and Original Xbox games running on the Xbox One. Those are the facts.

alexxonne said:

360 emulation on PC is poor because there are no good programmers interested in it, not even for the OG Xbox. True, some people have been working for years on the libraries and instruction sets for those systems, but without much luck beyond that. The PS3 emulator is great simply because it has a very good programmer devoted to achieving true PS3 emulation on PC, simple as that. The PS3 is a very complex system and its CPU is not an ordinary one; technically speaking it is still more advanced than the PS4 or Xbox One CPUs. Notice I said advanced, not faster. The architecture was so much better that its audio processing beats what the PS4 is capable of; in addition, the CPU could even be used to assist the GPU with graphics workloads. Neither the PS4 nor the Xbox One has these features. Where are your facts?

Xbox 360 emulation is poor, not because people aren't interested in it... But because of a myriad of reasons.

You are quick to throw out the "facts" word but not willing to pony up evidence.

The Cell CPU is actually a very simple in-order core design... It was "complex" not because of instructions that need to be interpreted or translated, but due to the sheer number of cores and the load balancing across them, which, when it comes time for emulation... makes it much easier versus a monolithic core design.

Audio processing on PS3 is better than PS4? Better provide the evidence for that... Or your facts are just whack.

The CPU couldn't be used to assist with GPU workloads? I guess all those draw calls just happened out of thin air... I guess post-process filters like morphological AA on the CPU never happened, and more. You will need to provide evidence that the CPU didn't assist the GPU... Otherwise your facts are just whack.

alexxonne said:

About the audio, well this is very subjective matter, but i don't want the ps5 system to cost 100 more, just to pay the research of an audio module, that may or may not deliver a better immersive experience. I do enjoy good audio and audiophiles may have something here for them, but I'm not one of them. I have my doubts. And the goal for the ps5 should be achieving a better product(feature and performance wise) while being the cheaper solution. The research putted into this could have been used for BC of PS1/PS2 games and if possible PS3 as well. PS5 specs only brings me PS3 memories.

I don't give two hoots about cost. Give me the best, ditch the rest.

The price hasn't been revealed; it might not be cheaper than the Xbox Series X. (Another fact from me... To you.)

alexxonne said:

The truth is the PS5 will have 10.28 TFLOPS in boost mode versus the Xbox Series X's 12.1 TFLOPS at a fixed clock. PS5 standard performance (with no boost) should be around 9 TFLOPS according to early leaks. So in the end the PS5 will have roughly 18% (best case) to 25% (worst case) less graphics capability (vector processing) than the Series X. Boost mode may work differently than on PC, but it will affect performance all the same. This will translate into fewer graphically intensive features such as ray tracing. I wonder how many rays the PS5 can support at 4K60 versus the Series X; I'd put all my money on Microsoft ending up with an advantage if the current PS5 is the final product. The only thing that benefits Sony is the SSD technology behind the system. True, textures and maps will be almost immediately available to be cached by the GPU, but processing them is an entirely different story. How much the GPU can handle before it chokes and downclocks is key to a better experience. Take a cheap laptop with an SSD and try to run an old game against an old laptop with good specs and a slow hard drive: you will find that no matter how fast textures load, if the GPU can't handle the workload performance will stall, and the old laptop will be better and more stable. In the end, multi-platform games will not differ that much, but Series X exclusives will outmatch PS5 exclusives more easily due to brute performance and more available CPU/GPU resources. It will depend on the programming magic and support that Sony can give its first-party studios to stay on par with or better than the Series X. Facts.

I think the Pro's and Con's of each console are well documented at this point.

alexxonne said:
Pemalite said:

Your love for any company is irrelevant... And with all due respect... I honestly don't care.

Pemalite, your preferences are respected but not shared, not by me nor anyone else. Preferences are an individual matter; other people's preferences should not bother you, so I don't understand your attention to them. So... whatever.

Nice try turning it around. Other people's preferences do not bother me.

Did you not read the part where I said I honestly don't care?



--::{PC Gaming Master Race}::--

I love the nitty-gritty of systems. But that applies to the most powerful halo products as well as to embedded and low-power systems. It's also important to take software specs into account when discussing hardware specs. As a Linux user, getting an older but more powerful GPU might not make sense if, for example, it doesn't support Vulkan as well as a newer GPU does.



Pemalite said:

alexxonne said:

360 emulation on PC is poor because there are no good programmers interested in it, not even for the OG Xbox. True, some people have been working for years on the libraries and instruction sets for those systems, but without much luck beyond that. The PS3 emulator is great simply because it has a very good programmer devoted to achieving true PS3 emulation on PC, simple as that. The PS3 is a very complex system and its CPU is not an ordinary one; technically speaking it is still more advanced than the PS4 or Xbox One CPUs. Notice I said advanced, not faster.

Xbox 360 emulation is poor, not because people aren't interested in it... But because of a myriad of reasons.

You are quick to throw out the "facts" word but not willing to pony up evidence.

Even if there are a myriad of reasons, one of the biggest is probably that there are only a few good original Xbox and Xbox 360 titles left that you can't already play in a PC version.

If you go through the Xbox games (~200) or X360 games (~400) with a MetaScore of 80+, only a very few are neither available on PC (in most cases in a much prettier version) nor totally outdated sports games.

Emulation of Sony or Nintendo consoles is much more attractive because there are many more games without a PC version.

Another big reason is the exemplary treatment of old Xbox and 360 classics on Xbox One and Xbox One X.

Why invest much work into emulating the few original Xbox and 360 games that aren't available (or are broken) on PC, when Xbox fans just have to buy a used Xbox One X (you can get one for less than $200 starting this year, probably less than $150 next year) to play them in awesome quality?

  • Panzer Dragoon Orta in 4K
  • Fable Anniversary and Fable 2 in 4K
  • Forza Horizon 1 in 4K
  • Ninja Gaiden Black 1 + 2 in much higher resolution 
  • Splinter Cell 1 - 3 in much higher resolution (some PC versions have broken shadows)
  • Conker Live & Reloaded in much higher resolution 
  • BLACK in much higher resolution 
  • SSX 3 in much higher resolution 
  • Red Dead Redemption 1 in 4K

These great enhancements on Xbox One X are also the reason why an original Xbox mini console would probably fail: why play them on an $80 - $100 mini console in 480p, when you can play them on a used Xbox One S ($80 - $120) in 720p - 1080p or a used Xbox One X ($150 - $200) in 1080p - 4K?

Last edited by Conina - on 21 March 2020

Captain_Yuri said:

Imo, after reading DF's article, Cerny's claims actually make sense when it comes to the way they are doing boost. Fundamentally, consoles can't be in a situation where performance in games varies from console to console due to thermal throttling, power issues, etc. Every console needs to be able to run the game the exact same way as every other console, since that is one of the main purposes of a console over a PC. Sony mentions this in the DF article:

"According to Sony, all PS5 consoles process the same workloads with the same performance level in any environment, no matter what the ambient temperature may be." - DF

I think the variable/boost behaviour comes from the states the PS5 will be running in. I am pretty sure Cerny and the rest have done their research and determined that the PS5 will have various states it can run in that will be repeatable from console to console, depending on what the developers want.

As an example: if the CPU is running at 3.5GHz on all 8 cores, the GPU can run at some clock below 2.23GHz. If the GPU is running at 2.23GHz, then the CPU will run at some clock below 3.5GHz on all 8 cores. If both need to run at max performance at the same time, they will run slower than 3.5 and 2.23, but we don't know by how much. Mind you, Cerny says it's not much of a performance dip regardless. So if a game is very GPU bound but not very CPU bound, the PS5's GPU can boost to 2.23GHz all day, and vice versa, but the PS5 can't have both. The benefit and main difference of the Xbox is that because it's not "boosting," its GPU runs at its advertised performance at all times, and its CPU clock is determined only by whether or not the devs want SMT/HyperThreading.

So CPU and GPU load don't affect each other on the XSX, but they do on the PS5.

"When that worst case game arrives, it will run at a lower clock speed. But not too much lower, to reduce power by 10 per cent it only takes a couple of percent reduction in frequency, so I'd expect any downclocking to be pretty minor," - Cerny

https://www.eurogamer.net/articles/digitalfoundry-2020-playstation-5-specs-and-tech-that-deliver-sonys-next-gen-vision

Least that's what I got out of it anyway.

This is how I interpreted it from hearing Cerny talk. There is some talk here and on GAF/Reeee that the PS5 is actually locked in terms of performance and can run in “boost” mode at all times... that’s obviously false, or else why have variable speeds that can affect performance? A CPU-heavy game will affect GPU performance. A GPU-heavy game will affect the CPU. Those that utilize both will affect both. Meaning the more demanding a game is, the less performance you get out of the PS5, because it has to run at lower speeds.

It was interesting when he said you’d see at most around a 10% drop in performance; that would put you right around 9-9.2TF, which is what the original leaks had the PS5 at all along. So maybe they did see the XSX specs and overclock it? It will be interesting to see the price and cooling.
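Cerny's "couple of per cent of frequency for ten per cent of power" remark and the 9-9.2TF estimate are easy to sanity-check with a rough dynamic-power model. The assumption that power scales roughly with the cube of frequency (P ~ f x V^2, with voltage trimmed alongside frequency) is mine for illustration, not an official figure:

```python
# Rough dynamic-power model: P ~ f * V^2, and V is scaled down with f,
# so P scales roughly with f**3.  (Approximation, not an official figure.)

def relative_power(freq_scale):
    return freq_scale ** 3

print(relative_power(0.965))  # ~0.90 -> a ~3.5% downclock saves ~10% power

# A hypothetical worst-case 10% GPU downclock applied to the 10.28 TF figure:
print(36 * 64 * 2 * (2.23 * 0.90) / 1000)  # ~9.25 TFLOPS, close to the 9-9.2TF estimate above
```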

Either way both consoles should be capable of pushing out good stuff. XSX will just look better.



These posts get longer and longer to digest with all the quoting.
So I ask everyone for just one thing:

Some people think that 2.23GHz is a boost clock and the thing actually runs at a lower clock some/most of the time. It is NOT.
When Cerny mentioned boost in his talk, he meant it in the engineering sense of the word. When testing the SoC, they started with a low frequency and upped it step by step until the gpu was no longer able to function correctly (my guess is a lot of SoCs bit the dust). This gave them the absolute upper clock limit. Then they did the same thing again up to the point where the thermal/power envelope was reached with whatever cooling solutions were tested. Apparently 2.23GHz is the "sweet spot" for the gpu. (Surprisingly, the 3.5GHz for the cpu is already problematic due to a particular 256-bit instruction set that needs large amounts of power.)
Stepping up the clock is called boosting the clock in the engineering world. It has nothing to do with "This thing runs at x GHz but we can boost x by y%".



It overall doesn’t influence my purchase, but I don’t like to settle for a weaker console. I always prefer the console with the most power, as it shows in the games. Highest resolution and FPS possible



Xbox: Best hardware, Game Pass best value, best BC, more 1st party genres and multiplayer titles. 

 

sales2099 said:
Highest resolution and FPS possible

That is not how the future will work. Strangely, no one has mentioned FreeSync in the presentations...



drkohler said:
sales2099 said:
Highest resolution and FPS possible

That is not how the future will work. Strangely, no one has mentioned FreeSync in the presentations...

Lol, you can’t know that. And PC gamers would highly object to that.



Xbox: Best hardware, Game Pass best value, best BC, more 1st party genres and multiplayer titles.