Pemalite said:
EricHiggin said:

I don't know if I would say he was super clear, but in the presentation he definitely implied games would be smaller due to not having to duplicate data. This might be a bigger deal than it seems with only 825GB of raw storage space.

Duplicating data on a mechanical drive generally doesn't happen... It became a necessity on optical drives because seek times were so much slower. Plus, optical discs have a myriad of performance characteristics depending on where data is physically located on the disc.

Such issues tend to be lessened on a mechanical disk.

Fact is, games on consoles installed to a mechanical disk are generally a match for the PC's install size as well... And I can assure you, there is no duplication of data going on there.

EricHiggin said:

Yes, I would assume AMD does the overwhelming majority of the engineering and design. We'll have to see if SmartShift is a feature in the big Navi cards that are supposed to be coming later this year. Cerny seemed to be hinting there's something worthwhile in RDNA 2 thanks to SNY.

SmartShift is not going to be on Big Navi or RDNA2.
The technology requires a CPU, GPU, and other logic in a single package to actually function.

What HSA does for memory coherency... Smartshift does for power coherency.
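To illustrate the idea (a toy sketch of my own, not AMD's actual algorithm or numbers): instead of the CPU and GPU each having a fixed power cap, a single package budget gets shifted toward whichever block is busier.

```python
# Toy model of SmartShift-style power sharing within one package.
# Illustrative only; the budget value and load-based split are assumptions,
# not AMD's algorithm.
PACKAGE_BUDGET_W = 100.0

def split_budget(cpu_load: float, gpu_load: float) -> tuple[float, float]:
    """Return (cpu_watts, gpu_watts) under a shared package budget."""
    total = cpu_load + gpu_load
    if total == 0:
        return PACKAGE_BUDGET_W / 2, PACKAGE_BUDGET_W / 2
    cpu_w = PACKAGE_BUDGET_W * cpu_load / total
    return cpu_w, PACKAGE_BUDGET_W - cpu_w

# GPU-bound frame: most of the budget (and thus clock headroom) goes to the GPU.
print(split_budget(0.2, 0.9))  # -> (~18.2, ~81.8)
```

Which is also why it needs everything in one package: the budget can only be reallocated if a single controller sees, and can throttle, both blocks.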

EricHiggin said:

The leaks have existed for some time that 2.0GHz was supposed to be the peak. There was a rumor/leak not all that long ago that SNY had upgraded to, or decided to go with, a pricey high-end cooling solution. It's possible they got fairly solid info on XBSX and decided to push PS5 clocks slightly beyond where they were initially planned to be capped. That could also be why the talk was promoted, since they may not have final hardware yet. The shell may need to be slightly redesigned or increased in size. They would want to test plenty with new shells to make sure the heat dissipation and sound levels are acceptable before showing it off. It could also just be a marketing tactic, since the PS4 wasn't shown early.

The GPU peak is 2.23GHz. It can sustain that provided the CPU and I/O aren't pegged at 100%.

Hynad said:

That’s not what Cerny said.

So Cerny stated that when the CPU, GPU, I/O and all the other components are pegged at 100%, it can sustain boost clocks indefinitely?

Then why have boost clocks at all, and why not just claim them as base clocks? AFAIK Cerny never made that claim.

DonFerrari said:

The source for some assets appearing hundreds of times on the HDD was given during the GDC talk itself, but I've got one for you: https://www.forbes.com/sites/kevinmurnane/2019/10/19/the-super-fast-ssds-in-the-next-gen-game-consoles-may-turn-out-to-be-a-mixed-blessing/

Not that the exact number is relevant.

On the BC logic on the chip, theoretically that is what the PS5 has done for the PS4.

Fair call. Thank you.
I think you will find that the majority of the data is uncompressed audio... rather than duplicated data.

alexxonne said:

You see "FP" stands for Floating Point... 16 is half precision, 32 is full. Integers means whole number of precision and not a fraction of it.  You are trying to debate things out of your understanding. State facts not your rants.

The hilarious part I find in all of this is how oblivious you are to the fact that integer and floating point are different... At least this time you have attempted to make the distinction; perhaps you were confused or didn't convey yourself earlier? Who knows.

Integers can also have different degrees of precision, like INT4, INT8, INT16 and so forth.

Either way... Educate yourself on the difference between integer and floating point before we take this discussion any further; otherwise it's pointless if you cannot grasp basic computing fundamentals.
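If it helps, a quick Python illustration (my own example; NumPy's float32/float16 types stand in for the GPU formats): integers are exact, while floating-point formats have a fixed number of significand bits, so precision runs out sooner the narrower the format.

```python
import numpy as np

# Python integers are exact at any size.
print(2**24 + 1)                          # 16777217

# FP32 has a 24-bit significand, so 2**24 + 1 rounds back down to 2**24.
print(np.float32(2**24) + np.float32(1))  # 16777216.0

# FP16 (half precision) has an 11-bit significand and loses integer
# precision far sooner, at 2**11.
print(np.float16(2048) + np.float16(1))   # 2048.0
```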

https://en.wikipedia.org/wiki/Floating-point_arithmetic

https://en.wikipedia.org/wiki/Integer

https://www.dummies.com/programming/c/the-real-difference-between-integers-and-floating-point-values/

https://processing.org/examples/integersfloats.html

https://www.anandtech.com/show/13282/nvidia-turing-architecture-deep-dive/6

The more you know.

alexxonne said:

Teraflops are not a standard, not even within the same architecture. It is a measurement that depends on how many cores a unit has and its operations per cycle. The more cores or the higher the clock, the greater the performance you will get. But it all depends on how many floating-point operations per cycle a processing unit can achieve.

Teraflops is a standard. It's a standard that represents single-precision floating point in its most basic form.

It's generally a theoretical, not a real-world, denominator, but benchmarks exist that will provide a real-world extrapolation.
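For reference, the headline figure is just arithmetic: shader ALU count × 2 operations per fused multiply-add × clock. A quick sketch using the publicly stated PS5 and Series X configurations:

```python
def peak_fp32_tflops(shader_alus: int, clock_ghz: float) -> float:
    """Peak FP32 throughput, assuming each ALU retires one FMA (2 ops) per cycle."""
    return shader_alus * 2 * clock_ghz / 1000.0

print(peak_fp32_tflops(2304, 2.23))   # PS5: 36 CUs x 64 ALUs -> ~10.28
print(peak_fp32_tflops(3328, 1.825))  # Series X: 52 CUs x 64 ALUs -> ~12.15
```

Which is exactly why it's theoretical: nothing in that formula knows about memory bandwidth, caches, or how well a workload keeps the ALUs fed.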

alexxonne said:

The calculation only holds when you have the same architecture, meaning equal floating-point operations per cycle; with a given clock and core count, you can easily calculate performance across variants of the same architecture. But once you change the GPU architecture, the brand, or the unit generation, every variable changes the peak theoretical performance.

Except that idea falls flat on its face when you start throwing other aspects into the equation.

Take the GeForce GT 1030... DDR4 and GDDR5 variants exist, and even in floating-point tasks the DDR4 GT 1030 will sometimes perform only half as fast.
We can replicate this with the Radeon 7750 DDR3 vs GDDR5, and so on and so forth.
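The arithmetic behind that example (figures from the public spec sheets; the formula is just data rate × bus width): the two GT 1030 variants have nearly identical compute but wildly different memory bandwidth.

```python
def bandwidth_gb_s(data_rate_gtps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth: effective transfer rate times bus width in bytes."""
    return data_rate_gtps * bus_width_bits / 8

print(bandwidth_gb_s(6.0, 64))  # GT 1030 GDDR5: 48.0 GB/s
print(bandwidth_gb_s(2.1, 64))  # GT 1030 DDR4: 16.8 GB/s -- about a third
```

Same TFLOPS class on paper, a third of the bandwidth in practice, and the benchmarks show it.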

alexxonne said:

This is due to new built-in features that enhance processing. Another example: adding a ray-tracing buffer to a GPU will lead to better FPS performance without touching the TFLOPS calculation. So TFLOPS calculations across devices ARE NOT THE SAME. ++++For Christ's sake, READ.++++

And to my understanding, "kid" is not a bad word. It was used in context to denote poor judgement, not as an offensive term.

Ray tracing requires some form of compute. If it uses floating point, it uses teraflops.
If it uses integers, then it uses integers... Because you seem to be an "expert" in the field who is all-knowing, tell me: which is it?

Teraflops is the same regardless of device. It's theoretical.

And using "kid" in the context you were using it in, is a derogatory term, either way, don't argue that point, move on.

alexxonne said:

The Xbox One GPU was made with some legacy features, but not all of them. That was the sole reason backwards compatibility wasn't available at launch. It was a half-cooked idea. It took years for Microsoft to be able to emulate in software the features not available in the legacy hardware. In fact, they had to embed the whole 360 operating system into the Xbox One to do it. Where are your facts?

I have had these debates before and provided the evidence.

So where are your facts?

Microsoft isn't using pure emulation to get Xbox 360 and Original Xbox games running on the Xbox One. Those are the facts.

alexxonne said:

360 emulation on PC is poor because there are no good programmers interested in it, not even for the OG Xbox. True, some people have been working for years on the libraries and instruction sets for those systems, but without much more luck than that. The PS3 emulator is great simply because it has a very good programmer devoted to achieving true PS3 emulation on PC, simple as that. The PS3 is a very complex system and its CPU is not an ordinary one; technically speaking, it is still more advanced than the PS4 or Xbox One CPUs. Notice I didn't say faster, but more advanced. The architecture was so much better that its audio processing is way beyond what the PS4 is capable of; in addition, the CPU could even be used to assist the GPU with graphics workloads. Neither PS4 nor Xbox One had these features. Where are your facts?

Xbox 360 emulation is poor, not because people aren't interested in it... But because of a myriad of reasons.

You are quick to throw out the "facts" word but not willing to pony up evidence.

The Cell CPU is actually a very simple, in-order core design... It was "complex" not because of the instructions that need to be interpreted or translated, but due to the sheer number of cores and the load balancing across them, which, when it comes time for emulation... makes things much easier versus a monolithic core design.

Audio processing on PS3 is better than on PS4? You'd better provide the evidence for that... Or your facts are just whack.

The CPU couldn't be used to assist with GPU workloads? I guess all those draw calls just happened out of thin air... I guess post-process filters like morphological AA on the CPU never happened, and more. You will need to provide evidence that the CPU didn't assist the GPU... Otherwise your facts are just whack.

alexxonne said:

About the audio: well, this is a very subjective matter, but I don't want the PS5 to cost $100 more just to pay for the research on an audio module that may or may not deliver a more immersive experience. I do enjoy good audio, and audiophiles may have something here for them, but I'm not one of them. I have my doubts. The goal for the PS5 should be achieving a better product (feature- and performance-wise) while being the cheaper solution. The research put into this could have been used for BC with PS1/PS2 games and, if possible, PS3 as well. The PS5 specs only bring me PS3 memories.

Don't give two hoots about cost. Give me the best, ditch the rest.

Price hasn't been revealed; it might not be cheaper than the Xbox Series X. (Another fact from me... to you.)

alexxonne said:

Truth is, the PS5 will have 10.28 TFLOPS in boost mode vs the Xbox SX's 12.1 TFLOPS in fixed mode. PS5 standard performance (with no boost) should be around 9 TFLOPS according to early leaks. So in the end the PS5 will have 18% (best case) to 25% (worst case) less graphics capability (vector processing) than the Xbox SX. Boost mode may be different than on PC, but it will affect performance the same way. This will translate into fewer graphically intensive features such as ray tracing. I wonder how many rays the PS5 can support at 4K60 vs the Xbox SX; I put all my money on Microsoft ending up with an advantage if the current PS5 is the final product. The only thing that benefits Sony is the SSD technology behind the system. True, textures and maps will be almost immediately available to be cached by the GPU, but processing them is an entirely different story. How much the GPU will handle before it chokes and underclocks is key to achieving a better experience. Just take a cheap laptop with an SSD and try to run an old game vs an old laptop with good specs and a lame hard drive; you will find that no matter how fast textures load, if the GPU can't handle the workload the performance is going to stall, the old laptop being better and more stable. In the end, multi-platform games will not differ that much. But exclusive games on the Xbox SX will outmatch PS5 exclusives more easily due to brute performance and more available resources from the CPU/GPU. It will depend on the programming magic and support that Sony can give to its first-party studios to be on par with or better than the Xbox SX. Facts.

I think the pros and cons of each console are well documented at this point.

alexxonne said:

Pemalite, your preferences are respected but not shared, not by me nor anyone else. Preferences are an individual matter. Other people's preferences should not bother you, so I don't understand your attention to it. So... whatever.

Nice try turning it around. Other people's preferences do not bother me.

Did you not read the part where I said I honestly don't care?

I can't say the audio on PS4 is worse than on PS3, but Cerny was clear that they pushed less on audio for the PS4 than for the PS3, and with the PS5's Tempest Engine they went all in. He also said that the Tempest Engine is designed similarly to an SPE (no cache; data is dumped directly into it), so that may be where he got the idea that the audio on PS3 was better.

And the claim that very little of the data is duplicated also contradicts what Cerny said. He explained that duplication is the reason even some small patches had to reinstall the whole game, that because the HDD was slow many of the assets were duplicated, and that with the SSD the file sizes should be a lot smaller. PC installation sizes don't disprove it, since PC ports are developed to target HDD configurations by default.
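A back-of-the-envelope illustration of Cerny's point (numbers invented for the example): on an HDD, an asset referenced by many level chunks gets stored once per chunk so each chunk streams as one sequential read; on an SSD, seeks are cheap enough to store it once and reference it.

```python
# Invented numbers, purely to show the shape of the trade-off.
asset_mb = 4           # one shared texture/mesh
chunks_using_it = 200  # level chunks that reference it

hdd_install_mb = asset_mb * chunks_using_it  # one copy per chunk: 800 MB
ssd_install_mb = asset_mb                    # single copy, read on demand: 4 MB
print(hdd_install_mb, ssd_install_mb)
```

It would also explain the patch behaviour: touching one shared asset in the HDD layout means rewriting every chunk that embeds a copy of it.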

LudicrousSpeed said:
Captain_Yuri said:

Imo Cerny's claims actually make sense when it comes to the way they are doing boost, after reading DF's article. Fundamentally, consoles can't be in a situation where performance varies in games from console to console due to thermal throttling, power issues, etc. Every console needs to be able to run a game the exact same way as every other console, since that is one of the main purposes of a console over a PC. Sony mentions this in the DF article:

"According to Sony, all PS5 consoles process the same workloads with the same performance level in any environment, no matter what the ambient temperature may be." - DF

I think the variable/boost behaviour comes from the states the PS5 will be running in. I am pretty sure Cerny and the rest have done their research and determined that the PS5 will have various states it can run in that will be repeatable from console to console, depending on what the developers want.

As an example: if the CPU is running all 8 cores at 3.5GHz, the GPU can run at some clock below 2.23GHz. If the GPU is running at 2.23GHz, then the CPU will run all 8 cores at some clock below 3.5GHz. If both need to run at max performance at the same time, they will run slower than 3.5 and 2.23, but we don't know by how much; mind you, Cerny says it's not much of a performance dip regardless. So if a game is very GPU-bound but not very CPU-bound, the PS5's GPU can boost to 2.23GHz all day, and vice versa, but the PS5 can't have both. The benefit and the main difference of the Xbox is that, because it's not "boosting," it's able to run its GPU at its advertised performance at all times, and its CPU clock is determined by whether or not the devs want SMT/Hyper-Threading.

So CPU and GPU load on the XSX don't affect each other, but they do on the PS5.

"When that worst case game arrives, it will run at a lower clock speed. But not too much lower, to reduce power by 10 per cent it only takes a couple of percent reduction in frequency, so I'd expect any downclocking to be pretty minor," - Cerny

https://www.eurogamer.net/articles/digitalfoundry-2020-playstation-5-specs-and-tech-that-deliver-sonys-next-gen-vision

Least that's what I got out of it anyway.

This is how I interpreted it hearing Cerny talk. There is some talk here and on GAF/Reeee that the PS5 is actually locked in terms of performance and can run in "boost" mode at all times... that's obviously false, or else why have variable speeds that can affect performance? A CPU-heavy game will affect the GPU performance. A GPU-heavy game will affect the CPU. Those that utilize both will affect both. Meaning the more demanding a game is, the less performance you get out of the PS5, because it has to run at lower speeds.

It was interesting when he said you'd see at most around a 10% drop in performance; that would put you right around 9-9.2TF, which is what the original leaks had the PS5 at all along. So maybe they did see the XSX specs and overclock it? It will be interesting to see the price and cooling.

Either way, both consoles should be capable of pushing out good stuff. The XSX will just look better.

Nope. What he said is that with a couple percent drop in frequency you would see a 10% decrease in power demand, and that the performance drop would be minimal. You understood it wrong.
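That reading matches the usual first-order physics (my own back-of-the-envelope, not from the talk): dynamic power scales roughly with frequency × voltage², and voltage tracks frequency along the DVFS curve, so power goes roughly with the cube of frequency.

```python
# Rough dynamic-power model: P ~ f * V^2, with V roughly tracking f,
# so P ~ f^3. A simplification; real voltage/frequency curves are messier.
def relative_power(freq_scale: float) -> float:
    return freq_scale ** 3

# A ~3.5% frequency drop cuts power by roughly 10%, in line with Cerny's
# "couple of percent frequency for 10 per cent power".
print(1 - relative_power(0.965))  # ~0.10
```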


