Let's talk about Specs

You like Specs?

I love Specs! 16 (40.00%)
I kinda like Specs. 13 (32.50%)
Specs are for nerds! 2 (5.00%)
I don't care either way, ... 9 (22.50%)
Total: 40

@DonFerrari

I didn’t understand anything incorrectly. You’re saying the same thing I did, you just didn’t apply numbers. Yes, he said a 10% drop in performance. Please tell me what 10% of 10.3 is and what you’re left with when you subtract that.
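(For reference, the arithmetic being asked for, taking the rounded 10.3 TF figure at face value:)

$$0.10 \times 10.3\ \text{TF} = 1.03\ \text{TF}, \qquad 10.3\ \text{TF} - 1.03\ \text{TF} = 9.27\ \text{TF}$$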



Pemalite said:

Either way... educate yourself on the difference between integer and floating point before we take this discussion any further; otherwise it's pointless if you cannot grasp basic computing fundamentals.

If you are going to use Wikipedia, at least search for the proper term.

Source 1

Clearly you are confusing the context in which I used the term.

Pemalite said:

It's a standard that represents single-precision floating point in its most basic form.

That part is true. But FLOPS shouldn't be used as a standard for comparisons, because the theoretical FLOPS calculation and actual performance vary too much between devices.

Pemalite said:

Except that idea falls flat on its face when you start throwing other aspects into the equation.

Exactly; it's because those other aspects are the ones changing your equation.

Having different components affects the FLOPS measurement as well. You don't even need to change the GPU model: just compare an original AMD reference card against another brand's version of the same model, say Sapphire. Even with the same chipset, differences in binning and board design between the two change the actual performance. On top of that, you can have factory overclocks or underclocks relative to the reference clocks, affecting both FLOPS and performance. You are essentially saying that every 2.4L engine performs identically across all car manufacturers, regardless of who built it and how different the designs can be; or that all cars with 200 horsepower perform and run the same. That kind of logic fails.

Take the AMD Radeon VII at 13.8 TFLOPS vs the Nvidia RTX 2080 at 10 TFLOPS: the Nvidia card ended up performing better despite having less computational power on paper. A generation later, AMD released the RX 5700 XT at 9.7 TFLOPS, which performed on par with the older Radeon VII, again despite the lower TFLOPS figure. My friend, you're very confused about the FLOPS measurement. That is why it's called a theoretical performance number, not an actual one.
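As a rough illustration of where those headline numbers come from, here is a minimal sketch using the usual shaders × clock × 2 formula (one fused multiply-add counts as two FLOPs); the shader counts and boost clocks below are approximate figures from public spec sheets:

```python
# Theoretical peak FP32 TFLOPS: shaders x boost clock (GHz) x 2 FLOPs per cycle
# (one fused multiply-add per shader per cycle counts as two floating-point ops).
def theoretical_tflops(shaders: int, boost_ghz: float) -> float:
    return shaders * boost_ghz * 2 / 1000.0

# Approximate public specs: (shader count, boost clock in GHz).
cards = {
    "AMD Radeon VII":  (3840, 1.800),
    "Nvidia RTX 2080": (2944, 1.710),
    "AMD RX 5700 XT":  (2560, 1.905),
}

for name, (shaders, clock) in cards.items():
    print(f"{name}: {theoretical_tflops(shaders, clock):.2f} TFLOPS")
# The Radeon VII tops this table at ~13.8 TFLOPS, yet the ~10 TFLOPS RTX 2080
# beats it in real games -- the formula says nothing about architecture.
```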

Source 1

Source 2

Source 3

Pemalite said:

I have had these debates before and provided the evidence.

So where are your facts?

Microsoft isn't using pure emulation to get Xbox 360 and Original Xbox games running on the Xbox One. Those are the facts.

I said Microsoft's BC mode on the Xbox One was a half-baked idea. Part of it comes from legacy features built into the GPU; the other part comes from emulating the rest via a software translator (hence, emulation). You are debating yourself here, not me.

Source 1

"...the fact that certain aspects of the Xbox 360 hardware design are indeed built into the Xbox One processor - specifically, support for texture formats and audio. "It's what makes this sort of possible for us, because then we can take all of those shaders that we collect and we can package them and all the Enlightenments, and then we just go through and we do actual performance playthroughs to determine that the emulator is executing everything right."

It's not an easy task because fundamentally, the Xbox 360's PowerPC processor is worlds apart from Xbox One's x86 foundation. Floating point calculations need to be adapted from 40-bit to 32-bit..."

Source 2

"...It may surprise you, but the lion’s share of Xbox One’s backwards compatibility (be it S or X) is handled by software, not the machine itself. “In order to make this happen, which we didn’t think was initially possible, we went ahead and built a virtual [Xbox] 360 entirely in software,” Bill Stillwell, who is a Microsoft platform lead, told Larry ‘Major Nelson’ Hryb on the Xbox icon’s YouTube show..."

"“...We had to bake some of the backwards compatibility support into the [Xbox One] silicon.” Considering the first back-compat 360 games didn’t hit Xbox One until 2015, that’s mightily impressive foresight..."

But you wanted facts...

Pemalite said:

The Cell CPU is actually a very simple in-order core design... It was "complex" not because of the instructions that need to be interpreted or translated, but due to the sheer number of cores and the load balancing of those.

You're contradicting yourself in your very own sentence. It's either simple or complex.

Even Mark Cerny stated the PS3 was the most complex console, taking developers the longest time to learn and code for.

Source

Pemalite said:

Price hasn't been revealed; it might not be cheaper than the Xbox Series X. (Another fact from me... to you.)

Where is your fact? TOTAL FAIL.

Pemalite said:

Did you not read the part where I said I honestly don't care?

If you honestly don't care, then why bother?

Last edited by alexxonne - on 21 March 2020

LudicrousSpeed said:

@DonFerrari

I didn’t understand anything incorrectly. You’re saying the same thing I did, you just didn’t apply numbers. Yes, he said a 10% drop in performance. Please tell me what 10% of 10.3 is and what you’re left with when you subtract that.

A 10% drop in power consumption comes from just a couple percent frequency decrease, so there's minimal performance drop.
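(For intuition on why a small clock drop buys a large power drop: dynamic power scales roughly as frequency times voltage squared, and voltage tracks frequency, so as a crude approximation $P \propto f^3$. A worked sketch under that assumption:)

$$\frac{P_{\text{new}}}{P_{\text{old}}} \approx \left(\frac{f_{\text{new}}}{f_{\text{old}}}\right)^{3}, \qquad 0.97^{3} \approx 0.91$$

So a roughly 3% frequency cut already yields a roughly 9-10% power cut.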



duduspace11 "Well, since we are estimating costs, Pokemon Red/Blue did cost Nintendo about $50m to make back in 1996"

http://gamrconnect.vgchartz.com/post.php?id=8808363

Mr Puggsly: "Hehe, I said good profit. You said big profit. Frankly, not losing money is what I meant by good. Don't get hung up on semantics"

http://gamrconnect.vgchartz.com/post.php?id=9008994

alexxonne said:
[...]

Take the AMD Radeon VII at 13.8 TFLOPS vs the Nvidia RTX 2080 at 10 TFLOPS: the Nvidia card ended up performing better despite having less computational power on paper. [...] My friend, you're very confused about the FLOPS measurement. That is why it's called a theoretical performance number, not an actual one.

Pema is the most vocal user here in saying TFLOPS is a useless measure, and that even on the same architecture a lower-TFLOPS card can outperform a higher one because of other aspects of the card. So I guess you aren't understanding his point: a TFLOP is a theoretical maximum that always means the same thing (1 TFLOP is always 1 TFLOP; it's not that an AMD TFLOP equals 0.6 of an Nvidia TFLOP or anything like that), but of course real-world performance will vary greatly, and that is the efficiency gain every generation sees.




DonFerrari said:
LudicrousSpeed said: [...]

A 10% drop in power consumption comes from just a couple percent frequency decrease, so there's minimal performance drop.

So you get a performance drop in intense games: either the CPU throttles in games that heavily utilize the GPU, or there is an actual TFLOP drop in games that utilize the CPU, because the GPU needs to be throttled down. You can bicker about exact numbers even though Sony provided none; the person I quoted and I have the correct understanding of what Cerny was saying.

It remains to be seen how often the system can actually run at full power. Logic would dictate not often; otherwise there would be no need for a boost mode and variable frequencies. Remember, this is the same guy who said something like 8 TF would be required for native 4K. It's not like he's infallible.

People smarter, or at least more knowledgeable than us, have said that if MS used the same method of varying frequencies, the XSX could get to around 14.6 TF. They aren't doing it, because it's not smart. Sony will either have a very good, very expensive cooling solution in the PS5, or the console will not run at its max frequencies often.
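(For what it's worth, the arithmetic behind that kind of claim is easy to reproduce; this assumes the XSX's 52 CUs, i.e. 3328 shaders, hypothetically ran at a PS5-like ~2.2 GHz instead of their locked 1.825 GHz:)

$$52 \times 64 \times 2 \times 2.2\ \text{GHz} \approx 14.6\ \text{TF} \quad\text{vs.}\quad 52 \times 64 \times 2 \times 1.825\ \text{GHz} \approx 12.1\ \text{TF}$$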



DonFerrari said:

a TFLOP is a theoretical maximum that always means the same thing (1 TFLOP is always 1 TFLOP) [...] but of course real-world performance will vary greatly [...]

Agreed.

I do understand the term means the same thing as long as it's compared to another theoretical measurement, without taking any device into context.

But a FLOPS measurement will never equal the real performance a device delivers when compared to another (PS3, 360, PS4, AMD, Nvidia, etc.).

Last edited by alexxonne - on 21 March 2020

LudicrousSpeed said:
[...]

People smarter, or at least more knowledgeable than us, have said that if MS used the same method of varying frequencies, the XSX could get to around 14.6 TF. They aren't doing it, because it's not smart. Sony will either have a very good, very expensive cooling solution in the PS5, or the console will not run at its max frequencies often.

That person has zero access to the chips and just worked out a what-if mathematical scenario of the XSX running at 2.23 GHz; that doesn't mean it can.

Cerny was very clear that the console can sustain that frequency permanently, and about what drop can be expected.

You are just picking a speculative post and running with it as if it were true because it matches your own understanding. DF, on the other hand, expects even less difference than 12 vs 10.28 would suggest.

alexxonne said:
DonFerrari said: [...]

Agreed.

I do understand the term means the same thing as long as it's compared to another theoretical measurement, without taking any device into context.

But a FLOPS measurement will never equal the real performance a device delivers when compared to another (PS3, 360, PS4, AMD, Nvidia, etc.).

And he isn't disputing that.

The one thing of his I would disagree with is the frequency compatibility. Yes, you can emulate without it, but having the same clock (which says nothing about whether the machine is more powerful or not) should make compatibility easier.

But if a matching frequency were really necessary, then the PS5 wouldn't be able to do boost-mode BC for PS4 games; and from what I understand, Sony is going for basically pure hardware BC (logic on the chip) with the PS4.




He didn’t say they could run at those frequencies permanently. If they could, why would there even be any variance? He literally said he’d expect both to run there most of the time; definitely not permanently.

I mean, right in your post is a contradiction: “It can run there permanently, but expect these drops!”

Are you implying that the system COULD run there permanently, even though it won’t? If that’s your point, who cares? It won’t run there permanently.



I may be wrong, but let's compare another situation: the Dodge Demon has a V8 with a displacement as big as 6.2 L, plus a 2.7 L supercharger. The McLaren 720S has a 4.0 L twin-turbo. On paper, the Demon should eat the McLaren alive, but it turns out it doesn't...




"We all make choices, but in the end, our choices make us" - Andrew Ryan, Bioshock.

drkohler said:
These posts get longer and longer to digest with all the quoting, so I ask everyone for just one thing:

Some people think 2.23 GHz is a boost clock and the thing actually runs at a lower clock some or most of the time. It does NOT.

When Cerny mentioned boost in his talk, he meant it in the engineering sense of the word. When they tested the SoC, they started at a low frequency and upped it step by step until the GPU was no longer able to function correctly (my guess is a lot of SoCs bit the dust). This gave them the absolute upper clock limit. Then they did the same thing again, up to the point where the thermal/power envelope was reached with whatever cooling solutions were tested. Apparently 2.23 GHz is the "sweet spot" for the GPU. (Surprisingly, the 3.5 GHz for the CPU is already problematic, due to a particular 256-bit instruction set that needs large amounts of power.)

Stepping up the clock is called boosting the clock in the engineering world. It has nothing to do with "this thing runs at x GHz, but we can boost x by y%".
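(To make that distinction concrete, here is a minimal, purely illustrative sketch of the step-up characterization described above; the starting clock, step size, and the stability probe are all hypothetical:)

```python
# Purely illustrative sketch of step-up clock characterization: raise the
# clock until the part stops functioning correctly, then the same procedure
# can be repeated against a thermal/power budget. All values are hypothetical.
def find_clock_limit(is_stable, start_mhz=1500, step_mhz=25):
    """Step the clock up until the device no longer passes the stability probe."""
    clock = start_mhz
    while is_stable(clock + step_mhz):
        clock += step_mhz
    return clock  # absolute upper limit for this particular sample

# Example probe: pretend this sample stops working above 2230 MHz.
hard_limit = find_clock_limit(lambda mhz: mhz <= 2230)
print(hard_limit)  # -> 2225 with these hypothetical numbers
```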

Well, Mark Cerny's own words are that variable frequency means continuous boost. If you look at the presentation, you can even see the graphs in the background boosting UP and DOWN constantly. But the GPU will not sit constantly at 2.23 GHz; it will reach that output when it needs to at a given moment, by underclocking the CPU. This means the GPU clock may be boosting between roughly 1.8 and 2.23 GHz. He didn't want to give a base clock speed, but I can assure you one exists. If we go by early leaks, the base clock should be around 2 GHz (about 9 TFLOPS).
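(For scale, plugging the PS5's 36 CUs into the same peak-FLOPS formula as before, 36 CUs × 64 shaders × 2 ops per cycle, reproduces both the headline figure and the ~9 TF figure from those early leaks:)

$$36 \times 64 \times 2 \times 2.23\ \text{GHz} \approx 10.3\ \text{TF}, \qquad 36 \times 64 \times 2 \times 2.0\ \text{GHz} \approx 9.2\ \text{TF}$$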

We have to wait and see how this approach plays out. AMD's SmartShift technology could indeed be something. It doesn't convince me yet; I see stability problems everywhere. And we have yet to see the cooling solution; it must be a robust one.

Source