
The Xbox One's Insides Have Changed A Little Bit Since E3 - GPU clockspeed upped

drkohler said:
Pemalite said:
greenmedic88 said:

As for the upclock on the GPU: yes, it will require more power and generate more heat, but I'm pretty sure it will be well within the design operating specs.

Keep in mind the XB1 box and its cooling system were designed to avoid RRoD type failures. The box itself is larger and I'm sure a great deal of thought went into keeping airflow and heat management at reliable levels.


The increase in power and heat would be insignificant; GCN scales very well with clock speed, topping out at around 1GHz-1.2GHz.
Besides, increasing the clock speed by 6.6% isn't going to cause a dramatic increase in heat and power consumption; the big jumps come from voltage increases, not clock speed increases.

In simple terms, if you want higher speeds, you need to provide more current per unit of time to the transistors. Simplified: if you want 6% higher speed, you need 6% more current (which is done by increasing the driving force, the voltage). Now if you looked into your old physics book, you would find that the heat dissipated over your resistors goes like P = I^2 * R. So contrary to what you think, heat dissipation gets worse with clock rate increases. AMD chips had a tendency to go through the roof rather early (compared to Intel chips) past some clock rate. This has improved, and AMD can go to around 1GHz without excessive heat production. Again, the key word here is "excessive" (check a 7970 for heat dissipation); the 53MHz clock rate increase does increase the heat output noticeably.
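
To put rough numbers on the two positions being argued, here is a quick back-of-the-envelope sketch. It assumes the standard first-order CMOS dynamic-power relation P ≈ C·V²·f; the voltage figures are purely illustrative, since the APU's actual operating voltage isn't public.

```python
# Back-of-the-envelope comparison of the two claims above, using the
# standard first-order CMOS dynamic-power relation P ~ C * V^2 * f.
# The voltage figures are made up for illustration; Microsoft has not
# published the APU's actual operating voltage.

def dynamic_power(capacitance, voltage, frequency_hz):
    """Relative dynamic power dissipation: P = C * V^2 * f."""
    return capacitance * voltage ** 2 * frequency_hz

C = 1.0                                   # arbitrary units; cancels in the ratios
base = dynamic_power(C, 1.00, 800e6)      # stock: 800 MHz at nominal voltage

# Case 1: clock bump only (Pemalite's scenario)
clock_only = dynamic_power(C, 1.00, 853e6)

# Case 2: clock bump plus a proportional voltage bump
# (drkohler's "6% more speed needs 6% more drive")
clock_and_voltage = dynamic_power(C, 1.066, 853e6)

print(f"clock only:        {(clock_only / base - 1) * 100:+.1f}% power")
print(f"clock and voltage: {(clock_and_voltage / base - 1) * 100:+.1f}% power")
```

On that simple model the 53MHz bump alone costs roughly 7% more power, while bumping the voltage along with it would cost roughly 20%, which is essentially the disagreement in this thread.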

The second misconception is the assumption that "larger box = better cooling". This is wrong, as a larger volume always means the possibility of dead air space. So it is actually more difficult to cool stuff in a large box than to cool the same stuff in a small box (if done correctly). What always wins is good airflow within your box, regardless of size. With "simple" consoles like the XBox One/PS4, which basically have only one major but very localized heat source (the APU), designing a reasonable cooling system wasn't that difficult for either company. MS chose to go with a large box because it wants it to look like a cool hi-fi component, and could then incorporate a "huge" fan as a freebie, pretty much noiseless until your game goes into overdrive.

I wonder who figured out the 853MHz, though. Why that odd number? Or maybe the guy who posted it simply mistook an engineering test ("we could run the APU consistently at 853MHz" in the lab) as a sign that end-user consoles run at 853MHz?

Notice that for a company that told everyone who wanted (or didn't want) to hear it that "it is not about the specs, it is all about the software", it is rather strange that they are fiddling with exactly those specs so late in the development cycle. Actually, we might already be in the production phase, so I wonder who ran the long-term tests with the increased clock rates...


To be honest, it's more the general public that is making a big deal out of an offhand comment. Reading the quote, Marc just mentioned the bump to show how, going from design to production, things have worked out better for the X1 tech and they were able to improve the system in general. Next thing you know, that comment is headline news, with sites stating "Xbox increases GPU output by 6.6%" etc. It's probably what's wrong with the world, where every little comment is made into some big announcement.



drkohler said:
Pemalite said:


The increase in power and heat would be insignificant; GCN scales very well with clock speed, topping out at around 1GHz-1.2GHz.
Besides, increasing the clock speed by 6.6% isn't going to cause a dramatic increase in heat and power consumption; the big jumps come from voltage increases, not clock speed increases.

In simple terms, if you want higher speeds, you need to provide more current per unit of time to the transistors. Simplified: if you want 6% higher speed, you need 6% more current (which is done by increasing the driving force, the voltage). Now if you looked into your old physics book, you would find that the heat dissipated over your resistors goes like P = I^2 * R. So contrary to what you think, heat dissipation gets worse with clock rate increases. AMD chips had a tendency to go through the roof rather early (compared to Intel chips) past some clock rate. This has improved, and AMD can go to around 1GHz without excessive heat production. Again, the key word here is "excessive" (check a 7970 for heat dissipation); the 53MHz clock rate increase does increase the heat output noticeably.


Rubbish.
You don't need to increase the current (voltage) to increase clock speeds; have you learnt nothing about overclocking on the PC? Hint: people have been doing it for decades, sometimes even reducing voltages while increasing clock speeds, thus reducing heat and power consumption.

No offense, but I'll take the proof that is decades' worth of overclocking over your opinion.



--::{PC Gaming Master Race}::--

drkohler said:

Notice that for a company that told everyone who wanted (or didn't want) to hear it that "it is not about the specs, it is all about the software", it is rather strange that they are fiddling with exactly those specs so late in the development cycle. Actually, we might already be in the production phase, so I wonder who ran the long-term tests with the increased clock rates...

I think it's not like they only found this out now... I guess they were forced to test a bump in the specs without needing to redesign the components... so an upclock... after all, "specs matter too".



Pemalite said:

Rubbish.
You don't need to increase the current (voltage) to increase clock speeds; have you learnt nothing about overclocking on the PC? Hint: people have been doing it for decades, sometimes even reducing voltages while increasing clock speeds, thus reducing heat and power consumption.

No offense, but I'll take the proof that is decades' worth of overclocking over your opinion.

I agree with @drkohler.

The example you give is with multipliers... so you didn't change the base clock, you just went with higher multipliers... like 10x200 = 2000MHz or 11x200 = 2200MHz... no change in voltage... same for, well, 12x166 ≈ 2000MHz, where you can even decrease the voltage.

But to increase the base clock you have a limit... you keep increasing it until you have to increase the voltage to stay stable.

Now remember GPUs only have the base clock (AMD GPUs, anyway... nVidia has two clocks, but I won't get into that part).
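
For illustration, a minimal sketch of the base clock x multiplier arithmetic described above (the pairings are hypothetical examples, not real BIOS presets):

```python
# CPU core clock = base clock (BCLK) x multiplier.
# Hypothetical pairings illustrating the point above: the same ~2000 MHz
# target can be reached with different BCLK/multiplier combinations,
# so the voltage does not necessarily have to change.

def core_clock_mhz(base_clock_mhz, multiplier):
    return base_clock_mhz * multiplier

examples = [
    (200, 10),   # 10 x 200 = 2000 MHz
    (200, 11),   # 11 x 200 = 2200 MHz
    (166, 12),   # 12 x 166 = 1992 MHz, roughly 2000 MHz
]

for bclk, mult in examples:
    print(f"{mult} x {bclk} MHz = {core_clock_mhz(bclk, mult)} MHz")
```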



ethomaz said:

I agree with @drkohler.

The example you give is with multipliers... so you didn't change the base clock, you just went with higher multipliers... like 10x200 = 2000MHz or 11x200 = 2200MHz... no change in voltage.

But to increase the base clock you have a limit... you keep increasing it until you have to increase the voltage to stay stable.

Now remember GPUs only have the base clock (AMD GPUs, anyway... nVidia has two clocks, but I won't get into that part).


I never mentioned anything about multipliers, nor do we have the option to play with multipliers on GPUs.

My point still stands: when you raise the voltage, heat and power consumption increase dramatically (i.e. the increase goes with the square of the voltage in a linear circuit).

If you increase the clock speed and not the voltage, the increase in heat and power consumption isn't going to be anywhere near as dramatic, nor do you *have* to increase the voltage to run at a higher clock speed (until you hit a wall and need more voltage, that is). Again, decades' worth of proof via overclockers is available for you to peruse via Google.
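
As a rough illustration of that scaling, a sketch assuming the same first-order P ≈ V²·f relation (the voltage and clock percentages are hypothetical, not measured Xbox One figures):

```python
# Relative dynamic power for a few overclocking scenarios, using the
# first-order relation P ~ V^2 * f (the capacitance term cancels out).
# The percentages are illustrative, not measured Xbox One figures.

def relative_power(voltage_scale, clock_scale):
    """Power relative to stock: (V/V0)^2 * (f/f0)."""
    return voltage_scale ** 2 * clock_scale

scenarios = {
    "+10% voltage, stock clock":  relative_power(1.10, 1.000),
    "stock voltage, +6.6% clock": relative_power(1.00, 1.066),
    "-5% voltage, +6.6% clock":   relative_power(0.95, 1.066),
}

for name, rel in scenarios.items():
    print(f"{name}: {(rel - 1) * 100:+.1f}% power")
```

The last row is the undervolt-plus-overclock case mentioned earlier: on this model a small voltage drop more than pays for the clock bump.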

Also, nVidia moved away from running the shader clocks separately from the core clock some time ago.



--::{PC Gaming Master Race}::--


Pemalite said:

I never mentioned anything about multipliers, nor do we have the option to play with multipliers on GPUs.

My point still stands: when you raise the voltage, heat and power consumption increase dramatically (i.e. the increase goes with the square of the voltage in a linear circuit).

If you increase the clock speed and not the voltage, the increase in heat and power consumption isn't going to be anywhere near as dramatic, nor do you *have* to increase the voltage to run at a higher clock speed (until you hit a wall and need more voltage, that is). Again, decades' worth of proof via overclockers is available for you to peruse via Google.

Also, nVidia moved away from running the shader clocks separately from the core clock some time ago.

I misunderstood then... sorry.

I agree with you in that explanation... but we don't know if the voltage was increased or not to add these 53MHz... in any case I think MS is playing it safe, because they had already made tests with 900MHz, 1000MHz, etc.



Gamecube said:
brendude13 said:
That's good news, but Microsoft shouldn't feel too pressured. Don't want another RROD.


RRoD had more to do with solder failure than chip failure.

Tell that to the Hana/Ana video encoders >_>



ethomaz said:
Darc Requiem said:
I'm waiting for Fake Kaz to announce on Twitter that the PS4 GPU's clock rate has been increased to 854MHz.

He did better...


And MS' net income still covers Sony's market cap :P (sorry, I don't have a Twitter account, but that's what I would have posted to the guy lol)

Moderated,

-Mr Khan



Unfortunately for consumers, MS' net income covering Sony's market cap didn't allow them to build a box with better specs than Sony's.



greenmedic88 said:
Unfortunately for consumers, MS' net income covering Sony's market cap didn't allow them to build a box with better specs than Sony's.


It was their choice and has nothing to do with the financial results.