alexxonne said:
Pemalite said:

I have a Ryzen 2700u notebook which I actually use to do some light gaming on while I was training interstate... Well. Used to. Don't use it anymore thanks to the Switch... Probably should.

Throttling the CPU so that the GPU can "boost" is not an alien concept for me... The TDP of the APU is 15W, which never changes, so if I reduce the CPU's portion of the TDP by limiting clock rates to 60%, then the GPU's portion of the TDP can increase. - It actually allows Overwatch to hit 720P@60fps. Otherwise the resolution takes a bit of a hit.
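(The post doesn't say how the 60% cap was set; one common way on a Windows notebook, offered here as an assumption, is the "maximum processor state" power setting, which `powercfg` can change from an elevated command prompt:)

```shell
:: Sketch: cap the maximum processor state at 60% on AC power.
:: SCHEME_CURRENT, SUB_PROCESSOR and PROCTHROTTLEMAX are built-in
:: powercfg aliases for the active scheme, the processor subgroup,
:: and the "maximum processor state" setting respectively.
powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMAX 60

:: Re-apply the active scheme so the new value takes effect.
powercfg /setactive SCHEME_CURRENT
```

With the CPU capped this way, the firmware's power-sharing logic has spare budget to sustain higher GPU clocks.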
It's essentially what Sony is doing with SmartShift, which takes the concept a step further.

I don't know what I am reading in this thread from other users... But I honestly can't be bothered.

Hi there

I suppose that is because the Ryzen APU is monitoring the TDP all the time; if it goes too high you will blow the capacitors, depending on the chip, in this case the main APU capacitors.

Uh. No.
You cannot exceed the TDP; it is a hard limit in the firmware.

However... me being an enthusiast with no fucks to give, I have employed various tools to tweak the firmware and push the TDP limit to 25W.
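(The post doesn't name the tools; one commonly used open-source option for Ryzen mobile APUs, offered here as an assumption, is RyzenAdj, which can raise the sustained and boost power limits. Values are in milliwatts:)

```shell
# Sketch: raise the sustained (STAPM) and fast/slow boost power
# limits to 25 W on a Ryzen mobile APU. Requires root and a
# supported chip; out-of-spec limits are at your own risk.
sudo ryzenadj --stapm-limit=25000 --fast-limit=25000 --slow-limit=25000
```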

alexxonne said:

By reducing the CPU portion (when possible), you are cheating the APU's TDP monitor, giving you more tolerance for a higher GPU frequency. It can be a nice boost, given it would be better sustained and not throttled back when heat is an issue, and with no apparent damage to the capacitors' life.

Uh. No.

The chip shares its TDP between the CPU and GPU... A lower load on either will allow the other to ramp up in clock rate.
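(The arithmetic behind the sharing is simple: the package budget is fixed, so whatever the CPU does not draw is available to the GPU. A toy sketch, not AMD's actual firmware logic; the wattage numbers are illustrative:)

```shell
#!/bin/sh
# Toy model of a fixed shared power budget (not AMD's real algorithm).
TDP_W=15          # total package budget, e.g. a 15 W Ryzen 2700U

gpu_budget() {
    # Watts left for the GPU once the CPU's draw is subtracted.
    cpu_draw_w=$1
    echo $((TDP_W - cpu_draw_w))
}

# CPU running flat out at ~10 W leaves only ~5 W for the GPU...
gpu_budget 10
# ...while capping the CPU near 6 W frees up ~9 W of GPU headroom.
gpu_budget 6
```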

alexxonne said:

Obviously this will help you in some games but not in others that do not depend on frequency (for example, a game that depends on shader-unit count rather than clock, or a game that depends more on the CPU than the GPU).

It will assist in all games that are GPU bound... And considering we are talking about integrated graphics and running at 1080P... That is pretty much every game.

alexxonne said:

I have a question: if you scaled the CPU clock down to 60%, how much of an increase did you achieve with the GPU, 40%? If I'm not mistaken, with 1,300MHz a 40% clock increase would be 1,820MHz... I suppose you didn't end up that high, but if that were the case, the capacitors would have endured, but the VRAM would die prematurely if not well ventilated. Bad VRAM is a pain in the ass; it can show up at any time without warning, and it is not immediately evident to diagnose. Lots of used cards on eBay are like this: they randomly crash, whether you're playing a game or using Word.

Something very similar to what AMD/Nvidia/Intel do, but in reverse: whenever a chip design or manufacturing process is flawed, they deactivate some internals, tweak the clocks, and sell the chips as a lower-tier product.

1,300MHz is the upper limit of the GPU boost. - However, its average clock rate is lower if the CPU is under 100% load. It cannot exceed that hard limit.
The CPU will go "up to 3.8GHz on all 4 CPU cores."
The GPU will go "up to 1.3GHz on all 10 CUs."

Limiting one gives the other headroom to achieve higher average clock rates... This is the same principle the PS5 will work on, and it is entirely under developer control.

Capacitors? What are you even on about?

VRAM? You do realize this is an APU, meaning integrated graphics, and thus it doesn't have dedicated VRAM?

the-pi-guy said:

1.) We don't know what the base clock is. We don't know if there even is a base clock, because the paradigm for how the system chooses a frequency is the complete opposite of the norm.

2.) From Cerny's comments, we know it spends the majority of its time near the max frequencies. More than likely it'll be closer to a 10 TF machine for games.

The base clock is irrelevant, as it depends on what the developer prioritizes... So it will vary from game to game, even from one scene to the next within a game.



--::{PC Gaming Master Race}::--