
Carzy Zarx’s PC Gaming Emporium - Catch Up on All the Latest PC Gaming Related News

Captain_Yuri said:
Bofferbrauer2 said:

Those 54W are for the entire APU, not just the iGPU.

Also, small improvement? Compared to its direct predecessor, the 6900HS, it went from 2292 to 2791, a roughly 22% increase. It's certainly not as big as the jump from Vega to RDNA2, but it's a substantial increase nonetheless. It's also getting dangerously close to the various Nvidia 16/20/3050 laptop GPUs while obliterating the entire MX series lineup. What more did you expect, really?

I don't see why the 54W figure should be the only one that matters. If the GPU, limited to 25W, can still beat everything last gen could reach, and last gen was already considered pretty playable in FHD with low or medium settings, I'd say the 780M remains very playable while on battery.

Why shouldn't it be possible for the Steam Deck 2? The Vega 8 in the Steam Deck (which only reaches up to 1700 points, btw) is also underclocked compared to the 4000/5000 series APUs. It probably won't score quite as high in a Steam Deck configuration even when paired with faster memory (the 780M supports up to LPDDR5-7500), but I think a score comparable to the 6800U's in the graph (which most probably ran at a 25W TDP to get that score) is very achievable. That in turn would be a 30-40% increase, plus whatever an upgraded CPU adds to the mix.

A: It is a pretty small improvement, because the 6800U has a TDP of 28 watts. If we compare the 28-watt 6800U to the 25-watt 7940HS, that's about an 8-9% improvement. The 6900HS also has a TDP of 35 watts... so for the 7940HS to hit its 2791 score, it needed to be bumped up to 54 watts. That's a 54% increase in wattage for a 22% increase in performance... That's horrid... I expect better from AMD, that's for sure.

B: Also, a 3050 has nearly double the Time Spy score, so it's not really close...

That's also some Intel logic you've got there. Sure, it can beat everything last gen at 25 watts, but only by 1-8%, which is pretty terrible...

The Steam Deck does not have a Vega 8 GPU. I have no idea where you got that from. It has 8 RDNA 2 CUs, and the APU runs at 15 watts. The point was that, given how inefficient the 7940HS is, Valve might be better off going with an RDNA 2 solution, or waiting for RDNA 4 to see if it brings bigger efficiency gains for the Steam Deck 2.

A: Keep in mind that the 6800U also has more bandwidth to play with (LPDDR5-6400 compared to LPDDR5-5600), which certainly skews this in favor of the 6800U. And despite having more TDP headroom (28W vs 25W) and more memory bandwidth, the 6800U gets beaten by the 7940HS by about 9%.

B: I was wrong about the 3050, I thought it was weaker than that considering how bad the desktop version is, mea culpa on that one.

C: I meant RDNA2, no idea why I wrote Vega 8 there, maybe by association since it uses Zen 2 (it was pretty late). Still, the Steam Deck just lingers between 1500 and 1700 points depending on settings, which is way below what the 780M achieves. Even at 15W there would still be a healthy uplift, especially if paired with faster memory, as I'm fairly sure the 780M could go quite a bit higher if bandwidth weren't strangling it.
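For what it's worth, here's a quick back-of-the-envelope check of the numbers argued above, using the Time Spy scores quoted in the thread and treating TDP as actual power draw (which is an assumption):

# Rough perf-per-watt check using the Time Spy scores quoted above.
# Assumes TDP equals actual power draw, which is optimistic.
score_6900hs, tdp_6900hs = 2292, 35   # 6900HS at stock
score_7940hs, tdp_7940hs = 2791, 54   # 7940HS at its raised power limit

perf_gain = score_7940hs / score_6900hs - 1    # ~0.22 -> +22%
power_gain = tdp_7940hs / tdp_6900hs - 1       # ~0.54 -> +54%
print(f"performance: +{perf_gain:.0%}, power: +{power_gain:.0%}")

# Points per watt actually regress at the 54W operating point:
print(f"{score_6900hs / tdp_6900hs:.1f} pts/W")   # ~65.5
print(f"{score_7940hs / tdp_7940hs:.1f} pts/W")   # ~51.7

# And the memory bandwidth gap mentioned for the 25W comparison:
print(f"{6400 / 5600 - 1:.0%}")   # LPDDR5-6400 vs -5600 -> ~14% more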



hinch said:

Well, Ada is essentially Ampere refreshed on a new node, with enhancements and a crapload of cache added. Blackwell is (supposedly) going to be a lot different architecturally, so I think they'll keep power consumption at a reasonable level. As in, I can't see them going over 500W for the 5090.

3nm and IPC gains from a new architecture will give them enough of a performance jump in two years. And if (and that's a big if) AMD somehow manages to pull a rabbit out of the hat with RDNA 4, they can crank up the clocks.

I get what you're saying, but Turing, Ampere, and RDNA3 all merely matched or underperformed their node's gains. The last time a new architecture delivered a meaningful jump in efficiency, other factors being equal, was Maxwell and RDNA1. Which means straightforward architecture refreshes have fared just as well on efficiency, on average, so I'm not sure there's a trend here.

That being said, at least for the next generation there's headroom between the 4090's TDP and its actual non-RT gaming power draw, as well as the possibility of making a bigger chip with more conservative clocks, even with meager efficiency gains from 3nm...




Alleged NVIDIA GeForce RTX AD106 GPU Benchmarks Leak Out, On Par With RTX 3070 Ti

https://wccftech.com/alleged-nvidia-geforce-rtx-ad106-gpu-benchmarks-leak-out-on-par-with-rtx-3070-ti/

ASUS UK is offering up to 300 GBP for old GeForce/Radeon GPUs when buying RTX4080/4070Ti

https://videocardz.com/newz/asus-uk-is-offering-up-to-300-gbp-for-old-geforce-radeon-gpus-when-buying-rtx4080-4070ti

Those horrid trade-in prices are the prices I wish GPUs were on sale for.

AMD 12-core Ryzen 9 7845HX CPU is 90% faster than 8-core 6900HX in PassMark multi-core test

https://videocardz.com/newz/amd-12-core-ryzen-9-7845hx-cpu-is-90-faster-than-8-core-6900hx-in-passmark-multi-core-test

Keep in mind that the 7845HX is essentially a 7900X in eco mode inside a laptop... And that's a very impressive achievement!

Chinese Moore Threads MTT S80 GPU can run Crysis

https://videocardz.com/newz/chinese-moore-threads-mtt-s80-gpu-can-run-crysis




PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

haxxiy said:

A worrying trend: flagship GPUs have increased power consumption by ~15% per generation (in real-world scenarios, though estimating it via TDP gives a similar result). This is happening against a backdrop of shrinking efficiency gains from architectural improvements, with rare exceptions.

So far that extra power has been used to sustain average gains of ~50% per generation over the last decade, but ahead of us lies a string of nodes with worse power characteristics, all the way to the end of CMOS die shrinking.

Will we see a 5090 with a 560W TDP to get to that 40% improvement? A 6090 keeping said TDP but drawing closer to it? A 7090 going for 650W? Or will consumer pushback and/or a genuinely novel architecture save us from going down that path?

That's why Nvidia, AMD and Intel are all pushing upscaling technologies along with frame generation like DLSS 3. The ceiling of gen-on-gen improvements is fast approaching, and there's only so much you can do on the architectural side with minimal node improvements. But on the software side, the possibilities are endless. They have already fixed a lot of artifacting issues with the DLSS 3 update, and as with DLSS 2, it will only keep getting better until it eventually becomes as widely accepted as DLSS 2 has been.
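Taking that ~15%-per-generation power trend at face value, here's a purely illustrative projection from the 4090's 450W TDP (the generation names and the assumption that the trend holds are mine, not established facts):

# Illustrative projection of flagship TDP under a ~15% per-generation
# power increase, starting from the 4090's 450W. Hypothetical models.
tdp = 450.0
for gen in ("5090", "6090", "7090"):
    tdp *= 1.15
    print(f"{gen}: ~{tdp:.0f} W")
# 5090: ~518 W, 6090: ~595 W, 7090: ~684 W -- the same ballpark as the
# 560W/650W figures floated above if the trend runs slightly hotter.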

Bofferbrauer2 said:
Captain_Yuri said:

A: It is a pretty small improvement, because the 6800U has a TDP of 28 watts. If we compare the 28-watt 6800U to the 25-watt 7940HS, that's about an 8-9% improvement. The 6900HS also has a TDP of 35 watts... so for the 7940HS to hit its 2791 score, it needed to be bumped up to 54 watts. That's a 54% increase in wattage for a 22% increase in performance... That's horrid... I expect better from AMD, that's for sure.

B: Also, a 3050 has nearly double the Time Spy score, so it's not really close...

That's also some Intel logic you've got there. Sure, it can beat everything last gen at 25 watts, but only by 1-8%, which is pretty terrible...

The Steam Deck does not have a Vega 8 GPU. I have no idea where you got that from. It has 8 RDNA 2 CUs, and the APU runs at 15 watts. The point was that, given how inefficient the 7940HS is, Valve might be better off going with an RDNA 2 solution, or waiting for RDNA 4 to see if it brings bigger efficiency gains for the Steam Deck 2.

A: Keep in mind that the 6800U also has more bandwidth to play with (LPDDR5-6400 compared to LPDDR5-5600), which certainly skews this in favor of the 6800U. And despite having more TDP headroom (28W vs 25W) and more memory bandwidth, the 6800U gets beaten by the 7940HS by about 9%.

B: I was wrong about the 3050, I thought it was weaker than that considering how bad the desktop version is, mea culpa on that one.

C: I meant RDNA2, no idea why I wrote Vega 8 there, maybe by association since it uses Zen 2 (it was pretty late). Still, the Steam Deck just lingers between 1500 and 1700 points depending on settings, which is way below what the 780M achieves. Even at 15W there would still be a healthy uplift, especially if paired with faster memory, as I'm fairly sure the 780M could go quite a bit higher if bandwidth weren't strangling it.

A: Imo, at best the uplift from increasing memory bandwidth will be a few percent, because of how power-limited the APU is. We can see this because increasing the power netted big gains even at the same memory speed. So even if we're generous and say a 15% gen-on-gen uplift at 25 watts, I think it's pretty disappointing. Perhaps RDNA 4 will fix a lot of RDNA 3's shortcomings, just as Lovelace fixed a lot of Ampere's shortcomings after Nvidia went dual FP32 for the first time.

C: I think for Valve, if we assume the uplift is 15% with RDNA 3 compared to RDNA 2, it may be cheaper to go with RDNA 2 and spend the saved budget on a bigger battery, raising the TDP to 25 watts, along with other improvements. Of course, this really depends on the price difference they negotiate: if going with the latest gen costs a lot more than last gen, the 15% uplift may not be worth it, but if the gap is small, then sure. Or just wait for RDNA 4 and see how that pans out.
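A minimal sketch of that power-limited argument, using the thread's numbers where available (the 7940HS's 25W Time Spy score isn't given above, so the ~2400 below is an assumed placeholder implied by the quoted 1-8% lead over the 6900HS's 2292):

# Hedged sketch of the "power-limited, not bandwidth-limited" argument.
# score_25w_assumed is an ASSUMED placeholder, not a measured score.
score_25w_assumed = 2400
score_54w = 2791
print(f"+{score_54w / score_25w_assumed - 1:.0%} from power alone")  # ~+16%
# Both operating points use the same LPDDR5-5600, so that uplift came
# purely from the bigger power budget -- i.e., the iGPU is power-limited
# there, and faster memory at 25W would likely only add a few percent.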




PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

Earlier this week, Backblaze published its Drive Stats for 2022. I didn't post it, and I don't recall Yuri doing it either, so here they are:

https://www.backblaze.com/blog/backblaze-drive-stats-for-2022/ (worth a read to know more details)

And the comparison with 2020 and 2021



Please excuse my bad English.

Currently gaming on a PC with an i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

Captain_Yuri said:

ASUS UK is offering up to 300 GBP for old GeForce/Radeon GPUs when buying RTX4080/4070Ti

https://videocardz.com/newz/asus-uk-is-offering-up-to-300-gbp-for-old-geforce-radeon-gpus-when-buying-rtx4080-4070ti

Those horrid trade-in prices are the prices I wish GPUs were on sale for.

Why do they give more for a 2080 or 2080 Super (with probably a lot more "mileage") than for a 3060 Ti?




Conina said:
Captain_Yuri said:

ASUS UK is offering up to 300 GBP for old GeForce/Radeon GPUs when buying RTX4080/4070Ti

https://videocardz.com/newz/asus-uk-is-offering-up-to-300-gbp-for-old-geforce-radeon-gpus-when-buying-rtx4080-4070ti

Those horrid trade-in prices are the prices I wish GPUs were on sale for.

Why do they give more for a 2080 or 2080 Super (with probably a lot more "mileage") than for a 3060 Ti?

Because it's more likely that someone with a two-gens-old GPU upgrades than someone with a card from the previous gen? Or because someone who bought a high-end part like a 2080 is more likely to buy another high-end part like the 4080 or 4070 Ti than someone who went for a mid-range model like the 3060 Ti?



Please excuse my bad English.

Currently gaming on a PC with an i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.


Captain_Yuri said:

haxxiy said:

That's why Nvidia, AMD and Intel are all pushing upscaling technologies along with frame generation like DLSS 3. The ceiling of gen-on-gen improvements is fast approaching, and there's only so much you can do on the architectural side with minimal node improvements. But on the software side, the possibilities are endless. They have already fixed a lot of artifacting issues with the DLSS 3 update, and as with DLSS 2, it will only keep getting better until it eventually becomes as widely accepted as DLSS 2 has been.

I agree. I don't know if engines and GPUs will ever move entirely away from current rendering techniques, but adding a neural-statistical model to generate most frames will definitely be the way to go.

(Cue terribly unoptimized games running at 20 fps without it...)