
Forums - PC Discussion - Carzy Zarx’s PC Gaming Emporium - Catch Up on All the Latest PC Gaming Related News

haxxiy said:
JEMC said:

Impressive, but it makes you wonder why the desktop 4080 needs 300W to be 15-20% faster than a 3090Ti. But well, we know that synthetic benchmarks don't represent actual gaming performance.

Since it's closer to a 30% advantage in 3DMark, maintaining 77% of the performance for 55% of the TDP isn't that outlandish. The 3090 Ti itself could pull 88% of the performance for 66% of the power (see here).

In gaming, it'll probably be behind by some 15% still, so it's more comparable to that 300W card in the Tom's Hardware article.

Thanks. But we're still comparing a 175W card on a laptop to a desktop card using 300W, almost twice the power. It's an impressive improvement in efficiency.
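
To make the arithmetic behind those percentages explicit, here's a quick back-of-the-envelope sketch. It only uses the figures quoted in this exchange, so treat it as illustrative rather than as measured data:

```python
# Rough perf-per-watt comparison using only the figures quoted above.

def relative_efficiency(perf_fraction: float, power_fraction: float) -> float:
    """Efficiency relative to the reference card (1.0 = same perf/W)."""
    return perf_fraction / power_fraction

# Laptop RTX 4080: ~77% of the desktop card's performance at ~55% of its TDP (175 W).
print(f"4080 laptop vs. desktop: {relative_efficiency(0.77, 0.55):.2f}x perf/W")    # ~1.40x

# Power-limited 3090 Ti: ~88% of its stock performance at ~66% of its stock power.
print(f"3090 Ti limited vs. stock: {relative_efficiency(0.88, 0.66):.2f}x perf/W")  # ~1.33x
```

In other words, both cards pick up a similar amount of efficiency from running further down the voltage-frequency curve, which is the point about the laptop result not being that outlandish.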



Please excuse my bad English.

Currently gaming on a PC with an i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

JEMC said:

haxxiy said:

Thanks. But we're still comparing a 175W card on a laptop to a desktop card using 300W, almost twice the power. It's an impressive improvement in efficiency.

When you consider that the 3090 Ti had the same efficiency as the Turing GPUs, and even some of the better Pascals, this was a sorely needed improvement.

Makes me wonder if that's what Nvidia is paying a premium to TSMC for with the N4 node (since AMD's improvements in efficiency were more like 30% rather than 60%) or if it's mostly architecture.




haxxiy said:
JEMC said:

Thanks. But we're still comparing a 175W card on a laptop to a desktop card using 300W, almost twice the power. It's an impressive improvement in efficiency.

When you consider that the 3090 Ti had the same efficiency as the Turing GPUs, and even some of the better Pascals, this was a sorely needed improvement.

Makes me wonder if that's what Nvidia is paying a premium to TSMC for with the N4 node (since AMD's improvements in efficiency were more like 30% rather than 60%) or if it's mostly architecture.

If I remember correctly (you follow this news more closely than I do), the node Nvidia used is nothing more than a refined 5nm process, not a "true" new node, if that term can even be used. I can't see that being the only reason for the improved efficiency Ada shows when dialed back to its most efficient point.



Please excuse my bad English.

Currently gaming on a PC with an i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

JEMC said:

haxxiy said:

If I remember correctly (you follow this news more closely than I do), the node Nvidia used is nothing more than a refined 5nm process, not a "true" new node, if that term can even be used. I can't see that being the only reason for the improved efficiency Ada shows when dialed back to its most efficient point.

True, it's the same node density-wise, but Nvidia must be paying extra for something, which might be fine-tuning for their architecture that yields a better frequency-voltage curve. That definitely doesn't entirely explain the better result compared to AMD's, but it might get them halfway there.
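
For what it's worth, the reason a better voltage-frequency curve pays off so much is that dynamic power scales roughly with frequency times voltage squared, so shaving voltage at the same clock gives an outsized power saving. A minimal sketch of that rule of thumb, with made-up numbers purely for illustration:

```python
# Rule-of-thumb dynamic power model: P ~ C * f * V^2 (effective capacitance, clock, voltage).

def dynamic_power(capacitance: float, freq_ghz: float, voltage: float) -> float:
    return capacitance * freq_ghz * voltage ** 2

baseline = dynamic_power(1.0, 2.5, 1.05)  # hypothetical stock operating point
tuned = dynamic_power(1.0, 2.5, 0.95)     # same clock, ~10% lower voltage

print(f"Tuned point draws {tuned / baseline:.0%} of baseline power")  # ~82%
```

So a node (or tuning) that lets the same clocks be reached at slightly lower voltage can plausibly account for a good chunk of an efficiency gap, even before any architectural changes.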



AMD Talks RDNA 4, GPU-Based AI Accelerators, Next-Gen Graphics Pipeline: Promises To Evolve To RDNA 4 With Even Higher Performance In Near Future

https://wccftech.com/amd-talks-rdna-4-gpu-based-ai-accelerators-next-gen-graphics-pipeline-promises-to-evolve-to-rdna-4-with-even-higher-performance-in-near-future/

Certainly sounds interesting, but we'll see if it goes anywhere. The "near future" part is a bit curious, as I wonder what exactly they mean. Near future as in the end of next year, which is the typical GPU cycle, or near future as in sometime this year, which would be an accelerated GPU cycle? I do fear for AMD's driver team if they come out with a new architecture so soon, but if they want to compete against the 4090, it may be the only way. Of course, Nvidia also has the 4090 Ti up its sleeve.

Mechrevo claims GeForce RTX 4070 Laptop GPU is only 11% to 15% faster than RTX 3070

https://videocardz.com/newz/mechrevo-claims-geforce-rtx-4070-laptop-gpu-is-only-11-to-15-faster-than-rtx-3070

Pretty lame if true, but without a proper competitor, these lower-end chips will see small gen-on-gen improvements and relatively higher margins.

Cyberpunk 2077 HD Reworked Project to Launch on March 12th; Creator Promises No Performance Loss

https://wccftech.com/cyberpunk-2077-hd-reworked-project-to-launch-on-march-12th-creator-promises-no-performance-loss/

Exclusive: Tencent scraps plans for VR hardware as metaverse bet falters - sources

https://www.reuters.com/technology/tencent-scraps-plans-vr-hardware-metaverse-bet-falters-sources-2023-02-17/?utm_source=reddit.com




PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850


It's hard to gauge what that "near future" means given that it's a translation, and not all of them are good. Besides, when things like GPU architectures are planned and designed so many years in advance of their actual release, it's easy to talk about the next architecture as something close in time.

I wish them luck with the MDIA feature, but unless Nvidia also adopts it, it won't have an easy time.



Please excuse my bad English.

Currently gaming on a PC with an i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

Radeon N31 vs. GeForce AD102/103, chip area analysis and rough cost estimates

Lots of interesting technical analysis comparing the two architectures. Take the cost analysis with a grain of salt, but short of AMD/Nvidia publicly saying how much the chips cost, which is unlikely, I don't think we'll get a better estimate than his. It is a shame Locuza will be retiring, though. His Twitter analyses were always interesting.

Gigabyte GeForce RTX 4070 GPU Listing Shows 16 GB, 12 GB & 10 GB Variants

https://wccftech.com/gigabyte-geforce-rtx-4070-gpu-listing-shows-16-gb-12-gb-10-gb-variants/

Interesting to say the least. Most likely the 12GB one will be the main variant, the 10GB version will be Asia-only, and the 16GB version will likely never happen.

AMD Ryzen 7 7745HX “Dragon Range” CPU Benchmark Leak: 8 Cores On Par With 16 Core 12900HX

https://wccftech.com/amd-ryzen-7-7745hx-dragon-range-cpu-benchmark-leak-8-cores-on-par-with-16-core-12900hx/

AMD EPYC 9654 ES 96-Core CPU Is As Fast As A LN2-Overclocked Threadripper 5995WX In Cinebench

https://wccftech.com/amd-epyc-9654-es-96-core-cpu-as-fast-as-a-ln2-overclocked-threadripper-5995wx-in-cinebench/




PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

Seems as though this thread has worked fine since the hack/deletion, is that fair to say?

(I'm running through threads to see which are still broken and which aren't.)



Ryuu96 said:

Seems as though this thread has worked fine since the hack/deletion, is that fair to say?

(I'm running through threads to see which are still broken and which aren't.)

Yea no issues here afaik




PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

Yeah, we were lucky that it didn't get f*cked up by the first attack, and the site was more prepared when the second one came and hit it. The team was able to restore it to how it was.

Thanks for checking.

Last edited by JEMC - on 19 February 2023

Please excuse my bad English.

Currently gaming on a PC with an i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.