
Forums - PC Discussion - Carzy Zarx’s PC Gaming Emporium - Catch Up on All the Latest PC Gaming Related News

WOW! I did not realize the disparity in ray-traced reflections between console and PC.

What kind of RTX cards were being used for testing? I'm sure my 2080 shouldn't have too much of a problem, although I probably won't crank it up to Ultra High lol.



You called down the thunder, now reap the whirlwind

gtotheunit91 said:

WOW! I did not realize the disparity in ray-traced reflections between console and PC.

What kind of RTX cards were being used for testing? I'm sure my 2080 shouldn't have too much of a problem, although I probably won't crank it up to Ultra High lol.

Well, it depends on the resolution, but he was using a 3090. The main thing this time around is the CPU, because as you crank those settings up, the CPU starts becoming the bottleneck.

While you can help your GPU by using DLSS and such while keeping the settings up, your CPU will still struggle with everything cranked unless you have a high-end CPU like a Ryzen 5000 series or newer.



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

Captain_Yuri said:
gtotheunit91 said:

WOW! I did not realize the disparity in ray-traced reflections between console and PC.

What kind of RTX cards were being used for testing? I'm sure my 2080 shouldn't have too much of a problem, although I probably won't crank it up to Ultra High lol.

Well, it depends on the resolution, but he was using a 3090. The main thing this time around is the CPU, because as you crank those settings up, the CPU starts becoming the bottleneck.

While you can help your GPU by using DLSS and such while keeping the settings up, your CPU will still struggle with everything cranked unless you have a high-end CPU like a Ryzen 5000 series or newer.

Really? Interesting. I currently have a Ryzen 7 2700X, but I'm planning on doing a new build at the beginning of 2023. I'm hoping to pair a 3080 with the next-gen Ryzen 7000 series. By that time, I'm hoping GPU scarcity is all but alleviated and CPUs are completely available.

God of War did receive a lot of patches post-launch, so I'm wondering if Nixxes will continue to optimize the game to the point where older CPUs aren't quite as much of a bottleneck as they are at launch.



You called down the thunder, now reap the whirlwind

gtotheunit91 said:
Captain_Yuri said:

Well, it depends on the resolution, but he was using a 3090. The main thing this time around is the CPU, because as you crank those settings up, the CPU starts becoming the bottleneck.

While you can help your GPU by using DLSS and such while keeping the settings up, your CPU will still struggle with everything cranked unless you have a high-end CPU like a Ryzen 5000 series or newer.

Really? Interesting. I currently have a Ryzen 7 2700X, but I'm planning on doing a new build at the beginning of 2023. I'm hoping to pair a 3080 with the next-gen Ryzen 7000 series. By that time, I'm hoping GPU scarcity is all but alleviated and CPUs are completely available.

God of War did receive a lot of patches post-launch, so I'm wondering if Nixxes will continue to optimize the game to the point where older CPUs aren't quite as much of a bottleneck as they are at launch.

Yea, with that CPU you will need to turn down some of the ray tracing settings. Not all of them though, so you should still get a better visual experience than on a PS5.



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

Captain_Yuri said:

Yuzu (Switch emulator) New Feature Release - Installer for Linux!

https://yuzu-emu.org/entry/yuzu-linux-installer/

That's great news for Steam Deck owners.



Please excuse my bad English.

Currently gaming on a PC with an i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.


Full RDNA 3 specs leak. Lots of interesting info but take it with a grain of salt as always:


OREO

"One of the features in the RDNA 3 graphics pipeline is OREO: Opaque Random Export Order, which is just one of the many area saving techniques. With gfx10, the pixel shaders run out-of-order, where the outputs go into a Re-Order Buffer before moving to the rest of the pipeline in-order. "

Infinity Cache Updates

"The Memory Attached Last Level (MALL) Cache blocks are each halved in size, doubling the number of banks for the same cache amount. There are also changes and additions that increase graphics to MALL bandwidth and reduce the penalty of going out to VRAM."

Navi 31

  • gfx1100 (Plum Bonito)

  • Chiplet - 1x GCD + 6x MCD (0-hi or 1-hi)

  • 48 WGP (96 legacy CUs, 12288 ALUs)

  • 6 Shader Engines / 12 Shader Arrays

  • Infinity Cache 96MB (0-hi), 192MB (1-hi)

  • 384-bit GDDR6

  • GCD on TSMC N5, ~308 mm²

  • MCD on TSMC N6, ~37.5 mm²


"The world’s first chiplet GPU, Navi31 makes use of TSMC’s fanout technology (InFO_oS) to lower costs, surrounding a central 48 WGP Graphics Chiplet Die (GCD) with 6 Memory Chiplet Dies (MCD), each containing 16MB of Infinity Cache and the GDDR6 controllers with 64-bit wide PHYs. The organic fanout layer has a 35-micron bump pitch, the densest available in the industry. There is a 3D stacked MCD also being productized (1-hi) using TSMC’s SoIC. While this doubles the Infinity Cache available, the performance benefit is limited given the cost increase. Thus, the main Navi31 SKU will have 96MB of Infinity Cache (0-hi). This is lower than the 128MB in Navi21. A cut-down SKU will offer 42 WGP and 5x MCD (80MB Cache, 320-bit GDDR6).

The reference card appears to have an updated 3-fan design that is slightly taller than the previous generation, with a distinctive 3 red stripe accent on a section of the heatsink fins near the dual 8-pin connectors.

There were early plans for a version with 288MB of Infinity Cache (2-hi), but this was shelved as the cost-benefit was not worth it."

"Navi32

  • gfx1101 (Wheat Nas)

  • Chiplet - 1x GCD + 4x MCD (0-hi)

  • 30 WGP (60 legacy CUs, 7680 ALUs)

  • 3 Shader Engines / 6 Shader Arrays

  • Infinity Cache 64MB (0-hi)

  • 256-bit GDDR6

  • GCD on TSMC N5, ~200 mm²

  • MCD on TSMC N6, ~37.5 mm²

Coming in 2023, Navi32 is a smaller version of Navi31, reusing the same MCDs. Navi32 will also be coming to mobile as a high-end GPU offering in AMD Advantage laptops. There were plans for a 128MB (1-hi) version, however it might not be productized due to the aforementioned costs. Thus Navi32’s 64MB is also smaller than Navi22’s 96MB."

"Navi33

  • gfx1102 (Hotpink Bonefish)

  • Monolithic

  • 16 WGP (32 legacy CUs, 4096 ALUs)

  • 2 Shader Engines / 4 Shader Arrays

  • Infinity Cache 32MB

  • 128-bit GDDR6

  • TSMC N6, ~203 mm²

Navi33 is the mobile-first push for AMD. They expect robust sales of AMD Advantage laptops with it, as the design is drop-in compatible with Navi23 PCBs, minimizing OEM board re-spin headaches. They aim to ship more Navi33 silicon for mobile than to desktop AIB cards. The first concepts showed Navi33 as a chiplet design with 18 WGP and 2x MCD, but this could not meet the volume and cost structure of this class of GPU vs a monolithic design.

As an aside, Navi33 outperforms Intel’s top end Alchemist GPU while being less than half the cost to make and pulling less power."
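The leaked numbers are at least internally consistent. Assuming the usual RDNA layout (1 WGP = 2 legacy CUs, 128 ALUs per CU, so 256 ALUs per WGP) and 16MB of Infinity Cache per MCD, a quick cross-check:

```python
# Cross-checking the leaked specs against RDNA's WGP/CU/ALU arithmetic.
# Navi33 is monolithic, so its "mcd" entry is just 16MB slices of cache.
chips = {
    "Navi31": {"wgp": 48, "mcd": 6},
    "Navi32": {"wgp": 30, "mcd": 4},
    "Navi33": {"wgp": 16, "mcd": 2},
}
for name, c in chips.items():
    cus = c["wgp"] * 2          # 2 legacy CUs per WGP
    alus = c["wgp"] * 256       # 256 ALUs per WGP
    cache_mb = c["mcd"] * 16    # 16MB Infinity Cache per slice
    print(f"{name}: {cus} CUs, {alus} ALUs, {cache_mb}MB Infinity Cache")
```

That reproduces exactly the 96/60/32 CU, 12288/7680/4096 ALU and 96/64/32MB figures quoted above, for what that's worth.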

Last edited by Jizz_Beard_thePirate - on 12 August 2022

                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

I'm not going to pretend that I understand half the things said in that article. But anyway:

I was going to ask if the site is credible, but given how new it appears to be, the answer is no. Maybe if we knew who's behind it things would change but, for now, I'd take it with huge amounts of salt.

That said, the Infinity Cache sizes are quite interesting (and easier to understand). It looks like, while the cache does work and helps in certain scenarios, fast access to VRAM is still necessary, which would be why they reduced the IC but bumped the memory bus width compared to Navi 2x. This, if true, could help the cards perform better at higher resolutions, where it was obvious that the memory configuration hindered their real performance, something we all saw with the xx50 cards, which, with just slightly better memory, offered better performance.
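A crude way to see that trade-off in numbers (all the figures below are completely made up, just to illustrate the idea):

```python
# Back-of-the-envelope: effective bandwidth is a hit-rate-weighted mix
# of cache bandwidth and VRAM bandwidth. At higher resolutions the
# working set outgrows the cache, the hit rate drops, and raw VRAM
# bandwidth matters more than cache size.
def effective_bw(hit_rate, cache_bw, vram_bw):
    return hit_rate * cache_bw + (1 - hit_rate) * vram_bw

print(effective_bw(0.45, 2000, 512))   # 4K-ish hit rate, narrow bus
print(effective_bw(0.45, 2000, 960))   # same hit rate, wider/faster bus
```

At a low hit rate, widening the bus moves the needle far more than the same spend on extra cache would.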

In any case, I'd wait for other sources to start guessing its performance.



Please excuse my bad English.

Currently gaming on a PC with an i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

Yea, that's the main takeaway. I probably should have waited for the VideoCardz article, as they lay things out a lot more cleanly.

Essentially, if we look at the rumoured 4090 vs the 7900XT:

4090 16384 CUDA cores vs 7900XT 12288 Stream Processors
4090 72MB of L2 cache vs 7900XT 96MB of L3 cache
4090 24 Gbps memory speed vs 7900XT 18 Gbps memory speed
Lovelace TSMC N4 vs RDNA 3 TSMC 5nm + 6nm

Now obviously, none of the specs I listed above are indicators of real-world performance. But I think it's going to play out as 4090 > 7900XT > 4080 ~ 7800XT in raster.
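One thing worth spelling out: Gbps per pin is memory speed, and bandwidth also depends on bus width. With both rumoured cards on a 384-bit bus, the quick math is:

```python
# GB/s = (Gbps per pin * bus width in bits) / 8 bits per byte
def vram_bandwidth(gbps, bus_bits):
    return gbps * bus_bits / 8

print(vram_bandwidth(24, 384))  # rumoured 4090: 1152.0 GB/s
print(vram_bandwidth(18, 384))  # rumoured 7900XT: 864.0 GB/s
```

So Nvidia would have a raw VRAM bandwidth edge even before counting cache differences.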

Last edited by Jizz_Beard_thePirate - on 12 August 2022

                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

Captain_Yuri said:

Yea, that's the main takeaway. I probably should have waited for the VideoCardz article, as they lay things out a lot more cleanly.

*pic*

Essentially, if we look at the rumoured 4090 vs the 7900XT:

4090 16384 CUDA cores vs 7900XT 12288 Stream Processors
4090 72MB of L2 cache vs 7900XT 96MB of L3 cache
4090 24 Gbps memory speed vs 7900XT 18 Gbps memory speed
Lovelace TSMC N4 vs RDNA 3 TSMC 5nm + 6nm

Now obviously, none of the specs I listed above are indicators of real-world performance. But I think it's going to play out as 4090 > 7900XT > 4080 ~ 7800XT in raster.

Hey, if you're going to compare different cards and architectures, at least make it complete by bringing in the "old" parts:

4090 16384 CUDA cores vs 3090 10496 CUDA cores  =>  a 56% increase

7900XT 12288 Stream Processors vs 6900XT 5120 Stream Processors  => a 140% increase (!)

By that alone, the jump AMD is going to achieve should put it well ahead. Right? ... Oh, if only things were that easy.

*Edit* Seriously though, if AMD manages to pull off such a ridiculous increase in Stream Processors AND they aren't left unused by some ridiculous bottleneck like with Vega, it will be nothing short of impressive.
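For anyone who wants to check those percentages, the arithmetic is just:

```python
# Relative increase in shader count, as a rounded percentage
def pct_increase(new, old):
    return round((new / old - 1) * 100)

print(pct_increase(16384, 10496))  # 4090 vs 3090 -> 56
print(pct_increase(12288, 5120))   # 7900XT vs 6900XT -> 140
```

Of course, a 2.4x unit count has never once translated into 2.4x performance, which is the whole point about Vega-style bottlenecks.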

Last edited by JEMC - on 12 August 2022

Please excuse my bad English.

Currently gaming on a PC with an i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

Another interesting note, if it's legit:

"Chiplet - 1x GCD + 6x MCD (0-hi or 1-hi)."
"There is a 3D stacked MCD also being productized (1-hi) using TSMC’s SoIC."
"There were early plans for a version with 288MB of Infinity Cache (2-hi), but this was shelved as the cost-benefit was not worth it."

So if Navi 31 launches with 96MB of L3 cache, we may see Radeon take a page from Zen's book and launch a Navi 31 Refresh with 192MB of V-Cache.



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850