
Carzy Zarx’s PC Gaming Emporium - Catch Up on All the Latest PC Gaming Related News

numberwang said:

Nvidia clarifies that DLSS 3 frame generation will add latency of 1/2 frame time. That means we are talking about frame interpolation between existing frames, and the newest rendered frame has to be held back for 1/2 frame time. This does not bode well for 30fps games or fast esports titles. It might work for slow games like Flight Simulator running at 45-60fps to get to a stable >60fps. Also, it won't be able to smooth over CPU frame spikes, because you need a rendered frame first before you can inject an interpolated frame.

https://wccftech.com/nvidia-ada-lovelace-follow-up-qa-dlss-3-ser-omm-dmm-and-more/

I am not too worried about the latency aspect, because the point of DLSS is to give you better latency than what your GPU would be able to achieve if you ran the game natively. For example, if you run Cyberpunk at 4K with RT maxed, let's say you can achieve 30fps, which is 33ms natively. Then let's say that with DLSS 3, the AI upscaling + frame interpolation gets you to 60fps with latency at 20ms instead of the 16ms a true native 60fps would give. Well, that 20ms is still lower than 33ms. Obviously, if you have a GPU that is capable of hitting 60fps native, then you don't use DLSS, but DLSS isn't meant to replace native; it's meant to look and feel as close to native as possible in situations where your GPU isn't capable of it. (I'll sketch the arithmetic at the end of this post.)

With esports and competitive games, you never turn on upscaling, whether it's DLSS, FSR or XeSS, because there are always artifacts and ghosting. Plus, those types of games aren't all that demanding anyway. If anything, you should turn on Reflex and enjoy the huge latency reduction compared to running without it.

But yea, overall the point of DLSS 3 is to make games look smoother than they really are. While it won't feel like the low input lag of running the game natively at those framerates, the people who will use it couldn't hit those framerates natively anyway, so what they will experience is lower input latency and smoother frames than they get currently.
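To put that arithmetic in one place, here is a minimal back-of-the-envelope sketch in Python. The 30fps native baseline and the 20ms DLSS 3 figure are the illustrative assumptions from this post, not measurements, and latency is simplified to one frame time:

# Back-of-the-envelope latency comparison using the illustrative numbers above.
# Assumption: latency ~= frame time, and 20ms is a guess for DLSS 3, not a measurement.

def frame_time_ms(fps: float) -> float:
    """Frame time in milliseconds for a given framerate."""
    return 1000.0 / fps

native_fps = 30                               # what the GPU manages natively (4K, RT maxed)
native_latency = frame_time_ms(native_fps)    # ~33.3ms

dlss3_fps = 60                                # displayed framerate with upscaling + frame generation
dlss3_latency = 20.0                          # assumed latency with DLSS 3 (illustrative)
true_60_latency = frame_time_ms(60)           # ~16.7ms, what a real native 60fps would give

print(f"Native {native_fps}fps:    {native_latency:.1f}ms")
print(f"DLSS 3 at {dlss3_fps}fps:  {dlss3_latency:.1f}ms (assumed)")
print(f"True native 60fps: {true_60_latency:.1f}ms")

# The point stands either way: 20ms is worse than a real 60fps, but still better than 33ms.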



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850


Compare DLSS 3 to DLSS 2 as the point of reference.

DLSS 2 super resolution does indeed reduce latency, but DLSS 3 frame interpolation increases it. I don't think I'm going to use this feature. With frame interpolation, a 60fps game (17ms frame time) will look like 120fps but feel like a 40fps game (17+9=26ms frame time). A 30fps game (33ms) will look like a 60fps game but feel like a 20fps game (33+17=50ms).
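As a quick sanity check of that math, here is a minimal Python sketch of the "looks like X fps, feels like Y fps" model, assuming frame generation doubles the displayed framerate and holds the newest rendered frame back for half a frame time (the simplification used above; real end-to-end latency has more components than frame time alone):

def interpolation_feel(base_fps: float) -> tuple[float, float]:
    """Return (displayed fps, 'feels like' fps) under the half-frame hold-back model."""
    frame_time = 1000.0 / base_fps         # ms per rendered frame
    displayed_fps = base_fps * 2           # one interpolated frame per rendered frame
    effective_latency = frame_time * 1.5   # newest frame held back for half a frame time
    return displayed_fps, 1000.0 / effective_latency

for fps in (60, 30):
    shown, felt = interpolation_feel(fps)
    print(f"{fps}fps base -> looks like {shown:.0f}fps, feels like ~{felt:.0f}fps "
          f"({1000.0 / fps * 1.5:.0f}ms)")

# 60fps -> looks like 120fps, feels like ~40fps (25ms; the 26ms above comes from rounding 17+9).
# 30fps -> looks like 60fps, feels like ~20fps (50ms).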



numberwang said:

Compare DLSS 3 to DLSS 2 as the point of reference.

DLSS 2 super resolution does indeed reduce latency, but DLSS 3 frame interpolation increases it. I don't think I'm going to use this feature. With frame interpolation, a 60fps game (17ms frame time) will look like 120fps but feel like a 40fps game (17+9=26ms frame time). A 30fps game (33ms) will look like a 60fps game but feel like a 20fps game (33+17=50ms).

Yea that doesn't make much sense... I'd wait for Digital Foundry before jumping to conclusions.



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

Chazore said:

I love how even on ResetEra's nvidia thread, ppl there shit on Jay and his fear mongering shit lol.

https://www.resetera.com/threads/nvidia-rtx-4090-1600-october-12th-4080-1200-16gb-900-12gb-november-and-dlss-3-announced.634241/page-43

Yea, Jayztwocents has gone off the rails with poor journalism. With his latest video, does he really think that Nvidia, all their board partners and all the power supply makers creating the adapter haven't tested ATX 2.0 PSUs with ATX 3.0 GPUs? If it were a fire hazard, they would not be implementing it, or they would say an ATX 3.0 power supply is required. If anything, because Nvidia has taken a lot more control after all the issues Ampere had with AIB designs, I'd expect there to be fewer issues this time around.

We will see though.



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

Update: We have confirmed with NVIDIA that the 30-cycle spec for the 16-pin connector is the same as it has been for the past 20+ years. The same 30-cycle spec exists for the standard PCIe/ATX 8-pin connector (aka mini-fit Molex). The same connector is used by AMD and all other GPU vendors too so all of those cards also share a 30-cycle life. So in short, nothing has changed for the RTX 40 GPU series.

https://wccftech.com/nvidia-geforce-rtx-40-series-pcie-gen-5-power-adapters-limited-connect-disconnect-life-of-30-cycles/

Looks like the durability of the new 16-pin connector is the same as it's always been for the 8-pin connectors.



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850


Well, well, well. Looks like no one bothered to look at the specs of the old power cables before complaining about the longevity of the new one.



Please excuse my bad English.

Currently gaming on a PC with an i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

I've seen that the estimated cost of a 300 mm 5 nm wafer from TSMC is about $14,000-$15,000 vs. ~$10,000 for a 7 nm process wafer. I don't know how much more expensive Nvidia's 'special' node is, but let's assume it's 10% more and they all have a similar rate of defects.

So there's a huge price increase there, more than 50% per mm². Using the die yield calculator:

$10,000 / 71 viable GA102 per wafer = $140.84 per chip

$15,950 / 75 viable AD102 per wafer = $212.66 per chip

That's for the big ones. The smallest (AD104) yields 170 chips per wafer in this estimate and would cost $93.82 per chip, around 66% of the cost of the GA102 Ampere chip. If the profit margin were the same, the price of the RTX 4080 12 GB should've been $599-$699. On the other hand, the profit margin of the 4090 is lower than the 3090's, as everyone noticed just by eyeballing it.

RTG might have had a good idea before Nvidia for once with the chiplet design. Using standard 5 nm and assuming 250 mm² per chiplet, the cost would be $138.75 for the Navi 31 die vs. $113.63 for Navi 21 (a 22% increase per mm²).

But even with 100 mm² chiplets, the cost should increase another 30% by 2026 over AD102, so yeah.
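For reference, a minimal Python sketch of the cost-per-chip arithmetic above. The wafer prices and good-die counts are this post's estimates (TSMC's actual pricing and yields aren't public), so treat the output as illustrative:

def cost_per_chip(wafer_cost_usd: float, good_dies_per_wafer: int) -> float:
    """Spread the wafer price across the dies that pass yield."""
    return wafer_cost_usd / good_dies_per_wafer

# (wafer cost in USD, estimated good dies per 300 mm wafer) -- figures from this post
estimates = {
    "GA102 (Ampere, $10,000 wafer)":      (10_000, 71),
    "AD102 (Ada, assumed $15,950 wafer)": (15_950, 75),
    "AD104 (Ada, assumed $15,950 wafer)": (15_950, 170),
}

for name, (wafer_cost, good_dies) in estimates.items():
    print(f"{name}: ${cost_per_chip(wafer_cost, good_dies):,.2f} per chip")

# Prints roughly $141, $213 and $94 per chip, in line with the figures above.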


haxxiy said:

I've seen that the estimated cost of a 300 mm 5 nm wafer from TSMC is about $14,000-$15,000 vs. ~$10,000 for a 7 nm process wafer. I don't know how much more expensive Nvidia's 'special' node is, but let's assume it's 10% more and they all have a similar rate of defects.

So there's a huge price increase there, more than 50% per mm². Using the die yield calculator:

$10,000 / 71 viable GA102 per wafer = $140.84 per chip

$15,950 / 75 viable AD102 per wafer = $212.66 per chip

That's for the big ones. The smallest (AD104) yields 170 chips per wafer in this estimate and would cost $93.82 per chip, around 66% of the cost of the GA102 Ampere chip. If the profit margin were the same, the price of the RTX 4080 12 GB should've been $599-$699. On the other hand, the profit margin of the 4090 is lower than the 3090's, as everyone noticed just by eyeballing it.

RTG might have had a good idea before Nvidia for once with the chiplet design. Using standard 5 nm and assuming 250 mm² per chiplet, the cost would be $138.75 for the Navi 31 die vs. $113.63 for Navi 21 (a 22% increase per mm²).

But even with 100 mm² chiplets, the cost should increase another 30% by 2026 over AD102, so yeah.

Hopefully RTG will bring their A game this time around. The goal is wide open and the ball is in their court, so the game is theirs to lose. All they need to do is price it decently, at the very least, and they will gain massive market share. I just hope they aren't too greedy, because if they only price it 10% lower than Nvidia, Nvidia will win another gen unless AMD has feature parity. If AMD can price it 20% lower or more, that's when we could start seeing people switch, imo.



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

By the way, does anyone have experience with curved VA monitors? Are they actually good?




Depends on the panel. Some VA monitors have ghosting and other nonsense.



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850