
Carzy Zarx’s PC Gaming Emporium - Catch Up on All the Latest PC Gaming Related News

So overall, the key GPUs are the 5090 and the 5070 Ti. The 5080 is meh af, so I'd save some money and get a 5070 Ti instead as it's very close to the 5080 while having the same 16GB of VRAM and being $250 cheaper.

The 5070 is pretty good value, but the 12GB of VRAM could hold it back in some games. It would be nice if Nvidia had given it more VRAM, but Jensen needs another alligator jacket.

DLSS 4's Multi-Frame Generation is probably neat, but I don't mind not having it. The bigger thing is DLSS moving to the Transformer model. The key thing here is that while the new model is supported even on Turing GPUs, it needs more computational power, so it will be interesting to see the performance cost versus the image quality improvements.

Reflex 2 looks hella sick and is coming to all RTX GPUs. While the original Reflex is still supported on pre-RTX GPUs, Reflex 2 is RTX exclusive.

Neural texture compression sounds like it's going to be the next big thing, and it's already being implemented as part of DirectX. But it will certainly take some time before game devs use it.

Intel's CPU division and AMD's GPU division are in big trouble if they continue their shitty ways. Even if the 9070 XT has 16GB of VRAM and is 5% faster in raster than the 5070, the market won't care unless it's $449 imo. Radeon needs to make sure they don't repeat RDNA 3's horrible, poorly reviewed launch. Intel needs to get their shit together in their CPU department in general.

The 5090's TGP being 575 watts is nuts, and the cooler being only two slots and 304mm long is kinda crazy. Hopefully it actually cools well.

Last edited by Jizz_Beard_thePirate - 21 hours ago

                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

sc94597 said:

I personally don't mind Nvidia marketing based on results rather than rasterized workloads. Going forward we're going to see more and more NeRF-assisted graphics. I wouldn't be surprised if, by the end of this decade, rasterization becomes a very small part of game graphics.

You end up with apples-vs.-oranges comparisons, however: more latency and residual artifacts in the case of DLSS 4 vs. DLSS 3, or lower accuracy and stability in the case of FP4 vs. FP8 for deep learning.

I'd like to see something like ultra-performance DLSS 3 vs. performance DLSS 4 side by side instead, since you'd start with a more or less similar number of rasterized pixels.

Jizz_Beard_thePirate said:

The 5090's TGP being 575 watts is nuts, and the cooler being only two slots and 304mm long is kinda crazy. Hopefully it actually cools well.

And that's with a lower boost clock too (2410 vs. 2520 MHz). I wonder if the 4 nm node was maxed out years ago in terms of efficiency or if GDDR7 VRAM consumes that much more power - maybe it's a bit of both.
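
Back-of-the-envelope on those numbers, a rough Python sketch (the 4090's 450 W TGP and 2520 MHz boost are assumed here as the reference point, since only the clocks are quoted above):

```python
# Quick arithmetic on the quoted specs (4090 figures assumed for reference).
clock_5090, clock_4090 = 2410, 2520   # boost clock, MHz
tgp_5090, tgp_4090 = 575, 450         # TGP, watts

print(f"Boost clock: {100 * (clock_5090 / clock_4090 - 1):+.1f}%")  # ~ -4.4%
print(f"TGP:         {100 * (tgp_5090 / tgp_4090 - 1):+.1f}%")      # ~ +27.8%
```

So roughly 28% more board power at a slightly lower clock - presumably most of the extra budget goes to the much bigger die and the GDDR7.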

Jizz_Beard_thePirate said:

So overall, the key GPUs are the 5090 and the 5070 Ti. The 5080 is meh af, so I'd save some money and get a 5070 Ti instead as it's very close to the 5080 while having the same 16GB of VRAM and being $250 cheaper.

The 5070 is pretty good value, but the 12GB of VRAM could hold it back in some games.

Exactly my thoughts.

Jizz_Beard_thePirate said:

DLSS 4's Multi-Frame Generation is probably neat, but I don't mind not having it. The bigger thing is DLSS moving to the Transformer model. The key thing here is that while the new model is supported even on Turing GPUs, it needs more computational power, so it will be interesting to see the performance cost versus the image quality improvements.

I'm not a big fan of frame generation so far. I tried it on many supported games, but turned it off in most cases due to issues. Especially the areas around subtitles, HUD elements, or hair strands often have annoying image artifacts. Maybe it gets better with the frame generation improvements on RTX 50 GPUs, maybe it gets worse with three fake frames instead of one between two rendered frames.

Multi-frame generation is probably nice on 240 Hz monitors (and above). But on my 120 Hz displays (which I'm totally happy with and won't replace for many years), 120 frames in total would mean being limited to 30 correctly rendered frames with 4x MFG and 40 correctly rendered frames with 3x MFG. That doesn't sound like a good gaming experience.
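
To put rough numbers on that, here's a minimal sketch of the arithmetic (assuming output is capped at the display's refresh rate and ignoring frame-pacing overhead):

```python
# Rendered (non-generated) framerate when total output is capped at the display refresh.
# The MFG factor counts total displayed frames per rendered frame:
# 2x = 1 generated frame, 3x = 2 generated, 4x = 3 generated.
def rendered_fps(display_hz: int, mfg_factor: int) -> float:
    return display_hz / mfg_factor

for factor in (2, 3, 4):
    print(f"120 Hz display, {factor}x MFG -> {rendered_fps(120, factor):.0f} rendered fps")
# 2x -> 60, 3x -> 40, 4x -> 30 rendered fps
```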

I'm happy that all the other improvements (better supersampling, better ray reconstruction, better Reflex) also come to the older RTX GPUs.



Jizz_Beard_thePirate said:

Da hell is this jacket Jensen

Bruh is he doing what I think he's doing...



Step right up come on in, feel the buzz in your veins, I'm like a chemical electrical right into your brain and I'm the one who killed the Radio, soon you'll all see

So pay up motherfuckers you belong to "V"

The games for Humble Choice January 2025 have been revealed.

  • Against the Storm
  • Beneath Oresa
  • Blasphemous 2
  • Boxes
  • Dordogne
  • Fort Solis
  • Jagged Alliance 3
  • The Pegasus Expedition



I thought all these adapters did was offer the shape of an optical drive and pass SATA data and power through to an HDD or SSD. I've never used one of these though, so I can't help.

Last edited by Chicho - 19 hours ago

While I could get a 5090, I'm not willing to spend that much on a GPU. The PSU I started using only a week ago also isn't strong enough.
The 5080 is looking like the most appealing upgrade from my 3070.
The 5070 Ti looks nice, but I'm willing to invest a little more to get a 5080 - I did the same with the CPU and picked up a 9800X3D. I'm not going to splurge on a CPU while skimping on the more important GPU.

I'll wait for reviews, benchmarks, etc. first. The price will likely go up too, so I may change my mind.

I do wonder how well my old PC with the i7-4790K, which I replaced last week, would have run with a 5090. I suppose I could have used the money I spent on a new PC to get the 5090 instead, but a 5090 paired with an i7-4790K seems pretty nuts.



My mistake for taking Nvidia's word at face value; it appears frame gen does still add latency (11:12 in the linked video).

Guess I'll keep avoiding it, then. Still looking forward to the rest, though.



Nvidia is a special kind of slimy. I'm against FAKE FPS.



CPU: Ryzen 9950X
GPU: MSI 4090 SUPRIM X 24G
Motherboard: MSI MEG X670E GODLIKE
RAM: CORSAIR DOMINATOR PLATINUM 32GB DDR5
SSD: Kingston FURY Renegade 4TB
Gaming Console: PLAYSTATION 5 Pro
haxxiy said:
sc94597 said:

I personally don't mind Nvidia marketing based on results rather than rasterized workloads. Going forward we're going to see more and more NeRF-assisted graphics. I wouldn't be surprised if, by the end of this decade, rasterization becomes a very small part of game graphics.

You end up with apples-vs.-oranges comparisons, however: more latency and residual artifacts in the case of DLSS 4 vs. DLSS 3, or lower accuracy and stability in the case of FP4 vs. FP8 for deep learning.

I'd like to see something like ultra-performance DLSS 3 vs. performance DLSS 4 side by side instead, since you'd start with a more or less similar number of rasterized pixels.

For models that work fine with FP4 precision (or a Mixture of Formats Quantization), it is a real performance increase though. Deep learning isn't scientific computing/simulation/engineering*, where precision is always critical. There are many use cases where saving on floating-point precision still gives good, accurate results, while performance (speed and/or accuracy) increases significantly because you can scale to a higher parameter count than you otherwise could (which is one of the scaling laws of transformers). Often a quantized higher-parameter model performs better than a non-quantized lower-parameter one. If Nvidia is able to take advantage of this for NeRF-assisted workloads (which they have been already), then it also means better performance in games with better results than otherwise.
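
As a toy illustration of that trade-off, here's a minimal numpy sketch comparing uniform 4-bit vs. 8-bit quantization of a random weight tensor (this is not how Blackwell's FP4 path or MoFQ actually works - real low-precision formats use per-block scaling and a floating-point grid - it just shows that fewer bits cost some accuracy while cutting memory per parameter):

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.02, size=100_000).astype(np.float32)  # toy weight tensor

def fake_quantize(x: np.ndarray, bits: int) -> np.ndarray:
    """Symmetric uniform quantization to a 2**bits grid, then dequantize."""
    levels = 2 ** (bits - 1) - 1          # 127 for 8-bit, 7 for 4-bit
    scale = np.abs(x).max() / levels
    return np.round(x / scale) * scale

for bits in (8, 4):
    err = np.abs(fake_quantize(weights, bits) - weights).mean() / np.abs(weights).mean()
    print(f"{bits}-bit: mean relative error ~{err:.1%}, memory vs FP32 ~{bits / 32:.0%}")
```

The point being: halving the bits halves the memory and bandwidth per parameter, which is what lets a quantized but larger model fit in the same budget and often come out ahead.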

What this all does is turn what was purely a hardware problem (Moore's law decelerating) into a hybrid hardware-software problem, where building new DL models can assist with the graphics pipeline. Rasterized performance becomes a less useful metric over time, because more of the compute is spent on non-rasterized workloads that contribute to the final result.

Since these DL models are improving at a faster rate than hardware, the end result is that we get much better results for less money than if these companies poured all of their resources into solving the hardware problem alone (assuming there is no hardware paradigm shift on the immediate horizon, like graphene or photonics).


*Except where DL models are used for those, in which case it is.

Edit: This is probably also why Nvidia moved to ViTs (Vision Transformers) from CNNs. They scale much better with parameter count and dataset size, allowing Nvidia to better utilize the tensor cores, which have been somewhat underutilized for gaming so far.

Last edited by sc94597 - 10 hours ago