
Forums - PC Discussion - Carzy Zarx’s PC Gaming Emporium - Catch Up on All the Latest PC Gaming Related News

Bofferbrauer2 said:
Captain_Yuri said:

I am not sure where you are getting the "visual upgrade is small" claim when you have plenty of people, including Hardware Unboxed themselves, saying the visual upgrade is noticeable. If you think this is a small visual upgrade:

Then idk what to tell you other than to get your eyes checked. SSAA isn't even in the same league as the type of visual upgrade that Ray Tracing offered even back then. Even if you have something like Ray Tracing reflections on and the rest of the effects off, it offers a noticeable visual upgrade. And DLSS, which you keep ignoring, lessens the performance hit by quite a bit. The idea that very few games use RT and therefore it shouldn't matter is nonsense, because not only will that number increase significantly now that the consoles have it, but a number of the major titles that released this holiday season have some form of Ray Tracing on both console and PC.

The reason why people are "Using Raytracing right now as the main value to define how good a GPU is" is because in raster performance, both Nvidia and AMD GPUs are within single-digit percentages of each other in the majority of games, in which they trade blows. And when they are that close, but AMD GPUs have a worse Ray Tracing implementation than even the 2000 series, it 110% matters a lot. Because when you are spending money on a GPU where the raster performance is so close but AMD has potato RT performance, it would make sense for a lot of people to go with Nvidia and be on the cutting edge of technology. The Intel comparison that you made is nonsense.

PC is about options. If you want to not have options, play on a console... Consumers should be able to choose on which games they want to prioritize frame rate like Call of Duty or which games they want to prioritize Visuals like Minecraft RTX. It shouldn't be up to the reviewer to decide for them. Ray Tracing is a feature that is available now on a growing number of titles. It's an Option. And when people are spending $600+ on a GPU, they should have the Option to experience cutting edge visuals even if there's a performance hit.

I didn't say it's small, I said it's too small compared to the performance hit. That doesn't make the visual gains small, it just means it totally tanks your framerate when doing so.

Since you came with Minecraft RTX, let's have a look at it, shall we?

With a 2080 Super without RTX enabled you can load 96 chunks at 180 fps on average. Turn on RTX and you're down to only 8 chunks and 43 fps. Even with DLSS, which raises the fps to 71 at 8 chunks, we're still way lower and with a much, much shorter draw distance. A 3080 certainly does a bit better, but will still be lightyears behind the framerates and draw distances achieved without raytracing. And I simply can't agree that the visual upgrade is worth losing over 60% of performance even with DLSS turned on (over 75% without it), on top of the far shorter draw distance. And Minecraft is the only game so far where I consider the raytracing really meaningful (maybe Control also, but haven't checked that one yet tbf).
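[Editor's note: for reference, the relative fps drops implied by the figures quoted above (2080 Super in Minecraft RTX) work out as follows. This is just arithmetic on the quoted numbers and ignores the 96-to-8 chunk draw-distance reduction:]

```python
# Relative performance loss from the fps figures quoted above.
def perf_loss(base_fps: float, new_fps: float) -> float:
    """Performance loss as a percentage of the baseline frame rate."""
    return (base_fps - new_fps) / base_fps * 100

base = 180       # no RTX, 96 chunks
rtx = 43         # RTX on, 8 chunks
rtx_dlss = 71    # RTX on + DLSS, 8 chunks

print(f"RTX only:   {perf_loss(base, rtx):.1f}% slower")      # ~76.1%
print(f"RTX + DLSS: {perf_loss(base, rtx_dlss):.1f}% slower") # ~60.6%
```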

I do agree it looks beautiful. But just check the beginning of your DF video when he shows off the house, and look at the tree behind it: it's already partly in the mist from the short draw distance. It seriously reminded me of the N64 version of Turok in that regard. I simply can't agree that Raytracing is worth this massive tradeoff. Hence why I compared it to 8x SSAA, which also wasn't worth the tradeoff despite looking really good at the time.

I mean, yes, you are sacrificing draw distance and frame rate in Minecraft, but the game in its entirety looks completely different. It looks like a generational leap in visuals if I've ever seen one, while still staying above 60 fps if you have a GPU powerful enough.

But the key here isn't whether or not you or I agree with the visual advantages vs performance hit as everyone would have a different opinion. The key here is whether or not all the information is being presented to the consumer from the reviewer which Hardware Unboxed was not doing. Raster is no longer the only metric to measure in every game nor is it the only deciding factor for purchasing a GPU when the Raster performance is so close.



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

Captain_Yuri said:

The irony of using Intel in this comparison is that AMD's reliance on Raster is Similar to Intel's reliance on 14nm. They are both old technologies... The other funny part is that Intel is downplaying new technologies like AMD's Chiplet approach similar to how AMD is downplaying new technologies like Nvidia's Ray Tracing/DLSS.

If anything, AMD's GPU division is more like Intel than anyone else.

What are you talking about? While not up to par with Nvidia's Ampere, AMD has brought ray tracing hardware to its GPUs and is keen to use it, while at the same time not compromising raster performance. After all, both technologies will live together for a few more years, because RT won't become the norm until its implementation no longer brings such a big performance loss, and for that to happen you need new and more powerful hardware. Judging AMD's stance on RT just on their first hardware iteration is, and I'm sorry to say this, short-sighted.

And regarding their stance on DLSS, well, maybe their decision to split their architectures and remove the GPGPU qualities from their gaming cards has come at the wrong time, and maybe they'll have to bring those back in the future for their own version of that tech. We don't know. What we do know is that they're letting MSoft do the work with DirectML.

In any case, hardware architectures are designed many years in advance of the time the product finally reaches the market. Just because 5 years ago (to pick a made-up date) AMD didn't take an upscaling tech into account when laying out their Navi products, that doesn't mean they won't do it in future parts.



Please excuse my bad English.

Currently gaming on a PC with an i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

JEMC said:

What are you talking about? While not up to par with Nvidia's Ampere, AMD has brought ray tracing hardware to its GPUs and is keen to use it, while at the same time not compromising raster performance. After all, both technologies will live together for a few more years, because RT won't become the norm until its implementation no longer brings such a big performance loss, and for that to happen you need new and more powerful hardware. Judging AMD's stance on RT just on their first hardware iteration is, and I'm sorry to say this, short-sighted.

And regarding their stance on DLSS, well, maybe their decision to split their architectures and remove the GPGPU qualities from their gaming cards has come at the wrong time, and maybe they'll have to bring those back in the future for their own version of that tech. We don't know. What we do know is that they're letting MSoft do the work with DirectML.

In any case, hardware architectures are designed many years in advance of the time the product finally reaches the market. Just because 5 years ago (to pick a made-up date) AMD didn't take an upscaling tech into account when laying out their Navi products, that doesn't mean they won't do it in future parts.

It's not even on par with Nvidia's Turing, let alone Ampere, and I don't see them being very "keen on using it" this generation of GPUs considering they even left it out of their main press event and have been very quiet on the whole subject ever since. Heck, go to their website; it's not even easy to find any information on Ray Tracing. Like, yes, RDNA 2 "can do RT", but we have seen how bad it is outside of basic implementations such as shadows. Sure, maybe in the future with RDNA 3 or RDNA 4 or whenever they have a viable solution, but we aren't talking about future iterations of GPUs, we are talking about the current iteration.



                  


Marth said:
Captain_Yuri said:

I mean, yes, you are sacrificing draw distance and frame rate in Minecraft, but the game in its entirety looks completely different. It looks like a generational leap in visuals if I've ever seen one, while still staying above 60 fps if you have a GPU powerful enough.

But the key here isn't whether or not you or I agree with the visual advantages vs performance hit as everyone would have a different opinion. The key here is whether or not all the information is being presented to the consumer from the reviewer which Hardware Unboxed was not doing. Raster is no longer the only metric to measure in every game nor is it the only deciding factor for purchasing a GPU when the Raster performance is so close.

I don't get why raytracing gets compared with vanilla when we've had shader-modded Minecraft for years. Vanilla lighting has been outdated since like... 2011?

Idk, it's what most people compare it to, but you can find RT examples in other games if you watch DF videos. The funny part is that, according to DF, putting SSR at its highest setting in Cyberpunk has the same level of performance decrease as enabling RT reflections on Ampere, while having worse visuals.



                  


Captain_Yuri said:

It's not even on par with Nvidia's Turing, let alone Ampere, and I don't see them being very "keen on using it" this generation of GPUs considering they even left it out of their main press event and have been very quiet on the whole subject ever since. Heck, go to their website; it's not even easy to find any information on Ray Tracing. Like, yes, RDNA 2 "can do RT", but we have seen how bad it is outside of basic implementations such as shadows. Sure, maybe in the future with RDNA 3 or RDNA 4 or whenever they have a viable solution, but we aren't talking about future iterations of GPUs, we are talking about the current iteration.

I'd say that Navi 2x is roughly on par with Turing and, given that the drivers still have to mature, there's still room for improvement.

And you can't really be surprised that they don't highlight their RT performance. Of course they don't! They're behind Nvidia in that aspect, so it's no wonder they try to put it aside so it doesn't get in the way of highlighting their wins. I haven't checked their site, but it wouldn't surprise me if they also focus on 1440p performance for the same reason.





Looking forward to how AMD is gonna manage to improve both raster and RT performance with RDNA 3 without a smaller process while still keeping up with Nvidia. Next gen will most likely see SAM across the board, which currently is the only saving grace for AMD.

I guess they'll have to scrap their cache and use actually fast memory to make some room.



If you demand respect or gratitude for your volunteer work, you're doing volunteering wrong.



                  


Yo, that was one of my fav cards. Good old 750 ti. I put it into my old pc and it got a lot of playtime.

Spiderman and me are homies.



Hexus joins the fun!

*** NEW CONTEST ***

HEXUS EPIC Giveaway Day 1: Win an AMD Ryzen 7 5800X https://hexus.net/tech/features/cpu/147096-day-1-win-amd-ryzen-7-5800x/

This contest is GLOBAL.




vivster said:

Looking forward to how AMD is gonna manage to improve both raster and RT performance with RDNA 3 without a smaller process while still keeping up with Nvidia. Next gen will most likely see SAM across the board, which currently is the only saving grace for AMD.

I guess they'll have to scrap their cache and use actually fast memory to make some room.

I'm not so sure about AMD staying at the same node. TSMC's 5nm will be available when the next gen of cards launches, and I think both AMD and Nvidia will use it.

But I do hope AMD goes for a bigger memory bandwidth. With cache or not, 256-bit is NOT enough.
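[Editor's note: the 256-bit complaint comes down to simple peak-bandwidth math: bus width in bits, divided by 8, times the per-pin data rate. A quick sketch, using commonly cited per-pin rates for that generation of cards as illustrative examples:]

```python
# Back-of-the-envelope peak memory bandwidth.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: (pins * per-pin Gbps) / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

# 256-bit GDDR6 at 16 Gbps (the RDNA 2 configuration under discussion)
print(bandwidth_gb_s(256, 16))   # 512.0 GB/s
# 320-bit GDDR6X at 19 Gbps (e.g. an RTX 3080), for comparison
print(bandwidth_gb_s(320, 19))   # 760.0 GB/s
```

The Infinity Cache is AMD's way of papering over that raw-bandwidth gap, which is exactly what the preceding posts are debating.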


