Captain_Yuri said:
Pemalite said:

I don't disagree that nVidia has some solid features, but let's be real here... It's not like games look ugly or are unplayable on Radeon... They are just icing on top of the cake.

They aren't unplayable/ugly, they are just largely what they look like on consoles since PS5/Series X can crank out close to ultra Raster settings and have the same upscaling tech. With Nvidia, you simply get better visuals if you have a good enough RT card, significantly better upscaling with DLSS and significantly lower latency with Reflex.

Nowhere near it. Radeon on PC allows you to run with Ultra settings, unlike the PlayStation 5/Series X.
Many Series X/PlayStation 5 games run with mostly high settings. - I do own every platform.

You can also Super Sample on a Radeon+PC, which is difficult to do on a console as you don't have control over resolution... Nor do consoles generally run at native resolution anyway.

You aren't getting a different experience running with nVidia unless you start leveraging DLSS... You will obviously get higher framerates.

But otherwise, it's not that different. (Again, I own both AMD and nVidia hardware.)

Captain_Yuri said:

AMD does open source because they have to due to being low market share. If they locked FSR or Freesync to Radeon hardware, almost no one would implement them without getting heaps of money. And yes, Nvidia does implement locked features or pay devs to utilize their GPUs to put a leg up against the competition, but so does AMD to a degree, such as with Far Cry 6 and its copious amounts of VRAM even though it looks like a last gen title.

Open Source is factually better from an industry adoption and support perspective which means it's better for the consumer.

AMD has historically always leveraged open-source approaches... I know you don't care what happened 10+ years ago, as you have alluded to, but even back then AMD was leveraging open source to build compute into the Radeon X1900 XT back in 2005-2007...

And that was when AMD and nVidia had almost 50/50 marketshare.
So your argument doesn't hold any weight anyway.




Captain_Yuri said:

FSR has its place but it's not what I'd consider to be a selling point like what DLSS is, which is the main issue. Like, people that are looking to buy a new GPU are gonna watch the reviews and see that everyone says DLSS is significantly better than FSR; it's not going to sway people to buy Radeon. Especially those with 1440p or lower monitors because, again, reviews say it doesn't look very good compared to Native/DLSS.

FSR isn't meant to compete with DLSS; it's not using A.I. upscaling.
It's using a modified Lanczos algorithm, which makes it hardware agnostic.
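
For anyone curious what that actually means, here's a rough sketch of the textbook Lanczos-2 kernel that FSR 1.0's spatial upscaler (EASU) is loosely based on. Purely illustrative Python on my part, not AMD's actual shader code, and the function names are my own:

```python
import math

def lanczos_kernel(x: float, a: int = 2) -> float:
    """Classic Lanczos-a reconstruction kernel: sinc(x) * sinc(x / a) for |x| < a."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def lanczos_resample_1d(samples: list, t: float, a: int = 2) -> float:
    """Resample a 1D signal at fractional position t by weighting the 2a nearest samples."""
    base = math.floor(t)
    total, weight_sum = 0.0, 0.0
    for i in range(base - a + 1, base + a + 1):
        if 0 <= i < len(samples):
            w = lanczos_kernel(t - i, a)
            total += w * samples[i]
            weight_sum += w
    return total / weight_sum if weight_sum else 0.0
```

No neural network anywhere in there, which is why it runs on basically any GPU.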

If image quality is your concern, then I wouldn't use DLSS or FSR, I would use Super Sampling... But nVidia GPUs likely don't have the VRAM for that anyway.
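
Quick back-of-the-envelope on why VRAM is the limiter for Super Sampling (assuming plain RGBA8 render targets, so these are illustrative numbers only):

```python
def render_target_mib(width: int, height: int, bytes_per_pixel: int = 4) -> float:
    """Size of a single RGBA8 render target in MiB."""
    return width * height * bytes_per_pixel / (1024 ** 2)

# Native 4K vs 2x2 supersampled 4K (i.e. rendering at 8K and downscaling)
native = render_target_mib(3840, 2160)        # ~31.6 MiB per target
supersampled = render_target_mib(7680, 4320)  # ~126.6 MiB per target, 4x the memory
```

Multiply that across all the intermediate buffers a modern deferred renderer keeps around and the gap between an 8-12GB card and a 16-24GB card starts to matter very quickly.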

Captain_Yuri said:

And it doesn't matter what happens 10 years from now. PS6 and XBox Series X2 will come out and Nvidia will invent new features that will lock people into their platform and it's a rinse and repeat cycle. That is unless Radeon or someone else does something. And Intel just may be that company. They already came out with XeSS, which is also an AI-based upscaler. They have RT performance similar to Nvidia and so on. That's the issue with Radeon: they could fall behind even Intel if they don't start catching up (assuming Intel even sticks around).

The big issue with Intel is their drivers. And they tend to abandon projects extremely quickly.
Intel is terrible at making consumer-facing software; their commercial/enterprise stuff (i.e. compilers), however, tends to be extremely solid.

Do they have potential? Absolutely.

And I would like Intel to apply pressure to AMD and nVidia, that would be great for all of us.

Captain_Yuri said:

Yea vram has always been an issue with Nvidia, no changing that until you pay the big bucks.

Doesn't need to be that way. You are already paying "big bucks" and getting less on that front.

Captain_Yuri said:

Actually that's not how Reflex works at all. Reflex works with any Nvidia GPU going back to GTX 900 and you can enable it in any game that has Reflex and it does not require any external hardware like a monitor/mouse to use. All the monitor/mouse does is allow you to measure the system latency if you want those stats, they are not required to enable or use Reflex to its full potential.

To use Reflex to its fullest extent (i.e. all the proprietary features you clamor for that set nVidia apart from AMD), you need proprietary hardware; you have already elaborated on how this is a key selling point as an nVidia owner.

The GTX 900 series doesn't support Reflex in all titles either.

Keep in mind that AMD also has Radeon Anti-Lag which reduces input latency as well.

Captain_Yuri said:

Not to mention the 900 series is still receiving continued driver support while Radeon has abandoned the RX 300 series GPUs.

You can't use that chestnut.

Remember, nVidia abandoned driver support for Fermi in 2018... But that was an architecture they were still re-releasing (GeForce GT 730) even in 2014.

Heck... It even got re-released in 2021.

That's right, an unsupported GPU re-released in 2021.
https://www.digitaltrends.com/computing/msi-gt-730-re-release-gpu-shortage/

...But sure, let's paint AMD as the only bad guy in this game.

Captain_Yuri said:

Yea and are pretty useless to anyone other than Radeon's margins. At least when Ryzen did chiplets, they had double the cores vs Intel for the same price. When Radeon does chiplets, it's power hungry, less efficient, provides the same Raster performance as Mono and costs as much as Nvidia products with basically no advantages against Nvidia other than more vram. In the long term that could end up being different but that is a long ways away and Nvidia is no Intel.

You do realise that the Fabric between chiplets actually consumes additional power?

The reason Ryzen uses less power and offers more performance than Intel isn't the chiplet design itself, it's actually the architecture.

If AMD made Ryzen monolithic, ditched the fabric, and built it on a leading-edge manufacturing process, it would actually consume less energy.

The purpose of chiplets is simply cost... You can make more functional chips per wafer... And considering AMD's biggest advantage, other than more VRAM, is cost, it's good they are leveraging that, because GPU prices have spiralled out of control.
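
Rough sketch of the wafer economics, using the standard dies-per-wafer approximation and a simple Poisson yield model. The die sizes and defect density below are made-up illustrative numbers on my part, not actual RDNA 3 figures:

```python
import math

def gross_dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Standard approximation for candidate dies on a wafer (ignores scribe lines)."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r * r / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def yield_fraction(die_area_mm2: float, defects_per_mm2: float = 0.001) -> float:
    """Poisson yield model: probability a die has zero killer defects."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

def good_dies_per_wafer(die_area_mm2: float) -> float:
    return gross_dies_per_wafer(die_area_mm2) * yield_fraction(die_area_mm2)

# One big 600 mm^2 monolithic die vs. four 150 mm^2 chiplets per GPU
monolithic_gpus = good_dies_per_wafer(600)   # ~49 sellable dies per wafer
chiplet_gpus = good_dies_per_wafer(150) / 4  # ~90 GPUs' worth of good chiplets
```

Smaller dies pack the wafer more efficiently and are far less likely to catch a killer defect, which is exactly the cost lever AMD is pulling... The fabric power penalty is the price they pay for it.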

One of the big factors was unprecedented demand due to Crypto and COVID, so AMD and nVidia could price GPUs however they wanted and they would still sell. That's now changed, so price is going to be a key selling point going forward as the USA potentially heads into recession.



--::{PC Gaming Master Race}::--