
Forums - PC Discussion - Nvidia Gets SALTY

shikamaru317 said:

In the end, AMD gets the last laugh. It's looking like they got the contract for both Xbox Scarlett and PS5, so that's basically 160m+ APUs they just sold for next gen.

Was that ever seriously in doubt, though? nVidia has never made x86 CPUs, and neither Sony nor Microsoft seem in any hurry to change architectures again, so that basically ruled them out from the word go.

What nVidia does have is Nintendo and the Switch - and until AMD manages to come up with something that's really competitive on a power/performance basis, they can just sit there in their own little niche and let AMD service the main console market.




Even though Vega VII is underwhelming, it has the same die size as a GP104, so on the bright side there's at least some progress in perf/area, even though it also carries features that are useless for gaming, like a half-rate FP64 profile and deep-learning-specific instructions ...



True, Yes. Salty, No. Because the bulk of the sales will be the cards competing with the 2070, 2060 on down.



CGI-Quality said:

I actually wish AMD would put up a better fight. I'm happy to splurge on $1000+ graphics cards, but when we start seeing $2500 >single< GPUs not in the Quadro field, it is obvious that they are selling it that high because, well, they can. Sure, I'm in a different tier as a high-priced buyer, but even we know that there is a limit to this stuff.

That said, Jensen is FAR from salty here. More like beating his chest ~ knowing, yet again, they'll have the PC gaming market cornered. Plus, this tells me a new console generation is close, as NVIDIA openly attacked AMD just prior to the release of the PS4/XB1.

More or less seems that way.

It is a shame that each time AMD tries to catch up to Nvidia in the high-end market, they seemingly keep falling short, or merely matching it, with not much else to set them apart.

I'm still going to stick with my 1080 Ti, as I don't really feel the need to go for either the latest NV or AMD cards. Nvidia's at least got a head start on ray tracing, while AMD's latest card seems to be skipping it altogether. Again, I'd agree, Jensen is definitely beating his chest, because he knows they've taken the step that AMD hasn't and that they'll have the PC market once more.

I do wish AMD would step up, but as far as things are headed, they still seem more concerned with the low-to-medium-end market, as well as the current/upcoming console gen. The longer they do this, the more grip Nvidia will have on the PC market.

Until AMD gets their backside into gear, I'll be sticking with Nvidia.



Do people not know the meaning of the word "salty"?



thismeintiel said:

I do find it interesting that AMD had Xbox onstage and then later announced the Radeon VII, and used it to run FH4. Wonder if this is what XB2 is getting, while PS5 gets Navi.

People thought the same about Ryzen and the Xbox One X. - Microsoft had a demo running that had a Zen CPU in it... And people automagically chalked the Xbox One X up as having Zen... Obviously I argued the contrary due to cost reasons. :P

deskpro2k3 said:

I'm not 100% sure, but I think it is as good as the RTX 2080, yet he is calling it lousy. So does that mean the RTX 2080 is lousy too? When you switch ray tracing and DLSS on, the game looks better but the fps takes a hit.

It will beat the RTX 2080 in a few key benchmarks, probably be equal in a few others... And probably lose in a heap more.

JRPGfan said:

Those cards are like 13-14 Tflops.
They make the PS4 Pro + Xbox One X look weak.

That's underwhelming? O_O

Yes. Because flops alone don't equate to gaming performance.
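To illustrate why: peak FP32 throughput is just shaders × clock × 2 (one fused multiply-add per cycle), so two cards can be far apart on paper and still land close in games. A quick sketch with the publicly listed boost clocks and shader counts for Vega 64 and the GTX 1080, two cards that trade blows in real titles despite the gap:

```python
# Peak FP32 throughput = shaders x clock (GHz) x 2 FLOPs per cycle (FMA).
def tflops(shaders, clock_ghz):
    return shaders * clock_ghz * 2 / 1000  # shaders x GHz gives GFLOPs; /1000 -> TFLOPs

vega64  = tflops(4096, 1.546)   # RX Vega 64 at its 1546 MHz boost clock
gtx1080 = tflops(2560, 1.733)   # GTX 1080 at its 1733 MHz boost clock
print(f"Vega 64: {vega64:.1f} TFLOPs, GTX 1080: {gtx1080:.1f} TFLOPs")
```

Roughly 12.7 vs 8.9 TFLOPs on paper, yet similar real-world framerates: geometry throughput, bandwidth, drivers and how well the shaders are kept fed all matter just as much.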

HoloDust said:

If I understand the tech correctly, DLSS is using neural networks (that need to be trained per title) to make higher resolution image with more details than what it gets on input - so, not like checkerboarding.

It's basically frame reconstruction, but uses multiple approaches to achieve superior image quality.

Basically... A developer sends nVidia its code, then nVidia runs it through their supercomputer... And then dials up its settings. - From there, when someone runs the game locally, the Tensor cores get to work on "comparing the differences" and optimise the scene from the information provided by nVidia's servers.
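The local half of that pipeline can be caricatured in a few lines: render low, upscale, then run the frame through "weights" that came from the offline training. To be clear, this is a toy stand-in, not real DLSS; the fixed sharpening kernel here merely plays the role of the per-title network nVidia would train on their end:

```python
import numpy as np

def upscale_2x(img):
    # Naive nearest-neighbour 2x upscale of a greyscale frame.
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def apply_weights(img, kernel):
    # 3x3 convolution with zero padding - the local "inference" step.
    padded = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.sum(padded[y:y + 3, x:x + 3] * kernel)
    return out

low_res = np.array([[0.2, 0.8], [0.6, 0.4]])          # the cheap low-res render
sharpen = np.array([[0, -0.25, 0],                     # stand-in for trained weights
                    [-0.25, 2.0, -0.25],
                    [0, -0.25, 0]])
high_res = apply_weights(upscale_2x(low_res), sharpen)
print(high_res.shape)  # (4, 4)
```

The real thing runs a deep network on the Tensor cores and uses motion vectors and multiple frames, but the shape of the job is the same: cheap render in, filtered higher-resolution frame out.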

JRPGfan said:

Yes, instead of 1 technique that kinda works for everything, they have an AI optimise how to do it best for each title.
But it's still the same way to go about getting better performance; it comes at the cost of image quality.

It's a better technique than the checkerboard rendering that the PS4 Pro does, but it's in a sense the same thing.
(It's just that one works from the top and scales down, while the other works from the bottom and scales up.)

DLSS isn't perfect. It has limitations.
Sometimes 4K DLSS, depending on the scene... Will look like a 1440P image with TAA.

DLSS and checkerboarding both have pros and cons; it's not really sensible to discard one in favour of the other...
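For comparison, checkerboarding's core trick is simpler than any neural network: each frame only shades half the pixels in a checkerboard pattern, and the gaps are filled from the previous frame's complementary half. A minimal sketch of that fill step (ignoring the motion-vector reprojection real implementations use):

```python
import numpy as np

def checkerboard_mask(h, w, phase):
    # Boolean mask selecting every other pixel in a checkerboard pattern.
    yy, xx = np.indices((h, w))
    return (yy + xx) % 2 == phase

def reconstruct(curr_half, prev_frame, phase):
    # Merge the freshly shaded half-frame into the previous full frame.
    out = prev_frame.copy()
    out[checkerboard_mask(*prev_frame.shape, phase)] = curr_half
    return out

full = np.arange(16, dtype=float).reshape(4, 4)
prev = full.copy()
# Frame N shades only the phase-0 pixels of the new image (+100 marks "new").
new_half = full[checkerboard_mask(4, 4, 0)] + 100
frame = reconstruct(new_half, prev, 0)
```

So checkerboarding reuses yesterday's pixels directly, while DLSS synthesises them from learned statistics; both are betting that the viewer won't notice the half of the frame that wasn't honestly rendered.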

JRPGfan said:

About the power load:


It's usually 225+ watts, with short bursts where it can go up to 314 watts.

Radeon VII will probably be the same; even if they say 300 watts, during gaming it'll likely be around 225 watts too.

nVidia will have the edge in power consumption. They are years ahead of AMD at this point in regards to efficiency.

OlfinBedwere said:

It's more that when the PS4 and Xbox One were in the planning stages, devs made it absolutely clear that they wanted the CPU and GPU to all be part of the same chip. Intel's (integrated) GPUs are complete garbage, and nVidia don't make x86 CPUs, so that left AMD as the only real option.

nVidia has ARM.

Conina said:

Why do you think that?

Radeon VII will be more efficient than a Vega 64, but it will use the better efficiency to push more pixels and polygons than the Vega 64, not to deliver the same performance with less power consumption:

Precisely. The bulk of the power consumption benefits 7nm would have brought with Vega... was spent mostly on driving up clockrates and widening the memory controller.
This is AMD's current status quo; they did the same when they took the Radeon RX 480, rebadged it as the RX 580, and ported it to 12nm... They didn't leverage the new node to increase efficiency... They used it to increase clockrates.

JRPGfan said:

I think perf/watt will be massively improved with the Radeon VII compared to the RX Vega 64.

Look at what AMD did with their new Ryzen chips.
They beat out the Intel 9900K (which is 180 watts+) and do so at 130 watts.

They just had a massive improvement in perf/watt (for their cpu line), and at the same time finally managed to beat intel in cpu performance.

Ryzen and Radeon are separate entities at this point... Radeon is still on a development cycle that was planned out over half a decade ago.

Besides, just because Ryzen is smashing it... Doesn't mean Radeon will; rarely have AMD's CPU and GPU divisions both given their direct competitors a run for their money at the same time.

OlfinBedwere said:

Uh, what? The Ryzen 2700X is almost certainly better value for money than a 9900K, but the latter is pretty decisively ahead performance-wise.

If you're talking about the new third-generation Zen chips they announced, I think it'd probably be best to wait until there are benchmarks from someone other than AMD themselves before making that sort of claim.

Ryzen 2700X is better value than the 9900K.

Zen 2 is supposed to offer 10-15% better IPC than Zen+.
Now if we were to peg just the IPC gains over Zen+, we are likely looking at a chip that is around 9900K performance in the bulk of scenarios.
https://www.anandtech.com/bench/product/2125?vs=2263

Then you have the clockrate increases on top of it...
AMD's Zen 2 actually beat the 9900K on Cinebench too.
https://wccftech.com/amd-ryzen-3000-zen-2-desktop-am4-processors-launching-mid-2019/

And did so at much lower power levels.

7nm and the Zen architecture really are showing their strengths versus Intel's 14nm process.
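The arithmetic behind that estimate is simple: single-thread performance scales roughly as IPC × clock, so the claimed IPC uplift and any clock gain multiply together. A rough sketch, where the clock figures are assumptions for illustration (the 2700X boosts to about 4.3 GHz; the Zen 2 boost clock was not announced):

```python
# Single-thread performance ~ IPC x clock, so gains compound multiplicatively.
def relative_perf(ipc_gain, clock_old_ghz, clock_new_ghz):
    return (1 + ipc_gain) * (clock_new_ghz / clock_old_ghz)

# Hypothetical: 2700X at ~4.3 GHz boost, Zen 2 assumed to reach 4.6 GHz.
low  = relative_perf(0.10, 4.3, 4.6)   # 10% IPC case
high = relative_perf(0.15, 4.3, 4.6)   # 15% IPC case
print(f"Estimated single-thread uplift: {low - 1:.0%} to {high - 1:.0%}")
```

Under those assumptions you land in the high-teens to low-twenties percent range over the 2700X, which is roughly the gap to the 9900K in most benchmarks; whether the real clocks cooperate is exactly what independent reviews need to confirm.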

Bofferbrauer2 said:

I think most people didn't actually watch that presentation or don't fully understand the Radeon VII.

The Radeon VII isn't just Vega at 7nm, as it improves far more than just the clock speed. The clock speed increases by 10-15% while framerates increase almost across the board by 20-35%, and that's despite having 4 fewer CUs. That brings the card to about RTX 2080 level in terms of FPS.

What is there to understand?
It's the same old Vega... But at 7nm, with much higher clockspeeds and more bandwidth.

Bofferbrauer2 said:

I can understand that from a pure gamer standpoint the GPU is disappointing, but for those, Navi will come later down the line. For those who use their GPU not just to play, but also for work, this is a very good offering.

That is the key word... "Gamer". - AMD is marketing this GPU towards the gamer. Hence the gaming benchmarks.
Thus they have opened themselves up for comparisons and criticism.

CGI-Quality said:

I actually wish AMD would put up a better fight. I'm happy to splurge on $1000+ graphics cards, but when we start seeing $2500 >single< GPUs not in the Quadro field, it is obvious that they are selling it that high because, well, they can. Sure, I'm in a different tier as a high-priced buyer, but even we know that there is a limit to this stuff.

That said, Jensen is FAR from salty here. More like beating his chest ~ knowing, yet again, they'll have the PC gaming market cornered. Plus, this tells me a new console generation is close, as NVIDIA openly attacked AMD just prior to the release of the PS4/XB1.

Same though. Jensen was just being honest. Brutally honest... But still honest.

OlfinBedwere said:

What nVidia does have is Nintendo and the Switch - and until AMD manages to come up with something that's really competitive on a power/performance basis, they can just sit there in their own little niche and let AMD service the main console market.

Ironically... AMD did have something compelling that could have been dropped into something like the Switch... But they sold that technology, the Adreno GPU, to Qualcomm.

fatslob-:O said:

Even though Vega VII is underwhelming, it has the same die size as a GP104 so on the bright side there's at least some progress in perf/area even though it also has useless features for gaming like half rate FP64 profile and deep learning specific instructions ...

But at 7nm, Vega VII is likely still more expensive to manufacture than an equivalent chip on the 12nm process.

Both have a similar amount of transistors, so that's a plus; at least AMD is not at a stupidly large deficit anymore.



--::{PC Gaming Master Race}::--

Where's the salt?



Pemalite said:

But at 7nm, Vega VII is likely still more expensive to manufacture than an equivalent chip on the 12nm process.

Both have a similar amount of transistors, so that's a plus; at least AMD is not at a stupidly large deficit anymore.

Yeah, I'd attribute AMD not being at a large deficit anymore mostly to Nvidia regressing with their Turing architecture ...

The TU106, which has almost the same die size as the GP102, performs only slightly faster than the GP104 ...



Pemalite said:

Bofferbrauer2 said:

I think most people didn't actually watch that presentation or don't fully understand the Radeon VII.

The Radeon VII isn't just Vega at 7nm, as it improves far more than just the clock speed. The clock speed increases by 10-15% while framerates increase almost across the board by 20-35%, and that's despite having 4 fewer CUs. That brings the card to about RTX 2080 level in terms of FPS.

What is there to understand?
It's the same old Vega... But at 7nm, with much higher clockspeeds and more bandwidth.

Bofferbrauer2 said:

I can understand that from a pure gamer standpoint the GPU is disappointing, but for those, Navi will come later down the line. For those who use their GPU not just to play, but also for work, this is a very good offering.

That is the key word... "Gamer". - AMD is marketing this GPU towards the gamer. Hence the gaming benchmarks.
Thus they have opened themselves up for comparisons and criticism.

Seriously Perm?

If it was just a die shrink plus increased clock speeds and more bandwidth, then it wouldn't have outperformed the old Vega 64 by such a large margin. Radeon VII has some architectural improvements, because the numbers don't add up otherwise. It's certainly not the bandwidth, as many of the games listed weren't limited in that domain to begin with.

And you got the wrong keyword. The keyword is "also": like I said, it's for someone who plays games but who also uses their GPU for other things, like work for instance. It is what the Titan series was on nVidia's side of things... until the RTX Titan, that is, which got turned into a pure gaming GPU without any productivity extras while keeping the huge price tag.
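The "numbers don't add up" claim can be checked with back-of-the-envelope arithmetic: raw shader throughput scales with CU count × clock. Using approximate boost clocks (Vega 64 ~1.55 GHz, Radeon VII ~1.80 GHz; both figures are rough assumptions):

```python
# Raw shader throughput ~ CU count x clock; compare Vega 64 (64 CU) with
# Radeon VII (60 CU) at their approximate boost clocks.
def throughput(cus, clock_ghz):
    return cus * clock_ghz

gain = throughput(60, 1.80) / throughput(64, 1.55) - 1
print(f"Raw throughput gain: {gain:.0%}")  # roughly 9%
```

Clocks alone buy roughly 9%, well short of the quoted 20-35% framerate gains, so something else, whether the doubled memory bandwidth or genuine tweaks, must be contributing the rest.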



Mordred11 said:
Maybe salty, but definitely true. You can't just keep adding more raw power to GPUs and expect things to progress; that will hit a ceiling very soon. Instead, focusing on implementing features like ray tracing is the key to progress in this field.

Before doing that they'll have to significantly improve the performance of their GPUs.