I do find it interesting that AMD had Xbox onstage and then later announced the Radeon VII, and used it to run FH4. Wonder if this is what XB2 is getting, while PS5 gets Navi.
People thought the same about Ryzen and the Xbox One X. - Microsoft had a demo running on a machine with a Zen CPU in it... And people automatically chalked the Xbox One X up as having Zen... Obviously I argued the contrary for cost reasons. :P
I'm not 100% sure, but I think it is about as good as the RTX 2080, yet he is calling it lousy. So does that mean the RTX 2080 is lousy too? When you switch ray tracing and DLSS on, the game looks better but the fps takes a hit.
It will beat the RTX 2080 in a few key benchmarks, probably be equal in a few others... And probably lose in a heap more.
Those cards are around 13-14 TFLOPS.
They make the PS4pro + Xbox One X look weak.
That's underwhelming? O_O
Yes. Because FLOPS alone don't equate to gaming performance.
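As a rough illustration of why the paper numbers don't settle it: theoretical FP32 throughput is just shader count × 2 ops (one fused multiply-add) × clock, and cards with very different gaming performance can sit far apart, or close together, on that figure. The boost clocks below are approximate published specs, so treat the output as ballpark only.

```python
# Theoretical FP32 throughput: shaders * 2 ops (fused multiply-add) * clock.
def tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000.0

# Approximate published boost clocks (real clocks vary with board and thermals).
cards = {
    "Vega 64":    (4096, 1.55),  # ~12.7 TFLOPS
    "GTX 1080":   (2560, 1.73),  # ~8.9 TFLOPS, yet it games about as well as Vega 64
    "Radeon VII": (3840, 1.80),  # ~13.8 TFLOPS peak
}

for name, (shaders, clock) in cards.items():
    print(f"{name}: {tflops(shaders, clock):.1f} TFLOPS")
```

Bandwidth, geometry throughput, drivers and architecture decide how much of that paper figure ever turns into frames.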
If I understand the tech correctly, DLSS uses neural networks (that need to be trained per title) to produce a higher-resolution image with more detail than what it gets as input - so, not like checkerboarding.
It's basically frame reconstruction, but uses multiple approaches to achieve superior image quality.
Basically... A developer sends nVidia its code, then nVidia runs it through their supercomputer... And then dials in its settings. - From there, when someone runs the game locally, the Tensor cores get to work "comparing the differences" and optimising the scene from the information provided by nVidia's servers.
Yes, instead of one technique that kinda works for everything, they have an AI optimise how to do it best for each title.
But it's still the same approach to getting better performance: it comes at the cost of image quality.
It's a better technique than the checkerboard rendering the PS4 Pro does, but it's in a sense the same thing.
(It's just that one works from the top and scales down, while the other works from the bottom and scales up.)
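The checkerboard half of that comparison can be sketched with a toy version (nothing like a real renderer; the function name and the neighbour-averaging fill are my own illustration): shade only half of the pixels in a checker pattern, then fill the rest from the shaded neighbours. DLSS effectively replaces that fixed fill with a per-title trained network.

```python
def checkerboard_reconstruct(shaded, height, width):
    """shaded: dict mapping (y, x) -> value for the half of the pixels that
    were actually rendered (those where y + x is even). The missing half is
    filled with the average of the rendered neighbours."""
    out = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            if (y + x) % 2 == 0:
                out[y][x] = shaded[(y, x)]     # rendered this frame
            else:
                neigh = [shaded[(ny, nx)]
                         for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                         if (ny, nx) in shaded]
                out[y][x] = sum(neigh) / len(neigh)  # cheap fixed fill
    return out

# Shade only half of a 4x4 gradient, then reconstruct the full frame.
half = {(y, x): float(y + x) for y in range(4) for x in range(4) if (y + x) % 2 == 0}
frame = checkerboard_reconstruct(half, 4, 4)
```

On smooth gradients the averaging fill recovers pixels almost exactly; on fine detail and edges it smears, which is exactly where the image-quality cost shows up.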
DLSS isn't perfect. It has limitations.
Sometimes 4K DLSS, depending on the scene... Will look like a 1440p image with TAA.
DLSS and checkerboarding both have pros and cons; it's not really sensible to discard one in favor of the other...
About the power load:
It's usually 225+ watts, with short bursts where it can go up to ~314 watts.
The Radeon VII will probably be the same; even if they say 300 watts, during gaming it'll likely be around 225 watts too.
nVidia will have the edge in power consumption. They are years ahead of AMD at this point in regards to efficiency.
It's more that when the PS4 and Xbox One were in the planning stages, devs made it absolutely clear that they wanted the CPU and GPU to all be part of the same chip. Intel's (integrated) GPUs are complete garbage, and nVidia don't make x86 CPUs, so that left AMD as the only real option.
nVidia has ARM.
Why do you think that?
The Radeon VII will be more efficient than a Vega 64, but it will use the better efficiency to push more pixels and polygons than the Vega 64, not to deliver the same performance with less power consumption.
Precisely. The bulk of the power consumption benefits 7nm would have brought with Vega... Was spent mostly on driving up clockrates and widening the memory controller.
This is AMD's current status quo; they did the same when they took the Radeon RX 480, rebadged it as the RX 580 and ported it to 12nm... They didn't leverage 12nm to increase efficiency... They used it to increase clockrates.
I think perf/watt will be massively improved with the Radeon VII compared to the RX Vega 64.
Look at what AMD did with their new Ryzen chips.
They beat out the Intel 9900K (which is 180+ watts) and do so at 130 watts.
They just had a massive improvement in perf/watt (for their cpu line), and at the same time finally managed to beat intel in cpu performance.
Ryzen and Radeon are separate entities at this point... Radeon are still on their development cycle that was planned out back over half a decade ago.
Besides, just because Ryzen is smashing it... Doesn't mean Radeon will; rarely have AMD's CPU and GPU divisions both given their direct competitors a run for their money at the same time.
Uh, what? The Ryzen 2700X is almost certainly better value for money than a 9900K, but the latter is pretty decisively ahead performance-wise.
If you're talking about the new third-generation Zen chips they announced, I think it'd probably be best to wait until there are benchmarks from someone other than AMD themselves before making that sort of claim.
Ryzen 2700X is better value than the 9900K.
Zen 2 is supposed to offer 10-15% better IPC than Zen+.
Now if we were to peg just the IPC gains over Zen+, we are likely going to have a chip that is around 9900K performance in the bulk of scenarios.
Then you have the clockrate increases on top of it...
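Back-of-the-envelope, that reasoning works because single-thread performance scales roughly as IPC × clock, so the two gains multiply rather than add. Only the 10-15% IPC range comes from the rumours above; the 5% clock figure below is purely a hypothetical placeholder.

```python
# Single-thread performance ~ IPC * clock, so the gains compound.
ipc_gain = 0.13     # midpoint of the rumoured 10-15% IPC uplift over Zen+
clock_gain = 0.05   # hypothetical clock bump from 7nm (illustrative only)

combined = (1 + ipc_gain) * (1 + clock_gain) - 1
print(f"combined single-thread uplift: ~{combined * 100:.1f}%")
```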
AMD's Zen 2 actually beat the 9900K on Cinebench too.
And did so at much lower power levels.
7nm and the Zen architecture really are showing their strengths versus Intel's 14nm process.
I think most people didn't actually watch that presentation or don't fully understand the Radeon VII.
The Radeon VII isn't just Vega at 7nm, as it improves far more than just the clock speed. The clock speed increases by 10-15% while framerates increase almost across the board by 20-35%, and that's despite it having 4 fewer CUs. That brings the card to about RTX 2080 level in terms of FPS.
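Some rough arithmetic on those quoted ranges (midpoints only, so treat the numbers as illustrative): if framerates rise ~27% while clocks rise ~12.5% and the CU count drops from 64 to 60, there is a residual gain that clocks and CU count alone can't explain; the most plausible source is the roughly doubled memory bandwidth.

```python
fps_gain = 1.27        # midpoint of the quoted 20-35% framerate uplift
clock_ratio = 1.125    # midpoint of the quoted 10-15% clock uplift
cu_ratio = 60 / 64     # Radeon VII: 60 CUs vs Vega 64's 64

# Gain left over after accounting for clocks and the reduced CU count.
residual = fps_gain / (clock_ratio * cu_ratio)
print(f"unexplained per-CU, per-clock gain: ~{(residual - 1) * 100:.0f}%")
```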
What is there to understand?
It's the same old Vega... But at 7nm with much higher clockspeeds and more bandwidth.
I can understand that from a pure gamer's standpoint the GPU is disappointing, but for gamers, Navi will come later down the line. For those who use their GPU not just to play, but also for work, this is a very good offering.
That is the key word... "Gamer". - AMD is marketing this GPU towards the gamer. Hence the gaming benchmarks.
Thus they have opened themselves up for comparisons and criticism.
I actually wish AMD would put up a better fight. I'm happy to splurge on $1000+ graphics cards, but when we start seeing $2500 >single< GPUs not in the Quadro field, it is obvious that they are selling it that high because, well, they can. Sure, I'm in a different tier as a high-priced buyer, but even we know that there is a limit to this stuff.
That said, Jensen is FAR from salty here. More like beating his chest ~ knowing, yet again, they'll have the PC gaming market cornered. Plus, this tells me a new console generation is close, as NVIDIA openly attacked AMD just prior to the release of the PS4/XB1.
Same though. Jensen was just being honest. Brutally honest... But still honest.
What nVidia does have is Nintendo and the Switch - and until AMD manages to come up with something that's really competitive on a power/performance basis, they can just sit there in their own little niche and let AMD service the main console market.
Ironically... AMD did have something compelling that could have dropped into something like the Switch... But they sold that technology, the Adreno GPU line, to Qualcomm.
Even though the Radeon VII is underwhelming, it has about the same die size as a GP104, so on the bright side there's at least some progress in perf/area, even though it also carries features that are useless for gaming, like the half-rate FP64 capability and deep-learning-specific instructions...
But at 7nm, the Radeon VII is likely still more expensive to manufacture than a 12nm part.
Both have a similar number of transistors, so that's a plus; at least AMD is not at a stupidly large deficit anymore.