
Nvidia Gets SALTY

Bofferbrauer2 said:

Seriously Perm?

If it was just a die shrink plus increased clock speeds and more bandwidth, then it wouldn't have outperformed the old Vega 64 by such a large margin. Radeon VII has some architectural improvements, because the numbers don't add up otherwise. It's certainly not the bandwidth, as many of the games listed weren't limited in that domain to begin with.

Vega 7's clock speed increased by 16.4%. - That means the Render Output Pipelines (a big limiter on Vega) and Texture Mapping Units operate that much faster. - The ROPs are the big one, as AMD always seems to be ROP-starved.

Compute increased by 9.5% over Vega 64.

Bandwidth increased by a whopping 112.27%.

The 25% or so increase comes more or less from the higher clockrate and that bandwidth boost... Vega was never compute-bound to begin with.
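For anyone who wants to check the math, here's a quick sanity check using the commonly listed launch specs (boost clocks and bandwidth figures vary a bit between sources, so treat the exact percentages as approximate):

```python
# Rough cross-check of the Vega 64 -> Radeon VII deltas quoted above.
# These are the commonly listed launch specs; exact boost clocks vary by source,
# so the percentages land close to, but not exactly on, the figures in the post.

vega_64    = {"clock_mhz": 1546, "fp32_tflops": 12.66, "bandwidth_gb_s": 484}
radeon_vii = {"clock_mhz": 1800, "fp32_tflops": 13.8,  "bandwidth_gb_s": 1024}

def pct_gain(new, old):
    """Percentage increase of new over old."""
    return (new / old - 1.0) * 100.0

for key, label in [("clock_mhz", "Clock"),
                   ("fp32_tflops", "Compute"),
                   ("bandwidth_gb_s", "Bandwidth")]:
    print(f"{label:<9}: +{pct_gain(radeon_vii[key], vega_64[key]):.1f}%")

# Roughly: Clock +16.4%, Compute +9.0%, Bandwidth +111.6% -
# within rounding of the figures above, depending on which boost clock you plug in.
```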

So yes, seriously. Vega 7 is just like the RX 590: a few enhancements, with the bulk of the gains coming from clock increases thanks to the smaller fabrication process opening up extra headroom.

Is there some secret sauce hidden somewhere? Possibly, but we don't know at this point in time... And it is best to leave such speculation until Anandtech has done a thorough analysis on the hardware when it releases.

Bofferbrauer2 said:
And you got the wrong keyword. The keyword is "also": like I said, it's for someone who plays games, but who also uses his GPU for other things, like work for instance. It is what the Titan series was on NVidia's side of things... until the RTX Titan, that is, which got turned into a pure gaming GPU without any productivity extras while keeping the huge price tag.

They still marketed it to Gamers. - AMD has a different brand for non-gamers... You might have heard of them under the banner of "Fire Pro".

AMD could have also done what they initially did with Vega... And called it the "Frontier Edition" - Which WAS marketed towards gamers+professionals.

Drawing comparisons to the Titan is a bit silly; they have vastly different price points and target audiences... I mean, the Titan is actually a good card for professionals, gamers and prosumers.

WolfpackN64 said:
I think some people don't get the Radeon VII that well. Performance on par with the RTX 2080 for games is impressive at the same price. Ray tracing at this point doesn't matter. DLSS at this point doesn't matter.

AMD can't just be "on par" with the RTX 2080.

The issue is... By the time the Vega 7 launches, the RTX 2080 may have dropped a notch on the pricing ladder.
...Plus the RTX 2080 isn't even nVidia's fastest GPU, the RTX 2080 Ti and Titan sit above it, let alone what Pascal offers.

The Geforce RTX 2080 not only gets essentially "Free" Ray Tracing and DLSS... But does everything whilst consuming less power... For the gamer, there is little value in what Vega 7 offers... And this is coming from someone who generally only buys AMD GPU's.

WolfpackN64 said:

The problem is that this card will absolutely clobber NVIDIA in the prosumer market. The massive amount of memory bandwidth, combined with Vega's compute prowess and apparently untouched FP64 performance (remember, this card is a repurposed datacenter card), will eat into Titan V and RTX Titan sales.

You can get monster workstation performance for a lot less than what NVIDIA's charging, and that hurts them. This card does just OK for gaming; I think AMD knows full well the real meat and potatoes is to be found in the mid range when they release Navi later on.

AMD has always offered GPUs with surprising amounts of compute... Hence why they were gobbled up left, right and center by cryptocurrency miners. - But games generally need more than that. - Vega 7 should do well in those markets that are looking for more compute.

What AMD really needs to do is ditch Graphics Core Next and move onto Next-Gen already.

WolfpackN64 said:

At this point, NVIDIA's response is like Apple's: "No one does what we do, so the competition doesn't compare". Too bad NVIDIA's distinguishing features (RTX, DLSS) are clearly suffering from first-generation syndrome. AMD can let NVIDIA do all the work cracking open the initial market for ray tracing, because it'll be years before the feature is mainstream and widespread.

I don't actually like nVidia's approach to Ray Tracing. - Ray Tracing is generally a compute- and memory-constrained problem, so nVidia has taken what seems to be a fixed-function route, which is efficient from a transistor point of view for the amount of performance it offers, but it also means those units cannot be leveraged for general rasterization tasks... And let's face it, we still live in a rasterization world.

Hopefully AMD takes another approach to Ray Tracing.

It smells like the Geforce FX all over again... Lots of fixed-function stuff at the expense of other hardware... And the irony is, it's the perfect storm for AMD to pull another Radeon 9700 Pro all over again.
However... Despite nVidia essentially crippling itself, AMD still cannot get a decisive victory even at 7nm. It's a testament to how efficient nVidia's GPUs are right now; AMD is years away from even matching them.



--::{PC Gaming Master Race}::--

Pemalite said:

It smells like the Geforce FX all over again... Lots of fixed-function stuff at the expense of other hardware... And the irony is, it's the perfect storm for AMD to pull another Radeon 9700 Pro all over again.
However... Despite nVidia essentially crippling itself, AMD still cannot get a decisive victory even at 7nm. It's a testament to how efficient nVidia's GPUs are right now; AMD is years away from even matching them.

This is exactly my sentiment as well - instead of kicking nVidia in the nuts right now with a GPU that has a massive fps advantage over their RTX offerings in current titles, AMD is coming out with something that is around 2080 level with a higher TDP and on a smaller node.

I'm really glad that Ryzen finally brought them back into CPU game, but it seems they just can't get their shit together when it comes to GPUs.



Pemalite said:
thismeintiel said:

I do find it interesting that AMD had Xbox onstage and then later announced the Radeon VII, and used it to run FH4. Wonder if this is what XB2 is getting, while PS5 gets Navi.

People thought the same about Ryzen and the Xbox One X. - Microsoft had a demo running that had a Zen CPU in it... And people automagically chalked the Xbox One X up as having Zen... Obviously I argued the contrary for cost reasons. :P

And I argued the same thing.  There are differences, though.  Ryzen launched only a few months before the X did.  No way would they have a CPU of that quality just months after it launched itself.  Like you said, the cost would have been crazy.  Forget $499, it would have been the PS3's $599 all over again.  I'm sure it would have also taken more effort from MS, making sure that every game ran correctly on a new CPU with a different architecture.

Radeon VII, on the other hand, launches next month. That's 1 1/2 years, seeing as XB2 isn't expected until late 2020, to get costs down. I'm guessing if it did use it, it would also be a slightly pared down version of it. I could definitely see this being used in the top of the line XB2, while the PS5 gets Navi, which Sony is rumored to have input in the development of.



freebs2 said:
DonFerrari said:
And they will be extra salty with the PS5 and Scarlett being AMD-powered again, selling well, and having good performance on the console side.

From their perspective, probably. While PS and Xbox move a lot of units, console chips have very thin margins compared to graphics cards and laptop GPUs.

Mind you, I don't have anything against AMD, but PS and Xbox use their chips not because they're superior to Nvidia's but only because they're cheaper.

Not only because it's cheaper. I guess both Sony and Microsoft don't really want to work with Nvidia at this point. Nvidia screwed both companies in the past. They screwed Microsoft on GPU prices for the original Xbox, which led to a lawsuit. And they made a worse GPU for the PS3 than what was inside the Xbox 360, which came out a year before. I guess it cost Sony more than the X360 GPU as well.

shikamaru317 said:

This is just one of many reasons why I dislike Nvidia. Yes, it's kind of sad that it took a die shrink down to 7nm for AMD to match or slightly exceed Nvidia's 12nm RTX 2080, but that is no good reason for Nvidia to trashtalk AMD. I like people and companies who are humble instead of those that gloat and trashtalk. And this is just one reason why I dislike Nvidia; I also dislike them because of their practice of designing tech like PhysX and HairWorks specifically so that it will have trouble running on AMD cards, and then moneyhatting developers into using that tech in their games, effectively handicapping AMD cards in those games (and the consoles, since they use AMD APUs). It is one thing to moneyhat an optimization deal where a dev spends more time optimizing for your cards than for your competition's cards, both AMD and Nvidia do that, but it is another thing entirely to pay a dev to handicap the competition. It is because of underhanded tactics like that that I will never buy an Nvidia card again, even if they are technically superior to AMD cards in some ways.

In the end, AMD gets the last laugh; it's looking like they got the contract for both Xbox Scarlett and PS5, so that is basically 160m+ APUs they just sold for next gen.

The same. This is the exact same reason I stopped buying Nvidia GPUs 6 years ago. I just don't feel like supporting such business practices.

shikamaru317 said:
thismeintiel said:

Probably more salty that they won't see an almost guaranteed 120M+ sales from the PS/Xbox lines again.

I do find it interesting that AMD had Xbox onstage and then later announced the Radeon VII, and used it to run FH4. Wonder if this is what XB2 is getting, while PS5 gets Navi.

It's possible that Xbox Scarlett Anaconda will use a cut-down version of this with a few fewer cores and with GDDR6 instead of the more expensive HBM2, I suppose, but even then it might be too expensive. Radeon VII offers roughly RTX 2080 tier performance for $700, while the top end Navi GPU, the RX 3080, will supposedly offer RTX 2070/Vega 64 tier performance for $250. That is a huge price difference, and I can't see MS having a massively higher launch price on Anaconda just to get a measly 15% performance advantage over PS5; at most, Anaconda will be $100 more than PS5 imo.

The price difference won't be the same as at retail. Not to mention that the console GPUs will only be based on their desktop counterparts, not be the exact same thing. Console manufacturers never pay retail prices for their chips; otherwise we would never have seen $400-500 consoles. I also hope Microsoft won't go with Radeon VII for Scarlett. Choosing Navi, even if it is slightly less powerful, would be a wiser choice, as that GPU will have more new technologies that won't be present in Radeon VII.



 

derpysquirtle64 said:
freebs2 said:

From their perspective, probably. While PS and Xbox move a lot of units, console chips have very thin margins compared to graphics cards and laptop GPUs.

Mind you, I don't have anything against AMD, but PS and Xbox use their chips not because they're superior to Nvidia's but only because they're cheaper.

Not only because it's cheaper. I guess both Sony and Microsoft don't really want to work with Nvidia at this point. Nvidia screwed both companies in the past. They screwed Microsoft on GPU prices for the original Xbox, which led to a lawsuit. And they made a worse GPU for the PS3 than what was inside the Xbox 360, which came out a year before. I guess it cost Sony more than the X360 GPU as well.

shikamaru317 said:

It's possible that Xbox Scarlett Anaconda will use a cut-down version of this with a few fewer cores and with GDDR6 instead of the more expensive HBM2, I suppose, but even then it might be too expensive. Radeon VII offers roughly RTX 2080 tier performance for $700, while the top end Navi GPU, the RX 3080, will supposedly offer RTX 2070/Vega 64 tier performance for $250. That is a huge price difference, and I can't see MS having a massively higher launch price on Anaconda just to get a measly 15% performance advantage over PS5; at most, Anaconda will be $100 more than PS5 imo.

The price difference won't be the same as at retail. Not to mention that the console GPUs will only be based on their desktop counterparts, not be the exact same thing. Console manufacturers never pay retail prices for their chips; otherwise we would never have seen $400-500 consoles. I also hope Microsoft won't go with Radeon VII for Scarlett. Choosing Navi, even if it is slightly less powerful, would be a wiser choice, as that GPU will have more new technologies that won't be present in Radeon VII.

It will be interesting to see if Nintendo has the same problems with Nvidia that the other two did. On one hand, Nvidia may not want to screw up their last chance to stay in the console market, so they will treat Nintendo well. On the other hand (and the way the CEO talks makes me think it may go this way), they know they have Nintendo by the balls for a mobile chip suitable for them, so they may refuse to negotiate much lower prices. This could affect Nintendo's ability to cut prices when needed.

As for Navi, I have a feeling MS may not have access to it. Sony is rumored to be working pretty closely with AMD on Navi. I kinda doubt they would want all of their input to go into their competition's machine. I would imagine Sony is going to have console exclusivity on Navi, at least for the beginning of the gen. I think Radeon VII, most likely customized, is the way MS is going to go. We have over a year for them to drop prices, which, like you pointed out, aren't the prices MS pays, anyway. It would give them a numbers win, most likely, in terms of TFLOPS. Of course, it won't be that large of a win; it will cause the console to be ~$499 vs a ~$399 PS5, and it will have fewer of the new technologies found in Navi. In the end, MS is going to make sure they can tout the most powerful system next gen, but I think it will mainly be on paper. Sony's 1st parties are going to show just what next gen can do.



shikamaru317 said:
derpysquirtle64 said:

The price difference won't be the same as at retail. Not to mention that the console GPUs will only be based on their desktop counterparts, not be the exact same thing. Console manufacturers never pay retail prices for their chips; otherwise we would never have seen $400-500 consoles. I also hope Microsoft won't go with Radeon VII for Scarlett. Choosing Navi, even if it is slightly less powerful, would be a wiser choice, as that GPU will have more new technologies that won't be present in Radeon VII.

True, and they have about a year to get costs down before Xbox Scarlett Anaconda actually enters production. Still, I would hope they use a Navi-based design instead, since Navi should have additional architectural improvements that aren't present on the Radeon 7. Navi is worth it over Radeon 7 for TDP alone: the RX 3080 is rumored to have a 150 watt TDP, half of the 300 watt TDP on Radeon 7, and lower power usage is essential for a console if you don't want it to run hot and loud like the PS4 Pro. Ideally it would be cool to see MS order a custom Navi-based chip that actually has more CUs than the RX 3080.

I would bet anything that RX 3000 rumor is bollocks. It isn't going to nearly double 7nm Vega in efficiency. AMD would be shouting to the market with years to spare if they had anything good; Zen was less impressive than that, and that's what happened. Navi is just another GCN architecture. Thus, it's shit in this day and age. The best thing you can expect from shit is a pretty morning mushroom to sprout from it, and nothing more. Otherwise, it's just another step in RTG's staircase of disappointment.


Pemalite said:

Bofferbrauer2 said:
And you got the wrong keyword. The keyword is "also": like I said, it's for someone who plays games, but who also uses his GPU for other things, like work for instance. It is what the Titan series was on NVidia's side of things... until the RTX Titan, that is, which got turned into a pure gaming GPU without any productivity extras while keeping the huge price tag.

They still marketed it to Gamers. - AMD has a different brand for non-gamers... You might have heard of them under the banner of "Fire Pro".

AMD could have also done what they initially did with Vega... And called it the "Frontier Edition" - Which WAS marketed towards gamers+professionals.

Drawing comparisons to the Titan is a bit silly; they have vastly different price points and target audiences... I mean, the Titan is actually a good card for professionals, gamers and prosumers.

Different price point, certainly; target audiences, not so sure. Sure, they only showed gaming benchmarks, but I wouldn't have done any different - you don't want to steal the thunder of the just-released Radeon Instinct MI50 and MI60, which are much more expensive.

Titan was good for professionals and prosumers, very good even. RTX Titan both cut that down and locked it behind the driver, making it in practice a pure gaming card. You want to work with that? Buy a Quadro RTX 6000 or 8000; same chip but with drivers unlocked to actually be able to work with it. And if you need FP64, still better to get the predecessors as this gen it got radically cut down even in the Quadro line.

Btw, FirePro are pure server cards by now (since 2017 and Polaris); you might have meant their workstation successors, the Radeon Pro series.



You know what's really salty? According to Wikipedia and the couple of sources on there, they are close relatives. Apparently, Su's grandfather and Huang's mother are brother and sister. I wouldn't want to be there for their next family get-together.



This technical mumbo-jumbo is interesting.



thismeintiel said:

Ryzen launched only a few months before the X did.  No way would they have a CPU of that quality just months after it launched itself.  Like you said, the cost would have been crazy.  Forget $499, it would have been the PS3's $599 all over again.  I'm sure it would have also taken more effort from MS, making sure that every game ran correctly on a new CPU with a different architecture.

Ryzen was taped out long before the Xbox One X even released... CPUs take years to design, test and ratify.
Microsoft would have been able to leverage Ryzen if they had so wanted, as the design was well and truly done and dusted; they just didn't.

thismeintiel said:

Radeon VII, on the other hand, launches next month. That's 1 1/2 years, seeing as XB2 isn't expected until late 2020, to get costs down. I'm guessing if it did use it, it would also be a slightly pared down version of it. I could definitely see this being used in the top of the line XB2, while the PS5 gets Navi, which Sony is rumored to have input in the development of.

I hope neither console uses Navi or Vega. - That isn't next generation graphics.

thismeintiel said:

As for Navi, I have a feeling MS may not have access to it.  Sony is rumored to be working pretty closely with AMD on Navi.

They do have access to it.
Navi is Graphics Core Next, just another iterative update to the core architecture; it's nothing revolutionary.

It is also an AMD technology, not a Sony one... All Sony will be doing is presenting AMD with design goals of what they would like out of a part... Then AMD's semi-custom division will get to work trying to meet those goals.

thismeintiel said:

I kinda doubt they would want all of their input to go into their competition's machine.  I would imagine Sony is going to have console exclusivity on Navi, at least for the beginning of the gen.

Navi is Graphics Core Next. An AMD technology, patented by AMD. - Sony doesn't really have any say in the matter.
If you think Sony is having transistor-level input on the layout of Navi, then you are highly mistaken.

haxxiy said:

I would bet anything that RX 3000 rumor is bollocks. It isn't going to nearly double 7nm Vega in efficiency. AMD would be shouting to the market with years to spare if they had anything good; Zen was less impressive than that, and that's what happened. Navi is just another GCN architecture. Thus, it's shit in this day and age. The best thing you can expect from shit is a pretty morning mushroom to sprout from it, and nothing more. Otherwise, it's just another step in RTG's staircase of disappointment.

It won't double 7nm Vega; Navi is a replacement for Polaris, not Vega. - What it will do is bring Vega 64 (14nm) levels of performance down to essentially a 150W TDP; Vega 7 will still be 25-40% faster.

Vega is actually efficient... at lower voltages and clockrates. AMD just decided to throw efficiency out the window and dial everything to the max; otherwise they would risk having Vega 7 running against the Geforce 2070 rather than the 2080... And that isn't going to sit well when you have 16GB of stupidly expensive HBM memory.

It's the same reason why they ported the Radeon RX 480 to 12nm as the Radeon RX 590: they took the clockrate headroom from moving from 14nm to 12nm (and because they are similar processes, that's actually easy to do!) and dialed the voltages up to get as much performance as possible without any regard for power consumption.
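To put rough numbers on the voltage point: dynamic power scales roughly with frequency and with the square of voltage (P ≈ C·V²·f), so backing off clocks and voltage a little saves a disproportionate amount of power. A minimal sketch with illustrative ratios - the 10% undervolt and 5% underclock below are hypothetical, not measured Vega figures:

```python
# Illustrative only: dynamic power scales roughly as P ~ C * V^2 * f.
# The 10% undervolt / 5% underclock below is a hypothetical example, not measured Vega data.

def relative_dynamic_power(voltage_ratio, clock_ratio):
    """Dynamic power relative to stock for the given voltage and clock ratios."""
    return voltage_ratio ** 2 * clock_ratio

v_ratio, f_ratio = 0.90, 0.95   # -10% voltage, -5% clocks
print(f"Power: {relative_dynamic_power(v_ratio, f_ratio):.0%} of stock")  # ~77% of stock power
print(f"Performance: roughly {f_ratio:.0%} of stock")                     # perf roughly tracks clocks
```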

shikamaru317 said:

The TDP rating on the Radeon 7 is probably higher than its actual power usage; they probably just rated it higher than it actually uses to be on the safe side and to have extra room for overclockers. I'm not buying that it has the same power consumption as the Vega 64 Air Cooled when it is on the 7nm process as opposed to 14nm for Vega 64; after all, it only has about 1.2 TFLOPS more power than the Vega 64. TFLOPS are far from the best measure of real world performance, but I'm only expecting about a 30-35% improvement over Vega 64 in AMD optimized games and probably only a 15-20% improvement over Vega 64 in Nvidia optimized games, so there is just no way it has the same power consumption despite being on 7nm.

Keep in mind they are the same GPU. Vega 7's TDP is certainly real-world and comparable to Vega 64.

Vega 7 wasn't built specifically for 7nm, remember; Vega's successor will be... And that means the chip layout will be better optimized for the 7nm process.

Another thing to keep in mind is that nVidia, AMD and Intel and all the other chip manufacturers actually calculate TDP differently, so none are directly comparable.
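For what it's worth, the roughly 1.2 TFLOPS gap quoted above falls straight out of shader count and boost clock (FP32 rate = 2 ops per shader per clock). A rough check using the commonly listed specs, with boost clocks again being approximate:

```python
# FP32 throughput = 2 ops (one FMA) per shader per clock.
# Boost clocks are the commonly listed figures and vary a little by source.

def fp32_tflops(shaders, boost_ghz):
    return 2 * shaders * boost_ghz / 1000.0

vega_64    = fp32_tflops(4096, 1.546)  # ~12.7 TFLOPS
radeon_vii = fp32_tflops(3840, 1.800)  # ~13.8 TFLOPS
print(f"Vega 64   : {vega_64:.1f} TFLOPS")
print(f"Radeon VII: {radeon_vii:.1f} TFLOPS")
print(f"Delta     : {radeon_vii - vega_64:.1f} TFLOPS")  # ~1.2
```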

shikamaru317 said:

As for Navi, this is more than a simple refresh like the RX 500 series was; Navi is to Polaris what Polaris was to Volcanic Islands. The gap in years between Polaris and Navi is actually one year longer than the gap between Volcanic Islands and Polaris, and we went from a 190 watt TDP on the R9 285 to a 150 watt TDP on the RX 480, with an increase from 3.3 TFLOPS up to 5.2 TFLOPS. The RX 3080, or whatever the top end Navi chipset ends up being called, might not be quite as impressive as the rumors suggest (2070/Vega 64 tier performance at a 150 watt TDP for $250), but I'd bet it will have at least 2060/Vega 56 tier performance at 170 watts for $250. And that is the chipset that Sony will most likely be using in the PS5; all signs point to Navi for PS5. MS needs to offer something with the same or better performance that doesn't use so much more power that the cooling fans sound like a jet engine (*cough* PS4 Pro *cough*).

The RX 480 wasn't much of an upgrade over the R9 390X though. But it did use significantly less power.
https://www.anandtech.com/bench/product/1746?vs=1748

In short, the improvements that Polaris had over Volcanic Islands were pretty inconsequential on the performance front.
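Taking the TDP and TFLOPS figures quoted above at face value, the perf-per-watt jump on paper works out like this (rated TDP isn't the same as measured board power, so this is ballpark only):

```python
# Ballpark perf-per-watt using the TFLOPS and rated TDPs quoted above.
# TDP is a rating, not measured board power.

cards = {
    "R9 285": {"tflops": 3.3, "tdp_w": 190},
    "RX 480": {"tflops": 5.2, "tdp_w": 150},
}

for name, spec in cards.items():
    print(f"{name}: {spec['tflops'] / spec['tdp_w'] * 1000:.1f} GFLOPS per watt")

# ~17.4 vs ~34.7 GFLOPS/W: roughly 2x on paper, even though, as noted above,
# the real-world gaming gap over the R9 390X was much smaller.
```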

As for cooling... Microsoft has its shit sorted on that front; they learned a lesson from the Xbox 360, I think... Hence why the Xbox One, Xbox One S and Xbox One X are all fairly whisper quiet and reliable.

Bofferbrauer2 said:
Pemalite said:

They still marketed it to Gamers. - AMD has a different brand for non-gamers... You might have heard of them under the banner of "Fire Pro".

AMD could have also done what they initially did with Vega... And called it the "Frontier Edition" - Which WAS marketed towards gamers+professionals.

Drawing comparisons to the Titan is a bit silly; they have vastly different price points and target audiences... I mean, the Titan is actually a good card for professionals, gamers and prosumers.

Different price point, certainly; target audiences, not so sure. Sure, they only showed gaming benchmarks, but I wouldn't have done any different - you don't want to steal the thunder of the just-released Radeon Instinct MI50 and MI60, which are much more expensive.

Titan was good for professionals and prosumers, very good even. RTX Titan both cut that down and locked it behind the driver, making it in practice a pure gaming card. You want to work with that? Buy a Quadro RTX 6000 or 8000; same chip but with drivers unlocked to actually be able to work with it. And if you need FP64, still better to get the predecessors as this gen it got radically cut down even in the Quadro line.

Btw, FirePro are pure server cards by now (since 2017 and Polaris); you might have meant their workstation successors, the Radeon Pro series.

The point I am making is that... If you showcase a brand new shiny GPU... With gaming benchmarks... Then you are marketing your GPU to gamers. It really is as simple as that...
And if your GPU doesn't measure up to the competition, then you bring ridicule and condemnation upon yourself... And you really have no one else to blame but yourself.



--::{PC Gaming Master Race}::--