
Forums - General - Navi Made in Collab with Sony, MS still Using it?

 

Pricing of Xbox vs PS5

Xbox +$150 > PS5: 0 votes (0%)
Xbox +$100 > PS5: 5 votes (14.71%)
Xbox +$50 > PS5: 4 votes (11.76%)
PS5 = Xbox with slight performance boost: 7 votes (20.59%)
PS5 = Xbox with no performance boost: 2 votes (5.88%)
Xbox will not have the pe...: 3 votes (8.82%)
Still too early, wait for MS PR: 13 votes (38.24%)

Total: 34
Pemalite said:

One of the largest Achilles heels of Graphics Core Next is its need for bandwidth... Hence why AMD outfitted Vega 7 with 1TB/s of HBM.

Even when stacking the RX 480 against the 390/390X, the 390/390X often still takes the performance crown... The RX 480 was focused on mainstream performance at a low TDP... And did very well all things considered... It would hit its performance level while using 100W+ less power under load.

The RX 580 is essentially an overclocked RX 480... It was clear Graphics Core Next could not keep pace with nVidia in terms of performance/watt, so AMD drove performance as much as possible, power consumption be damned.

In short, AMD is so far behind nVidia it's almost embarrassing. I haven't seen this kind of massive discrepancy between AMD and nVidia before; even during the Radeon 7500/8500 days ATI was able to keep pace with nVidia somewhat...

In the end it probably means next-gen consoles are likely to be held back by the ball and chain that is Graphics Core Next.

But anyone stating that we will get Geforce 2080Ti performance out of Navi just hasn't been paying attention over the last decade.
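For a rough sense of how much more bandwidth GCN parts carry per unit of compute, here's a quick back-of-the-envelope sketch using commonly quoted peak specs (boost-clock FP32 TFLOPS and memory bandwidth); exact figures vary by board, so treat the ratios as illustrative:

```python
# Rough bytes-per-FLOP comparison using commonly quoted peak specs
# (boost-clock FP32 TFLOPS and memory bandwidth); approximate, not measured.
cards = {
    "Radeon VII (Vega 7)": (13.8, 1024),
    "RTX 2080":            (10.1,  448),
    "R9 390X":             ( 5.9,  384),
    "RX 480":              ( 5.8,  256),
}

for name, (tflops, bw_gbs) in cards.items():
    bytes_per_flop = bw_gbs / (tflops * 1000)  # GB/s divided by GFLOPS
    print(f"{name:20s} {tflops:5.1f} TF  {bw_gbs:5d} GB/s  {bytes_per_flop:.3f} B/FLOP")
```

The GCN parts sit around 0.065-0.074 bytes per FLOP versus roughly 0.044 for the nVidia cards, which is the bandwidth appetite being described here.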

Meh, I wouldn't worry about the architecture a whole lot, because in the long run games are starting to be more optimized than ever before on GCN, and even Nvidia, by their own admission with Turing, carries a good amount of bloat in its silicon, probably due to features that GCN already had. There are lots of things Nvidia changed with Turing to be more on par with GCN's feature set, such as its more flexible memory model (not even Volta has this), access to barycentric coordinates within pixel shaders, async compute, and the scalar unit, so it's no big loss that next-gen consoles are going with GCN again when the indirect competition (Nvidia) is taking a similar route ... 

There are still some things Turing doesn't have compared to GCN, like rapid packed math or shader-specified stencil values ... 

Don't worry about consoles, since developers on those platforms seem to have an easier time matching Nvidia's equivalent theoretical performance. I don't think performance will be a concern because it's the developers' job to figure out the fast/slow paths of the hardware they're working on. Sony could very well do some amazing things with a Radeon VII at hand because they have better tools than what's available on PC ... 

Reaching RTX 2080Ti levels of performance isn't all that far-fetched, depending on the fast/slow paths each piece of hardware is hitting relative to the other ... 



fatslob-:O said:

Meh, I wouldn't worry about the architecture a whole lot

Okay...

fatslob-:O said:

because in the long run games are starting to be more optimized than ever before on GCN

So... Either Architecture matters or it doesn't?

fatslob-:O said:

even Nvidia, by their own admission with Turing, carries a good amount of bloat in its silicon, probably due to features that GCN already had. There are lots of things Nvidia changed with Turing to be more on par with GCN's feature set, such as its more flexible memory model (not even Volta has this), access to barycentric coordinates within pixel shaders, async compute, and the scalar unit, so it's no big loss that next-gen consoles are going with GCN again when the indirect competition (Nvidia) is taking a similar route ... 

Maxwell, Pascal and Turing have a plethora of techniques that simply give nVidia a massive step up in regards to efficiency... These were all lessons that nVidia learned whilst building Tegra.

Things like asynchronous compute are in the Xbox One/Playstation 4... And on the PC it hasn't really translated into AMD having a leg up over nVidia in the PC gaming landscape by any meaningful margin.

fatslob-:O said:

There are still some things Turing doesn't have compared to GCN, like rapid packed math or shader-specified stencil values ...  

Turing has Rapid Packed Math... Or rather, nVidia's version of it.
Hence why Turing's half-precision rate is double its single-precision rate in theoretical flops.

Even some Pascal parts had it.
https://www.anandtech.com/show/10222/nvidia-announces-tesla-p100-accelerator-pascal-power-for-hpc
https://www.anandtech.com/show/13282/nvidia-turing-architecture-deep-dive/4

It has also been a feature of Tegra for a while too. Rapid Packed Math is AMD's marketing term for packing two FP16 operations together.
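As a toy illustration of what "packing two FP16 operations together" means at the storage level, the sketch below packs two half-precision values into the footprint of one 32-bit register slot and reads them back; real hardware does the packing in the register file and issues both halves through a single ALU lane, so this only shows the data-layout half of the trick:

```python
import struct

# Pack two FP16 values into the footprint of a single 32-bit register slot,
# then read the same four bytes back as one uint32 "register image".
packed = struct.pack("<ee", 1.5, -2.25)     # 'e' = IEEE half precision, 2 x 2 bytes
(word,) = struct.unpack("<I", packed)

print(f"packed 32-bit word : 0x{word:08X}")
print("unpacked halves    :", struct.unpack("<ee", packed))
```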

fatslob-:O said:

Don't worry about consoles, since developers on those platforms seem to have an easier time matching Nvidia's equivalent theoretical performance. I don't think performance will be a concern because it's the developers' job to figure out the fast/slow paths of the hardware they're working on. Sony could very well do some amazing things with a Radeon VII at hand because they have better tools than what's available on PC ... 

Reaching RTX 2080Ti levels of performance isn't all that far-fetched, depending on the fast/slow paths each piece of hardware is hitting relative to the other ... 

Metro on PC is a step up over the Xbox One X version on Turing-equivalent hardware.

Consoles can punch above PC-equivalent hardware, and that holds true whether you use nVidia's or AMD's solutions... But the precedent is already done and dusted... Despite games being built with 8th gen Graphics Core Next hardware in mind... nVidia still holds a seriously catastrophic advantage over AMD in almost every regard... With the exception of price.




www.youtube.com/@Pemalite

Pemalite said:

Okay...

So... Either Architecture matters or it doesn't?

Considering the market (consoles) we're talking about, it sure doesn't for the most part, because they have a lot more control over the software side ... 

Pemalite said:

Maxwell, Pascal and Turing have a plethora of techniques that simply give nVidia a massive step up in regards to efficiency... These were all lessons that nVidia learned whilst building Tegra.

Things like asynchronous compute are in the Xbox One/Playstation 4... And on the PC it hasn't really translated into AMD having a leg up over nVidia in the PC gaming landscape by any meaningful margin.

Turing was arguably a step backwards in efficiency compared to Pascal, so I'm not seeing a 'massive' step up compared to before ... 

As for the last line, I'm not surprised, considering PC has shit tools, so many developers continue with shit practices, and it doesn't help that AMD killed their own gfx API ... 

PS4 is arguably a developer's wet dream, since it has goodies like the in-house Razor CPU/GPU profiler, and using GNM is almost like CUDA except for graphics, so you get the benefits of a single-source programming model with more low-level access than either DX12 or Vulkan could provide. Graphics programmers are a lot more productive with a single-source model like CUDA, and they get better performance since they have access to more features ... 

Pemalite said:

Turing has Rapid Packed Math... Or rather, nVidia's version of it.

Hence why Turing's half-precision rate is double its single-precision rate in theoretical flops.

Even some Pascal parts had it.
https://www.anandtech.com/show/10222/nvidia-announces-tesla-p100-accelerator-pascal-power-for-hpc
https://www.anandtech.com/show/13282/nvidia-turing-architecture-deep-dive/4

It has also been a feature of Tegra for a while too. Rapid Packed Math is AMD's marketing term for packing two FP16 operations together.

Only GTX Turing supports rapid packed math; the RTX Turing series has Tensor Cores, which are FAR more limited in flexibility, so they're nearly useless to game programmers ... 

Pemalite said:

Metro on PC is a step up over the Xbox One X version on Turing-equivalent hardware.

Consoles can punch above PC-equivalent hardware, and that holds true whether you use nVidia's or AMD's solutions... But the precedent is already done and dusted... Despite games being built with 8th gen Graphics Core Next hardware in mind... nVidia still holds a seriously catastrophic advantage over AMD in almost every regard... With the exception of price.

Considering that the X1X matches a GTX 1070 (GP104), I'd guess the Turing equivalent would be a little bit under the GTX 1660Ti (TU116) ... (not a surprise when looking at the die sizes of the two: 314mm² vs 284mm²) 

Nvidia holds an advantage over AMD on PC, I don't deny that much is true, but on consoles the advantages don't appear to be all that compelling to the manufacturers. When we see technical comparisons between the Switch and the PS4 (which is at least theoretically 4x faster), benchmarks seem to show that code also manages to run about 4x better on the PS4, so even in similar development environments GCN seems to perform similarly to the Nvidia parts relative to theoretical performance ... 

The Switch and PS4 are consoles with specialized graphics APIs, NVN and GNM respectively, tailored to them, but amazingly enough they pack a similar punch relative to their weight ... 



@Pemalite
I didn't say the PS5 would have 2080Ti performance. I said the leak claimed that a discrete PC card had 2080Ti performance for $430. I never claimed that would be in a console.



It takes genuine talent to see greatness in yourself despite your absence of genuine talent.

eva01beserk said:
@Pemalite
I didn't say the PS5 would have 2080Ti performance. I said the leak claimed that a discrete PC card had 2080Ti performance for $430. I never claimed that would be in a console.

I never mentioned the Playstation 5.
I was comparing Vega 7/Navi/2080Ti.

Navi isn't going to match a Geforce RTX 2080Ti. Simple as that... Regardless of whether it's PC or console; it's Polaris's successor, not Vega's.

fatslob-:O said:
Pemalite said:

Okay...

So... Either Architecture matters or it doesn't?

Considering the market (consoles) we're talking about, it sure doesn't for the most part, because they have a lot more control over the software side ... 

Well it does matter to a degree. Always has.
If all the 8th gen consoles had Rapid Packed Math for example, then developers would use it, but sadly that isn't the case as it wasn't bolted onto Graphics Core Next until after the 8th gen base consoles launched.

fatslob-:O said:
Pemalite said:

Maxwell, Pascal and Turing have a plethora of techniques that simply give nVidia a massive step up in regards to efficiency... These were all lessons that nVidia learned whilst building Tegra.

Things like asynchronous compute are in the Xbox One/Playstation 4... And on the PC it hasn't really translated into AMD having a leg up over nVidia in the PC gaming landscape by any meaningful margin.

Turing was arguably a step backwards in efficiency compared to Pascal, so I'm not seeing a 'massive' step up compared to before ... 

As for the last line, I'm not surprised, considering PC has shit tools, so many developers continue with shit practices, and it doesn't help that AMD killed their own gfx API ... 

PS4 is arguably a developer's wet dream, since it has goodies like the in-house Razor CPU/GPU profiler, and using GNM is almost like CUDA except for graphics, so you get the benefits of a single-source programming model with more low-level access than either DX12 or Vulkan could provide. Graphics programmers are a lot more productive with a single-source model like CUDA, and they get better performance since they have access to more features ... 

Turing is actually more efficient than Pascal on an SM-to-SM comparison... However, Turing introduced a lot of new hardware designed for other tasks... Once games start to leverage ray tracing more abundantly, Turing's architecture will shine far more readily.

It's a chicken-and-egg scenario.

Whether nVidia's approach was the right one... still remains to be seen. Either way, AMD still isn't able to match Turing, despite Turing's pretty large investment in non-rasterization technologies that take up a ton of die space... Which is extremely telling.

fatslob-:O said:
Pemalite said:

Turing has Rapid Packed Math... Or rather, nVidia's version of it.

Hence why Turing's half-precision rate is double its single-precision rate in theoretical flops.

Even some Pascal parts had it.
https://www.anandtech.com/show/10222/nvidia-announces-tesla-p100-accelerator-pascal-power-for-hpc
https://www.anandtech.com/show/13282/nvidia-turing-architecture-deep-dive/4

It has also been a feature of Tegra for a while too. Rapid Packed Math is AMD's marketing term for packing two FP16 operations together.

Only GTX Turing supports rapid packed math; the RTX Turing series has Tensor Cores, which are FAR more limited in flexibility, so they're nearly useless to game programmers ... 

Well, it's only early days yet. Turing is only the start of nVidia's efforts to invest in Tensor Cores.

In saying that... routing FP16 through the Tensor Cores has one massive advantage: it means that Turing can dual-issue FP16 and FP32/INT32 operations at the same time, giving the warp scheduler another option to keep the SM partition busy.

So there are certainly a few "pros" to the "cons" you have outlined.


fatslob-:O said:
Pemalite said:

Metro on PC is a step up over the Xbox One X version on Turing-equivalent hardware.

Consoles can punch above PC-equivalent hardware, and that holds true whether you use nVidia's or AMD's solutions... But the precedent is already done and dusted... Despite games being built with 8th gen Graphics Core Next hardware in mind... nVidia still holds a seriously catastrophic advantage over AMD in almost every regard... With the exception of price.

Considering that the X1X matches a GTX 1070 (GP104), I'd guess the Turing equivalent would be a little bit under the GTX 1660Ti (TU116) ... (not a surprise when looking at the die sizes of the two: 314mm² vs 284mm²) 

Nvidia holds an advantage over AMD on PC, I don't deny that much is true, but on consoles the advantages don't appear to be all that compelling to the manufacturers.

In some instances the GTX 1070 pulls ahead of the Xbox One X, and sometimes rather significantly. (Remember, I also own an Xbox One X.)
Often the Xbox One X matches my old Radeon RX 580 in most games... No way would I be willing to say it matches a 1070 across the board though... Especially when the Xbox One X is generally sacrificing effects for resolution/framerate.

fatslob-:O said:

When we see technical comparisons between the Switch and the PS4 (which is at least theoretically 4x faster), benchmarks seem to show that code also manages to run about 4x better on the PS4, so even in similar development environments GCN seems to perform similarly to the Nvidia parts relative to theoretical performance ... 

The Switch and PS4 are consoles with specialized graphics APIs, NVN and GNM respectively, tailored to them, but amazingly enough they pack a similar punch relative to their weight ... 

I would place the Playstation 4 at more than 4x faster. It has far more functional units at its disposal; granted, Maxwell is also a far more efficient architecture... The Playstation 4 also has clockspeed and bandwidth on its side.

I am surprised the Switch gets as close as it does to be honest.




www.youtube.com/@Pemalite

Pemalite said:
eva01beserk said:
@Pemalite
I didn't say the PS5 would have 2080Ti performance. I said the leak claimed that a discrete PC card had 2080Ti performance for $430. I never claimed that would be in a console.

I never mentioned the Playstation 5.
I was comparing Vega 7/Navi/2080Ti.

Navi isn't going to match a Geforce RTX 2080Ti. Simple as that... Regardless of whether it's PC or console; it's Polaris's successor, not Vega's.



The leak is based on the YouTuber AdoredTV, which I deemed fake long ago; here's the full lineup:

Funny thing about this lineup: it matches the next-gen consoles very nicely. Take the R5 3500G, disable 2 CUs for better yields and clock it at 1.8GHz, and you get 4.1 teraflops, which is a very good fit for Xbox Lockhart; Navi 12 with 4 CUs disabled, clocked at 1.8GHz, adds up to 8.3TF and matches the rumored PS5. And then the last one, Navi 10 at 48 CUs clocked at 1.95GHz, gives 12TF, which is a nice fit for Xbox Anaconda.

Now, the clock-speed is based on the leaked Gonzalo, which indicates a CPU clocked at 3.2GHz and a GPU clocked at 1.8GHz for the PS5; since consoles usually have lower clock-speeds than desktop parts, I assume Navi can hit 2-2.1GHz without much issue.

About Navi hitting Geforce 2080Ti performance, I don't think it's impossible. Vega 7 is close to the Geforce 2080, and if you assume Navi can hit 2-2.1GHz, then a 64CU GPU with maybe a 5% per-TF increase from the architecture gets very close to the Geforce 2080Ti, about 10% under, which is what this leak suggests. And a 64CU Navi GPU should have a die size around 300mm²+, so even the price is not out of whack, with GDDR6 of course.
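For what it's worth, the teraflop figures quoted above do follow from the usual GCN arithmetic (CUs x 64 ALUs x 2 ops per clock x clock). A quick sanity check, using the rumored CU counts and clocks rather than any confirmed specs:

```python
def gcn_tflops(cus: int, clock_ghz: float) -> float:
    """Peak FP32 TFLOPS for a GCN-style GPU: CUs x 64 ALUs x 2 ops (FMA) x clock."""
    return cus * 64 * 2 * clock_ghz / 1000

# CU counts and clocks below are the rumor's figures, not confirmed specs.
print(f"Lockhart guess (20 CU - 2 @ 1.80 GHz): {gcn_tflops(18, 1.80):.1f} TF")
print(f"PS5 guess      (40 CU - 4 @ 1.80 GHz): {gcn_tflops(36, 1.80):.1f} TF")
print(f"Anaconda guess (48 CU     @ 1.95 GHz): {gcn_tflops(48, 1.95):.1f} TF")
print(f"Big Navi guess (64 CU     @ 2.00 GHz): {gcn_tflops(64, 2.00):.1f} TF")
```

That gives roughly 4.1, 8.3, 12.0 and 16.4 TF respectively, matching the numbers in the post.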

Even though I still think this is a fake leak, the rumored next-gen consoles give it some credibility.

Last edited by Trumpstyle - on 04 May 2019

6x master league achiever in starcraft2

Beaten Sigrun on God of war mode

Beaten DOOM ultra-nightmare with NO endless ammo-rune, 2x super shotgun and no decoys on ps4 pro.

1-0 against Grubby in Wc3 frozen throne ladder!!

Trumpstyle said:

Funny thing about this lineup: it matches the next-gen consoles very nicely. Take the R5 3500G, disable 2 CUs for better yields and clock it at 1.8GHz, and you get 4.1 teraflops, which is a very good fit for Xbox Lockhart; Navi 12 with 4 CUs disabled, clocked at 1.8GHz, adds up to 8.3TF and matches the rumored PS5. And then the last one, Navi 10 at 48 CUs clocked at 1.95GHz, gives 12TF, which is a nice fit for Xbox Anaconda.


One thing is for sure: it will have fewer than 64 CUs, a Ryzen CPU, and GDDR6. The exact CU count/clockspeed ratio, bandwidth and RAM capacity we have zero clue about.

It's all well and good to assert various specs, but we still need to take it with a grain of salt.

Trumpstyle said:

Now, the clock-speed is based on the leaked Gonzalo, which indicates a CPU clocked at 3.2GHz and a GPU clocked at 1.8GHz for the PS5; since consoles usually have lower clock-speeds than desktop parts, I assume Navi can hit 2-2.1GHz without much issue.


A CPU clockspeed of 3.2GHz is actually very conservative for Zen 2, so it would be a good fit for next gen.
The GPU clock is rather high; we haven't seen GCN hit those rates at a base clock yet, but if Navi is driving up clockrates instead of functional units, that wouldn't necessarily be a bad thing.

Trumpstyle said:

About Navi hitting Geforce 2080Ti performance, I don't think it's impossible. Vega 7 is close to the Geforce 2080, and if you assume Navi can hit 2-2.1GHz, then a 64CU GPU with maybe a 5% per-TF increase from the architecture gets very close to the Geforce 2080Ti, about 10% under, which is what this leak suggests. And a 64CU Navi GPU should have a die size around 300mm²+, so even the price is not out of whack, with GDDR6 of course.

Even though I still think this is a fake leak, the rumored next-gen consoles give it some credibility.


Keep in mind that Vega 7 reaches its performance level with 1 terabyte per second worth of bandwidth. 1TB/s.
Next gen consoles aren't going to have that.

Navi is a mainstream part, not a high-end one, so expecting Vega 7 levels of performance is a little bit silly in my opinion.
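To put that 1TB/s in perspective, GDDR6 bandwidth is just bus width times per-pin data rate; here's a rough sketch of what plausible console-style configurations would deliver (the bus widths and pin speeds below are illustrative assumptions, not leaked specs):

```python
def gddr6_bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s: bus width (bits) x per-pin data rate (Gb/s) / 8."""
    return bus_bits * gbps_per_pin / 8

# Illustrative console-style GDDR6 configurations (assumptions, not leaked specs)
for bus in (256, 320, 384):
    print(f"{bus}-bit @ 14 Gbps: {gddr6_bandwidth_gbs(bus, 14):.0f} GB/s")
print("Radeon VII (HBM2)  : 1024 GB/s, for comparison")
```

Even a wide 384-bit GDDR6 setup lands around 672 GB/s, well short of the 1TB/s Vega 7 leans on.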

This is like Polaris all over again: people expected high-end performance out of it... It was always a mid-range part... and people were consequently let down upon its release.
Overhyped. Not enough substance.

At the end of the day... It's all Graphics Core Next.




www.youtube.com/@Pemalite

Power means nothing if you don't put out games. Hopefully MS gets away from worrying so much about BC and the console's TV capabilities next gen. Worry more about having quality first-party games.



Pemalite said:

Well it does matter to a degree. Always has.
If all the 8th gen consoles had Rapid Packed Math for example, then developers would use it, but sadly that isn't the case as it wasn't bolted onto Graphics Core Next until after the 8th gen base consoles launched.

Then the argument should be about features instead of the architecture. There's no reason why we couldn't opt in to hardware extensions for the same effect ... 

Pemalite said:

Turing is actually more efficient than Pascal on an SM-to-SM comparison... However, Turing introduced a lot of new hardware designed for other tasks... Once games start to leverage ray tracing more abundantly, Turing's architecture will shine far more readily.

It's a chicken-and-egg scenario.

Whether nVidia's approach was the right one... still remains to be seen. Either way, AMD still isn't able to match Turing, despite Turing's pretty large investment in non-rasterization technologies that take up a ton of die space... Which is extremely telling.

Higher performance per SM came at the expense of a 40%+ larger die area compared to its predecessor, so Nvidia is not as flawless in its execution of efficiency as you seem to believe ... 

As far as ray tracing is concerned, there's no reason to believe that either AMD or Intel couldn't one-up whatever Turing has, because there's still potential to improve it with new extensions such as traversal shaders, more efficient acceleration structures, and beam tracing! It's far from guaranteed that Turing is built for the future of ray tracing when the yet-to-be-released new consoles could very well obsolete the way games design ray tracing around Turing hardware with a possibly superior feature set ... 

Turing invested just as much elsewhere, such as in tensor cores, texture-space shading, mesh shaders, independent thread scheduling, variable rate shading, and some GCN features (barycentric coordinates, flexible memory model, scalar ops), all of which can directly enhance rasterization as well, so it's just mainstream perception that overhypes its focus on ray tracing ... 

There are other ways to bloat Nvidia's architectures in the future with features from consoles they still haven't adopted, like global ordered append and shader-specified stencil values ...

Pemalite said:

Well, it's only early days yet. Turing is only the start of nVidia's efforts to invest in Tensor Cores.


In saying that... routing FP16 through the Tensor Cores has one massive advantage: it means that Turing can dual-issue FP16 and FP32/INT32 operations at the same time, giving the warp scheduler another option to keep the SM partition busy.

So there are certainly a few "pros" to the "cons" you have outlined.

Tensor cores are pretty much DOA, since consoles won't be adopting them and AMD isn't interested in the idea either. Not a surprise, since there are hardly any applications for them beyond image post-processing, and even then they don't provide a clear benefit over existing methods ... 

Compare that to double-rate FP16 in shaders, which is far more flexible and can be used for many other things besides post-processing, such as water rendering, ambient occlusion, and signed-distance-field collision for hair physics ... 

I don't think the real-time graphics industry is headed in the direction of tensor cores, since there are very few compelling use cases for them ...

Pemalite said:

In some instances the GTX 1070 pulls ahead of the Xbox One X, and sometimes rather significantly. (Remember, I also own an Xbox One X.)
Often the Xbox One X matches my old Radeon RX 580 in most games... No way would I be willing to say it matches a 1070 across the board though... Especially when the Xbox One X is generally sacrificing effects for resolution/framerate.

Generally speaking you're going to need a GTX 1070 to get the same experience, as the X1X is pretty definitively ahead of the GTX 1060 at the same settings, and by extension the RX 580 as well ...

Pemalite said:

I would place the Playstation 4 at more than 4x faster. It has far more functional units at its disposal; granted, Maxwell is also a far more efficient architecture... The Playstation 4 also has clockspeed and bandwidth on its side.

I am surprised the Switch gets as close as it does to be honest.

From a GPU compute perspective the PS4 is roughly ~4.7x faster, and the same goes for texture sampling depending on formats, but its geometry performance is only a little over 2x that of the Switch, so it's not a total slam dunk in theoretical performance; developers need to use features like async compute to mask the relatively low geometry performance ... 
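The ~4.7x compute figure does check out from the published shader counts and clocks (using the Switch's docked GPU clock; portable mode is lower still):

```python
def fp32_gflops(shader_alus: int, clock_ghz: float) -> float:
    """Peak FP32 GFLOPS: ALUs x 2 ops per clock (FMA) x clock."""
    return shader_alus * 2 * clock_ghz

ps4    = fp32_gflops(1152, 0.800)  # 18 GCN CUs x 64 ALUs at 800 MHz
switch = fp32_gflops(256, 0.768)   # Maxwell-based Tegra X1, docked GPU clock

print(f"PS4    : {ps4:6.0f} GFLOPS")
print(f"Switch : {switch:6.0f} GFLOPS (docked)")
print(f"Ratio  : {ps4 / switch:.1f}x")
```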

The Switch gets as 'close' as it does (it still can't run many AAA games) because NV's driver/shader compiler team is willing to take responsibility for performance, so it doesn't matter which platform you develop on for Nvidia hardware when their entire software stack is more productive ...  

For AMD on the PC side, they can't change practices that easily, so I can only imagine their envy of Sony being able to shove a whole new gfx API down every developer's throat ...  



Trumpstyle said:
Pemalite said:

I never mentioned the Playstation 5.
I was comparing Vega 7/Navi/2080Ti.

Navi isn't going to match a Geforce RTX 2080Ti. Simple as that... Regardless of whether it's PC or console; it's Polaris's successor, not Vega's.



The leak is based on the YouTuber AdoredTV, which I deemed fake long ago; here's the full lineup:

Funny thing about this lineup: it matches the next-gen consoles very nicely. Take the R5 3500G, disable 2 CUs for better yields and clock it at 1.8GHz, and you get 4.1 teraflops, which is a very good fit for Xbox Lockhart; Navi 12 with 4 CUs disabled, clocked at 1.8GHz, adds up to 8.3TF and matches the rumored PS5. And then the last one, Navi 10 at 48 CUs clocked at 1.95GHz, gives 12TF, which is a nice fit for Xbox Anaconda.

Now, the clock-speed is based on the leaked Gonzalo, which indicates a CPU clocked at 3.2GHz and a GPU clocked at 1.8GHz for the PS5; since consoles usually have lower clock-speeds than desktop parts, I assume Navi can hit 2-2.1GHz without much issue.

About Navi hitting Geforce 2080Ti performance, I don't think it's impossible. Vega 7 is close to the Geforce 2080, and if you assume Navi can hit 2-2.1GHz, then a 64CU GPU with maybe a 5% per-TF increase from the architecture gets very close to the Geforce 2080Ti, about 10% under, which is what this leak suggests. And a 64CU Navi GPU should have a die size around 300mm²+, so even the price is not out of whack, with GDDR6 of course.

Even though I still think this is a fake leak, the rumored next-gen consoles give it some credibility.

It doesn't, with either the lineup or the price. It just doesn't hold up at all if you look closely enough:

  1. Navi 12 having so many different core counts. GCN4 had Polaris 10 with 36 CUs, Polaris 11 with 16 CUs and Polaris 12 with 8 CUs; add to that Polaris 22 with 24 CUs in the RX Vega M, which is paired with an Intel CPU. In other words, every different CU count also resulted in a different Polaris version name. Having Navi 12 range from 15 to 40 CUs is thus patently wrong.
  2. R3 3300G/R5 3500G. So much wrong with those. First, AMD retired the R3/5/7/9 naming scheme with Polaris already; there's no reason to bring it back, especially not if the top end isn't going to be R7/R9 but still RX. Unless that's meant to stand for Ryzen 3/5, of course. Then, the CU counts are impossibly high. There's no way they could be fed through DDR4 without choking on the bandwidth; even with DDR4-3200, efficiently feeding more than 12 CUs is next to impossible (see the rough bandwidth sketch after this list). Having so many CUs would just bloat the chip size, making them more expensive for AMD to produce.
  3. Those prices are unbelievably, impossibly low. While it's clear that AMD will want to undercut NVidia's prices to gain market share, they wouldn't undercut them by such a massive amount. I mean, the RTX 2080 is over 1000€, and the proposed RX 3080 would already come close with less than a quarter of the price? No can do. They would be even cheaper than their own predecessors, which are already at bargain-bin prices due to the end of the cryptomining boom and the resulting high stocks that need to be cleared out. Not only would AMD not make money at those prices, they would also ensure that the rest of the Polaris and Vega cards became instantly unsellable. In other words, AMD would lose money with those prices, and the goodwill of the board partners who build the actual graphics cards along with it.
  4. The TDP values: AMD was trailing behind NVidia by a lot, and with these, they would surpass NVidia again, and not just marginally so. The recently released 1650, for instance, trails an RX 570 by over 20% if locked to 75W, and an RX 3060 is supposed to be on par with an RX 580? More power than a Vega 64 LC for less than half the TDP? That's simply not realistic.
  5. The VRAM sizes. The RX 3060 having around RX 580 power but only half the memory? I really don't think so. And the Vega 64 could already have used more than 8GB, so the 3080 being stuck with it while being more powerful, while not impossible, would still be a major disappointment.
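On point 2, a rough sketch of why DDR4 chokes a big iGPU: dual-channel DDR4-3200 tops out around 51GB/s shared with the CPU, a fraction of the bytes per FLOP that even a mid-range discrete card gets (the 20 CU count and 1.4GHz iGPU clock below are assumptions for illustration, not leaked specs):

```python
def ddr4_dual_channel_gbs(mts: int) -> float:
    """Peak dual-channel DDR4 bandwidth in GB/s: MT/s x 8 bytes x 2 channels."""
    return mts * 8 * 2 / 1000

def gcn_tflops(cus: int, clock_ghz: float) -> float:
    """Peak FP32 TFLOPS for a GCN-style iGPU: CUs x 64 ALUs x 2 ops x clock."""
    return cus * 64 * 2 * clock_ghz / 1000

bw = ddr4_dual_channel_gbs(3200)      # shared between CPU and iGPU
tf = gcn_tflops(20, 1.4)              # hypothetical 20 CU APU at an assumed 1.4 GHz
rx580 = 256 / (6.2 * 1000)            # RX 580: ~6.2 TF fed by 256 GB/s of GDDR5

print(f"DDR4-3200 dual channel : {bw:.1f} GB/s")
print(f"20 CU APU @ 1.4 GHz    : {tf:.2f} TF -> {bw / (tf * 1000):.3f} B/FLOP")
print(f"RX 580 (for reference) : {rx580:.3f} B/FLOP")
```

The hypothetical APU ends up with roughly a third of the bytes per FLOP of an RX 580, and that's before the CPU takes its share of the bus.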

So no, nothing realistic about the leak at all.