
Rumor: PS5 & Anaconda Scarlett GPU on par with RTX 2080, Xbox exclusives focus on cross-gen, developers complain about Lockhart. UPDATE: Windows Central says Xbox Anaconda targets 12 teraflops

 

What do you think?

* I am excited for next gen: 22 (61.11%)
* I cannot wait to play next gen consoles: 4 (11.11%)
* I need to find another th...: 2 (5.56%)
* I worried about next gen: 8 (22.22%)

Total: 36
DonFerrari said:
EricHiggin said:

How sure are you that XB1S sales aren't being propped up by XB1X, and would be even worse without it?

The XB1 had its own issues. Just because RROD was fixed, only to be replaced by other things that were seen as problems, doesn't automatically make it a purchase this time around.

People may stick with or buy into last gen, but the main point is about next gen sales. If someone isn't going to bother with next gen, why take them into consideration?

How many people bought Wii just because of motion controls? Most of those people bought in because it was also cheap. One main feature to influence a customer to buy is great if you can provide it, but quite often there's more than just one thing that makes you want to buy a certain console.

We can't be sure, but considering sales only dropped after the X1X, there isn't any strong showing that either the PS4 Pro or the X1X improved sales of the base model (the bump we saw with the X1X was mostly recovery from a drop it caused itself, since it was announced so much earlier, and then near launch the X1S price reduction was announced too early as well).

Sure, the X1 has its issues, and the PS4 does as well. But the point was that the PS3 had a lot more hurdles to overcome and was still able to do it.

Sure, the point is about next gen, but we were talking about affordability as well. So we would need someone who wants a new machine that is next gen (and won't be deterred by MS saying their games will keep being cross-gen, so an X1 would still suffice), and who also wants the cheapest one, without caring that the performance is much lower (let's say 1080p instead of 4K) for a mere 100 USD difference. I don't really think that is such a significant number of people.

Most people I know, and the news we have, said that people bought it because of the motion controls. The evidence for that is that for the first 2 years or so people were paying well above MSRP to buy one.

I'm a very conservative person, so for me to go against what we have historically seen in console sales, I would need hard evidence rather than speculation about a future state that would be very different from what has already happened, without much difference in the present situation. So don't feel bad if I don't agree with you =p

No hard feelings, no forced assimilation, just worthwhile thoughts.

While certain info isn't available to make a reliable conclusion, I don't focus as much on the past. It's certainly necessary and useful, but too much focus on what was, without enough consideration about what is, will make discerning the future less likely.

I mean, who could have foreseen this?



Pemalite said:
HollyGamer said:

Yes, because the games you played use engines that were built with 2008-and-newer CPUs as the baseline. Imagine if game developers still used the SNES as the baseline for game design today; we might still be stuck on 2D, with ray tracing underutilized even though we have it.

Having the Xbox One as the baseline means you're stuck on the old Jaguar cores while underutilizing the tech available on Scarlett: the SSD, AVX-256 on Ryzen 3000, faster RAM, ray tracing, the per-geometry image quality features only available on RDNA, etc. That's not counting the machine learning tech that could be used to enhance gameplay, and a lot of other possibilities if Scarlett were the baseline.

As a game designer you are limited by the canvas; you need a bigger canvas and better ink.


Engines are simply scalable; that is all there is to it. That doesn't change when new console hardware comes out with new hardware features that get baked into new game engines.

You can turn effects down or off, you can use different (less demanding) effects in place of more demanding ones, and more. That is why a game like Doom, The Witcher 3, Overwatch, or Wolfenstein 2 scales from high-end PC CPUs right down to the Switch... A game like The Witcher 3 still fundamentally plays the same as the PC variant despite the catastrophic divide in CPU capabilities.

Scaling a game from 3 CPU cores @ 1GHz on the Switch, to 6 CPU cores @ 1.6GHz on the PlayStation 4, to 8+ CPU cores @ 3.4GHz on the PC proves just that.

The Switch was certainly not the baseline for those titles; the Switch didn't even exist when those games were being developed. Yet a big open-world game like The Witcher 3 plays great, and game design didn't suffer.

I mean, I get what you are saying: developers do try to build a game to a specific hardware set, but that doesn't mean you cannot scale a game downwards or upwards after the fact.

At the end of the day, things like Ray Tracing can simply be turned off, and you can reduce geometric complexity in scenes by playing around with Tessellation factors and more, and thus scale across different hardware.

drkohler said:

blablabla removed, particularly completely irrelevant "command processor special sauce" and other silly stuff.
Ray tracing doesn't use floating point operations? I thought integer ray tracing was a more or less failed attempt in the early 2000s so colour me surprised.

You have misconstrued my statements.

The Single Precision Floating Point numbers being propagated around are NOT including the Ray Tracing capabilities of the part, because FLOPS are a function of clock rate, multiplied by the number of functional CUDA/RDNA/GCN shader units, multiplied by the number of instructions per clock. It excludes absolutely everything else, and that includes the Ray Tracing capabilities.
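To make that concrete, here is the formula as a quick Python sketch (my own illustration, not anything official); plugging in a hypothetical 36-CU RDNA part at 2GHz reproduces the 9.2TF figure floating around this thread:

```python
# Peak FP32 FLOPS = clock rate x functional shader units x instructions per clock.
# A fused multiply-add counts as 2 operations, hence ops_per_clock = 2.
def theoretical_tflops(clock_mhz: float, shader_units: int, ops_per_clock: int = 2) -> float:
    return clock_mhz * 1e6 * shader_units * ops_per_clock / 1e12

# Hypothetical 36-CU RDNA GPU: 36 CUs x 64 shaders = 2304 shader units at 2000 MHz.
print(theoretical_tflops(2000, 2304))  # ~9.2 TFLOPS; RT hardware adds nothing to this number
```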

drkohler said:

Look, as many times as you falsely yell "Flops are irrelevant", you are still wrong.

The technical baseplate for the new console SoCs is identical. AMD has not gone the extra mile to invent different paths for the identical goals of both consoles. Both MS and Sony have likely added "stuff" to the baseplate, but at the end of the day, it is still the same baseplate both companies relied on when they started designing the new SoCs MANY YEARS AGO.

And for ray tracing, which seems to be your pet argument, do NOT expect to see anything spectacular. You can easily drive a $1200 NVidia 2080Ti into the ground using ray tracing; what do you think entire consoles priced around $450-500 are going to deliver on that front?

You can have identical flops with identical chips and still have half the gaming performance.

Thus flops are certainly irrelevant, as they don't account for the capabilities of the entire chip.

Even overclocked, the GeForce GT 1030 DDR4 cannot beat the GDDR5 variant; they are the EXACT same chip, with roughly the same flops.
https://www.gamersnexus.net/hwreviews/3330-gt-1030-ddr4-vs-gt-1030-gddr5-benchmark-worst-graphics-card-2018

nVidia's Ray Tracing on the 2080Ti is not the same as RDNA2's Ray Tracing coming out next year, which is the technology that next-gen consoles are going to leverage, so it's best not to compare.

Plus, developers are still coming to terms with how to implement Ray Tracing more effectively; it is certainly a technology that is a big deal.

DonFerrari said:

CGI, I guess the problem is that the way Pema put it was that flops are totally irrelevant.

But if we are looking at basically the same architecture, with most other things being the same, then looking at the GPU alone, with one being 10TF and the other 12TF, the 10TF one would hardly be the better one.

Now sure, in real-world application, if one has better memory (be it speed, quantity, etc.) or a better CPU, that advantage may be reversed.

So basically, yes, when Pema says it, what he wants to say is that TFLOPs aren't an end-all "simple number that shows which is better", not that they really don't matter at all.

Well. They are irrelevant: it's a theoretical number, not a real-world one. The relevant "flop number" would be one based on the actual, real-world capabilities the chips can achieve.

And as the GeForce GT 1030 example above shows, you can have identical or more flops, but because of other compromises, end up with significantly less performance.

DonFerrari said:

I read all the posts on this thread. And you can't claim Oberon is real; no rumor can be claimed real until official information is given.

Even for consoles already released on the market, the real processing power was never confirmed, because measurements made by people outside the company aren't reliable. For the Switch and Wii U we never discovered the exact performance of their GPUs; we just had good guesses.

So please stop trying to pass rumors off as official information. Also, you can't claim that 4 different rumors are all true.

With the Switch, we know exactly what its capabilities are, because Nintendo is using off-the-shelf Tegra components; we know the clock speed and how many functional units it has as well, thanks to homebrew efforts that cracked the console open.
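For what it's worth, plugging the commonly cited docked Tegra X1 figures (256 Maxwell CUDA cores at 768MHz) into the flops formula from earlier gives:

```python
# Switch docked GPU: 256 CUDA cores x 2 ops/clock x 768 MHz.
print(256 * 2 * 768e6 / 1e12)  # ~0.39 TFLOPS FP32
```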

The Wii U is still a big unknown because it was a semi-custom chip, though we do know it's an AMD-based VLIW GPU with an IBM PowerPC CPU.


And exactly, you can't claim 4 different rumors as all being true.





I agree with this, but now I'm going to have to turn Witcher 3 settings to the lowest possible on my PC, just to see how shitty it looks. And then I'll compare it to Switch screenshots to see how far below their baseline they had to go to make it run on Switch! :D 



HollyGamer said:
12 teraflops confirmed, LOL. In GCN terms, a Navi teraflop is equal to about 1.4 GCN teraflops, so 12 x 1.4 = 16.8 GCN-equivalent teraflops. Against the Xbox One's ~1.3 TF, that's 16.8 / 1.3 = ~12.9 times more powerful than the Xbox One.

Flops are not everything when it comes to the final look of games, but they're there as a theoretical measure. So it's a humongous improvement.

So we still need to confirm PS5 performance before we close this thread.

False.
Otherwise the Radeon RX 5500 XT, with its 8.6-10.1 Teraflops of power, would have been able to decisively beat the Radeon RX 590 with its 6.7-7.1 Teraflops of power.

Plus, you are asserting that Navi's Teraflops are 1.4x better than GCN's, meaning the RX 5500 XT should be the equivalent of 9.38-9.94 Teraflops of the Radeon RX 590, which should see the Radeon RX 5500 XT with a significant performance advantage over the RX 590, and we know THAT didn't happen.

https://www.anandtech.com/show/15206/the-amd-radeon-rx-5500-xt-review

This is why using your flops metric is an absolute joke; it's absolutely useless.

Last edited by Pemalite - on 13 December 2019

--::{PC Gaming Master Race}::--

Pemalite said:
HollyGamer said:
12 teraflops confirmed, LOL. In GCN terms, a Navi teraflop is equal to about 1.4 GCN teraflops, so 12 x 1.4 = 16.8 GCN-equivalent teraflops. Against the Xbox One's ~1.3 TF, that's 16.8 / 1.3 = ~12.9 times more powerful than the Xbox One.

Flops are not everything when it comes to the final look of games, but they're there as a theoretical measure. So it's a humongous improvement.

So we still need to confirm PS5 performance before we close this thread.

False.
Otherwise the Radeon RX 5500 XT, with its 8.6-10.1 Teraflops of power, would have been able to decisively beat the Radeon RX 590 with its 6.7-7.1 Teraflops of power.

Plus, you are asserting that Navi's Teraflops are 1.4x better than GCN's, meaning the RX 5500 XT should be the equivalent of 9.38-9.94 Teraflops of the Radeon RX 590, which should see the Radeon RX 5500 XT with a significant performance advantage over the RX 590, and we know THAT didn't happen.

https://www.anandtech.com/show/15206/the-amd-radeon-rx-5500-xt-review

This is why using your flops metric is an absolute joke; it's absolutely useless.

I agree that using a pure teraflop metric is wrong, but if you have other GPUs that prove the other way around, then it can be used as a benchmark as well; the RX 5500 is a bad GPU example. You can use the RX 5700 or 5700 XT as an example, though.

You can check this video, though.



HollyGamer said:

I agree that using a pure teraflop metric is wrong, but if you have other GPUs that prove the other way around, then it can be used as a benchmark as well; the RX 5500 is a bad GPU example. You can use the RX 5700 or 5700 XT as an example, though.

You can check this video, though.

No. You can't. It's inaccurate... Which my Radeon RX 5500 example proves without a doubt.
Not only is the Radeon RX 5500 a newer and more efficient architecture... It also has more flops than the Radeon RX 590. Yet it cannot decisively out-benchmark it.

Ergo. Flops are useless.

Another example is the GeForce GT 1030... The DDR4 variant with a 250MHz core overclock has 1.076 Teraflops of performance (1402MHz core x 2 instructions per clock x 384 CUDA cores), so it should decisively beat the GDDR5 variant at 0.942 Teraflops, right? Wrong.

Even though they are based on the same GPU, the GDDR5 variant wins hands down, almost doubling performance in some instances.
https://www.techspot.com/review/1658-geforce-gt-1030-abomination/
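A rough sketch of why: put the memory bandwidth next to those flops numbers (both cards sit on a 64-bit bus; the 2100MT/s and 6000MT/s effective rates are the published specs):

```python
# Bandwidth in GB/s = effective transfer rate x bus width in bytes.
def bandwidth_gbs(effective_mts: float, bus_bits: int = 64) -> float:
    return effective_mts * 1e6 * (bus_bits / 8) / 1e9

print(bandwidth_gbs(2100))  # DDR4 variant:  ~16.8 GB/s (the 1.076 TFLOPS card)
print(bandwidth_gbs(6000))  # GDDR5 variant: ~48.0 GB/s (the 0.942 TFLOPS card)
# The card with more flops has roughly a third of the bandwidth, hence the benchmark gap.
```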

Or let's take the Radeon 5870 and 7850. The Radeon 5870 has 2.72 Teraflops of performance versus the 7850's 1.76 Teraflops; you would assume that the Radeon 5870, with almost an extra 1 Teraflop of performance, would be the easy winner, right? Again. Wrong.
The 7850 is the clear victor. - https://www.anandtech.com/bench/product/1062?vs=1076

Same thing with the Radeon 7750: DDR3 vs GDDR5 variants, with the same number of Gflops, and the DDR3 version even has twice the amount of RAM.
Yet the GDDR5 is the clear victor.
https://www.goldfries.com/computing/gddr3-vs-gddr5-graphic-card-comparison-see-the-difference-with-the-amd-radeon-hd-7750/

So basically in dot point form...

* Newer GPU with more flops = doesn't mean it's faster.
* Same GPUs, one with more flops = doesn't mean it's faster.
* Same GPU with identical flops = doesn't mean it's faster.

So on what planet can we even assume that flops have any kind of relevance or accuracy when every example blows that assumption out of the water?

Please. Do tell. I am all ears.




Bofferbrauer2 said:
Trumpstyle said:
-

The TFlops you're quoting are at peak boost performance, which the GPU can't hold without consuming much more than the values you're posting below. For instance, in the Techpowerup test the GPU only reached 1672 MHz on average, which works out to almost exactly 7.7 TFlops. Undervolting will simply not be enough to get to 10 TFlops, especially since 160W is already the maximum a console can take. And Navi translates any headroom into more clock speed by itself anyway unless you lock the speed, but that only gives around 30-40 MHz in general, while it would need more like 200-300 MHz to make your example work.

However, I agree that I did mess up a bit; I took consumption at max boost and used the TFlops at base clock. Still, my point stands that 10 TFlops is out of reach.
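As a sanity check on that 7.7 TFlops figure, assuming the card in that test is a 2304-shader Navi 10 part (my assumption; the post doesn't name it):

```python
# Average-clock TFLOPS for a 2304-shader (36 CU) Navi GPU at 1672 MHz.
print(2304 * 2 * 1672e6 / 1e12)  # ~7.70 TFLOPS, matching the figure above
```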

Maybe you were correct after all, but you might be missing a few things. We know AMD Zen 3 and RDNA2 will use 7nm+. What does 7nm+ actually mean, though? This is a term AMD uses, not TSMC; everyone assumes it means 7nm EUV, including me until a few days ago. The terms TSMC uses are:

N7, N7+ and N7P

N7 + Navi/RDNA1 = 1.7GHz

7nm EUV + RDNA2 =

The PS5 retail unit will have 40 CUs, but 4 will be disabled: 36 CUs @ 2GHz = 9.2TF

The Xbox Series X will have 56 CUs, but 8 disabled: 48 CUs @ 2GHz = 12TF

Everyone seems to think the PS5 will be above 10TF and maybe even beat the Xbox Series X; they will be in for a shock. This is my guess.
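For reference, the arithmetic behind those two figures, using RDNA's 64 shaders per CU (a sketch of the speculation above, not confirmed specs):

```python
# Peak TFLOPS for the rumored CU counts: CUs x 64 shaders x 2 ops/clock x clock.
def cu_tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000  # scaled to TFLOPS

print(cu_tflops(36, 2.0))  # ~9.2  TFLOPS (rumored PS5: 40 CUs with 4 disabled)
print(cu_tflops(48, 2.0))  # ~12.3 TFLOPS (rumored XSX: 56 CUs with 8 disabled)
```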

Edit: I still think it will be 7nm EUV for next-gen, edited to make it clear

Last edited by Trumpstyle - on 14 December 2019

6x master league achiever in starcraft2

Beaten Sigrun on God of war mode

Beaten DOOM ultra-nightmare with NO endless ammo-rune, 2x super shotgun and no decoys on ps4 pro.

1-0 against Grubby in Wc3 frozen throne ladder!!

Trumpstyle said:
Bofferbrauer2 said:

The TFlops you're quoting are at peak boost performance, which the GPU can't hold without consuming much more than the values you're posting below. For instance, in the Techpowerup test the GPU only reached 1672 MHz on average, which works out to almost exactly 7.7 TFlops. Undervolting will simply not be enough to get to 10 TFlops, especially since 160W is already the maximum a console can take. And Navi translates any headroom into more clock speed by itself anyway unless you lock the speed, but that only gives around 30-40 MHz in general, while it would need more like 200-300 MHz to make your example work.

However, I agree that I did mess up a bit; I took consumption at max boost and used the TFlops at base clock. Still, my point stands that 10 TFlops is out of reach.

Maybe you were correct after all, but you might be missing a few things. We know AMD Zen 3 and RDNA2 will use 7nm+. What does 7nm+ actually mean, though? This is a term AMD uses, not TSMC; everyone assumes it means 7nm EUV, including me until a few days ago. The terms TSMC uses are:

N7, N7+ and N7P

N7 + Navi/RDNA1 = 1.7GHz

N7P + RDNA2 =

The PS5 retail unit will have 40 CUs, but 4 will be disabled: 36 CUs @ 2GHz = 9.2TF

The Xbox Series X will have 56 CUs, but 8 disabled: 48 CUs @ 2GHz = 12TF

Everyone seems to think the PS5 will be above 10TF and maybe even beat the Xbox Series X; they will be in for a shock. This is my guess.

I know that everyone is caught up in this FLOPs talk, but what about Ray Tracing Cores?

Let's say, in theory, that Microsoft and Sony both set out to make a 400mm2 APU. Now it is all about finding a balance between CPU cores, GPU cores, Ray Tracing cores, cache, memory controllers, and such. We can theorize that GPU cores should be equal in size, with both using RDNA and likely the same fab. So if Sony is dropping 40 CUs on its chip and Microsoft is dropping 56 CUs on theirs, that means Sony potentially has a sizable amount of chip left for something else. If we again consider RT cores to be equal in size, then it could be possible for the PS5 to have, say, 36 RT cores while the XSX has 20 RT cores.
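Here is that trade-off as a toy calculation. Every figure in it (CPU/uncore share, per-CU and per-RT-core areas) is a made-up placeholder chosen to reproduce the 36-vs-20 example above, not any kind of leak:

```python
# Toy silicon budget: what's left for RT cores after CPU/uncore and CUs.
# ALL numbers are hypothetical placeholders for illustration only.
DIE_MM2 = 400.0         # assumed total APU die size
CPU_UNCORE_MM2 = 96.0   # assumed CPU cores, cache, memory controllers, IO
CU_MM2 = 4.0            # assumed area per compute unit
RT_CORE_MM2 = 4.0       # assumed area per ray tracing core

def rt_cores_left(cus: int) -> int:
    gpu_budget = DIE_MM2 - CPU_UNCORE_MM2
    return int((gpu_budget - cus * CU_MM2) // RT_CORE_MM2)

print(rt_cores_left(40))  # 36 RT cores: fewer CUs leave more area for RT
print(rt_cores_left(56))  # 20 RT cores: more CUs leave less area for RT
```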

We are entering a new era with new technologies. I think we have to consider much more when thinking about these chips than just traditional cores and clock speeds. One chip could be incredibly capable in traditional rendering but suffer in RT rendering, while another could be less capable in traditional rendering but incredibly capable in RT rendering. In the end, the chip with weaker traditional silicon and stronger RT silicon could end up superior in overall next-gen graphics capabilities. Or the two strengths and weaknesses could balance out. Or the chip with more traditional cores could just end up better.

There are just too many factors to focus so much on FLOPs and CUs.



Stop hate, let others live the life they were given. Everyone has their problems, and no one should have to feel ashamed for the way they were born. Be proud of who you are, encourage others to be proud of themselves. Learn, research, absorb everything around you. Nothing is meaningless, a purpose is placed on everything no matter how you perceive it. Discover how to love, and share that love with everything that you encounter. Help make existence a beautiful thing.

Kevyn B Grams
10/03/2010 

KBG29 on PSN&XBL

Pemalite said:
HollyGamer said:

I agree that using a pure teraflop metric is wrong, but if you have other GPUs that prove the other way around, then it can be used as a benchmark as well; the RX 5500 is a bad GPU example. You can use the RX 5700 or 5700 XT as an example, though.

You can check this video, though.

No. You can't. It's inaccurate... Which my Radeon RX 5500 example proves without a doubt.
Not only is the Radeon RX 5500 a newer and more efficient architecture... It also has more flops than the Radeon RX 590. Yet it cannot decisively out-benchmark it.

Ergo. Flops are useless.

Dude, the 5500 has 5TF, where are you getting your numbers from?

Navi is 50% faster than the GCN in the Xbox One and PS4, and 20% faster than Polaris (590 = Polaris).




KBG29 said:

I know that everyone is caught up in this FLOPs talk, but what about Ray Tracing Cores?

Let's say, in theory, that Microsoft and Sony both set out to make a 400mm2 APU. Now it is all about finding a balance between CPU cores, GPU cores, Ray Tracing cores, cache, memory controllers, and such. We can theorize that GPU cores should be equal in size, with both using RDNA and likely the same fab. So if Sony is dropping 40 CUs on its chip and Microsoft is dropping 56 CUs on theirs, that means Sony potentially has a sizable amount of chip left for something else. If we again consider RT cores to be equal in size, then it could be possible for the PS5 to have, say, 36 RT cores while the XSX has 20 RT cores.

We are entering a new era with new technologies. I think we have to consider much more when thinking about these chips than just traditional cores and clock speeds. One chip could be incredibly capable in traditional rendering but suffer in RT rendering, while another could be less capable in traditional rendering but incredibly capable in RT rendering. In the end, the chip with weaker traditional silicon and stronger RT silicon could end up superior in overall next-gen graphics capabilities. Or the two strengths and weaknesses could balance out. Or the chip with more traditional cores could just end up better.

There are just too many factors to focus so much on FLOPs and CUs.

We know very little about Sony's and Microsoft's ray tracing solutions; the person who first leaked Prospero says the PS5 and Xbox Series X use completely different ray tracing solutions. I would assume Microsoft uses AMD's and Sony has their own. Yes, frame rates in games could be all over the place because one console has better ray tracing but weaker flop performance.

Whoever has more TF will probably market their console as the WORLD'S MOST POWERFUL CONSOLE :)




Pemalite said:
HollyGamer said:
12 teraflops confirmed, LOL. In GCN terms, a Navi teraflop is equal to about 1.4 GCN teraflops, so 12 x 1.4 = 16.8 GCN-equivalent teraflops. Against the Xbox One's ~1.3 TF, that's 16.8 / 1.3 = ~12.9 times more powerful than the Xbox One.

Flops are not everything when it comes to the final look of games, but they're there as a theoretical measure. So it's a humongous improvement.

So we still need to confirm PS5 performance before we close this thread.

False.
Otherwise the Radeon RX 5500 XT, with its 8.6-10.1 Teraflops of power, would have been able to decisively beat the Radeon RX 590 with its 6.7-7.1 Teraflops of power.

Plus, you are asserting that Navi's Teraflops are 1.4x better than GCN's, meaning the RX 5500 XT should be the equivalent of 9.38-9.94 Teraflops of the Radeon RX 590, which should see the Radeon RX 5500 XT with a significant performance advantage over the RX 590, and we know THAT didn't happen.

https://www.anandtech.com/show/15206/the-amd-radeon-rx-5500-xt-review

This is why using your flops metric is an absolute joke; it's absolutely useless.

You are aware that you are comparing half precision to single precision in that example, right? The RX 5500 XT has 4.7-5.2 Teraflops in single precision, significantly less than an RX 590, yet it performs almost the same. The RX 5700 beats the RX Vega despite having fewer TFlops. This also shows that teraflops can only be compared within the same architecture (and even then, with limitations) and can't be used as a yardstick otherwise.

I agree with the rest of what you're saying, just wanted to point out that inconsistency there.
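To spell out that half- vs single-precision mix-up, a small sketch (clock figures are approximate; RDNA runs FP16 at twice the FP32 rate):

```python
# RX 5500 XT: 22 CUs x 64 = 1408 shaders. FP16 throughput on RDNA is 2x FP32.
def fp32_tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000

for clock in (1.717, 1.845):  # approximate game and boost clocks in GHz
    fp32 = fp32_tflops(1408, clock)
    print(f"{fp32:.1f} TFLOPS FP32 / {2 * fp32:.1f} TFLOPS FP16")
# ~4.8-5.2 FP32 vs ~9.7-10.4 FP16: quoting the FP16 range against the
# RX 590's FP32 range makes the 5500 XT look far stronger on paper than it is.
```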