
Rumor: PS5 & Anaconda (Scarlett) GPU on par with RTX 2080, Xbox exclusives focus on cross-gen, developers complain about Lockhart. UPDATE: Windows Central says Xbox Anaconda targets 12 teraflops

 

What do you think?

I am excited for next gen: 22 (61.11%)
I cannot wait to play next gen consoles: 4 (11.11%)
I need to find another th...: 2 (5.56%)
I worried about next gen: 8 (22.22%)

Total: 36
Pemalite said:
HollyGamer said:

I agree that using the pure teraflop metric is misleading, but if you have another GPU that proves otherwise, then it can be used as a benchmark as well. The RX 5500 is a bad GPU example; you could use the RX 5700 or 5700 XT as an example though.

You can check this video though.

No. You can't. It's inaccurate... which my Radeon RX 5500 example proves without a doubt.
Not only is the Radeon RX 5500 a newer and more efficient architecture... it also has more flops than the Radeon RX 590. - Yet it cannot decisively out-benchmark it.

Ergo, flops are useless.

Another example is the GeForce GT 1030... The DDR4 variant with a 250 MHz core overclock has 1.076 teraflops of performance (1402 MHz core * 2 instructions per clock * 384 CUDA cores), so it should decisively beat the GDDR5 variant, which is 0.942 teraflops, right? Wrong.

Even though they are based on the same GPU, the GDDR5 variant wins hands down, almost doubling performance in some instances.
https://www.techspot.com/review/1658-geforce-gt-1030-abomination/
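
As a rough sketch of where those numbers come from, here is the theoretical FP32 arithmetic. The DDR4 figures are the ones quoted above; the GDDR5 clock is back-calculated from the 0.942 teraflop figure, so treat both as approximate:

```python
# Theoretical FP32 throughput = shader cores * 2 ops per clock (FMA) * clock rate.
# 1402 MHz is the overclocked DDR4 card quoted above; ~1227 MHz is what the quoted
# 0.942 TFLOPS works out to for the GDDR5 card (an assumption, not an official spec).

def theoretical_tflops(shader_cores, clock_mhz, ops_per_clock=2):
    """Peak FP32 TFLOPS from core count, clock (MHz) and ops per clock."""
    return shader_cores * ops_per_clock * clock_mhz * 1e6 / 1e12

gt1030_ddr4_oc = theoretical_tflops(384, 1402)  # ~1.077 TFLOPS
gt1030_gddr5   = theoretical_tflops(384, 1227)  # ~0.942 TFLOPS

print(f"GT 1030 DDR4 (OC): {gt1030_ddr4_oc:.3f} TFLOPS")
print(f"GT 1030 GDDR5:     {gt1030_gddr5:.3f} TFLOPS")
# The DDR4 card "wins" on paper, yet loses badly in the benchmarks linked above.
```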

Or let's take the Radeon 5870 and 7850. The Radeon 5870 has 2.72 teraflops of performance versus the 7850's 1.76 teraflops, so you would assume that the Radeon 5870, with almost an extra teraflop of performance, would be the easy winner, right? Again: wrong.
The 7850 is the clear victor. - https://www.anandtech.com/bench/product/1062?vs=1076

Same thing with the Radeon 7750: the DDR3 and GDDR5 variants have the same number of gigaflops, and the DDR3 version even has twice the amount of RAM.
Yet the GDDR5 version is the clear victor.
https://www.goldfries.com/computing/gddr3-vs-gddr5-graphic-card-comparison-see-the-difference-with-the-amd-radeon-hd-7750/

So basically in dot point form...

* A newer GPU with more flops doesn't mean it's faster.
* The same GPU where one variant has more flops doesn't mean that variant is faster.
* The same GPU with identical flops doesn't mean identical performance.

So on what planet can we assume that flops have any kind of relevance or accuracy, when every example blows that assumption out of the water?

Please. Do tell. I am all ears.

I think Digital Foundry has answered your theory flawlessly. Also, tell me what metric you would suggest instead.



HollyGamer said:

You can scale down game engines.

But you will possibly lose the benefits that future, more powerful hardware can deliver. The tech will stagnate, like how CoD used the same engine from the PS3 era, or Bethesda on every Fallout game.

CoD has made some great strides in visuals over the years, despite being based on a derivative of the Quake III engine (id Tech 3).
It's just that those engines have different design goals; low latency and a high framerate are generally the goal...

For example, with Modern Warfare 2 they introduced texture streaming, which meant that texturing wasn't limited to the tiny pool of DRAM that the PlayStation 3 and Xbox 360 had, resulting in a substantial uptick in visual fidelity, especially on the texturing side.

https://community.callofduty.com/t5/Call-of-Duty-Modern-Warfare-2/Modern-Warfare-2-Texture-Streaming/ba-p/9900744
https://www.eurogamer.net/articles/digitalfoundry-modern-warfare-2-face-off

We saw increases with Advanced Warfare, which brought a new lighting model and a much improved post-process pipeline.
https://www.eurogamer.net/articles/digitalfoundry-2014-call-of-duty-advanced-warfare-face-off

The Call of Duty: Modern Warfare reboot saw great strides in geometry, improved lighting, shadowing and more.
https://www.eurogamer.net/articles/digitalfoundry-2019-cod-modern-warfare-delivers-series-most-advanced-visuals-yet


Every time they updated the engine they improved the core visuals. -
https://en.wikipedia.org/wiki/IW_engine

Call of Duty 2 (IW 2.0 engine): Normal Maps, Bloom, Improved Shadowing.
Call of Duty 4 (IW 3.0 engine): Improved Lighting, Particles, Self Shadowing.
Call of Duty: Modern Warfare 2 (IW 4.0 engine): Texture and Mesh Streaming for better textures and models.
Call of Duty: Modern Warfare 3 (IW 5.0 engine): Improvements to streaming for larger environments, Improved Shadowing, Improved Reflections.
Call of Duty: Ghosts (IW 6.0 engine): Model geometry subdivision, HDR Lighting, Displacement Mapping, Tessellation, Specular Particles.
Call of Duty: Infinite Warfare (IW 7.0 engine): Physically based rendering for more accurate materials and lighting.

And I can keep going. The point is, it doesn't make sense to reinvent the wheel... because these days, developers do not rebuild engines from scratch anymore.

Take the Creation Engine, for instance. Yes, it looks dated today... but when it debuted with Skyrim in 2011 it was a decent-looking engine that showcased what the 7th-gen consoles could do with an open-world environment... And that engine is based upon Oblivion's Gamebryo engine from 2006... which in turn is based on Morrowind's NetImmerse engine from 2002.

Again, that engine spans multiple console generations and has scaled across hardware really well. - Elder Scrolls 6 is likely to be built on those same engine foundations and look absolutely stunning doing it.
Here is the jump from Morrowind on Xbox to Skyrim on Xbox 360.


Generational leap, no?

Even Unreal Engine is based on the engines that came before it; Unreal Engine 3 and 4 still contain code from the original Unreal Engine from the late 90's, for example. Again: why reinvent the wheel?

https://en.wikipedia.org/wiki/Unreal_Engine

Essentially, that engine technology has scaled (in terms of hardware power) from the Dreamcast right up to the Xbox One X.

The latest Battlefield, a graphical powerhouse (especially on PC), is based on the Frostbite engine, the same engine which debuted with Battlefield: Bad Company on the Xbox 360 back in 2008. - The engine has had some massive overhauls since, with Frostbite 1.5, 2.0, 3.0, 4.0... and I wouldn't be surprised if code from Refractor still lingered somewhere.

https://en.wikipedia.org/wiki/Frostbite_(game_engine)

I think the reason why there is this misconception that older game engines hold back newer platforms is because people do not fundamentally understand what a game engine is or what it actually does.

A game engine is NOT just the "thing" that draws the pretty pictures on your display... A game engine is a bunch of components working under a "framework", be it audio, physics, networking and more. - Thus you can rewrite a part of the engine, say the lighting and material shaders, and take advantage of a newer platform's graphics capabilities.

But you can bet your ass that most game engines today are derived from technology of yesterday, because they scale.

HollyGamer said:
12 teraflops confirmed, LOL. In GCN terms, a Navi teraflop is equal to about 1.4 times GCN performance: 12 x 1.4 = 16.8 teraflops of GCN equivalent, and with the Xbox One at 1.3 teraflops, 16.8 / 1.3 = 12.9 times more powerful than the Xbox One.

God dammit.

A flop is a flop.

Navi doesn't have 1.4x more "theoretical flops" than Graphics Core Next. It just doesn't.

Flops is based on the number of Stream Processors * Instructions Per Clock * Clock Rate.

Navi gets 1.4x more performance not because of flops, but because of everything else in the GPU. If you were to throw a purely compute task at Graphics Core Next, it would be able to achieve some impressive flop numbers, often higher than Navi's. - But when it comes to gaming, games need more than just flops; ergo, Navi is able to pull ahead.

Thus a flop on Navi is the same flop as Graphics Core Next. - Navi is just more efficient.
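
To put the distinction in concrete terms, here is a minimal sketch. The paper-FLOPS formula is the one above; the per-architecture "game efficiency" factor is purely illustrative (the rough 1.4x figure thrown around in this thread, not an official AMD number):

```python
# Paper FLOPS depend only on shader count, instructions per clock and clock rate,
# so the formula is identical for GCN and Navi. The "efficiency" factor below is a
# stand-in for everything else (caches, geometry, scheduling) that lets Navi deliver
# more game performance per paper FLOP. 1.0 vs 1.4 mirrors the rough figure in this
# thread, not an official number.

def paper_tflops(stream_processors, clock_ghz, ipc=2):
    return stream_processors * ipc * clock_ghz / 1000.0

def relative_game_perf(tflops, efficiency):
    return tflops * efficiency

xbox_one_gcn = paper_tflops(768, 0.853)  # ~1.31 TFLOPS (Xbox One, GCN)

print(f"Xbox One (GCN): {relative_game_perf(xbox_one_gcn, 1.0):.2f} 'GCN-equivalent' TFLOPS")
print(f"12 TFLOPS Navi: {relative_game_perf(12.0, 1.4):.1f} 'GCN-equivalent' TFLOPS")
```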

Trumpstyle said:

Dude, the 5500 has 5 TF; where are you getting your numbers from?

Navi is 50% faster than the GCN in the Xbox One and PS4, and 20% faster than Polaris (the 590 = Polaris).

My mistake. I was reading my database incorrectly.
My point still stands however.

Bofferbrauer2 said:

You are aware that you are comparing half precision to single precision in that example, right? The RX 5500 XT has 4.7-5.2 teraflops in single precision, significantly less than an RX 590, yet it delivers almost the same performance. The RX 5700 beats RX Vega despite having fewer TFLOPS. Which also shows that teraflops can only be compared within the same architecture (and even then, with limitations) and can't be used as a yardstick otherwise.

I agree with the rest of what you're saying, just wanted to point out that inconsistency there.

That just reinforces my point that more flops doesn't mean better performance. The RX 590 also has more theoretical bandwidth.

But you are right.



Last edited by Pemalite - on 15 December 2019

--::{PC Gaming Master Race}::--

Pemalite said:
[quoted post snipped - repeated verbatim earlier in the thread]

You brought up Bethesda as an example, which means you prove my point. Bethesda never builds a new engine; they have been using the same engine since the 2001 era. Their engine is limited, so it performs badly on hardware that came out after 2001, and many effects, graphics, gameplay, AI, NPCs, etc. look and play very outdated on newer hardware.

Modders are the ones actively fixing Oblivion and Skyrim.

Yes, a flop is a flop, but how flops perform differs on every microarchitecture; the conversion factor for effectiveness from one microarchitecture to another is very different, and the effectiveness of TFLOPS can be measured from one microarchitecture to another. Navi is indeed 1.4 times GCN.



Pemalite said:
HollyGamer said:

Yes, because the games you played are using engines that were built with new CPUs from 2008 and above as the baseline. Imagine if game developers still used the SNES as the baseline for game design today; we might still be stuck on 2D even though we have ray tracing sitting underutilized.

Having the Xbox One as the baseline means you are stuck on the old Jaguar while underutilizing the tech available on Scarlett: the SSD, AVX-256 on Ryzen 3000, faster RAM, ray tracing, the geometry features only available on RDNA, and so on, not to mention the machine learning tech that could be used to enhance gameplay, and a lot of other possibilities if Scarlett were the baseline.

As a game designer you are limited by the canvas; you need a bigger canvas and better ink.


Engines are simply scalable; that is all there is to it, and that doesn't change when new console hardware comes out with new hardware features that get baked into new game engines.

You can turn effects down or off, you can use different (less demanding) effects in place of more demanding ones, and more, which is why we can take games like Doom, The Witcher 3, Overwatch and Wolfenstein 2, which scale from high-end PC CPUs right down to the Switch... A game like The Witcher 3 still fundamentally plays the same as the PC variant despite the catastrophic divide in CPU capabilities.

Scaling a game from 3 CPU cores @ 1 GHz on the Switch, to 6 CPU cores @ 1.6 GHz on the PlayStation 4, to 8+ CPU cores @ 3.4 GHz on a PC just proves that.

The Switch was certainly not the baseline for those titles, the Switch didn't even exist when those games were being developed, yet a big open world game like the Witcher 3 plays great, game design didn't suffer.

I mean, I get what you are saying, developers do try and build a game to a specific hardware set, but that doesn't mean you cannot scale a game downwards or upwards after the fact.

At the end of the day, things like Ray Tracing can simply be turned off, you can reduce geometric complexity in scenes by playing around with Tessellation factors and more and thus scale across different hardware.

drkohler said:

blablabla removed, particularly completely irrelevant "command processor special sauce" and other silly stuff.
Ray tracing doesn't use floating point operations? I thought integer ray tracing was a more or less failed attempt in the early 2000s so colour me surprised.

You have misconstrued my statements.

The single-precision floating point numbers being propagated around do NOT include the ray tracing capabilities of the part, because the FLOPS are a function of clock rate multiplied by the number of functional CUDA/RDNA/GCN shader units multiplied by the number of instructions per clock. - That excludes absolutely everything else, including the ray tracing capabilities.

drkohler said:

Look, as many times as you falsely yell "Flops are irrelevant", you are still wrong.

The technical baseplate for the new console SoCs is identical. AMD has not gone the extra mile to invent different paths for the identical goals of both consoles. Both MS and Sony have likely added "stuff" to the baseplate, but at the end of the day, it is still the same baseplate both companies relied on when they started designing the new SoCs MANY YEARS AGO.

And for ray tracing, which seems to be your pet argument, do NOT expect to see anything spectacular. You can easily drive a $1200 NVidia 2080Ti into the ground using ray tracing, what do you think entire consoles priced around $450-500 are going to deliver on that war front?

You can have identical flops with identical chips and still have half the gaming performance.

Thus flops are certainly irrelevant, as they don't account for the capabilities of the entire chip.

Even overclocked, the GeForce GT 1030 DDR4 cannot beat the GDDR5 variant; they are the EXACT same chip with roughly the same flops.
https://www.gamersnexus.net/hwreviews/3330-gt-1030-ddr4-vs-gt-1030-gddr5-benchmark-worst-graphics-card-2018
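
The deciding factor in that comparison is memory bandwidth rather than compute. A quick back-of-the-envelope sketch, using the commonly reported 64-bit bus and memory data rates for the two cards (approximate figures, not official specs):

```python
# Peak memory bandwidth = bus width (in bytes) * effective data rate (GT/s).
# 2.1 GT/s DDR4 and 6.0 GT/s GDDR5 on a 64-bit bus are the commonly reported
# figures for the two GT 1030 variants.

def bandwidth_gbs(bus_width_bits, data_rate_gtps):
    """Peak bandwidth in GB/s."""
    return (bus_width_bits / 8) * data_rate_gtps

print(f"GT 1030 DDR4:  {bandwidth_gbs(64, 2.1):.1f} GB/s")   # ~16.8 GB/s
print(f"GT 1030 GDDR5: {bandwidth_gbs(64, 6.0):.1f} GB/s")   # ~48.0 GB/s
# Near-identical flops, roughly a 3x gap in bandwidth - which is why the GDDR5
# card wins so comfortably in the benchmarks linked above.
```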

nVidia's Ray Tracing on the 2080Ti is not the same as RDNA2's Ray Tracing coming out next year, the same technology that next-gen consoles are going to leverage, so it's best not to compare.

Plus, developers are still coming to terms with how to implement ray tracing more effectively; it is certainly a technology that is a big deal.

DonFerrari said:

CGI I guess the problem is that the way Pema put it was that flops are totally irrelevant.

But if we are looking at basically the same architecture with most other parts being the same, then looking at the GPU alone, with one at 10 TF and the other at 12 TF, the 10 TF one would hardly be the better one.

Now, sure, in real-world application, if one has better memory (be it speed, quantity, etc.) or a better CPU, that advantage may be reversed.

So basically, yes, what Pema wants to say is that TFLOPS isn't the end-all "simple number that shows it is better", not that it really doesn't matter at all.

Well, they are irrelevant; it's a theoretical number, not a real-world one. The relevant "flop number" would be one based on the actual, real-world throughput the chips can achieve.

And as the GeForce GT 1030 example above shows, you can have identical or even more flops, but because of other compromises you end up with significantly less performance.

DonFerrari said:

I read all the posts in this thread. And you can't claim Oberon is real; no rumor can be claimed real until official information is given.

Even for consoles that were released in the market, the real processing power was never confirmed, because measurements made by people outside the company aren't reliable. For the Switch and Wii U we never discovered the exact performance of their GPUs; we only had good guesses.

So please stop trying to pass rumor off as official information. And also, you can't claim that 4 different rumors are all true.

With the Switch we know exactly what its capabilities are, because Nintendo is using off-the-shelf Tegra components; we know the clock speeds and how many functional units it has, thanks to homebrew efforts that cracked the console open.
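
For reference, a rough sketch using the publicly reported Tegra X1 figures (256 CUDA cores; the docked and handheld GPU clocks come from homebrew findings and should be treated as approximate):

```python
# Tegra X1 (Switch): 256 CUDA cores, 2 FP32 ops per core per clock.
# 768 MHz docked and 307.2 MHz handheld are the commonly reported GPU clocks
# from homebrew findings; FP16 is double-rate on this chip.

def gflops_fp32(cuda_cores, clock_mhz):
    return cuda_cores * 2 * clock_mhz / 1000.0

docked   = gflops_fp32(256, 768.0)    # ~393 GFLOPS FP32
handheld = gflops_fp32(256, 307.2)    # ~157 GFLOPS FP32

print(f"Docked:   {docked:.0f} GFLOPS FP32 (~{2 * docked:.0f} GFLOPS FP16)")
print(f"Handheld: {handheld:.0f} GFLOPS FP32 (~{2 * handheld:.0f} GFLOPS FP16)")
```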

The Wii U is still a big unknown because it was a semi-custom chip, though we do know it's an AMD-based VLIW GPU paired with an IBM PowerPC CPU.


And exactly, you can't claim 4 different rumors as all being true.

I have 3 comments.

On the baseline: if you make a game with, let's say, the PS4 as the baseline and get the best performance there, and later you develop for the Switch, you are going to cut some stuff without affecting the PS4 version (probably making the Switch version look worse than if it had been the baseline, or giving it some performance issues). Now, if you go with the Switch as the baseline, then considering how multiplats usually work, the PS4 version will only receive a higher resolution, slightly better textures, etc.; it will be held down (even in its design) by the Switch.

On your comparison of GPUs, you used one with DDR4 and the other with GDDR5, which would already affect the comparison. We know that the core of your argument is that TFLOPS have almost no relevance (and after all your explanations I think very few people here put much stock in TFLOPS alone), but what I said was ceteris paribus: if everything else on both GPUs is perfectly equal and only the flops differ (let's say because one has a 20% higher clock rate), then the one with the 20% higher clock rate is the stronger GPU (though, sure, the rest of the system would have to be built to use that advantage). Now, if you mix in the memory quantity, speed and bandwidth, the design of the APU itself and everything else, then of course you will only get a real-life performance picture after they release. And even then you won't really have a very good measurement, because when the same game runs on 2 systems, a difference in performance may not be because one system is worse than the other but simply down to how proficient the devs are with that hardware.

We know the capabilities of the Switch, sure, but since Nintendo hasn't given any specific number, I can't say we are 100% certain of a very precise figure. We have a range for what we suspect the docked and undocked performance is, and as you said yourself, there is a difference between theoretical and real world.

EricHiggin said:
DonFerrari said:

We can't be sure, but considering that sales only dropped after the X1X, there isn't any strong indication that either the PS4 Pro or the X1X improved sales of the base model (the bump we saw with the X1X was mostly due to the drop it caused when it was announced so much earlier, and then near launch they announced a price reduction for the X1S too early as well).

Sure, the X1 has its issues, and the PS4 does as well. But the point was that the PS3 had a lot more hurdles to overcome and was still able to do it.

Sure, the point is about next gen, but we were talking about affordability as well, so we would need someone who wants a new machine that is next gen (and won't be swayed by MS saying that their games will keep being cross-gen, so an X1 would still suffice), and who also wants the cheapest one, without caring that the performance is much lower, say 1080p instead of 4K, for a mere 100 USD difference. I don't really think that is such a significant number of people.

Most people I know, and the news we have, say that people bought it because of the motion controls. The evidence is that for the first 2 years or so people were paying well above MSRP to buy one.

I'm a very conservative person, so for me to go against what we have historically seen in console sales I would need hard evidence rather than speculation about a future state that would be very different from what has already happened, without much difference in the present situation, so don't feel bad if I don't agree with you =p

No hard feelings, no forced assimilation, just worthwhile thoughts.

While certain info isn't available to make a reliable conclusion, I don't focus as much on the past. It's certainly necessary and useful, but too much focus on what was, without enough consideration about what is, will make discerning the future less likely.

I mean, who could have foreseen this?

I really liked the design of Xbox Series X.

Considering the size the PS4 and X1 had and their capabilities, and since I don't think the PS5 will be smaller than the Series X, I would agree with the reply that said that if the rumor of 9 vs 12 TFLOPS (40 vs 56 CUs) were true, then the silicon budget of the PS5 would have been spent on other stuff instead of just giving away over 33% in power (assuming everything else in the consoles gave the same 9 vs 12 advantage to the Series X). Because that devkit was just too big to have so much less power.

Trumpstyle said:
KBG29 said:

I know that everyone is caught up in this FLOPs talk, but what about ray tracing cores?

Let's say, in theory, Microsoft and Sony both set out to make a 400 mm2 APU. Now it is all about finding a balance between CPU cores, GPU cores, ray tracing cores, cache, memory controllers and such. We can theorize that GPU cores should be equal in size, with both using RDNA and likely the same fab. So if Sony is dropping 40 CUs on its chip and Microsoft is dropping 56 CUs on theirs, then Sony potentially has a sizable amount of chip left for something else. If we again consider RT cores to be equal, then it could be possible for the PS5 to have, say, 36 RT cores while the XSX has 20 RT cores.

We are entering a new era with new technologies. I think we have to consider much more when thinking about these chips than just traditional cores and clock speeds. One chip could be incredibly capable in traditional rendering but suffer in RT rendering, while another could be less capable in traditional rendering but incredibly capable in RT rendering. In the end, the chip with the weaker traditional silicon and stronger RT silicon could end up superior in overall next-gen graphics capabilities. Or the two sets of strengths and weaknesses could balance out. Or the chip with more traditional cores could just end up better.

There are just too many factors to focus so much on FLOPs and CUs.

We know very little about Sony's and Microsoft's ray-tracing solutions; the person who first leaked Prospero says the PS5 and Xbox Series X use completely different ray-tracing solutions. I would assume Microsoft uses AMD's and Sony has its own. Yes, frame rates in games could be all over the place because one console has better ray tracing but weaker flop performance.

Whoever has more TF will probably market their console as the WORLD'S MOST POWERFUL CONSOLE :)

I think one of the reports with official information has MS using an RT solution they patented.



duduspace11 "Well, since we are estimating costs, Pokemon Red/Blue did cost Nintendo about $50m to make back in 1996"

http://gamrconnect.vgchartz.com/post.php?id=8808363

Mr Puggsly: "Hehe, I said good profit. You said big profit. Frankly, not losing money is what I meant by good. Don't get hung up on semantics"

http://gamrconnect.vgchartz.com/post.php?id=9008994

Azzanation: "PS5 wouldn't sold out at launch without scalpers."

DonFerrari said:

.. if the rumor of 9 vs 12 TFLOPS (40 vs 56 CUs) were true, then the silicon budget of the PS5 would have been spent on other stuff instead of just giving away over 33% in power (assuming everything else in the consoles gave the same 9 vs 12 advantage to the Series X). Because that devkit was just too big to have so much less power.

The current rumours are, I think, 40 CUs at 2 GHz vs 56 CUs at 1.8 GHz, so looking at CUs and clocks, it would be roughly a 25% difference (both are very optimistic clock rates imho).

Power comes at a price. Particularly at 7nm, every square mm adds $ to the chip cost. It's unlikely that Sony is using the area for other stuff; it's just that Sony is using a smaller chip overall to save significant costs. Rumours have the PS5 at around 310-320mm^2, while the XSX is at 360-380mm^2. That is a significantly higher cost (probably around $30 per chip) for the latter. If the performance difference stays within 20%, people will barely notice it. Both will be constrained to a certain extent by memory. 16 GByte is the rumoured size for both, but if the XSX uses, say, 24 GBytes (the 384-bit bus rumour), that would mean a significant difference in performance. 16 GByte is the very bottom you can get away with for something that has to live in the 4K world for even just a few years.
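
Plugging the rumoured CU counts and clocks into the usual RDNA arithmetic (64 stream processors per CU, 2 ops per clock) shows where that rough 25% gap comes from; these are rumoured figures, not confirmed specs:

```python
# Theoretical RDNA TFLOPS = CUs * 64 stream processors * 2 ops/clock * clock (GHz).
# The CU counts and clocks below are the rumoured figures quoted above, not confirmed specs.

def rdna_tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000.0

ps5_rumour = rdna_tflops(40, 2.0)   # ~10.2 TFLOPS
xsx_rumour = rdna_tflops(56, 1.8)   # ~12.9 TFLOPS

print(f"40 CU @ 2.0 GHz: {ps5_rumour:.1f} TFLOPS")
print(f"56 CU @ 1.8 GHz: {xsx_rumour:.1f} TFLOPS")
print(f"Gap: {100 * (xsx_rumour / ps5_rumour - 1):.0f}%")   # ~26%
```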



drkohler said:
[quoted post snipped - repeated verbatim earlier in the thread]

Well, a 30 USD cost for 25% more performance would be an easy expense for me =p

But looking at the size of the XSX with the controller next to it (much smaller than the original Xbox and a little bigger than the X1S), I don't think the PS5 will be much smaller (and if it is, the devkit is just much, much bigger than the retail version).



duduspace11 "Well, since we are estimating costs, Pokemon Red/Blue did cost Nintendo about $50m to make back in 1996"

http://gamrconnect.vgchartz.com/post.php?id=8808363

Mr Puggsly: "Hehe, I said good profit. You said big profit. Frankly, not losing money is what I meant by good. Don't get hung up on semantics"

http://gamrconnect.vgchartz.com/post.php?id=9008994

Azzanation: "PS5 wouldn't sold out at launch without scalpers."

DonFerrari said:

Well, a 30 USD cost for 25% more performance would be an easy expense for me =p

But looking at the size of the XSX with the controller next to it (much smaller than the original Xbox and a little bigger than the X1S), I don't think the PS5 will be much smaller (and if it is, the devkit is just much, much bigger than the retail version).

Well, if you intend to sell 100 million consoles, that's a pretty $3 billion in the end. If you think that is an easy expense, just send me a percent of it.

The volume of the XSX (is the power supply inside or external, btw?) is considerably bigger than the volume of the X1S. No one has seen Sony's idea of the PS5 end product. Given Sony's track record of building small(er) things with jet engines inside, I don't have much hope in that area.



drkohler said:
[quoted post snipped - repeated verbatim earlier in the thread]

I like that the PS "fat" models have their power supply inside the console, so whenever I want to take the console somewhere to play for the weekend, I just pick it up from my cabinet, grab a stock cord and HDMI cable, and it will do just fine.

So I'm all in for a PS5 jet engine in a cozy package =p

Still, I wouldn't expect a PS5 smaller than the original PS4. And sure, the devkit has no direct correlation to the final product's dimensions or even its shape.



duduspace11 "Well, since we are estimating costs, Pokemon Red/Blue did cost Nintendo about $50m to make back in 1996"

http://gamrconnect.vgchartz.com/post.php?id=8808363

Mr Puggsly: "Hehe, I said good profit. You said big profit. Frankly, not losing money is what I meant by good. Don't get hung up on semantics"

http://gamrconnect.vgchartz.com/post.php?id=9008994

Azzanation: "PS5 wouldn't sold out at launch without scalpers."

Pemalite said:

[quoted post snipped - repeated verbatim earlier in the thread]

Yeah, I also made a mistake; it turns out Proelite over at Beyond3D was a fake insider. It was he who said the devkit for the PS5 had 40 CUs, but that looks false.

Oberon remains a mystery; maybe it just has to do with backwards compatibility and nothing else.

Oberon says 3 things: that the GPU is clocked at 2 GHz and that it has 2 backwards compatibility modes. One mode where 18 CUs are active, which matches the PS4, and another with 40 CUs, which doesn't match the PS4 Pro, so you would assume this was a boost mode with all CUs active; but the insider at Beyond3D is fake.



6x master league achiever in starcraft2

Beaten Sigrun on God of war mode

Beaten DOOM ultra-nightmare with NO endless ammo-rune, 2x super shotgun and no decoys on ps4 pro.

1-0 against Grubby in Wc3 frozen throne ladder!!

Trumpstyle said:
[quoted post snipped - repeated verbatim earlier in the thread]

It makes almost zero sense that it would need 18 CUs to match the PS4 for compatibility on a 40 CU chip; that would mean the new console would have roughly PS4 Pro power (the Pro is 2.25x stronger than the base PS4). With the XSX being 4x more powerful than the X1X (and the GPU seemingly about 2x as powerful), a PS5 at PS4 Pro level would be something so weak that it would need to sell at 199 to have a chance.



duduspace11 "Well, since we are estimating costs, Pokemon Red/Blue did cost Nintendo about $50m to make back in 1996"

http://gamrconnect.vgchartz.com/post.php?id=8808363

Mr Puggsly: "Hehe, I said good profit. You said big profit. Frankly, not losing money is what I meant by good. Don't get hung up on semantics"

http://gamrconnect.vgchartz.com/post.php?id=9008994

Azzanation: "PS5 wouldn't sold out at launch without scalpers."