Pemalite said:
HollyGamer said:

Yes, because the games you played use engines that were built with post-2008 CPUs as the baseline. Imagine if game developers still used the SNES as the baseline for game design today; we might still be stuck in 2D even with ray tracing hardware sitting underutilized.

Having the Xbox One as the baseline means you are stuck on the old Jaguar CPU while underutilizing the tech available on Scarlett: the SSD, AVX-256 on Ryzen 3000, faster RAM, ray tracing, the IQ-per-geometry improvements only available on RDNA, and so on. That's not even counting the machine learning tech that could be used to enhance gameplay, and a lot of other possibilities if Scarlett is the baseline.

As a game designer you are limited by the canvas; you need a bigger canvas and better ink.


Engines are simply scalable; that is all there is to it. That doesn't change when new console hardware comes out with new features that get baked into new game engines.

You can turn effects down or off, you can substitute less demanding effects for more demanding ones, and more, which is why games like Doom, The Witcher 3, Overwatch, and Wolfenstein II scale from high-end PC CPUs right down to the Switch... A game like The Witcher 3 still fundamentally plays the same as the PC variant despite the catastrophic divide in CPU capabilities.

Scaling a game from 3 CPU cores @ 1GHz on the Switch, to 6 CPU cores @ 1.6GHz on the PlayStation 4, to 8+ CPU cores @ 3.4GHz on the PC proves exactly that.

The Switch was certainly not the baseline for those titles; it didn't even exist when those games were being developed. Yet a big open-world game like The Witcher 3 plays great on it, and the game design didn't suffer.

I mean, I get what you are saying, developers do try and build a game to a specific hardware set, but that doesn't mean you cannot scale a game downwards or upwards after the fact.

At the end of the day, things like ray tracing can simply be turned off, and you can reduce geometric complexity in scenes by playing around with tessellation factors and more, thus scaling across different hardware.
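To make the scaling argument concrete, here is a minimal sketch of a settings-driven renderer, assuming a hypothetical engine where each platform maps to a quality tier; the platform names and values are illustrative, not any real engine's configuration.

```python
# Illustrative quality tiers; all names and values are hypothetical.
QUALITY_TIERS = {
    "switch":  {"ray_tracing": False, "tessellation_factor": 4,  "shadow_res": 1024},
    "ps4":     {"ray_tracing": False, "tessellation_factor": 16, "shadow_res": 2048},
    "pc_high": {"ray_tracing": True,  "tessellation_factor": 64, "shadow_res": 4096},
}

def settings_for(platform: str) -> dict:
    """Select a render-quality tier; gameplay code stays identical across tiers."""
    return QUALITY_TIERS[platform]

print(settings_for("switch"))  # same game logic, scaled-down rendering
```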

drkohler said:

blablabla removed, particularly completely irrelevant "command processor special sauce" and other silly stuff.
Ray tracing doesn't use floating point operations? I thought integer ray tracing was a more or less failed attempt in the early 2000s so colour me surprised.

You have misconstrued my statements.

The single-precision floating point numbers being propagated around do NOT include the ray tracing capabilities of the part, because FLOPS are a function of clockrate multiplied by the number of functional CUDA/RDNA/GCN shader units multiplied by the number of instructions per clock. That formula excludes absolutely everything else, including ray tracing capabilities.
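For reference, that formula works out as below; the 36 CU / 1.8GHz configuration is an arbitrary illustrative example, and the 2 ops per clock assumes one fused multiply-add (FMA) per shader ALU per cycle, which is how vendors quote peak flops.

```python
def theoretical_tflops(shader_units: int, clock_ghz: float, ops_per_clock: int = 2) -> float:
    """Peak single-precision TFLOPS = shader units * clock * ops per clock."""
    return shader_units * clock_ghz * ops_per_clock / 1000.0

# Illustrative example: 36 CUs * 64 shaders/CU = 2304 shader units at 1.8GHz.
print(theoretical_tflops(36 * 64, 1.8))  # ~8.29 TFLOPS
```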

drkohler said:

Look, as many times as you falsely yell "Flops are irrelevant", you are still wrong.

The technical baseplate for the new console SoCs is identical. AMD has not gone the extra mile to invent different paths toward the identical goals of both consoles. Both MS and Sony have likely added "stuff" to the baseplate, but at the end of the day, it is still the same baseplate both companies relied on when they started designing the new SoCs MANY YEARS AGO.

And as for ray tracing, which seems to be your pet argument, do NOT expect to see anything spectacular. You can easily drive a $1200 Nvidia 2080 Ti into the ground using ray tracing, so what do you think entire consoles priced around $450-500 are going to deliver on that front?

You can have identical flops with identical chips and still have half the gaming performance.

Thus flops are certainly irrelevant, as they don't account for the capabilities of the entire chip.

Even overclocked, the GeForce GT 1030 DDR4 cannot beat the GDDR5 variant; they are the EXACT same chip with roughly the same flops.
https://www.gamersnexus.net/hwreviews/3330-gt-1030-ddr4-vs-gt-1030-gddr5-benchmark-worst-graphics-card-2018
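The spec sheets explain the gap: the two cards share a GPU but not memory bandwidth. A minimal sketch of the bandwidth arithmetic, using the widely reported effective transfer rates for the two variants (both on a 64-bit bus):

```python
def bandwidth_gbs(effective_mtps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s = transfers per second * bytes per transfer."""
    return effective_mtps * (bus_width_bits / 8) / 1000.0

print(bandwidth_gbs(2100, 64))  # DDR4 variant:  ~16.8 GB/s
print(bandwidth_gbs(6000, 64))  # GDDR5 variant: ~48.0 GB/s
```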

Nvidia's ray tracing on the 2080 Ti is not the same as RDNA2's ray tracing coming out next year (the technology that next-gen consoles are going to leverage), so it's best not to compare the two.

Plus, developers are still coming to terms with how to implement ray tracing more effectively; it is certainly a technology that is a big deal.

DonFerrari said:

CGI, I guess the problem is that the way Pema put it was that flops are totally irrelevant.

But if we are looking at basically the same architecture, with most of the other components the same, and one GPU is 10TF while the other is 12TF, the 10TF one would hardly be the better one.

Now, sure, in real-world applications, if one has better memory (be it speed, quantity, etc.) or a better CPU, that advantage may be reversed.

So basically, yes: when Pema says it, what he wants to say is that the TFLOP figure isn't an end-all "this simple number shows it is better" metric, not that it doesn't matter at all.

Well, they are irrelevant; it's a theoretical number, not a real-world one. The relevant "flop number" would be one based on the actual, real-world throughput that the chips can achieve.

And as the GeForce GT 1030 example above shows, you can have identical or even more flops, but because of other compromises, end up with significantly less performance.

DonFerrari said:

I read all the posts in this thread. And you can't claim Oberon is real; no rumor can be claimed real until official information is given.

Even for consoles already released on the market, the real processing power was never confirmed, because measurements made by people outside the company aren't reliable. For the Switch and Wii U we never discovered the exact performance of their GPUs; we had just good guesses.

So please stop trying to pass rumors off as official information. Also, you can't claim that 4 different rumors are all true.

With the Switch, we know exactly what its capabilities are because Nintendo is using off-the-shelf Tegra components; we know its clockspeeds and how many functional units it has thanks to homebrew efforts that cracked the console open.

The Wii U is still a big unknown because it was a semi-custom chip, though we do know it's an AMD-based VLIW GPU paired with an IBM PowerPC CPU.


And exactly, you can't claim 4 different rumors as all being true.

I have 3 comments.

On the baseline: if you make a game with, let's say, the PS4 as the baseline, get the best performance there, and later develop for the Switch, you are going to cut some things without affecting the PS4 version (probably making the Switch version look worse than if it had been the baseline, or giving it some performance issues). But if you go with the Switch as the baseline, then considering how multiplats usually work, the PS4 version will only receive higher resolution, slightly better textures, etc.; it will be held back (even in its design) by the Switch.

On your comparison of GPUs, you used one with DDR4 and one with GDDR5, which already skews the comparison. We know the core of your argument is that TFLOPs have almost no relevance on their own (and after all your explanations, I think very few people here put much stock in the TFLOP figure alone), but what I said was ceteris paribus: if everything else on both GPUs is perfectly equal and only the flops differ (say, because one has a 20% higher clockrate), then the one with the 20% higher clockrate is the stronger GPU (granted, the rest of the system would have to be built to exploit that advantage). Once you mix in memory quantity, speed, bandwidth, the design of the APU itself, and everything else, of course you will only get a real measure of performance after the consoles release. And even then you won't really have a good measurement, because when the same game runs on two systems, the difference in performance may come not from one being worse than the other but from how proficient the developers are with that hardware.
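As a quick illustration of the ceteris paribus point, flops scale linearly with clockrate when nothing else changes; the shader count below is an arbitrary placeholder:

```python
# With everything else equal, a 20% clock bump is exactly 20% more flops.
shaders, ops_per_clock = 2304, 2  # 2 ops/clock = one FMA per ALU per cycle
base_clock_ghz = 1.5
for clock_ghz in (base_clock_ghz, base_clock_ghz * 1.2):
    print(f"{clock_ghz:.2f}GHz -> {shaders * ops_per_clock * clock_ghz / 1000:.2f} TFLOPS")
# 1.50GHz -> 6.91 TFLOPS
# 1.80GHz -> 8.29 TFLOPS (+20%)
```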

We know the capabilities of the Switch, sure, but since Nintendo hasn't given any specific figure, I can't say we are 100% certain about a precise number. We have a range of what we suspect the docked and undocked performance to be, and as you said yourself, there is a difference between theoretical and real world.

EricHiggin said:
DonFerrari said:

We can't be sure, but considering that sales only dropped after the X1X, there isn't any strong evidence that either the PS4 Pro or X1X improved sales of the base model (the bump we saw with the X1X was mostly recovery from a drop caused by announcing it so far in advance, and then announcing a price reduction for the X1S too early as well, near launch).

Sure, the X1 has its issues; the PS4 does as well. But the point was that the PS3 had a lot more hurdles to overcome and was still able to do it.

Sure, the point is about next-gen, but we were talking about affordability as well. So we would need someone who wants a new machine that is next-gen (and isn't swayed by MS saying their games will keep being cross-gen, so an X1 would still suffice), and who also wants the cheapest one, without caring that the performance is much lower (say 1080p instead of 4K) for a mere $100 difference. I don't really think that is such a significant number of people.

Most people I know, and the news we have, say people bought it because of the motion controls. The evidence for that is that for the first 2 years or so, people were paying well above MSRP to buy one.

I'm a very conservative person, so for me to go against what we have historically seen in console sales, I would need hard evidence rather than speculation about a future that would be very different from what has already happened, especially without much difference in the underlying situation. So don't feel bad if I don't agree with you =p

No hard feelings, no forced assimilation, just worthwhile thoughts.

While certain info isn't available to make a reliable conclusion, I don't focus as much on the past. It's certainly necessary and useful, but too much focus on what was, without enough consideration about what is, will make discerning the future less likely.

I mean, who could have foreseen this?

I really liked the design of Xbox Series X.

Considering the size of the PS4 and X1 relative to their capabilities, and since I don't think the PS5 will be smaller than the Series X, I would agree with the reply saying that even if the rumor of 9 vs 12 TFLOPS (40 vs 56 CUs) were true, the PS5's silicon budget would have been spent on other things rather than simply giving away over 33% in power (assuming everything else in the consoles would give the same 9-vs-12 advantage to the Series X). That devkit was just too big to have so much less power.
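For what it's worth, here is the arithmetic implied by those rumored figures, assuming RDNA's 64 shader ALUs per CU and 2 ops per clock; the CU counts and TFLOPS numbers are the rumors, not confirmed specs:

```python
def implied_clock_ghz(tflops: float, cus: int, shaders_per_cu: int = 64) -> float:
    """Clockrate a GPU would need to reach a given theoretical TFLOPS figure."""
    return tflops * 1e3 / (cus * shaders_per_cu * 2)

print(implied_clock_ghz(9.0, 40))   # ~1.76GHz for the rumored 40 CU part
print(implied_clock_ghz(12.0, 56))  # ~1.67GHz for the rumored 56 CU part
print(12.0 / 9.0 - 1)               # ~0.33, i.e. the "over 33%" gap
```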

Trumpstyle said:
KBG29 said:

I know that everyone is caught up in this FLOPs talk, but what about Ray Tracing Cores?

Let's say, in theory, Microsoft and Sony both set out to make a 400mm² APU. Now it is all about finding a balance between CPU cores, GPU cores, ray tracing cores, cache, memory controllers, and such. We can theorize that the GPU cores should be equal in size, with both using RDNA and likely the same fab. So if Sony is putting 40 CUs on its chip and Microsoft is putting 56 CUs on theirs, that means Sony potentially has a sizable amount of chip left over for something else. If we again consider RT cores to be equal in size, it could be possible for the PS5 to have, say, 36 RT cores while the XSX has 20 RT cores.

We are entering a new era with new technologies. I think we have to consider much more when thinking about these chips than just traditional cores and clock speeds. One chip could be incredibly capable in traditional rendering but suffer in RT rendering, while another could be less capable in traditional rendering but incredibly capable in RT rendering. In the end, the chip with weaker traditional silicon and stronger RT silicon could end up superior in overall next-gen graphics capabilities. Or the two sets of strengths and weaknesses could balance out. Or the chip with more traditional cores could just end up better.

There are just too many factors to focus so much on FLOPs and CUs.
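A hedged sketch of the die-budget trade-off described above; every figure is a made-up placeholder (per-CU and per-RT-core areas are not public), so the point is only the shape of the arithmetic, not real die sizes:

```python
DIE_MM2 = 400.0      # the post's assumed total APU size
FIXED_MM2 = 180.0    # hypothetical: CPU cores, cache, memory controllers, IO
CU_MM2 = 2.5         # hypothetical area per compute unit
RT_CORE_MM2 = 1.5    # hypothetical area per ray tracing core

def rt_cores_that_fit(cus: int) -> int:
    """RT cores that fit in whatever area the chosen CU count leaves free."""
    return int((DIE_MM2 - FIXED_MM2 - cus * CU_MM2) // RT_CORE_MM2)

print(rt_cores_that_fit(40))  # fewer CUs leave more area for RT cores
print(rt_cores_that_fit(56))  # more CUs leave less area for RT cores
```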

We know very little about Sony's and Microsoft's ray-tracing solutions. The person who first leaked Prospero says the PS5 and Xbox Series X use completely different ray-tracing solutions; I would assume Microsoft uses AMD's and Sony has its own. So yes, frame rates in games could be all over the place because one console has better ray tracing but weaker flop performance.

Whoever has more TF will probably market their console as the WORLD'S MOST POWERFUL CONSOLE :)

I think one of the reports with official information has MS using an RT solution they patented.


