JRPGfan said:

There's more to real-world performance than just the number of Tflops.

Has my incessant nagging finally sunk in?
Because Tflops was all you used to use...
E.g.:
http://gamrconnect.vgchartz.com/post.php?id=8778326
http://gamrconnect.vgchartz.com/post.php?id=8760975
http://gamrconnect.vgchartz.com/post.php?id=8730042
http://gamrconnect.vgchartz.com/post.php?id=8728278

JRPGfan said:

These cards have higher IPC (instructions per clock) than the tech in their older gen cards.

Kinda happens with each successive architectural update to a processor's design... Kinda the entire point and everything.
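That's also why raw clock (or flops) comparisons across architectures don't say much by themselves: rough throughput is IPC × clock. A toy sketch with invented IPC figures, nothing more:

```python
# Toy numbers only: two hypothetical CPU cores at the same clock. The newer core
# retires more instructions per clock (IPC), so throughput rises with no clock bump.

def throughput_ips(ipc: float, clock_ghz: float) -> float:
    """Rough sustained throughput: instructions per second = IPC x clock."""
    return ipc * clock_ghz * 1e9

print(throughput_ips(ipc=1.0, clock_ghz=3.2))  # hypothetical older architecture
print(throughput_ips(ipc=1.3, clock_ghz=3.2))  # hypothetical newer one: ~30% faster at the same clock
```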

JRPGfan said:

Be honest guys, even those PC master race people, how many people here have a GeForce 1080 or better?
Plus with consoles and close-to-metal coding, you get more bang for the buck... PS5 & XB2 are gonna be monsters.

You do realize the PC will have successors to the Navi/5700 series and the 2070 series before the next-gen consoles launch? Right? Right?

The consoles should fall roughly into the mid-range performance tier relative to the PC when they launch, as expected.

EricHiggin said:

https://www.techtimes.com/articles/186804/20161128/could-the-playstation-5-have-an-impressive-8-teraflop-gpu-for-true-4k-gaming.htm

Cerny mentioned around the launch of the Pro in 2016 that he felt at least 8TF would be needed for guaranteed full native 4K. Can't help but wonder if he was privy to info about where Navi was likely to land in terms of TF calculation. It would also partially explain why the Pro only hit 4.2TF. If your next gen console is 'only' going to hit 8TF-10TF, then why try to launch a monster of a mid-gen console and make the next gen leap look much less impressive?

This would make it tougher for MS to market Project Scarlett since they will either have to really push the GPU performance and pay a hefty price for that, or possibly sacrifice CPU cores to keep the die size and cost down, or use two separate dies, still increasing costs. PS5 could be 8.4TF and it will seem like a much larger and more worthwhile leap in comparison to a 9TF-10TF Scarlett. Even if PS decided to shoot for 10TF and that happened to be where Scarlett landed as well, then it would still look better for PS than MS, due to the difference in performance gains on paper. PS will be able to use this to their advantage, where it would hurt MS, making their advancement seem weaker.

8 teraflops isn't needed to guarantee full native 4K.

It all depends on the level of fidelity you wish to chase; there is more to life than flops... which is the message Microsoft and Sony are finally starting to put out there, and it's starting to catch on with the gaming community. Yay. Finally.

Flops has always been irrelevant; it's a theoretical number, not a real-world one.
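For anyone curious where the headline number even comes from: it's just shader count × 2 ops per clock (one fused multiply-add) × clock speed, i.e. peak ALU throughput on paper. A minimal sketch using the commonly cited PS4 Pro figures (36 CUs of 64 shaders at 911 MHz); note that nothing in the formula accounts for memory bandwidth, utilization, or API overhead:

```python
# Theoretical peak FP32 throughput: shaders x 2 ops/clock (fused multiply-add) x clock.
# Figures below are the commonly cited PS4 Pro specs; the result is a paper number only.

def peak_tflops(compute_units: int, shaders_per_cu: int, clock_ghz: float) -> float:
    """Peak single-precision TFLOPS, assuming every ALU issues an FMA every cycle."""
    shaders = compute_units * shaders_per_cu
    return shaders * 2 * clock_ghz / 1000.0  # GFLOPS -> TFLOPS

print(peak_tflops(36, 64, 0.911))  # ~4.2 TFLOPS, the PS4 Pro's headline figure
```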

HollyGamer said:

If Microsoft is able to jump to 7nm+, then there is no reason Sony cannot, because 7nm+ is actually cheaper in the long run for both companies, and both are releasing in the same time frame (unlike the Xbox One X, which released later than the PS4 Pro). I also have a hunch both will target 7nm+ due to the release dates and several reports that 7nm+ is already available for mass production at Samsung this month, with TSMC following late this year or early next year.

The problem is the devkit and the test chip for the 7nm+ APU; they need them at least six months to a year prior to launch, and no manufacturer was able to produce them in early 2019. That is why many forum dwellers are pessimistic and think Sony and Microsoft are using just regular 7nm for the devkits and for the final design.

Depending on chip complexity, a refined/smaller fabrication process can actually end up more expensive, which is why AMD is using 12nm for its I/O die and 7nm for the CPU core complexes with Zen 2... Plus some structures don't scale down as well as others.

Regular 7nm is likely the target process for the initial launch, with 7nm+/5nm coming later as they become feasible.
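To put some rough numbers on the cost point: what you actually pay for is good dies, i.e. wafer price divided by (dies per wafer × yield), and leading-edge wafers cost considerably more. A toy sketch with made-up wafer prices and defect densities (not real foundry figures) shows how a smaller 7nm die can still come out more expensive per good chip than a bigger 12nm one:

```python
import math

# Illustrative only: wafer prices and defect densities below are made-up assumptions,
# not actual foundry figures. The point is that a smaller, pricier node can still cost
# more per good die, which is one reason Zen 2 keeps its I/O die on 12nm.

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Standard approximation for gross dies on a round wafer."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(wafer_cost: float, die_area_mm2: float, defects_per_mm2: float) -> float:
    """Wafer cost divided by yielded dies, using a simple Poisson yield model."""
    yield_rate = math.exp(-die_area_mm2 * defects_per_mm2)
    return wafer_cost / (dies_per_wafer(300, die_area_mm2) * yield_rate)

# Hypothetical: 150 mm^2 die on a $4,000 12nm wafer vs. 100 mm^2 die on a $9,000 7nm wafer.
print(f"12nm: ${cost_per_good_die(4000, 150, 0.002):.2f} per good die")
print(f" 7nm: ${cost_per_good_die(9000, 100, 0.003):.2f} per good die")
```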


curl-6 said:

FLOPS are generally a shitty way of comparing performance across parts of different architectures and/or parts made years apart. It's like the Megahertz Myth back in the day, when people used clock speed in the same flawed way, even though a part with only half the megahertz of another could outperform it in real-world applications.

In terms of raw FLOPS the Switch in portable mode is only like 60% as powerful as an Xbox 360, yet clearly that's not the case in actual performance.

There is way, way more to console power than just FLOPS or GHz.

Or back when people used "bits" to try and determine a console's "performance"... That was just as ludicrous.
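If a concrete illustration helps: delivered performance is closer to peak FLOPS multiplied by how much of that peak the architecture, memory system, and API actually let you use, and that efficiency factor varies wildly between designs. A toy comparison with invented utilization figures:

```python
# Toy comparison with invented utilization figures: a newer architecture can deliver
# more real work from fewer paper FLOPS. Numbers are illustrative assumptions only.

def effective_gflops(peak_gflops: float, utilization: float) -> float:
    """Delivered throughput = theoretical peak scaled by the fraction actually usable."""
    return peak_gflops * utilization

old_gpu = effective_gflops(peak_gflops=240, utilization=0.45)  # big paper number, poor utilization
new_gpu = effective_gflops(peak_gflops=160, utilization=0.80)  # smaller paper number, better utilization

print(old_gpu, new_gpu)  # the "weaker" part on paper comes out ahead
```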







--::{PC Gaming Master Race}::--