This doesn't make sense for this argument. Strange blanket statement.
It makes perfect sense.
A CPU only provides a bottleneck in severe cases and there isn't one on the PS4, or the XBO.
Depends, I can point to a ton of games where the CPU is a bottleneck on the Xbox One and Playstation 4.
The CPU bottleneck will shift depending on the game itself and sometimes even the scene that is being displayed on the screen.
The majority of the work to produce frames for a video game falls on the GPU, via the 3D rendering pipeline.
The CPU assists in preparing those frames, you know.
A CPU never provides 60 frames on its own. A CPU is terrible at running a 3D rendering pipeline.
The CPU assists at rendering in many game engines... It was common especially in the 7th gen.
Shall I point out the rendering techniques the CPU was doing?
You clearly haven't any idea why a GPU bottleneck happens.
That is a bold assertion.
I was obviously "dumbing down" my rhetoric to make it more palatable for the less technical people who frequent this forum. If you'd like me to stop, I'd be more than happy to oblige and start being more technically on point.
The CPU is responsible for real-time actions, physics, audio, and a few other processes. If its throughput can't keep up with the GPU, a bottleneck happens and you lose frames that you could actually use. Think of a partially closed dam: all of a sudden the data can't flow fast enough through the dam (the CPU) because of a narrow channel.
The CPU is responsible for more than that... And you should probably list them; otherwise it's a little hypocritical to complain that my statement wasn't fully fleshed out and then do the same thing yourself.
Now, 60 FPS is a GPU issue. That simple. This isn't an E8500 running a 1080 Ti.
It is a GPU and a CPU issue. - Sometimes even a RAM issue.
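The dam analogy can be sketched as a toy frame-time model (my own illustration, not from any particular engine): whichever side takes longer per frame sets the frame rate.

```python
# Toy frame-time model: each frame the CPU prepares work (simulation, physics,
# draw-call submission) and the GPU renders it. With no overlap between the
# two, the slower stage gates the frame rate.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Frames per second when the slowest stage gates each frame."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# The GPU could render in 10 ms (100 FPS), but a 20 ms CPU frame caps it at 50:
print(fps(cpu_ms=20.0, gpu_ms=10.0))  # 50.0 -- CPU-bound
print(fps(cpu_ms=8.0, gpu_ms=10.0))   # 100.0 -- GPU-bound
```

Real engines pipeline CPU and GPU work, so the cap isn't this clean in practice, but the principle is the same: the slower side gates the frame rate, and which side that is can change from game to game, or even scene to scene.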
PS: Flops ARE everything. They give a good baseline for performance, even outside of comparisons between similar architectures. Just not at a 1:1 ratio in that case (say, NVIDIA vs. Radeon).
Bullshit. It's not everything.
FLOPS, or single-precision floating point operations per second... Is a theoretical number.
By that admission alone, flops is irrelevant... And not only is it irrelevant, flops tells us absolutely nothing about the hardware's actual capability. It doesn't tell us how much bandwidth a chip has, its geometry capabilities, its texturing capabilities, whether it employs any culling to reduce processing load, whether it has compression schemes like S3TC or Delta Colour Compression, or anything about its quarter-precision/double-precision/integer capabilities... It tells us absolutely nothing.
It's just a theoretical number, calculated as number of pipelines × instructions per clock × clock speed.
I will try to keep this as simple as possible... But let's take the GeForce GT 1030, which shipped in two variants: one with GDDR5 memory and one with DDR4, at nearly identical GPU clocks.
That is only about a 6.5% difference in GFLOPS between the two... And you said flops is everything.
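Here is that formula applied to the two GT 1030 variants. The 384 shaders and the boost clocks are the commonly published specs, used as assumptions here, not figures from this thread:

```python
# Theoretical single-precision FLOPS, exactly as the formula above:
# shaders x ops per clock (2, for a fused multiply-add) x clock speed.
# Boost clocks below are the commonly published GT 1030 reference specs.

def gflops(shaders: int, clock_mhz: float, ops_per_clock: int = 2) -> float:
    return shaders * ops_per_clock * clock_mhz / 1000.0

gddr5 = gflops(384, 1468)  # GDDR5 variant: ~1127 GFLOPS
ddr4 = gflops(384, 1379)   # DDR4 variant:  ~1059 GFLOPS
print(f"{100 * (gddr5 / ddr4 - 1):.1f}% apart")  # 6.5% apart
```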
And here we get to the crux of the issue: GFLOPS doesn't tell us anything else about a GPU, only one theoretical component.
In short... The DDR4 version is often less than half the speed of the GDDR5 version.
But don't take my word for it: https://www.techspot.com/review/1658-geforce-gt-1030-abomination/
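The gap comes down to memory bandwidth, which flops says nothing about. A rough sketch, assuming the commonly published 64-bit bus and effective transfer rates for each variant:

```python
# Peak memory bandwidth = bus width in bytes x effective transfer rate.
# Both GT 1030 variants use a 64-bit bus; 6 GT/s (GDDR5) and ~2.1 GT/s (DDR4)
# are the commonly published rates, taken here as assumptions.

def bandwidth_gbs(bus_bits: int, effective_gtps: float) -> float:
    return (bus_bits / 8) * effective_gtps

gddr5_bw = bandwidth_gbs(64, 6.0)  # 48.0 GB/s
ddr4_bw = bandwidth_gbs(64, 2.1)   # 16.8 GB/s
print(round(gddr5_bw / ddr4_bw, 2))  # ~2.86x the bandwidth for GDDR5
```

Two cards within 6.5% on paper flops, and one has nearly three times the memory bandwidth of the other. That is the part the headline number hides.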
Or how about a different scenario? (There are so many examples I could do this all day.)
How about we grab the TeraScale-based Radeon 5870, which operates at 2.72 teraflops? It should absolutely obliterate the Radeon 7850, which operates at 1.76 teraflops, right? That's almost a full teraflop of difference, huh? Both AMD-based.
And yet... Again... Flops is irrelevant, as the Radeon 7850 often has a slight edge.
But don't take my word for it: https://www.anandtech.com/bench/product/511?vs=549
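For reference, the same theoretical-FLOPS arithmetic reproduces both quoted numbers. The shader counts and clocks are the public reference specs for each card, assumed here rather than taken from this thread:

```python
# Theoretical FLOPS for both cards: shaders x 2 ops per clock x clock speed.
# HD 5870 (TeraScale 2): 1600 shaders @ 850 MHz reference clock.
# HD 7850 (GCN):         1024 shaders @ 860 MHz reference clock.

def tflops(shaders: int, clock_mhz: float) -> float:
    return shaders * 2 * clock_mhz / 1_000_000

print(tflops(1600, 850))  # 2.72    (HD 5870)
print(tflops(1024, 860))  # 1.76128 (HD 7850)
```

Nearly a full teraflop of paper advantage for the 5870, and it still loses in practice, because architecture, bandwidth, and efficiency matter more than the headline number.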
Do you want some more examples of how unimportant flops are? I mean, I haven't even started comparing nVidia against AMD yet. Flops is everything, right?