Pemalite said:
CrazyGPU said: It's math. If you want to output 2 million pixels on screen, you need 2 teraflops at a fixed quality (meaning without changing anything else, just resolution). Then if you need to output 8 million, you need 4 times that, at the same fixed quality. And that assumes other things, like bandwidth, don't become a bottleneck; if they do, you need to balance that too. Of course, if you want to implement better AA, shading, lighting, rays, etc., you would need even more power, which would mean even more flops for calculations. I'm not being extremely precise: the PS4 is not 2 teraflops, it's 1.84, and you get just a little more than 2 million pixels on screen, but the idea is the same, just as when someone says 1000 MB of RAM is 1 GB, which is close to 1024, the right number.
|
It's not math. You are asserting a logical fallacy by taking two different constructs and forcing a relationship between them.
Until you can tell me how exactly single precision floating point relates to resolution, then your argument is completely baseless... Because the Playstation 2 operates at 6.2Gflops. The Playstation 4 is 1840 gflops. Aka. A 296x increase.
The Playstation 2 typically rendered at 307,200 pixels, whilst the Playstation 4 is pushing 2,073,600 pixels. Aka. A 6.75x increase.
Ergo. Flops has no relation to rendered resolution.
|
Ok, I'll go again with the teraflops thing.
I understand that there are other things besides teraflops in the graphics pipeline. In a GPU you have many cores, decoders, buffers, execution units, texture units, etc. Execution units can be 16-bit, 32-bit, 64-bit, SIMD or other. Then you have to feed the processor: you have different levels of cache, the bus bandwidth to memory, the type of memory, its frequency, ROPs and so on. It's complex, and designers try to balance that hierarchy to keep the processor fed. The processor performs 32-bit floating-point operations, and we call one of those a flop.
Now, high-end Radeon GPUs have between 10.5 and 13 teraflops, and their graphics chips are still, in many cases, less performant than Nvidia's GTX 1080 with 9. Of course, flops are not the only thing you can use to compare. You can also look at peak texture filtering and pixel fill rates, peak rasterization rate, or bandwidth.
It's not precise for comparing graphics card performance, and it's worse if you want to compare different brands and architectures, BUT IT GIVES YOU AN IDEA. And staying within the same architecture, AMD in this case, we can expect an 11-13 teraflop AMD graphics card to be able to run 4K at 30 fps.
Now, I will try to explain your PS2 example with something similar.
There is a guy I respect a lot, and I believe he understands these things better than almost anybody. He has been in charge of all the Unreal engines: Epic Games founder Tim Sweeney.
He showed a slide in a DICE 2012 session with a computational analysis predicting what would be needed for next-gen consoles. What did he use for that? Teraflops.
DOOM, 1993: 320 x 200 x 30 fps x 6 operations per pixel = ~10 MFlops.
Unreal, 1998: 1024 x 768 x 30 fps x 48 ops per pixel = ~1 GFlop.
Samaritan demo, 2011: 1920 x 1080 x 30 fps x 40,000 operations per pixel = ~2.5 TFlops.
You can see it here, from 5:46 to 7:56: https://www.youtube.com/watch?v=XiQweemn2_A
And the next gen (PS4) didn't get there, and many games didn't run at 1080p 30 fps. He predicted that in 2011.
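The arithmetic behind those three slide figures can be checked with a quick sketch (the function name is mine, not Sweeney's; it just multiplies out resolution x frame rate x ops per pixel):

```python
def required_flops(width, height, fps, ops_per_pixel):
    """Rough shading budget in floating-point operations per second."""
    return width * height * fps * ops_per_pixel

# DOOM (1993): quoted as ~10 MFlops
doom = required_flops(320, 200, 30, 6)

# Unreal (1998): quoted as ~1 GFlop
unreal = required_flops(1024, 768, 30, 48)

# Samaritan demo (2011): quoted as ~2.5 TFlops
samaritan = required_flops(1920, 1080, 30, 40_000)

print(f"DOOM: {doom / 1e6:.1f} MFlops")          # 11.5 MFlops
print(f"Unreal: {unreal / 1e9:.2f} GFlops")       # 1.13 GFlops
print(f"Samaritan: {samaritan / 1e12:.2f} TFlops")  # 2.49 TFlops
```

So the slide's round numbers (10 MFlops, 1 GFlop, 2.5 TFlops) line up with the raw multiplication.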
Now, the difference, as you can see, is in the operations per pixel calculated by the GPU.
Older graphics cards didn't compute shadows or lighting, nor did they do transform and lighting, so you had 6 ops per pixel in the first case.
Then you had GPUs able to calculate light bouncing off a wall: 48 ops per pixel.
In the last case you have light bouncing from an object to the floor and then to your eye. The video explains it. And there is your difference.
To calculate all of that you need operations. A single-precision flop (what I'm talking about) is one 32-bit floating-point operation.
According to Tim Sweeney, you need 40,000 operations per pixel to handle lighting the way GPUs do this gen: 3 bounces of light.
Multiply that by 30 fps and the resolution, and you need 2.5 teraflops for native 1080p. He is not even talking about specific GPUs or other hardware, just teraflops.
Of course you can tweak your game or implement dynamic resolution or whatever, and GPUs now manage graphics more efficiently, but there is no magic. I mean, the Switch's Tegra can output 1080p with 0.5 teraflops, but not at the same quality as a PS4 Pro, or with the same AA, ambient occlusion, supersampling, or whatever.
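You can also run the same formula backwards to see what that tradeoff means. Assuming the post's ~0.5 teraflop figure for the Switch's Tegra (an assumption, not an official spec) and native 1080p at 30 fps, the remaining per-pixel budget comes out far below the 40,000-ops figure:

```python
# Inverting the flops budget: at a fixed ~0.5 TFlops (assumed Switch
# Tegra figure), how many ops per pixel are left for native 1080p30?
budget = 0.5e12                       # 0.5 teraflops
pixels_per_second = 1920 * 1080 * 30  # native 1080p at 30 fps
ops_per_pixel = budget / pixels_per_second
print(round(ops_per_pixel))  # ~8038, well under 40,000
```

Which is the point: same resolution, but a fraction of the per-pixel shading work.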
With his formula, keeping 3 bounces of light, for 4K you would need 3840 x 2160 x 30 fps x 40,000 = ~10 teraflops.
You now have PC graphics cards that achieve that with 9 teraflops, others with 12, but that doesn't change much from his 2011 prediction.
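The 4K extrapolation is the same multiplication with the resolution swapped out:

```python
# Same per-pixel budget (40,000 ops, 3 light bounces), native 4K at 30 fps.
flops_4k = 3840 * 2160 * 30 * 40_000
print(f"{flops_4k / 1e12:.2f} TFlops")  # 9.95 TFlops, i.e. roughly 10
```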
Now, do you want a leap beyond that? 4 bounces of light? Real global illumination? Real next gen? It won't happen with the PS5. 3 years is nothing.
PS: If you don't agree and think Tim Sweeney's approximation is completely wrong, I have nothing else to say to you.
Last edited by CrazyGPU - on 13 February 2018