CrazyGPU said:

Ok, I'll go again with the teraflops thing.

I understand that there are other things besides teraflops in the graphics pipeline. In a GPU you have many cores, you have decoders, buffers, the execution units, texture units, etc. Execution units can be 16-bit, 32-bit, 64-bit, SIMD or other. Then you have to feed the processor: you have different levels of cache, then the bus bandwidth with memory, the type of memory, the frequency of it, ROPs and so on. It's complex and they try to balance the hierarchy to feed the processor. The processor makes 32-bit FP operations and we name that a flop.

I know exactly what a flop is.

However... you are ignoring half precision, quarter precision, double precision floating point, 8-bit integer, 16-bit integer... the list goes on and on, and none of it is captured by regular plain-jane flops.

And you still haven't been able to explain how it relates to the resolution a game is being rendered at, you have only explained what a flop is.

CrazyGPU said:
It's not precise for comparing graphics card performance, and worse if you want to compare different brands and architectures, BUT IT GIVES YOU AN IDEA. And speaking about the same architecture, AMD in this case, we can think that an 11-13 teraflop AMD graphics card would be able to run 4K at 30 fps.

It is only accurate if all other things are equal. Everything.
And because that almost never happens... it's useless as a common denominator.

You can still take a 2.7-teraflop AMD GPU, compare it to a 1.7-teraflop AMD GPU, and the 1.7-teraflop GPU will win, despite having almost a teraflop deficit in single-precision floating point.
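As a back-of-the-envelope illustration of where those paper numbers come from (all figures below are made up for illustration, not the specs of any real card):

```python
# Rough sketch: theoretical peak FP32 throughput of a GPU.
# The standard marketing formula assumes every shader core retires
# one fused multiply-add (counted as 2 ops) every clock cycle.

def peak_fp32_tflops(shader_cores: int, clock_ghz: float) -> float:
    """Peak single-precision TFLOPS, assuming one FMA (2 ops)
    per core per clock -- a best case that real workloads rarely hit."""
    return shader_cores * clock_ghz * 2 / 1000.0

# Two hypothetical AMD-style parts (illustrative numbers only):
wide_slow = peak_fp32_tflops(2304, 0.59)    # ~2.7 TFLOPS on paper
narrow_fast = peak_fp32_tflops(1024, 0.85)  # ~1.7 TFLOPS on paper
```

The formula says nothing about geometry throughput, bandwidth, ROPs, or how much of that peak the architecture can actually sustain, which is exactly why the "smaller" part can still win.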

CrazyGPU said:
He showed a slide at a DICE 2012 session about computational analysis for predicting what was needed for next-gen consoles. What did he use for that? Teraflops.


Obviously dumbed down for the less tech-literate, of course. And it happens often in the video gaming world. (E.g. the oft-used claim that higher resolutions somehow equate to higher development costs for video games, which is false.)

I have already provided an example that basically undermines your position: the PS2 → PS4 comparison.

CrazyGPU said:
And the next gen (PS4) didn't get there, and many games didn't run at 1080p 30 fps. He predicted it in 2011.

And if the PlayStation 4 only had a 40 GB/s memory bus, it would have been a 720p machine.

Funny how that works, huh?


CrazyGPU said:
Older graphics cards didn't reflect shadows or light, and they didn't do transformation and lighting either. So you had 6 ops per pixel in the 1st case.

Older graphics processors did support reflections... such as cube environment mapping. It's been a staple for decades.

Transform and lighting has been a graphics feature for decades; heck, even the Nintendo 64 had a T&L engine.
Pretty much every DirectX 7 card (every single GeForce/Radeon ever made in the history of graphics, except for ATI's RV100 chip) has support for transformation and lighting.

The difference between then and now is that... Back then it was all done on fixed function hardware.

CrazyGPU said:
That times 30 fps and the resolution, and you need 2.5 Teraflops for native 1080p. He is not even talking about GPUs or other stuff. Just Tflops.

Here is the thing though: flops is a theoretical peak number, often unachievable in real-world scenarios.
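For reference, the estimate being argued over reduces to a one-line formula. A quick sketch (the 40,000 ops/pixel figure is taken from the post above, not verified independently):

```python
# Back-of-the-envelope estimate discussed in this thread:
# required FLOPS = width * height * fps * ops_per_pixel.

def required_tflops(width: int, height: int, fps: int,
                    ops_per_pixel: int) -> float:
    """Theoretical FLOPS needed to shade every pixel, in TFLOPS."""
    return width * height * fps * ops_per_pixel / 1e12

print(required_tflops(1920, 1080, 30, 40_000))  # ~2.5 TFLOPS for 1080p30
print(required_tflops(3840, 2160, 30, 40_000))  # ~10 TFLOPS for 4K30
```

Which reproduces both numbers quoted in this exchange, and also shows why it's only a ceiling estimate: it assumes uniform work per pixel and perfect utilization.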


CrazyGPU said:
With his formula, and keeping 3 bounces of light, for 4K you would need 3840 x 2160 x 30 fps x 40,000 ops = 10 Teraflops.

And yet, despite that, game engines have gotten more efficient.
Having a tile-based deferred renderer has helped tremendously... Frostbite even implemented light culling, meaning it was able to have more lights with bounce than ever before.
http://www.dice.se/wp-content/uploads/2014/12/GDC11_DX11inBF3_Public.pdf
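A minimal CPU-side sketch of the light-culling idea (tile size, function names, and numbers are illustrative, not Frostbite's actual implementation):

```python
# Tile-based light culling, sketched on the CPU for clarity.
# The screen is split into fixed-size tiles; each tile keeps only the
# lights whose screen-space bounds overlap it, so shading loops over a
# short per-tile list instead of every light in the scene.

TILE = 16  # tile size in pixels (illustrative choice)

def cull_lights(width, height, lights):
    """lights: list of (x, y, radius) in screen space.
    Returns a dict mapping (tile_x, tile_y) -> list of light indices."""
    tiles = {}
    for i, (lx, ly, r) in enumerate(lights):
        # Tile range touched by this light's bounding square.
        x0 = max(0, int((lx - r) // TILE))
        x1 = min((width - 1) // TILE, int((lx + r) // TILE))
        y0 = max(0, int((ly - r) // TILE))
        y1 = min((height - 1) // TILE, int((ly + r) // TILE))
        for ty in range(y0, y1 + 1):
            for tx in range(x0, x1 + 1):
                tiles.setdefault((tx, ty), []).append(i)
    return tiles

# A light at (100, 100) with a 20px radius only lands in a 3x3 patch
# of 16px tiles -- every other tile on a 1080p screen skips it entirely.
grid = cull_lights(1920, 1080, [(100, 100, 20)])
```

The real thing runs in a compute shader with depth-bounds tests per tile, but the payoff is the same: per-pixel lighting cost scales with lights *near* that pixel, not total lights in the scene.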

CrazyGPU said:
Now, do you want a leap from that? 4 bounces of light? real global illumination. Real Next Gen? It won´t happen with PS5. 3 years is nothing.

Global illumination is already a thing.
And there are different types of global illumination, such as voxel cone tracing.
Unreal Engine 4 supported Sparse Voxel Octree Global Illumination (SVOGI), for instance.

http://www.icare3d.org/research-cat/publications/interactive-indirect-illumination-using-voxel-cone-tracing.html
https://www.geforce.com/whats-new/articles/stunning-videos-show-unreal-engine-4s-next-gen-gtx-680-powered-real-time-graphics



CrazyGPU said:
PS: Now, if you don't agree and think that Tim Sweeney's approximation is completely wrong, I have nothing else to say to you.

If you think a video from the last console generation is representative of hardware and the technology we have today, I have nothing else to say to you.




www.youtube.com/@Pemalite