Pemalite said:
CrazyGPU said:
Despite texture compression, geometry culling, new rasterization techniques, voxels and so on, I don't see the next gen being game-changing.

It's techniques like those that allow nVidia GPUs to get 50% or more performance out of the same number of flops... And there is still more to come.

All I am getting at is that it will allow us to do more with less hardware... It means we won't need 512 GB/s of bandwidth, as we can achieve similar results with something like 384 GB/s.

And Voxels?

CrazyGPU said:

Raw numbers are an indicator, not an exact comparator, but still, even considering all these techniques we are far from the jumps of the old days, and the same amount of improvement will feel smaller, because we start from a much better image quality level than we did back then.

They are an indicator, but they aren't accurate; the issue is that most people just see the black-and-white numbers and run with them instead of taking a deeper look for a proper comparison.


Talking about graphics here:

You don't do an exact comparison either. You don't know if the techniques are going to save 10%, 20% or 40% of bandwidth, and you don't know how AMD will implement them in the new hardware. So you don't know whether an uncompressed 512 GB/s stream of data can be squeezed down to 480, 384, or 256 GB/s. Even taking those techniques into account, you are inaccurate too. It's like comparing Nvidia teraflops to AMD teraflops: the number can be the same, but in practice the Nvidia implementation makes much better use of that theoretical maximum than AMD does right now, so you can't compare different architectures and be accurate. But since you don't have anything better for a proper comparison, you have to go with something. So we compare with what we have: teraflops, GB/s, and so on. And the comparison is better when we compare similar architectures from the same brand.
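Just to illustrate how wide that uncertainty is, here is a rough sketch in Python. The 480/384/256 GB/s figures are the hypothetical examples from this post, not anything AMD has confirmed; it just shows what average compression savings each bus width would need to behave like a 512 GB/s uncompressed stream.

```python
# Rough sketch: what average compression savings a given raw bus would need
# to carry the equivalent of a 512 GB/s uncompressed stream.
# The candidate bus widths are hypothetical examples, not confirmed specs.
UNCOMPRESSED_GBPS = 512.0

for raw_bus in (480.0, 384.0, 256.0):
    savings = 1.0 - raw_bus / UNCOMPRESSED_GBPS
    print(f"{raw_bus:.0f} GB/s bus needs ~{savings:.0%} average savings")
```

Depending on the assumption, that is anywhere from roughly 6% to 50% savings, which is exactly why the raw numbers stay the only common yardstick.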

With your numbers, going from nearly 0.2 teraflops on PS3 to a little more than 1.8 teraflops on PS4 is about a 9x jump. There is no way the PS5 will have 9 times the teraflops of the PS4.

Also, techniques or not, the jump from the standard PS4's 176 GB/s to, let's say, 512 GB/s, equivalent to 800 GB/s uncompressed just to put a number on it, is far smaller than going from the PS3's 22.4 GB/s to the PS4's 176 GB/s. And there is no way a PS5 will have 8 times more bandwidth to feed the processor.
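Putting those bandwidth jumps side by side (the PS3 and PS4 figures are the real specs; the 512 and 800 GB/s PS5 figures are my own guesses from above, not leaks):

```python
# Generational bandwidth jumps, using the figures from this post.
ps3, ps4 = 22.4, 176.0                  # GB/s, real specs
ps5_raw, ps5_effective = 512.0, 800.0   # GB/s, hypothetical PS5 numbers

print(f"PS3 -> PS4: {ps4 / ps3:.1f}x")                        # ~7.9x
print(f"PS4 -> PS5 (raw bus): {ps5_raw / ps4:.1f}x")          # ~2.9x
print(f"PS4 -> PS5 (effective): {ps5_effective / ps4:.1f}x")  # ~4.5x
```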

So the two things that really matter for improving performance in a balanced graphics architecture, the calculation (teraflops) and the feeding of that calculation (cache and memory bandwidth, theoretical or with techniques), will improve less than they did before, and the improvement will feel less significant than before even if it were the same.

Software is not going to solve that. PS4 performance was always similar to a Radeon HD 7850-7870 on PC, and no exclusive programming changed the graphics capability of the console. And even if you think it did, it never turned into a GeForce GTX 1060 because of it.

With a 10-12 teraflop PS5 machine, we would have a 5.4-6.5x improvement in theoretical teraflops, and with 800 GB/s of uncompressed bandwidth (if you consider that the PS4 did not compress anything) the improvement would be about 4.5x.
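The same back-of-the-envelope math, with 1.84 TF for the stock PS4 and 10-12 TF as a guess for the PS5:

```python
# Theoretical compute jump, PS4 -> hypothetical 10-12 TF PS5.
ps4_tf = 1.84                    # stock PS4 spec
for ps5_tf in (10.0, 12.0):      # guessed PS5 range, not a leak
    print(f"{ps5_tf:.0f} TF would be {ps5_tf / ps4_tf:.1f}x the PS4")
print(f"For reference, PS3 (~0.2 TF) -> PS4 was about {ps4_tf / 0.2:.0f}x")
```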

So again, you will have 4K at 30 fps, 60 in some games, with PS4-level graphics and a little more once devs get used to it, but nothing to write home about.

A great CPU, hard disk, or anything else is not going to change that. It's not going to be the ray tracing beast with new lighting and geometry that many of us would wish for.