JustBeingReal said:
Graphics rendering never saturates 100% of the GPU's time; GPU downtime is exactly what compute queues are designed to take advantage of. It's all about making sure processing time isn't wasted, i.e. using the hardware as efficiently as possible instead of letting resources sit idle. An example of this is something like Assassin's Creed Unity, where Ubisoft stated that if it weren't for the weaker CPUs in the PS4 and Xbox One they could run the game at 100 FPS. As it stands they're running the game at less than a third of that speed most of the time, wasting all of that GPU downtime which could otherwise be used for physics and AI. If we look at Ubisoft's benchmarks and Sony's recent SDK 2.0 slides (http://www.dualshockers.com/2014/11/23/ps4-sdk-2-0-revealed-includes-a-lot-of-interesting-new-tech-and-features-game-developers-can-use/), the CPU can be used for physics or AI that the player directly interacts with (the close-up, specific stuff), while the GPU could easily handle huge crowds filled with either. The GPU can supplement part of the demand, so it can easily be used to make up for the weaker CPU. The PS4 also has additional ALUs compared to the XB1, plus extra texture mapping units and ROPs, so higher resolutions, better AA, AF and more demanding textures can all be taken advantage of, while even better physics and AI simulation is easily programmable by developers.
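To put rough numbers on that idle-time argument (purely illustrative: the 100 FPS figure is Ubisoft's quote, the 30 FPS target and the reading of "100 FPS" as ~10 ms of GPU work per frame are my assumptions):

```python
# Back-of-the-envelope GPU idle-time estimate for a CPU-bound game.
# Assumption: "could run at 100 FPS if not for the CPU" implies the GPU
# finishes a frame's rendering work in roughly 1/100 of a second.
gpu_render_ms = 1000 / 100           # ~10 ms of actual GPU rendering per frame
actual_fps = 30                      # game ships CPU-limited at ~30 FPS (assumed)
frame_budget_ms = 1000 / actual_fps  # ~33.3 ms wall-clock per frame

idle_ms = frame_budget_ms - gpu_render_ms
idle_fraction = idle_ms / frame_budget_ms
print(f"GPU idle per frame: {idle_ms:.1f} ms ({idle_fraction:.0%})")
```

Under those assumptions roughly 23 ms per frame, about 70% of the GPU's time, is sitting idle and could in principle be filled with compute-queue physics/AI work.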
But you can't use that downtime if other resources or channels are already congested. I'm not saying it can't be used, but the GPU is restricted by and dependent on other resources, and those are shared between the CPU and the GPU's own contexts, so we will have to see to what extent devs are actually able to exploit it; I'm not sure it's that much. There will be a trade-off between using the GPU for graphics or for GPGPU: you can't have both in that case without penalizing one or the other.
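The shared-resource point can be sketched with a toy model where graphics and GPGPU compete for one memory bus. The 176 GB/s figure is the PS4's published memory bandwidth; the per-frame traffic numbers are invented purely for illustration:

```python
# Toy model: graphics and GPGPU traffic share a single memory bus, so
# adding compute work lowers the frame-rate ceiling even if ALUs are free.
TOTAL_BW_GBPS = 176.0  # PS4's published peak memory bandwidth

def max_fps(graphics_gb_per_frame, compute_gb_per_frame):
    """FPS ceiling if memory bandwidth were the only bottleneck."""
    per_frame_gb = graphics_gb_per_frame + compute_gb_per_frame
    return TOTAL_BW_GBPS / per_frame_gb

print(max_fps(4.0, 0.0))  # graphics alone (hypothetical 4 GB/frame of traffic)
print(max_fps(4.0, 1.5))  # add GPGPU physics/AI traffic -> ceiling drops
```

With these made-up numbers the bandwidth-limited ceiling falls from 44 FPS to 32 FPS once compute traffic is added, which is the trade-off being described: "free" ALU time still costs shared bandwidth.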