| Intrinsic said: There is nothing wrong with a game being sub-1080p, and the fact that they needed to patch AC4 on PS4 to reach 1080p pretty much tells you all you need to know about Ubisoft. I was just pointing out why the reason or excuse they gave is BS. As I said, there is a reason they did what they did, and I know what it is. It's nothing bad per se; it's just that the reason they gave is not true, and honestly insulting if you know anything about how game rendering pipelines work. And their use of GPGPU has nothing to do with this. I also think some people don't realize how little bandwidth or actual data CPU tasks use. To help put things in perspective: the CPU you are using to read this right now (assuming you are reading this on some sort of desktop/laptop) runs an OS out of DDR3 RAM, and even assuming it's among the fastest DDR3 out there, it has a peak memory bandwidth of under 20GB/s, more like 17.8GB/s to be exact. Now, do you think a game running on that same PC, alongside its memory-hungry OS, all within the confines of that 20GB/s, is somehow bandwidth-starved on consoles with 60GB/s+ and 190GB/s+ of memory bandwidth? Oh, and thanks for the well wishes. I have only been working on that particular app on and off for the past 3 years lol. It's becoming clear to me it's not a one-man job. |
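For anyone who wants to sanity-check the figures in that quote, here is a minimal C sketch of the usual peak-bandwidth formula: transfer rate (MT/s) times bus width in bytes. The single-channel DDR3-2133 desktop config is my assumption (dual channel would double it), and all of these are theoretical peaks, not sustained throughput.

```c
/* Back-of-the-envelope peak memory bandwidth:
 *   peak (GB/s) = transfer rate (MT/s) * bus width (bytes) / 1000
 * Spec-sheet figures only; real sustained bandwidth is lower. */
#include <stdio.h>

static double peak_gbs(double mtps, int bus_bits) {
    return mtps * (bus_bits / 8.0) / 1000.0; /* MT/s * bytes -> GB/s */
}

int main(void) {
    /* Typical desktop: single-channel DDR3-2133 on a 64-bit bus */
    printf("DDR3-2133 (1ch, 64-bit) : %6.1f GB/s\n", peak_gbs(2133, 64));
    /* Xbox One: DDR3-2133 main memory on a 256-bit bus */
    printf("XB1 DDR3  (256-bit)     : %6.1f GB/s\n", peak_gbs(2133, 256));
    /* PS4: GDDR5 at 5500 MT/s effective on a 256-bit bus */
    printf("PS4 GDDR5 (256-bit)     : %6.1f GB/s\n", peak_gbs(5500, 256));
    return 0;
}
```

That prints roughly 17.1, 68.3, and 176 GB/s, which is where the commonly quoted console numbers come from.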
Unless we put on tinfoil hats and assume a big conspiracy to downgrade the game, we have to assume Ubi stumbled into some kind of problem in reaching 1080p while retaining the full scope of the game.
And as you pointed out, console memory bandwidth is very high, and devs try to feed as much of it as possible to the GPU. But the CPU claim is understandable:
If the CPU is demanding too much bandwidth, that chunk is unavailable to the GPU for as long as the CPU is busy. And a lot of computation has to go back and forth between CPU and GPU within the ~33ms frame budget (one frame at 30fps); even if the GPU goes idle, it has to wait for the CPU to finish its tasks and free the memory.
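A toy model of that contention, as a rough C sketch: worst case, the GPU stalls for the slice of the frame where the CPU owns the bus. Every figure here (the 12ms CPU slice especially) is an illustrative assumption, not a measurement.

```c
/* Worst-case serialization on a shared bus: the GPU idles while
 * the CPU owns it, so GPU traffic scales with the time left over.
 * All figures are illustrative assumptions, not measurements. */
#include <stdio.h>

int main(void) {
    const double frame_ms    = 1000.0 / 30.0; /* ~33.3 ms per frame at 30fps */
    const double bus_gbs     = 176.0;         /* PS4 peak GDDR5 bandwidth    */
    const double cpu_busy_ms = 12.0;          /* assumed CPU-owned slice     */

    /* Bytes the GPU could stream if it owned the bus all frame: */
    double ideal_gb = bus_gbs * frame_ms / 1000.0;
    /* Bytes it gets if it fully stalls during the CPU slice: */
    double real_gb  = bus_gbs * (frame_ms - cpu_busy_ms) / 1000.0;

    printf("ideal per-frame GPU traffic : %.2f GB\n", ideal_gb);
    printf("with %.1f ms CPU stall      : %.2f GB (%.0f%% lost)\n",
           cpu_busy_ms, real_gb, 100.0 * (1.0 - real_gb / ideal_gb));
    return 0;
}
```

The point of the sketch: even when the raw bandwidth the CPU consumes is small, any stretch where the GPU has to wait on the CPU costs wall-clock time out of the frame, and that loss shows up no matter how fat the bus is.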
I don't know whether the whole PS4 memory pool can be read and written in the same cycles, or how deep the pipeline can get before the CPU starts starving. If I remember correctly, Cerny even said that was why they considered eDRAM for a long time during development, before deciding on one big, fast pool of GDDR5.
There are bottlenecks everywhere, no matter how advanced the architecture is.