CrazyGPU said:

Performance graphs are great, and they show that theoretically the PS4 GPU is in line with a Radeon HD 7850, or close to an HD 7870. It's odd that even with that amount of power, it's not able to run games like Battlefield 4 at 1080p/30 fps. Same with Watch Dogs. Those games run at 900p and upscale, and upscaling makes textures blurrier.

Now, could it be that the AMD Jaguar multicore CPU is holding back the performance of the GPU?

"Sucker Punch said in a note in their own GDC 2014 Post-Mortem (regarding the CPU) “While the CPU has ended up working pretty well, it’s still one of our main bottlenecks."

http://www.redgamingtech.com/infamous-second-son-post-mortem-part-2-ps4-performance-compute-particle-system/
Battlefield 4 uses 95% of the PS4's CPU power:
http://bf4central.com/2013/11/battlefield-4-uses-95-cpu-power-found-ps4-xbox-one/

I was thinking that with optimization, the PS4's "HD 7850 equivalent" GPU would get better graphics than its PC counterpart, but if the CPU is holding back performance, that might be why the PS4 can't achieve 1080p in most games the way the PC card does, and why the Xbox One sits closer to 720p with its weaker GPU.

Cerny says that GPGPU on the PS4 can make up for CPU weakness, but if the graphics card is doing compute work, can it still cope with the same graphics quality? Nvidia cards, for example, take a performance hit when they run PhysX.

Also, this link from this forum suggests that memory can become a bottleneck too.

http://gamingbolt.com/crytek-8gb-ram-can-be-easily-filled-up-will-surely-be-limiting-factor-on-ps4xbox-one

Any PS4 dev here or someone with deep knowledge to comment on this?


Memory is used to store all types of data. One weakness of AMD GPUs of that era is mediocre tessellation performance, so developers are often better off shipping a highly detailed mesh: it is simply faster to push more vertex data through the pipeline than to generate the extra quads on the fly with tessellation. The drawback is a clear increase in memory consumption. I think The Order: 1886 currently holds the record for vertex data, weighing in at just over 700MB for a level; last-gen games rarely came close to even 10MB of vertex data per level.

Render targets have also seen substantial increases in memory consumption, with Killzone Shadow Fall leading the pack at 800MB, whereas games from last generation didn't even hit 100MB. Textures remain the biggest culprit overall, and the situation doesn't improve with partially resident textures, since developers will just be motivated to use even higher-resolution textures.

The next thing to worry about is the desire to improve transparency: an A-buffer implementation of order-independent transparency adds a lot of memory overhead, because every pixel has to keep a list of all the transparent fragments that cover it.
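To put those render-target and A-buffer numbers in perspective, here is a minimal back-of-envelope sketch in C++. The formats, target counts, and per-fragment sizes are my own assumptions for illustration, not figures from any shipped game:

```cpp
#include <cstdio>

int main() {
    // Assumed 1080p frame dimensions.
    const double w = 1920.0, h = 1080.0;
    const double MiB = 1024.0 * 1024.0;

    // One RGBA16F render target: 4 channels * 2 bytes each.
    double rt = w * h * 4 * 2 / MiB;
    printf("Single RGBA16F target: %6.1f MiB\n", rt);

    // A hypothetical deferred G-buffer: 5 such targets plus a 32-bit depth buffer.
    double gbuffer = 5 * rt + w * h * 4 / MiB;
    printf("Hypothetical G-buffer: %6.1f MiB\n", gbuffer);

    // A-buffer OIT: every pixel stores a list of fragments. Assuming
    // 16 bytes per fragment (color + depth + next pointer) and an
    // average of 8 transparent layers per pixel (both assumed values):
    double abuffer = w * h * 16 * 8 / MiB;
    printf("A-buffer (8 layers):   %6.1f MiB\n", abuffer);
    return 0;
}
```

Even with these modest assumptions the A-buffer alone lands around 250MiB, and once you add shadow maps, intermediate post-processing buffers, and history buffers for temporal techniques, you can see how a renderer ends up in the hundreds of megabytes of render targets quoted above.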

And this isn't even accounting for the rest of the data: the in-game physics state, the game data itself, music, and everything else. I can definitely see why 8GB of memory might not be enough once a game leans on several of these memory-hungry techniques at once.
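For a rough sense of scale, here is a hedged budget tally that combines the two figures quoted above with placeholder guesses of my own for the remaining categories (marked as such in the code). Note also that PS4 titles reportedly get roughly 4.5-5.5GB of the 8GB, with the rest reserved by the OS:

```cpp
#include <cstdio>

int main() {
    // Two figures from the post above, plus placeholder guesses (marked).
    struct Item { const char* name; double mib; };
    const Item budget[] = {
        {"Vertex data (The Order: 1886 figure)", 700.0},
        {"Render targets (Killzone SF figure)",  800.0},
        {"Textures (placeholder guess)",        2048.0},
        {"Audio, animation, game data (guess)", 1024.0},
        {"Code, physics, heaps (guess)",         512.0},
    };

    double total = 0.0;
    for (const Item& it : budget) {
        printf("%-40s %7.0f MiB\n", it.name, it.mib);
        total += it.mib;
    }
    printf("%-40s %7.0f MiB\n", "Total", total);
    // Against a reported ~4.5-5.5 GB actually available to PS4 games,
    // a budget like this is already close to exhausted.
    return 0;
}
```

The guessed numbers could easily be off by a factor of two either way; the point is only that a few big line items get you to the memory ceiling much faster than "8GB" suggests.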