mutantsushi said:
Egann said:

For that reason, if the RAM pool has comparable speeds to the ESRAM, the console with the unified pool will not only be easier to develop for, it will actually be faster, because it has fewer operations to perform and is not continuously wasting operations flipping data in and out of the ESRAM pool.

Now, I could be wrong. All these units have APUs, so it's not likely that the X1 is burning any of its GPU time on ESRAM flipping. The power lost is more likely to come out of enemy AI or another background operation, not the graphics.
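A quick back-of-envelope in Python of the staging overhead being described here; the working-set size and frame rate are made-up illustrative numbers, not measured figures:

```python
# Rough model of the per-frame overhead a split memory pool adds.
# Every number here is an illustrative assumption, not a measured figure.

ESRAM_SIZE_MB = 32      # Xbox One's embedded scratchpad
WORKING_SET_MB = 96     # hypothetical render-target footprint per frame
FPS = 60

# Whatever doesn't fit must be paged in and out of the scratchpad each frame.
overflow_mb = max(0, WORKING_SET_MB - ESRAM_SIZE_MB)
copy_traffic_gb_s = overflow_mb * 2 * FPS / 1024  # *2: copy in + copy out

print(f"Extra staging traffic: {copy_traffic_gb_s:.1f} GB/s")
# A unified pool spends none of its bandwidth (or Move Engine time) on this.
```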

And remember that MS said it "discovered" that the reverse transfer of info was "free" only late in development, so the Move Engines that handle memory loads/unloads were not designed with that workload in mind.  That's besides the fact that the quoted ESRAM max bandwidth only applies if both directions of transfer are maxed out.  You can write a test program that does that easily enough, but in an actual game there will often be no use for fully loading the reverse direction, so that theoretical bandwidth just isn't helpful, on top of the other impediments to actually reaching a theoretical peak.  XBone devs have an extra restriction on their work: they need to optimize everything around the ESRAM, yet doing so doesn't achieve any real benefit vs. the PS4, whose GDDR can do the same thing without jumping through hoops (32MB-sized hoops).  PS4 devs can also consider other approaches to graphics that simply wouldn't be compatible with the 32MB ESRAM limitation.
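To put numbers on the bandwidth point, here's a rough Python sketch using the publicly quoted ESRAM figures (treat them as approximate):

```python
# Why the quoted ESRAM peak is misleading: it assumes reads and writes
# overlap nearly every cycle. Figures are from public interviews and
# should be treated as approximate.

ONE_WAY_GB_S = 109    # read-only or write-only ESRAM bandwidth
PEAK_GB_S = 204       # quoted peak; needs near-perfect read+write overlap

def effective_bandwidth(write_overlap):
    """Effective GB/s when only `write_overlap` of read cycles can be
    paired with a genuinely useful write."""
    return ONE_WAY_GB_S * (1 + write_overlap * (PEAK_GB_S / ONE_WAY_GB_S - 1))

for wo in (0.0, 0.25, 0.5, 1.0):
    print(f"write overlap {wo:>4.0%}: {effective_bandwidth(wo):6.1f} GB/s")
# With no useful write traffic to overlap, you get 109 GB/s, not 204.
```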

Your second point is missing a major thing: XBone's ESRAM is PHYSICALLY displacing GPU cores; it takes up die space that Sony uses for 50% more GPU cores.  That is the power loss, because the power JUST ISN'T THERE in the XBone.  What that power is used for is up to the dev, and the PS4 has a lot more flexibility because it has something like 8x the number of GPGPU compute queues, which let the GPU be used for standard graphics, physics, sound raycasting, even AI.  Those can use parts of GPU cores that are temporarily idle in their graphics function, or actually take over a core from graphics work.  (An interesting use is using GPGPU to achieve graphics more efficiently than standard shader models allow.)
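Rough math on that die-space trade-off, using the widely reported CU counts and clocks (approximate figures):

```python
# Back-of-envelope compute comparison. CU counts and clocks are the
# widely reported figures for each console; treat as approximate.

def gflops(cus, mhz):
    # 64 shader lanes per GCN CU, 2 FLOPs per lane per clock (FMA)
    return cus * 64 * 2 * mhz / 1000

xbone = gflops(cus=12, mhz=853)   # ESRAM occupies die area instead of CUs
ps4   = gflops(cus=18, mhz=800)   # 50% more CUs in the same die budget

print(f"Xbox One: {xbone:.0f} GFLOPS, PS4: {ps4:.0f} GFLOPS "
      f"({ps4 / xbone - 1:+.0%})")
```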

I did not know that. Wow, using ESRAM was a bone-headed mistake. Had they been a good manufacturer and used EDRAM like Nintendo, they would have only lost about 15% of the die to embedded memory. (An EDRAM cell is physically about a third the size of an SRAM cell: it uses one transistor plus a capacitor per bit, while SRAM uses six transistors.)
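A quick sanity check on the density claim, assuming a standard 6T SRAM cell vs. a 1T1C eDRAM cell (ballpark figures):

```python
# Rough transistor cost of 32 MB of on-die memory. A 6T SRAM cell is
# roughly 3x the area of a 1T1C eDRAM cell at the same process node.

BITS = 32 * 1024 * 1024 * 8          # 32 MB in bits
SRAM_TRANSISTORS = 6 * BITS          # 6 transistors per SRAM bit
EDRAM_TRANSISTORS = 1 * BITS         # 1 transistor (+ capacitor) per eDRAM bit

print(f"SRAM:  {SRAM_TRANSISTORS / 1e9:.2f}B transistors")
print(f"eDRAM: {EDRAM_TRANSISTORS / 1e9:.2f}B transistors")
# The Xbox One APU is roughly 5B transistors total, so 32 MB of 6T SRAM
# (~1.6B transistors) is a very large slice of the die budget.
```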