fatslob-:O said:
Pemalite said:


I was nagged into looking at this thread via Steam.

Of course it would be.
You can do multiple render targets that fit just fine into 10MB of eDRAM, and you can also fit a 30MB G-buffer into that same 10MB of eDRAM.

Sounds impossible, right?

Not exactly - you can do multiple passes and you can do tiling.

After the first tile of the G-buffer is created, you can keep the Z in place and then proceed to light it, then you can move on to the next tile, then the next and the next... This method allows you to minimise the overhead of reloads, provided you aren't doing anything with neighbouring pixels.
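Roughly, the per-tile loop looks like this (a sketch only; the function names are placeholders I've made up, not anything from the actual 360 SDK):

```cpp
#include <vector>

// The three render functions below are made-up placeholders (stubbed out so
// this compiles), NOT the real Xbox 360 API.
struct TileRect { int x0, y0, x1, y1; };

void renderGBufferPass(const TileRect&)       {} // G-buffer MRTs + Z into eDRAM
void lightTileInPlace(const TileRect&)        {} // lighting passes, Z kept in place
void resolveTileToBackbuffer(const TileRect&) {} // copy the lit colour out to main memory

void tiledDeferredFrame(const std::vector<TileRect>& tiles)
{
    for (const TileRect& tile : tiles)
    {
        renderGBufferPass(tile);        // per-tile G-buffer fits in the 10MB of eDRAM
        lightTileInPlace(tile);         // no copy-out/reload of the G-buffer needed
        resolveTileToBackbuffer(tile);  // only the final colour leaves eDRAM
    }
    // As noted above, this only stays cheap if lighting a pixel never needs to
    // read neighbouring pixels across a tile border.
}
```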

From what I can recall, Xenos has a bandwidth cap of roughly 16GB/s during this process, so a 30MB G-buffer would take roughly 2ms to move, which is more than doable on the Xbox 360 with just 10MB of eDRAM.
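Quick sanity check on that 2ms figure (just the division, using the numbers above):

```cpp
#include <cstdio>

int main()
{
    const double gbuffer_bytes = 30.0 * 1024.0 * 1024.0;          // 30MB G-buffer
    const double bandwidth     = 16.0 * 1024.0 * 1024.0 * 1024.0; // ~16GB/s quoted above
    printf("%.2f ms\n", gbuffer_bytes / bandwidth * 1000.0);      // prints ~1.83 ms
    return 0;
}
```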

The Wii U just requires far less trickery to achieve the same thing, but don't think it's functionally superior, because it's not.

Also, Megafenix, after all this time you really should stop posting information that you clearly have zero idea about; you LITERALLY have no idea whether it's even correct or what half the stuff does.

Also, cats.

Do note that multipassing can consume tons of bandwidth ... 


That's true, and that's why using a single pass should be a priority, although it's not always possible.
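Just to put rough numbers on how quickly extra passes add up (my own back-of-the-envelope, reusing the 30MB G-buffer figure from above and assuming a 60fps target):

```cpp
#include <cstdio>

int main()
{
    const double gbuffer_mb = 30.0; // full-screen G-buffer size from the post above
    const double fps        = 60.0; // assumed target framerate
    for (int passes = 1; passes <= 4; ++passes)
    {
        // every extra full-screen pass re-reads the whole G-buffer once per frame
        const double gb_per_sec = gbuffer_mb * passes * fps / 1024.0;
        printf("%d pass(es): ~%.1f GB/s spent just re-reading the G-buffer\n",
               passes, gb_per_sec);
    }
    return 0;
}
```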

Talking about bandwidth, how much bandwidth would deferred rendering take on the PS3 if developers had to use 5 SPUs and, obviously, their internal memory bandwidth?

Each SPU had access to 256KB of internal memory (its local store), and they say each of them had about 300GB/s of bandwidth:

http://www.zdnet.com/blog/storage/build-an-8-ps3-supercomputer/220
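Using just the figures quoted above (5 SPUs, 256KB of local store each, ~300GB/s per SPU - the article's numbers, not anything I've measured), the aggregate works out like this, though in practice the limit would really be whatever the DMA path to main/video memory sustains, since the local stores are so small:

```cpp
#include <cstdio>

int main()
{
    const int    spus              = 5;     // SPUs used for lighting, per the post
    const double ls_kb_per_spu     = 256.0; // local store size quoted above
    const double ls_bw_per_spu_gbs = 300.0; // local-store bandwidth quoted above

    // Upper bound only: a full-screen G-buffer can't sit in 5 x 256KB of local
    // store, so it has to be streamed through in tiles via DMA from main memory.
    printf("Aggregate local store:   %.0f KB\n",   spus * ls_kb_per_spu);     // 1280 KB
    printf("Aggregate LS bandwidth:  %.0f GB/s\n", spus * ls_bw_per_spu_gbs); // 1500 GB/s
    return 0;
}
```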

 

One of the games that used deferred rendering on the PS3 was Killzone 2, and they say it used about 60% of the PS3's power. Considering that it used deferred rendering and that the technique requires 5 SPUs, it's credible that they indeed used 60% or more.

 

Developers really did a good job even with the limitations of the hardware, but they put too much pressure on it to achieve things that in this new generation will have much less of a performance hit, thanks to the additional memory bandwidth and the new features on the GPU and CPU.