
Forums - Gaming Discussion - Shinen is using triple buffering for the gbuffer on Fast Racing Neo, bandwidth is not a problem

megafenix said:

See, only a minute and you read the tweet?

what i brought is a fact, go read it yourself if you doubt it

 "

Deferred rendering needs multiple framebuffers. We also store GPU computed intermediate buffers there for faster access.

"

It would be more accurate for Shin'en to say that "deferred rendering needs multiple render targets" rather than multiple framebuffers. Render target textures in deferred rendering are used to compute multiple lights in a single pass. Framebuffers are meant to store the final colour, and I doubt the Wii U has an efficient way of ordering fragments in a UAV to make use of multiple framebuffers for fast but also accurate OIT ... 
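For context on the OIT point: order-independent transparency implementations typically accumulate all transparent fragments for a pixel in an unordered buffer (a UAV on D3D11-class hardware), then sort and blend them back-to-front in a resolve pass. Here is a toy CPU sketch of that resolve step (illustrative only, not any console's actual implementation; colours are single floats to keep it short):

```python
# Toy resolve step for order-independent transparency: fragments for one
# pixel arrive in arbitrary order, get sorted back-to-front by depth,
# then alpha-blended over the opaque background colour.

def resolve_oit(background: float, fragments: list) -> float:
    """fragments: list of (depth, colour, alpha); larger depth = farther away."""
    colour = background
    # Sort farthest-first so standard "over" blending composites correctly.
    for depth, frag_colour, alpha in sorted(fragments, reverse=True):
        colour = frag_colour * alpha + colour * (1.0 - alpha)
    return colour

# Two transparent layers submitted out of order still blend correctly:
frags = [(1.0, 1.0, 0.5), (5.0, 0.0, 0.5)]  # near white layer, far black layer
print(resolve_oit(0.5, frags))  # 0.625
```

On real hardware the unordered accumulation usually happens in a per-pixel linked list written from the pixel shader; the sort-and-blend runs in a full-screen resolve pass.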



zarx said:
Triple buffering refers to the framebuffer and Vsync, not the G-buffer. You can use triple buffering on a forward renderer just as well as on a deferred one. A whole bunch of games used deferred rendering on PS360; in fact, most AAA games since about 2007 have used some variation of deferred rendering. And Crytek are not referring to bandwidth with that comment but rather memory capacity, as in 32MB is too small to put everything into, so they had to pick and choose what they would put into it.

MSAA is possible to implement on a deferred renderer, but it is insanely expensive as you have to apply it to every single render target. You will never see that implemented on consoles, but some games on PC do support deferred MSAA.
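To illustrate the distinction being drawn here, this is a minimal sketch (not Shin'en's actual code) of triple buffering at the swap-chain level: with three buffers the renderer always has a spare buffer to draw into while a finished frame waits for vsync, so it never stalls on the display.

```python
# Minimal sketch of a triple-buffered swap chain (illustrative only).
# The three buffers rotate through roles: "front" (being scanned out),
# "pending" (finished, waiting for vsync), and "back" (being rendered).

class TripleBuffer:
    def __init__(self):
        self.front = 0       # index being displayed
        self.pending = None  # index of a finished frame awaiting vsync
        self.back = 1        # index the renderer draws into

    def finish_frame(self):
        """Renderer finished drawing into the back buffer."""
        if self.pending is not None:
            spare = self.pending  # overwrite the stale pending frame
        else:
            spare = 3 - self.front - self.back  # indices 0+1+2 sum to 3
        self.pending = self.back
        self.back = spare  # start the next frame immediately, no stall

    def vsync(self):
        """Display refresh: flip the newest finished frame to the front."""
        if self.pending is not None:
            self.front, self.pending = self.pending, None
```

Note that none of this cares how the back buffer was produced, which is why triple buffering is orthogonal to forward vs deferred shading.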

+1 ... 



fatslob-:O said:
megafenix said:

See, only a minute and you read the tweet?

what i brought is a fact, go read it yourself if you doubt it

 "

Deferred rendering needs multiple framebuffers. We also store GPU computed intermediate buffers there for faster access.

"

It would be more accurate for Shin'en to say that "deferred rendering needs multiple render targets" rather than multiple framebuffers. Render target textures in deferred rendering are used to compute multiple lights in a single pass. Framebuffers are meant to store the final colour, and I doubt the Wii U has an efficient way of ordering fragments in a UAV to make use of multiple framebuffers for fast but also accurate OIT ... 

English is not their first language, it could be a translation mistake.

So would these "render targets" be stored in eDRAM? Cos according to them the whole reason they upgraded to a deferred rendering solution with their new engine was due to the Wii U's large eDRAM making it doable at 60fps.



zarx said:
Triple buffering refers to the framebuffer and Vsync, not the G-buffer. You can use triple buffering on a forward renderer just as well as on a deferred one. A whole bunch of games used deferred rendering on PS360; in fact, most AAA games since about 2007 have used some variation of deferred rendering. And Crytek are not referring to bandwidth with that comment but rather memory capacity, as in 32MB is too small to put everything into, so they had to pick and choose what they would put into it.

MSAA is possible to implement on a deferred renderer, but it is insanely expensive as you have to apply it to every single render target. You will never see that implemented on consoles, but some games on PC do support deferred MSAA.


Yeah, I read the tweets again and they most likely mean triple framebuffers. As for deferred rendering being used on PS3 and 360, from what I have read it was not widely used and developers had to do some workarounds to get it to work; on PS3 they used the SPUs (it seems they had to use 5 in Killzone).

http://www2.disney.co.uk/cms_res/blackrockstudio/pdf/BlackRockStudioTechnology_Deferred_Shading.pdf

Here some developers had to dismiss deferred rendering because the G-buffer required 12MB of eDRAM and the Xbox 360 only had 10MB of it, not to mention that those 10MB were needed for the 720p framebuffer.
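Those numbers are easy to sanity-check with back-of-the-envelope arithmetic (a sketch assuming 32 bits per pixel per render target at 1280x720, a common G-buffer layout, not the exact format from the Black Rock paper):

```python
# Rough G-buffer size estimate at 720p (illustrative assumptions,
# not the exact layout from the Black Rock paper).
WIDTH, HEIGHT = 1280, 720
BYTES_PER_PIXEL = 4  # one RGBA8 (32-bit) render target

target_mb = WIDTH * HEIGHT * BYTES_PER_PIXEL / (1024 * 1024)
print(f"one 32-bit render target: {target_mb:.2f} MB")  # 3.52 MB

# Three colour targets plus a 32-bit depth/stencil buffer already
# exceed the Xbox 360's 10 MB of eDRAM, which is why developers
# resorted to tiling or trimmed-down G-buffer layouts.
gbuffer_mb = 4 * target_mb
print(f"4-target G-buffer: {gbuffer_mb:.2f} MB")  # 14.06 MB
```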

 

 

Here is another article where they seem to have achieved deferred rendering on PS3 and 360, but by using some implementation tricks to reduce the memory bandwidth requirements, which could mean more shader power than would normally be required if you did it the normal way.

http://webstaff.itn.liu.se/~perla/Siggraph2011/content/talks/18-ferrier.pdf

 

I am not saying that it's impossible to use this technique on the last-gen consoles, but clearly the implementation isn't straightforward and could mean using more shader power, since the PS3 is using 5 SPUs for this technique and the Xbox 360 is using parallelism in the GPU plus help from the CPU. A normal implementation of this technique would not require this much shader power. Implemented this way it may still be better than forward rendering and would still save shader power, but it doesn't look like it would give the performance it would on a console with enough memory bandwidth to use it without these workarounds.



curl-6 said:

English is not their first language, it could be a translation mistake.

So would these "render targets" be stored in eDRAM? Cos according to them the whole reason they upgraded to a deferred rendering solution with their new engine was due to the Wii U's large eDRAM making it doable at 60fps.

Alright, I want you to know this the most ... 

DO NOT take deferred rendering as something of a FEAT. The reason why we moved to deferred rendering in the first place was because it put less pressure than a forward renderer at the time. Deferred rendering is just a technique! 

Yes you can store render targets in the eDRAM ... 



curl-6 said:
zarx said:
 There were a whole bunch of games that used deferred rendering on PS360

Looking at the wikipedia page, the ones listed all seem to be 30fps.


Most games on PS360 were 30fps whether they were deferred rendered or not. There were a few 60fps ones though; for example, Trials HD is 60fps with deferred rendering, and if I remember correctly Mortal Kombat 9 and Wipeout HD are also 60fps with deferred rendering.



@TheVoxelman on twitter

Check out my hype threads: Cyberpunk, and The Witcher 3!

fatslob-:O said:
curl-6 said:

English is not their first language, it could be a translation mistake.

So would these "render targets" be stored in eDRAM? Cos according to them the whole reason they upgraded to a deferred rendering solution with their new engine was due to the Wii U's large eDRAM making it doable at 60fps.

Alright, I want you to know this the most ... 

DO NOT take deferred rendering as something of a FEAT. The reason why we moved to deferred rendering in the first place was because it put less pressure than a forward renderer at the time. Deferred rendering is just a technique! 

Yes you can store render targets in the eDRAM ... 


Well, it depends. As the article about deferred rendering in Ogre 3D says, in the worst case forward rendering may require shader work proportional to objects * lights, while deferred rendering takes objects + lights. If that's the case you save a large amount of shader power: with 6 lights and 6 objects, forward rendering could be 6*6 = 36 batches while deferred rendering would be 6+6 = 12, meaning forward rendering would require about 3x more shader power than deferred rendering. And this is a basic example; games surely use more than 6 lights and 6 objects.

http://www.ogre3d.org/tikiwiki/tiki-index.php?page=Deferred+Shading

"

What is Deferred Shading?

Deferred shading is an alternative approach to rendering 3d scenes. The classic rendering approach involves rendering each object and applying lighting passes to it. So, if an ogre head is affected by 6 lights, it will be rendered 6 times, once for each light, in order to accumulate the affection of each light. 
Deferred shading takes another approach : In the beginning, all of the objects render their "lighting related info" to a texture, often called the G-Buffer. This means their colours, normals, depths and any other info that might be relevant to calculating their final colour. Afterwards, the lights in the scene are rendered as geometry (sphere for point light, cone for spotlight and full screen quad for directional light), and they use the G-buffer to calculate the colour contribution of that light to that pixel.

See the links in Further Reading section to read more about it. It is recommended to understand deferred shading before reading this article, as the article focuses on implementing it in ogre, and not explaining how it works.

Deferred Shading Advantages

The main reason for using deferred shading is performance related. Classic rendering (also called forward rendering) can, in the worst case, require num_objects * num_lights batches to render a scene. Deferred shading changes that to num_objects + num_lights, which can often be a lot less. 
Another reason is that some new post-processing effects are easily achievable using the G-Buffer as input. If you wanted to perform these effects without deferred shading, you would've had to render the whole scene again.

Deferred Shading Disadvantages

There are several algorithmic drawbacks with deferred shading - transparent objects are hard to handle, anti aliasing can not be used in DX9 class hardware, additional memory consumption because of the G-Buffer. 
In addition to that, deferred shading is harder to implement - it overrides the entire fixed function pipeline. Pretty much everything is rendered using manual shaders - which probably means a lot of shader code.

"

 

zarx said:
curl-6 said:
zarx said:
 There were a whole bunch of games that used deferred rendering on PS360

Looking at the wikipedia page, the ones listed all seem to be 30fps.


Most games on PS360 were 30fps whether they were deferred rendered or not. There were a few 60fps ones though; for example, Trials HD is 60fps with deferred rendering, and if I remember correctly Mortal Kombat 9 and Wipeout HD are also 60fps with deferred rendering.

Wipeout HD was one of the few games I've ever heard of that had a variable resolution instead of a variable framerate; now that is interesting.

http://www.eurogamer.net/articles/wipeout-hds-1080p-sleight-of-hand
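The Eurogamer article describes Wipeout HD scaling its horizontal resolution per frame to hold 60fps. A minimal sketch of that kind of dynamic-resolution controller (illustrative only; the game's actual heuristic is not public in detail, and the numbers here are assumptions):

```python
# Sketch of a dynamic horizontal-resolution controller in the spirit of
# Wipeout HD: drop the render width when a frame runs over budget, creep
# back up when there is headroom. All constants are illustrative.

FRAME_BUDGET_MS = 16.67           # 60 fps target
MIN_WIDTH, MAX_WIDTH = 1280, 1920
STEP = 64                         # adjust width in coarse steps

def next_width(current_width: int, last_frame_ms: float) -> int:
    if last_frame_ms > FRAME_BUDGET_MS:
        # Over budget: render fewer pixels next frame.
        return max(MIN_WIDTH, current_width - STEP)
    if last_frame_ms < FRAME_BUDGET_MS * 0.85:
        # Comfortable headroom: try a higher resolution again.
        return min(MAX_WIDTH, current_width + STEP)
    return current_width

# Two slow frames shrink the width, then headroom lets it recover:
width = 1920
for frame_ms in [18.0, 18.5, 16.0, 12.0]:
    width = next_width(width, frame_ms)
print(width)  # 1856
```

The upscale back to the display resolution happens in the final blit, which is the "sleight of hand" the article's title refers to.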

 

And yes, although deferred rendering was possible on PS3 and 360, the implementation was not straightforward and required the use of at least 5 SPUs on PS3, and GPU parallelism plus the CPU on Xbox 360, to get it to work. Not saying it isn't a good idea, but clearly they used far too much shader power that would not have been needed if they had had the memory bandwidth for this technique in the first place, so obviously the implementation on the new consoles, including the Wii U, requires less shader power than those workarounds.

 

 



megafenix said:


Yeah, I read the tweets again and they most likely mean triple framebuffers. As for deferred rendering being used on PS3 and 360, from what I have read it was not widely used and developers had to do some workarounds to get it to work; on PS3 they used the SPUs (it seems they had to use 5 in Killzone).

http://www2.disney.co.uk/cms_res/blackrockstudio/pdf/BlackRockStudioTechnology_Deferred_Shading.pdf

 

Here some developers had to dismiss deferred rendering because the G-buffer required 12MB of eDRAM and the Xbox 360 only had 10MB of it, not to mention that those 10MB were needed for the 720p framebuffer.

 

 

Here is another article where they seem to have achieved deferred rendering on PS3 and 360, but by using some implementation tricks to reduce the memory bandwidth requirements, which could mean more shader power than would normally be required if you did it the normal way.

http://webstaff.itn.liu.se/~perla/Siggraph2011/content/talks/18-ferrier.pdf

 

I am not saying that it's impossible to use this technique on the last-gen consoles, but clearly the implementation isn't straightforward and could mean using more shader power, since the PS3 is using 5 SPUs for this technique and the Xbox 360 is using parallelism in the GPU plus help from the CPU. A normal implementation of this technique would not require this much shader power. Implemented this way it may still be better than forward rendering and would still save shader power, but it doesn't look like it would give the performance it would on a console with enough memory bandwidth to use it without these workarounds.


If Shin'en weren't using similar tricks I would be very surprised. They would be stupid to do a straight implementation when there are so many ways of improving it depending on the specific requirements of the game and the hardware it's running on. If they didn't, it would be leaving easy optimisation on the table, and I very much doubt a studio like that would do that. As for it being hard, yes, there are issues with using it, but there are for forward rendering as well. And if DICE USA could do it on the OG Xbox with a licensed Shrek game, it's clearly not that big a problem.




fatslob-:O said:
curl-6 said:

English is not their first language, it could be a translation mistake.

So would these "render targets" be stored in eDRAM? Cos according to them the whole reason they upgraded to a deferred rendering solution with their new engine was due to the Wii U's large eDRAM making it doable at 60fps.

Alright, I want you to know this the most ... 

DO NOT take deferred rendering as something of a FEAT. The reason why we moved to deferred rendering in the first place was because it put less pressure than a forward renderer at the time. Deferred rendering is just a technique! 

Yes you can store render targets in the eDRAM ... 

I am aware it is a technique. One that makes it more efficient to do some things. It's not so much that it's a feat, more that it's something Wii U seems better equipped for than last gen consoles.



zarx said:
megafenix said:


Yeah, I read the tweets again and they most likely mean triple framebuffers. As for deferred rendering being used on PS3 and 360, from what I have read it was not widely used and developers had to do some workarounds to get it to work; on PS3 they used the SPUs (it seems they had to use 5 in Killzone).

http://www2.disney.co.uk/cms_res/blackrockstudio/pdf/BlackRockStudioTechnology_Deferred_Shading.pdf

 

Here some developers had to dismiss deferred rendering because the G-buffer required 12MB of eDRAM and the Xbox 360 only had 10MB of it, not to mention that those 10MB were needed for the 720p framebuffer.

 

 

Here is another article where they seem to have achieved deferred rendering on PS3 and 360, but by using some implementation tricks to reduce the memory bandwidth requirements, which could mean more shader power than would normally be required if you did it the normal way.

http://webstaff.itn.liu.se/~perla/Siggraph2011/content/talks/18-ferrier.pdf

 

I am not saying that it's impossible to use this technique on the last-gen consoles, but clearly the implementation isn't straightforward and could mean using more shader power, since the PS3 is using 5 SPUs for this technique and the Xbox 360 is using parallelism in the GPU plus help from the CPU. A normal implementation of this technique would not require this much shader power. Implemented this way it may still be better than forward rendering and would still save shader power, but it doesn't look like it would give the performance it would on a console with enough memory bandwidth to use it without these workarounds.


If Shin'en weren't using similar tricks I would be very surprised. They would be stupid to do a straight implementation when there are so many ways of improving it depending on the specific requirements of the game and the hardware it's running on. If they didn't, it would be leaving easy optimisation on the table, and I very much doubt a studio like that would do that. As for it being hard, yes, there are issues with using it, but there are for forward rendering as well. And if DICE USA could do it on the OG Xbox with a licensed Shrek game, it's clearly not that big a problem.


On PS3 and 360 they had to do it that way since the bandwidth wasn't enough, and those implementations came at a cost, since they had to use 5 SPUs on PS3 to achieve this technique and GPU parallelism plus the CPU on 360. That means they had to sacrifice more shader power to achieve the implementation. The advantage of deferred rendering should be using less shader power, but here it seems to be the contrary. Sure, they may still have been using less shader power than forward rendering would have required, but it still put too much pressure on the hardware due to the lack of memory bandwidth. Using these workarounds on the Wii U would bring no benefit, since the console would be put under too much pressure; it's better to sacrifice bandwidth than shader power in the Wii U's case. Since the Wii U has plenty of bandwidth, you would save more shader power with this technique than the PS3 and 360 were able to using those tricks.