zarx said:
maxima64 said:


Wasn't that his point all along?

The fewer passes the better (as long as the end result is almost the same)?

Even if it's true that there are still rendering passes for the nearby objects, would you still prefer to waste rendering passes on both the nearby objects and the distant ones that can barely be seen?

 

The whole point is to reduce the rendering passes as much as possible, as long as the result is equal or almost equal to the approach with more passes. Why put more stress on the GPU when it's not necessary?

 

Seriously, that guy at least provides examples and reliable sources. What have you done besides randomly throwing words around without anything to support them?

I'm not even sure what your purpose is or what conclusion you want to reach. At least provide resources to prove your point; don't just say whatever you want.

If you want links, then:

http://c0de517e.blogspot.co.nz/2014/09/notes-on-real-time-renderers.html

http://c0de517e.blogspot.co.nz/2011/01/mythbuster-deferred-rendering.html

http://c0de517e.blogspot.co.nz/2008/04/how-gpu-works-part-2.html

http://dice.se/publications/directx-11-rendering-in-battlefield-3/

http://dice.se/publications/bending-the-graphics-pipeline/

http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2012/10/Deferred%20Shading%20Optimizations.pps

http://www2.disney.co.uk/cms_res/blackrockstudio/pdf/Rendering_Techniques_in_SplitSecond.pdf

http://www.slideshare.net/TiagoAlexSousa/secrets-of-cryengine-3-graphics-technology

Have fun

It would also be nice to copy and paste what you want me to read. From what I read, there are no lies from megafenix; those topics actually increase his credibility. Take, for example, your first link:

 

1.-http://c0de517e.blogspot.co.nz/2014/09/notes-on-real-time-renderers.html

"

Deferred shading
Geometry pass renders a buffer of material attributes (and other proprieties needed for lighting but bound to geometry, e.g. lightmaps, vertex-baked lighting...). Lighting is computed in screenspace either by rendering volumes (stencil) or by using tiling. Multiple shading equations need either to be handled via branches in the lighting shaders, or via multiple passes per light.

  • Benefits
    • Decouples texturing from lighting. Executes only texturing on geometry so it suffers less from partial quads, overdraw. Also, potentially can be faster on complex shaders (as discussed in the forward rendering issues).
    • Allows volumetric or multipass decals (and special effects) on the GBuffer (without computing the lighting twice).
    • Allows full-screen material passes like analytic geometric specular antialiasing (pre-filtering), which really works only done on the GBuffer, in forward it fails on all hard edges (split normals), and screen-space subsurface scattering.
    • Less draw calls, less shader permutations, one or few lighting shaders that can be hand-optimized well.
  • Issues
    • Uses more memory and bandwidth. Might be slower due to more memory communication needed, especially on areas with simple lighting.
    • Doesn't handle transparencies easily. If a tiled or clustered deferred is used, the light information can be passed to a forward+ pass for transparencies.

"
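The two-pass structure described in that quote (a geometry pass that writes material attributes, then a screen-space lighting pass) can be sketched as a toy model. This is my own illustrative Python, not code from the linked article; the buffer layout and Lambert-only lighting are assumptions for clarity.

```python
# Geometry pass: for each covered pixel, store material attributes
# (albedo, normal) in the G-buffer instead of shading immediately.
# Last write wins here; a real renderer would depth-test.
def geometry_pass(fragments, width, height):
    gbuffer = [[None] * width for _ in range(height)]
    for x, y, albedo, normal in fragments:
        gbuffer[y][x] = (albedo, normal)
    return gbuffer

# Lighting pass: runs once per screen pixel, reading attributes back
# from the G-buffer. The original geometry is never touched again,
# which is the whole point of deferring the shading.
def lighting_pass(gbuffer, lights):
    out = []
    for row in gbuffer:
        out_row = []
        for texel in row:
            if texel is None:        # no geometry covered this pixel
                out_row.append(0.0)
                continue
            albedo, normal = texel
            # Sum a simple N.L term over all lights in one screen-space pass.
            shade = sum(max(0.0, normal[0] * lx + normal[1] * ly + normal[2] * lz)
                        for (lx, ly, lz) in lights)
            out_row.append(albedo * shade)
        out.append(out_row)
    return out
```

Note how adding more lights only grows the per-pixel loop in `lighting_pass`; the geometry pass is unaffected.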

 

Or how about your second link:

2.-http://c0de517e.blogspot.co.nz/2011/01/mythbuster-deferred-rendering.html

"

1) Is deferred good?
Yes, deferred is great. Indeed, you should always think about it. If for "deferred" we mean doing the right computations in the right space... You see, deferred shading is "just" an application of a very general "technique". We routinely take these kind of decisions, and we should always be aware of all our options. 
Do we do a separable, two-pass blur or a single pass one? Do we compute shadows on the objects or splat them in screen-space? What do I pass through vertices, and what through textures?
We always choose where to split our computation in multiple passes, and in which space to express the computation and its input parameters. That is fundamental!

Deferred shading is just the application of this technique to a specific problem: what we do if we have many analytic, lights in a dynamic scene? With traditional "forward" rendering the lights are constant inputs to the material shader, and that creates a problem when you don't know which lights will land on which shader. You have to start to create permutations, generate the same shader with support of different number of lights, then at runtime see how many lights influence a given object and assign the right shader variant... All this can be complicated, so people started thinking that maybe having lights as shader constants was not really the best solution.

 

Let's say on the other hand that in your forward renderer you really hated to create multiple shaders to support different numbers of lights per object. So you went all multipass instead. First you render all your objects with ambient lighting only, then for each extra light you render the object with additive blending feeding as input that light.

It works fine but each and every time you're executing the vertex shader again, and computing the texture blending to multiply your light with the albedo. As you add more textures, things really become slow. So what? Well, maybe you could write the albedo out to a buffer and avoid computing it so many times. Hey! Maybe I could write all the material attributes out, normals and specular. Cool. But now really I don't need the original geometry at all, I can use the depth buffer to get the position I'm shading, and draw light volumes instead. Here it comes, the standard deferred rendering approach!

So yes, you should think deferred. And make your own version, to suit your needs!
2) Deferred is the only way to deal with many lights.
Well if you've read what I wrote above you already know the answer. No :)
Actually I'll go further than that and say that nowadays that the technique has "cooled down" there is no reason for anyone to be implementing pure deferred renderers. And if you're doing deferred chances are that you have a multipass forward technique as well, just to handle alpha. Isn't that foolish? You should at the very least leverage it on objects that are hit by a single light!
And depending on your game multipass on everything can be an option, or generating all the shader permutations, or doing a hybrid of the two, or of the three (with deferred thrown in too). Or you might want to defer only some attributes and not others, work in different spaces...

"

 

From what we can read in your links, they only support what megafenix has been saying from the beginning. With deferred rendering you avoid redundant computation and put less stress on the GPU, but it requires more memory bandwidth. It's true that the technique also has issues, like MSAA and alpha blending, but there are solutions for those, such as FXAA and using forward rendering just where transparency is required.
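To make the "avoid redundant computation at the cost of bandwidth" trade-off concrete, here is a back-of-envelope cost model. It is my own toy model, not taken from the linked posts: multipass forward re-evaluates the material per light and per overdrawn fragment, while deferred pays material evaluation once and then per-pixel light math.

```python
def forward_multipass_cost(pixels, lights, overdraw, material_cost, light_cost):
    # Every light pass re-rasterizes the geometry, so each pass pays both
    # the material fetch and the light math on every overdrawn fragment.
    return pixels * overdraw * lights * (material_cost + light_cost)

def deferred_cost(pixels, lights, overdraw, material_cost, light_cost):
    # Geometry pass: material evaluation on overdrawn fragments, once.
    # Lighting pass: light math once per visible pixel per light.
    return pixels * overdraw * material_cost + pixels * lights * light_cost
```

With one light and no overdraw the two come out equal, which matches the caveat above that fewer passes only help when there is redundant work to remove; with many lights and typical overdraw, the deferred figure is several times smaller.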

 

I don't see what's so difficult to understand. The point is that it's better to look for solutions that stress the GPU as little as possible, so the saved resources can be put to other good use. If there is a way to reduce the number of passes and the number of shaders needed for a task, and you can do it with results similar or equal to the first approach, then obviously you should use that solution. That's what optimization is about: using resources wisely. And since the Wii U's strength lies in its memory bandwidth, while it lacks horsepower compared to the other new consoles, wouldn't it be better to put that bandwidth to good use in a way that reduces the stress on the GPU by using fewer shaders for the same work?

 

To Pemalite

There is some truth to what you say. Nevertheless, the advantages of deferred vs forward are already known and proven, and it's not just about how many passes but also how many shaders you are using. If many shaders are busy on one task, they cannot be used for anything else until they are released, and with deferred there are fewer shaders in use, so the available shaders can be put to other work. Obviously, if a solution uses fewer passes but still takes the same time as the one with more passes, there would be almost no benefit, so you have to take that into account. But even if two approaches take the same time, that doesn't mean they have the same memory or shader power requirements, so we must account for that as well.

 

To Hynad

It's clear you have not been following the thread. To prove a point you at least need backup, and the sources and explanations megafenix provided are clear. Sure, he mistook the framebuffers for G-buffers in Shin'en's comment, but at least he admitted it, and the way Shin'en phrased the comment could have been misread by anyone, whereas others here won't admit their mistakes even when the proof is in front of their noses.

I actually like the thread. Even if the G-buffers turned out to be framebuffers, knowing that Shin'en can fit 720p triple buffering in eDRAM, plus a G-buffer for deferred rendering (the tweet confirms the use of this technique, and the G-buffer is likely also being used for post-processing effects) and intermediate buffers, suggests to me that the Wii U eDRAM bandwidth could indeed be 500 GB/s or more. Even the Xbox One's ESRAM was running short for 900p (it could be double buffering; they don't say whether it's one, two, or three buffers, but most likely two, which is almost the same footprint as three 720p buffers) plus its G-buffer, while the Wii U is fitting 720p triple buffers, a G-buffer for deferred rendering and post-processing effects, and many intermediate buffers. That's a great deal, since it would be impossible to do with less bandwidth than the Xbox One's ESRAM has, and ESRAM also has advantages like no refreshing, which gives it better performance than eDRAM.
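For what it's worth, the capacity side of that claim can be checked with simple arithmetic (bandwidth can't be inferred this way, only footprint). Assuming 1280x720 RGBA8 color targets, a 4-target G-buffer, and a 32-bit depth buffer; these sizes are my own assumptions, not Shin'en's actual layout. The Wii U's eDRAM is 32 MiB.

```python
W, H, BPP = 1280, 720, 4             # 720p, 4 bytes per pixel
MIB = 1024 * 1024

color_buffer = W * H * BPP           # one 720p RGBA8 target, ~3.5 MiB
triple_buffer = 3 * color_buffer     # front + back + one extra
gbuffer = 4 * color_buffer           # assumed 4 render targets
depth = W * H * 4                    # 32-bit depth/stencil

total = triple_buffer + gbuffer + depth
print(f"triple buffer: {triple_buffer / MIB:.1f} MiB")
print(f"G-buffer:      {gbuffer / MIB:.1f} MiB")
print(f"total:         {total / MIB:.1f} MiB of 32 MiB eDRAM")
```

Under these assumptions the whole set comes to roughly 28 MiB, so it does squeeze into the 32 MiB, with little left over for the intermediate buffers mentioned in the tweet.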