zarx said:
megafenix said:

With multiple passes you can achieve more, sure, but I am not suggesting you always use a single pass; I'm suggesting you use as few passes as possible, to strain the hardware as little as possible. That's the kind of approach deferred rendering takes versus forward rendering: deferred rendering puts less strain on the shaders and the pipeline and can render more lights with fewer resources (except for its memory bandwidth needs) than forward rendering would need for those same lights. Sure, it is not perfect and trades off bandwidth so that you don't use too much shader power, but since the Wii U has plenty of memory bandwidth and is low on shader power, this approach is ideal.

That's why I brought up the topic of single pass vs multipass. Of course it is almost impossible to achieve in a single pass everything you can in multiple passes, but that's not the point; the point is that it is better to look for a solution that uses as few passes as possible to do work that would have taken more passes with another approach.


Actually, that is not always true; in fact, using multiple passes will often be faster, and easier to implement. Complex shaders have a significant performance impact even on the latest GPUs. In many cases, several simple shader passes will end up cheaper than trying to do everything in a single pass, especially in modern game engines where you can mix a lot of different textures and effects on a single model, and lots of different models in the same scene. Often it is cheaper to do each in a separate pass, so that you can use simple shaders with less setup time and you don't waste performance invoking assets and effects in areas that don't need them.

For example, if you are rendering a human character with subsurface scattering (SSS) on their skin and completely different shading for their clothes, it will probably be faster to do a separate pass for each, rather than do it all in a single pass with one super-complex shader that handles both types of shading. The more you try to do in a single pass, the more redundant processing you will need, especially if you have multiple types of materials and effects in different parts of the scene: that requires branching, where the shader has to work out which effects apply to each pixel, and branches are very expensive on GPUs.
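The skin-vs-cloth argument can be sketched with a toy cost model (the costs and group size below are hypothetical numbers for illustration, not benchmarks of any real GPU). The key effect is that GPUs shade pixels in lockstep groups, so a group containing pixels on different branches pays for both branch bodies.

```python
# Toy cost model: one branching "uber-shader" vs. two simple per-material passes.
# SKIN_COST, CLOTH_COST, GROUP_SIZE and pass_overhead are made-up illustrative
# values; real numbers depend entirely on the hardware and shaders involved.

SKIN_COST = 40      # hypothetical instruction cost of the SSS skin shading
CLOTH_COST = 15     # hypothetical cost of the cloth shading
GROUP_SIZE = 32     # pixels shaded together in lockstep (a warp/wavefront)

def uber_shader_cost(mixed_groups, uniform_groups):
    # A group with both skin and cloth pixels executes both branch bodies;
    # uniform groups execute one branch (worst case assumed here).
    return (mixed_groups * GROUP_SIZE * (SKIN_COST + CLOTH_COST)
            + uniform_groups * GROUP_SIZE * max(SKIN_COST, CLOTH_COST))

def two_pass_cost(skin_pixels, cloth_pixels, pass_overhead=2000):
    # Each pass shades only its own pixels with a simple shader,
    # plus a fixed per-pass setup cost.
    return skin_pixels * SKIN_COST + cloth_pixels * CLOTH_COST + 2 * pass_overhead

# Example: 1000 pixel groups, 300 of them mixed at the skin/cloth boundary.
mixed, uniform = 300, 700
skin_px = cloth_px = 500 * GROUP_SIZE   # assume an even split overall
print(uber_shader_cost(mixed, uniform))  # 1424000 cost units
print(two_pass_cost(skin_px, cloth_px))  # 884000 cost units
```

Under these assumptions the two simple passes win despite touching the geometry twice; with few mixed groups or very cheap branches the balance can tip the other way, which is why "it depends" is the honest answer.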

Another example is transparent objects in a deferred renderer. Normally, deferred rendering does not support transparent or translucent objects; the methods for rendering transparencies in a deferred renderer are super expensive and usually involve multiplying the size of the G-buffer. So the way most devs get around this is to just render transparent objects in a completely separate forward shading pass.


It depends on the application, but in most cases it is better to try to achieve things with fewer passes so that you can save shader power and use it for other things. That's the whole point of deferred vs forward: forward stresses the GPU too much, while deferred uses fewer shader and pipeline resources by trading off memory bandwidth.
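The deferred-vs-forward trade-off can be made concrete by counting light evaluations (a rough model with made-up scene numbers, ignoring light culling, tiling, and the G-buffer bandwidth cost that deferred pays in exchange):

```python
# Rough shading-work comparison: counts of per-light shading evaluations,
# not real timings. Naive forward shading evaluates every light for every
# rasterized fragment of every object (including overdraw); deferred shading
# evaluates lights once per screen pixel using data stored in the G-buffer.

def forward_light_evals(fragments_per_object, num_objects, num_lights):
    # naive single-pass forward: lights x all shaded fragments
    return fragments_per_object * num_objects * num_lights

def deferred_light_evals(screen_pixels, num_lights):
    # deferred: geometry pass writes the G-buffer once; the lighting pass
    # touches each visible pixel once per light
    return screen_pixels * num_lights

pixels = 1280 * 720   # hypothetical 720p target
fwd = forward_light_evals(fragments_per_object=50_000, num_objects=200, num_lights=30)
dfr = deferred_light_evals(pixels, num_lights=30)
print(fwd)  # 300000000 light evaluations
print(dfr)  # 27648000 light evaluations
```

The gap is where the "more lights with fewer shader resources" claim comes from; what the count hides is that deferred pays instead in G-buffer writes and reads, i.e. memory bandwidth.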

here

http://http.developer.nvidia.com/GPUGems/gpugems_ch28.html

"

 Optimizing Vertex Processing

  • Reduce the number of vertices processed. This is rarely the fundamental issue, but using a simple level-of-detail scheme, such as a set of static LODs, certainly helps reduce vertex-processing load.
  • Use vertex-processing LOD. Along with using LODs for the number of vertices processed, try LODing the vertex computations themselves. For example, it is likely unnecessary to do full four-bone skinning on distant characters, and you can probably get away with cheaper approximations for the lighting. If your material is multipassed, reducing the number of passes for lower LODs in the distance will also reduce vertex-processing cost.

Speeding Up Fragment Shading

If you're using long and complex fragment shaders, it is often likely that you're fragment-shading bound. If so, try these suggestions:

  • Render depth first. Rendering a depth-only (no-color) pass before rendering your primary shading passes can dramatically boost performance, especially in scenes with high depth complexity, by reducing the amount of fragment shading and frame-buffer memory access that needs to be performed. To get the full benefits of a depth-only pass, it's not sufficient to just disable color writes to the frame buffer; you should also disable all shading on fragments, even shading that affects depth as well as color (such as alpha test).
  • Consider using fragment shader level of detail. Although it offers less bang for the buck than vertex LOD (simply because objects in the distance naturally LOD themselves with respect to pixel processing, due to perspective), reducing the complexity of the shaders in the distance, and decreasing the number of passes over a surface, can lessen the fragment-processing workload.

"

 

Of course, deferred rendering is not perfect; besides the huge bandwidth requirements, you also have issues like alpha blending and MSAA. But there are good solutions for those: instead of MSAA, for example, you can use temporal anti-aliasing, which is also cheaper than MSAA and better suited to deferred rendering, and FXAA (Fast Approximate Anti-Aliasing) is another good option.

 

As for transparency, well, you can combine deferred and forward rendering, with forward rendering used just for the parts where you need transparency, or you can use some of the other solutions found here:

http://www.csc.kth.se/utbildning/kth/kurser/DD143X/dkand12/Group1Marten/final/final_TransparencyWithDeferredShading.pdf

"

How can we render transparent object using deferred shading?

Within the frame of this project, several techniques for rendering transparent objects

were examined on their advantages and disadvantages. Below we suggest a

review of previous studies on deferred shading as well as some of the practical solutions

for application of this technique in transparency. We propose a prototype of own developed technique for rendering of transparent objects. The built of our

deferred shader is described along with actual integration, of the front renderer and

deferred shader techniques, is explained in particular. We discuss the test results of

our prototype in terms of performance and image quality and describe the model

we used for this testing.

 

Rendering transparent objects with deferred shading impose some problems as the

depth-buffer used for rendering during deferred shading only supports one fragment

at a time. In our work, we have chosen to use alpha-blending in a post pass using

front rendering. Despite the flaws of alpha-blending, it is still very straightforward

and is easy to implement into a deferred shader. In the next section we discuss

basic functions of our algorithm

 

Deferred shading with Front Rendering

The front render is used to process transparent objects and fits well into the deferred

shading pipeline. It renders all opaque objects first with the deferred shader

and then renders the transparent objects on top using the front renderer. This is

important as the depth buffer has to be filled with opaque objects first, to prevent

rendering of non-visible transparent objects. When the front renderer is performed,

the final picture can be rendered to the frame buffer for display.

The implementation has the following rendering stages:

1. Render all opaque geometry to the G-Buffer

2. Bind the G-Buffer as texture. For each light in the scene draw a full screen rectangle

and calculate lighting at each pixel using the data from the G-Buffer. Save

result in the P-Buffer.

3. Sort all transparent entities in back to front order.

4. Render all transparent geometry using the front renderer. Blend the result to the

P-Buffer using the depth buffer to filter out any non-visible transparent geometry.

5.-Copy P-Buffer to frame buffer.

"