
Forums - Gaming Discussion - Shinen is using triple buffering for the gbuffer on Fast Racing Neo, bandwidth is not a problem

megafenix said:


really?

what interpretation can you make from this?

"

Who Uses TBR?

• Microsoft
  • Talisman
• Imagination Technologies
  • KYRO and KYRO II (Desktop PC)
  • PowerVR CLX2 (Sega Dreamcast)
  • PowerVR MBX (OpenGL ES 1.x)
  • PowerVR SGX (targets OpenGL ES 2.0)
• AMD
  • Imageon 2380 (OpenGL ES 1.x)
  • Xenos (Xbox 360)
  • Z430 and Z460 (targets OpenGL ES 2.0)

"

It's tile-based rendering in the sense that the framebuffer is split up to be rendered in multiple passes; however, a true tile-based GPU also employs geometry binning and ray casting for hidden surface removal ...
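
For illustration only (not from the slide deck or either poster), here is a minimal sketch of just the binning step mentioned above: split the screen into fixed-size tiles and assign each triangle to every tile its screen-space bounding box overlaps, so each tile can later be rasterized and shaded out of fast on-chip memory. The tile size and triangle data are made-up values.

```python
# Hypothetical illustration of "geometry binning" in a tile-based renderer:
# each triangle is assigned to every screen tile its bounding box touches.
from collections import defaultdict

TILE_SIZE = 32            # pixels per tile side (varies by GPU; assumed here)
SCREEN_W, SCREEN_H = 1280, 720

def bin_triangles(triangles):
    """Map (tile_x, tile_y) -> list of triangle indices overlapping that tile."""
    bins = defaultdict(list)
    for idx, tri in enumerate(triangles):
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        # Clamp the bounding box to the screen, then convert to tile coords.
        x0 = max(0, int(min(xs))) // TILE_SIZE
        x1 = min(SCREEN_W - 1, int(max(xs))) // TILE_SIZE
        y0 = max(0, int(min(ys))) // TILE_SIZE
        y1 = min(SCREEN_H - 1, int(max(ys))) // TILE_SIZE
        for ty in range(y0, y1 + 1):
            for tx in range(x0, x1 + 1):
                bins[(tx, ty)].append(idx)
    return bins

# Two made-up screen-space triangles.
tris = [((10, 10), (100, 20), (50, 90)),
        ((600, 300), (700, 310), (650, 400))]
for tile, ids in sorted(bin_triangles(tris).items()):
    print(tile, ids)
```

After binning, hidden surface removal and shading would happen per tile against only that tile's triangle list, which is the part the sketch leaves out.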



fatslob-:O said:
It's tile-based rendering in the sense that the framebuffer is split up to be rendered in multiple passes; however, a true tile-based GPU also employs geometry binning and ray casting for hidden surface removal ...

Oh well, you don't believe that guy, right?

How about these guys then?

http://www.freescale.com.cn/cstory/ftf/2009/download/aut_f0265.pdf



I kind of skimmed this, and I feel like megafenix needs to stop copying and pasting the same stuff over and over again, and fatslob should better substantiate his claims when telling people they are wrong. It would be easier to just lay it all out at once rather than giving little bits of info piece by piece. When someone presents incorrect information, you don't just say "you're wrong" or "you're wrong because of X" based on your own authority.
It's on you to provide the proof if a claim (especially one that comes with supporting information, whether or not that information actually backs it) is incorrect or misleading.



MDMAlliance said:
I kind of skimmed this, and I feel like megafenix needs to stop copying and pasting the same stuff over and over again ...


Very well. Of course, the reason I copy and paste the same stuff is that this fatslob guy avoids it in the conversation, but I guess everything is on the table now; that trustworthy source and that image clearly demonstrate that the AMD Z430 is tile based.



megafenix said:

Oh well, you don't believe that guy, right?

How about these guys then?

http://www.freescale.com.cn/cstory/ftf/2009/download/aut_f0265.pdf

*sigh* 

I'll believe AMD and Microsoft over Freescale Semiconductor because they at least specialize in graphics ...

The one here who should be more honest is you. Just admit that you're wrong already ...



fatslob-:O said:
*sigh*

I'll believe AMD and Microsoft over Freescale Semiconductor because they at least specialize in graphics ...

The one here who should be more honest is you. Just admit that you're wrong already ...


Since that source and image demonstrate you were wrong, why should I admit something I am not mistaken about?

With all due respect, I admit that I was wrong about the 3 G-buffers and that those were 3 framebuffers of 720p. But as for what you mentioned about deferred rendering not providing that good performance, when developers and other important people don't agree, why don't you admit they were right and you were wrong?

 

Being right about one thing doesn't mean being right about everything; everybody, including me, can make mistakes. I already admitted one of mine, so why don't you follow and do the same for a change?

Ah, and you may like this too:

http://www.dailytech.com/Freescale+Licenses+AMD+Technologies/article8909.htm

"

Freescale Licenses AMD Technologies
Anh Tuan Huynh (Blog) - September 17, 2007 3:43 PM

AMD OpenGL ES 2.0 and OpenVG 1.0 technologies coming to a Freescale i.MX processor near you

AMD today announced Freescale Semiconductor will license its 2D and 3D graphics technology. Freescale Semiconductor will use the AMD graphics technologies to equip its i.MX processors with OpenGL ES 2.0 and OpenVG 1.0 technologies. OpenGL ES 2.0 and OpenVG technologies are designed for mobile applications where battery life is key, including portable gaming, navigation and media player devices.

"

 

That and this is all there is. You can keep denying it, but reliable sources are far more believable than you (page 44):

http://www.freescale.com.cn/cstory/ftf/2009/download/aut_f0265.pdf



megafenix said:


Since that source and image demonstrate you were wrong, why should I admit something I am not mistaken about?

With all due respect, I admit that I was wrong about the 3 G-buffers and that those were 3 framebuffers of 720p. But as for what you mentioned about deferred rendering not providing that good performance, when developers and other important people don't agree, why don't you admit they were right and you were wrong?

Y'know what, I'm not going to bother with you anymore since this is getting off topic ...

Go make a new thread if you want to argue about this ...



fatslob-:O said:
Y'know what, I'm not going to bother with you anymore since this is getting off topic ...

Go make a new thread if you want to argue about this ...


Very well, although arguing is not my style; I prefer to talk and reach a conclusion.



Some questions:

They promised 8K textures. Are they still on that?

The Xbox One also has 32 MB of ESRAM, and the PS4 doesn't.
Why does every Xbox One game have problems with 1080p while no PS4 game does?
So, can't this kind of RAM compensate for some raw power?



fatslob-:O said:
Pemalite said:


Then it all died out for a while, probably because the PC was advancing rapidly and the Xbox 360 and PlayStation 3 launched, so it was easier/cheaper for developers to ditch tiled rendering so they could push out games faster.

Of course, things eventually changed: mobile came storming into the market with ARM Mali, PowerVR, and
Qualcomm's Adreno (aka Mobile Radeon) all pushing tiled approaches due to power and efficiency.
Then the Xbox 360 and PlayStation 3 simply got old, people still demanded better graphics, so tiled rendering was a solution once again, and it seems to have continued to carry onwards, which is a good sign.

That's half the story, but ...

They died out because they couldn't solve the hardware-accelerated transform and lighting issue fast enough.

After shaders were added to their designs they always made the most sense in the mobile space, where bandwidth was sparse; plus, I don't think Adreno features tile-based rendering.

I'm not so sure that tile-based rendering was the solution for pushing better graphics. It's really good for the purpose of geometry binning, but that's about it. I almost forgot, but tile-based rendering also came back to desktops with 2nd-gen Nvidia Maxwell and Intel Haswell featuring it! Both of those GPU architectures support conservative rasterization, which enables programmable binning, so it's possible to do some tile-based rendering, and maybe AMD can support conservative rasterization too with their GCN GPUs, just like how their hardware has always supported volume tiled resources as well as pixel shader ordering ...

True! There were issues in getting TnL working with tile-based rendering.

Ironically, TnL with tile-based rendering only became possible when manufacturers abolished the dedicated TnL hardware in their GPUs and performed those functions on the shader hardware instead.

megafenix said:

It's true that multi-pass rendering is an option, but multiple passes put too much work on the GPU compared to a single pass:

http://www.orpheuscomputing.com/downloads/ATI-smartshader.pdf

"

The key improvements offered by ATI's SMARTSHADER™ technology over existing hardware vertex and pixel shader implementations are:

• Support for up to six textures in a single rendering pass, allowing more complex effects to be achieved without the heavy memory bandwidth requirements and severe performance impact of multi-pass rendering

Every time a pixel passes through the rendering pipeline, it consumes precious memory bandwidth as data is read from and written to texture memory, the depth buffer, and the frame buffer. By decreasing the number of times each pixel on the screen has to pass through the rendering pipeline, memory bandwidth consumption can be reduced and the performance impact of using pixel shaders can be minimized. DirectX® 8.1 pixel shaders allow up to six textures to be sampled and blended in a single rendering pass. This means effects that required multiple rendering passes in earlier versions of DirectX® can now be processed in fewer passes, and effects that were previously too slow to be useful can become more practical to implement.

"

http://books.google.com.mx/books?id=BV8MeSkHaD4C&pg=PA64&lpg=PA64&dq=multi+pass+rendering+vs+single+pass&source=bl&ots=oUGfAJqSQO&sig=2y6jekyjXj1FpAxE8wICmzw5B1E&hl=es&sa=X&ei=vyNNVOPsPKHmiQLFvIDICQ&ved=0CDwQ6AEwBA#v=onepage&q=multi%20pass%20rendering%20vs%20single%20pass&f=false
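
To make the bandwidth argument in the quoted passage concrete, here is a rough, hypothetical back-of-the-envelope sketch (the per-pixel byte costs are assumptions, and caching, compression, and overdraw are ignored) of how frame and depth buffer traffic grows with the number of rendering passes versus sampling several textures in a single pass:

```python
# Hypothetical comparison of per-frame buffer traffic for multi-pass vs
# single-pass multi-texturing, in the spirit of the ATI quote above.
WIDTH, HEIGHT = 1280, 720
PIXELS = WIDTH * HEIGHT

BYTES_COLOR = 4   # 32-bit color read + write per pass (assumed)
BYTES_DEPTH = 4   # 32-bit depth read + write per pass (assumed)
BYTES_TEXEL = 4   # one uncompressed sample per texture (assumed)

def traffic_mib(num_textures, textures_per_pass):
    passes = -(-num_textures // textures_per_pass)  # ceiling division
    per_pixel = passes * (2 * BYTES_COLOR + 2 * BYTES_DEPTH) + num_textures * BYTES_TEXEL
    return passes, per_pixel * PIXELS / (1024 ** 2)

for label, per_pass in (("multi-pass (1 texture/pass)", 1),
                        ("single pass (6 textures/pass)", 6)):
    passes, mib = traffic_mib(6, per_pass)
    print(f"{label}: {passes} pass(es), ~{mib:.1f} MiB of buffer traffic per frame")
```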

 

As you can read from the developers' diary, they implemented deferred rendering using 5 SPUs of the PS3 (there are only 8 and one of them is not available for games) and parallelism between the Xbox 360 GPU and CPU. The key advantage of G-buffers is that you use less shader power, but here the PS3 and 360 had to put a lot of pressure on the hardware to achieve it, which almost nullifies one of the advantages of the technique, due to the lack of memory bandwidth. Since the Wii U's eDRAM has plenty of bandwidth, it doesn't need to use multi-pass rendering or too much shader power to achieve the technique, and it has much better performance. On the 360 I could bet that, besides the pressure on the GPU, 720p was done with a single buffer and not double buffering, since that way you would reduce the eDRAM memory consumption to 5 MB and with some trickery use the rest for the deferred rendering (normally 12 MB of eDRAM would do, but they are not there).

You can read here about the problems developers went through to implement deferred rendering on the last-generation consoles, and obviously requirements like needing 5 SPUs out of the 8 existing ones are not very cheap, and using parallelism on the 360 GPU plus the help of the CPU isn't very cheap either.

 

 

http://webstaff.itn.liu.se/~perla/Siggraph2011/content/talks/18-ferrier.pdf

 

 

On the Wii U you won't have to do this, and the primary advantage of deferred rendering will be available (use less shader power by trading bandwidth). Triple framebuffers at 720p are only about 10.8 MB, whereas on the 360, 10 MB was barely enough for 720p with double buffering. And as you can read in the first article, developers wanted to use deferred rendering on the 360, but they needed 12 MB of eDRAM that was not present (that's why some time later developers used the trick found in the second article). In the Wii U's case, making some calculations, it's likely that a G-buffer would take up about 8.64 MB of eDRAM; combining that with the triple framebuffer is only about 19.44 MB, which leaves 12.6 MB of eDRAM plus the extra 3 MB of faster eDRAM and SRAM. That's why, besides the triple framebuffering and the G-buffer, Shin'en is still able to fit some intermediate buffers there.
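
As a sketch of the kind of budgeting being described: the sizes below are computed from assumed buffer formats (RGBA8 colour targets, a 32-bit depth-stencil buffer, a three-target G-buffer), so the totals will not match the poster's 10.8 MB / 8.64 MB figures exactly; they only show how one might check whether everything fits in the Wii U's 32 MB of eDRAM.

```python
# Hypothetical eDRAM budget check for the scenario described above.
# Buffer formats are assumptions for illustration, not Shin'en's actual layout.
EDRAM_MIB = 32
WIDTH, HEIGHT = 1280, 720

def buffer_mib(bytes_per_pixel, count=1):
    return WIDTH * HEIGHT * bytes_per_pixel * count / (1024 ** 2)

framebuffers = buffer_mib(4, count=3)   # triple-buffered RGBA8 color (assumed)
depth        = buffer_mib(4)            # 24-bit depth + 8-bit stencil (assumed)
gbuffer      = buffer_mib(4, count=3)   # e.g. albedo + normals + params (assumed)

used = framebuffers + depth + gbuffer
print(f"3x 720p color buffers:  {framebuffers:.2f} MiB")
print(f"depth-stencil:          {depth:.2f} MiB")
print(f"G-buffer (3 targets):   {gbuffer:.2f} MiB")
print(f"total used:             {used:.2f} MiB of {EDRAM_MIB} MiB")
print(f"left for other buffers: {EDRAM_MIB - used:.2f} MiB")
```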

 

As for the tiling technology, yeah, it's something pretty cool, and something tells me that Shin'en is using it in Fast Racing Neo, looking at the terrain here.

It looks like the terrain is composed of tiles. I bet they used it, since it's impossible to fit the 4K-8K textures in the texture memory (even BC1 compression gives you about 10 MB of storage for each texture), and so dividing the textures into tiles makes it possible to use the texture cache or texture memory.

http://books.google.com.mx/books?id=bmv2HRpG1bUC&pg=PA281&lpg=PA281&dq=tiles+textures+tessellation&source=bl&ots=6hOJ8zd7wA&sig=mtlU58XVFicKUMz5klAr4cDRX9w&hl=es&sa=X&ei=TPQ3VNKyCtKRNs7vgqAD&ved=0CFwQ6AEwCw#v=onepage&q=tiles%20textures%20tessellation&f=false
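
As a rough sanity check on the "about 10 MB per texture with BC1" figure above: BC1/DXT1 stores 8 bytes per 4x4 block (0.5 bytes per texel), so a 4K texture with a full mip chain lands around 10-11 MB, and splitting it into fixed-size tiles is what would let only the needed pieces be resident. The 128x128 tile size below is just an assumption for illustration.

```python
# Rough size check for the texture-tiling argument: BC1 texture size with a
# full mip chain, and how many fixed-size tiles the base level splits into.
BC1_BYTES_PER_TEXEL = 0.5   # BC1/DXT1: 8 bytes per 4x4 block

def bc1_size_mib(dim, with_mips=True):
    size = dim * dim * BC1_BYTES_PER_TEXEL
    if with_mips:
        size *= 4 / 3            # full mip chain adds roughly one third
    return size / (1024 ** 2)

TILE = 128  # texels per tile side (assumed)
for dim in (4096, 8192):
    tiles = (dim // TILE) ** 2
    print(f"{dim}x{dim} BC1 + mips: ~{bc1_size_mib(dim):.1f} MiB, "
          f"{tiles} tiles of {TILE}x{TILE}")
```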

"

No offense Megafenix, but for the love of god, stop posting ancient, irrelevant crap.

ATI's "SmartShader" doesn't apply to GPUs today; that was released 13 years ago and ended 7 years ago. Why you would think DirectX 9 shader operations apply to a DirectX 11 world beats me. AMD doesn't even use VLIW anymore, let alone separate vertex and pixel shader entities in the hardware.

As for your "6 textures in a single pass" - that's called single-pass multi-texturing; that technology has been around for decades. Shall I school you on it? Hint: it has nothing to do with the multi-pass rendering that everyone else is talking about.

Another issue in what you copy/pasted: you claim the PlayStation 3 has "8" SPUs and only 7 are available for games. Well... no. The PlayStation 3 has 8; one is disabled in order to increase yields and another is reserved for other tasks, which makes 6.

jonathanalis said:
Some questions:

They promised 8K textures. Are they still on that?

The Xbox One also has 32 MB of ESRAM, and the PS4 doesn't.
Why does every Xbox One game have problems with 1080p while no PS4 game does?
So, can't this kind of RAM compensate for some raw power?


Actually, eDRAM/eSRAM can compensate for bandwidth deficits in a console to a certain degree; that's the entire point of its invention and its historical use, even in Sony consoles.

In regards to the Xbox One and PlayStation 4 specifically, though, there is significantly more at play than just bandwidth differences, which is going to hamper the console; but I would suggest waiting to see what 343i does with Halo 5 to see what the hardware is capable of.

Also, please don't use "raw power" in reference to RAM/bandwidth; RAM doesn't have any compute hardware, so it cannot accelerate a damn thing.

If you are truly worried about the hardware, the graphics, resolution, framerates, then I suggest you do one thing... Drop all your underpowered consoles off a cliff, build a PC and join the PC Gaming Master Race.



--::{PC Gaming Master Race}::--