
Shinen is using triple buffering for the gbuffer on Fast Racing Neo, bandwidth is not a problem

fatslob-:O said:
Pemalite said:

There is Anisotropic filtering... And then there is Anisotropic filtering.

2GB? Raw, maybe, but that's not going to be its actual size when it comes to rendering time.

As for the GDDR5, economies of scale are what will bring the cost down. It happens constantly in the volatile DRAM market: when there is an abundance of DRAM chips (costs of production be damned!), prices drop. This is why DDR3 got to such crazy low prices.

I'm sure devs will be able to pull off 4x anisotropic filtering in their games EASILY on the PS4, so long as they account for it. With 72 TMUs, that should come for free.

Even with DXT5, the texture is still over 500MB!

Economies of scale only emphasize the cost advantages of producing at higher volumes. Like it or not, businesses will try to avoid selling at a loss, otherwise they won't be around for much longer. *cough* AMD *cough*

The market will correct itself one way or another by closing down manufacturers, because what you're describing simply isn't sustainable ...


I hope 4x anisotropic isn't all the PS4 pushes out; 8x should be the minimum and 16x should be the goal.
However, even at 16x you still have other techniques available to improve filtering quality. Just look at the anisotropic quality between nVidia and AMD: despite both having 16x options, they still offer differing levels of quality.

 

DXT5 assumes a 4:1 compression ratio, and there are methods for higher levels of compression. You could, for instance, set up a 128-bit paletted texture using 4 bits per pixel in the index buffer, which would give you an impressive 32:1 compression ratio, something that is ironically possible even on the Xbox 360.
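
To make the baselines behind those ratios explicit, here is a minimal arithmetic sketch in C++ (the 16384x16384 texture size and the 16-entry palette are assumptions for illustration, not figures from this thread): DXT5/BC3's 4:1 is measured against 32-bit RGBA8, while the 32:1 figure comes from comparing 4-bit palette indices against a 128-bit-per-pixel source.

#include <cstdio>
#include <cstdint>

int main() {
    // Hypothetical source texture; the thread talks about "16k" textures.
    const uint64_t w = 16384, h = 16384, pixels = w * h;

    // 32-bit RGBA8 source: 4 bytes per pixel.
    const uint64_t rgba8Bytes = pixels * 4;
    // DXT5/BC3 stores each 4x4 pixel block in 16 bytes -> 1 byte per pixel, i.e. 4:1 vs RGBA8.
    const uint64_t dxt5Bytes = (w / 4) * (h / 4) * 16;

    // 128-bit-per-pixel source (e.g. 4x FP32 channels): 16 bytes per pixel.
    const uint64_t hdrBytes = pixels * 16;
    // 4-bit palette indices plus a 16-entry palette of 128-bit colours -> ~32:1 vs the 128-bit source.
    const uint64_t palettedBytes = pixels / 2 + 16 * 16;

    std::printf("RGBA8 raw:     %llu MiB\n", (unsigned long long)(rgba8Bytes >> 20));
    std::printf("DXT5/BC3:      %llu MiB (%.1f:1 vs RGBA8)\n",
                (unsigned long long)(dxt5Bytes >> 20), double(rgba8Bytes) / double(dxt5Bytes));
    std::printf("128bpp raw:    %llu MiB\n", (unsigned long long)(hdrBytes >> 20));
    std::printf("4bpp paletted: %llu MiB (%.1f:1 vs 128bpp)\n",
                (unsigned long long)(palettedBytes >> 20), double(hdrBytes) / double(palettedBytes));
    return 0;
}

Compiled and run, this prints roughly 1024 MiB raw RGBA8 versus 256 MiB for DXT5, and 4096 MiB raw 128bpp versus about 128 MiB for the paletted scheme.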

Plus, super high-quality textures aren't always needed and aren't always used anyway. When someone's camera gets close enough to a particular surface, the low-quality texture is swapped for a higher-quality one, most evident in Unreal-powered games; this conserves memory. (You get the side effect of pop-in, unfortunately, but done well it can be a non-issue.)
Plus, some textures are used multiple times in a single scene, so you don't need a hundred 16k textures for every rock on the ground loaded into memory, just the one rock texture, which can then be modified with shaders to look different. Instancing is another method to give some differentiation between copies of the same object, like grass, rocks, wood panels on a building, etc.

Besides, 3Dc+ should be the new baseline this generation, as all consoles and PC hardware support it, which also means more stuff is generally compressed.
On the flip side, we have finally moved past sub-1GB memory systems; we've got to use that new memory baseline for something, and bigger textures are obviously the first step.

 

Of course it's not sustainable, yet it has happened constantly throughout the DRAM market's history; heck (showing my age now), I remember it happening to EDO RAM.

DRAM manufacturers attempt to predict levels of demand in advance, and this doesn't always align with where the market is or is heading. For example, DRAM manufacturers boosted DDR3 production prior to Windows 8's launch in the anticipation that its memory requirements would increase. That didn't happen, and DRAM manufacturers the world over cried. (Windows 8's lack of high demand didn't help things either.)

Sometimes a new DRAM standard is launched and the companies switch their factories over to the new standard in order to capitalise on the initial high prices; because of that, more DRAM is produced than the market actually needs, flooding the marketplace and causing prices to drop.

In both cases, companies end up with massive piles of DRAM sitting around doing nothing, which means no revenue, and thus prices drop. This is why 8GB of DDR3 got as low as $30-$40 at one point and then quadrupled in price once manufacturing capacity decreased. (It's sitting at about $100 AUD now that the market has settled.)

GDDR5 production will continue to increase as the PlayStation 4 continues steaming ahead and as nVidia and AMD eventually release more high-volume (i.e. lower-end) graphics processors with GDDR5 memory. Then the market will eventually "pop" once nVidia and AMD shift to GDDR6 or some other memory standard and the low end shifts to cheaper DDR4. (Once economies of scale cheapen DDR4, that is.)

Again, this has happened constantly throughout history; DRAM is a volatile market.

maxima64 said:


Wasn't that his point all the time?

The fewer passes the better (of course, as long as the end result is almost the same)?

Even if it's true that there are still rendering passes for the nearby objects, would you still prefer to waste rendering passes on both the nearby objects and the ones that are distant from you and can barely be seen?


Frame time is the main thing you need to look at, not the number of passes.

Some hardware is better equipped to handle lots of small passes rather than one large one.
Thus, one large pass may take 14ms, whereas six smaller passes can take 2ms each for 12ms total; both fit in the ~16ms frame budget required for 60fps, and the latter gives you wiggle room to add more.
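
As a trivial illustration of that budget arithmetic (the pass counts and timings are the hypothetical numbers from the paragraph above, not measurements from any real engine), a minimal C++ sketch:

#include <cstdio>

int main() {
    const double frameBudgetMs = 1000.0 / 60.0;      // ~16.7 ms per frame at 60 fps

    const double oneBigPassMs   = 14.0;              // hypothetical single large pass
    const int    smallPassCount = 6;
    const double smallPassMs    = 2.0;               // hypothetical cost of each small pass
    const double manyPassesMs   = smallPassCount * smallPassMs;

    std::printf("Frame budget at 60 fps: %.1f ms\n", frameBudgetMs);
    std::printf("1 large pass   : %.1f ms, headroom %.1f ms\n",
                oneBigPassMs, frameBudgetMs - oneBigPassMs);
    std::printf("%d small passes: %.1f ms, headroom %.1f ms\n",
                smallPassCount, manyPassesMs, frameBudgetMs - manyPassesMs);
    return 0;
}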

Of course, some other hardware will handle fewer passes better, and some game engines simply require lots of passes out of necessity.

Again, there are more shades of grey than what is being painted with the black and white brush.



--::{PC Gaming Master Race}::--

Pemalite said:


I hope 4x anisotropic isn't all the PS4 pushes out; 8x should be the minimum and 16x should be the goal.
However, even at 16x you still have other techniques available to improve filtering quality. Just look at the anisotropic quality between nVidia and AMD: despite both having 16x options, they still offer differing levels of quality.

It's a big step in quality compared to the trilinear filtering of last-gen consoles ... Like I said, with 72 TMUs, 4x anisotropic filtering should be almost free on the PS4, so with a little more work on the developers' side (and it's their decision too) they should be able to net 8x anisotropic filtering pretty easily, and I wouldn't single out the X1 either; it should be able to do it as well. Don't worry too much about the consoles being bottlenecked by fixed-function units.

The reason why Nvidia was inferior in AF was that their hardwired algorithm was more angle-dependent than AMD's. BTW, there's an even higher-quality texture filtering scheme than 16x AF ...

Pemalite said:

DXT5 assumes a 4:1 compression ratio, and there are methods for higher levels of compression. You could, for instance, set up a 128-bit paletted texture using 4 bits per pixel in the index buffer, which would give you an impressive 32:1 compression ratio, something that is ironically possible even on the Xbox 360.

Plus, super high-quality textures aren't always needed and aren't always used anyway. When someone's camera gets close enough to a particular surface, the low-quality texture is swapped for a higher-quality one, most evident in Unreal-powered games; this conserves memory. (You get the side effect of pop-in, unfortunately, but done well it can be a non-issue.)
Plus, some textures are used multiple times in a single scene, so you don't need a hundred 16k textures for every rock on the ground loaded into memory, just the one rock texture, which can then be modified with shaders to look different. Instancing is another method to give some differentiation between copies of the same object, like grass, rocks, wood panels on a building, etc.

Besides, 3Dc+ should be the new baseline this generation, as all consoles and PC hardware support it, which also means more stuff is generally compressed.
On the flip side, we have finally moved past sub-1GB memory systems; we've got to use that new memory baseline for something, and bigger textures are obviously the first step.

It's not all about having higher compression ratios ... A low signal-to-noise ratio isn't very ideal for quality.

What you described in your second statement was mipmapping. 

Actually, 3DC+ does no better than BC1 or DXT5 in terms of compression ratio ... 

Pemalite said:

Of course it's not sustainable, yet it has happened constantly throughout the DRAM market's history; heck (showing my age now), I remember it happening to EDO RAM.

DRAM manufacturers attempt to predict levels of demand in advance, and this doesn't always align with where the market is or is heading. For example, DRAM manufacturers boosted DDR3 production prior to Windows 8's launch in the anticipation that its memory requirements would increase. That didn't happen, and DRAM manufacturers the world over cried. (Windows 8's lack of high demand didn't help things either.)

Sometimes a new DRAM standard is launched and the companies switch their factories over to the new standard in order to capitalise on the initial high prices; because of that, more DRAM is produced than the market actually needs, flooding the marketplace and causing prices to drop.

In both cases, companies end up with massive piles of DRAM sitting around doing nothing, which means no revenue, and thus prices drop. This is why 8GB of DDR3 got as low as $30-$40 at one point and then quadrupled in price once manufacturing capacity decreased. (It's sitting at about $100 AUD now that the market has settled.)

GDDR5 production will continue to increase as the PlayStation 4 continues steaming ahead and as nVidia and AMD eventually release more high-volume (i.e. lower-end) graphics processors with GDDR5 memory. Then the market will eventually "pop" once nVidia and AMD shift to GDDR6 or some other memory standard and the low end shifts to cheaper DDR4. (Once economies of scale cheapen DDR4, that is.)

Again, this has happened constantly throughout history; DRAM is a volatile market.

This can't just keep happening ... Sooner or later more and more manufacturers will have to close down, preventing a flood of DRAM in the market's future. There are only three big DRAM manufacturers left in the industry, Samsung, Micron, and SK Hynix, but there used to be another manufacturer, Elpida, and they were no more after the scenario you described ...

Flooding the market with goods isn't a good idea in the long term, since that just eliminates entire industries ...



fatslob-:O said:

It's a big step in quality compared to the trilinear filtering of last-gen consoles ... Like I said, with 72 TMUs, 4x anisotropic filtering should be almost free on the PS4, so with a little more work on the developers' side (and it's their decision too) they should be able to net 8x anisotropic filtering pretty easily, and I wouldn't single out the X1 either; it should be able to do it as well. Don't worry too much about the consoles being bottlenecked by fixed-function units.

The reason why Nvidia was inferior in AF was that their hardwired algorithm was more angle-dependent than AMD's. BTW, there's an even higher-quality texture filtering scheme than 16x AF ...


I am not arguing the fact that the PS4 and Xbox One should get almost-free AF, but you did just reaffirm my entire argument: that there are higher/better levels of filtering.

Actually, nVidia had the edge in filtering from the GeForce 6000 series right up until AMD launched their Radeon 6000 series (ironic, huh?).
In the Radeon 5000 series AMD had a bug in its filtering, which became an eyesore in some games; it was a pet peeve of mine when I had dual Radeon 5850s back then.
Prior to the GeForce 6000 series you had the GeForce FX, where nVidia pulled all sorts of crazy tricks in the drivers in order to achieve performance parity with ATI, including reducing filtering quality for a performance gain.
AMD did similar things once their edge started to slip against nVidia, with the Radeon X8xx and X19xx and, obviously, the 29xx series.


fatslob-:O said:

It's not all about having higher compression ratios ... A low signal-to-noise ratio isn't very ideal for quality.

What you described in your second statement was mipmapping. 

Actually, 3DC+ does no better than BC1 or DXT5 in terms of compression ratio ...

 

Exactly my point: it's not all about higher compression ratios, otherwise we would be sitting at 32:1 compression ratios as standard by now, as that is well and truly possible.

No, I wasn't describing Mip-mapping.

3Dc, and thus 3Dc+, is more or less an evolutionary step from DXT5. It's not supposed to compress to higher ratios; it's supposed to compress more formats, which results in less memory required overall.
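
On the mipmapping point: here is a rough sketch of the distinction being drawn, with all numbers and the residency policy made up for illustration. Mipmapping is the pre-built chain of downscaled images the sampler picks from every frame; streaming is the engine-level decision about which of those mip levels are resident in memory at all, which is what produces the swap (and the pop-in) described earlier.

#include <cstdio>
#include <cmath>
#include <algorithm>

// Hypothetical 4096x4096 texture with a full mip chain (13 levels, 0 = sharpest).
const int kMipCount = 13;

// Mipmapping: the hardware picks a mip level per pixel based on how much texture
// one screen pixel covers (approximated here by plain distance for simplicity).
int mipLevelForDistance(float distance) {
    int level = (int)std::floor(std::log2(std::max(distance, 1.0f)));
    return std::clamp(level, 0, kMipCount - 1);
}

// Streaming: the engine decides which mips are resident in memory at all.
// Distant objects only keep the low-resolution tail; the sharp mips are loaded
// from disk when the camera comes close (that load is where pop-in comes from).
int finestResidentMip(float distance) {
    return (distance < 8.0f) ? 0 : 4;    // hypothetical residency policy
}

int main() {
    for (float d : {1.0f, 4.0f, 16.0f, 64.0f}) {
        int wanted   = mipLevelForDistance(d);
        int resident = finestResidentMip(d);
        std::printf("distance %5.1f: sampler wants mip %2d, finest resident mip %d -> uses mip %d\n",
                    d, wanted, resident, std::max(wanted, resident));
    }
    return 0;
}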

fatslob-:O said:

 

This can't just keep happening ... Sooner or later more and more manufacturers will have to close down, preventing a flood of DRAM in the market's future. There are only three big DRAM manufacturers left in the industry, Samsung, Micron, and SK Hynix, but there used to be another manufacturer, Elpida, and they were no more after the scenario you described ...

Flooding the market with goods isn't a good idea in the long term, since that just eliminates entire industries ...

Of course it can keep happening.
Just as there are times when DRAM isn't profitable, there are other times when it's stupidly profitable, which helps even things out. DRAM manufacturers play for the long haul and try to capitalise on market swings (for example: LPDDR2/3 and DDR4).

Over the past few years, though, there has been consolidation, which means there is less competition, but it also means more volatility when something goes wrong (e.g. a factory fire).



--::{PC Gaming Master Race}::--

maxima64 said:


Wasn't that his point all the time?

The fewer passes the better (of course, as long as the end result is almost the same)?

Even if it's true that there are still rendering passes for the nearby objects, would you still prefer to waste rendering passes on both the nearby objects and the ones that are distant from you and can barely be seen?

 

The whole point is just to reduce the rendering passes as much as possible, as long as the result is equal or almost equal to the approach with more passes. Why put more stress on the GPU when it's not necessary?

 

Seriously, that guy at least provides examples and reliable sources. What have you done besides randomly throwing words around without anything to support them?

I'm not even sure what your purpose is or what conclusion you want to reach. At least provide sources to prove your point, not just say whatever you want.

If you want links then

http://c0de517e.blogspot.co.nz/2014/09/notes-on-real-time-renderers.html

http://c0de517e.blogspot.co.nz/2011/01/mythbuster-deferred-rendering.html

http://c0de517e.blogspot.co.nz/2008/04/how-gpu-works-part-2.html

http://dice.se/publications/directx-11-rendering-in-battlefield-3/

http://dice.se/publications/bending-the-graphics-pipeline/

http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2012/10/Deferred%20Shading%20Optimizations.pps

http://www2.disney.co.uk/cms_res/blackrockstudio/pdf/Rendering_Techniques_in_SplitSecond.pdf

http://www.slideshare.net/TiagoAlexSousa/secrets-of-cryengine-3-graphics-technology

Have fun



@TheVoxelman on twitter

Check out my hype threads: Cyberpunk, and The Witcher 3!

zarx said:
maxima64 said:


Wasn't that his point all the time?

The fewer passes the better (of course, as long as the end result is almost the same)?

Even if it's true that there are still rendering passes for the nearby objects, would you still prefer to waste rendering passes on both the nearby objects and the ones that are distant from you and can barely be seen?

 

The whole point is just to reduce the rendering passes as much as possible, as long as the result is equal or almost equal to the approach with more passes. Why put more stress on the GPU when it's not necessary?

 

Seriously, that guy at least provides examples and reliable sources. What have you done besides randomly throwing words around without anything to support them?

I'm not even sure what your purpose is or what conclusion you want to reach. At least provide sources to prove your point, not just say whatever you want.

If you want links then

http://c0de517e.blogspot.co.nz/2014/09/notes-on-real-time-renderers.html

http://c0de517e.blogspot.co.nz/2011/01/mythbuster-deferred-rendering.html

http://c0de517e.blogspot.co.nz/2008/04/how-gpu-works-part-2.html

http://dice.se/publications/directx-11-rendering-in-battlefield-3/

http://dice.se/publications/bending-the-graphics-pipeline/

http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2012/10/Deferred%20Shading%20Optimizations.pps

http://www2.disney.co.uk/cms_res/blackrockstudio/pdf/Rendering_Techniques_in_SplitSecond.pdf

http://www.slideshare.net/TiagoAlexSousa/secrets-of-cryengine-3-graphics-technology

Have fun

It would also be nice to copy and paste what you want me to read. From what I read, there are no lies from megafenix; those topics increase his credibility. Take, for example, your first link:

 

1.-http://c0de517e.blogspot.co.nz/2014/09/notes-on-real-time-renderers.html

"

Deferred shading
Geometry pass renders a buffer of material attributes (and other proprieties needed for lighting but bound to geometry, e.g. lightmaps, vertex-baked lighting...). Lighting is computed in screenspace either by rendering volumes (stencil) or by using tiling. Multiple shading equations need either to be handled via branches in the lighting shaders, or via multiple passes per light.

  • Benefits
    • Decouples texturing from lighting. Executes only texturing on geometry so it suffers less from partial quads, overdraw. Also, potentially can be faster on complex shaders (as discussed in the forward rendering issues).
    • Allows volumetric or multipass decals (and special effects) on the GBuffer (without computing the lighting twice).
    • Allows full-screen material passes like analytic geometric specular antialiasing (pre-filtering), which really works only done on the GBuffer, in forward it fails on all hard edges (split normals), and screen-space subsurface scattering.
    • Less draw calls, less shader permutations, one or few lighting shaders that can be hand-optimized well.
  • Issues
    • Uses more memory and bandwidth. Might be slower due to more memory communication needed, especially on areas with simple lighting.
    • Doesn't handle transparencies easily. If a tiled or clustered deferred is used, the light information can be passed to a forward+ pass for transparencies.

"

 

or how about your second link

2.-http://c0de517e.blogspot.co.nz/2011/01/mythbuster-deferred-rendering.html

"

1) Is deferred good?
Yes, deferred is great. Indeed, you should always think about it. If for "deferred" we mean doing the right computations in the right space... You see, deferred shading is "just" an application of a very general "technique". We routinely take these kind of decisions, and we should always be aware of all our options. 
Do we do a separable, two-pass blur or a single pass one? Do we compute shadows on the objects or splat them in screen-space? What do I pass through vertices, and what through textures?
We always choose where to split our computation in multiple passes, and in which space to express the computation and its input parameters. That is fundamental!

Deferred shading is just the application of this technique to a specific problem: what we do if we have many analytic, lights in a dynamic scene? With traditional "forward" rendering the lights are constant inputs to the material shader, and that creates a problem when you don't know which lights will land on which shader. You have to start to create permutations, generate the same shader with support of different number of lights, then at runtime see how many lights influence a given object and assign the right shader variant... All this can be complicated, so people started thinking that maybe having lights as shader constants was not really the best solution.

 

Let's say on the other hand that in your forward renderer you really hated to create multiple shaders to support different numbers of lights per object. So you went all multipass instead. First you render all your objects with ambient lighting only, then for each extra light you render the object with additive blending feeding as input that light.

It works fine but each and every time you're executing the vertex shader again, and computing the texture blending to multiply your light with the albedo. As you add more textures, things really become slow. So what? Well, maybe you could write the albedo out to a buffer and avoid computing it so many times. Hey! Maybe I could write all the material attributes out, normals and specular. Cool. But now really I don't need the original geometry at all, I can use the depth buffer to get the position I'm shading, and draw light volumes instead. Here it comes, the standard deferred rendering approach!

So yes, you should think deferred. And make your own version, to suit your needs!
2) Deferred is the only way to deal with many lights.
Well if you've read what I wrote above you already know the answer. No :)
Actually I'll go further than that and say that nowadays that the technique has "cooled down" there is no reason for anyone to be implementing pure deferred renderers. And if you're doing deferred chances are that you have a multipass forward technique as well, just to handle alpha. Isn't that foolish? You should at the very least leverage it on objects that are hit by a single light!
And depending on your game multipass on everything can be an option, or generating all the shader permutations, or doing a hybrid of the two, or of the three (with deferred thrown in too). Or you might want to defer only some attributes and not others, work in different spaces...

"

 

From what we can read in your links, they only support what megafenix has been saying from the beginning. With deferred rendering you avoid over-computation and put less stress on the GPU, but it requires more memory bandwidth. It's true that the technique also has issues, like MSAA and alpha blending, but there are solutions for those, such as FXAA and using forward rendering only where transparency is required.

 

I don't see what's so difficult to understand. The point is that it's better to look for solutions that stress the GPU as little as possible, so that the saved resources can be put to other good work. If there is a way to reduce the number of passes and the number of shaders used for a job, and you come up with an idea to do it with results similar or equal to the first approach, then obviously you should use that solution. That's what optimization is about: using the resources wisely. And since the Wii U's strength lies in its memory bandwidth and it lacks horsepower compared to the other new consoles, wouldn't it be better to put that bandwidth to good use in a way that reduces the stress on the GPU by using fewer shaders for a job?
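
As a back-of-the-envelope way to see where that saving comes from (every count below is hypothetical; this is a toy cost model, not a benchmark of the Wii U or any engine), here is a C++ sketch comparing multipass forward lighting, which re-submits and re-shades geometry once per light, against a deferred pass that touches geometry once and then shades each light in screen space:

#include <cstdio>

int main() {
    // Hypothetical scene: these numbers are made up for illustration.
    const long long objects         = 500;
    const long long trianglesPerObj = 2000;
    const long long lightsPerObject = 4;        // average lights touching each object
    const long long screenPixels    = 1280LL * 720LL;

    // Multipass forward: the geometry is re-submitted and re-shaded once per light,
    // and every covered pixel is shaded once per light (overdraw ignored).
    const long long forwardGeomWork  = objects * trianglesPerObj * lightsPerObject;
    const long long forwardPixelWork = screenPixels * lightsPerObject;

    // Deferred: geometry is rasterised once into the G-buffer, then each light
    // only reads the G-buffer for the pixels it actually covers.
    const long long lights          = 64;
    const long long pixelsPerLight  = 20000;    // assumed average screen coverage of one light
    const long long deferredGeomWork  = objects * trianglesPerObj;
    const long long deferredPixelWork = screenPixels /* G-buffer fill */ + lights * pixelsPerLight;

    std::printf("Forward multipass: %lld triangle-shades, %lld pixel-shades\n",
                forwardGeomWork, forwardPixelWork);
    std::printf("Deferred:          %lld triangle-shades, %lld pixel-shades\n",
                deferredGeomWork, deferredPixelWork);
    return 0;
}

With these made-up numbers the deferred path does a quarter of the geometry work and roughly half the pixel work, at the cost of the G-buffer's memory and bandwidth footprint, which is exactly the trade-off the quoted articles describe.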

 

To Pemalite

There is some truth to what you say. Nevertheless, the advantages of deferred versus forward rendering are already known and proven, and it's not just about how many passes but also how many shaders you are using. If many shaders are busy with one job, they cannot be used for other stuff until they are released from their duties; with deferred there are fewer shaders in use, so obviously you can put the available shaders to work on other things. Of course, if a solution uses fewer passes but still takes the same time as another solution with more passes, there would be almost no benefit from it, so you have to take that into account to see whether it's worth it. But even if they take the same time, that doesn't mean they have the same memory or shader power requirements, so we must account for that too.

 

to Hynad

It's clear you have not been following the thread. To prove a point you at least need backup, and the sources and explanations megafenix provided are clear. Sure, he mistook the framebuffers in Shinen's comment for G-buffers, but at least he admitted it, and the way Shinen worded the comment, without checking other people's replies, could have been misread by anyone, whereas others here won't admit their mistakes even when the proof is right in front of their noses.

I actually like the thread. Even if the G-buffers turned out to be framebuffers, knowing that Shinen is able to fit triple-buffered 720p in the eDRAM, plus a G-buffer for the deferred rendering (in the tweet they confirm the use of this technique, and likely the G-buffer is also being used for post-processing effects) and intermediate buffers, kind of proves that the Wii U eDRAM bandwidth could indeed be 500GB/s or more. Even the Xbox One ESRAM was running short for 900p (it could be double buffering; they don't mention whether it's one, two or three buffers, so most likely it's two, which is almost the same as three buffers at 720p) plus the G-buffer, while the Wii U is using triple 720p buffers, a G-buffer for deferred rendering and post-processing effects, and many intermediate buffers. That's a great deal, since it would be impossible to do with less bandwidth than the Xbox One ESRAM has, and ESRAM also has advantages like no refreshing, which gives it better performance than eDRAM.
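
For reference on the buffer sizes being argued about, here is a quick C++ sketch of how much on-chip memory 720p triple buffering plus a G-buffer would occupy versus the same layout at 900p. The four-RGBA8-target G-buffer layout is an assumption (Shinen have not published theirs), and the 32MB figure is the commonly cited capacity of both the Wii U eDRAM and the Xbox One ESRAM.

#include <cstdio>

// Footprint of one 32-bit-per-pixel render target (RGBA8 or depth24/stencil8) in MiB.
double targetMiB(int width, int height, int bytesPerPixel) {
    return double(width) * double(height) * bytesPerPixel / (1024.0 * 1024.0);
}

void report(const char* label, int w, int h) {
    const double backBuffers = 3 * targetMiB(w, h, 4);   // triple-buffered colour
    const double gbuffer     = 4 * targetMiB(w, h, 4);   // assumed 4 x RGBA8 G-buffer targets
    const double depth       = 1 * targetMiB(w, h, 4);   // 24-bit depth + 8-bit stencil
    std::printf("%s: back buffers %.1f MiB + G-buffer %.1f MiB + depth %.1f MiB = %.1f MiB (of 32 MiB on-chip)\n",
                label, backBuffers, gbuffer, depth, backBuffers + gbuffer + depth);
}

int main() {
    report("720p", 1280, 720);
    report("900p", 1600, 900);
    return 0;
}

With those assumptions the 720p layout comes to roughly 28 MiB, while the same layout at 900p exceeds 32 MiB, which is the kind of squeeze being described above.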



Pemalite said:


I am not arguing the fact that the PS4 and Xbox One should get almost-free AF, but you did just reaffirm my entire argument: that there are higher/better levels of filtering.

Actually, nVidia had the edge in filtering from the GeForce 6000 series right up until AMD launched their Radeon 6000 series (ironic, huh?).
In the Radeon 5000 series AMD had a bug in its filtering, which became an eyesore in some games; it was a pet peeve of mine when I had dual Radeon 5850s back then.
Prior to the GeForce 6000 series you had the GeForce FX, where nVidia pulled all sorts of crazy tricks in the drivers in order to achieve performance parity with ATI, including reducing filtering quality for a performance gain.
AMD did similar things once their edge started to slip against nVidia, with the Radeon X8xx and X19xx and, obviously, the 29xx series.

There are higher levels of texture filtering, but I'm only trying to put to rest some of your concerns about the consoles not being able to deliver high-quality texture filtering.

No, Evergreen DID have better texture filtering than Nvidia, even by the time Fermi launched ... If there was any bug, it was in AMD's drivers, because after all, those "optimizations" only serve as downgrades in their drivers.

Pemalite said:


Exactly my point: it's not all about higher compression ratios, otherwise we would be sitting at 32:1 compression ratios as standard by now, as that is well and truly possible.

No, I wasn't describing Mip-mapping.

3Dc, and thus 3Dc+, is more or less an evolutionary step from DXT5. It's not supposed to compress to higher ratios; it's supposed to compress more formats, which results in less memory required overall.

It's not all about having insane texture resolutions either ... 

"when someones camera gets close-enough to a particular surface, the low quality texture is swapped for a higher quality one" Sounds a lot like "mipmapping" to me when those transitions can be described by having collections of different resolution textures. 

Older texture compression formats on last gen consoles already handled a lot of the common surface formats. The only thing that has really improved is the quality of the compression ...

Pemalite said:


Of course it can keep happening.
Just as there are times when DRAM isn't profitable, there are other times when it's stupidly profitable, which helps even things out. DRAM manufacturers play for the long haul and try to capitalise on market swings (for example: LPDDR2/3 and DDR4).

Over the past few years, though, there has been consolidation, which means there is less competition, but it also means more volatility when something goes wrong (e.g. a factory fire).

Hence why these things need to stop; otherwise, the chances of prices coming down on goods become less and less likely under those conditions. FYI, there aren't a whole lot of proposals for DDR5, and DDR4 is here to stay for a while, so there are now fewer chances to capitalize on new memory standards ...



maxima64 said:

to Hynad

It's clear you have not been following the thread. To prove a point you at least need backup, and the sources and explanations megafenix provided are clear. Sure, he mistook the framebuffers in Shinen's comment for G-buffers, but at least he admitted it, and the way Shinen worded the comment, without checking other people's replies, could have been misread by anyone, whereas others here won't admit their mistakes even when the proof is right in front of their noses.

I actually like the thread. Even if the G-buffers turned out to be framebuffers, knowing that Shinen is able to fit triple-buffered 720p in the eDRAM, plus a G-buffer for the deferred rendering (in the tweet they confirm the use of this technique, and likely the G-buffer is also being used for post-processing effects) and intermediate buffers, kind of proves that the Wii U eDRAM bandwidth could indeed be 500GB/s or more. Even the Xbox One ESRAM was running short for 900p (it could be double buffering; they don't mention whether it's one, two or three buffers, so most likely it's two, which is almost the same as three buffers at 720p) plus the G-buffer, while the Wii U is using triple 720p buffers, a G-buffer for deferred rendering and post-processing effects, and many intermediate buffers. That's a great deal, since it would be impossible to do with less bandwidth than the Xbox One ESRAM has, and ESRAM also has advantages like no refreshing, which gives it better performance than eDRAM.

I have read the whole thread. Thank you very much.

That being said, I would say "it is clear you don't know Megafenix's history on VGC"... But something tells me you know him very, very well.

 

 

EDIT, after your ban: Ha ha. It's quite sad (yet very much hilarious) that you need to act as someone else to try to have someone support you in this thread.  



Hynad said:

I have read the whole thread. Thank you very much.

That being said, I would say "it is clear you don't know Megafenix's history on VGC"... But something tells me you know him very, very well.

That was OP ... 



fatslob-:O said:
Hynad said:

I have read the whole thread. Thank you very much.

That being said, I would say "it is clear you don't know Megafenix's history on VGC"... But something tells me you know him very, very well.

That was OP ... 

Re-read my comment again. I make it rather clear that I already knew.

I'm the one who told Khan.



Hynad said:

Re-read my comment again. I make it rather clear that I already knew.

I'm the one who told Khan.

It looks like I didn't catch the edit ... 

This thread just turned into a comedy of pure gold. HAHAHA