Staude said: "Not only am I not wrong, but it's not smart to rely on one simple article when there are so many."
Now these are hard facts. They can't be argued with. It's up to you if you continue believing your so-called evidence.
Now the 360’s GPU is one impressive piece of work, and I’ll say from the get-go that it’s much more advanced than the PS3’s GPU, so I’m not sure where to begin, but I’ll start with what Microsoft said about it. Microsoft said Xenos was clocked at 500MHz and that it had 48-way parallel floating-point dynamically-scheduled shader pipelines (48 unified shader units, or pipelines), along with a polygon performance of 500 million triangles a second.
Before going any further I’ll clarify this 500 million triangles a second claim. Can the 360’s GPU actually achieve this? Yes it can, BUT there would be no pixels or color at all. It’s the triangle setup rate for the GPU, and it isn’t surprising that it has such a high setup rate, since it has 48 shader units capable of performing vertex operations, whereas all other released GPUs can only dedicate 8 shader units to vertex operations. The PS3 GPU’s triangle setup rate at 550MHz is 275 million a second, and if it’s clocked at 500MHz it will be 250 million a second. This is just the setup rate, so do NOT expect to see games with such an excessive number of polygons, because it won’t happen.
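For anyone who wants to check those setup-rate figures: they all follow if you assume Xenos sets up one triangle per clock and RSX one triangle every two clocks, which is consistent with every number quoted above (the per-clock rates are my inference from the figures, not an official spec):

```python
# Back-of-the-envelope check of the triangle setup rates quoted above.
# Assumption: Xenos sets up 1 triangle/clock, RSX 1 triangle per 2 clocks.
def setup_rate(clock_hz, clocks_per_triangle):
    return clock_hz / clocks_per_triangle

print(setup_rate(500e6, 1))  # Xenos @ 500MHz -> 500 million triangles/s
print(setup_rate(550e6, 2))  # RSX @ 550MHz -> 275 million triangles/s
print(setup_rate(500e6, 2))  # RSX @ 500MHz -> 250 million triangles/s
```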
Microsoft also says it can achieve a pixel fillrate of 16 gigasamples per second. The GPU inside the Xbox 360 is essentially an early ATI R600, which, when released by ATI for the PC, will be a DirectX 10 GPU. Xenos manages to meet many of the requirements that would qualify it as a DirectX 10 GPU, but falls short of the requirements in others. What I found interesting was that back in 2005 Microsoft said the 360’s GPU could perform 48 billion shader operations per second. However, Bob Feldstein, VP of engineering for ATI, made it very clear that the 360’s GPU can perform 2 of those shader operations per cycle, so the 360’s GPU is actually capable of 96 billion shader operations per second.
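Here’s one way to reconcile Microsoft’s 48 billion with ATI’s correction (my reading of the quoted figures, not an official breakdown): 48 pipes at 500MHz issuing 2 ops per cycle gives Microsoft’s number, and Feldstein’s "2 per cycle" statement doubles it:

```python
# Reconciling the two quoted shader-op figures (interpretation, not spec).
pipes, clock = 48, 500e6                   # 48 unified pipes at 500MHz
ms_figure = pipes * clock * 2              # 48 billion ops/s, Microsoft's 2005 figure
ati_figure = ms_figure * 2                 # 96 billion ops/s per ATI's clarification
print(ms_figure / 1e9, ati_figure / 1e9)   # 48.0 96.0
```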
To quote ATI on the 360’s GPU:
(Did anyone notice that each shader unit on the 360’s GPU doesn’t perform as many ops per pipe as the RSX? The 360 GPU makes up for it with a superior architecture: many more pipes that operate more efficiently, along with more bandwidth.)
Did Microsoft just make a mistake, or did they purposely misrepresent their GPU to lead Sony on? The 360’s GPU is revolutionary in the sense that it’s the first GPU to use a unified shader architecture. According to developers this is as big a change as when the vertex shader was first introduced, and even then the vertex shader was merely an add-on, not a major change like this. The 360’s GPU also has a daughter die right there on the chip containing 10MB of EDRAM. This EDRAM has a framebuffer bandwidth of 256GB/s, which is more than 5 times what the RSX or any PC GPU has for its framebuffer (even higher than the G80’s).
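To see why a unified pool matters, here’s a toy model (illustrative only, not the actual Xenos scheduler, and the 8/24 split for the fixed-function case is a hypothetical example): with dedicated pools, whichever side the workload doesn’t stress sits idle, while a unified pool can throw all 48 units at whatever work exists:

```python
# Toy model of unified vs. split shader pools -- illustrative only.
def split_throughput(vertex_load, pixel_load, v_pipes=8, p_pipes=24):
    # Fixed pools: each workload is capped by its own pipe count.
    return min(vertex_load, v_pipes) + min(pixel_load, p_pipes)

def unified_throughput(vertex_load, pixel_load, pipes=48):
    # Unified pool: any pipe can take either kind of work.
    return min(vertex_load + pixel_load, pipes)

# A vertex-heavy frame strands pixel pipes in the split design.
print(split_throughput(40, 8))    # 16 units' worth of work per cycle
print(unified_throughput(40, 8))  # 48 units' worth of work per cycle
```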
Thanks to the efficiency of the 360 GPU’s unified shader architecture and this 10MB of EDRAM, the GPU is able to achieve 4XFSAA at no performance cost. ATI and Microsoft’s goal was to eliminate memory bandwidth as a bottleneck, and they seem to have succeeded. Any PC gamers out there will have noticed that performance drops when they turn on features such as AA or HDR; that’s because those features eat bandwidth, so the efficiency of the GPU’s operation decreases as they are enabled. On the 360, HDR and 4XAA together are like nothing to the GPU with proper use of the EDRAM. The EDRAM daughter die contains a 3D logic unit with 192 floating-point processors inside. That logic unit can exchange data with the 10MB of RAM at 2 terabits a second. Things such as antialiasing, computing Z depths, or occlusion culling can happen on the EDRAM without impacting the GPU’s workload.
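Incidentally, those two bandwidth figures agree with each other: the 2 terabits a second between the logic unit and the 10MB of RAM is just the 256GB/s framebuffer bandwidth expressed in bits (my observation, linking the source’s own numbers):

```python
# Consistency check: 256 GB/s expressed in terabits per second.
edram_bytes_per_s = 256e9
edram_bits_per_s = edram_bytes_per_s * 8
print(edram_bits_per_s / 1e12)  # ~2.05 Tb/s -- the "2 terabits" figure
```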
Xenos writes its framebuffer to this EDRAM, and it’s connected to it via a 32GB/s link (this number is extremely close to the theoretical peak because the EDRAM sits right there on the 360 GPU’s daughter die). Don’t forget the EDRAM has an internal bandwidth of 256GB/s, and by dividing that 256GB/s by the 32GB/s of the Xenos-to-EDRAM link we find that Xenos can multiply its effective framebuffer bandwidth by a factor of 8 when processing pixels that make use of the EDRAM, which includes HDR, AA, and other things. This leads to a maximum of 32*8 = 256GB/s, which, to say the least, is a very effective way of dealing with bandwidth-intensive tasks.
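Spelled out, the arithmetic in that paragraph uses only the source’s own numbers:

```python
bus_gb_s = 32                  # Xenos -> EDRAM daughter-die link
edram_gb_s = 256               # bandwidth inside the EDRAM die
multiplier = edram_gb_s / bus_gb_s
print(multiplier)              # 8.0
print(bus_gb_s * multiplier)   # 256 GB/s effective, as quoted
```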
In order for this to be possible, developers need to set up their rendering engine to take advantage of both the EDRAM and the available onboard 3D logic. If anyone is confused about why the 32GB/s is being multiplied by 8, it’s because once data travels over the 32GB/s bus it can be worked on 8 times over by the EDRAM logic against the EDRAM memory at a rate of 256GB/s, so for every 32GB/s sent over, 256GB/s gets processed. This leaves RSX at a bandwidth disadvantage compared to Xenos. Needless to say, the 360 not only has an overabundance of video memory bandwidth, it also has impressive memory-saving features. For example, 720p with 4XFSAA on a traditional architecture would require 28MB of memory; on the 360, only 16MB is required. There are also features in the 360’s Direct3D API that let developers fit two 128x128 textures into the space normally required for one. So even with all the memory and all the memory bandwidth, they are still very mindful of how it’s used.
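The 28MB figure checks out if you assume 32-bit color plus a 32-bit Z/stencil value per sample, which was typical at the time (the exact formats behind the post’s numbers aren’t stated, so this is a plausible reconstruction; the 360’s 16MB figure comes from its memory-saving tricks and isn’t derived here):

```python
# Rough check of "28MB for 720p with 4xFSAA" on a traditional GPU.
# Assumption: 4 bytes color + 4 bytes Z/stencil per sample.
w, h, samples, bytes_per_sample = 1280, 720, 4, 4 + 4
buffer_bytes = w * h * samples * bytes_per_sample
print(buffer_bytes / 2**20)  # ~28.1 MiB
```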