| theword said: @haxxiy My original post is not meant to say which console GPU is better; rather, it is just a projection of what we could be looking at in the future. But I will try to address your misunderstanding on the technical side of things below. Now, when I said the 360 GPU was more powerful than the PS3 GPU, it would take at least a 20-page essay to point out the technical intricacies of why that is so. The PS3 as a whole, with the aid of the 7 CELL cores, can on paper do nearly twice the 360's floating-point calculations. But we are talking about just the GPU alone (CELL excluded). The 360 GPU, despite being released 1 year earlier, was actually one generation ahead of the PS3 GPU. Why? Microsoft had asked ATI to spend two years designing a unique, from-the-ground-up GPU for the 360, with many novel features for its time. The result was the first unified-shader GPU on the market. The benefit of unified shaders is that they are more efficient than traditional separate shaders (up to 30% more so, according to some experts). Another unique feature that greatly enhanced performance was the 10MB of smart memory logic that does 4xAA and other special effects at nearly no loss in performance. The 256GB/s bandwidth associated with this memory is worth mentioning. Because of this you will see some people claim that the 360 GPU has an effective bandwidth of 278GB/sec. The importance of this extra bandwidth and the smart memory logic on the 360 GPU cannot be overstated. It is this design decision that helped make nearly every multiplatform game look smoother around the edges on the 360, while PS3 programmers have to implement all kinds of tricks, including reducing resolution, to get any kind of acceptable AA performance. For whatever reason, not until very late in the system's design did Sony commission NVIDIA to design a GPU for them. At that time, Sony did not think the Blu-ray issue would delay the PS3 by a year. So effectively NVIDIA had less than a year to design the PS3 GPU: not enough time to do anything revolutionary, so they just took a GeForce 7800 GTX and molded it into something that would work with the CELL. The lack of design time (ATI had nearly two years to do a from-the-ground-up design) severely limited the potential of the PS3 GPU, which felt like an afterthought rather than an integral part of the whole. |
Whoa man, I was just posting theoretical fill rate figures to compare the differences between this gen and last gen, no intent of going 'lolz, look how PS3 is b3tter' or shit.
Yes, the EDRAM was really a smart move from Microsoft which enabled 4xAA at very little performance cost. For the RSX to do the same it would need the SPEs to pre-cull polygons etc., something far harder to do. Unified shaders were a nice move which made the Xenos kind of a prototype for the R600 cards, while the RSX still used the GeForce 7 architecture. Still, the RSX has a bigger theoretical fill rate and more operations per cycle than the Xenos:
Xenos: 48 dynamically scheduled pipelines, each vector4 MADD + scalar = 240 ALUs (shader ops) per clock cycle, or 480 shader FLOPS
RSX: 8 vertex pipelines (vector + scalar) | 24 parallel pixel-shader pipelines, each 2 vector4 + 2 scalar + 1 texture = 264 + 16 = 280 ALUs (shader ops) per cycle, or 520 shader FLOPS
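Just to show where those per-clock tallies lead once clock speed is factored in, here's a rough sketch in Python. The counting convention (vec4 = 4 ops, scalar = 1 op, MADD = 2 FLOPs) and the ~500MHz / ~550MHz core clocks are the commonly quoted figures and my own reading of the numbers above, not an official spec sheet:

```python
# Turning the per-clock tallies above into peak figures, using the widely
# quoted core clocks (Xenos ~500MHz, RSX ~550MHz). The counting convention
# here (vec4 = 4 ops, scalar = 1 op, MADD = 2 FLOPs) is just my reading of
# the numbers above, not an official spec sheet.

def ops_per_clock(pipes: int, vec4_issues: int, scalar_issues: int, extra: int = 0) -> int:
    """Shader ops issued per clock by one group of pipelines."""
    return pipes * (4 * vec4_issues + scalar_issues + extra)

# Xenos: 48 unified pipelines, each issuing one vec4 MADD + one scalar per clock.
xenos_ops = ops_per_clock(48, 1, 1)               # 48 * 5 = 240 ops/clock
xenos_gflops = xenos_ops * 2 * 500e6 / 1e9        # MADD = 2 FLOPs -> ~240 GFLOPS

# RSX pixel side as tallied above: 24 * (2 vec4 + 2 scalar + 1 texture) = 264,
# plus the 16 vertex-side ops from the same tally -> 280 ops/clock.
rsx_ops = ops_per_clock(24, 2, 2, extra=1) + 16   # 264 + 16 = 280 ops/clock
rsx_gflops = 520 * 550e6 / 1e9                    # 520 FLOPs/clock -> ~286 GFLOPS

print(f"Xenos: {xenos_ops} ops/clock, ~{xenos_gflops:.0f} GFLOPS")
print(f"RSX:   {rsx_ops} ops/clock, ~{rsx_gflops:.0f} GFLOPS")
```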
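And on the 278GB/sec "effective bandwidth" in the quoted post: that figure usually comes from simply adding the 360's 22.4GB/s GDDR3 main-memory bandwidth to the 256GB/s internal path between the eDRAM and its logic. Quick sketch of that addition below; the 22.4GB/s figure and the straight summation are just my read on how the number was derived, and whether adding two separate buses like that is meaningful at all is its own debate:

```python
# Likely origin of the ~278GB/s "effective bandwidth" claim for the 360:
# straight addition of two separate paths (my assumption about the derivation,
# and a debatable way to count bandwidth in the first place).

gddr3_main_memory_gbs = 22.4   # 360's 128-bit GDDR3 main memory (widely quoted figure)
edram_internal_gbs = 256.0     # path between the eDRAM and its ROP logic

effective_gbs = gddr3_main_memory_gbs + edram_internal_gbs
print(f"~{effective_gbs:.1f} GB/s")   # ~278.4 GB/s
```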







