Pemalite said:
eyeofcore said:

All three examples above say otherwise, so how in the hell do just 160/200 shaders at 550 MHz, i.e. a GPU with 176-220 GFLOPS, beat a GPU that is 240 GFLOPS? (The arithmetic behind those figures is sketched below the quote.) Look, you need to set aside your bias if you consider yourself a professional and smart, because your reaction points to the opposite of that in a nanosecond.

40nm is old, but it is not ancient as you claim, and it is still widely used in the industry; if it were truly ancient it would not be used at all. >_>
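A quick Python sketch of where those GFLOPS figures come from, assuming the usual peak-throughput formula (ALU count x 2 FLOPs per clock for a multiply-add x clock speed). The 240-ALU / 500 MHz line is the commonly cited Xbox 360 Xenos configuration, included only to show where the 240 GFLOPS figure is usually derived from; it is not stated in the quote itself.

# Theoretical peak GFLOPS = ALUs * 2 FLOPs per clock (multiply-add) * clock in GHz.
def peak_gflops(alus: int, clock_ghz: float, flops_per_clock: int = 2) -> float:
    return alus * flops_per_clock * clock_ghz

for label, alus, clock in [
    ("Wii U GPU, 160-shader estimate", 160, 0.550),
    ("Wii U GPU, 200-shader estimate", 200, 0.550),
    ("Xbox 360 Xenos (commonly cited)", 240, 0.500),
]:
    print(f"{label}: {peak_gflops(alus, clock):.0f} GFLOPS")
# Prints 176, 220 and 240 GFLOPS respectively.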


Even if the Wii U GPU's gigaflop figure were lower than the Xbox 360's or PlayStation 3's, it would still be faster; gigaflops are a pointless metric to use when comparing different architectures.

Basically, the Wii U's GPU can do *more* per gigaflop thanks to technologies such as better compression algorithms, better culling, more powerful geometry engines, you name it.
If a GPU only had to deal with floating-point math, then gigaflops would be an accurate metric for gauging performance, but that's not reality.
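To make the "more per gigaflop" point concrete, here is a toy Python model: delivered throughput as peak GFLOPS scaled by an architectural efficiency factor. The efficiency values are purely hypothetical placeholders for illustration, not measured numbers for either console.

# Toy model: delivered work = peak GFLOPS * architectural efficiency.
# The efficiency values below are hypothetical placeholders only.
def delivered_gflops(peak: float, efficiency: float) -> float:
    return peak * efficiency

older_gpu = delivered_gflops(240.0, 0.50)  # higher peak, less efficient per flop (hypothetical)
newer_gpu = delivered_gflops(176.0, 0.75)  # lower peak, better culling/compression (hypothetical)
print(older_gpu, newer_gpu)  # 120.0 vs 132.0 -> the GPU that is "slower on paper" comes out ahead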

As for 40nm vs 28nm, well, the laws of physics apply here: 28nm is pretty much superior.
With that said, 40nm is also stupidly cheap and very, very mature.
You can actually optimise the type of transistor for a given fabrication process in order to minimise leakage, which means you can pack more transistors into the same space without dropping down a lithographic node.

For example, the Radeon R9 290X has 43% more transistors than the Radeon HD 7970, yet its die size is only about 20% larger on the same fabrication process.
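Working only from the two ratios given in the post (43% more transistors, ~20% larger die), a quick Python check of the implied density gain:

# Implied transistor-density gain, using only the ratios stated above.
transistor_ratio = 1.43  # 43% more transistors
die_area_ratio = 1.20    # ~20% larger die
density_gain = transistor_ratio / die_area_ratio
print(f"~{(density_gain - 1) * 100:.0f}% more transistors per mm^2")  # ~19% denser on the same node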

The Wii U may be superior due to the fact that it has more memory. BTW, the Radeon HD 5000 series tessellators were awful. (Ugh, how painful that was.) Anyway, agreed with everything else.