BlueFalcon said:
That's an excellent observation HoloDust! Why, thanks :) The HD5550 has a VP rating of 27 and a power consumption of 39W on the 40nm node. I think the HD5550 GDDR5 has a rating of 27 VP - the HD5550 DDR3 is at 21.8 VP. Also, I think 39W is the TDP for the desktop part - if you look at the Mobility 5750, which is Redwood Pro (400:20:8), its TDP is 25W (I am, as always, baffled by these significant differences between desktop and mobile parts).

Redwood HD5550 (DX11) = 550MHz GPU clock (352 Gflops), 320 SPs, 16 TMUs, 8 ROPs, 28.8 GB/sec memory bandwidth over a 128-bit bus = 27 VP
RV730 HD4650 (DX10.1) = 600MHz GPU clock (384 Gflops), 320 SPs, 32 TMUs, 8 ROPs, 16 GB/sec memory bandwidth over a 128-bit bus = 17.8 VP
R700 Wii U's rumored GPU (DX10.1) = 550MHz GPU clock (352 Gflops), 320 SPs, 16 TMUs, 8 ROPs, 12.8 GB/sec memory bandwidth over a 64-bit bus

Well, I think the 5550 DDR2 could be a good approximation for the rumored specs so far - it is a 320:16:8 part @ 550MHz, with DDR2 over a 128-bit bus, for a total bandwidth of 12.8 GB/s (just like the suggested Wii U's over a 64-bit bus). That card's rating is 16.5 VP.

^ That means the R700 in Wii U is actually slower than the HD5550. You can also see that even with just 320 SPs, these GPUs still need memory bandwidth ==> the 4650 is much slower than the HD5550 even with a higher GPU clock and 2x the TMUs, partly because it has just 16 GB/sec of memory bandwidth. The other reason is that the HD4000 series has worse performance per clock/IPC than the HD5000 series does. This can be observed by comparing the HD4870 vs. the HD5770. Despite the HD4870's nearly 50% memory bandwidth advantage over the HD5770 (http://www.gpureview.com/show_cards.php?card1=564&card2=615), the two cards are actually similar in performance.

If we assume that some "secret sauce" & eDRAM have allowed the R700 in Wii U to land between the HD4650 and HD5550, we would get a VP rating of 22.4 (the average of the HD4650 and HD5550). I honestly have no idea if those extra parts can make up for the lack of memory bandwidth and boost it to the level of the HD5550 DDR3 over a 128-bit bus (or more). But if they can, we might be back to the 21.8 VP of the HD5550 DDR3.

The GPU in the Xbox 360 was claimed by ATI themselves to be similar to the X1800XT 512MB, or a VP rating of 16.7. That means a full-blown HD5550 GPU is roughly 62% faster than the GPU in the Xbox 360. The HD5550 has 28.8 GB/sec of memory bandwidth, but Wii U's only has 12.8 GB/sec. I am inclined to believe that 32MB of eDRAM cannot make up for losing more than half of the memory bandwidth. For that reason I'd put the R700 in Wii U at 22.4 VP instead of the HD5550's 27 VP.

Well, some time ago I listened to your advice and tried to compare architectures, clocks and configs when I was making those comparison tables - I came to a guesstimate, based on similarity to the HD2xxx series (as the first unified-shader PC cards), that Xenos is somewhere around 14.8 VP.
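Just to show my work, here's the back-of-the-envelope math behind those figures (a rough sketch in Python; the VP numbers are the ones quoted above, while the 2 flops per SP per clock and the ~1600 MT/s memory speed are my own assumptions):

# simple sanity checks on the numbers quoted above
def peak_gflops(clock_mhz, sps):
    # each stream processor does one MADD (2 flops) per clock
    return clock_mhz * sps * 2 / 1000

def bandwidth_gbs(bus_bits, mtps):
    # bus width in bytes times effective transfer rate
    return bus_bits / 8 * mtps / 1000

print(peak_gflops(550, 320))   # 352.0 Gflops - HD5550 / rumored Wii U GPU
print(peak_gflops(600, 320))   # 384.0 Gflops - HD4650
print(bandwidth_gbs(64, 1600)) # 12.8 GB/s - rumored 64-bit bus, assuming ~1600 MT/s memory

hd5550_vp, hd4650_vp, xenos_vp = 27.0, 17.8, 16.7
print((hd5550_vp + hd4650_vp) / 2)  # 22.4 - the "in between" guess
print(hd5550_vp / xenos_vp - 1)     # ~0.62, i.e. roughly 62% faster than the Xbox 360 GPU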
The other downside is the lack of DX11, which means no next-generation effects in any of Wii U's games. To be honest, the first time I started thinking it is based on Redwood was after some developers claimed it is a DX11-capable GPU. I might be completely wrong though, or not up to speed with the latest.

================================

The facepalm moment for me is that Nintendo could have just gone with a $130 65W Trinity A10-5700 and ended up with a faster CPU and GPU. Backward compatibility, maybe? But then again, Dolphin proved to be more than capable of emulating the Wii, so I see no reason why Nintendo could not make a near-perfect x86 Wii emulator on their own.