HoloDust said:
timmah said:

My take: We have a GPU roughly on par with the HD 5550 (maybe better due to customization), or a downscaled, customized HD 6570 (or even something in the same ballpark from AMD's embedded line, such as a modified E6760). The initial ports had trouble due to poor optimization, with performance problems caused either by CPU code or by poor memory management (not using texture compression, not properly utilizing the eDRAM, etc.), through no fault of the devs, who hadn't had time to learn the system. Porting from the PS4/Nextbox should be similar to scaling a PC game back from a mid-range gaming rig with a 6670-, 7750-, or 7770-class card to a more modest budget rig with a 5550/6570-class card. Ports can run well enough and look fine (if optimized correctly), but will be scaled back graphically by some margin and will most likely run at 720p. Still nowhere near the Wii-PS360 gap by any stretch.


The 6570 is a 480:24:8 part; its DDR3 version, with the same memory bandwidth and the same clock as the 5550, performs some 40-45% better than the 5550. If that's what's inside, it shouldn't have had problems running NFS at 30fps/720p from the get-go. But that part is 118mm^2 at 40nm, so I don't think it fits...
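
For what it's worth, the raw shader math behind that 40-45% figure works out like this (a rough sketch with both parts normalized to the 5550's 550 MHz clock, as in the comparison above; the FMA-based peak formula is the usual rule of thumb, not an official spec):

```python
# Back-of-envelope shader throughput with clocks normalized between the two
# parts; each VLIW5 SP retires one FMA = 2 FLOPs per clock. Real boards
# vary, so treat these as rough theoretical peaks.

def peak_gflops(shader_count, core_clock_mhz):
    """Peak single-precision GFLOPS = shaders * clock * 2 (FMA)."""
    return shader_count * core_clock_mhz * 2 / 1000.0

clock = 550  # MHz; the HD 5550's stock core clock, applied to both parts here
hd5550 = peak_gflops(320, clock)  # 320:16:8 config -> ~352 GFLOPS
hd6570 = peak_gflops(480, clock)  # 480:24:8 config -> ~528 GFLOPS

# 50% more raw ALU throughput on paper; the observed ~40-45% gap suggests
# the extra shaders are partly starved when memory bandwidth is held equal.
print(f"raw shader advantage: {hd6570 / hd5550 - 1:.0%}")
```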

My pick is still Redwood LE (5550) or RV730 (4650) shrunk to 40nm - a 160SP config just looks... not sure what words to use if Nintendo went with that, but who knows. Anyway, even if it's a 320:16:8 config inside, we're looking at some 8x between the PS4 and Wii U... a real shame they haven't gone for at least something like that rumoured E6760 with proper memory bandwidth.

It could still theoretically be a custom chip based on the E6760's architecture - it is certainly very, very customized, so we really don't know what part they used as the basis for the chip. My assumption from comparing performance in the NFS-MW video against other cards: it CANNOT be a 160SP 6450, not a chance, and performance seems to be AT LEAST in the neighborhood of the stock 5550-to-6570 range based on how it's running that game. As for memory bandwidth, we've heard developers complain about the CPU (though that should be OK when properly utilized, given its short pipeline and out-of-order execution), but there's been nothing but praise for the memory architecture so far, so maybe that's not a big deal. Not a single developer has said anything about memory bandwidth issues. Every system Nintendo has built since the GC has been very efficient and well balanced, so I assume they would continue that tradition.

Keep in mind, we have a part number for the memory, but we don't know to what degree it's been customized, if at all. We don't know for certain what the memory clock rate is, and nobody has actually presented a benchmark to test real-world performance. How does the memory controller use the eDRAM? Can the system do some type of predictive prefetch into eDRAM to accelerate real-world memory performance? We have a die shot of the GPU, but no solid answer as to what part it's based on. We also don't know what the ~30% of the GPU that is 'unknown' does - we're fairly certain it has a hardware tessellation unit, but what else did Nintendo bake into the chip? Did Nintendo & AMD create some crazy DX11-type fixed functions to give developers 'free' lighting or other 'free' effects? Maybe some kind of proprietary hardware texture compression is available to negate a potential RAM bottleneck? Some of that is a bit far-fetched, but this is Nintendo we're talking about. We have no idea about the details of the 'unknowns', and I'd assume that 30% does something... I wish Nintendo would just release the damn specs!
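
To put the bandwidth question in rough numbers, here's a sketch using the commonly cited (but unconfirmed) figures of DDR3-1600 on a 64-bit bus plus 32 MB of eDRAM; the point is just how comfortably 720p render targets fit on-chip:

```python
# Rough main-RAM bandwidth vs. eDRAM framebuffer math, using the commonly
# cited but unconfirmed figures: DDR3-1600 on a 64-bit bus, 32 MB of eDRAM.

bus_bytes = 64 // 8                    # 64-bit bus -> 8 bytes per transfer
ddr3_gbps = 1600e6 * bus_bytes / 1e9   # 1600 MT/s -> 12.8 GB/s peak

fb_mb = 1280 * 720 * 4 / 2**20         # one 720p target at 32bpp -> ~3.5 MB
edram_mb = 32
used_mb = 3 * fb_mb                    # double-buffered color + depth -> ~10.5 MB

print(f"main RAM peak: {ddr3_gbps:.1f} GB/s")
print(f"720p color x2 + depth: {used_mb:.1f} of {edram_mb} MB eDRAM")
# If the hot buffers (and maybe prefetched textures) live in eDRAM, the
# modest main-RAM figure matters far less than the raw number suggests.
```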

EDIT: No way that would give the PS4 an 8x advantage. Last I heard, the rumor was about a 1.3 TFLOPS GPU (and I'm betting it's going to be downclocked for heat & power reasons), which is about a 3-4x difference in raw power if you assume the 5550. Wii-PS360 was around a 20x power differential.
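
Running the numbers on both claims with the same peak-FLOPS rule of thumb (the 1.3 TFLOPS PS4 figure and the 550 MHz Wii U clock are both rumors, not confirmed specs):

```python
# FLOPS ratios behind the "8x" vs. "3-4x" disagreement, assuming the rumored
# ~1.3 TFLOPS PS4 GPU and a rumored 550 MHz Wii U GPU clock.

def peak_gflops(shaders, clock_mhz):
    return shaders * clock_mhz * 2 / 1000.0  # 2 FLOPs per SP per clock (FMA)

ps4_gflops = 1300.0                 # rumored figure
wiiu_320sp = peak_gflops(320, 550)  # 5550-like config -> ~352 GFLOPS
wiiu_160sp = peak_gflops(160, 550)  # 6450-like config -> ~176 GFLOPS

print(f"PS4 vs 320 SP Wii U: {ps4_gflops / wiiu_320sp:.1f}x")  # ~3.7x
print(f"PS4 vs 160 SP Wii U: {ps4_gflops / wiiu_160sp:.1f}x")  # ~7.4x
```

So under these assumptions the ~8x figure only appears if you assume the 160SP config; with a 5550-class 320SP part, the gap lands in the 3-4x range described above.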