forethought14 said:
fatslob-:O said:
You're forgetting the fact that the eDRAM takes up a significant amount of die space. After all, cache doesn't cost a small number of transistors. BTW I don't literally mean "off the shelf", that was a slight hyperbole. By that I mean pretty damn similar. If it truly had around 700 million transistors of enabled logic, then why is it so hard for the Wii U to completely beat the PS360, and why does it consume 35 watts in total, not including the disc drive etc.? Right now 600 million transistors of logic makes sense, because the actual graphics processing component is around 100mm^2, not the 156mm^2 you initially thought. It could easily compare to a 320 shader part that has some disabled shaders too, and BTW none of those 160 shader parts make sense because they only have 4 ROPs, so don't just assume that I am referring to 160 shader parts. An HD 5550 is looking pretty likely right now for what Nintendo has used as a base.
Oh, as for your "Latte" having more logic, do you even know if all of that is ENABLED logic, i.e. the shaders that ACTUALLY WORK? It's very common to see a lot of GPU manufacturers disable a part of the die that's NOT WORKING. The "Latte" probably has around 900 million transistors in total, and a third of it is probably reserved for things like the eDRAM. Half of it is probably used for things like GPU logic, and the rest is used for extra eDRAM and GPU logic so that the chip doesn't end up with lower yields.
Do we even have a DF or LOT analysis to come to the conclusion that it runs worse on the PS4? (Doesn't matter anyways, since the PS4 is just 5 days away from analysis.)
|
I can't answer that, because I simply don't know. I think the question that should be asked is: what functions are being used right now? It has a tessellator on fixed-function silicon that's likely not being used right now, it has extra GPRs compared to a conventional R700 series GPU, and all of that extra cache (Nintendo has mentioned that Wii U is heavily reliant on memory); how much of that is actually being used? Unfortunately, with Wii U dev kits being badly documented, I wouldn't be surprised if a lot of GX2 functions of Latte are being left unused. Like how Two Tribes suddenly discovered a hardware feature that reduced memory usage and saved them 100MB. Was this feature not documented? How would they have just "discovered" something otherwise?
OK, then say it some other way. Saying "off-the-shelf part" makes it seem like Nintendo took, for example, a 5550, looked at it, and thought, "Hmm... this one's good, get rid of the useless stuff and put in eDRAM." They've worked on the console for over 3 years; that's a lot of time to have done all sorts of things to the GPU.
The eDRAM is 40nm Renesas eDRAM; we know its characteristics because they come directly from the Renesas website. It should take around 220 million transistors (+/- a few million), leaving about 717 million for the GPU (depending on the transistors per mm^2). You're forgetting that off-the-shelf parts spend die space on things they only need because they're PC parts. Latte won't need those components, and that space can easily be used for something else. And no, Latte has a DX 10.1 (plus extras) feature-set-like compatibility for GX2 (based on documentation); it's not based on a 5000 series GPU (full DX 11 compatibility). It has an R700 base, and they modified it heavily from there. AMD does work with customers and would allow modifications of things like shaders to work better on particular hardware.
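To make the arithmetic explicit, here's a rough back-of-envelope sketch in Python using the figures above; the ~937 million total is just what the 220M + 717M split implies, and all of these numbers are estimates from this thread, not confirmed specs:

```python
# Back-of-envelope transistor budget for Latte, using the figures in this post.
# All of these numbers are estimates/placeholders, not confirmed specs.

total_transistors = 937e6   # assumed total for the ~156 mm^2 die (220M + 717M)
edram_transistors = 220e6   # estimate for the Renesas eDRAM macros

logic_transistors = total_transistors - edram_transistors
print(f"Left for GPU logic: {logic_transistors / 1e6:.0f}M")             # ~717M

die_area_mm2 = 156.21
density = total_transistors / 1e6 / die_area_mm2
print(f"Implied average density: {density:.1f}M transistors per mm^2")   # ~6M/mm^2
```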
And I never said that all of that 156.21mm^2 was GPU logic; I already subtracted all of the eDRAM, and I apologize for the bad wording. I'm referring to "logic" as anything usable for gaming, everything on the die that can technically be used for gaming purposes. However, eDRAM is also a factor for gaming; conventional GPUs don't have it, and it can be very useful for increasing performance. You seem to be passing it off like it's disposable and not part of the entire GPU system. Heck, even for the CPU, since it also has direct access. ROPs, though, would not make a big difference in transistors, even if they added in 4 more.
And I wasn't thinking about certain Wii-related functions in the GPU that are likely used for BC purposes, so I suppose it is possibly lower than what I have calculated. Though, going by Shiota's comment (from Iwata Asks), he makes it seem like they simply took Wii U parts and modified them so that they could be used for Wii BC as well. Of course, we don't know exactly what they're talking about, but since they're talking about BC for Wii on the Wii U, that could be the case. And I believe Marcan said that Wii emulation wasn't being run by an implanted "Hollywood" GPU, so Wii GPU emulation could be done with the same Wii U parts, meaning that not much, if anything, would have been wasted on Wii GPU emulation. Though if that's not true, that would put the count at around 600 million (Wii BC logic plus GamePad compression shouldn't take up much space at 40nm), which would still put it above some parts with more shaders. Plus, Latte is produced on a more mature 40nm process than any 5000 series GPU was...
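For scale, here's a quick comparison of those estimated logic budgets against the roughly quoted transistor counts of the desktop parts that keep coming up in this thread; treat the reference figures as approximate public numbers, not exact values:

```python
# Comparing the Latte logic estimates against approximate public transistor
# counts for a couple of desktop GPUs. Reference figures are approximate.

latte_logic_low  = 600e6   # pessimistic estimate, Wii BC extras subtracted
latte_logic_high = 717e6   # estimate with only the eDRAM subtracted

reference_parts = {
    "RV730 (HD 4650/4670, 55nm)":  514e6,
    "Redwood (HD 5550/5570, 40nm)": 627e6,
}

for name, count in reference_parts.items():
    print(f"{name}: ~{count / 1e6:.0f}M vs Latte logic estimate "
          f"{latte_logic_low / 1e6:.0f}-{latte_logic_high / 1e6:.0f}M")
```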
You know, that "unusable logic" idea applies more to commercial off-the-shelf GPUs; a console GPU won't have as much of it, if any, otherwise it would be a terribly designed GPU with that much unused logic.
And no, we don't have a DF analysis, but so far, reviewers have complained about these framerate issues. If those framerate issues are there (unless the consoles playing those games are malfunctioning in some way), then I would blame the developers, not the hardware.
|