Wii U GPU Type CONFIRMED! Custom AMD E6760!

HoloDust said:
justinian said:
Even if this is true (which I doubt), it is probably not the standard HD6760.

Whatever AMD GPU the Wii U uses, it would most likely be modified or updated, much as ATI's R520-era architecture was reworked into the Xenos on the Xbox 360.

AMD (then ATI) also updated the GameCube's Flipper GPU into the Wii's Hollywood.

So even if it is a 6760, AMD would probably have "doped" it up a bit, as this seems to be their standard procedure where console GPUs are concerned. Rumours also state as much.

It is not - it does not belong to the desktop HD 67xx series, which is based on Juniper cores. It is built on the Turks core, so the closest desktop counterpart is the HD 6570 GDDR5; among mobility cards it's the HD 6650M/6750M (both Turks based).

You are right. My error. I am so used to putting "HD" before the number with AMD cards that I did it automatically. I indeed meant the E6760.

Still, it is likely to be an updated or "modernised" version of said chip. I hope it is updated, anyway, as the GPUs were in AMD's (ATI's) previous consoles.



Lafiel said:
Viper1 said:

It's fake

 

AMD graphics card tech support would not have access to confidential information from the console side of the business, nor would they simply give that information away like it was nothing.

Probably, but all things considered, the E6760 or a modified version of the same chip (Turks) fits very nicely with what we know of the Wii U GPU.

I'm not saying it doesn't use that chip.  I'm saying the way that information was gained is fake.



The rEVOLution is not being televised

I just read through this board, and that Marty guy, or whatever, wow. He tried to claim others didn't know what they were talking about, yet he claimed the 8800 GT came out in 2006, when it in fact was a 2008 card. The 2006 cards were the 7000 series.



errorpwns said:
I just read through this board, and that Marty guy, or whatever, wow. He tried to claim others didn't know what they were talking about, yet he claimed the 8800 GT came out in 2006, when it in fact was a 2008 card. The 2006 cards were the 7000 series.

It was released in 2007?



October 29, 2007, to be precise.

 

The 8800 GTX did release in November of 2006, but the GT came 11 months later.



The rEVOLution is not being televised


Oh, my bad. I was thinking of a different card series. I must have been thinking about the AMD R700. Now I feel stupid, lol. Still, the 8800 GT was released in late '07, not in '06.



errorpwns said:

Oh, my bad. I was thinking of a different card series. I must have been thinking about the AMD R700. Now I feel stupid, lol. Still, the 8800 GT was released in late '07, not in '06.

Yeah, I had something similar a couple of pages back. No biggie.

Stuff gets so complicated!



HoloDust said:

The 5000 score is the GPU score; the P score is 5600+ (against the 5870 figure in the E6760's spec sheet), so I'd say that's pretty close.
As for the 4850 - if you look, you can find it yourself, but here are two results at stock speeds:

http://www.3dmark.com/3dmv/3144570

http://www.3dmark.com/3dmv/3574315

That's some 1.3x the stated P score of the E6760.

Now, I much prefer using pure GPU scores, as they are not that dependent on the CPU - take a look at this result for the 4850, but with a Phenom II 955, and see that the GPU score is pretty much the same as with the 620:

http://www.3dmark.com/3dmv/3843371

That said, if you want to persist in claiming that the E6760 is not based on Turks with a 480:24:8 (shaders:TMUs:ROPs) configuration, that's your choice; if not, feel free to look for GPU scores of the 6570 and 6650M at stock speeds (which vary from 4500 to 5300) and compare them to the 4850's GPU score of around 7500.

So you believe there can be up to a 3000-point difference between the same setups without any clock improvements? Come on...

As I said before, 3DMark doesn't always record the correct speeds.

Here's the 4850 with a 50% faster CPU, the QX9650:
http://nl.hardware.info/productinfo/benchmarks/6/3d-chips?products[]=26815&specId=3912&tcId=132

Here's the 4850 with a Core i7 965, which is about 130% faster than the Athlon II X4 620:
http://nl.hardware.info/productinfo/benchmarks/6/3d-chips?tcId=190&specId=4935&products[]=26815

These are independent benchmarks with every component at stock speeds.

Now are you really gonna try and say that a 4850 with a 620 can do the same scores as a 4850 with a Core i7 965?

From what I see, the 4850 scores 6100 points at best - just a few hundred points above the E6760, or a few percent.

Either way, there's no way a 4850 scores well over 6000 points without an overclock or a much faster processor, let alone 7500+.

 

So no, the 4850 isn't 1.6 or 1.4 or 1.3 times faster. Not even close.
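
For what it's worth, the ratios both posters cite can be reproduced from the figures quoted in this thread alone. Here is a minimal sketch in Python, assuming the quoted scores are accurate (they are claims from this thread, not independent benchmark runs):

```python
# Ratio check using the 3DMark Vantage figures quoted in this thread.
# These are the posters' claimed scores, not fresh benchmark results.
e6760_spec_score = 5870   # Vantage score from the E6760 spec sheet
e6760_p_score = 5600      # measured P score cited for the E6760
e6760_gpu_score = 5000    # GPU score cited for the E6760
hd4850_gpu_claim = 7500   # HoloDust's claimed stock 4850 GPU score
hd4850_best_seen = 6100   # counter-claim: the 4850's best stock score

# HoloDust's "some 1.3x": 4850 GPU score vs. E6760 P score
print(round(hd4850_gpu_claim / e6760_p_score, 2))    # 1.34

# GPU score vs. GPU score, HoloDust's preferred comparison
print(round(hd4850_gpu_claim / e6760_gpu_score, 2))  # 1.5

# The rebuttal's "a few percent": 6100 vs. the spec-sheet 5870
print(round(hd4850_best_seen / e6760_spec_score, 2)) # 1.04
```

So the disagreement isn't really about the arithmetic; it's about which 4850 score (and which score type, P or GPU) to trust.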



This thread has made me start thinking that everyone may be looking too far toward each extreme to accurately guess the GPU that the Wii U is using...

I was just looking at the Radeon HD 6670 and Radeon HD 6570, and it seems like either might be a good starting point for a low-power video game console. As PC graphics cards they seem to run HD console games (ones that ran at 720p@30fps on the HD consoles) at higher detail settings at 1280x1024 and above 60fps, which probably translates to 720p@60fps when you're also rendering to the Wii U tablet. At idle these graphics cards run in the 10 W range, and they peak at around 60 W (a large portion of which is due to the GDDR5 memory).

Through customization, Nintendo could probably maintain (or modestly improve) performance while reducing power consumption. With optimization, developers may be able to improve real-world performance by 50% to 100% over what was seen on the PC; and developers who wanted to push better visuals could drop the frame rate to 30fps and (roughly) double the detail. At 720p@30fps the best-looking games would probably have 2 to 3 times the detail of their HD console counterparts, and those same games running at 1080p@60fps would likely push the most powerful next-generation consoles pretty hard.
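
A quick pixel-throughput sanity check of those estimates, as a sketch only: pixel count is a crude proxy for rendering cost, and the 854x480 tablet resolution is the commonly reported figure, not a confirmed spec.

```python
# Pixel-throughput arithmetic behind the estimates above.
# Pixel count ignores shader load and bandwidth, so treat these
# as order-of-magnitude checks, not performance predictions.
bench_res = 1280 * 1024   # resolution used in the PC benchmarks
tv_720p   = 1280 * 720    # Wii U main output at 720p
tablet    = 854 * 480     # commonly reported tablet screen resolution
tv_1080p  = 1920 * 1080

# 720p TV output plus the tablet stream is almost exactly the
# 1280x1024 benchmark workload, supporting the 720p@60fps estimate:
print(tv_720p + tablet, "vs", bench_res)   # 1331520 vs 1310720

# 1080p@60fps pushes 4.5x the pixels per second of 720p@30fps,
# which is why it would strain a much more powerful console:
print((tv_1080p * 60) / (tv_720p * 30))    # 4.5
```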


Ultimately, what got me thinking this is the question: why would Nintendo seek substantial customization of a high-powered GPU to make it energy efficient, or of an embedded GPU to make it powerful enough, when they could use a GPU that is already 90% of what they (probably) want?



...and the saga continues

 

http://www.neogaf.com/forum/showpost.php?p=42495681&postcount=5388 - single post

http://www.neogaf.com/forum/showthread.php?t=490844&page=108 - thread