
Forums - Gaming Discussion - Wii U vs PS4 vs Xbox One FULL SPECS (January 24, 2014)

I love this thread! I'm learning a lot.

@BlueFalcon: About MS/Sony going with an 8-core FX chip instead of a 4-core one, maybe their plan is to have 8 slower cores that draw less power and generate less heat than 4 faster cores, with the extra 4 cores making up for the lower per-core performance.

And yes, PC games don't use more than 4 cores today, but 3 years ago most of them used only 2. And since most games are developed with consoles in mind, that's no excuse for not going with an 8-core chip. Besides, if developers learned to develop for the PS3 with the SPUs and all that, they can learn to develop for a CPU with 8 "simpler" cores.
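Just to illustrate the idea (a back-of-the-envelope sketch; the clocks and voltages are made-up numbers, not real FX figures):

```python
# Rough rule of thumb: dynamic CPU power scales with cores * f * V^2,
# and voltage has to rise roughly with frequency in the usable range.
# Clocks and voltages below are made-up illustrative numbers.

def relative_power(cores, freq_ghz, volts):
    """Relative dynamic power, P ~ n_cores * f * V^2 (arbitrary units)."""
    return cores * freq_ghz * volts ** 2

def relative_throughput(cores, freq_ghz):
    """Idealised throughput for a perfectly parallel workload."""
    return cores * freq_ghz

configs = {
    "4 fast cores": dict(cores=4, freq_ghz=3.2, volts=1.30),  # assumed
    "8 slow cores": dict(cores=8, freq_ghz=1.6, volts=1.00),  # assumed
}

for name, cfg in configs.items():
    print(f"{name}: power ~{relative_power(**cfg):.1f}, "
          f"throughput ~{relative_throughput(cfg['cores'], cfg['freq_ghz']):.1f}")

# 4 fast cores: power ~21.6, throughput ~12.8
# 8 slow cores: power ~12.8, throughput ~12.8
# Same idealised throughput for ~40% less power - but only if the
# workload really scales across all 8 cores.
```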



Please excuse my bad English.

Currently gaming on a PC with an i5-4670k @ stock (for now), 16 GB of 1600 MHz RAM and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

BlueFalcon said:


Also, neither Sony nor MS would ever use a desktop-based GPU in a console. It will be a mobile version. For example, HD7850M uses about 32W of power, while HD7950M uses about 50W (that's basically a slightly downclocked HD7870). 

The only thing stopping MS or Sony from dropping an ~HD7870 (HD7950M) part into their next console is cost, not power consumption. A10-5700 + HD7950M would be a much better fit for a well-rounded next-generation console than FX8xxx + HD7770, but it sounds like Sony is now run by accountants, not gamers. I at least hope they include some kind of dedicated GPU, or the console is in trouble if the A10 APU is all it has.

Too many mistakes in your answer, so again I repeat:

The HD7770 GPU is a 125mm^2 die. The HD7950M GPU you apparently want to see is a 350mm^2 die. Your chip is almost three times larger (translation: roughly three times more expensive) than mine. You lose instantly and permanently. The 384-bit bus width of your chip kills the idea instantly, too. Even 256-bit is at the edge of possibility; your PCB just gets too expensive (the 7770 is 128-bit). The idea that any OEM buys a huge, expensive chip (7950) that is then downclocked/castrated (7950M) to perform roughly the same as a small chip is ridiculous. Not going to happen. Again I repeat: every mm^2 counts, and every additional PCB layer (for wider buses) counts. The 7770 chip is the cheapest GPU that is good enough for 1080p/60Hz.
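To put rough numbers on the die-size argument, here is a minimal cost sketch; the wafer cost and defect density are generic placeholder assumptions, not actual foundry figures:

```python
import math

# Crude per-die cost model on a 300mm wafer with a Poisson yield term.
# Wafer cost and defect density are generic placeholder assumptions,
# not actual foundry numbers.
WAFER_COST_USD = 5000.0
DEFECTS_PER_CM2 = 0.25
WAFER_DIAMETER_MM = 300.0

def dies_per_wafer(die_area_mm2):
    """Classic approximation: gross area / die area, minus edge loss."""
    radius = WAFER_DIAMETER_MM / 2
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2))

def yield_fraction(die_area_mm2):
    """Poisson yield: exp(-area * defect_density), area in cm^2."""
    return math.exp(-(die_area_mm2 / 100.0) * DEFECTS_PER_CM2)

def cost_per_good_die(die_area_mm2):
    good_dies = dies_per_wafer(die_area_mm2) * yield_fraction(die_area_mm2)
    return WAFER_COST_USD / good_dies

for name, area in [("HD7770-class, 125mm^2", 125),
                   ("HD7950-class, 350mm^2", 350)]:
    print(f"{name}: ~${cost_per_good_die(area):.0f} per good die")

# ~$14 vs ~$72 with these assumptions: the big die fits fewer candidates
# per wafer AND yields a smaller fraction of good ones, so its cost grows
# faster than its area - the "every mm^2 counts" point above.
```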

The AMD FX8... series of processors is ideally suited for XBoxNext due to its design peculiarities - I must remind everyone that XBoxNext has to perform two entirely separate things in parallel: run games and run Kinect2. What GPU MS will use is anyone's guess at this point (unless you have the latest dev kit). They have all the money to go 78xx if they want to.

At this point the rumoured configurations are (performance estimates in %):

WiiU: triple-core PPC and dedicated (heavily modified) 4850 type GPU (100%, about 130% PS3/360)

PS4: A10 APU (70-90%)

PS4: A10 (for physics, move etc) with dedicated GPU in the 7770 range (150-170%)

XBoxNext: FX8xxx with dedicated GPU in the 7770 range (170-220%)

Yes, I know nobody will like my estimates. You'd rather read that PS4/NextBox will be 6 times more powerful than WiiU. Not going to happen. Ever.



drkohler said:

[...]

WiiU: triple-core PPC and dedicated (heavily modified) 4850 type GPU (100%, about 130% PS3/360)

PS4: A10 APU (70-90%)

PS4: A10 (for physics, move etc) with dedicated GPU in the 7770 range (150-170%)

[...]

I just disagree on one part... the Wii U GPU seems more like a downclocked RV740... an HD 4770 at a lower clock.

And the PS4 needs another GPU, or these rumours are all fake.



drkohler said:
[...]

WiiU: triple-core PPC and dedicated (heavily modified) 4850 type GPU (100%, about 130% PS3/360)

[...]


Just on your Wii U GPU ...

Were you aware that the GPU you selected for the Wii U is more than 4 times the processing power of the XBox 360's GPU?

Even ethomaz's correction of the Radeon HD 4770 produces 960 GFLOPS, which is exactly 4 times the performance of the XBox 360 GPU.

Realistically, I think the Wii U's GPU is probably (something like) the Radeon 7690M XT, which is (roughly) 3 times the performance of the XBox 360/PS3, is based on the Turks core, is manufactured on the 40nm process, and runs at 25 Watts. Even the GPU's die size of 118mm^2 falls roughly in line with what we would expect, given that Nintendo added a bunch of eDRAM to the chip.
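All of these multipliers fall out of the standard shader-count x 2 ops x clock arithmetic; a quick sketch using the public spec-sheet figures for the GPUs named in this thread:

```python
# Peak single-precision GFLOPS for these VLIW-era Radeons:
# stream processors * 2 ops/cycle (multiply-add) * clock in GHz.
def peak_gflops(stream_processors, clock_mhz):
    return stream_processors * 2 * clock_mhz / 1000.0

gpus = {
    "XBox 360 Xenos":            (240, 500),  # 48 shaders x 5 ALUs
    "Radeon HD 4850 (drkohler)": (800, 625),
    "Radeon HD 4770 (ethomaz)":  (640, 750),
    "RV740 @ 500 MHz":           (640, 500),  # ethomaz's downclock idea
    "Radeon 7690M XT":           (480, 725),
}

xenos = peak_gflops(*gpus["XBox 360 Xenos"])
for name, (sps, mhz) in gpus.items():
    gflops = peak_gflops(sps, mhz)
    print(f"{name}: {gflops:.0f} GFLOPS ({gflops / xenos:.1f}x Xenos)")

# HD 4850: 1000 GFLOPS (4.2x), HD 4770: 960 GFLOPS (exactly 4.0x),
# 7690M XT: 696 GFLOPS (~2.9x) - the multipliers quoted above.
```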



HappySqurriel said:

Even ethomaz's correction of the Radeon HD 4770 produces 960 GFLOPS, which is exactly 4 times the performance of the XBox 360 GPU.

The HD 4770 runs at 750 MHz... I said an RV740 downclocked to 500 MHz... at max 600 GFLOPS... in line with the rumours.



ethomaz said:

drkohler said:

[...]

WiiU: triple-core PPC and dedicated (heavily modified) 4850 type GPU (100%, about 130% PS3/360)

[...]

I just disagree on one part... the Wii U GPU seems more like a downclocked RV740... an HD 4770 at a lower clock.


Based on what? You've been saying this for a while now and your only reasoning seems to be launch games.



Play4Fun said:
Based on what? You've been saying this for a while now and your only reasoning seems to be launch games.

The Wii U board and its component sizes, plus all the info released so far... I already explained that two or three times in this topic.



ethomaz said:

HappySqurriel said:

Even ethomaz's correction of the Radeon HD 4770 produces 960 GFLOPS, which is exactly 4 times the performance of the XBox 360 GPU.

The HD 4770 runs at 750 MHz... I said an RV740 downclocked to 500 MHz... at max 600 GFLOPS... in line with the rumours.


I've moved towards believing that Nintendo is far more likely to choose a GPU that already has the rough cost, performance and energy consumption they want and moderately modify it (essentially choose a notebook GPU) than to take a desktop GPU and try to engineer it down to those targets.

While there are multiple options in the Radeon 6xxxM and 7xxxM lines that are based on the Turks core, are manufactured on a 40nm process, have 480 stream processors, and run between 25 and 40 Watts (all things that match most of the rumours), the fact that they all land (roughly) between 600 GFLOPS and 700 GFLOPS of processing power gives us a pretty good idea of what Nintendo could do by picking one of these GPUs as their base. Even the desktop GPUs that match this profile (Radeon 6570, 6670, 7570 and 7670) all still run in the 600 GFLOPS to 800 GFLOPS range, although their wattage is higher (in the 60 Watt range).
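That 600-700 GFLOPS band follows directly from the same arithmetic for a fixed 480-SP Turks chip swept across its typical clock range (a quick sketch; the clock endpoints are approximate):

```python
# For a fixed 480-SP Turks chip, peak GFLOPS scales linearly with clock:
# 480 SPs * 2 ops/cycle * clock. Mobile Turks parts cluster roughly in
# the 600-725 MHz range (approximate endpoints, treat as assumptions).
STREAM_PROCESSORS = 480

for clock_mhz in (600, 650, 700, 725):
    gflops = STREAM_PROCESSORS * 2 * clock_mhz / 1000.0
    print(f"{clock_mhz} MHz -> {gflops:.0f} GFLOPS ({gflops / 240:.1f}x Xenos)")

# 600 MHz -> 576 GFLOPS (2.4x) up to 725 MHz -> 696 GFLOPS (2.9x):
# the (roughly) 600-700 GFLOPS band quoted above.
```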

Ultimately, we can argue endlessly about what the Wii U GPU might be based on, but I think it is fairly clear that it is (about) 3 times the performance of the XBox 360 GPU, with some increased efficiency and better features that come from being a more modern GPU.



ethomaz said:

Play4Fun said:
Based on what? You've been saying this for a while now and your only reasoning seems to be launch games.

The Wii U board and its component sizes, plus all the info released so far... I already explained that two or three times in this topic.


I've seen more logical speculation that implies otherwise, so whatever. It just seems like a "lol Nintendo" mindset to me.



HappySqurriel said:
I've moved towards believing that Nintendo is far more likely to choose a GPU that already has the rough cost, performance and energy consumption they want and moderately modify it (essentially choose a notebook GPU) than to take a desktop GPU and try to engineer it down to those targets.

While there are multiple options in the Radeon 6xxxM and 7xxxM lines that are based on the Turks core, are manufactured on a 40nm process, have 480 stream processors, and run between 25 and 40 Watts (all things that match most of the rumours), the fact that they all land (roughly) between 600 GFLOPS and 700 GFLOPS of processing power gives us a pretty good idea of what Nintendo could do by picking one of these GPUs as their base. Even the desktop GPUs that match this profile (Radeon 6570, 6670, 7570 and 7670) all still run in the 600 GFLOPS to 800 GFLOPS range, although their wattage is higher (in the 60 Watt range).

Ultimately, we can argue endlessly about what the Wii U GPU might be based on, but I think it is fairly clear that it is (about) 3 times the performance of the XBox 360 GPU, with some increased efficiency and better features that come from being a more modern GPU.

The RV740 is a mobile GPU... the chip powers the Mobility Radeon HD 4830 and Mobility Radeon HD 4860... and of course it powers the desktop Radeon HD 4770 too... there is no difference between mobile and desktop GPU hardware... the differences are configuration (clocks) and software (drivers).

The Radeon 6xxxM and Radeon 7xxxM use the same desktop chips found in the Radeon HD 5xxx and HD 6xxx series.

The workstation GPUs (FireGL) use the same desktop chips with optimized clocks and drivers.
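A minimal sketch of that same-silicon point (the mobile clocks are approximate spec-sheet values, so treat them as assumptions):

```python
# One RV740 die, three SKUs: only the clock (and drivers) differ.
# Mobile clocks are approximate spec-sheet values (assumptions).
RV740_STREAM_PROCESSORS = 640  # identical silicon in every SKU

skus = {
    "Radeon HD 4770 (desktop)": 750,  # MHz
    "Mobility Radeon HD 4860":  650,
    "Mobility Radeon HD 4830":  600,
}

for name, clock_mhz in skus.items():
    gflops = RV740_STREAM_PROCESSORS * 2 * clock_mhz / 1000.0
    print(f"{name}: {clock_mhz} MHz -> {gflops:.0f} GFLOPS")

# Same chip, same shader count: the desktop/mobile performance gap here
# is purely a clock (binning/power) decision, as argued above.
```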