
Wii U vs PS4 vs Xbox One FULL SPECS (January 24, 2014)

HoloDust said:
JEMC said:

No HD8800M chip has been announced, but it will likely be the same as the current 8800 parts at higher frequencies.


Not sure if you meant the HD8800 (desktop) or the HD8900M (mobile) there, but if you meant the HD8900M: considering previous generations, the HD8970M will be a mobile version of the HD8870 at some 75-85% of the clock. So, if the rumours for the 88xx series are true, that would be a 1792:112:32 card at around an 850MHz core clock - for comparison, that's around the level of a 7950 Boost edition. I would be hugely surprised to see that in any console.

My bad. I meant the mobile one, the HD8900M.

If the next gen consoles can get one of those, I can't imagine the things they will be able to do.





Yeah, that is certainly interesting stuff with the mobile 8xxx series. Good info as a high-end mobile GPU is certainly plausible if they don't go the APU route.



The 8870M seems like a good option to pair with an A10 APU.



superchunk said:
IBM Power >>> AMD in terms of CPUs. Basically, any AMD (or Intel) CPU would have to have much higher specs to match an IBM CPU.

I beg to differ. Any modern quad-core AMD CPU would level an IBM-based console CPU into the ground for games. An A10-5800K is 22% slower than a 1st-generation quad-core i7-870 @ 2.93GHz (same Nehalem architecture as the other 1st-gen i7s). Even the FX-8350 "8-core" is still slower in games than that 1st-generation i7, but only by about 5%.

http://www.computerbase.de/artikel/prozessoren/2012/test-amd-fx-8350-vishera/6/

For PC gamers this is a big deal, since they might spend $100 more to go from a GTX 670 to a 680 for just 10% more performance. With a slower CPU holding things back, that would be $100 wasted. That's why Intel CPUs are more popular for gaming: they alleviate as much of the CPU bottleneck as possible for those high-end gaming GPUs. Intel CPUs also use less power at both stock and overclocked settings, which makes them preferable. PC gamers want maximum performance if they can afford it, which puts AMD out of the running unless your budget can't fit an i5/i7. That's why AMD's CPUs are not popular on the PC when paired with high-end GPUs. But let's not project from that the notion that AMD CPUs are slow compared to everything else. If Intel is the Bugatti Veyron Super Sport/Ferrari F70, then AMD is still a Porsche 911 compared to everyone else making gaming CPUs.

Metro 2033's developer estimated that just one of the dual-threaded cores in the i7-870 (Nehalem/Lynnfield, 1st-generation i7) is more powerful than the entire 3-core, 6-thread Xenon IBM CPU in the Xbox 360:

http://www.eurogamer.net/articles/digitalfoundry-tech-interview-metro-2033?page=4

It's more complicated than that, since according to that interview the developer says the 360's CPU could exceed a PC on a per-thread, per-clock basis if developers used properly vectorized instructions. But that sounds more theoretical than practical. Very few developers will go out of their way to actually tap the full potential of PowerPC/Cell CPUs (and we know very few did). If the CPU is x86-based, optimization and performance come a lot easier/better right away. That minimizes development costs and lets developers focus on GPU-side optimizations, which are more important imo.

I would say a quad-core AMD CPU would easily be as fast as an 8-core IBM PowerPC one in most circumstances. AMD's budget APUs may be 30-40% behind Intel's $225 i5-3570K or $325 i7-3770K, but you've got to look at the big picture: those two brands are still relatively close compared to everyone else in the CPU business.

Here is a good chart of what happens if you pair an A10-5800K CPU + HD7950 3GB vs. i7 3770K + HD7950 3GB:

http://techreport.com/review/23662/amd-a10-5800k-and-a8-5600k-trinity-apus-reviewed/16

That chart shows that the A10-5800K, when paired with a $280 1792-shader HD7950, has a 99th-percentile frame time equivalent to about 40 fps. In other words, in 99% of cases the A10-5800K delivers frames at 40 fps or better in the games they tested (BF3, Skyrim and Batman: AC), and the average frame rate charts show averages that are still way higher than PS3/360. This is all running at 1920x1080.
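If anyone wants to play with how those percentile numbers work, here's a minimal Python sketch - the frame times in it are made up for illustration, not TechReport's data:

```python
# Minimal sketch of how 99th-percentile frame times relate to fps figures like
# the ones quoted above. The frame-time data here is synthetic, generated for
# illustration only -- it is NOT TechReport's measured data.
import numpy as np

rng = np.random.default_rng(0)
frame_times_ms = rng.gamma(shape=9.0, scale=1.8, size=5000)  # per-frame render times (ms)

avg_fps = 1000.0 / frame_times_ms.mean()        # average frame rate
p99_ms = np.percentile(frame_times_ms, 99)      # 99% of frames finish faster than this
p99_fps = 1000.0 / p99_ms                       # 99% of frames run at this fps or better

print(f"Average frame rate: {avg_fps:.1f} fps")
print(f"99th-percentile frame time: {p99_ms:.1f} ms (~{p99_fps:.1f} fps)")
# So "40 fps at the 99th percentile" means 99% of frames were delivered at
# 40 fps or better, even though the average sits considerably higher.
```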

 

Sure, the performance looks disappointing compared to Intel CPUs, but keep in mind how much those Intel CPUs cost. Is it realistic to expect an i5 or i7 in a console if you are aiming to keep the price reasonable in this economy? Probably not. The next logical trade-off is going with an AMD CPU, since it offers roughly 70% of the performance in games for $100-200 less. For a next-gen console I think that's a good compromise, since most people play on their HDTVs, where rendering above 60 fps is not particularly beneficial. You'd definitely have a way faster gaming console with an A10 CPU + HD7950 than with a Core i5 and an HD7770.

Even though the A10-5800K is slower than Intel's CPUs for games, I don't see this being as critical in the console environment, where MS/Sony are not really targeting 70-100 fps. Also, given the popularity of the PS3/360, it seems console gamers are not too concerned about a 60 fps average being a requirement in all games. Just going from 30 fps in most console games to 40-50 fps will be huge, and an A10 AMD quad-core will get you there, assuming your GPU is capable. If I were allocating the CPU vs. GPU budget, I would be MUCH more concerned with having a potent GPU that can actually render 30-40 fps in next-gen 2017-2019 titles than with a CPU that can get me above 60 fps in 2013-2015 games. Sony's decision to go for an x86 CPU (even if only quad-core) is the best thing they've done, since it dramatically drops the price of the console vs. going with an Intel CPU while costing only a reasonable amount of performance.

Think about it: if one core in a 1st-generation i7 was more powerful than the entire 3-core, 6-thread IBM CPU in the Xbox 360, then even if IBM doubled the instructions per clock and doubled the core count, releasing a 3.2GHz 6-core, 12-thread CPU, that would only match a 1st-generation Core i7. Are we seriously expecting IBM to double the instructions per clock and release a 6-core, 12-thread 3.2GHz out-of-order CPU for the Xbox 720? I think Sony made the better call on the CPU side in terms of the price/performance compromise. The key missing variable is: is there a dedicated GPU alongside that APU, or not?
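To spell out the arithmetic behind that, here's a tiny sketch - the one-core-equals-Xenon baseline is the Metro 2033 estimate above, everything else is simplified guesswork:

```python
# Back-of-envelope version of the argument above. The baseline assumption
# (one Nehalem-class i7 core ~= the whole 3-core/6-thread Xenon, per the Metro
# 2033 interview) is the poster's estimate, not a measurement, and real
# workloads obviously don't scale this cleanly.
xenon_total = 1.0            # whole Xbox 360 CPU taken as the baseline unit
i7_core = 1.0                # one i7-870 core assumed roughly equal to all of Xenon
i7_quad = 4 * i7_core        # a full 1st-gen quad-core i7

# Hypothetical "doubled" IBM successor: 2x the IPC and 2x the cores of Xenon
ibm_doubled = xenon_total * 2 * 2

print(f"1st-gen quad-core i7: {i7_quad:.0f}x Xenon")
print(f"Doubled IBM part:     {ibm_doubled:.0f}x Xenon")
# Even that hypothetical 6-core, 12-thread chip only lands around the level of
# a single 1st-generation quad-core i7, which is the point being made above.
```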



HoloDust said:

While I mostly agree, here's some comparison of those cards (with Voodoo ratings):

In the case of the 6970 and 7870, we have VLIW4 vs GCN, and though it seems the 6970 has a lot more shaders and TMUs, equalized to the 7870's 1GHz the 6970 would be more like a 1352:84:28 card - not too different from the 7870. But still, considering GCN is a newer and quite improved architecture, the 7870 wins by some 10%.

Perfect! I am really glad you decided to simplify things for yourself and make life a lot easier. Normally it would be a headache trying to compare a VLIW4 1536 SP : 96 TMU : 32 ROP HD6970 at an 880MHz GPU clock with a GCN 1280 SP : 80 TMU : 32 ROP HD7870 at a 1000MHz GPU clock, but with the Voodoo GPU Ratings it's already done for you, by meticulously looking at PC GPU reviews across the web and averaging the performance across many games. You could sit there and spend hours trying to find reviews and averaging the performance yourself, but you'll end up in the same spot:

http://www.computerbase.de/artikel/grafikkarten/2012/test-nvidia-geforce-gtx-660/5/

All we need now are the exact specs of the PS4/Xbox 720 GPUs and we'll be able to narrow down their performance and compare it to PS3/360.
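For anyone who wants the arithmetic behind that clock-equalization trick from the quote above, here's a minimal sketch (straight linear scaling with clock, deliberately ignoring the architectural differences):

```python
# Minimal sketch of the clock-equalization HoloDust used in the quote above:
# scale the HD6970's unit counts to what they'd be "worth" at the HD7870's
# 1GHz clock, assuming throughput scales linearly with clock speed (this
# ignores the VLIW4 vs GCN architectural differences).
hd6970 = {"sp": 1536, "tmu": 96, "rop": 32}
hd6970_clock_mhz = 880
hd7870_clock_mhz = 1000

scale = hd6970_clock_mhz / hd7870_clock_mhz
equalized = {unit: round(count * scale) for unit, count in hd6970.items()}

print("HD6970 equalized to 1GHz: "
      f"{equalized['sp']}:{equalized['tmu']}:{equalized['rop']}")
# -> 1352:84:28, i.e. close to the HD7870's 1280:80:32 on paper, with GCN's
#    architectural improvements giving the 7870 its ~10% real-world edge.
```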

HoloDust said:

 

7870 - GCN 1280:80:32@1000/1200; 256bits

8770 - GCN2 768:48:16@unknown clocks; 128bits

I hope you see where I got confused: even if GCN2 is as big a jump as VLIW4 -> GCN was, an 8770 with those specs would need (in my guesstimate) to run at 1.5GHz to achieve 7870-level performance.

 

 

Ya, I don't think GCN 2.0 will be a large jump from GCN 1.0 in terms of performance per functional unit - maybe a 10-15% increase in IPC, with the rest coming from more functional units and/or higher GPU clocks. A lot of rumors on the PC side point to a marginal increase, maybe 20-30% for AMD's desktop parts. It's not going to be revolutionary like going from the HD6970 to the HD7970 GHz Edition was.
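Here's a quick sketch of where that ~1.5GHz figure comes from, assuming the 10-15% GCN 2.0 IPC bump I guessed at above (the 12.5% midpoint is just my assumption) and linear scaling otherwise:

```python
# Rough sketch of where HoloDust's ~1.5GHz figure for a 768-SP "8770" comes
# from, under the assumptions in this thread: a modest GCN2-over-GCN IPC gain
# (the ~12.5% used here is an assumed midpoint of the 10-15% guess above) and
# otherwise linear scaling of throughput with shader count and clock.
hd7870_sp, hd7870_clock_mhz = 1280, 1000
hd8770_sp = 768
assumed_ipc_gain = 0.125  # assumed GCN 2.0 per-unit uplift, not a confirmed spec

required_clock_mhz = hd7870_sp * hd7870_clock_mhz / (hd8770_sp * (1 + assumed_ipc_gain))
print(f"Clock needed for the 8770 to match a 7870: ~{required_clock_mhz:.0f} MHz")
# ~1480 MHz, i.e. roughly the 1.5GHz guesstimate -- far above any realistic
# console GPU clock, which is why those rumored specs looked too weak.
```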

Since the mobile HD8900M parts and desktop Sea Islands (GCN 2.0) parts appear to be slated to launch around Q2 2013, we are going to be waiting a while for their specs. For the next 2-3 months we'd just be guessing and probably spinning our wheels. With a lot of games launching in Q1 2013, it's probably more productive to spend the next 3 months gaming :) Since AMD will release both the HD8900M and HD8000 desktop parts before the PS4/Xbox 720 launch, by Fall 2013 we'll have all the specs of the HD8950M/8970M and the HD8750-8970 desktop parts. The performance of the desktop cards will be added to the Voodoo GPU ratings by BoFox and we'll be set for a PS4 vs. Xbox 720 GPU showdown. Since all 3 next-gen consoles are likely to have AMD GPUs, things should be pretty easy to compare this time.

What do you guys think about the hard drive strategy? Would you prefer a next-gen console with a stripped-down SKU at, say, $299-349 with no mechanical hard drive, with MS and Sony giving you the option to buy your own and leaving room inside the box to fit it in? Would you be OK with a USB 3.0 external hard drive like the Wii U's approach? Or would you rather all SKUs came with a mechanical hard drive, like the PS3 does today in North America?

I think I'd actually prefer a cheaper SKU with no mechanical hard drive at all, so I could get a cheap external on Newegg/Amazon and just slot it in. Or I'd just use my old 2.5-inch USB 2.0 640GB drive, which should be good enough for basic demo/game-save storage.



BlueFalcon said:

You could sit there and spend hours trying to find reviews and averaging the performance yourself but you'll end up in the same spot:

http://www.computerbase.de/artikel/grafikkarten/2012/test-nvidia-geforce-gtx-660/5/

 .....

What do you guys think about the hard drive strategy? Would you prefer a next-gen console with a stripped-down SKU at, say, $299-349 with no mechanical hard drive, with MS and Sony giving you the option to buy your own and leaving room inside the box to fit it in? Would you be OK with a USB 3.0 external hard drive like the Wii U's approach? Or would you rather all SKUs came with a mechanical hard drive, like the PS3 does today in North America?


Hey, thanks for that link, I really love how they've done that mouse-over-for-percentage thing on the charts.

As for the HDD, I'm hoping that all SKUs have one from the start - that means mandatory installs, which would help devs quite a lot, imo. But while I expect Sony to go with this, I suspect MS will follow the current practice.



Updated OP with latest GPU info and clear mentions of dual GPU approaches.



Final 720 devkit???

Hi,

I have information that may interest you (I already mailed the press and online press a few minutes ago, so I will c/p the post).

The new Xbox will not be called 720 or Oban. It will run on a Windows 8 structure especially designed for the Xbox.

It will likely be announced in January via Gas Powered Games (Supreme Commander, Dungeon Siege). They will also announce a launch title known internally as "Project W".

The Xbox will use Blu-ray discs and also have HDCP protection. It can also read DVDs, and 360 games will of course work.

It's already the 4th devkit, which is likely the one to be released to the public.

The Xbox graphics card is still (to this day) an AMD Radeon HD 7770 which is modified for the system.

The GCN Architecture is equipped with 10 Compute Units (640 Stream Processors), 40 Texture Units, 64 Z/Stencil ROP Units and 16 Color ROP Units

The 7700 uses 2GB of eDRAM and a 1125MHz memory clock.
DirectX 11.1 support (tessellation) and PRT. The technology here uses adaptive anti-aliasing, but also MLAA and EQAA with anisotropic texture filtering.

About the resolution and ports: the Xbox still uses HDMI, with a maximum resolution of 4096x2160 pixels, and 2560x1600 using stereoscopic 3D.

As for the processor: so far as I know it's running on a Power architecture-based processor that includes IBM's latest technology in an energy-saving silicon package. The IBM Power7 processor has six cores, and four threads per core, for a total capacity of 32 simultaneous threads. As for power consumption, it may be similar to the preceding P6, while quadrupling the number of cores, with each core having higher performance.

It's also interesting because the system uses 32-bit integer registers and ALSO 64-bit floating point registers (each on their own unit). It also includes a number of "private" registers for its own use, including the "program counter".

Another great thing is the virtual address system, which maps all addresses into a 64-bit area. So it can share memory in a "flat" 32-bit space, and ALL of the programs can have their own different 64-bit blocks!

As for the online service, it still uses the same Xbox Live marketplace for the moment, and it will likely have no changes IMO.
The console features 8 GB of GDDR5 RAM clocked at 667MHz for 1333MB/sec/pin. It is structured as a 240-pin DIMM kit, with memory modules based on 16 52Mx8-bit FBGA components per module. This RAM uses low-latency timings of 10-10-10 at 1.5V.

The row tRC min is specified at 49.6ns, with a tRFC min of 260ns! The odd thing is that the RAM has a burst length of 8, interleaved without any limit.

See ya

http://forum.teamxbox.com/showthread.php?t=681409&page=2



lol already confirmed fake.



Well, the 2GB eDRAM part was hard to believe.


