
Wii U vs PS4 vs Xbox One FULL SPECS (January 24, 2014)

BlueFalcon said:
zarx said:

Sea Islands itself is a slightly modified version of Southern Islands. Whatever custom chip AMD has cooked up for Sony based on VLIW5 will likely incorporate some of the same improvements they were working on for Sea Islands.

Southern Islands and Sea Islands are SIMD designs (Graphics Core Next), not VLIW5.

http://www.anandtech.com/show/4455/amds-graphics-core-next-preview-amd-architects-for-compute/4 

GCN (aka HD7000-8000 series) is the biggest architectural change for AMD graphics cards since 2007's HD2000 series.

Also, these rumors calling it "12 Shader Cores" or "18 Shader Cores" are flat out incorrect. It's 12 or 18 Compute Units. Each Compute Unit houses 64 stream/shader processors. So if you are talking about shaders, it would be 768 or 1152. A Compute Unit is not a "Shader Core"; it houses 64 shader cores inside of it.

1 Compute Unit has 64 ALUs (shader cores). To say a GPU has "12 Shader Cores" is like saying a car's horsepower is 8 cylinders... Interchanging Compute Units with Shader Cores is flat out incorrect.
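A quick sanity check of that arithmetic (a minimal Python sketch; the 64-stream-processors-per-CU figure is the GCN number cited above):

# GCN: each Compute Unit (CU) contains 64 stream/shader processors
STREAM_PROCESSORS_PER_CU = 64

for compute_units in (12, 18):
    print(compute_units, "CUs ->", compute_units * STREAM_PROCESSORS_PER_CU, "shader processors")

# 12 CUs -> 768 shader processors
# 18 CUs -> 1152 shader processors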


Ah right, that's what I get for reading Wikipedia pages lol. I even remember reading that article now...



@TheVoxelman on twitter

Check out my hype threads: Cyberpunk, and The Witcher 3!

zarx said:

Link? The largest GDDR5 chips I can find reference to are 2Gbit. http://www.elpida.com/en/products/gddr5.html, http://www.skhynix.com/products/graphics/graphics.jsp?info.ramCategory=&info.ramKind=26&info.eol=NOT&posMap=graphicsGDDR5, http://www.samsung.com/global/business/semiconductor/product/graphic-dram/catalogue

So unless you have a source that 4Gbit GDDR memory chips will be available in the next few months.

You are right. Even Hynix lists 2Gb (256MB) GDDR5 as the maximum density (http://www.hynix.com/products/graphics/graphics.jsp?info.ramCategory=&info.ramKind=26&info.eol=NOT&den=2Gb&posMap=graphicsGDDR5)

Even the Asus Ares 2 that costs $1,600 had to use 24 GDDR5 chips to reach 6144MB of GDDR5 (or 256MB per chip):

http://www.techpowerup.com/reviews/ASUS/ARES_II/3.html

"The problem here is one of memory density - at the moment 256MB of [GDDR5] RAM is the highest amount of memory that can be packed onto a single chip. Multiple chips can be stacked together and accessed in parallel, but then the memory bus that connects them to the rest of the system becomes a lot more complex - and more expensive - to make. At 2GB we're already looking at eight memory modules crammed onto the mainboard, and 4GB would see that doubled to what could be an unmanageable 16, bringing with it the necessity for an expensive 256-bit or even a 512-bit memory bus."

Orbis: The Next-Gen PlayStation Takes Shape

http://www.gamesindustry.biz/articles/digitalfoundry-assessing-playstation-orbis-rumours

Sounds like 4GB of GDDR5 for PS4 might be optimistic in 2013 now that I think about it. But it's not impossible, because the 256-bit native bus on the Pitcairn HD7870 (HD7970M) is perfect for this: you could fit 16 chips around the GPU, 8 on the front of the board and 8 on the back. It would be expensive though.
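A rough sketch of that chip-count math (assuming the 2Gb/256MB density and the 32-bit per-chip interface discussed above; the front/back layout is just an illustration):

# Assumes 2Gb (256MB) GDDR5 chips with a 32-bit interface each,
# on Pitcairn's native 256-bit bus.
chip_capacity_mb = 256      # 2Gbit per chip
bus_width_bits = 256
bits_per_chip = 32

chips_needed = (4 * 1024) // chip_capacity_mb       # 4GB target -> 16 chips
chips_per_side = bus_width_bits // bits_per_chip    # 8 chips fill the 256-bit bus
print(chips_needed, chips_per_side)                 # 16 8
# 16 chips on a 256-bit bus means two chips share each 32-bit channel
# (8 on the front of the board, 8 on the back), i.e. clamshell mode.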

If PS4 uses off-the-shelf PC components, I think it's more reasonable it uses 4GB of DDR3 and 2GB of GDDR5 for the GPU. That type of setup more closely resembles how a PC functions.



@ superchunk
where is the info on those 4 move engines the next box uses to move data around? I would like to know more about that.

question @ALL
if the GPUs on PS4 and the next Xbox are from AMD... what makes people believe that the Xbox will have eDRAM or eSRAM and the PS4 will not? If the Wii U uses it, it's obvious AMD thinks it's a good idea for the GPU, and I believe AMD will use it in Sony's machine too. Right?



Proudest Platinums - BF: Bad Company, Killzone 2 , Battlefield 3 and GTA4

sergiodaly said:
question @ALL
if the GPUs on PS4 and the next Xbox are from AMD... what makes people believe that the Xbox will have eDRAM or eSRAM and the PS4 will not? If the Wii U uses it, it's obvious AMD thinks it's a good idea for the GPU, and I believe AMD will use it in Sony's machine too. Right?

In the Wii U, "there are four 4Gb (512MB) Hynix DDR3-1600 devices surrounding the Wii U's MCM (Multi Chip Module). Memory is shared between the CPU and GPU, and if I'm decoding the DRAM part numbers correctly it looks like these are 16-bit devices giving the Wii U a total of 12.8GB/s of peak memory bandwidth." (http://www.anandtech.com/show/6465/nintendo-wii-u-teardown)

Because MS cut the memory bandwidth of the R500 GPU in the Xbox 360 in half, and the Wii U GPU's memory bandwidth is likewise crippled by shared DDR3-1600 instead of dedicated GDDR5, both of those solutions try to mask the memory bandwidth penalty by including eDRAM. If you want to retain the full power of the GPU, you go with GDDR5 for the GPU and drop the eDRAM. If you want to save costs, you go with cheaper DDR memory and eDRAM.

If PS4 uses dedicated GDDR5 for the GPU, that's actually the most optimal approach (which is why no AMD/NV GPUs have eDRAM on the PCB). Despite sounding fancy, the use of eDRAM/eSRAM is really a cost-savings solution to minimize the performance penalty of forgoing a wider memory bus + GDDR5. The lack of eDRAM on PS4 combined with the inclusion of GDDR5 would actually be a good thing, not a disadvantage; it would imply the graphics sub-system is not compromised. If Sony cripples the 256-bit bus of the HD7970M in half, then sure, eDRAM is possible.
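For reference, a minimal bandwidth sketch comparing the Wii U's shared DDR3-1600 figure quoted above against a dedicated 256-bit GDDR5 setup (the 4800MT/s effective rate is an assumption based on the stock HD7870):

def bandwidth_gb_s(bus_width_bits, effective_mt_s):
    # bytes/s = (bus width in bytes) * (transfers per second)
    return bus_width_bits / 8 * effective_mt_s / 1000

# Wii U: 4 x 16-bit DDR3-1600 devices = 64-bit bus, shared by CPU and GPU
print(bandwidth_gb_s(64, 1600))     # 12.8 GB/s

# Dedicated 256-bit GDDR5 at 4800 MT/s effective (stock HD7870)
print(bandwidth_gb_s(256, 4800))    # 153.6 GB/s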



Cost-saving? eSRAM is expensive.



sergiodaly said:
@ superchunk
where is the info on those 4 move engines the next box uses to move data around? I would like to know more about that.

question @ALL
if the GPUs on PS4 and the next Xbox are from AMD... what makes people believe that the Xbox will have eDRAM or eSRAM and the PS4 will not? If the Wii U uses it, it's obvious AMD thinks it's a good idea for the GPU, and I believe AMD will use it in Sony's machine too. Right?

I picked up the move engines info from GAF. Basically all I could get out of it is that they have to do with assisting data movement between the various components. This is supposedly what will make up the transfer rate difference between the DDR3 RAM and the eSRAM. IDK, it's really an unknown at this point and could very well be for something completely different. But looking at the diagram VGLeaks supplied, it does make sense.

Nintendo and MS both must have decided that it was more important to save money on the main RAM and supplement its lack of speed with embedded RAM. Sony seems to have wanted to fix its developer complications and stick to something very similar to a PC, so it went for the costlier GDDR5. Since that is already very fast memory, the embedded RAM isn't needed.



@BlueFalcon

It has 32 ROPs and 72 TMUs... a true GCN architecture... there are no fake specs.



zarx said:

So unless you have a source that 4Gbit GDDR memory chips will be available in the next few months.

Are you kidding?

2008 news... "Presently Qimonda has 512Mb (16Mx32) GDDR5 chips at 3.60GHz, 4.0GHz and 4.50GHz clock-speeds in PG-TFBGA-170 packages in production."

http://www.xbitlabs.com/news/graphics/display/20080510113121_GDDR5_in_Production_New_Round_of_Graphics_Cards_War_Imminent.html

The PS4 uses 8x512MB GDDR5 (or 4Gbit like you said) to get ~170GB/s.... The GTX 680 uses 16 chips because they need a 512-bit bus width for near 380GB/s of bandwidth... it's easy to do the math.

8 x 32bits = 256bits bus width = 190GB/s at high speeds
16 x 32bits = 512bits bus width = 380GB/s at high speeds

nVidia chose 16 x 256MB chips because they need to reach 380GB/s of bandwidth... not because there is no 512MB chip.



ethomaz said:

zarx said:

So unless you have a source that 4Gbit GDDR memory chips will be available in the next few months.

Are you kidding?

2008 news... "Presently Qimonda has 512Mb (16Mx32) GDDR5 chips at 3.60GHz, 4.0GHz and 4.50GHz clock-speeds in PG-TFBGA-170 packages in production."

http://www.xbitlabs.com/news/graphics/display/20080510113121_GDDR5_in_Production_New_Round_of_Graphics_Cards_War_Imminent.html

The PS4 uses 8x512MB GDDR5 (or 4Gbit like you said) to get ~170GB/s.... The GTX 680 uses 16 chips because they need a 512-bit bus width for near 380GB/s of bandwidth... it's easy to do the math.

8 x 32bits = 256bits bus width = 190GB/s at high speeds
16 x 32bits = 512bits bus width = 380GB/s at high speeds

nVidia chose 16 x 256MB chips because they need to reach 380GB/s of bandwidth... not because there is no 512MB chip.


That's 512Mb; note the little "b", which stands for bits, not bytes. 512Mb would be 64MB per chip. The GTX 680 has a 256-bit bus; manufacturers use 2 chips per controller for the 4GB models, which causes bottlenecks when using more than 2GB of VRAM.

There are no RAM manufacturers making GDDR5 chips larger than 2Gb, which is 256MB per chip. I have checked them all and they all top out at 2Gbit.




D-Joe said:
Cost-saving? eSRAM is expensive.

It's only expensive relative to its size, but not in the context of the entire "GPU kit". The Xbox 360 GPU "kit" was $141 in 2005, including eDRAM:
http://www.xbitlabs.com/news/multimedia/display/20051123214405.html

In 2010, it cost nearly $50 for 2GB of GDDR5 at cost from AMD (https://www.google.com/search?q=mercury+research+graphics+card+cost&hl=en&tbo=d&source=lnms&tbm=isch&sa=X&ei=0XQHUf6GLLSvygHX5YH4Bg&ved=0CAcQ_AUoAA&biw=1920&bih=989#imgrc=g3aafU_5vjHpwM%3A%3BIFzhtEtu0xhYnM%3Bhttp%253A%252F%252Fi.imgur.com%252F4u06C.jpg%3Bhttp%253A%252F%252Fforums.anandtech.com%252Fshowthread.php%253Fp%253D34458518%3B881%3B339)

^ That's nearly as much for 2GB of GDDR5 as the entire PS3's GPU and all of PS3's memory. By that point the entire Xbox 360 GPU was also probably not worth more than $45-50 (http://www.xbitlabs.com/news/multimedia/display/20091215232901_Sony_Still_Sells_PlayStation_3_at_a_Loss_Analysis.html)

If using eDRAM+DDR3 were the superior approach, how come no high-end GPU has used such a setup? Also, we already have real-world evidence that eDRAM does not make up for a memory bandwidth bottleneck: the Xbox 360's GPU failed to deliver on the "free" 4x anti-aliasing promised by eDRAM.

ethomaz said:

@BlueFalcon

It has 32 ROPs and 72 TMUs... a true GCN architecture... there are no fake specs.

The specs say 8 ROPs and 18 TMUs. That makes no sense unless they forgot to multiply those by 4. Also, the idea of splitting the GPU's compute units into a 14+4 setup makes no sense whatsoever because all compute units inside GCN are equal.

 

ethomaz said:
2008 news... "Presently Qimonda has 512Mb (16Mx32) GDDR5 chips at 3.60GHz, 4.0GHz and 4.50GHz clock-speeds in PG-TFBGA-170 packages in production."

http://www.xbitlabs.com/news/graphics/display/20080510113121_GDDR5_in_Production_New_Round_of_Graphics_Cards_War_Imminent.html

 

You are confusing MB and Mb. In 2008 they had 512Mb chips, which at 8 bits per byte works out to 64MB per chip, not 512MB. The largest-density GDDR5 chip right now is 256MB (2Gb). You need 16 of those to get 4GB of GDDR5.
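To make the bit/byte distinction concrete, here is the arithmetic from the paragraph above as a small sketch:

def mbit_to_mbyte(megabits):
    # 8 bits per byte
    return megabits / 8

print(mbit_to_mbyte(512))    # 64.0  -> a 512Mb chip is 64MB, not 512MB
print(mbit_to_mbyte(2048))   # 256.0 -> today's largest 2Gb chips are 256MB

chips_for_4gb = 4 * 1024 / 256
print(chips_for_4gb)         # 16.0 chips of 256MB each to reach 4GB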

ethomaz said:

The PS4 uses 8x512MB GDDR5 (or 4Gbit like you said) to get ~170GB/s.... The GTX 680 uses 16 chips because they need a 512-bit bus width for near 380GB/s of bandwidth... it's easy to do the math.

8 x 32bits = 256bits bus width = 190GB/s at high speeds
16 x 32bits = 512bits bus width = 380GB/s at high speeds

nVidia chose 16 x 256MB chips because they need to reach 380GB/s of bandwidth... not because there is no 512MB chip.

 

The GTX 680 has a 256-bit bus with 192GB/sec of bandwidth, not a 512-bit bus with 380GB/sec.

Also, you are way oversimplifying things. Adding more chips doesn't automatically increase bandwidth. That's not how GPUs work:

GTX 680: 256-bit, 2GB of GDDR5

GTX 660 Ti: 192-bit, 2GB of GDDR5

http://www.anandtech.com/show/6159/the-geforce-gtx-660-ti-review

HD7870 has 4 memory controllers, 64-bit each, that support dual-channel. 

4x 64-bit channels = 256-bit bus

4x channels in dual-channel = 4x (2x 256MB) = 8x 256MB chips = 2048MB in total. Because the GTX 680 also has 4 memory controllers with 64-bit width, it also has 2048MB of GDDR5 by default. To get to 4GB, 8 more 256MB chips are added to the back, but the bus width doesn't grow from 256-bit to 512-bit. If the HD7870 (HD7970M) ships with 4GB of GDDR5, it doesn't mean the bus width will grow from 256-bit to 512-bit. The maximum bus width available is dictated by the GPU's internal memory controllers, not by how many memory chips it ships with. The design of the Pitcairn GPU (HD7870) dictates that it has a 4x 64-bit = 256-bit bus. The only other way to increase memory bandwidth is to replace the 4800MHz GDDR5 chips with, say, 6000MHz ones, bumping the memory bandwidth from ~154GB/sec to 192GB/sec. This doesn't matter much, though, as the HD7870 is not memory bandwidth limited by design.
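A minimal sketch of that ceiling (the standard bus-width times effective-data-rate arithmetic, using the clocks mentioned above):

def gddr5_bandwidth_gb_s(bus_width_bits, effective_ghz):
    # GB/s = (bus width in bytes) * (effective transfer rate in GT/s)
    return bus_width_bits / 8 * effective_ghz

# Pitcairn (HD7870): 4 controllers x 64-bit = 256-bit, fixed by the GPU itself
print(gddr5_bandwidth_gb_s(256, 4.8))   # 153.6 GB/s (stock 4.8GHz GDDR5)
print(gddr5_bandwidth_gb_s(256, 5.5))   # 176.0 GB/s (5.5GHz GDDR5, same bus)
print(gddr5_bandwidth_gb_s(256, 6.0))   # 192.0 GB/s (6GHz GDDR5, same bus)
# Adding more chips only adds capacity; bandwidth only moves with
# bus width (fixed in silicon) or memory clock.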

How do we know? Because when the HD7870 gets 17% more shading power (1536 SP vs. 1280) and the memory speed increases from 4.8GHz to 6GHz, performance only increases 9% (100% vs. 92%):

http://www.techpowerup.com/reviews/VTX3D/Radeon_HD_7870_XT_Black/28.html

Also, the HD7950 has 1792 shaders and a 384-bit bus, or roughly 50% more memory bandwidth than the HD7870, but performance is just 11% faster (same graph: 102% for the 7950 vs. 92% for the 7870).

Getting above the HD7870's standard memory bandwidth of 154GB/sec to, say, 176GB/sec (bumping the GDDR5 from 4.8GHz to 5.5GHz) is honestly a waste of power, since the GPU is bottlenecked more heavily elsewhere. What it needs are more CUs (and thus more shaders) and TMUs. If developers use the compute functionality of GCN on PS4/Xbox 720, then the number of Compute Units and the GPU clock will matter more than getting the GPU above 154GB/sec of memory bandwidth.

You can see this in a compute-heavy game like Dirt Showdown:

HD7950 Black = 900MHz, 28 CUs, 264GB/sec bandwidth

HD7970 = 925MHz, 32 CUs, 264GB/sec bandwidth

HD7970 GHz = 1050MHz, 32 CUs, 288GB/sec bandwidth
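Roughly how CU count and clock translate into peak compute throughput (the standard GCN peak-FLOPS arithmetic: CUs x 64 lanes x 2 FLOPs per clock; the 800MHz clock for the rumored console parts is an assumption, not a confirmed figure):

def gcn_peak_gflops(compute_units, clock_mhz):
    # 64 lanes per CU, 2 FLOPs per lane per clock (fused multiply-add)
    return compute_units * 64 * 2 * clock_mhz / 1000

print(gcn_peak_gflops(28, 900))    # 3225.6 GFLOPS (HD7950 Black)
print(gcn_peak_gflops(32, 925))    # 3788.8 GFLOPS (HD7970)
print(gcn_peak_gflops(32, 1050))   # 4300.8 GFLOPS (HD7970 GHz)

# Rumored console parts at an assumed 800MHz:
print(gcn_peak_gflops(12, 800))    # 1228.8 GFLOPS (12-CU Xbox 720 rumor)
print(gcn_peak_gflops(18, 800))    # 1843.2 GFLOPS (18-CU PS4 rumor)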

That's why I said earlier that the difference between a 12 CU AMD GPU in the Xbox 720 and an 18 CU one in the PS4 could be substantial: that's 50% more CUs for the PS4, even before we talk about PS4's GDDR5 vs. Xbox 720's DDR3 setup, which only exacerbates the performance disadvantage for the Xbox 720's GPU (of course these are still just rumors). If PS4 runs on Linux, they might use OpenGL or OpenCL for games. If developers use OpenCL compute for PS4 games, the number of compute units its GPU has will matter a lot more than if PS4 were running a Windows OS with a traditional DX11 API. That would kind of explain why Sony would want a more powerful GPU, given their OS choice is unlikely to be a Windows one.

http://en.wikipedia.org/wiki/OpenCL

Also, AMD's GPUs are faster in both OpenGL and OpenCL, which gives yet another reason why an NV GPU was not chosen for the PS4 if Sony aims to focus on those APIs.