
Wii U vs PS4 vs Xbox One FULL SPECS (January 24, 2014)

superchunk said:
Updated OP and 2nd post with better details and clarifying some items.

So much is going wrong with what I was hoping for. It's becoming painfully clear that while the Wii U will be able to use the same types of technology (GPU focus, shaders, etc.), its weak CPU and small/slow memory may cripple its ability to get quality 3rd party support.

I still think that unlike the Wii, it will simply be a matter of scaling the game visuals down while keeping the rest of the game the same. But 2013/2014 will clearly prove it one way or another.

More than likely Rol is right and it won't matter as 3rd parties just don't want to see Nintendo succeed and will ignore it regardless of anything else.

This is what I've been fearing this whole time... And whether Rol is right or wrong, this will not help Nintendo get any meaningful 3rd party support.

Hopefully it's all a matter of scalability, but I doubt it... Seems like we'll have another Wii situation ahead of us, and this time Wii U won't be selling too well...



I'm on Twitter @DanneSandin!

Furthermore, I think VGChartz should add a "Like"-button.

ethomaz said:

Orbis final specs... in line with what I expected and powerful enough.

GPU:


GPU is based on AMD’s “R10XX” (Southern Islands) architecture
DirectX 11.1+ feature set
Liverpool is an enhanced version of the architecture
18 Compute Units (CUs)
Hardware balanced at 14 CUs
Shared 512 KB of read/write L2 cache
800 MHz
1.843 TFLOPS, 922 GigaOps/s
Dual shader engines
18 texture units
8 Render backends

100% fake. 

By definition of the building blocks of the Graphics Core Next architecture, each CU is coupled with 4 TMUs (a 1:4 ratio). For 18 CUs, you must have 72 texture units. The listed TMU and ROP counts are also too weak for even the weakest GPU in the R1000 series; even the HD 7750 has 16 ROPs and 32 TMUs. Also, the whole point behind the Graphics Core Next architecture is that the Compute Units were designed to perform compute work and graphics equally well. Why in the world would a GPU have a 14+4 setup with 4 CUs allocated for compute? That's illogical, because by definition of how GCN works, each CU was built for compute work and for graphics from the ground up. In other words, any CU can do GPGPU and you do not need to "allocate" separate CUs for it.

2 Asynchronous Compute Command Engines feed the Compute Units inside GCN to perform Compute work. Every CU is exactly the same.

...You don't need to do some weird 14+4 split of the 18 CUs for compute. Why is that?

"AMD’s new Asynchronous Compute Engines serve as the command processors for compute operations on GCN. The principal purpose of ACEs will be to accept work and to dispatch it off to the CUs for processing. As GCN is designed to concurrently work on several tasks, there can be multiple ACEs on a GPU, with the ACEs deciding on resource allocation, context switching, and task priority. One effect of having the ACEs is that GCN has a limited ability to execute tasks out of order. As we mentioned previously GCN is an in-order architecture, and the instruction stream on a wavefront cannot be reodered. However the ACEs can prioritize and reprioritize tasks, allowing tasks to be completed in a different order than they’re received. This allows GCN to free up the resources those tasks were using as early as possible rather than having the task consuming resources for an extended period of time in a nearly-finished state."

http://www.anandtech.com/show/4455/amds-graphics-core-next-preview-amd-architects-for-compute/5 

Sounds like the person is making up specs as they go... or doesn't actually understand what they mean. The whole point of moving from VLIW-4/5 to GCN is that each CU was built from the ground up to perform compute work. This idea of splitting the CUs into a 14+4 group is ludicrous because the ACEs handle that scheduling automatically.

Also, the rumor later states: "4 additional CUs (410 Gflops) “extra” ALU as resource for compute. Minor boost if used for rendering."

^ This statement is nonsensical. By definition of a CU, the ALUs can be used for graphics rendering or compute. If the source says the 4 extra CUs provide only a minor boost for rendering, the source doesn't even understand what a CU is.
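To illustrate, here's a quick back-of-the-envelope sketch (the per-CU constants are GCN's standard layout as I understand it, not anything from the leak). The rumor's FLOPS line is at least internally consistent with 18 CUs at 800 MHz, but its texture unit count contradicts GCN's 1:4 CU-to-TMU coupling:

```python
# Sanity check of the rumored "Liverpool" figures against GCN's building blocks.
# Per-CU constants are standard GCN values (assumption); the rest are the rumor's numbers.
CUS = 18                 # rumored Compute Unit count
CLOCK_GHZ = 0.8          # rumored 800 MHz core clock
ALUS_PER_CU = 64         # 4 SIMDs x 16 lanes per CU
TMUS_PER_CU = 4          # GCN couples 4 texture units to each CU

shaders = CUS * ALUS_PER_CU               # 1152 stream processors
tmus = CUS * TMUS_PER_CU                  # 72 texture units, not the 18 listed
tflops = shaders * 2 * CLOCK_GHZ / 1000   # FMA = 2 FLOPs per ALU per clock

print(shaders, tmus, round(tflops, 3))    # 1152 72 1.843
```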

zarx said:

Sea Islands itself is a slightly modified version of Southern Islands. Whatever custom chip AMD has cooked up for Sony based on VLIW5 will likely incorporate some of the same improvements they were working on for Sea Islands.

Southern Islands and Sea Islands are SIMD-based Graphics Core Next designs, not VLIW5.

http://www.anandtech.com/show/4455/amds-graphics-core-next-preview-amd-architects-for-compute/4 

GCN (aka HD7000-8000 series) is the biggest architectural change for AMD graphics cards since 2007's HD2000 series.

Also, these rumors calling it "12 Shader Cores" or "18 Shader Cores" are flat out incorrect. It's 12 or 18 Compute Units. Each Compute Unit houses 64 stream/shader processors, so if you are talking about shaders, it would be 768 or 1152. A Compute Unit is not a "Shader Core"; it houses 64 shader cores inside of it.

1 Compute Unit has 64 ALUs (shader cores). To say a GPU has "12 Shader Cores" is like saying a car's horsepower is "8 cylinders". Interchanging Compute Units with shader cores is flat out incorrect.
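To make the CU-to-shader mapping concrete, a tiny sketch (again assuming GCN's 64 ALUs per CU):

```python
# Stream/shader processors implied by each rumored Compute Unit count.
for cus in (12, 18):
    print(f"{cus} CUs -> {cus * 64} shader cores")
# 12 CUs -> 768 shader cores
# 18 CUs -> 1152 shader cores
```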



So basically it's 7th gen all over again when comparing specs?

Wii low, X360 mid, PS3 high



Gilgamesh said:
So basically it's 7th gen all over again when comparing specs?

Wii low, X360 mid, PS3 high

Not exactly, because PS3's graphics card was inferior to Xbox 360's, and the CPU inside the PS3 was worse whenever the 6 SPE engines weren't properly optimized. Based on specs, the GPU in PS4 is superior and the CPU will be at least on par. That would make PS4 more powerful, unlike Xbox 360 vs. PS3. PS3 was not the more powerful console, despite fanboys claiming it to be.

The PS3 is really just a 1-core CPU with 6 companion engines. That single core inside the PS3 is similar to each of the 3 cores in the Xbox 360, so without extensive optimization the CPU in the PS3 was worse, not better. To make up for the lack of GPU horsepower, the SPEs were then used to accelerate certain graphical effects in games. The memory sub-system in the PS3 was also inferior to the Xbox 360's, with the GPU only having access to 256MB.

Again, it looks like PS4 has a clear edge if it ends up having dedicated GDDR5 vs. the shared DDR3 of the Xbox 720. Based on specs, PS4 will easily be more powerful, whereas PS3 and 360 were actually pretty evenly matched in aggregate performance. Uncharted 3 doesn't look better than Halo 4 either, which lends evidence that PS3 and 360 were very close when the PS3 was fully optimized. Without optimizations, the Xbox 360 was the more powerful console (hence why most console ports look better and run faster on the 360).



BlueFalcon said:

Also, these rumors calling it "12 Shader Cores" or "18 Shader Cores" are flat out incorrect. It's 12 or 18 Compute Units. Each Compute Unit houses 64 stream/shader processors, so if you are talking about shaders, it would be 768 or 1152. A Compute Unit is not a "Shader Core"; it houses 64 shader cores inside of it.

1 Compute Unit has 64 ALUs (shader cores). To say a GPU has "12 Shader Cores" is like saying a car's horsepower is "8 cylinders". Interchanging Compute Units with shader cores is flat out incorrect.

 

Good info, thank you. I'll update the OP to remove all that talk, as it's clearly being misrepresented.



BlueFalcon said:
Gilgamesh said:
So basically it's 7th gen all over again when comparing specs?

Wii low, X360 mid, PS3 high

Not exactly, because PS3's graphics card was inferior to Xbox 360's, and the CPU inside the PS3 was worse whenever the 6 SPE engines weren't properly optimized. Based on specs, the GPU in PS4 is superior and the CPU will be at least on par. That would make PS4 more powerful, unlike Xbox 360 vs. PS3. PS3 was not the more powerful console, despite fanboys claiming it to be.

The PS3 is really just a 1-core CPU with 6 companion engines. That single core inside the PS3 is similar to each of the 3 cores in the Xbox 360, so without extensive optimization the CPU in the PS3 was worse, not better. To make up for the lack of GPU horsepower, the SPEs were then used to accelerate certain graphical effects in games. The memory sub-system in the PS3 was also inferior to the Xbox 360's, with the GPU only having access to 256MB.

Again, it looks like PS4 has a clear edge if it ends up having dedicated GDDR5 vs. the shared DDR3 of the Xbox 720. Based on specs, PS4 will easily be more powerful, whereas PS3 and 360 were actually pretty evenly matched in aggregate performance. Uncharted 3 doesn't look better than Halo 4 either, which lends evidence that PS3 and 360 were very close when the PS3 was fully optimized. Without optimizations, the Xbox 360 was the more powerful console (hence why most console ports look better and run faster on the 360).

Will the PS4 be much easier to work with compared to how the PS3 was?



Gilgamesh said:
So basically it's 7th gen all over again when comparing specs?

Wii low, X360 mid, PS3 high


Take a good look at the games out for X360 and PS3. Do you seriously think that on a technical level they differ much?

The X360 and PS3 are 90% comparable; each slightly exceeds the other in some areas, but that's about it.



Gilgamesh said:

Will the PS4 be much easier to work with compared to how the PS3 was?

PS3 was unique in its design.

PS4 looks to be a lot like a standard PC in almost every way. There should be no reason PS4 isn't the easiest to develop for, unless MS using actual DirectX makes a huge difference for them. MS's use of DDR3 shared memory creates the abnormal situation in this case (same with the Wii U, plus its unique SDK).



Gilgamesh said:
Will the PS4 be much easier to work with compared to how the PS3 was?

Any x86 AMD CPU will be easier to program for than the Cell. Any modern unified-shader GPU will be superior in every way to the RSX, with its fixed split between vertex and pixel/fragment pipelines.

The white areas are in-game moments where the Cell's SPEs (labeled SPUs in the diagram) are sitting unused in Killzone 2. It doesn't look pretty, since the PS3's CPU is unused nearly 40-50% of the time due to programmers being unable to schedule workload for all 6 SPEs effectively.

So yes, the closer PS4's CPU and GPU are to off-the-shelf PC components, the easier it will be for developers to extract maximum performance, and to do so a lot sooner. For instance, you could port Crysis 3 from the PC directly to PS4 with minimal optimizations. A custom-designed CPU could be better in theory, but so far Sony has tried twice and struck out with the PS2 and PS3 in this area. Developers vocally complained about how difficult it was to develop for the Cell. I think it's a good decision that Sony is moving away from very custom, expensive CPU designs. It saves them money not to spend $3-4 billion on R&D for a CPU that may or may not outperform AMD's off-the-shelf CPUs. Why take such a large risk when they failed to make a fast custom CPU for 2 generations in a row?

Also, if Sony's CPU is x86-based, it would make porting PC games to consoles much easier. If the Xbox 720 also has an x86 CPU, then cross-platform titles would work well across the PC and both next-gen consoles, but it would hurt the Wii U the most. The Wii U would not only be left with the slowest CPU but the only non-x86 CPU, requiring additional work to port games to it. If the Xbox 720 and PS4 have x86 CPUs, that would be one of the biggest blows to the Wii U regarding its future 3rd party support. (Just my opinion.)

What has me scratching my head is why go for an 8-core 1.6 GHz Jaguar AMD CPU instead of a 65W A10-6700 Richland quad-core APU clocked at 4.3 GHz?
http://techreport.com/news/24277/leaked-richland-specs-reveal-higher-clock-speeds

Since each Richland core is faster per clock than a Jaguar core, 4 Richland cores @ 4.3 GHz would be a much better setup than an 8-core Jaguar CPU clocked at 1.6 GHz. The GPU inside Richland is also faster than the one paired with Jaguar. I really don't understand why MS and Sony would not go for a quad-core Richland, unless they think a 65W TDP is too high for a CPU, or it's too expensive for them.
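As a very rough way to frame that comparison, here's a sketch of nominal aggregate throughput (cores x clock). The IPC factor is a deliberate placeholder, since I don't have hard per-clock numbers; the point is that raw core-GHz already favors the Richland setup before any per-clock advantage is counted:

```python
# Nominal aggregate throughput: cores x clock x assumed relative IPC.
# relative_ipc=1.0 is a placeholder, so this compares raw core-GHz only.
def aggregate_ghz(cores, clock_ghz, relative_ipc=1.0):
    return cores * clock_ghz * relative_ipc

print("8-core Jaguar   @ 1.6 GHz:", aggregate_ghz(8, 1.6))   # 12.8
print("4-core Richland @ 4.3 GHz:", aggregate_ghz(4, 4.3))   # 17.2
```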



ethomaz said:

zarx said:

Sea Islands itself is a slightly modified version of Southern Islands. Whatever custom chip AMD has cooked up for Sony based on VLIW5 will likely incorporate some of the same improvements they were working on for Sea Islands. I would imagine anyway. If they go for unified GDDR5 there is no way they could go past 4GB; I mean, 4GB of GDDR5 is already 16 chips, and unless they are really going for 3D stacking that is already pushing the limit. GDDR5 is half the density of DDR3 and costs more per chip, if I'm not mistaken.

16 chips? 256MB is the smallest GDDR5 chip... there are 512MB and 1GB chips too (and I guess 2GB as well)... so 16 x 512MB is 8GB.

Anyway, 16 x 32 bits = a 512-bit bus width... that's almost twice the rumored bandwidth for the GDDR5... I think they are using 8 x 512MB for 4GB = a 256-bit bus width... 160-190 GB/s depending on the final memory clock.

16 chips (~380 GB/s) is out of the question, I think... 8 chips is what Sony will use.


Link? The largest GDDR5 chips I can find reference to are 2Gbit. http://www.elpida.com/en/products/gddr5.html, http://www.skhynix.com/products/graphics/graphics.jsp?info.ramCategory=&info.ramKind=26&info.eol=NOT&posMap=graphicsGDDR5, http://www.samsung.com/global/business/semiconductor/product/graphic-dram/catalogue

Are you sure you are not getting that mixed up with DDR3 which does go to 4Gbit?

The latest device with 4GB of GDDR5 would be the GTX 680, which uses 16 memory chips:

"Galaxy looks to be the first with a custom designed PCB in order to fit the massive sixteen memory chips required to get a full 4GB."

http://www.legitreviews.com/news/12716/

So, unless you have a source that 4Gbit GDDR5 memory chips will be available in the next few months...
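For reference, a small sketch of the arithmetic behind both posts above: how many 32-bit GDDR5 chips a given capacity needs at a given per-chip density, and what bandwidth the resulting bus width delivers (the memory clocks are illustrative values I picked to bracket the 160-190 GB/s figure, not leaked numbers):

```python
# GDDR5 back-of-the-envelope: each chip has a 32-bit interface and transfers
# 4 bits per pin per memory-clock cycle (quad data rate).
def chips_needed(capacity_gb, density_gbit_per_chip):
    return int(capacity_gb * 8 / density_gbit_per_chip)

def bandwidth_gbs(chips, mem_clock_mhz):
    bus_bits = chips * 32
    effective_gtps = mem_clock_mhz * 4 / 1000    # effective GT/s per pin
    return bus_bits / 8 * effective_gtps         # GB/s

print(chips_needed(4, 2))        # 16 chips of 2Gbit (256MB) needed for 4GB
print(bandwidth_gbs(8, 1250))    # 256-bit bus -> 160.0 GB/s
print(bandwidth_gbs(8, 1500))    # 256-bit bus -> 192.0 GB/s
print(bandwidth_gbs(16, 1500))   # 512-bit bus -> 384.0 GB/s (~380)
```

That is the crux of the disagreement: ethomaz's 8 x 512MB layout assumes 4Gbit parts, which is exactly what the links above fail to turn up.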



@TheVoxelman on twitter

Check out my hype threads: Cyberpunk, and The Witcher 3!