
Wii U vs PS4 vs Xbox One FULL SPECS (January 24, 2014)

Pemalite said:
Multimedialover said:
Pemalite said:

What the hell is a permclock cycle? I've never heard that term used.

Regardless, to put performance into perspective, Anandtech did a rundown of Kabini/Jaguar and it works out that a quad-core Kabini chip is half the speed of a dual-core Core i3/i5 at the same clock.
So, the 8-core 1.6GHz Jaguar CPU found in the consoles should be roughly similar in performance to a dual-core 1.6GHz Core i3/i5. (Actually it should be less; scaling across multiple cores isn't linear, so that's a theoretical best case.)

Poor performance? You bet, and the PS4 and Xbox One are identical in this regard with their respective CPUs.

As for the DDR3, it will outperform GDDR5 in general purpose tasks because of latency, but it will lose out in bandwidth-hungry tasks like graphics. - However, this is where the SRAM will provide assistance.

I'm *very* disappointed in the Xbox One's GPU, even more so than the PS4's, as it's only a Radeon 7770/7790 class in terms of compute. :(

LOL. Damn tablet typing.

Meant to say the PS4 does 4 operations per clock cycle, and Microsoft claim their CPU is doing 6 operations per clock cycle. Many tech heads on forums like Beyond3D are stating that Microsoft have actually done wonders with the CPU, as that's not supposed to be possible on the Jaguar/Kabini chipset. They go on to say that this makes it significantly blurred as to which of the PS4 and Xbox One is more powerful overall. They state that the PS4 should have the edge in visuals, but not by much, thanks to GDDR5 which is great for graphics. But the One should have the edge in processes like multitasking, app switching, AI, basically any process the CPU deals with, as the extra operations per cycle and the DDR3 will help it there.


Well, from what I have read, both the PS4 and Xbox One do 2 instructions per clock per hardware thread, so the best-case scenario is 16 per clock for both consoles, without the use of extensions or loopbacks.
The reason is that the front-end of Jaguar is unchanged from the Bobcat architecture; it was a trade-off of power/complexity versus performance in the end.

To put that in perspective, the first-generation (Nehalem) Core i7 could do 4+1 instructions per hardware thread, but that's not going to make it twice as fast; a lot more comes into play in regards to CPU design and thus performance.
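For anyone who wants the arithmetic spelled out, here's a minimal sketch of the paper ceiling those figures imply, assuming the 8-core, 2-instructions-per-clock, 1.6GHz numbers above; real code retires far fewer instructions than this:

# Theoretical peak instruction throughput - a paper ceiling, nothing more
jaguar_cores = 8
jaguar_instr_per_clock = 2        # per core, per the front-end limit described above
jaguar_clock_hz = 1.6e9

peak_per_clock = jaguar_cores * jaguar_instr_per_clock     # 16 instructions per clock, whole chip
peak_per_second = peak_per_clock * jaguar_clock_hz         # ~25.6 billion instructions per second, best case

print(peak_per_clock, peak_per_second / 1e9)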

Also, why isn't anyone peeved about the lack of GPU grunt this coming generation? :P

http://arstechnica.com/gaming/2013/05/microsoft-talks-about-xbox-ones-internals-while-disclosing-nothing/

Microsoft claim the One has 8 times the graphics capability of the 360, and the 360 had 12 times that of the Xbox. So I'm happy with that. There's also the potential of the recent news that devs can, if they choose to, run things like AI and physics through Microsoft's cloud, freeing up memory and CPU for other things.

Interesting.



Pemalite said:

Also, why isn't anyone peeved about the lack of GPU grunt this coming generation? :P

I don't think there's an actual lack of GPU grunt (at least in the PS4) - the absolute jump in performance from the last generation is as big as it should be. The problem is the relative jump: considering the length of the last cycle, it asks for a 7950-7970 level GPU, and we're not getting that.
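A rough sense of that relative jump, using commonly cited ballpark shader-throughput figures; the GFLOPS values below are assumptions for illustration, not official spec-sheet numbers:

# Approximate single-precision shader throughput in GFLOPS (ballpark, assumed figures)
rsx_ps3 = 230        # assumed figure for PS3's RSX
ps4_gpu = 1840       # 18 CUs x 800MHz x 2 ops per ALU per clock
hd7950  = 2870
hd7970  = 3790

print("PS3 -> PS4:  ~%.0fx" % (ps4_gpu / rsx_ps3))    # ~8x absolute jump
print("PS3 -> 7950: ~%.0fx" % (hd7950 / rsx_ps3))     # ~12x
print("PS3 -> 7970: ~%.0fx" % (hd7970 / rsx_ps3))     # ~16x, the relative jump argued for above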



DieAppleDie said:
Pemalite said:

As for the DDR3, it will outperform GDDR5 in general purpose tasks because of latency, but it will lose out in bandwidth-hungry tasks like graphics. - However, this is where the SRAM will provide assistance.

I'm *very* disappointed in the Xbox One's GPU, even more so than the PS4's, as it's only a Radeon 7770/7790 class in terms of compute.

Exactly, why are people always leaving out of the equation that high-bandwidth 32MB of eSRAM?

Because it is only 32MB? How much can you do with 32MB? The first batch of games will use the eSRAM as a framebuffer and call it a day. What if the game uses 1GB of textures? The better developers will start to use parts of the eSRAM for data cache and variable storage purposes, but it will take time to figure out how to do this, and the developers will have to program the technique for it and manage all the data.
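To put that 32MB in perspective, here's a quick sketch of what plain 1080p render targets cost, assuming 4 bytes per pixel per target (an illustration, not any engine's actual layout):

# How far 32MB goes at 1080p with simple 32-bit render targets
width, height = 1920, 1080
bytes_per_pixel = 4

one_target_mb = width * height * bytes_per_pixel / (1024 * 1024)   # ~7.9 MB per target
esram_mb = 32

print("One 1080p target: %.1f MB" % one_target_mb)
print("Targets that fit: %d" % int(esram_mb // one_target_mb))     # 4, e.g. colour, depth and a couple of extra buffers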

The latency problem... oh my, how many times has this horse been beaten to death now? Do people really think that Sony engineers weren't aware of the higher GDDR latencies? The not-so-surprising answer is: of course they were. That is why they added lots of transistors into the SoC. Some of the added features we know (direct command bus), some we don't. And fortunately for us, some engineer had a bright moment years ago and invented something called a "cache" and a "cache controller". The end result is roughly the following: a) for code fetches, the latency problem simply does not exist at all; b) for data fetches, we might see the latency problem sometimes. How often depends on how well the programmers have defined the data structures. With some good thinking, I'd guess that we might see a latency hit in less than 1-5% of all fetches. It is clear, assuming the unveiled specs are correct, that the 8GB of GDDR5 outperforms the 8GB of DDR3 + 32MB of eSRAM.
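The cache argument can be put into numbers with the textbook average-memory-access-time formula; every latency and the miss rate below are assumptions for illustration, not measured console figures:

# AMAT = hit_time + miss_rate * miss_penalty (textbook formula, illustrative numbers only)
hit_time_ns      = 1.0     # assumed on-chip cache hit
miss_rate        = 0.03    # assumed 3% of fetches go out to DRAM
ddr3_penalty_ns  = 50.0    # assumed DDR3 miss penalty
gddr5_penalty_ns = 65.0    # assumed GDDR5 miss penalty (somewhat higher latency)

amat_ddr3  = hit_time_ns + miss_rate * ddr3_penalty_ns     # 2.50 ns
amat_gddr5 = hit_time_ns + miss_rate * gddr5_penalty_ns    # 2.95 ns

print(amat_ddr3, amat_gddr5)    # once caches absorb most fetches, the gap is a fraction of a nanosecond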

The GPUs were chosen to do what they are supposed to do: deliver 1080p games (or tvtvtvtvtvsportssportssportstvtvtv for the Xbox One). They contain the best parts available for the money allotted to graphics. Anybody expecting a GeForce 680 in a console lives in a dreamworld.

And one last request: please stop the "the cloud will save it" messages. Not.going.to.happen.



Personally, I don't believe the confirmed specs of the Wii U, because I'm still hearing different things about them. As with Sony and Microsoft, I won't believe it until Nintendo tells me themselves. I'm also hearing that these updates are secretly optimizing the Wii U to make it run better.

I'm eager to see the Summer update and see what happens; supposedly it's meant to be huge compared to the Spring update.



Don’t follow the hype, follow the games

— 

Here's a little quote I want you all to keep memorized in your head for this coming next gen.

 By: Suke

Multimedialover said:

http://arstechnica.com/gaming/2013/05/microsoft-talks-about-xbox-ones-internals-while-disclosing-nothing/

Microsoft claim the One has 8 times the graphics capability of the 360, and the 360 had 12 times that of the Xbox. So I'm happy with that. There's also the potential of the recent news that devs can, if they choose to, run things like AI and physics through Microsoft's cloud, freeing up memory and CPU for other things.

Interesting.


Yeah, but even so... the Xbox 360 and PS3 look like utter crap today from a graphics perspective, plus the PS4 and Xbox One will have the same hardware for their entire lives. By the time the consoles are 4-5 years old, the same thing will occur as this generation: games will just look dated, which then means the multi-platform games that are also released on PC will look horrible.

Plus, last generation had relatively high-end graphics in comparison to the PC; this time around the Xbox One will have mid/low-end graphics, the PS4 mid-range, and the Wii U the lowest of the low.

I just want games to look great and push the PC harder, not stagnate.

drkohler said:

Because it is only 32MB? How much can you do with 32MB? The first batch of games will use the eSRAM as a framebuffer and call it a day. What if the game uses 1GB of textures? The better developers will start to use parts of the eSRAM for data cache and variable storage purposes, but it will take time to figure out how to do this, and the developers will have to program the technique for it and manage all the data.

The latency problem... oh my, how many times has this horse been beaten to death now? Do people really think that Sony engineers weren't aware of the higher GDDR latencies? The not-so-surprising answer is: of course they were. That is why they added lots of transistors into the SoC. Some of the added features we know (direct command bus), some we don't. And fortunately for us, some engineer had a bright moment years ago and invented something called a "cache" and a "cache controller". The end result is roughly the following: a) for code fetches, the latency problem simply does not exist at all; b) for data fetches, we might see the latency problem sometimes. How often depends on how well the programmers have defined the data structures. With some good thinking, I'd guess that we might see a latency hit in less than 1-5% of all fetches. It is clear, assuming the unveiled specs are correct, that the 8GB of GDDR5 outperforms the 8GB of DDR3 + 32MB of eSRAM.

The GPUs were chosen to do what they are supposed to do: deliver 1080p games (or tvtvtvtvtvsportssportssportstvtvtv for the Xbox One). They contain the best parts available for the money allotted to graphics. Anybody expecting a GeForce 680 in a console lives in a dreamworld.

And one last request: please stop the "the cloud will save it" messages. Not.going.to.happen.


It's clear you don't know how the SRAM works.
Microsoft will split the rendering frame into smaller "tiles", which is perfect as it doesn't need much memory to achieve good results, and it will also fit perfectly inside that paltry 32MB.
The Xbox 360 did something similar: in order to fit the frame inside the eDRAM, they cut the frame up into smaller tiles so the eDRAM die could work on it a piece at a time.
The assumption that you are going to get gigabytes of information inside of it is just silly; that's not how it functions or what its purpose was to begin with.
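A sketch of the tiling arithmetic, assuming a deferred-style 1080p frame of four 32-bit targets plus a 32-bit depth buffer, with a 24MB slice of the eSRAM set aside for the active tile; all of those numbers are illustrative assumptions, not the Xbox One's actual render setup:

import math

# Split a 1080p frame into horizontal tiles that fit an on-chip budget
width, height = 1920, 1080
bytes_per_pixel = 4 * 4 + 4              # assumed: four 32-bit G-buffer targets + 32-bit depth
tile_budget_bytes = 24 * 1024 * 1024     # assumed slice of the 32MB kept for the working tile

frame_mb = width * height * bytes_per_pixel / (1024 * 1024)        # ~39.6 MB, too big for 32MB in one go
rows_per_tile = tile_budget_bytes // (width * bytes_per_pixel)     # 655 rows fit per tile
tiles_needed = math.ceil(height / rows_per_tile)                   # 2 tiles

print("Frame: %.1f MB, rows per tile: %d, tiles needed: %d" % (frame_mb, rows_per_tile, tiles_needed))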



--::{PC Gaming Master Race}::--

superchunk said:
DevilRising said:
It's always been my understanding that, outside of the GamePad tech itself, the GPU was the one bit of the Wii U's hardware that actually was at least reasonably impressive, and that the gap between it and the new consoles wouldn't be AS huge this generation.

You're absolutely right.

WiiU is 4x to 6x lower in power than the PS4 (the strongest console).

Wii was 15x to 20x lower in power than the PS3 (arguably the strongest console).

Additionally, WiiU can utilize all the same graphical technologies the others will be able to, whereas Wii missed quite a few.

There is no technical reason for WiiU to be skipped this gen. The new engines are designed to be greatly scalable, and it really should be a difference of "low" PC settings vs "high" while still being the identical game overall.


Thanks for answering! That was my thought as well.



DieAppleDie said:
Pemalite said:
Multimedialover said:
Hmmmm, been trying to research the 2 CPUs. I keep coming back to many articles saying PS4 4 permclock cycle, and the One 6 per clock cycle. People are scratching their heads at how Microsoft achieved this.

Some tech guys are also saying that DDR3 should outperform GDDR5 for all general purpose tasks on the CPU.

What the hell is a permclock cycle? I've never heard that term used.

Regardless, to put performance into perspective, Anandtech did a rundown of Kabini/Jaguar and it works out that a quad-core Kabini chip is half the speed of a dual-core Core i3/i5 at the same clock.
So, the 8-core 1.6GHz Jaguar CPU found in the consoles should be roughly similar in performance to a dual-core 1.6GHz Core i3/i5. (Actually it should be less; scaling across multiple cores isn't linear, so that's a theoretical best case.)

Poor performance? You bet, and the PS4 and Xbox One are identical in this regard with their respective CPUs.

As for the DDR3, it will outperform GDDR5 in general purpose tasks because of latency, but it will lose out in bandwidth-hungry tasks like graphics. - However, this is where the SRAM will provide assistance.

I'm *very* disappointed in the Xbox One's GPU, even more so than the PS4's, as it's only a Radeon 7770/7790 class in terms of compute. :(



Exactly, why are people always leaving out of the equation that high-bandwidth 32MB of eSRAM?

Well, based on past experience on 360 with its eDRAM, that eSRAM will likely be used for "free" anti-aliasing in most games.

For comparison's sake, the bandwidth for each type of RAM:

In the Xbone:

32MB eSRAM: approx. 166GB/s

8GB DDR3: 34.132GB/s

Add them together and you get 200GB/s as MS marketed at their reveal. However, in real-world terms they'll be used in parallel and need to be taken separately. The 200 figure is typical marketing crap.

In the PS4:

8GB GDDR5: 176GB/s

Source: http://semiaccurate.com/2013/05/22/microsoft-subtly-admits-losing-with-xbox-one/
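For anyone wondering where those headline numbers come from, peak bandwidth is just bus width times data rate. A minimal sketch, assuming the commonly reported 256-bit/5500MT/s configuration for the PS4's GDDR5; the DDR3 lines show how much the bus-width assumption moves the Xbone figure around:

# Peak bandwidth = bytes per transfer (bus width / 8) x transfers per second
def peak_bandwidth_gb_s(bus_bits, megatransfers_per_s):
    return (bus_bits / 8) * megatransfers_per_s / 1000.0    # decimal GB/s, as marketing counts it

print(peak_bandwidth_gb_s(256, 5500))   # 176.0 - PS4 GDDR5, matching the figure above
print(peak_bandwidth_gb_s(128, 2133))   # ~34.1 - DDR3-2133 on a 128-bit bus
print(peak_bandwidth_gb_s(256, 2133))   # ~68.3 - the same DDR3 on a 256-bit bus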



Scoobes said:

Well, based on past experience on 360 with its eDRAM, that eSRAM will likely be used for "free" anti-aliasing in most games.

For comparison's sake, the bandwidth for each type of RAM:


Of course, but it's not just because of the memory in that piece of silicon; it's because of the extra logic included with the eDRAM that performs the anti-aliasing.

Remember, the Xbox 360 had 10MB of eDRAM, which is far from enough to hold an entire image at 1280x720 with 4x anti-aliasing, so the work was split up into smaller tiles to fit into that tiny space, which works rather well. The same thing is going to be employed in the Xbox One, as it's efficient.
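The 360 arithmetic works out like this, assuming 4 bytes of colour and 4 bytes of depth/stencil per sample; a sketch of the sizes involved, not Xenos' exact storage format:

import math

# Why a 1280x720 frame with 4x MSAA didn't fit in 10MB of eDRAM
width, height, samples = 1280, 720, 4
bytes_per_sample = 4 + 4       # assumed: 32-bit colour + 32-bit depth/stencil

frame_mb = width * height * samples * bytes_per_sample / (1024 * 1024)   # ~28.1 MB
edram_mb = 10
tiles_needed = math.ceil(frame_mb / edram_mb)                            # 3 tiles

print("Frame: %.1f MB -> %d tiles" % (frame_mb, tiles_needed))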

I still think the console is far too underpowered though, same goes for the PS4, but that's just my opinion.



--::{PC Gaming Master Race}::--

superchunk said:

Updated OP with little tidbits like:

XOne controller still using batteries (AA, not built in)
XOne has 3 USB ports
XOne is HDMI only, others have HDMI/composite (guessing for Sony)
XOne additional links etc


Teh fuq?



4 ≈ One

Dgc1808 said:
superchunk said:

Updated OP with little tidbits like:

XOne controller still using batteries (AA, not built in)
XOne has 3 USB ports
XOne is HDMI only, others have HDMI/composite (guessing for Sony)
XOne additional links etc


Teh fuq?

Yep. XOne expects you to have to buy batteries, or likely a new charge-n-play pack. Additionally, if your TV doesn't have HDMI connections... well, you'd better get a new TV or buy a WiiU/PS4.