
TRUE XB1 vs PS4 spec comparison

Aura7541 said:
fallen said:

I see a lot of so-called spec comparisons on places like NeoGAF that are incomplete, or so extremely biased towards the PS4 that they're nonsense, so I wanted to make a true, unbiased reference. The PS4 will go second and be compared against the X1 baseline in terms of percentages.

X1:

Disc: 6X Blu Ray

Disc capacity: 50 GB

HDD: 500 GB 5400 RPM

RAM quantity: 8GB

CPU: 8 core (6 game usable) 1.75 GHz Jaguar (plus SHAPE audio chip, likely worth ~1 CPU core of audio processing, plus cloud support, which we will ignore for now but which could offload things like AI from the X1 CPU in the future)

GPU FLOPS: 1.31 teraflops

Texture units: 48 at 853 MHz

Triangle setup engines: 2 at 853 MHz

ROPs: 16 at 853 MHz

Peak bandwidth: 204 + 68 = 272 GB/s

Peak BW per GFLOP: 272,000 MB/s ÷ 1,310 GFLOPs = 207.6 MB/s per GFLOP

 

PS4:

Disc: 6X Blu Ray (+0%)

Disc capacity: 50 GB (+0%)

HDD: 500 GB 5400 RPM (+0%)

RAM quantity: 8GB (+0%)

CPU: 8 core (6 game usable) 1.6 GHz Jaguar (-9% raw, -24% after removing 1 core for audio processing to match SHAPE, ignoring the X1's cloud capability for now)

GPU FLOPS: 1.843 teraflops (+41%)

Texture units: 72 at 800 MHz (+41%)

Triangle setup engines: 2 at 800 MHz (-7%)

ROPs: 32 at 800 MHz (+88%) (However, the real-world impact will be much lower, as 16 ROPs are typically enough at 1080p and below)

Peak bandwidth: 176 GB/s (-35%)

Peak BW per GFLOP: 176,000 MB/s ÷ 1,843 GFLOPs = 95.5 MB/s per GFLOP (-54%)
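
For anyone who wants to verify the deltas, here's a quick sketch that recomputes them from the raw figures quoted above; nothing in it is new data, just the arithmetic:

```python
# Recompute the percentage deltas from the raw spec figures above.
x1  = {"gflops": 1310, "bw_mbs": 272_000, "texel_rate": 48 * 853,
       "pixel_rate": 16 * 853, "cpu_ghz": 1.75}
ps4 = {"gflops": 1843, "bw_mbs": 176_000, "texel_rate": 72 * 800,
       "pixel_rate": 32 * 800, "cpu_ghz": 1.6}

for key in x1:
    pct = (ps4[key] - x1[key]) / x1[key] * 100
    print(f"{key}: {pct:+.0f}%")   # +41, -35, +41, +88, -9

# Peak BW per GFLOP, as defined above:
print(x1["bw_mbs"] / x1["gflops"])    # ~207.6 MB/s per GFLOP
print(ps4["bw_mbs"] / ps4["gflops"])  # ~95.5 MB/s per GFLOP
```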

 

Specs PS4 is superior in: ROPs (+88%), GFLOPS (+41%), Texture units (+41%)

Specs X1 is superior in: CPU (+9%), Geometry setup (+7%), Peak bandwidth (+55%), Peak BW per GFLOP (+117%)

Specs they are equal in: RAM capacity, optical disc capacity, optical disc read speed, hard drive read speed.

 

 

There you have the complete picture of baseline specs, subject to countless other variables we know nothing about...

 

From a high level, the PS4 has a stronger GPU while the X1 has a significant edge in peak bandwidth, and a slight to moderate advantage in CPU, which could grow if developers begin to leverage the cloud in the future. The disc read subsystems and RAM quantities are identical.

Man... where to start. The clock speed of the PS4's CPU has not been confirmed. Rumors back in February claimed that the PS4's CPU may be clocked up to 2.0 GHz, which is 0.25 GHz higher. Also, you cannot add the bandwidth of the DDR3 and the ESRAM together. The ESRAM doesn't touch the DDR3 and magically increase its bandwidth. It is 8 gigabytes of DDR3 running at 68 GB/s and 32 megabytes of ESRAM running at 204 GB/s. This means that the PS4's 8 gigabytes of GDDR5 run faster than the X1's 8 gigs of DDR3 by 108 GB/s and slower than the X1's 32 megs of ESRAM by 28 GB/s. However, the ESRAM can only achieve its theoretical peak if it is doing both read and write operations at once, so developers won't hit that peak right off the bat. In addition, because the amount of ESRAM is so small, huge pieces of data need to be chopped up into 32-megabyte chunks, which is a pain. Add to that the fact that the X1 only has 5 gigs of RAM available for games while the PS4 has 6 to 7 gigs, and the winner is a no-brainer.
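
If you want to see where those pool figures come from, here's a minimal sketch, assuming the commonly reported 256-bit buses and effective data rates (the bus widths are not stated in this thread):

```python
# bandwidth (GB/s) = effective rate (MT/s) * bus width (bytes) / 1000
def bw_gbs(mt_per_s, bus_bits):
    return mt_per_s * (bus_bits / 8) / 1000

print(bw_gbs(2133, 256))  # ~68 GB/s - X1 DDR3-2133
print(bw_gbs(5500, 256))  # 176 GB/s - PS4 GDDR5-5500
print(176 - 68)           # 108 GB/s GDDR5 advantage over the DDR3 pool
print(204 - 176)          # 28 GB/s eSRAM peak advantage (read+write combined)
```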

Your statement that 16 ROPs are typically enough at 1080p is highly questionable. The ROP count determines how many pixels a video card can render and write to video memory per second. Add the fact that Ryse is only running at 900p, Killer Instinct at 720p, and Battlefield (X1 edition) at 720p, and I call shenanigans. The PS4 having twice the ROPs is significant because it means its video card can render pixels at a much faster pace.
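
The fill-rate arithmetic behind that argument, for reference (theoretical peaks only; real throughput also depends on blending, formats, and bandwidth):

```python
x1_fill  = 16 * 853e6   # ~13.6 Gpixels/s peak
ps4_fill = 32 * 800e6   # ~25.6 Gpixels/s peak

one_pass_1080p60 = 1920 * 1080 * 60   # ~0.124 Gpixels/s per full-screen pass
print(x1_fill / one_pass_1080p60)     # ~110 passes per frame of headroom on paper
print(ps4_fill / one_pass_1080p60)    # ~206 on PS4
# Whether 16 ROPs is "enough" depends on how many of those passes an engine
# actually burns on overdraw, transparency, and post-processing.
```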

You also forgot to mention the Asynchronous Compute Engines (ACEs), which are useful for physics and lighting simulation. Usually, CPUs do these tasks, but the PS4's GPU is not like a traditional GPU. The PS4 has 8 ACEs, with each ACE able to manage up to 8 compute queues, for 64 compute queues overall. The X1, on the other hand, only has 2 ACEs, with each one able to manage the same number of compute queues, so the X1 has 16. This is a very significant difference because the PS4 has 4 times the compute granularity. Assuming the PS4's CPU runs at 2.0 GHz, the PS4's CPU will run faster than the X1's with a much lighter workload, because its GPU can absorb more compute work; the X1's CPU will run slower and carry a larger workload because its GPU has low compute granularity.
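
The queue math, spelled out:

```python
queues_per_ace = 8
print("PS4:", 8 * queues_per_ace, "compute queues")  # 8 ACEs -> 64
print("X1: ", 2 * queues_per_ace, "compute queues")  # 2 ACEs -> 16
```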

Lastly, the PS4 has other slight modifications to its hardware. For instance, it has an additional 20 GB/s bus that allows the GPU to bypass the L1 and L2 caches and access system memory directly, reducing synchronization issues for the ACEs. And of course, hUMA.

*sigh* Once again, just ignore what the OP spouts. It's not like he knows anything about hardware.






Pemalite said:
fatslob-:O said:

If both DDR3 and GDDR5 had the same clocks and bus width, I'd say never, because the most fundamental difference is that GDDR5 can do both reads and writes in the same cycle, while DDR3 can only do one or the other per cycle.


Well of course if you keep everything equal, GDDR5 is going to be faster.

But you don't have to keep everything equal.
Microsoft could have gone with 3 GHz DDR3 on a 512-bit bus and it would have been faster than the GDDR5 in the Playstation 4. Unfortunately, however, being a very cost-sensitive device, it would have driven up the PCB layer count, added more traces, and probably required a more complex memory controller, amongst other things.


512-bit DDR3 would only bump the top speed to around 140 GB/s, which is still slower, at the cost of extra transistor space, higher memory cost, and more heat produced. They most likely would have had to drop the eSRAM to fit the controller in, and the end result is it's still slower.

Microsoft bet on DDR3 remaining cheap and continuing to drop in price as the industry moves to DDR4, so in the long term it's more about lining pockets than providing performance.

Everyone says it's about the games, but people get hung up on launch titles. Launch titles never showcase a system's strengths, but it's rather telling that at such an early stage, with such similar architectures, the PS4 versions are mostly running at higher resolutions.

Give it a year and nobody will be making silly 'not so different' threads anymore, as the difference is going to be a hell of a lot more obvious than it was between the PS3 and 360.



Frequency said:
Pemalite said:
fatslob-:O said:

If both DDR3 and GDDR5 had the same clocks and bus width, I'd say never, because the most fundamental difference is that GDDR5 can do both reads and writes in the same cycle, while DDR3 can only do one or the other per cycle.


Well of course if you keep everything equal, GDDR5 is going to be faster.

But you don't have to keep everything equal.
Microsoft could have gone with 3 GHz DDR3 on a 512-bit bus and it would have been faster than the GDDR5 in the Playstation 4. Unfortunately, however, being a very cost-sensitive device, it would have driven up the PCB layer count, added more traces, and probably required a more complex memory controller, amongst other things.


512-bit DDR3 would only bump the top speed to around 140 GB/s, which is still slower, at the cost of extra transistor space, higher memory cost, and more heat produced. They most likely would have had to drop the eSRAM to fit the controller in, and the end result is it's still slower.

Microsoft bet on DDR3 remaining cheap and continuing to drop in price as the industry moves to DDR4, so in the long term it's more about lining pockets than providing performance.

Everyone says it's about the games, but people get hung up on launch titles. Launch titles never showcase a system's strengths, but it's rather telling that at such an early stage, with such similar architectures, the PS4 versions are mostly running at higher resolutions.

Give it a year and nobody will be making silly 'not so different' threads anymore, as the difference is going to be a hell of a lot more obvious than it was between the PS3 and 360.

You're also forgetting about them having the same clocks. The DDR3 memory is clocked at 1066 MHz compared to the PS4's 1375 MHz, so that's another difference that would have to be closed for the DDR3 to reach the same bandwidth. I agree about the difference being more obvious and bigger, though.



Frequency said:


512-bit DDR3 would only bump the top speed to around 140 GB/s, which is still slower, at the cost of extra transistor space, higher memory cost, and more heat produced. They most likely would have had to drop the eSRAM to fit the controller in, and the end result is it's still slower.


512-bit DDR3 would bump it up to 136 GB/s.
If Microsoft had opted for DDR3 RAM running at 2750 MHz on a 512-bit bus, it would be equal to the Playstation 4.
If Microsoft had opted for DDR3 RAM running at 3000 MHz on a 512-bit bus, it would be faster than the Playstation 4.

Heat and memory cost aren't a drama. I've played around with Sammy's DDR3 green sticks overclocked to 3 GHz and they run perfectly happily at low volts and without any heatsinks. The bonus? They're dirt bloody cheap.

Obviously there are reasons for not going that route, but the heat and the memory cost themselves aren't among them.
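
For reference, the figures in this exchange all fall out of the same simple formula. A quick sketch, using the effective-clock convention and the 512-bit bus proposed above:

```python
# bandwidth (GB/s) = effective rate (MT/s) * bus width (bytes) / 1000
def ddr3_bw(effective_mts, bus_bits=512):
    return effective_mts * (bus_bits / 8) / 1000

print(ddr3_bw(2133))   # ~136.5 GB/s - stock DDR3-2133 on a 512-bit bus
print(ddr3_bw(2750))   # 176.0 GB/s  - equal to the PS4's GDDR5
print(ddr3_bw(3000))   # 192.0 GB/s  - faster than it
```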





--::{PC Gaming Master Race}::--

Pemalite said:
Frequency said:


512-bit DDR3 would only bump the top speed to around 140 GB/s, which is still slower, at the cost of extra transistor space, higher memory cost, and more heat produced. They most likely would have had to drop the eSRAM to fit the controller in, and the end result is it's still slower.


512-bit DDR3 would bump it up to 136 GB/s.
If Microsoft had opted for DDR3 RAM running at 2750 MHz on a 512-bit bus, it would be equal to the Playstation 4.
If Microsoft had opted for DDR3 RAM running at 3000 MHz on a 512-bit bus, it would be faster than the Playstation 4.

Heat and memory cost aren't a drama. I've played around with Sammy's DDR3 green sticks overclocked to 3 GHz and they run perfectly happily at low volts and without any heatsinks. The bonus? They're dirt bloody cheap.

Obviously there are reasons for not going that route, but the heat and the memory cost themselves aren't among them.



Are you talking about effective clocks? The actual clock would be 1375 MHz. By God, I hate the term "effective clock", since that shit originated from Nvidia. :P

Another question: what is the effect of a bigger bus on the heat generated by the DRAM? (Well, y'know, PC DRAM runs at a bus width of 64-bit.)



fatslob-:O said:

Are you talking about effective clocks? The actual clock would be 1375 MHz. By God, I hate the term "effective clock", since that shit originated from Nvidia. :P

Another question: what is the effect of a bigger bus on the heat generated by the DRAM? (Well, y'know, PC DRAM runs at a bus width of 64-bit.)


Yeah, I am talking about "effective clocks".

As for heat, it will be Buckley's (next to none); most of the extra heat would be generated by the memory controller, not the actual DRAM ICs.



--::{PC Gaming Master Race}::--

Pemalite said:
fatslob-:O said:

Are you talking about effective clocks? The actual clock would be 1375 MHz. By God, I hate the term "effective clock", since that shit originated from Nvidia. :P

Another question: what is the effect of a bigger bus on the heat generated by the DRAM? (Well, y'know, PC DRAM runs at a bus width of 64-bit.)


Yeah, I am talking about "effective clocks".

As for heat, it will be Buckley's (next to none); most of the extra heat would be generated by the memory controller, not the actual DRAM ICs.

You're right, but wouldn't that make the APU hotter?



fatslob-:O said:
Pemalite said:
fatslob-:O said:

Are you talking about effective clocks? The actual clock would be 1375 MHz. By God, I hate the term "effective clock", since that shit originated from Nvidia. :P

Another question: what is the effect of a bigger bus on the heat generated by the DRAM? (Well, y'know, PC DRAM runs at a bus width of 64-bit.)


Yeah, I am talking about "effective clocks".

As for heat, it will be Buckley's (next to none); most of the extra heat would be generated by the memory controller, not the actual DRAM ICs.

You're right, but wouldn't that make the APU hotter?


Sure, but you could also drop the ~1.6 billion transistors for the eSRAM, because it wouldn't be required. :P
Whether they would break even in heat, power consumption, and cost, I would actually have no clue; not many people in the world would, as we have no idea of the APU's costs.



--::{PC Gaming Master Race}::--

MonstaMack said:

Truthfact: People bought 360s over the PS3 because of the graphics powah! /sarcasm

Truthfact: No one ultimately cares about graphics; look at Wii and 3DS sales compared to 360/PS3 and Vita sales.

Very true, people care about features, games, marketing, etc. Granted, graphics are a part of that, but a very SMALL part of the overall "features" sell.