Forums - Microsoft Discussion - Penello specifies why X1vsPS4 performance difference is overstated

I see my statements the other day caused more of a stir than I had intended. I saw threads being locked as fast as they popped up, so I apologize for the delayed response.

I was hoping my comments would steer the discussion toward the games (and the fact that games on both systems look great) as evidence of my point about performance, but unfortunately I saw more discussion of my credibility.

So I thought I would add more detail to what I said the other day, so that people can debate the individual merits instead of making personal attacks. This should hopefully dispel the notion that I'm simply creating FUD or spin.

I do want to be super clear: I'm not disparaging Sony. I'm not trying to diminish them, their launch, or what they have said. But I do need to draw comparisons, since I am trying to explain that the way people are calculating the differences between the two machines isn't completely accurate. I think I've been upfront that I have nothing but respect for those guys, but I'm not a fan of the misinformation about our performance.

So, here are a couple of points about some of the individual parts for people to consider:

• 18 CUs vs. 12 CUs =/= 50% more performance. Multi-core processors have inherent inefficiency with more CUs, so it's simply incorrect to say 50% more GPU.
• Adding to that, each of our CUs is running 6% faster. It's not simply a 6% overall clock-speed increase.
• We have more memory bandwidth. 176 GB/s is the peak on paper for GDDR5. Our peak on paper is 272 GB/s (68 GB/s DDR3 + 204 GB/s ESRAM). ESRAM can do read/write cycles simultaneously, so I see this number misquoted.
• We have at least 10% more CPU. Not only a faster processor, but also a better audio chip offloading CPU cycles.
• We understand GPGPU and its importance very well. Microsoft invented DirectCompute and has been using GPGPU in a shipping product since 2010: it's called Kinect.
• Speaking of GPGPU: we have 3X the coherent bandwidth for GPGPU at 30 GB/s, which significantly improves the CPU's ability to efficiently read data generated by the GPU.
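As a rough illustration of the first bullet's scaling claim (the exponent below is an invented assumption for the sketch, not a measured figure for either console): if GPU throughput scales sublinearly with CU count because of scheduling and bandwidth overheads, the real-world gap from 18 vs. 12 CUs comes out below the raw 50%.

```python
# Hypothetical scaling model: throughput ~ cu_count ** alpha with alpha < 1.
# alpha = 0.9 is a made-up value for illustration, not a benchmark result.
def relative_throughput(cu_count, alpha=0.9):
    """Relative GPU throughput under an assumed sublinear scaling law."""
    return cu_count ** alpha

raw_gap = 18 / 12 - 1  # 50% gap implied by CU count alone
modeled_gap = relative_throughput(18) / relative_throughput(12) - 1
print(f"raw: {raw_gap:.0%}, modeled: {modeled_gap:.1%}")  # raw: 50%, modeled: 44.0%
```

The exact number depends entirely on the assumed exponent; the sketch only shows the direction of the effect, not its true size.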

Hopefully, with some of those more specific points, people will understand where we have reduced bottlenecks in the system. I'm sure this will get debated endlessly, but at least you can see I'm backing up my points.

I still believe that we get little credit for the fact that, as a software company, the people designing our system are some of the smartest graphics engineers around – they understand how to architect and balance a system for graphics performance. Each company has its strengths, and I feel that our strength is overlooked when evaluating both boxes.

Given the continued belief in a significant gap, we're working with our most senior graphics and silicon engineers to get into more depth on this topic. They will be more credible than I am, and can talk in detail about some of the benchmarking we've done and how we balanced our system.

Thanks again for letting me participate. Hope this gives people more background on my claims.

SOURCE



For someone who says specs don't matter, he's so desperate to prove that it's almost equal in power.

Interesting..

Xenostar said:
For someone who says specs don't matter, he's so desperate to prove that it's almost equal in power.



My god, give it a break.



I don't get these spec wars... who cares? Just buy your console of choice and be done with it!
They are probably going to have roughly equal power, so most games will look the same on both consoles. There is no need to argue over which is better. Both are going to be good consoles.

"I've Underestimated the Horse Power from Mario Kart 8, I'll Never Doubt the WiiU's Engine Again"

The whole spec discussion is useless, as actual performance depends on many more variables than specs alone.

DJEVOLVE said:
Xenostar said:
For someone who says specs don't matter, he's so desperate to prove that it's almost equal in power.



My god, give it a break.


He's right, though.



DJEVOLVE said:
Xenostar said:
For someone who says specs don't matter, he's so desperate to prove that it's almost equal in power.



My god, give it a break.


I'm not the one giving constant press releases to prove something he thinks is unimportant in the first place.



Adinnieken said:

• 18 CUs vs. 12 CUs =/= 50% more performance. Multi-core processors have inherent inefficiency with more CUs, so it's simply incorrect to say 50% more GPU.


Sorry, you lost all credibility with this line.

CPUs and GPUs process information completely differently.



Is it really true that 68 GB/s on 8 GB DDR3 + 204 GB/s on 32 MB ESRAM = 272 GB/s, in the same way as 176 GB/s on 8 GB GDDR5?

I don't know how RAM capacity and RAM speed relate. Intuitively, it feels like you can't simply add the two speeds together and call that your total RAM speed in an apples-to-apples comparison with one unified pool. Can someone who actually knows this stuff please explain?
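One hedged way to see why the two figures aren't directly comparable (a sketch under my own assumptions, not official numbers): the DDR3 and ESRAM buses can run in parallel, but the 32 MB ESRAM only serves whatever share of the traffic actually fits in it, so the achievable aggregate bandwidth depends on that split and only reaches the 272 GB/s sum at one particular mix.

```python
# Toy model (my assumption): two buses run in parallel, and aggregate
# bandwidth is capped by whichever pool saturates first for a given split.
def aggregate_bandwidth(esram_fraction, esram_gbps=204.0, ddr3_gbps=68.0):
    """Peak aggregate GB/s when `esram_fraction` of traffic targets ESRAM."""
    if esram_fraction <= 0.0:
        return ddr3_gbps   # nothing fits in the 32 MB scratchpad
    if esram_fraction >= 1.0:
        return esram_gbps  # everything fits in ESRAM (unlikely at 32 MB)
    return min(esram_gbps / esram_fraction,
               ddr3_gbps / (1.0 - esram_fraction))

print(aggregate_bandwidth(0.75))  # 272.0 -> the quoted sum needs a 75/25 split
print(aggregate_bandwidth(0.25))  # ~90.7 -> well below 176 for ESRAM-light mixes
```

Under this toy model, 272 GB/s is a best case for one specific traffic mix, which is why simply adding the two peak speeds isn't an apples-to-apples match for a single 176 GB/s unified pool.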

“The fundamental cause of the trouble is that in the modern world the stupid are cocksure while the intelligent are full of doubt.” - Bertrand Russell

"When the power of love overcomes the love of power, the world will know peace."

Jimi Hendrix