Ok, people, calm down for a while.
Pretty much everything written in the past three or so posts is dead wrong. The situation is a lot more complex than your posts seem to imply, but it would take a very lengthy and technical post to clear up the mess we are currently steering into. (And the same errors would pop up again in one or two weeks in another thread...)
Then you need to elaborate.
No point claiming someone is "wrong" without even bothering to back up such claims; you just wasted my time reading that, for no gain or benefit to any party involved.
OK, I will explain just one point most people got wrong (I have no hope this point will stick, though). I'm trying to be as "non-technical" as possible so that everybody can understand the conclusions.
One of the big questions is: who "does graphics" faster, the XBox One or the PS4? (Without specifying what "doing graphics" exactly encompasses.)
The correct answer is simple: it is the XBox One. Now this will confuse a lot of people, so in my endless patience, I'm going to tell you why, and you will hopefully see why the esram is there. (As a side note, it is esram and not edram simply because the manufacturer doesn't have edram at the 28nm node, the same reason the WiiU was designed at 40nm.)
First we do the numbers game: we know the PS4 gpu has a specified bandwidth of 176GB/s into gddr5, and the XBox One gpu has a bandwidth of 109GB/s into esram or a combination of esram/ddr3 (! a crucial point of the design that often gets overlooked, but that would be another long lecture...). Now these numbers mean one thing, and one thing only: AMD's engineers tell us there is an absolute guarantee that in no way can you shuffle more GB/s than those numbers. What these numbers do NOT tell us is how fast you actually shuffle memory around. So at first sight, we all agree: 176 > 109.
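Where do those headline numbers come from? They are just bus width times transfer rate. A quick sketch; the bus widths and rates below are the commonly cited specs for the two machines, treated here as assumptions rather than confirmed figures:

```python
# Theoretical peak bandwidth = bus width (in bytes) x transfers per second.
# 1 GB/s here means 10^9 bytes per second, as marketing numbers usually do.

def peak_bandwidth_gb_s(bus_width_bits, gigatransfers_per_sec):
    """Peak bandwidth in GB/s for a given bus width and transfer rate."""
    return (bus_width_bits / 8) * gigatransfers_per_sec

# PS4: commonly cited as a 256-bit GDDR5 bus at 5.5 GT/s.
ps4_gddr5 = peak_bandwidth_gb_s(256, 5.5)      # -> 176.0 GB/s

# XBox One esram: commonly cited as a 1024-bit path at the 853 MHz gpu clock.
xb1_esram = peak_bandwidth_gb_s(1024, 0.853)   # -> ~109.2 GB/s

print(ps4_gddr5, xb1_esram)
```

These are ceilings, which is exactly the point of the post: the interesting question is how close each memory system gets to its ceiling in practice.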
Now we take a second look and find that in reality: 176 < 109.
Now that may seem rather puzzling, but reality says it's true. Those numbers, 176 and 109, come from the idea that you can shuffle data on every clock cycle. Unfortunately, no one is able to do this except in a mode that lasts just a few clock cycles. This mode is called burst mode, and both apus can handle burst lengths of 8 (which means for eight clock cycles you get the stated maximum GB/s). After that, all hell has broken loose in the memory and you have to do a "calm down fellows detour". Now, without going into complicated explanations of how different memory types are addressed, the "calm down detour" for the gddr5 chips (which in their hearts are just ddr memories) is much longer than the detour for esram. In a rather clumsy comparison, this looks like:
XBox: Grab with 109 GB/s / wait a little / Grab with 109GB/s / wait a little / ...
PS4: Grab with 176GB/s / wait quite a long time / Grab with 176GB/s / wait quite a long time / ....
As you can see, the PS4's line is much longer in the end (in this cheesy comparison). The overall effect is that, on average, you grab fewer GB in the second case than in the first case.
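The averaging above can be sketched in a few lines. The burst length of 8 comes from the post; the two "detour" lengths are purely illustrative guesses I made up to show how a longer recovery can flip the ranking, not measured figures for either console:

```python
# Sustained bandwidth when data moves in bursts: you run at peak rate for
# the burst, then stall for some recovery period before the next burst.

def effective_bandwidth(peak_gb_s, burst_cycles, recovery_cycles):
    """Average GB/s over one full burst-plus-recovery period."""
    return peak_gb_s * burst_cycles / (burst_cycles + recovery_cycles)

# Hypothetical stall lengths: esram recovers quickly, gddr5 takes much longer.
esram = effective_bandwidth(109, burst_cycles=8, recovery_cycles=2)   # 87.2
gddr5 = effective_bandwidth(176, burst_cycles=8, recovery_cycles=10)  # ~78.2

print(esram, gddr5)  # the lower-peak memory sustains MORE under these numbers
```

The exact crossover depends entirely on the real recovery timings, which is why the peak figures alone settle nothing.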
Now it seems I have just written that the XBox "does graphics" faster than the PS4. Unfortunately there is another key element not yet mentioned. "Does graphics" mainly means doing texture stuff and raster stuff, and the PS4 has more of those units than the XBox One. Twice as many rasterisers (very generous, maybe overkill) and 50% more texture units in the PS4 more than offset the slightly higher clock rate of the XBox One. What you want in the end is nicer/more pixels than the other guy, and there is no way the XBox One can overcome the lack of TMUs/ROPs. So while it really can "do graphics" faster, it just doesn't have enough of those "does graphics" units to beat the PS4.
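To put rough numbers on that last point: raw unit throughput is just unit count times clock rate. The unit counts and clocks below are the commonly cited specs (PS4: 72 TMUs, 32 ROPs at 800 MHz; XBox One: 48 TMUs, 16 ROPs at 853 MHz), treated here as assumptions:

```python
# Raw throughput of a fixed-function unit pool = number of units x clock.
# Results are in billions of operations per second (texels or pixels).

def fillrate(units, clock_ghz):
    """Peak operations per second, in G-ops/s."""
    return units * clock_ghz

ps4_texels = fillrate(72, 0.800)   # 57.6 GTexel/s
xb1_texels = fillrate(48, 0.853)   # ~40.9 GTexel/s

ps4_pixels = fillrate(32, 0.800)   # 25.6 GPixel/s
xb1_pixels = fillrate(16, 0.853)   # ~13.6 GPixel/s
```

The ~7% clock advantage cannot offset a 50% deficit in TMUs or a 100% deficit in ROPs, which is exactly the post's conclusion.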
edit: corrected since the editor auto-removed some important info..