AkimboCurly said:

On the technical level the only concern I have about the S is that the memory is not unified. The biggest reason the Xbox One lagged so far behind the base PS4 last gen was that render targets needed to be squeezed into that tiny 32MB of ESRAM, while only 5 (later patched to 6) GB of the shared system memory was available for games to use. Big multiplatform titles had non-trivial amounts of memory sitting idle because, frankly, they couldn't access it without a lot of effort that often wasn't worth exerting on the Xbox One. Meanwhile the PS4 had unified RAM, and it was faster. This complexity turned the PS4's roughly 25% theoretical lead in graphics performance into up to a 50% advantage in resolution, and often higher framerates to boot.
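To make the "squeezed into 32MB" point concrete, here's a rough back-of-the-envelope sketch. The G-buffer layout is hypothetical (4 colour targets plus depth, 4 bytes per pixel each); real engines vary a lot, but the arithmetic shows why 1080p render targets could overflow the ESRAM while lower resolutions fit:

```python
# Hypothetical deferred-rendering G-buffer: 4 colour targets + depth,
# each 4 bytes per pixel (e.g. RGBA8 / D24S8). Real layouts vary widely.
BYTES_PER_PIXEL = 5 * 4          # 20 bytes per pixel in total
ESRAM_BYTES = 32 * 1024 * 1024   # the Xbox One's 32 MiB of ESRAM

def gbuffer_bytes(width, height):
    """Total bytes needed to hold the whole G-buffer at this resolution."""
    return width * height * BYTES_PER_PIXEL

def fits_in_esram(width, height):
    """Does the full G-buffer fit in the fast 32 MiB pool?"""
    return gbuffer_bytes(width, height) <= ESRAM_BYTES

print(fits_in_esram(1920, 1080))  # False: ~39.6 MiB, over budget
print(fits_in_esram(1600, 900))   # True:  ~27.5 MiB, fits
```

Under this (assumed) layout, 1080p spills into the slow DDR3 while 900p stays in the fast pool, which lines up with the 900p-vs-1080p splits that were common last gen. Developers could also tile or split targets across the two pools, but that's exactly the extra effort described above.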

My point is just that the more complicated Xbox makes its memory architecture, the worse its consoles hold up against their PlayStation equivalents, because multiplatform titles tend to be developed on PlayStation first. The Series X is sexy and unified, except for some of the RAM running slower. The Series S isn't, and I see a repeat coming. The Series S CPU is not a problem; it actually appears to be faster than the PS5's. The GPU may begin to struggle at 1440p/30 or 1080p/60, but graphics can be scaled back without damaging core gameplay. But yeah, dev tools are important, and the Series S needs some love from devs.

I don't remember a 50% gap in resolution/performance between PS4 and X1 being common (very few games were 720p vs 1080p; most were 900p vs 1080p, or 720p vs 900p), and I don't think that was due to the ESRAM.

So yes, empirically you're absolutely right: most games weren't the full 720p vs. 1080p. I didn't intend it to read as though that was the norm, only that in extremis you got a 50% resolution hit.

The reason I say it has to do with the RAM (and by extension the ESRAM) is the way it was partitioned. I'm not a developer, obviously, but to my limited understanding, unless you can fit your render target into the 32MB buffer of extra-speedy RAM, you're forced to relegate it to the DDR3. This meant that, especially in multiplats, which seldom used the buffer, the Xbox One would construct its frame out of DDR3 (which is meant to be system RAM) while the PS4 was able to use its GDDR5. The bandwidth difference then becomes severe. The new Series S has a similarly tiered memory architecture, and people suspect the slower 2GB will be used for the OS. But even the faster pool (8GB of GDDR6) has less bandwidth than the slowest tier of memory in the Series X. So to my mind, unless developers scale their render targets carefully, both for the GPU but ALSO for the memory bandwidth, the Series S will get the short end of the stick. In practice that means cutting texture resolution AS WELL as internal resolution is basically non-negotiable. Watch Dogs got it right and AC Valhalla got it wrong.
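For reference, here are the published peak bandwidth figures for each pool being compared above (GB/s, per the public spec sheets; the Xbox One ESRAM number is the later revised theoretical peak, and real achievable bandwidth is lower on all of them). The claim in question, that the Series S's fast pool is slower than the Series X's slow pool, checks out:

```python
# Published peak bandwidth figures (GB/s) for each console's memory pools.
# Theoretical peaks from public spec sheets; sustained bandwidth is lower.
POOLS = {
    "Xbox One DDR3 (8 GB)":        68,
    "Xbox One ESRAM (32 MB)":      204,
    "PS4 GDDR5 (8 GB)":            176,
    "Series S fast (8 GB GDDR6)":  224,
    "Series S slow (2 GB GDDR6)":  56,
    "Series X fast (10 GB GDDR6)": 560,
    "Series X slow (6 GB GDDR6)":  336,
}

# The point made above: even the Series S's *fast* pool has less
# bandwidth than the Series X's *slow* pool.
assert POOLS["Series S fast (8 GB GDDR6)"] < POOLS["Series X slow (6 GB GDDR6)"]

# Print the pools from slowest to fastest.
for name, bw in sorted(POOLS.items(), key=lambda kv: kv[1]):
    print(f"{name:30s} {bw:4d} GB/s")
```

It also shows the last-gen gap: a frame built out of DDR3 at 68 GB/s against one built out of GDDR5 at 176 GB/s, unless the ESRAM could be made to carry the render targets.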

Understood. My technical understanding here is also limited, but sure, it's a very reasonable expectation. I'll just point out, as Pemalite brought up, that the rendered pixel count isn't affected that much by the RAM amount (and perhaps speed), but textures and other elements are. That said, if your assets are being limited by the speed/amount of RAM, pushing the render resolution higher would just make things unbalanced. Let's see how things roll for the rest of the gen.

On the 50% difference in some extreme cases, I don't remember which games those were, but they were very few. I think CoD was one of them at release; did it get patched later? And from what I remember, even the titles with a 50% difference in pixel count didn't play that much worse.
