
Middle-earth: Shadow of Mordor is 720p/30FPS on XBONE, confirmed by a Belgian team

rccsetzer said:
So, is this the reason why Batman: Arkham Knight was delayed?

I think it's the reason a lot of games were delayed.



 




Glad to be on the winning side again (I have a PS4), just like last gen with the 360, which was the undisputed winner for third-party games.



ethomaz said:

walsufnir said:

While this is true, ESRAM should still be better performance-wise, as it should have much lower latency. But it really is way too small.

It is true... ESRAM has way better latency (it is on-die, after all), but latency doesn't really benefit graphics processing... it can benefit post-processing AA, for example, but GPU tasks in general are not affected by high or low latency.

ESRAM performance is fine... the issue is the size... 32MB is really small, but I understand MS, because more would have cost A LOT.

It's not really the cost, but the way the ESRAM is integrated into the GPU/CPU die. MS can increase the amount of ESRAM, but if they do, they have to shrink one of the other two components. In this case, the GPU would be shrunk, because the CPU is already small. MS is between a rock and a hard place with this type of configuration. You either sacrifice GPU power or you sacrifice the ESRAM buffer. Neither choice is good.
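For a rough sense of how tight 32MB is, here's a back-of-the-envelope sketch (my own illustrative numbers, not from any developer): a simple deferred-rendering setup with five full-screen 32-bit render targets fits in ESRAM at 720p but not at 1080p.

```python
# Back-of-the-envelope: render-target memory vs. 32MB of ESRAM.
# Assumes 4 bytes per pixel per target (e.g. RGBA8); real engines use
# mixed formats, so treat these as illustrative numbers only.

ESRAM_MB = 32
TARGETS = 5  # e.g. depth + 4 G-buffer layers in a simple deferred renderer

def rt_megabytes(width, height, targets, bytes_per_pixel=4):
    """Total memory for `targets` full-screen render targets, in MB."""
    return width * height * bytes_per_pixel * targets / (1024 ** 2)

for w, h, label in [(1280, 720, "720p"), (1600, 900, "900p"), (1920, 1080, "1080p")]:
    need = rt_megabytes(w, h, TARGETS)
    verdict = "fits" if need <= ESRAM_MB else "does NOT fit"
    print(f"{label}: {need:.1f} MB for {TARGETS} targets -> {verdict} in {ESRAM_MB}MB ESRAM")
```

Which is roughly the pattern the multiplatform results suggest: the working set squeezes into ESRAM at 720p, while 1080p forces spilling out to the much slower DDR3.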



MoHasanie said:
Yikes. Will there be 720p games on X1 throughout the gen?!

Even on PS4, devs might have to drop the resolution in the future to get better graphical results, so yes, I do expect more 720p games down the line for the X1.
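To put a number on what dropping the res buys back, here's a quick sketch (plain arithmetic, nothing console-specific): 1080p is 2.25x the pixels of 720p, so a fixed GPU budget stretches much further at the lower resolution.

```python
# Pixel throughput at common resolution / frame-rate combinations.
# Pure arithmetic; the gap is why dropping res is the easiest lever to pull.

modes = [("720p30", 1280, 720, 30), ("720p60", 1280, 720, 60),
         ("1080p30", 1920, 1080, 30), ("1080p60", 1920, 1080, 60)]

base = 1280 * 720 * 30  # 720p30 as the reference workload
for name, w, h, fps in modes:
    rate = w * h * fps
    print(f"{name}: {w*h:>9,} px/frame, {rate:>12,} px/s ({rate/base:.2f}x 720p30)")
```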



Aura7541 said:

It's not really the cost, but the way the ESRAM is integrated into the GPU/CPU die. MS can increase the amount of ESRAM, but if they do, they have to shrink one of the other two components. In this case, the GPU would be shrunk, because the CPU is already small. MS is between a rock and a hard place with this type of configuration. You either sacrifice GPU power or you sacrifice the ESRAM buffer. Neither choice is good.

There is a third option: don't use ESRAM.

Not to give all the credit to Sony, but if what they showed is true, they tested many EDRAM configs for the PS4 (even 1GB/s) and the results were not good enough... so why didn't MS face this during the design phase? What were they expecting?



ethomaz said:

There is a third option: don't use ESRAM.

Not to give all the credit to Sony, but if what they showed is true, they tested many EDRAM configs for the PS4 (even 1GB/s) and the results were not good enough... so why didn't MS face this during the design phase? What were they expecting?

Actually, that's a very good suggestion.

And yeah, I remember seeing Cerny's talk on using 256-bit GDDR5 vs. 128-bit GDDR5 + EDRAM.



ethomaz said:

There is a third option: don't use ESRAM.

Not to give all the credit to Sony, but if what they showed is true, they tested many EDRAM configs for the PS4 (even 1GB/s) and the results were not good enough... so why didn't MS face this during the design phase? What were they expecting?

Well, the common theory is that MS always planned to have a pretty big OS on the X1 and therefore absolutely needed 8GB of RAM, which is why they decided on DDR3 RAM pretty early on and included ESRAM to make it work; GDDR5 modules were only 256MB each at the time, and putting 32 of them into a console would have needed a hellishly complicated/expensive mainboard design.

There was no guarantee 512MB modules would become available in time. Sony, on the other hand, decided on GDDR5 RAM and 4GB (with a small-footprint OS), but the 512MB modules came just in time to replace the original 256MB ones in the plan, giving the PS4 8GB of GDDR5 RAM.
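The module math behind that theory is easy to sanity-check with a quick sketch of my own (the chip densities are the ones mentioned above): 8GB out of 256MB chips means 32 chips on the board, while the 512MB parts that arrived in time cut that to 16.

```python
# How many memory chips does an 8GB pool take at a given chip density?
# GDDR5 chips have a 32-bit interface; board complexity grows with chip
# count (routing, clamshell placement, etc.), which is the cost argument.

TARGET_GB = 8

def chips_needed(chip_mb):
    return TARGET_GB * 1024 // chip_mb

for label, chip_mb in [("256MB (2Gbit) GDDR5", 256),
                       ("512MB (4Gbit) GDDR5", 512)]:
    n = chips_needed(chip_mb)
    print(f"{label}: {n} chips for {TARGET_GB}GB, {n * 32}-bit of chip interfaces")
```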



ethomaz said:

There is a third option: don't use ESRAM.

Not to give all the credit to Sony, but if what they showed is true, they tested many EDRAM configs for the PS4 (even 1GB/s) and the results were not good enough... so why didn't MS face this during the design phase? What were they expecting?


ESRAM is much more energy-efficient and gives more flexibility in optimisation when you are dealing with an SoC configuration.

Basically all smartphones and tablets use some kind of embedded RAM.

GPU cards are struggling to get past the diminishing returns of adding a lot of VRAM while keeping throughput up. They waste tons of energy moving data back and forth between main RAM, VRAM, and the GPU/CPU. Low-level APIs can help now on PC, but the problem remains.

For the XOne, a bigger ESRAM would have meant a bigger GPU, otherwise all the extra room would be wasted, and also a bigger CPU, as most of the bus would be idle most of the time.

This is all the "balance" stuff that Penello was talking about last year.

Developers never demanded that Microsoft remove the ESRAM, or even a much more powerful GPU. They were mostly happy with the 8GB of RAM and the 8-core CPU.
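For the bandwidth side of that balance, here's a sketch using the commonly quoted launch specs (peak theoretical figures, not sustained rates): the DDR3 pool is well behind the PS4's GDDR5, and the ESRAM's job is to close that gap for whatever fits in 32MB.

```python
# Peak theoretical bandwidth = (bus width in bytes) x effective data rate.
# Numbers are the commonly quoted launch specs; sustained rates are lower.

def peak_gbs(bus_bits, effective_mtps):
    """Bus width in bits, effective transfer rate in MT/s -> GB/s."""
    return bus_bits / 8 * effective_mtps * 1e6 / 1e9

for name, bits, rate in [("Xbox One DDR3-2133, 256-bit", 256, 2133),
                         ("PS4 GDDR5 5.5Gbps, 256-bit", 256, 5500)]:
    print(f"{name}: {peak_gbs(bits, rate):.0f} GB/s peak")

# The 32MB ESRAM is quoted at ~109 GB/s per direction (~204 GB/s combined
# read+write, per Microsoft), but only for data that lives in those 32MB.
```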



Aura7541 said:

It's not really the cost, but the way the ESRAM is integrated into the GPU/CPU die. MS can increase the amount of ESRAM, but if they do, they have to shrink one of the other two components. In this case, the GPU would be shrunk, because the CPU is already small. MS is between a rock and a hard place with this type of configuration. You either sacrifice GPU power or you sacrifice the ESRAM buffer. Neither choice is good.

After looking it up, it seems you're right, and a lot of people are of the opinion that there wasn't enough room on the SoC for more than 32MB of ESRAM. There isn't much wiggle room here for Microsoft, it seems. They're just going to have to keep working on other techniques and workarounds, but it will probably always be an issue.



Is this for real? 'Cause that sure is a huge difference.


