
Hellblade II is 30fps on Xbox Series consoles

40fps/120Hz performance mode would be nice.



Conina said:

40fps/120Hz performance mode would be nice.

THIS. I can understand Xbox fans making excuses for the lack of 60fps, but it's strange not to even ask for a 40fps mode at the very least. They would rather blindly defend the absence of 60fps than acknowledge the possibility of a 40fps mode. Maybe they fear that bringing up the missing 40fps mode would just make the XBSX look even more incapable than it already does. It will be very interesting to see, once the game comes out on PC, what the XBSX-equivalent PC components can achieve in terms of FPS.
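
For anyone wondering why 40fps modes get tied to 120Hz output specifically, here's a quick back-of-the-envelope sketch (plain Python; the only inputs are the standard display refresh rates):

```python
# A frame rate paces cleanly only when each frame occupies a whole number of
# display refresh cycles; otherwise frame delivery judders. 40 divides 120
# evenly (3 refreshes per frame) but not 60, hence the 120Hz requirement.

def paces_evenly(fps: int, refresh_hz: int) -> bool:
    return refresh_hz % fps == 0

for fps in (30, 40, 60):
    frame_ms = 1000 / fps
    print(f"{fps}fps = {frame_ms:.1f}ms/frame | "
          f"60Hz: {'clean' if paces_evenly(fps, 60) else 'judder'} | "
          f"120Hz: {'clean' if paces_evenly(fps, 120) else 'judder'}")
```

Worth noting: 40fps's 25ms frame time sits exactly halfway between 30fps (33.3ms) and 60fps (16.7ms), which is why a 40fps mode feels much closer to 60 than the number suggests.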



A 40fps mode should be a mandatory addition whenever a game doesn't offer a 60fps mode. I want it in GTA VI as well.



Graphics, Resolution and FPS - I like having the freedom to choose whether or not I want some of these settings reduced to enhance the others. While console gaming has made progress here in recent years, much more can still be done.



I honestly don't see this as a bad thing, personally.

If it's anything like the first game, Hellblade 2's primary focus will be on immersion rather than precise action, so it makes sense to prioritize detail and fidelity. Besides requiring technical compromises, 60fps also takes a hell of a lot of extra optimization work, which would delay the game.

It's also cool to see a game really push the limits of what the Xbox Series is capable of.

They may yet add a performance mode post-launch, like A Plague Tale: Requiem did, but in the meantime, I think 30fps is fine for a game like this; it certainly didn't inhibit my enjoyment of the first game.




Until 2012 I had a GeForce 9500 GT with 512 MB of VRAM. I'm used to 30 FPS.

Not a problem for me.



For this kind of game, 60fps is not needed, especially with the type of visuals we are getting. There's no game better looking than this.



curl-6 said:

I honestly don't see this as a bad thing, personally.

If it's anything like the first game, Hellblade 2's primary focus will be on immersion rather than precise action, so it makes sense to prioritize detail and fidelity. Besides requiring technical compromises, 60fps also takes a hell of a lot of extra optimization work, which would delay the game.

It's also cool to see a game really push the limits of what the Xbox Series is capable of.

They may yet add a performance mode post-launch, like A Plague Tale: Requiem did, but in the meantime, I think 30fps is fine for a game like this; it certainly didn't inhibit my enjoyment of the first game.

Yeah, combat (especially thanks to the ability to feint) is about the only thing I liked in Hellblade, and I don't remember having any problems with it because of the 30fps.



Hardstuck-Platinum said:
Chrkeller said:

Not surprised. 60fps takes a lot of memory bandwidth, and 120fps even more so. The PS5 is around 448 GB/s and the Series X around 560 GB/s, while the 4090 is roughly 1000 GB/s. Consoles can't do high graphics and high fps.

No, it's definitely CPU-related. As the GPU in the XBSX is roughly 3x more powerful than the one in the XBSS, any game that's GPU-limited to 30 on the S would hit 60 no problem on the X. The problem, though, is that they have the same CPU, so any game that's CPU-limited to 30 on the S will also be CPU-limited to 30 on the X. MS should have designed the X with a more powerful CPU to avoid this 30fps problem.

I agree this is most likely a CPU bottleneck, but Microsoft simply could not have made the Series X CPU much stronger than it is.

First, because that would have created far more issues than it solved. Keeping the CPUs very close ensures that games are identical feature-wise and that the divergence happens almost exclusively in presentation. There is some leniency, though: when rendering scenes at reduced resolution with simplified assets, the CPU is slightly less loaded (fewer draw calls would be expected, for instance), and that's actually what we see between the Series S and X CPUs, where the X is about 8% faster.

Secondly, the CPU was already the best AMD could offer in 2020 for an 8-core part in a ~200W max system while avoiding chip-yield issues. Going beyond it would have meant either:

a) Going to 12/16 cores.

This would only have given mixed results, especially with third-party games, since a game must be written to take advantage of the extra cores and not everything is easy to parallelize. It would also have increased die size significantly, reduced yields, and raised the power requirement, leading to a costlier, louder, even bulkier system.

b) Increasing frequency.

There is some headroom here, but the gain would only have been marginal: a best-case scenario of around +20%. And it would either mean a significant increase in power consumption, leading to a louder and/or bulkier system, or a more aggressive chip-binning approach, leading to reduced yields and significantly increased cost.
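
To put the CPU-bottleneck argument and the cores-vs-frequency trade-off in concrete terms, here's a rough sketch (plain Python; the frame-time numbers and the 70% parallel fraction are made-up illustrations, and Amdahl's law is the standard scaling model, nothing Xbox-specific):

```python
# Frame rate is gated by whichever of the CPU and GPU takes longer per frame.
# Illustrative numbers: a scene costing 33ms of CPU work per frame, 33ms of
# GPU work on Series S, and a Series X GPU assumed ~3x faster.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    return 1000 / max(cpu_ms, gpu_ms)

cpu_ms = 33.0
print(f"Series S: {fps(cpu_ms, 33.0):.0f}fps")      # ~30fps (CPU- and GPU-limited)
print(f"Series X: {fps(cpu_ms, 33.0 / 3):.0f}fps")  # still ~30fps: same CPU caps it

# Amdahl's law: extra cores only speed up the parallel fraction of the CPU's
# per-frame work, while a clock bump scales the serial part too.
def amdahl_speedup(parallel_fraction: float, core_ratio: float) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / core_ratio)

p = 0.7  # assume 70% of the CPU frame work parallelizes (illustrative)
print(f"8 -> 16 cores: {amdahl_speedup(p, 2.0):.2f}x CPU speedup")  # ~1.54x, not 2x
print(f"+20% clock:    1.20x CPU speedup")  # uniform, but capped at the headroom
```

Under those assumptions, doubling the core count gets nowhere near 2x (the "mixed results" above), while the ~20% clock headroom caps option b) at roughly a 20% gain no matter what.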



EpicRandy said:
Hardstuck-Platinum said:

No, it's definitely CPU-related. As the GPU in the XBSX is roughly 3x more powerful than the one in the XBSS, any game that's GPU-limited to 30 on the S would hit 60 no problem on the X. The problem, though, is that they have the same CPU, so any game that's CPU-limited to 30 on the S will also be CPU-limited to 30 on the X. MS should have designed the X with a more powerful CPU to avoid this 30fps problem.

I agree this is most likely a CPU bottleneck, but Microsoft simply could not have made the Series X CPU much stronger than it is.

First, because that would have created far more issues than it solved. Keeping the CPUs very close ensures that games are identical feature-wise and that the divergence happens almost exclusively in presentation. There is some leniency, though: when rendering scenes at reduced resolution with simplified assets, the CPU is slightly less loaded (fewer draw calls would be expected, for instance), and that's actually what we see between the Series S and X CPUs, where the X is about 8% faster.

Secondly, the CPU was already the best AMD could offer in 2020 for an 8-core part in a ~200W max system while avoiding chip-yield issues. Going beyond it would have meant either:

a) Going to 12/16 cores.

This would only have given mixed results, especially with third-party games, since a game must be written to take advantage of the extra cores and not everything is easy to parallelize. It would also have increased die size significantly, reduced yields, and raised the power requirement, leading to a costlier, louder, even bulkier system.

b) Increasing frequency.

There is some headroom here, but the gain would only have been marginal: a best-case scenario of around +20%. And it would either mean a significant increase in power consumption, leading to a louder and/or bulkier system, or a more aggressive chip-binning approach, leading to reduced yields and significantly increased cost.

Imagine how much more future-proof the consoles would be if they had launched in 2021 with Zen 3 CPUs instead; my Ryzen 5600X is so much more powerful in games than the PS5 and XSX.
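
For a rough sense of that gap, here's a back-of-the-envelope comparison (plain Python; it uses AMD's own ~19% Zen 3-over-Zen 2 IPC figure and the advertised clocks, and ignores memory, power budget, and the 6-vs-8-core difference, so treat it as a per-core ballpark only):

```python
# Rough per-core throughput estimate: clock (GHz) * relative IPC.
# Zen 3's ~19% IPC uplift over Zen 2 is AMD's published figure; clocks are
# the advertised ones (5600X boost 4.6GHz, PS5 up to 3.5GHz, Series X 3.8GHz).
ZEN2_IPC, ZEN3_IPC = 1.00, 1.19

ps5      = 3.5 * ZEN2_IPC
xsx      = 3.8 * ZEN2_IPC
r5_5600x = 4.6 * ZEN3_IPC

print(f"5600X vs PS5 per-core:      {r5_5600x / ps5:.2f}x")  # ~1.56x
print(f"5600X vs Series X per-core: {r5_5600x / xsx:.2f}x")  # ~1.44x
```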