
Forums - Microsoft Discussion - Starfield will be 30 fps on Xbox Series X and S.

curl-6 said:

Not everything can be time sliced and still work well though.

Things like tracking a massive number of dynamic objects doesn't have a fixed cost either, comparing it to Morrowind would be like saying reflections can't be demanding in modern games because some N64 games had them. These things scale.

PS5/Xbox Series are not infinitely powerful, their CPU/GPUs are not made of magic pixie dust, they offer a limited amount of power and some devs will want to do more with their games than current consoles can handle at 60fps.

My point was that simulations are usually updated at the precision required, not necessarily once per rendered frame; at times their cost can stay constant with frame-rate, while at other times it can scale super-linearly with frame-rate. 
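That decoupling is commonly implemented as a fixed-timestep loop. A minimal sketch (the 30 Hz rate, function names, and structure are illustrative only, not taken from any shipping engine):

```python
# Minimal fixed-timestep loop: the simulation advances at its own rate
# (here 30 Hz) regardless of how fast frames are rendered.
SIM_DT = 1.0 / 30.0  # simulation step, chosen for the precision required

def run(frames, frame_dt):
    """Simulate `frames` rendered frames, each taking `frame_dt` seconds."""
    accumulator = 0.0
    sim_steps = 0
    for _ in range(frames):
        accumulator += frame_dt
        # Run as many fixed steps as the elapsed time requires.
        while accumulator >= SIM_DT:
            sim_steps += 1          # update_simulation(SIM_DT) would go here
            accumulator -= SIM_DT
    return sim_steps

# Rendering one second of game time at 60fps or 30fps yields the same
# number of simulation steps:
print(run(frames=60, frame_dt=1/60))  # 30
print(run(frames=30, frame_dt=1/30))  # 30
```

The simulation cost per second is identical at both frame rates; only the rendering cost doubles.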

As to the second point,

Sure, these things scale. That's not in dispute. What has also scaled since Morrowind is the hardware: memory capacity, memory bandwidth, storage space, CPU speed, etc. Even if the number of objects (and their storage size) scaled to 1000 times Morrowind's (for a game-world 8 times larger than Morrowind's), all of this hardware has also scaled by at least that much, if not more, and Bethesda has likely learned many more tricks since then (e.g. using more efficient data structures, putting a 30-day time limit on object permanence, etc.) 

Object permanence is mostly a storage/memory-management problem. The CPU does have a role in it (writing from memory to disk, pulling from disk into memory), but it isn't some magic workload that is especially taxing. 

Most developers don't implement it in their games because it is a system that is costly to pull off correctly at development time, not necessarily costly at runtime (if it is pulled off correctly.) 
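As an illustration of why this is bookkeeping rather than per-frame computation, here is a toy change table with a 30-day eviction pass. All names, structures, and policy details are hypothetical; this is not Bethesda's code, only a sketch of the data-management idea:

```python
# Illustrative object-permanence table: per-object change records keyed
# by object ID, evicted when their cell hasn't been visited for 30
# in-game days. Purely a sketch, not any engine's real implementation.
PERMANENCE_LIMIT_DAYS = 30

class ChangeTable:
    def __init__(self):
        self.records = {}  # object_id -> (cell, position, last_visit_day)

    def record_move(self, object_id, cell, position, day):
        self.records[object_id] = (cell, position, day)

    def touch_cell(self, cell, day):
        # Visiting a cell refreshes the timer on every record in it.
        for oid, (c, pos, _) in list(self.records.items()):
            if c == cell:
                self.records[oid] = (c, pos, day)

    def cleanup(self, today):
        # Storage-side work: an occasional linear sweep, not a per-frame cost.
        stale = [oid for oid, (_, _, d) in self.records.items()
                 if today - d > PERMANENCE_LIMIT_DAYS]
        for oid in stale:
            del self.records[oid]

table = ChangeTable()
table.record_move("iron_dagger_01", cell=(3, 7), position=(12.0, 4.5, 0.2), day=1)
table.record_move("cheese_wheel_09", cell=(5, 2), position=(0.0, 1.0, 0.0), day=1)
table.touch_cell((3, 7), day=40)   # player revisits this cell on day 40
table.cleanup(today=40)            # untouched record (39 days old) is evicted
print(sorted(table.records))       # ['iron_dagger_01']
```

Nothing here runs per frame; the table is consulted when cells load and swept occasionally, which is why the cost shows up in development effort and storage rather than frame time.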

Last edited by sc94597 - on 17 June 2023

sc94597 said:
curl-6 said:

Not everything can be time sliced and still work well though.

Things like tracking a massive number of dynamic objects doesn't have a fixed cost either, comparing it to Morrowind would be like saying reflections can't be demanding in modern games because some N64 games had them. These things scale.

PS5/Xbox Series are not infinitely powerful, their CPU/GPUs are not made of magic pixie dust, they offer a limited amount of power and some devs will want to do more with their games than current consoles can handle at 60fps.

My point was that simulations usually are updated by the precision required, not necessarily the frames rendered, and at times they could scale constantly with frame-rate while other times they could scale super-linearly with frame-rate. 

As to the second point,

Sure, these things scale. That's not in dispute. What has also scaled since Morrowind are memory capacity, memory bandwidth, storage space, CPU speed, etc.  Even if the number of objects (and their storage size) scaled by 10000 times Morrowind (for a game-world that is 8 times larger than Morrowind ) all of this hardware has also scaled by at least that much, if not more, than that and Bethesda has likely learned many more tricks since then (i.e using more efficient data structures, putting a time-limit of 30 days on object permanence, etc.) 

Object-permanence mostly is a storage/memory management problem. The CPU does have a role in it (writing from memory to the disk/pulling to the memory from the disk) but it isn't some magic workload that is special. 

Most developers don't implement it in their games because it is a system that is developmentally costly to pull off correctly, not necessarily runtime costly (if it is pulled off correctly.) 

I'm aware, but that doesn't change the fact that running at 60 gives you half as much frame time to work with. There may be technicalities with the update rate of different tasks and systems or how your load is spread across cores/threads/components, but ultimately 33.3ms is double 16.7ms.

And world scale doesn't tell you much, it's what's in it that will determine how demanding it is. You can make a single room unable to hit 60 if you fill it with enough processing tasks. 

As Digital Foundry points out, 30fps vs 60fps is a design choice. Developers have a vision, and a limited amount of power to try to achieve it. There are some things current hardware simply can't do at 60.



curl-6 said:
sc94597 said:

My point was that simulations usually are updated by the precision required, not necessarily the frames rendered, and at times they could scale constantly with frame-rate while other times they could scale super-linearly with frame-rate. 

As to the second point,

Sure, these things scale. That's not in dispute. What has also scaled since Morrowind are memory capacity, memory bandwidth, storage space, CPU speed, etc.  Even if the number of objects (and their storage size) scaled by 10000 times Morrowind (for a game-world that is 8 times larger than Morrowind ) all of this hardware has also scaled by at least that much, if not more, than that and Bethesda has likely learned many more tricks since then (i.e using more efficient data structures, putting a time-limit of 30 days on object permanence, etc.) 

Object-permanence mostly is a storage/memory management problem. The CPU does have a role in it (writing from memory to the disk/pulling to the memory from the disk) but it isn't some magic workload that is special. 

Most developers don't implement it in their games because it is a system that is developmentally costly to pull off correctly, not necessarily runtime costly (if it is pulled off correctly.) 

I'm aware, but that doesn't change the fact that running at 60 gives you half as much frame time to work with. There may be technicalities with the update rate of different tasks and systems or how your load is spread across cores/threads/components, but ultimately 33.3ms is double 16.7ms.

And world scale doesn't tell you much, it's what's in it that will determine how demanding it is. You can make a single room unable to hit 60 if you fill it with enough processing tasks. 

As Digital Foundry points out, 30fps vs 60fps is a design choice. Developers have a vision, and limited amount of power to try to achieve it. There are some things current hardware simply can't do at 60.

Sure, it gives you twice as much time to finish a calculation; my point is that it doesn't necessarily mean double the available processing power. 

Here is how item locations are implemented in Skyrim. Do you think Starfield will work fundamentally differently? I doubt it, and nothing shown so far tells us otherwise. I suppose the location space itself is fungible because of the procedural generation; that might be a difference, but it affects calculations only once (as the world is procedurally generated.)  

https://en.uesp.net/wiki/Skyrim_Mod:Save_File_Format

globalDataTable1 Global Data[fileLocationTable.globalDataTable1Count] Types 0 to 8.
globalDataTable2 Global Data[fileLocationTable.globalDataTable2Count] Types 100 to 114.
changeForms Change Form[fileLocationTable.changeFormCount]
globalDataTable3 Global Data[fileLocationTable.globalDataTable3Count] Types 1000 to 1005.

You might also have longer object permanence (in Skyrim, if a tile isn't touched for 30 in-game days, its objects are removed from the table). Or you might have a wider variety of items remain permanent. 

But that will mostly affect save file size. Nothing shown so far hints, for example, that while you are on planet X there is physics running on planet Y's objects such that their locations change while you are gone. "Permanence" in Elder Scrolls/Fallout/etc. is mostly permanence of an object's location and form, which are stored in a table and pulled when you load a save file. 

This is very different from, say, a game like Tears of the Kingdom, where interactions continue around an object after you leave it (which is why there is a 20-object limit and a maximum distance limit, producing very little permanence overall in that game.) 



sc94597 said:
curl-6 said:

I'm aware, but that doesn't change the fact that running at 60 gives you half as much frame time to work with. There may be technicalities with the update rate of different tasks and systems or how your load is spread across cores/threads/components, but ultimately 33.3ms is double 16.7ms.

And world scale doesn't tell you much, it's what's in it that will determine how demanding it is. You can make a single room unable to hit 60 if you fill it with enough processing tasks. 

As Digital Foundry points out, 30fps vs 60fps is a design choice. Developers have a vision, and limited amount of power to try to achieve it. There are some things current hardware simply can't do at 60.

Sure, it gives you twice as much time; my point is that it doesn't necessarily mean double the available processing power. 

Here is how item locations are implemented in Skyrim. Do you think Starfield will work fundamentally differently? I doubt it and nothing shown so far tells us otherwise. I suppose the location space itself is fungible because of the procedural generation, that might be a difference that affects calculations once (as the world is procedurally generated.)  

https://en.uesp.net/wiki/Skyrim_Mod:Save_File_Format

globalDataTable1 Global Data[fileLocationTable.globalDataTable1Count] Types 0 to 8.
globalDataTable2 Global Data[fileLocationTable.globalDataTable2Count] Types 100 to 114.
changeForms Change Form[fileLocationTable.changeFormCount]
globalDataTable3 Global Data[fileLocationTable.globalDataTable3Count] Types 1000 to 1005.

You might also have longer object permanence (in Skyrim if the tile wasn't touched in 30 in-game days, then the object is removed from the table.) Or you might have more variety of items remain permanent. 

But that will mostly affect the save file size. Nothing that has been shown so far hints, for example, that if you are located on planet X there are some physics going on - on planet Y's objects while you are gone and the objects' locations might change due to that. "Permanence" in Elder Scrolls/Fallout/etc. is mostly permanence of the object's location and form, which are stored in a table and pulled when you load a save file. 

This is contrastingly different from say a game like Tears of the Kingdom where there are interactions going on when you leave an object (which is why there is a 20 object and maximum distance limit, overall producing very little permanence in that game.) 

But it does give you more time; no matter how much power you have, 30fps gives you more time.

And that's only one of countless jobs the CPU has to handle. For the kind of big and complex games Bethesda makes, they may not want to compromise their design goals just for the sake of 60fps.

On fixed hardware, you simply can't do as much at 60 as you can at 30.

Last edited by curl-6 - on 17 June 2023

curl-6 said:

But it does give you more time; no matter how much power you have, 30fps gives you more time.

And that's only one of countless jobs the CPU has to handle. For the kind of big and complex games Bethesda makes, they may not want to compromise their design goals just for the sake of 60fps.

On fixed hardware, you simply can't do as much at 60 as you can at 30.

Yes, running the game at 30fps gives you more time. That doesn't necessarily mean "you will always be able to process twice as much per frame at 30fps as you can at 60fps."

"Big and complex" doesn't necessarily mean that performance will be affected. It matters in which ways the game is big and complex, and that is why it also matters what Bethesda is doing to optimize the title. If they are targeting 30fps while leaving performance on the table (because, for example, their engine and development team aren't utilizing the full CPU resources) and then using "big and complex" as an excuse for the performance target, then that is a very different scenario from optimizing the game and realizing 30fps is the necessary target. 

Given their track record and how Todd Howard framed his statement, I think the first scenario isn't entirely unlikely. 



sc94597 said:
curl-6 said:

But it does give you more time; no matter how much power you have, 30fps gives you more time.

And that's only one of countless jobs the CPU has to handle. For the kind of big and complex games Bethesda makes, they may not want to compromise their design goals just for the sake of 60fps.

On fixed hardware, you simply can't do as much at 60 as you can at 30.

Yes, running the game at 30fps gives you more time. That doesn't necessarily mean "you will always be able to process twice as much per frame at 30fps as you can at 60fps."

"Big and complex" doesn't necessarily mean that performance will be affected. It matters in which ways the game is big and complex, and that is why it also matters what Bethesda is doing to optimize the title. If they are targeting 30fps while leaving performance on the table (because, for example, their engine and development team aren't utilizing the full CPU resources) and then using "big and complex" as an excuse for the performance target, then that is a very different scenario from optimizing the game and realizing 30fps is the necessary target. 

Given their track record, I think the former scenario isn't entirely unlikely. 

It's possible, but I wouldn't jump straight to blaming the team (or the engine, since frankly we know little about its current makeup) when it's entirely likely we're simply seeing the limits of what now two-and-a-half-year-old hardware can do at 60fps. Xbox Series isn't cutting edge any more; there was always going to come a time when games started to move past what it could breeze through.



curl-6 said:
sc94597 said:

Yes, running the game at 30fps gives you more time. That doesn't necessarily mean "you will always be able to process twice as much per frame at 30fps as you can at 60fps."

"Big and complex" doesn't necessarily mean that performance will be affected. It matters in which ways the game is big and complex, and that is why it also matters what Bethesda is doing to optimize the title. If they are targeting 30fps while leaving performance on the table (because, for example, their engine and development team aren't utilizing the full CPU resources) and then using "big and complex" as an excuse for the performance target, then that is a very different scenario from optimizing the game and realizing 30fps is the necessary target. 

Given their track record, I think the former scenario isn't entirely unlikely. 

It's possible, but I wouldn't jump straight to blaming the team (or the engine, since frankly we know little about its current makeup) when it's entirely likely we're simply seeing the limits of what now two-and-a-half-year-old hardware can do at 60fps. Xbox Series isn't cutting edge any more; there was always going to come a time when games started to move past what it could breeze through.

We'll of course know upon release of the PC version. I suspect that a Ryzen 7 3700X (non-overclocked) + RX 5700 XT will be able to reach a 60fps target at 1080p (internal resolution) with medium settings, either before or after mods release to improve performance. That's hardware roughly on par with (slightly worse than) the Xbox Series X's. 

1. If it can do it without a CPU bottleneck, then the game is probably GPU bound and 60fps was definitely doable with enough effort and likely very little loss in perceived visuals. (Note: The internal resolution of Starfield on Series X is 1296p, only about 1.44 times the pixel-count of 1080p.)

2. If it can't do it, but very little of the CPU is utilized (note: many console -> PC ports scale very well with core count these days), then yes we can blame the development team and/or Bethesda leadership's insistence on sticking to their decades old proprietary engine. 

3. If it can't do it and the CPU is well-utilized, then it is genuinely an optimized, CPU-bound game. 

Hopefully if it is #2, Bethesda gets the right criticism before they go full-production on ESVI, and Microsoft gives them more resources to fundamentally revamp their engine expediently so that ESVI doesn't get delayed. Microsoft has the resources to enable Bethesda to increase head-count significantly. 



sc94597 said:

Sure, it gives you twice as much time to finish a calculation; my point is that it doesn't necessarily mean double the available processing power. 

Here is how item locations are implemented in Skyrim. Do you think Starfield will work fundamentally differently? I doubt it and nothing shown so far tells us otherwise. I suppose the location space itself is fungible because of the procedural generation, that might be a difference that affects calculations once (as the world is procedurally generated.)  

https://en.uesp.net/wiki/Skyrim_Mod:Save_File_Format

globalDataTable1 Global Data[fileLocationTable.globalDataTable1Count] Types 0 to 8.
globalDataTable2 Global Data[fileLocationTable.globalDataTable2Count] Types 100 to 114.
changeForms Change Form[fileLocationTable.changeFormCount]
globalDataTable3 Global Data[fileLocationTable.globalDataTable3Count] Types 1000 to 1005.

You might also have longer object permanence (in Skyrim if the tile wasn't touched in 30 in-game days, then the object is removed from the table.) Or you might have more variety of items remain permanent. 

But that will mostly affect the save file size. Nothing that has been shown so far hints, for example, that if you are located on planet X there are some physics going on - on planet Y's objects while you are gone and the objects' locations might change due to that. "Permanence" in Elder Scrolls/Fallout/etc. is mostly permanence of the object's location and form, which are stored in a table and pulled when you load a save file. 

This is contrastingly different from say a game like Tears of the Kingdom where there are interactions going on when you leave an object (which is why there is a 20 object and maximum distance limit, overall producing very little permanence in that game.) 

In the case of TotK it's also a memory limit, I guess. The fact that items animate/interact doesn't make permanence any more complex either; just like AI, they are only active in a bubble around you. Minecraft (at least on console) doesn't keep animating redstone when you move out of range either. Once you're out of range, the game can simply store the object and its state for when you come back into range.

TotK has plenty of permanence, as it remembers every single thing you picked up / mined / killed. Of course that's just simple bit flags: delete what has already been done, while everything else resets. (Which leads to weird situations where crates return yet the loot inside them remains gone.) But it does not remember stuff you dropped or built. I guess the time limit on animated items is there so the game isn't too easy, and everything else resets so you can build stuff again. There's little reason for the game to have a problem with rendering items in a different spot than where they were when streaming that segment in; yet maybe applying the change log when streaming in the next cell took too much time or memory. The game already had severe frame rate drops at release (better after the patches).

As long as you store your changes by cell in a quad tree, it doesn't have to take much more time than applying the "removal" process with the bit-flag table: instead of deleting an item, you move it or change its type. And this only has to be done when streaming in the next data segment, completely independent of frame rate or physics calculations.
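A toy version of that idea, with a flat dict standing in for the quad tree (all object and cell names are made up; the point is only that the change log is applied once, at streaming time):

```python
# Sketch of applying a per-cell change log when a world cell streams in.
# A flat dict keyed by cell coordinates stands in for the quad tree;
# changes are applied once at streaming time, independent of frame rate
# or physics. All names are illustrative.
changelog = {
    (4, 9): [("remove", "crate_12"), ("move", "barrel_03", (1.0, 2.0, 0.0))],
    (7, 1): [("remove", "ore_chunk_5")],
}

def stream_in_cell(cell, default_objects):
    """Build the live object set for `cell` from its defaults plus the log."""
    objects = dict(default_objects)  # object_id -> position
    for change in changelog.get(cell, []):
        if change[0] == "remove":
            objects.pop(change[1], None)
        elif change[0] == "move":
            objects[change[1]] = change[2]
    return objects

defaults = {"crate_12": (0.0, 0.0, 0.0), "barrel_03": (5.0, 5.0, 0.0)}
print(stream_in_cell((4, 9), defaults))  # {'barrel_03': (1.0, 2.0, 0.0)}
```

A cell with no log entries streams in with its defaults untouched, so the common case costs nothing extra.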

In the case of complex machines that you leave behind to keep running, abstractions work. If a machine produces a certain ore or amount of money, all you need to do is determine the production rate; when you come back later, it can update with a simple time-passed equation.
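The "time-passed equation" amounts to something like this (rates and names invented for illustration):

```python
# Abstracting an offline machine: instead of simulating it while the
# player is away, store its rate and settle up on return.
def settle_production(rate_per_hour, last_visit_hour, current_hour, stored=0):
    """Amount accumulated while unobserved, computed in a single step."""
    return stored + rate_per_hour * (current_hour - last_visit_hour)

# Player returns after 12 in-game hours to a drill producing 5 ore/hour:
print(settle_production(5, last_visit_hour=100, current_hour=112))  # 60
```

One multiplication replaces hours of simulated machine activity, which is why such abstractions cost essentially nothing at runtime.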

Anyway, object permanence doesn't affect frame rate, but it does affect streaming rate and the memory needed. If you already max those out for the best possible screenshots, then yeah, it's hard to hit 60fps without introducing stutter / pop-in.

Since it's also a PC title, I suspect it won't be using all the CPU cores and multi-threading to the best of the Series' abilities. It's much easier to make an engine that works on more hardware configurations by taking a more generic approach. Interestingly, the minimum requirement is a 6-core processor; the question is whether they will make full use of them. For example, FS2020 is bottlenecked by one core doing most of the heavy lifting while the rest do far less, resulting in about 50% overall CPU utilization on a 6-core CPU. Plus, the Series has a 7th core available for games, but it's hard to offload stuff to another core. Hence it's still often the max speed of a single core that determines how fast an engine can go, not how many cores you have available to do work :/

It's a good sign that 6-core CPUs are a minimum requirement now, but will the game actually use them all? Will it still run on a 4-core i5, or simply refuse to start? Or will it pile threads from lightly used cores onto the same core to keep working (defeating the purpose of using 6 cores)?
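The FS2020-style pattern above can be put into numbers: with one saturated main thread and lightly loaded workers, overall utilization sits far below 100% even though the game is CPU-limited. The figures here are illustrative, chosen to match the ~50% reading mentioned, not measurements:

```python
# Back-of-envelope for a single-core bottleneck: one saturated main
# thread plus lightly loaded workers caps overall CPU utilization well
# below 100%, while frame rate stays pinned to that one core's speed.
def overall_utilization(main_pct, worker_pct, cores):
    return (main_pct + worker_pct * (cores - 1)) / (cores * 100)

# One core at 100% and five workers averaging 40% on a 6-core CPU:
print(overall_utilization(100, 40, 6))  # 0.5 -> reads as "50% CPU usage"
```

This is why a task-manager reading of 50% can coexist with a hard CPU bottleneck.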



sc94597 said:
curl-6 said:

It's possible, but I wouldn't jump straight to blaming the team (or the engine, since frankly we know little about its current makeup) when it's entirely likely we're simply seeing the limits of what now two-and-a-half-year-old hardware can do at 60fps. Xbox Series isn't cutting edge any more; there was always going to come a time when games started to move past what it could breeze through.

We'll of course know upon release of the PC version. I suspect that a Ryzen 7 3700X (non-overclocked) + RX 5700 XT will be able to reach a 60fps target at 1080p (internal resolution) with medium settings, either before or after mods release to improve performance. That's hardware roughly on par with (slightly worse than) the Xbox Series X's. 

1. If it can do it without a CPU bottleneck, then the game is probably GPU bound and 60fps was definitely doable with enough effort and likely very little loss in perceived visuals. (Note: The internal resolution of Starfield on Series X is 1296p, only about 1.44 times the pixel-count of 1080p.)

2. If it can't do it, but very little of the CPU is utilized (note: many console -> PC ports scale very well with core count these days), then yes we can blame the development team and/or Bethesda leadership's insistence on sticking to their decades old proprietary engine. 

3. If it can't do it and the CPU is well-utilized, then it is genuinely an optimized, CPU-bound game. 

Hopefully if it is #2, Bethesda gets the right criticism before they go full-production on ESVI, and Microsoft gives them more resources to fundamentally revamp their engine expediently so that ESVI doesn't get delayed. Microsoft has the resources to enable Bethesda to increase head-count significantly. 

We already know from the devs that the game can hit 60fps on Xbox some of the time, and that the reason it is capped at 30 is for consistency. 

So what will tell us whether 60fps was viable is how well the game holds up in the most demanding areas and moments. If it's 60 most of the time but fluctuates in cities or heavy combat, then a 30fps cap makes sense.



SvennoJ said:

Since it's also a PC title, I suspect it won't be using all the CPU cores and multi-threading to the best of the Series' abilities. It's much easier to make an engine that works on more hardware configurations by taking a more generic approach. Interestingly, the minimum requirement is a 6-core processor; the question is whether they will make full use of them. For example, FS2020 is bottlenecked by one core doing most of the heavy lifting while the rest do far less, resulting in about 50% overall CPU utilization on a 6-core CPU. Plus, the Series has a 7th core available for games, but it's hard to offload stuff to another core. Hence it's still often the max speed of a single core that determines how fast an engine can go, not how many cores you have available to do work :/

It's a good sign that 6-core CPUs are a minimum requirement now, but will the game actually use them all? Will it still run on a 4-core i5, or simply refuse to start? Or will it pile threads from lightly used cores onto the same core to keep working (defeating the purpose of using 6 cores)?

Did you really just assert that the PC is going to hold back the Xbox port?

The number of cores isn't what's important here; it's total CPU performance.

An engine can have 8 threads, but a quad-core CPU can still outperform an 8-core CPU on that 8-thread workload if each of the quad-core's cores is 3x faster.
Total CPU performance > number of cores.

Which is why the Ryzen 5600, despite being only a 6-core CPU, is roughly equivalent to the 8-core CPUs found in the consoles: each individual core is simply better.
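A toy model of that comparison, assuming the workload scales perfectly across threads (which real games rarely achieve; numbers are arbitrary units, not benchmarks):

```python
# Toy throughput model: total CPU performance, not core count, is what
# bounds an ideally parallel workload.
def total_throughput(cores, per_core_speed):
    return cores * per_core_speed

octa = total_throughput(8, 1.0)  # eight slower cores -> 8.0 units
quad = total_throughput(4, 3.0)  # four cores, each 3x faster -> 12.0 units
print(quad > octa)  # True
```

In practice the quad-core's advantage grows further whenever the engine has serial sections, since those run at single-core speed.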



--::{PC Gaming Master Race}::--