EpicRandy said:
SvennoJ said:

Yes, and 1440p is perfectly fine from a regular sitting distance. 30 fps is also perfectly fine for many genres.

So you could make a fully dynamic open world with deformable terrain like From Dust (but not restricted to a tiny world), using all available RAM and resources, running at 1440p30 on PS5 and Series X. But how would that run on Series S? RAM and memory bandwidth are not only for 4K textures and higher fps. Sure, if they're used only for rendering at higher resolutions, then it's no problem. But to make more interactive games you need more fast memory. Split-screen requires more RAM and high memory bandwidth to basically run two instances of the game.

RAM restricts what you can do when it comes to dynamic worlds and building games. RAM also restricts what you can do in terms of optimizations. Memory is the biggest resource for development; 4K textures are just a small part of it.

Anyway, 16GB is already cramped; I wouldn't want anything less than 32GB on my gaming laptop. Its CPU and GPU are far weaker than the Series X's, yet thanks to 32GB of RAM (plus another 6GB of video RAM) I can crank FS2020 up to draw distances the Series X can never compete with, while draw distance on Series S in FS2020 is pretty awful. For a flight sim, draw distance is immersion. It's incredible they got it working at all, yet it would have been better with more RAM.

Series S has benefited from the much longer cross-generation period than usual, yet now it's turning into a boat anchor for new game innovation, or it will simply get left behind.

Mesh deformation is all handled by the CPU, as is all game logic; a feature-rich title will make extensive use of the CPU and will likely be CPU-bottlenecked. That's why it was important for the Series S to have the same CPU capacity. Meshes are simply not that demanding on memory and memory bandwidth. For instance, a single uncompressed 4K RGB texture is literally 48MB in memory; in the same space you can have a mesh with 16M+ vertices and up to the same number of tris (the max is that amount minus one). On Series S you're also likely to work with lower mesh LoDs, so it should be even easier on the CPU.
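As a quick sanity check on that 48MB figure, here's the arithmetic spelled out (my own back-of-the-envelope sketch, assuming a 4096x4096 texture with three 8-bit channels):

```python
# Back-of-the-envelope texture math (assumption: a 4096x4096 texture,
# three 8-bit channels, no mip chain, no compression).
width = height = 4096
bytes_per_pixel = 3                          # RGB8: one byte per channel
size_mib = width * height * bytes_per_pixel / 2**20
print(f"{size_mib:.0f} MiB")                 # -> 48 MiB
```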

Not always done on the CPU.
And considering that with RDNA you aren't geometry-limited like you were with Graphics Core Next, it's not going to be the bottleneck it once was.


In Unity, for example, you can make use of the tessellator on a modern GPU to deform the mesh rather than requiring a high vertex count at all times.
See here: https://docs.unity3d.com/Manual/SL-SurfaceShaderTessellation.html


But it all comes down to the developer: the more you shift the load onto the GPU, the better.

SvennoJ said:

Yet FS2020 is memory-bottlenecked, followed by CPU and GPU. You can use the CPU to augment RAM by calculating and generating things just in time, or you can use RAM to augment the CPU by keeping pre-calculated tables in memory, or simply by keeping more in memory and loading/preparing things ahead of time, etc.
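To make that trade-off concrete, here's a toy Python sketch of my own (nothing from FS2020): pay CPU time on every call, or pay RAM once and look the answer up.

```python
import math

# Option A: compute on demand - cheap on RAM, pays CPU time on every call.
def terrain_height(x: float) -> float:
    # stand-in for an expensive per-sample calculation
    return sum(math.sin(x * f) / f for f in range(1, 64))

# Option B: pre-calculated table - pays RAM up front, near-free per lookup.
STEP = 0.001
TABLE = [terrain_height(i * STEP) for i in range(100_000)]

def terrain_height_cached(x: float) -> float:
    # nearest-sample lookup; trades RAM and some accuracy for CPU time
    return TABLE[int(x / STEP) % len(TABLE)]
```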

Are you talking about procedural generation?

SvennoJ said:

For example, FS2020 first kept everything around you in RAM, using over 20GB of system RAM with maximum draw distance. Turning the camera around was smooth; it worked great. Then the Xbox update came and memory use was massively reduced. How? By aggressively culling everything that's not in the current view. The result: massive stuttering when turning the camera around, since all that geometry and detail had to be loaded again. At the time I made a workaround by putting the game's cache on a RAM disk, basically keeping it in memory in a roundabout way (sketched below).
https://forums.flightsimulator.com/t/rollingcache-revisited-essential-with-aggressive-culling-or-set-terrain-pre-caching-to-ultra/433841
You can see the difference already. It didn't really solve it, as you still see detail build up even when fetching from RAM disk to RAM. Luckily Asobo later gave the option to turn off the aggressive culling, and on PC you can swing the camera around again without stuttering and pop-in everywhere.
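For the curious, the workaround amounts to something like this (a hypothetical sketch: the paths are made up, the RAM disk must already exist, e.g. created with a tool like ImDisk, and os.symlink on Windows needs admin rights or developer mode):

```python
import os
import shutil

# Hypothetical paths: the real rolling-cache location depends on the install,
# and R:\ is assumed to be an already-mounted RAM disk.
cache = r"C:\Users\me\...\ROLLINGCACHE.CCC"   # where the game expects its cache
ramdisk_copy = r"R:\ROLLINGCACHE.CCC"         # same file, but backed by RAM

shutil.move(cache, ramdisk_copy)  # relocate the cache onto the RAM disk
os.symlink(ramdisk_copy, cache)   # leave a link so the game still finds it
```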

Culling has been a tactic used in some form for decades now... most prominently since ATI introduced HyperZ with the original Radeon in 2000, which was a set of technologies that worked together.

Obviously today it's more advanced, where culling can be "predicted" ahead of time.

It helps, no doubt.

Texture and mesh streaming started to gain more traction during the 7th gen, most notably with Modern Warfare 2, in order to circumvent the RAM limits of those consoles at the time... With SSDs that can obviously be taken to the next level, especially with lots of small random textures being streamed. (Optical/mechanical drives hated small random reads.)

In Unreal Engine, if there isn't enough room for the high-quality textures in DRAM, textures drop to their lowest mip level until the higher-quality versions are streamed into memory... Hence the "texture pop-in" effect you often see in Unreal-powered games... It's more or less because the developers pushed things too far and didn't have the memory to cache.
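Here's a toy sketch of that fallback behaviour (my own simplification in Python, not Unreal's actual API): render with whatever mip is resident, starting from the lowest, until higher mips finish streaming in.

```python
from dataclasses import dataclass, field

@dataclass
class StreamedTexture:
    mip_count: int                              # mip 0 = full resolution
    resident: set = field(default_factory=set)  # mips currently in memory

    def __post_init__(self):
        self.resident.add(self.mip_count - 1)   # lowest mip always resident

    def best_resident_mip(self) -> int:
        return min(self.resident)               # highest quality in memory

    def on_mip_streamed(self, mip: int) -> None:
        self.resident.add(mip)

tex = StreamedTexture(mip_count=12)
print(tex.best_resident_mip())   # 11 -> the blurry placeholder ("pop-in")
tex.on_mip_streamed(0)           # full-res mip finishes streaming from disk
print(tex.best_resident_mip())   # 0  -> full quality
```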

Norion said:
Pemalite said:

As a Series S, X, PS5, PC and Switch owner... I would have absolutely zero objection if GTA5 targets say... 900P and 30fps on the Series S.

My expectation for the Series S is that it will be a console where games are compromised compared to its bigger brothers. - Just dropping down to 30fps doubles your render time window (about 33.3ms per frame instead of 16.7ms at 60fps).
For simpler titles like Ori or Rayman, I do expect true 4K or better on the Series S.

I would also like to see backwards-compat games run in their "One X" mode on Series S. (Except Xbox One titles, of course, due to the RAM difference.)

People getting a cheap option to play stuff like GTA 6 is nice even if it runs poorly but I am concerned about it being a headache for a lot of developers a few years from now. It's kinda like if developers were forced to have their games run properly on a 2060 till like 2031 if they were making a PC version. 

Shouldn't run "poorly". - That tends to be all relative though. - I have seen people who are perfectly happy playing their games at 15-20fps.

Grand Theft Auto hasn't exactly pushed the graphics envelope for a long time now, so Rockstar tend to be a little more conservative on hardware requirements, prioritising gameplay over visual effects with that title.

Even if the game ends up being 900P, 30fps... I think that is more than acceptable for the Series S.

Kyuu said:

I think the first-generation Series S will be dropped (as a mandated SKU) in favor of a more powerful Series S+ (hopefully 75% more powerful, 12GB of RAM, bigger SSD). Unless Series S succeeds in appealing to a very large non-gamer or non-console-gamer demographic, I'd rather it not be mandated, because it would suck to see the more ambitious games in the early-to-mid 2030s being held back by Series S tech. It's bad enough that we're still stuck to Xbox One specs in the early 2020s thanks to cross-gen overstaying its welcome. I want the minimum spec to have as high a floor as economically possible. Series S can still get a ton of support without needing to be mandated, because the Switch 2 exists and it's going to be huge.

The reason Series S is the best-selling Xbox is that the X isn't being produced in large enough volumes.

My primary concern with the S is that we're probably getting mid-gen upgrades in a few years. Mid-gen upgrades will allow developers to go crazier with their games, to the point that even Series X and PS5 will often struggle at sub-1080p/40fps. There would be a limit to how much a developer can scale back on Series S before it gets ridiculous (challenging, unplayable, costly, time-wasting, etc.).

I understand and agree fully, but consoles tend to be supported for the entire generation.

I always want the graphics bar to be raised more; I would have liked to see the Series X and PlayStation 5 offer more hardware, but the current climate didn't allow for it, hence the mid-range hardware.

The Series S has been a success for Microsoft, so it is not going away.



--::{PC Gaming Master Race}::--