EpicRandy said:
Mesh deformation is all handled by the CPU, as is all game logic; a feature-rich title will make extensive use of the CPU and will likely be CPU-bottlenecked. That's why it was important for the Series S to have the same capacity CPU-wise. Meshes are simply not that demanding on memory and memory bandwidth. For instance, a single uncompressed full-4K texture channel is literally 48MB in memory; for the same size you can have meshes with 16M+ vertices and up to the same number of tris (the max is one fewer). On Series S you're likely to also work with meshes at lower LoD, so it should be even easier on the CPU. |
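One reading that makes the quote's 48MB figure work is a 4096×4096 texture with three one-byte channels; the vertex side depends entirely on the vertex format. A quick Python sanity check (the 12-byte position-only vertex is my assumption, not from the post):

```python
# Sanity check of the memory figures quoted above.
# Assumptions (not from the original post): a "full 4K" texture is
# 4096x4096 texels, and a minimal vertex is 3 floats (12 bytes).

TEXELS = 4096 * 4096                  # 16,777,216 texels

# One uncompressed RGB texture (3 bytes per texel):
rgb_bytes = TEXELS * 3                # 50,331,648 bytes
print(rgb_bytes / 2**20)              # 48.0 MiB

# How many position-only vertices fit in the same 48 MiB:
BYTES_PER_VERTEX = 3 * 4              # x, y, z as 32-bit floats
print(rgb_bytes // BYTES_PER_VERTEX)  # 4,194,304 vertices
```

With a richer real-world vertex format (say position + normal + UV at 32 bytes), the same 48 MiB budget holds about 1.5M vertices, so the quoted 16M+ figure only works out with a very compact or quantized format.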
Not always done on the CPU.
And considering that with RDNA you aren't geometry-limited the way you were with Graphics Core Next, it's not going to be the bottleneck it once was.
In Unity, for example, you can use the tessellator on a modern GPU to deform the mesh rather than requiring a high vertex count at all times.
See here: https://docs.unity3d.com/Manual/SL-SurfaceShaderTessellation.html
But it all comes down to the developer, the more you shift the load onto the GPU, the better.
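GPU tessellation, as in the Unity page linked above, amounts to subdividing a coarse patch and displacing the generated vertices at draw time, so only the low-vertex-count mesh has to live in memory. A CPU-side Python sketch of that idea (the `bumps` height function is a stand-in for a displacement map; all names here are illustrative):

```python
# Minimal sketch of tessellation-style displacement: keep a coarse
# quad patch in memory and generate the dense, deformed vertices
# only when "drawing". On a GPU this runs in the tessellation stage.
import math

def tessellate_patch(size, level, displace):
    """Subdivide a size x size quad into level segments per edge and
    displace each generated vertex along +Z by displace(u, v)."""
    verts = []
    for i in range(level + 1):
        for j in range(level + 1):
            u, v = i / level, j / level
            verts.append((u * size, v * size, displace(u, v)))
    return verts

# Hypothetical displacement function standing in for a height map.
bumps = lambda u, v: 0.1 * math.sin(6 * u) * math.cos(6 * v)

coarse = tessellate_patch(10.0, 1, bumps)   # 4 vertices stored
dense = tessellate_patch(10.0, 64, bumps)   # 4225 vertices generated
print(len(coarse), len(dense))              # 4 4225
```

The point is the asymmetry: the asset on disk and in RAM is the 4-vertex patch plus a displacement texture; the 4000+ deformed vertices exist only transiently, and on a real GPU they never touch system memory at all.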
SvennoJ said: Yet FS2020 is memory-bottlenecked, followed by CPU and GPU. You can use the CPU to augment RAM by calculating and generating stuff just in time, or you can use RAM to augment the CPU by using pre-calculated tables kept in memory, or simply keep more in memory, load/prepare stuff ahead, etc. |
Are you talking about procedural generation?
SvennoJ said: For example, FS2020 first kept everything around you in RAM, using over 20GB of system RAM at maximum draw distance. Turning the camera around was smooth; it worked great. Then the Xbox update came and memory use was massively reduced. How? By aggressively culling everything that's not in the current view. The result: massive stuttering when turning the camera around, since all that geometry and detail had to be loaded again. At the time I made a workaround by putting the game's cache on a RAM disk, basically keeping it in memory in a roundabout way. |
Culling has been a tactic used in some form for decades now... Most prominently since ATI's HyperZ, introduced with the original Radeon in 2000, which was a set of Z-buffer optimization technologies that worked together.
Obviously today it's more advanced where culling can be "predicted" ahead of time.
It helps no doubt.
Texture and mesh streaming started to gain traction during the 7th gen, most notably with Modern Warfare 2, in order to work around the RAM limits of the consoles of the time... With SSDs that can obviously be taken to the next level, especially with lots of small random texture reads. (Optical/mechanical drives hated small random reads.)
In Unreal Engine, if you don't have enough of the high-quality textures in DRAM, textures drop to the lowest mip level until the quality versions are streamed into memory... Hence the "texture pop-in" effect you often see in Unreal-powered games. It's more or less because the developers pushed things too far and didn't have the memory to cache the full-quality mips.
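That fallback behaviour can be sketched as a tiny streamer: every texture is always usable at its lowest mip, and sharper mips replace it only as their asynchronous loads complete (class and method names here are illustrative, not Unreal's actual API):

```python
# Toy texture streamer: sample the lowest-quality mip until the
# high-quality mips finish streaming in. Names are illustrative
# and do not reflect any real engine API.
class StreamedTexture:
    def __init__(self, name, mip_count):
        self.name = name
        self.mip_count = mip_count
        # Only the smallest mip is resident at first -> visible pop-in.
        self.best_resident = mip_count - 1  # mip 0 = full resolution

    def sample(self):
        """Return the best (lowest-numbered) mip currently in memory."""
        return self.best_resident

    def stream_step(self):
        """One async load completing: next-sharper mip becomes resident."""
        if self.best_resident > 0:
            self.best_resident -= 1

tex = StreamedTexture("brick_wall", mip_count=5)
print(tex.sample())   # 4: blurriest mip, the frame where pop-in shows
tex.stream_step()
tex.stream_step()
print(tex.sample())   # 2: sharper as streaming catches up
```

The visible "pop-in" is simply the gap between the first frame (blurriest resident mip) and the frame where streaming catches up; under memory pressure the streamer also evicts high mips again, which is why over-committed scenes keep flickering between quality levels.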
Norion said:
People getting a cheap option to play stuff like GTA 6 is nice even if it runs poorly but I am concerned about it being a headache for a lot of developers a few years from now. It's kinda like if developers were forced to have their games run properly on a 2060 till like 2031 if they were making a PC version. |
It shouldn't run "poorly". - That tends to be relative though. - I have seen people who are perfectly happy playing their games at 15-20fps.
Grand Theft Auto hasn't exactly pushed the graphics envelope for a long time now, so Rockstar tend to be a little more conservative on hardware requirements, prioritising gameplay over visual effects with that title.
Even if the game ends up being 900p/30fps... I think that is more than acceptable for the Series S.
Kyuu said: I think the first-generation Series S will be dropped (as a mandated SKU) in favor of a more powerful Series S+ (hopefully 75% more powerful, 12GB of RAM, bigger SSD). Unless Series S succeeds in appealing to a very large non-gamer or non-console-gamer demographic, I'd rather it not be mandated, because it would suck to see the more ambitious games of the early to mid 2030s held back by Series S tech. It's bad enough that we're still stuck with Xbox One specs in the early 2020s thanks to crossgen overstaying its welcome. I want the minimum spec to have as high a floor as economically possible. Series S can still get a ton of support without needing to be mandated, because the Switch 2 exists and it's going to be huge. The reason Series S is the best-selling Xbox is that the X isn't being produced in large enough volumes. My primary concern with the S is that we're probably getting mid-gen upgrades in a few years. Mid-gen upgrades will let developers go crazier with games that even Series X and PS5 will often struggle to run at sub-1080p/40fps. There would be a limit to how much a developer can scale back on Series S before it gets ridiculous (challenging, unplayable, costly, time-wasting, etc.). |
I understand and agree fully, but consoles tend to be supported for the entire generation.
I always want the graphics bar to be raised; I would have liked to see the Series X and PlayStation 5 offer more capable hardware, but the current climate didn't allow for it, hence the mid-range hardware.
The Series S has been a success for Microsoft, so it is not going away.
--::{PC Gaming Master Race}::--