Pemalite said:
On optical media the game developer has the power to determine what data can be duplicated and where.
A game developer doesn't get to choose which chunk of sectors they will occupy in a file system, so they can never guarantee that blocks of data will sit next to each other, and they cannot demand data be evicted from one area and moved to another for performance reasons. Consoles like the Xbox 360 leveraged XTAF in order to *try* and ensure data is available sequentially; the Xbox One moved over to exFAT, which relies on indexing to keep track of chunks of data spread across the entire drive... And because of that, fragmentation is a massive issue on Xbox One and can result in performance degradation over time.
If you put related data contiguously, I'd bet the chances are it'll end up together on the disk as well. The file system is free to do whatever it wants, but most sensible file systems try not to scatter data around when they know it belongs together (e.g. when it's a single file, even if that single file comprises a large number of individual assets). That seems very different from storing each asset as an individual file, which the file system is much more likely to scatter around because it doesn't know they're related, especially if they're in different directories and could be used in a number of different situations.

If you need, say, 2,000 assets for a scene (often a map), and it takes 10 ms to seek to each one, that's already 20 s in total just for seeking, let alone actually loading the assets. I don't know how many assets a modern game needs to load at once, but 2,000 doesn't sound way off for an AAA title. Even if it's just 200 assets, that's still 2 s spent purely on seeks, then more time for loading, for actually running code, etc., and it's not hard to see why loading times get high when data is scattered all over the place.
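To put rough numbers on that, here's a quick back-of-the-envelope C++ sketch. The seek time, throughput and asset sizes are made-up assumptions just to show the shape of the problem, not measurements from any real console or drive:

```cpp
// Back-of-the-envelope model of the seek cost described above.
// All numbers (10 ms per seek, 2,000 assets, 150 MB/s, 0.5 MB per asset)
// are illustrative assumptions, not measurements.
#include <cstdio>

int main() {
    const double seek_ms       = 10.0;   // assumed cost of one random seek on an HDD
    const double read_mb_per_s = 150.0;  // assumed sequential read throughput
    const int    asset_count   = 2000;   // assets needed for one scene
    const double asset_size_mb = 0.5;    // assumed average asset size

    const double total_mb = asset_count * asset_size_mb;
    const double read_s   = total_mb / read_mb_per_s;

    // Each asset in its own scattered file: one seek per asset, then the reads.
    const double scattered_s = asset_count * seek_ms / 1000.0 + read_s;

    // All assets packed contiguously: roughly one seek, then one sequential read.
    const double packed_s = seek_ms / 1000.0 + read_s;

    std::printf("scattered: %.1f s, packed: %.1f s\n", scattered_s, packed_s);
}
```

With those assumed numbers the scattered case burns 20 s on seeks alone, while the packed case pays a single ~10 ms seek; the sequential read cost is identical in both, so packing is where the big win comes from.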
Developers do all kinds of stuff that isn't guaranteed to work, and I'm sure game developers do it at least as much as anyone else. In low-level programming languages it's common to think about exactly how data is aligned in memory, how the CPU is likely to cache it, and so on, even though there might not be any actual guarantees. If it works now and there's seemingly little reason for it to ever change on the target platform, why worry about it not being guaranteed? Stuff like that makes me personally uneasy, because I like guarantees, but it's also something that happens and works, to the best of my knowledge.
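As a concrete example of the kind of not-technically-guaranteed layout work I mean, here's a small C++ sketch that pads two frequently-updated counters out to an assumed 64-byte cache line so threads don't fight over the same line (false sharing). The standard doesn't actually promise the real cache-line size, which is exactly the sort of "works in practice on the target hardware" assumption I'm talking about:

```cpp
// Sketch: aligning hot data to a presumed cache line to avoid false sharing.
// The 64-byte fallback is an assumption about the target hardware, not a guarantee.
#include <atomic>
#include <cstddef>
#include <new>

// std::hardware_destructive_interference_size exists since C++17, but not every
// compiler defines it; fall back to the common assumption of 64 bytes.
#ifdef __cpp_lib_hardware_interference_size
constexpr std::size_t kCacheLine = std::hardware_destructive_interference_size;
#else
constexpr std::size_t kCacheLine = 64;
#endif

struct Counters {
    // Each counter gets its own (assumed) cache line so that updates from
    // different threads don't invalidate each other's line.
    alignas(kCacheLine) std::atomic<long> produced{0};
    alignas(kCacheLine) std::atomic<long> consumed{0};
};

static_assert(sizeof(Counters) >= 2 * kCacheLine,
              "each counter occupies at least one full assumed cache line");

int main() {
    Counters c;
    c.produced.fetch_add(1, std::memory_order_relaxed);
    c.consumed.fetch_add(1, std::memory_order_relaxed);
}
```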