Isn't the whole point of Nanite and Lumen to ease the burden on developers, allowing them to do more in the time they have compared to current gen? From what I understand they're taking advantage of the SSD so you can import HQ assets like ZBrush sculpts directly into your game. This makes for much quicker and far more detailed world building, and Lumen is there to further cut development time by automatically lighting these crazy detailed environments. I'm sure you can also use Lumen in any game, but its main purpose is obviously to make things easier in combination with Nanite.
Nanite and Lumen most certainly will ease the burden on developers through a streamlined, simplified process (e.g. not building multiple LOD assets).
But their underlying technologies are not strictly tied to any hardware features, and they may not even be used by every developer and game on Unreal Engine 5; they're tools at the developer's disposal.
Lumen reduces development time by removing the need for pre-baked global illumination, but it's not the only method of real-time lighting next gen will employ, nor is it even the best.
On PS4, developers are far more limited by bandwidth and polygon budgets and need to use all kinds of "tricks" like baked lighting, LOD models, etc. Not only is there no loss of quality on PS5, but more importantly, it would also take way more time and resources to have it run on PS4. That's why I'm guessing Nanite isn't supported on current gen, as it would defeat its whole purpose. I mean, let's say you're making a multiplatform game and it takes 2 months to design a level on PS5, but having that same level run on PS4 takes 12 months because you have to do a lot of extra work on the scaling and programming side. What would be the point of using it? Next gen will be a lot less about raw horsepower and a lot more about what developers can build in the fixed time frame and budget they're working with.
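To make the "tricks" above concrete, here is a minimal sketch of the kind of hand-authored, distance-based LOD selection current-gen games rely on (and which Nanite is pitched as replacing). The thresholds, mesh names, and triangle counts are made up for illustration, not taken from any real engine.

```python
# Hypothetical distance-based LOD selection. Artists author several
# versions of each asset; at runtime the engine swaps in a cheaper
# mesh as the camera moves away.

def pick_lod(distance_m, lod_table):
    """Return the first mesh whose distance cutoff covers the object."""
    for cutoff, mesh in lod_table:
        if distance_m <= cutoff:
            return mesh
    return lod_table[-1][1]  # beyond the last cutoff: cheapest mesh

# (cutoff in meters, mesh name) ordered from most to least detailed
statue_lods = [
    (10,           "statue_lod0_500k_tris"),
    (40,           "statue_lod1_50k_tris"),
    (120,          "statue_lod2_5k_tris"),
    (float("inf"), "statue_lod3_500_tris"),
]

print(pick_lod(8, statue_lods))   # statue_lod0_500k_tris
print(pick_lod(90, statue_lods))  # statue_lod2_5k_tris
```

Every one of those LOD meshes is an asset someone has to build and tune per platform, which is exactly the authoring cost the 2-months-vs-12-months example is getting at.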
The PlayStation 4 is more than capable of real-time global illumination... In fact, games already exist that use it, for example by leveraging middleware known as "Enlighten" in the Unity engine. (CryEngine can do it as well... And so on.)
So no, you don't "need" baked lighting on the PlayStation 4; developers opt for that approach to save computational resources so they can bolster image quality elsewhere.
This is just another case of people taking on board the "advertising" aspect of the tech demo rather than understanding the technologies we currently use, how the demo achieves what it does, and how it improves on them.
The lighting approach, i.e. voxel-based global illumination, is *not* new; it's actually been around for years.
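A toy illustration of the voxel idea behind such techniques: inject direct light into a coarse 3D grid, then spread it so nearby cells pick up indirect "bounce". Real implementations (voxel cone tracing, SVOGI, Lumen's variants) are far more sophisticated; this sketch only shows why the approach is compute-bound rather than storage-bound, which matters for the SSD discussion below. All numbers here are arbitrary.

```python
# Minimal voxel-lighting sketch: a point light is injected into one
# voxel, then a few diffusion passes spread energy to neighbors,
# standing in for indirect illumination.
N = 8
grid = [[[0.0] * N for _ in range(N)] for _ in range(N)]
grid[4][4][4] = 100.0  # direct light injected into one voxel

def diffuse(g):
    """One propagation pass: each voxel averages with its 6 neighbors."""
    out = [[[0.0] * N for _ in range(N)] for _ in range(N)]
    for x in range(N):
        for y in range(N):
            for z in range(N):
                total, count = g[x][y][z], 1
                for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0),
                                   (0,-1,0), (0,0,1), (0,0,-1)):
                    nx, ny, nz = x + dx, y + dy, z + dz
                    if 0 <= nx < N and 0 <= ny < N and 0 <= nz < N:
                        total += g[nx][ny][nz]
                        count += 1
                out[x][y][z] = total / count
    return out

for _ in range(3):  # a few propagation steps per frame
    grid = diffuse(grid)

# Voxels near the light now carry indirect energy; distant ones stay dark.
print(grid[4][4][4], grid[0][0][0])
```

Note that the work scales with voxel count and passes per frame, all of it arithmetic on a grid that lives in memory, which is why this family of techniques doesn't care about your storage device.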
Of course, Unreal Engine 5 is a scalable engine and it will no doubt end up working even better on PC when newer SSDs hit the market, but Lumen and Nanite are definitely not designed to work on current-gen consoles with an HDD. The demo was about one thing, and one thing only: showing off the new core features of the engine that were designed around the PS5's SSD tech. It's also compatible with Series X, of course; the only thing we don't know is what the difference in performance will be with half the SSD throughput but a somewhat faster GPU.
We don't know to what extent the Unreal Engine 5 demo is using the PlayStation 5's hardware.
We don't know if it even needs an SSD or whether a couple of mechanical disks in RAID is sufficient.
But we do know it left the PlayStation 5's ray tracing cores unused, which is extra image quality/performance/processing that can be tapped into in the future.
Lumen doesn't give a crap about what kind of storage you use either; it's not loading gigabytes of data. Its bottleneck lies in computational throughput; it is all about calculating lighting.
In terms of advantages, the Xbox Series X should be better at handling the Lumen side of the tech demo than the PlayStation 5, whereas the PlayStation 5 should show advantages in Nanite. In theory, anyway; it will be interesting when I can play around with the engine and do some profiling.
So if that's what's currently available, any idea about what could be available for a release in 2023?
Apple crossed/matched Xbox One-level performance two years ago with the Apple A12X chip... Nvidia prior to that was fairly even with Apple's big-ticket offerings; the Tegra X1 is pretty equivalent to the Apple A9X that launched the same year, for example. Since then, Nvidia has gone quiet on future Tegra processors, likely because Nintendo has asked them to: they are the main vendor for said chip, and if Nvidia was talking years in advance about it, everyone and their grandma on the internet would be saying it's the Switch 2 chip.
By 2023 they should be able to do something whose raw power is, I think, beyond a PS4, the same way the Switch's Tegra X1 is beyond a PS3/360 (especially docked).
But when you factor in DLSS 2.0 or 3.0... that performance jumps massively. That's something that wasn't available in the past, and it means the same chip now only has to render around 1/4th-1/8th of the pixels, or even less. That's a huge game changer.
So suddenly a chip that's PS4+ becomes close to an actual PS5 in terms of the games it can run.
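The pixel-count math behind that DLSS claim is easy to check. The internal resolutions below are illustrative choices roughly mirroring DLSS's quality/performance-style modes; exact internal resolutions vary per mode and per title.

```python
# Rough pixel-count arithmetic for upscaling: how many fewer pixels a
# GPU has to shade when rendering internally below the output target.

def pixels(w, h):
    return w * h

native_4k  = pixels(3840, 2160)  # target output resolution
internal_q = pixels(2560, 1440)  # upscaling from 1440p
internal_p = pixels(1920, 1080)  # upscaling from 1080p
internal_u = pixels(1280,  720)  # upscaling from 720p

print(native_4k / internal_q)  # 2.25x fewer pixels shaded
print(native_4k / internal_p)  # 4.0x
print(native_4k / internal_u)  # 9.0x -> the "1/4th-1/8th or less" range
```

Upscaling itself isn't free (the reconstruction pass has a fixed cost per frame), but shading 4-9x fewer pixels is where the big headroom comes from.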
Remember, the Tegra X1 launched in May 2015, 18 months after the PS4/XB1, and it is able to run PS4/XB1 games, and some pretty beefy ones at that... The Witcher 3 and DOOM are not low-end PS4 titles. With no advantage from DLSS at all.
PS5 in November 2020 versus a new Switch 2 in, say, summer/fall 2023 is actually a gap over a year longer than the gap between the PS4 and the Tegra X1. Nvidia has way better graphics engineers than AMD... AMD can't even beat Nvidia's Turing architecture, which is 2 years old now, even on a smaller node (7nm vs. 12nm), which is sad.
Nvidia's focus has shifted from SoCs for phones/tablets to DRIVE/vehicles/AI.
Hence why Nvidia has been quiet on mobile chips; technically, it doesn't play in those markets anymore.
As for performance, the A12X is a potent chip; its CPU cores might even be faster than the Xbox One's, but in terms of bandwidth the A12X still relies on LPDDR4X, which tops out at 34GB/s in Apple's configuration... So that will most certainly limit fillrate.
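For what it's worth, the 34GB/s figure is consistent with a 128-bit LPDDR4X interface at 2133 MT/s; both of those parameters are my assumptions for the back-of-envelope check below, not published Apple specs.

```python
# Peak memory bandwidth = transfer rate x bytes per transfer.
# Assumed configuration: 128-bit LPDDR4X bus at 2133 MT/s.

transfers_per_sec = 2133e6      # 2133 mega-transfers per second
bus_width_bytes   = 128 / 8     # 128-bit bus = 16 bytes per transfer

bandwidth_gb_s = transfers_per_sec * bus_width_bytes / 1e9
print(round(bandwidth_gb_s, 1))  # 34.1
```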
FP32 throughput is a good 40% or more faster on the Xbox One S's GPU as well, which gives it an advantage; geometry and texturing throughput should definitely be in the Xbox One S's favor too.
Either way, the A12X is getting fairly close and will win in some cases (e.g. the CPU), but no cigar just yet.
Ports of downgraded games are one thing; running games at equivalent visuals is entirely different. The Switch versions of The Witcher 3, DOOM, and so forth are impressive, no doubt, but they still come up short against the Xbox One versions in every regard; let's not kid ourselves.