
Phil Spencer Says Xbox Series X Games Aren't Being Held Back By Xbox One

goopy20 said:
Pemalite said:

Lumen is an "approximation". - Lumen is a "collection" of lighting techniques all working to augment each other in order to get the best result with low overhead. It's not the best approach to lighting, shit... It's not even the most efficient, but it's damn well impressive for the hardware it's being showcased on.

For larger objects the Global Illumination uses voxels, which are cubes of light.
For medium sized objects they use signed distance fields, which is basically grid sampling. Another use for them is cloth simulation collision detection.
And for smaller objects they use screen-space data, which is similar to the approach that Gears of War 5 used for its lighting.
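To make that layering concrete, here is a minimal Python sketch of the fallback order being described; the function and values are invented for illustration and are nothing like Lumen's real implementation:

# Minimal, hypothetical sketch of the fallback order described above:
# screen-space data first, then signed distance fields, then voxels.
def trace_indirect(screen_hit, sdf_hit, voxel_sample):
    if screen_hit is not None:      # small scale: reuse on-screen data
        return ("screen-space", screen_hit)
    if sdf_hit is not None:         # medium scale: SDF "grid sampling"
        return ("distance-field", sdf_hit)
    return ("voxel", voxel_sample)  # large scale: coarse cubes of light

print(trace_indirect(None, None, 0.3))  # falls all the way back to voxel GI
print(trace_indirect(None, 0.7, 0.3))   # resolved by the distance field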


Yeah, it did look impressive, but how do you think it compares to the Nvidia RTX implementation? And which version of RT do you think will make more sense for developers to use (assuming that AMD's Ray Tracing cores perform about the same as the ones on the RTX cards)?

Personally I'm still hoping to see Path Tracing! Not in 4K, of course, but in 1080p with something like DLSS enabled, who knows :)

Depends on the developer's goals; developers aren't all making the same games... For a game that has a lot of metal/shiny surfaces, it would make absolute sense to use Ray Tracing for reflections. (Think: Cyberpunk.)

For a game that has a lot of outdoor vistas with dense foliage and so forth, using Ray Tracing for shadowing would probably be the best bang for buck.

We aren't at a point where we can Ray Trace everything, so we still need to pick and choose where it offers the best bang for buck. There were some games on the Playstation 4 that used Ray Tracing for their lighting... Because it made artistic sense for the developers to go down that path.

Take Battlefield 5 for example; its Ray Tracing is pretty understated. In my opinion they didn't get the best bang for buck; they might have been better served using it for Global Illumination rather than reflections.

SvennoJ said:

Whut? SSDs are a huge benefit in low powered laptops with integrated graphics.

In what way? If you are suggesting it somehow increases framerates...


SvennoJ said:

RAM use is linked to storage I/O speed. PCs compensate for slow storage speeds by using lots of RAM. My laptop has 22 GB of RAM even though it has an SSD. 16GB of system RAM, 53% in use just for browsing. Cache, cache and more cache. When RAM was still an issue on PC, swap files were used. But oh boy, when system RAM ran low and the swap file (on HDD) got used, the system crawled to a halt.

More power means you need more RAM for bigger screen buffers and bigger textures. Yet by the looks of it, MS didn't think to give it enough RAM, or enough I/O speed to lessen the burden on RAM, to make full use of the power.

Swap files are still used in 2020 on the latest and greatest PCs. They never stopped being used, even when you change your storage. - Currently my PC is using about 3GB of swap.

More power doesn't necessarily mean you need more memory; there are operations that are pretty memory-insensitive, all things considered. Things like texturing need more space, sure, but using the compute overhead to bolster an effect somewhere else may not.

SvennoJ said:


A useless comparison

7th gen: avg 100 MB/s HDD speed vs 22.4 GB/s RAM speed (360) - RAM is ~224 times faster
8th gen: avg 100 MB/s HDD speed vs 176 GB/s RAM speed (PS4) - RAM is ~1760 times faster
9th gen: PS5, up to 9 GB/s SSD speed vs 448 GB/s RAM speed - RAM is ~50 times faster
9th gen: SX, up to 5 GB/s SSD speed vs 560 GB/s RAM speed - RAM is ~112 times faster

Looks like PS5 is in a great position to augment RAM (the biggest bottleneck in any game design) with its SSD, but Series X still has a better ratio than 7th gen.
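For reference, those ratios are easy to recompute; a quick Python sketch using the quoted peak figures (which, as the reply below notes, flatter the hard drives):

# Reproducing the storage-vs-RAM ratios quoted above (all speeds in GB/s).
setups = {
    "7th gen (360)": (0.1, 22.4),  # ~100 MB/s HDD vs 22.4 GB/s RAM
    "8th gen (PS4)": (0.1, 176.0),
    "9th gen (PS5)": (9.0, 448.0),
    "9th gen (XSX)": (5.0, 560.0),
}
for name, (storage, ram) in setups.items():
    print(f"{name}: RAM is ~{ram / storage:.0f}x faster than storage")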

The 7th and 8th gen drives were 60MB/s tops on a good day, often 30-40MB/s with random access.

SvennoJ said:

Skyrim for example wouldn't have run into trouble, using the SSD as cache for the changes made to the world.

Skyrim didn't run into any trouble on the Xbox 360 despite it only using DVD+Mechanical HDD.

Skyrim's issue was more or less a game engine issue... Despite Bethesda's claims that the Creation Engine is "ground up" brand-new, it's still based on Gamebryo, which in turn is based on NetImmerse... And one characteristic of that engine is that it is extremely powerful with scripting and cell loading, but it does mean it comes with occasional hitching irrespective of the underlying hardware.

Would an SSD have helped? Sure. No doubt. But so would more RAM. - The 7th gen was held back by that 512MB memory pool.

I did do some nitty gritty work on Oblivion's engine at some point, working to get it to run on less than optimal hardware configurations... i.e. even 128MB of RAM.

SvennoJ said:

Don't you do any fact checking???

https://www.kotaku.co.uk/2017/03/14/breath-of-the-wild-was-never-meant-to-be-the-wii-us-swansong

The Legend of Zelda: Breath of the Wild is a superb launch title for Nintendo Switch, but it was never intended as such. For almost all of its development, Breath of the Wild was a Wii U game through and through.

Development of BotW started 5 years before the Switch came out! Besides that, WiiU to Switch isn't much of a jump at all.

https://www.theverge.com/2018/3/3/17070664/nintendo-switch-technology-bayonetta-2-anniversary

The Switch is roughly as powerful as the Wii U — a little speedier, sure, but they’re not in different galaxies.

There is a big jump in hardware feature sets and efficiency between Wii U and Switch. - The fact that the Switch can take every WiiU game and run them at higher resolutions and framerates with little effort is a testament to that.

Keep in mind that Breath of the Wild was the best looking WiiU game... Whereas on the Switch, it still hasn't had its best looking games yet.

In saying that... It's not the same kind of jump between the Playstation 3 > Playstation 4 or Xbox 360 > Xbox One, it's a much smaller jump in overall capability, but it's there. (And I do own both the Switch and WiiU so can compare in real time.)

only777 said:

Phil is lying here and his statement is provably false. He's used a classic bait and switch tactic too.

First, we know that lower powered PCs hold back game development. Ex-EA game engine dev "The Cherno" said this on his YouTube Channel. ( https://www.youtube.com/channel/UCQ-W1KE9EYfdxhL6S4twUNw ) I think the video he said it in was this one: https://www.youtube.com/watch?v=erxUR9SI4F0 - Although I've not watched it back to check.

Also Phil says that lower end PC rigs are not holding back higher end rigs. Let's pretend that this statement is true. But he is clearly saying this in response to Sony and how Sony showed with games like Ratchet and Clank (the warp mechanic in particular) that PS5 allows them to create games that were impossible on previous hardware.

Well, the problem with Phil saying that PC isn't held back by lower end rigs is that he is comparing apples to oranges. A PC game HAS to be made with lower specs in mind, but also the hardware of a lower spec machine and a high spec machine is pretty much the same really. It's just faster/larger numbers of the same thing.

As we saw in the "Road to PS5" video, the way data is moved around in the console is much different to a PC. With the custom decompression and pipeline differences, it's not the same as just PS4 but with bigger numbers.

I thought Phil would be better than this.

Considering that the consoles are low-end PC rigs... They tend to be the lowest common denominator.

Developers don't *need* to build their games to target low-end PCs. Heck, Crysis didn't.

zero129 said:

It will have no problem playing all next gen games at a lower res, and I already showed you how Nvidia has tech that can take a 540P image and upscale it to 1080P while also looking better than a native 1080P image, and a 720P image up to 1440P. MS and AMD have also been working on such tech, and it makes perfect sense why, with the Series S targeting 1080-1440P and Series X targeting 4K, as such tech will allow both consoles to hit those targets at a fraction of the cost.

Next gen, target resolutions are going to be unimportant; heck, they almost are now.

Microsoft has DirectML and AMD has Radeon Image Sharpening, which should provide some impressive image reconstruction; it's going to make pixel counting redundant for a lot of games.

A game like Call of Duty: Modern Warfare 2019 has a very filmic presentation due to the image reconstruction, anti-aliasing and other post-process techniques being employed.

If a game is 1080P, I am okay with that, but it better be an impressive 1080P.

zero129 said:

But like I said, Series S is a next gen console, but for 1080P TV owners.

The Series X is also for 1080P owners; the super sampling of the Xbox One X on a 1080P display does bring with it a ton of advantages... Plus the higher-end console will have better texturing, shadowing and lighting, which will pop far better on a 1080P display.

The Series S is more or less for those on a budget who don't really give a crap about specifications... They just wanna game and have fun.








--::{PC Gaming Master Race}::--

zero129 said:
goopy20 said:

The point I'm trying to make is that it isn't just the 2 years of Xone support that will be a pain in the ass. For the remainder of next gen, developers will need to work around the limitations of the 4Tflops Lockhart. Of course MS will say Lockhart won't be holding Series X back. But how are we supposed to believe that when they say even the ancient Xone isn't holding it back?

To me it sounds like the Series S will actually be MS's main next gen console and the Series X is their mid-gen console coming early. It's the only logical explanation for why MS is talking so much about 4K/60fps.

Except the Series S is still a next gen system. It's still using the new RDNA2 chipset, it's still using the same CPU as Series X, same SSD speeds etc.

It will have no problem playing all next gen games at a lower res, and I already showed you how Nvidia has tech that can take a 540P image and upscale it to 1080P while also looking better than a native 1080P image, and a 720P image up to 1440P. MS and AMD have also been working on such tech, and it makes perfect sense why, with the Series S targeting 1080-1440P and Series X targeting 4K, as such tech will allow both consoles to hit those targets at a fraction of the cost.

But I understand why you're downplaying this a lot, goopy, as you have a lot to be worried about. Like I said in another thread, if parents, low income gamers and casual gamers (the biggest majority of gamers, pretty much) see Series S multi-plat games such as GTA6, CoD, Fifa etc. running in 1080P on Series S and looking just as good (to them) as the PS5 versions... I mean, you did say before that people won't care about a few extra pixels or more shadow detail if the game looks pretty much the exact same to the casuals. And if the Series S costs $199-249 like the rumors say, vs a $449 PS5, I know what console they will be picking up, and I think so do you. MS doesn't care if you buy Series S or Series X, so it doesn't matter to them if Series X is mostly for the hardcore gamer who wants the best graphics.

But like I said, Series S is a next gen console, but for 1080P TV owners.

I find it very unlikely that DLSS 2.0 or any other technique upscaling a 720p image can make it look better than the same image at a native resolution of 1080p or higher. Or do you mean running your system to the limit to generate a 720p image instead of a 1080p one, and then, with the upscaling, your result is better?

Still, MS already has machine learning and similar techniques, plus Sony has been using temporal reconstruction (not that much different) for a lot of PS4 Pro titles.

The upscaling technique isn't superior to native; it is less HW expensive though. And with that, power can be used to improve other aspects of image quality.



duduspace11 "Well, since we are estimating costs, Pokemon Red/Blue did cost Nintendo about $50m to make back in 1996"

http://gamrconnect.vgchartz.com/post.php?id=8808363

Mr Puggsly: "Hehe, I said good profit. You said big profit. Frankly, not losing money is what I meant by good. Don't get hung up on semantics"

http://gamrconnect.vgchartz.com/post.php?id=9008994

Azzanation: "PS5 wouldn't sold out at launch without scalpers."

I'm still skeptical of DirectML, at least with Big Navi GPUs. There are no tensor cores or AI cores on any current AMD GPU or further roadmaps. Without them, ML and PP will have to be done in software, not hardware, and there will be a tonne of latency to come with it, rendering it useless for actually playing games. DirectML was introduced 2 years ago and there is still no mainstream title support for it. I think it's pretty much vapourware at this point.

At least we have checkerboarding, TAA and AI sharpening, I suppose :P

Last edited by hinch - on 13 July 2020

Pemalite said:

In what way? If you are suggesting it somehow increases framerates...

Regarding SSDs in low powered laptops: swap file and general browsing (the amount of temporary files that comes with browsing is staggering). But I'm not sure what the context was for that. It also helps with frame stutter. Train Simulator for example, an old game with complex scenery updates on an 8 year old engine: without an SSD the stuttering while loading in the next section would be much worse.

Swap files are still used in 2020 on the latest and greatest PCs. They never stopped being used, even when you change your storage. - Currently my PC is using about 3GB of swap.

The swap file is the first thing I disable and delete when I get a new laptop. I'd rather spend extra on RAM than sacrifice SSD disk space to a last resort measure that Windows has never been able to manage well. Maybe it's better now; I've been disabling the swap file since Windows XP, when the blue screen of death was better than heavy swap file use. Sacrificing 6.8 GB for sleep mode is enough (hiberfil.sys).

More power doesn't necessarily mean you need more memory; there are operations that are pretty memory-insensitive, all things considered. Things like texturing need more space, sure, but using the compute overhead to bolster an effect somewhere else may not.

Higher resolution does need more memory, native 4K means all screen buffers are 2.25x larger than 1440p. More power is expected to deliver native 4K. Maybe not a lot more comparatively, but more than sticking to lower rendering resolutions.

The 7th and 8th gen drives were 60MB/s tops on a good day, often 30-40MB/s with random access.

Ugh, no wonder it takes 35 minutes to make a copy of GTS for patching (103 GB). That's about 50 MB/s on PS4 Pro.
(Yet read and write on the same HDD, so not that bad actually.)
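The back-of-the-envelope arithmetic, for anyone curious; a copy on a single drive has to read and write every byte, which is why the effective figure is roughly half the drive's sequential speed:

# Rough check of the GT Sport copy figure quoted above.
size_mb = 103 * 1000               # 103 GB, counting 1 GB as 1000 MB
seconds = 35 * 60
print(f"~{size_mb / seconds:.0f} MB/s effective copy speed")          # ~49 MB/s
print(f"~{2 * size_mb / seconds:.0f} MB/s of combined read + write")  # ~98 MB/s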

Skyrim didn't run into any trouble on the Xbox 360 despite it only using DVD+Mechanical HDD.

360 had 512 MB unified RAM though, plus an extra 10MB of eDRAM. PS3 had 256 MB system RAM, minus what the OS used. Skyrim afaik kept track of all the changes you make in the world (anything you displace gets flagged and remembered in a change file, you probably know better having worked on it). So at first the game runs fine (fresh save), yet the further you get the worse it gets. There must have been memory leaks as well, since I had to disable auto save and restart the game every 15 minutes to be able to finish it on PS3. The frame rate would slowly degrade to unplayable.

Skyrim's issue was more or less a game engine issue... Despite Bethesda's claims that the Creation Engine is "ground up" brand-new, it's still based on Gamebryo, which in turn is based on NetImmerse... And one characteristic of that engine is that it is extremely powerful with scripting and cell loading, but it does mean it comes with occasional hitching irrespective of the underlying hardware.

Would an SSD have helped? Sure. No doubt. But so would more RAM. - The 7th gen was held back by that 512MB memory pool.

I did do some nitty gritty work on Oblivion's engine at some point, working to get it to run on less than optimal hardware configurations... i.e. even 128MB of RAM.

The SSD could have hosted a swap file to keep track of the changes :)

There is a big jump in hardware feature sets and efficiency between Wii U and Switch. - The fact that the Switch can take every WiiU game and run them at higher resolutions and framerates with little effort is a testament to that.

Keep in mind that Breath of the Wild was the best looking WiiU game... Whereas on the Switch, it still hasn't had its best looking games yet.

In saying that... It's not the same kind of jump between the Playstation 3 > Playstation 4 or Xbox 360 > Xbox One, it's a much smaller jump in overall capability, but it's there. (And I do own both the Switch and WiiU so can compare in real time.)

I have the WiiU as well, played it a lot more than the Switch actually. In docked mode Switch does look better but indeed not really a generational leap. BotW was a bit blurry on my 1080p projector, but still great looking.

Considering that the consoles are low-end PC rigs... They tend to be the lowest common denominator.

Developers don't *need* to build their games to target low-end PCs. Heck, Crysis didn't.

Next gen, target resolutions are going to be unimportant; heck, they almost are now.

Microsoft has DirectML and AMD has Radeon Image Sharpening, which should provide some impressive image reconstruction; it's going to make pixel counting redundant for a lot of games.

A game like Call of Duty: Modern Warfare 2019 has a very filmic presentation due to the image reconstruction, anti-aliasing and other post-process techniques being employed.

If a game is 1080P, I am okay with that, but it better be an impressive 1080P.

The Series X is also for 1080P owners; the super sampling of the Xbox One X on a 1080P display does bring with it a ton of advantages... Plus the higher-end console will have better texturing, shadowing and lighting, which will pop far better on a 1080P display.

The Series S is more or less for those on a budget who don't really give a crap about specifications... They just wanna game and have fun.

Agreed, enough vgcharting, time for some Tlou2!



zero129 said:
SvennoJ said:

My bad, I wrongly assumed that was the Switch version. I thought it looked rather rough, thought it was just a bad screenshot.

Anyway, it does prove the point that tailor made ports >>>>>>> scaling!

How so? Porting and scaling are pretty much the same thing, as what porting is doing is scaling the engine to the hardware of the device. You and goopy seem to be under some impression that it takes a master class of a team to scale said engines and that it's impossible.

The Witcher 3 can look exactly the same as the Switch version or the PS4 version, or it can look even a gen above PS4.

There is no reason why MS can't have the XBSX version looking a gen above the XB1 version if they so want, even while using the same engine, as long as it scales well.

So they spend 12 months moving a slider down for the Switch version...



hinch said:

I'm still skeptical of DirectML, at least with Big Navi GPUs. There are no tensor cores or AI cores on any current AMD GPU or further roadmaps. Without them, ML and PP will have to be done in software, not hardware, and there will be a tonne of latency to come with it, rendering it useless for actually playing games. DirectML was introduced 2 years ago and there is still no mainstream title support for it. I think it's pretty much vapourware at this point.

At least we have checkerboarding, TAA and AI sharpening, I suppose :P

False.
What a tensor core does is multiply two 4x4 half-precision floating point matrices, then add an additional half/single precision matrix to the result using a fused multiply-add operation. - That then provides a single precision floating point result that can be converted down to half precision if needed.

This can be done entirely on AMD's shader pipelines... Since... Well. Terascale.
And you can do a similar thing on the SSE units on CPUs.

nVidia's approach is to invest in dedicated cores to handle this task; AMD tends to have more overall compute than nVidia and thus can get away with spending some of that compute time to do these kinds of tasks.
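As a rough illustration only (plain NumPy, not NVIDIA's or AMD's actual code): the core tensor operation is just D = A x B + C on small matrices, with low-precision inputs and a higher-precision accumulator, which generic shader ALUs or CPU SIMD units can also evaluate, just with far less throughput.

import numpy as np

# Illustrative only: the fused multiply-add a tensor core performs,
# D = A @ B + C, on small half-precision matrices. Dedicated cores do
# this in one step; generic ALUs (or this NumPy call) do the same math,
# just without the specialised throughput.
A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.random.rand(4, 4).astype(np.float32)           # higher-precision accumulator

D = A.astype(np.float32) @ B.astype(np.float32) + C   # accumulate in fp32
print(D.astype(np.float16))                           # optionally store as fp16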

SvennoJ said:

Regarding SSDs in low powered laptops: swap file and general browsing (the amount of temporary files that comes with browsing is staggering). But I'm not sure what the context was for that. It also helps with frame stutter. Train Simulator for example, an old game with complex scenery updates on an 8 year old engine: without an SSD the stuttering while loading in the next section would be much worse.

Windows is extremely proficient at memory management. I would rather retain the use of a swap file.

The stuttering can be resolved entirely by using a faster mechanical disk... Sadly we don't see the VelociRaptor drives anymore. 10k RPM, anyone?

SvennoJ said:

The swap file is the first thing I disable and delete when I get a new laptop. I'd rather spend extra on RAM than sacrifice SSD disk space to a last resort measure that Windows has never been able to manage well. Maybe it's better now; I've been disabling the swap file since Windows XP, when the blue screen of death was better than heavy swap file use. Sacrificing 6.8 GB for sleep mode is enough (hiberfil.sys).

I disable hibernation and thus remove hiberfil.sys.

Standby is enough. Each to their own... I would rather use more disk space and keep more RAM available, but I will sometimes be working with data sets that exceed typical video game requirements...

SvennoJ said:

Higher resolution does need more memory, native 4K means all screen buffers are 2.25x larger than 1440p. More power is expected to deliver native 4K. Maybe not a lot more comparatively, but more than sticking to lower rendering resolutions.

Not as much as you think.

You only need about 7MB for 720P, 1080P will fit in 16MB, and so on... As the framebuffer size is a function of the resolution of the output signal, colour depth and/or palette size.
So for example you will have 24 bits of colour information, with an alpha channel on top of that, per pixel.
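Those figures line up if you assume a 32 bits per pixel target that is double buffered; a quick sketch of the arithmetic, with the per-pixel size and buffer count as stated assumptions:

# Framebuffer size as a function of resolution and colour depth.
# The ~7MB / ~16MB figures above assume 32 bits per pixel, double buffered.
def framebuffer_mb(width, height, bytes_per_pixel=4, buffers=2):
    return width * height * bytes_per_pixel * buffers / (1024 ** 2)

print(f"720p:  {framebuffer_mb(1280, 720):.1f} MB")    # ~7.0 MB
print(f"1080p: {framebuffer_mb(1920, 1080):.1f} MB")   # ~15.8 MB
print(f"4K:    {framebuffer_mb(3840, 2160):.1f} MB")   # ~63.3 MB, 2.25x the 1440p figure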

SvennoJ said:

Ugh, no wonder it takes 35 minutes to make a copy of GTS for patching (103 GB). That's about 50 MB/s on PS4 Pro.
(Yet read and write on the same HDD, so not that bad actually.)

Yeah. It's not good. I have a pair of 12 Terabyte 7200rpm drives via USB on the Xbox One X which helps alleviate some of the I/O bottlenecks... But due to how expansive my game library is, it can still take a significant amount of time to build my game database once I have turned the console on.

Next gen... That all goes away.

In saying that, install times are here to stay, because the mechanical hard drives aren't the limiting factor there; it's the optical discs and our internet infrastructure that hold us back.

SvennoJ said:

360 had 512 MB unified RAM though, plus an extra 10MB of eDRAM. PS3 had 256 MB system RAM, minus what the OS used. Skyrim afaik kept track of all the changes you make in the world (anything you displace gets flagged and remembered in a change file, you probably know better having worked on it). So at first the game runs fine (fresh save), yet the further you get the worse it gets. There must have been memory leaks as well, since I had to disable auto save and restart the game every 15 minutes to be able to finish it on PS3. The frame rate would slowly degrade to unplayable.

The Xbox 360's unified memory system is what saved it from the performance degradation in Skyrim, as some of Skyrim's scripts and database recording will chew through significant amounts of memory; the Xbox 360 had the superior memory setup.

SvennoJ said:

The SSD could have hosted a swap file to keep track of the changes :)

Hard drives can do that as well. Many games in fact do just that.
Morrowind on the OG Xbox had a database/cache where it would record the location of all objects in the game world and update accordingly... So for example you could kill everyone in Balmora and dedicate every home to a specific item type and go from there.

It does mean you have less throughput to, say... stream textures and meshes, something which became common with games starting with Modern Warfare 2 and later.
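A toy sketch of that kind of change tracking, purely for illustration (the names and file format here are invented and bear no resemblance to Bethesda's actual data):

# Toy illustration of persisting world-state deltas, in the spirit of the
# Morrowind example above. The structure and file name are invented.
import json

change_log = {}  # object id -> last known position; only changes are stored

def move_object(object_id, new_position):
    change_log[object_id] = new_position

move_object("balmora_sword_01", (1024.0, -388.5, 72.0))
move_object("balmora_plate_03", (1030.2, -391.0, 70.5))

# Flushed to storage (save file or cache on HDD/SSD) so it survives cell unloads.
with open("world_changes.json", "w") as f:
    json.dump(change_log, f)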

SvennoJ said:

I have the WiiU as well, played it a lot more than the Switch actually. In docked mode Switch does look better but indeed not really a generational leap. BotW was a bit blurry on my 1080p projector, but still great looking

Yeah. If I were to compare a copy of Breath of the Wild on WiiU and Switch... Basically the WiiU is a match for the Switch's handheld mode; there are a few advantages on the Switch, but you would be hard pressed to tell the difference.

There isn't a generational leap between versions, it's the same game, same assets, same performance targets (for the most part).

Wii U is just 480P handheld, 720P display and switch is 720P handheld, 1080P display. It's pixel counting differences only.

SvennoJ said:

Agreed, enough vgcharting, time for some Tlou2!

Enjoy!

zero129 said:
SvennoJ said:

So they spend 12 months moving a slider down for the Switch version...

They must have spent 12 months porting their engine to a lower end device using mobile hardware.

As you can see, the engine itself, when already running on a device, can scale down to work on a potato PC without any porting, since it's running the exact same engine as the high end PC version. This is how scaling works when the engine is built to work on multiple devices.

The majority of AAA games are built using engines like CryEngine (with additional forks like Dunia), Unity, Unreal Engine, id Tech, Source (a derivative of Quake/id Tech), Frostbite, IW (a derivative of Quake/id Tech), Anvil and more... And they are, you guessed it, designed to scale across multiple generations and multiple levels of hardware...

Even the engine that powers Spiderman on the Playstation 4/Playstation 5 is based on Insomniac's proprietary engine, which also scales across hardware configurations, having started life with the game called "Fuse" on the Xbox 360 and Playstation 3.

Game engines typically aren't written from scratch anymore; rather they are re-used, re-written and overhauled to obtain better results iteratively... And that often has the benefit of scaling between different sets of hardware.



--::{PC Gaming Master Race}::--

SvennoJ said:
zero129 said:

How so? Porting and scaling are pretty much the same thing, as what porting is doing is scaling the engine to the hardware of the device. You and goopy seem to be under some impression that it takes a master class of a team to scale said engines and that it's impossible.

The Witcher 3 can look exactly the same as the Switch version or the PS4 version, or it can look even a gen above PS4.

There is no reason why MS can't have the XBSX version looking a gen above the XB1 version if they so want, even while using the same engine, as long as it scales well.

So they spend 12 months moving a slider down for the Switch version...

You know, I'm not a developer myself, but I'm gonna go out on a limb and say that developing a game to run on a range of GPUs from an RTX 2080 down to a GTX 1050 might be a little bit different than porting a game from PC to Switch.



chakkra said:
SvennoJ said:

So they spend 12 months moving a slider down for the Switch version...

You know, I'm not a developer myself, but I'm gonna go out on a limb and say that developing a game to run on a range of GPUs from an RTX 2080 down to a GTX 1050 might be a little bit different than porting a game from PC to Switch.

Anyone who thinks a developer builds a game to take advantage of each and every single GPU in a product stack ranging from something like a GTX 1050 to an RTX 2080 Super... really doesn't understand PC hardware, development and gaming.

A developer builds a single game. That is it.

The individual game engines then have a plethora of settings that can be set; those settings continue to exist irrespective of hardware, whether it's an Xbox One S or a PC with 4x Titan GPUs, 256GB of RAM and a 64-core CPU.

Say for example a developer builds a console game; the engine out of the box will support Screen Space Ambient Occlusion and Horizon Based Ambient Occlusion (SSAO and HBAO respectively). - They are supported in the engine regardless.

On a console a developer may choose SSAO over HBAO on the base Xbox One and Playstation 4 because it is cheaper, but will use HBAO on the Xbox One X and Playstation 4 Pro due to the extra hardware at their disposal.

On a PC the developer doesn't choose it. - They simply expose those settings for the individual user to use as they see fit, essentially handing the work over to the customer rather than the developer.

Sometimes a developer will build a script that will detect which GPU is being used and set some settings to suit, but that isn't always a guarantee... And obviously it will not support GPUs that it has never heard of. (A rather common occurrence with older games on newer systems.)
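A bare-bones, hypothetical sketch of that split (not any real engine's API): the engine ships both AO paths, the console builds hard-code a choice per SKU, and the PC build surfaces the same switch to the user, optionally guessing a default for a GPU it recognises.

# Hypothetical illustration of the console-vs-PC settings split described
# above. The technique names are real (SSAO/HBAO); everything else is invented.
AO_TECHNIQUES = ("SSAO", "HBAO")

CONSOLE_DEFAULTS = {
    "XboxOne": "SSAO",   # cheaper technique on the base consoles
    "PS4": "SSAO",
    "XboxOneX": "HBAO",  # extra GPU headroom on the mid-gen refreshes
    "PS4Pro": "HBAO",
}

def ambient_occlusion_setting(platform, user_choice=None, detected_gpu=None):
    if platform in CONSOLE_DEFAULTS:       # console: the developer decides
        return CONSOLE_DEFAULTS[platform]
    if user_choice in AO_TECHNIQUES:       # PC: the user decides
        return user_choice
    # PC with no explicit choice: a detection script may guess a default,
    # but an unknown (e.g. newer) GPU simply falls back to the cheap option.
    return "HBAO" if detected_gpu in ("RTX 2080", "RTX 2080 Super") else "SSAO"

print(ambient_occlusion_setting("PS4Pro"))                        # HBAO
print(ambient_occlusion_setting("PC", user_choice="HBAO"))        # HBAO
print(ambient_occlusion_setting("PC", detected_gpu="FutureGPU"))  # SSAO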





--::{PC Gaming Master Race}::--

zero129 said:
SvennoJ said:

My bad, I wrongly assumed that was the Switch version. I thought it looked rather rough, thought it was just a bad screenshot.

Anyway, it does prove the point that tailor made ports >>>>>>> scaling!

How so? Porting and scaling are pretty much the same thing, as what porting is doing is scaling the engine to the hardware of the device. You and goopy seem to be under some impression that it takes a master class of a team to scale said engines and that it's impossible.

The Witcher 3 can look exactly the same as the Switch version or the PS4 version, or it can look even a gen above PS4.

There is no reason why MS can't have the XBSX version looking a gen above the XB1 version if they so want, even while using the same engine, as long as it scales well.

Everything is scalable, but what you're forgetting is that these people are trying to sell us a game. Sure, they could have scaled down Uncharted 2 to the PSP, but then it would have been an unplayable mess and no one would buy it. That's why they released a completely different Uncharted game for the PSP.

Parity is a thing developers need to work around, as they need to hit certain performance targets on the lowest common denominator. Halo Infinite for example can't be running at 360p and look like dog shit on the Xone. It has to hit at least 1080p/30fps, with all the visuals and core game elements intact from the Series X version.

Down ports are different, as they are essentially a completely different game developed by a different team. That's not the same as scalable graphics like we're seeing on PC and Series X. They take up a ton of time and resources and aren't always financially feasible. The Witcher 2 for example never got a PS3 port because it already took over a year to port it to the 360.

Last edited by goopy20 - on 14 July 2020

zero129 said:
goopy20 said:

Everything is scalable, but what you're forgetting is that these people are trying to sell us a game. Sure, they could have scaled down Uncharted 2 to the PSP, but then it would have been an unplayable experience and no one would buy it. That's why they released a completely different Uncharted game for the PSP.

Parity is a thing developers need to work around, as they need to hit certain performance targets on the lowest common denominator. Halo Infinite for example can't be running at 360p and look like dog shit on the Xone. It has to hit at least 1080p/30fps, with all the visuals and core game elements intact from the Series X version.

Down ports are different, as they are essentially a completely different game developed by a different team. That's not the same as scalable graphics like we're seeing on PC and Series X.

What you're forgetting is that you're the one trying to make the point that a game running on the same engine can't look a gen apart.

And just like I showed you with the Witcher 3 images, there is a lot that can be scaled to make a game fit any target.

Pretty sure I always said the opposite. Unreal 5 scales all the way down to mobiles; that doesn't mean they could get that PS5 tech demo running on an iPhone. I mean, technically they could, but then it would look like ass and defeat the whole purpose of showcasing the graphics capabilities of both the engine and the iPhone.

Like I said, scalability of the engines and hardware won't hold Series X back; parity with the Xbox One, Lockhart and low-end PCs will. With every design idea they will have to ask themselves "will this run at 30fps/1080p on a Jaguar CPU, a 1.3Tflops GPU and an HDD too?" If the answer is no, then they just have to scale it down across the board or remove it altogether.