
VGC: Switch 2 Was Shown At Gamescom Running Matrix Awakens UE5 Demo

Norion said:
Soundwave said:

The performance cost for truly accurate lighting will always be enormous. 

Hollywood movies still need hours to render a single frame largely because several GPUs need that much time to accurately account for the light bounces. And these are workstations that put a PS5 to shame, probably even today have more performance than a PS6 will have. If they could do that at even 3 frames per second, they obviously would do that instead. 

A game console in real time is always going to have to fake it, and even that will absolutely tank its performance. 

Yeah, Mario 64 has great, cool reflections on the water (a cherry-picked area to show it off), but this is a game from 1996 that needs a $1500 GPU to run like that, lol, which kinda proves the point. 

How does needing hours to render a single frame for big budget films change the fact that path tracing is a huge leap over traditional methods in video games and will cause a big increase in visual fidelity when it becomes standard in them?

The performance cost is going to decline over time. You just need to compare how much better 4000 series cards are at ray tracing than 2000 series ones, so eventually it won't be difficult for even mid-range hardware to do path tracing.

It's far more than just the water, it's the entire environment that gets a big boost to fidelity and it doesn't prove your point at all since you don't need a GPU anywhere near that expensive to run it. It's literally a clickbait title, come on. A game as demanding as Cyberpunk has path tracing now and you don't need a top end GPU to run it thanks to DLSS. The point is that even games with simple visuals due to a low budget will benefit a lot from it by having far better lighting than anything on the PS4.

It's a nice to have, it's not a must have for me. 

To be honest, I also would not be surprised at all if Nvidia comes up with a way for AI to look at frames of a ray-traced environment and eventually learn to simulate a similar look with baked effects and reflections, bypassing the whole real-time light-bounce issue entirely. You can create a shiny/reflective environmental look even with baked lighting; the human brain is not really wired to understand exactly how light bounces off objects. Yes, a person can say headlights from a car should illuminate outwards in a cone pattern, but do they know exactly how that light would reflect off a sign or a window 15 feet away at a 27-degree angle? No. You have a lot of leeway to be able to cheat.
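For illustration, a minimal sketch of that idea in PyTorch: train a small convolutional network on paired screenshots to nudge rasterized/baked frames toward their path-traced counterparts. Everything here is a hypothetical toy (random tensors stand in for real screenshot pairs); an actual pipeline would be vastly more elaborate.

```python
# Toy sketch: learn a mapping from rasterized frames to a path-traced "look".
# Dataset, sizes, and architecture are illustrative assumptions only.
import torch
import torch.nn as nn

class BakeToRT(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),  # predict the path-traced RGB frame
        )

    def forward(self, x):
        return self.net(x)

model = BakeToRT()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

# Stand-ins for paired frames, shape (N, 3, H, W), values in [0, 1]
raster = torch.rand(4, 3, 128, 128)   # rasterized/baked screenshots
traced = torch.rand(4, 3, 128, 128)   # path-traced "ground truth"

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(model(raster), traced)
    loss.backward()
    opt.step()
```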

And you'll never get truly accurate real-time lighting, not any time soon. Even the highest-end GPU can't run Blender in real time at even 5 frames per second once you push the light bounces to a certain level of accuracy, even on a scene with, like, N64-level geometry and textures. This is why movies, because they push detail to the maximum, have to be rendered frame by individual frame, sometimes taking hours on end just to render one second of usable footage.
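Some back-of-the-envelope numbers make the scale of the problem obvious (all figures below are illustrative assumptions, not measurements):

```python
# Rough ray count for brute-force path tracing a 4K frame.
width, height = 3840, 2160          # 4K frame
samples_per_pixel = 1000            # film-style Monte Carlo sampling for a clean image
bounces = 8                         # light bounces traced per sample path

rays_per_frame = width * height * samples_per_pixel * bounces
print(f"{rays_per_frame:,} rays per frame")               # ~66 billion
print(f"{rays_per_frame * 30:,} rays per second at 30fps") # ~2 trillion
```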

If anything, this wouldn't be a bad use case for the "external Switch dock that has hardware inside of it". For people who really want ray tracing, let it be an option to buy an external dock with a beefier GPU that handles all the lighting/reflections when docked. It wouldn't change the gameplay; it would just add some more processor-intensive lighting and reflection effects. 

Here's another example, from Blender (a 3D computer graphics program). EEVEE is basically baked/rasterized lighting, while Cycles is ray tracing (real light bounces accurately processed). First, look how closely EEVEE is able to match Cycles. Second, understand that the EEVEE version took about 1/7th the time to render as the Cycles version. 
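If you have Blender installed, you can reproduce that kind of comparison yourself. A rough sketch using Blender's Python API (run inside Blender; note the engine identifiers vary by version, e.g. 'BLENDER_EEVEE_NEXT' in 4.2+):

```python
# Time the same scene under EEVEE vs. Cycles inside Blender.
import bpy
import time

def timed_render(engine):
    bpy.context.scene.render.engine = engine
    start = time.perf_counter()
    bpy.ops.render.render(write_still=False)  # render the current scene
    return time.perf_counter() - start

eevee_time = timed_render('BLENDER_EEVEE')
cycles_time = timed_render('CYCLES')
print(f"EEVEE: {eevee_time:.1f}s, Cycles: {cycles_time:.1f}s "
      f"({cycles_time / eevee_time:.1f}x slower)")
```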

In a video game where you are moving fast, are you really going to notice the difference that much? Sure, in a Hollywood movie that's blown up and meant to be shown on a 200-foot screen, they will go for max fidelity... but for a video game? I don't know if it's necessary to take the massive processing hit. 

Last edited by Soundwave - on 08 September 2023

Soundwave said:
zeldaring said:

I agree with you for the most part, but 4K/60fps is worth the difference for me lol. If you don't care about 4K and 60fps in multiplatform games you will be very happy, and Nintendo games should look amazing at 30fps.

If you care about 4K 60fps (or why stop there, why not 90fps? 120fps?), the only platform you should accept is a PC. 

Get that weak ass PS5 and XSX shit outta here. 

But no, to me, playing PS4-tier graphics at just a higher resolution and double the frame rate isn't blowing my mind. I remember when generational leaps used to be, like... actual generational leaps. 

If the PS2 had come out and just played a PS1 game like MGS at double the frame rate and resolution, the system would've straight up tanked, lol. People expected a lot more from generational shifts back then. 

Even with Switch 2, I'm sure they will inevitably release BotW/TotK, probably with 4K DLSS at 60 frames per second. It's not like I'm going to fall off my couch in amazement; it'll be more like "eh, yeah, nice I guess". 

I would love to get a powerful PC, but this is what I feel comfortable spending on gaming hardware that fits my budget. At the same time, as you get older and have seen everything, nothing really blows your mind anymore; diminishing returns are real. The 360/PS3 were huge leaps, but they never blew my mind, just really beautiful graphics. Same with the PS4: a huge leap, but just beautiful graphics.



Norion said:
Soundwave said:

The performance cost for truly accurate lighting will always be enormous. 

Hollywood movies still need hours to render a single frame largely because several GPUs need that much time to accurately account for the light bounces. And these are workstations that put a PS5 to shame, probably even today have more performance than a PS6 will have. If they could do that at even 3 frames per second, they obviously would do that instead. 

A game console in real time is always going to have to fake it, and even that will absolutely tank its performance. 

Yeah, Mario 64 has great, cool reflections on the water (a cherry-picked area to show it off), but this is a game from 1996 that needs a $1500 GPU to run like that, lol, which kinda proves the point. 

How does needing hours to render a single frame for big budget films change the fact that path tracing is a huge leap over traditional methods in video games and will cause a big increase in visual fidelity when it becomes standard in them?

The performance cost is going to decline over time. You just need to compare how much better 4000 series cards are at ray tracing than 2000 series ones, so eventually it won't be difficult for even mid-range hardware to do path tracing.

It's far more than just the water, it's the entire environment that gets a big boost to fidelity and it doesn't prove your point at all since you don't need a GPU anywhere near that expensive to run it. It's literally a clickbait title, come on. A game as demanding as Cyberpunk has path tracing now and you don't need a top end GPU to run it thanks to DLSS. The point is that even games with simple visuals due to a low budget will benefit a lot from it by having far better lighting than anything on the PS4.

Maybe with the PS5 Pro, but path tracing and ray tracing are so taxing.



Chrkeller said:

I know. The point is that Nintendo, and only Nintendo, sees the value in DLSS and everyone else is stupid? You don't find it odd that literally nobody else is going this route?  

I'll leave it at this, given there is no need to repeat myself... if this tech were as good as people are making it out to be... more than just Nintendo would be lining up.  

You are ignoring the economics of this issue. 

Nintendo has a deal with Nvidia that Asus, Valve, etc. likely aren't able to get because 1. they will sell an order of magnitude fewer units (the Steam Deck sold 1.6 million in 2022) and 2. they are smaller companies with less brand recognition, limited to the x86 platform. Nintendo is about five times the size of Valve and 25 times the size of Asus in net worth. The Switch sells 10-50 times as many units as a Steam Deck/ROG Ally in a given year. 

There is also the matter that an APU + discrete GPU costs a lot more (both in terms of money and power) than the single package that is an Nvidia Tegra chip. These companies don't go ARM because Linux on ARM (and Valve's client on it) is in its infancy. Other platforms don't even support it. 

Also, DLSS is hardly a Nintendo exclusive. There is a reason why Nvidia is dominating the GPU market despite being far more costly: it's the feature set that Nvidia GPUs have, whether we are talking about DLSS (for gaming) or CUDA (for GPGPU compute in video editing, 3D modeling, and generative AI). 

Last edited by sc94597 - on 08 September 2023

sc94597 said:
Chrkeller said:

I know. The point is that Nintendo, and only Nintendo, sees the value in DLSS and everyone else is stupid? You don't find it odd that literally nobody else is going this route?  

I'll leave it at this, given there is no need to repeat myself... if this tech were as good as people are making it out to be... more than just Nintendo would be lining up.  

You are ignoring the economics of this issue. 

Nintendo has a deal with Nvidia that Asus, Valve, etc. likely aren't able to get because 1. they will sell an order of magnitude fewer units (the Steam Deck sold 1.6 million in 2022) and 2. they are smaller companies with less brand recognition, limited to the x86 platform. Nintendo is about five times the size of Valve and 25 times the size of Asus in net worth. The Switch sells 10-50 times as many units as a Steam Deck/ROG Ally in a given year. 

There is also the matter that an APU + discrete GPU costs a lot more (both in terms of money and power) than the single package that is an Nvidia Tegra chip. These companies don't go ARM because Linux on ARM (and Valve's client on it) is in its infancy. Other platforms don't even support it. 

Also, DLSS is hardly a Nintendo exclusive. There is a reason why Nvidia is dominating the GPU market despite being far more costly: it's the feature set that Nvidia GPUs have, whether we are talking about DLSS (for gaming) or CUDA (for GPGPU compute in video editing, 3D modeling, and generative AI). 

DLSS is gonna be useless for Switch 2 in most games. People keep talking about DLSS and they don't even know how it works.



zeldaring said:

DLSS is gonna be useless for Switch 2 in most games. People keep talking about DLSS and they don't even know how it works.

One of the biggest use-cases for super-sampling on PC handhelds (using FSR) is to save battery life, and to improve image-quality while using less GPU power. DLSS is even better at this than FSR. 

DLSS 2.0 is not some marginal feature, but rather the core of Nvidia's "DLSS" feature suite. The fact that DLSS 3.0 likely won't be part of Switch 2's feature set (because of the Ampere architecture more than anything else) doesn't mean that the features included are "useless." 

By the way, I am a data scientist who has implemented CNNs (Convolutional Neural Networks) in the workplace, in academic settings (I have an MSDS), and in personal projects. I know how DLSS works on a much deeper level than most people, given that I've actually implemented convolutional autoencoders using both PyTorch and TensorFlow.
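For anyone curious what that looks like, here's a minimal convolutional autoencoder in PyTorch, just a toy sketch of the encode-then-reconstruct shape that learned upscalers build on, not anyone's production code:

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: downsample the image into a compact feature map
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample the feature map back to full image resolution
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
x = torch.rand(1, 3, 64, 64)   # dummy RGB image
recon = model(x)
print(recon.shape)             # torch.Size([1, 3, 64, 64])
```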

In fact, I am planning to start a doctorate (D.Eng) next year, and the thesis I am leaning towards will likely compare CNNs with Bayesian vector time-series analyses on a certain subset of business problems. 

Last edited by sc94597 - on 08 September 2023

sc94597 said:
zeldaring said:

DLSS is gonna be useless for Switch 2 in most games. People keep talking about DLSS and they don't even know how it works.

One of the biggest use-cases for super-sampling on PC handhelds (using FSR) is to save battery life, and to improve image-quality while using less GPU power. DLSS is even better at this than FSR. 

DLSS 2.0 is not some marginal feature, but rather the core of Nvidia's "DLSS" feature suite. The fact that DLSS 3.0 likely won't be part of Switch 2's feature set (because of the Ampere architecture more than anything else) doesn't mean that the features included are "useless." 

By the way, I am a data scientist who has implemented CNNs (Convolutional Neural Networks) in the workplace, in academic settings (I have an MSDS), and in personal projects. I know how DLSS works on a much deeper level than most people, given that I've actually implemented convolutional autoencoders using both PyTorch and TensorFlow.

In fact, I am planning to start a doctorate (D.Eng) next year, and the thesis I am leaning towards will likely compare CNNs with Bayesian vector time-series analyses on a certain subset of business problems. 

Most everything I read says it needs 1440p/50-60fps to really make a difference, and impressions are all over the place, with many saying it's useless and others saying it's great; many say it makes the image worse and the input lag sucks.



Nintendo was already using FSR for Xenoblade 3. They will use DLSS.



Bite my shiny metal cockpit!

zeldaring said:

Most everything I read says it needs 1440p/50-60fps to really make a difference, and impressions are all over the place, with many saying it's useless and others saying it's great; many say it makes the image worse and the input lag sucks.

DLSS 3.0 is an entirely different beast from DLSS 2.0. DLSS 2.0 improves image quality beyond the internal render resolution, regardless of that render resolution, and is even better than the native target resolution in numerous cases. Even FSR does this, which is why there are countless videos on how best to use FSR with the Steam Deck (native resolution of 1280 x 800) to improve battery life with minimal image quality loss (if any). 

DLSS 3.0, because the goal is temporal interpolation (producing missing frames, not just missing pixels), has a few quirks that will have to be worked out. But the Switch 2 likely won't even support DLSS 3.0 anyway: Ampere's Optical Flow Accelerator (OFA) isn't fast enough. That's why the 3000 series Nvidia GPUs don't have DLSS 3.0 (or at least according to Nvidia that is why, which seemed suspect until they announced 3.5 for all RTX GPUs). 
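To make "producing missing frames" concrete, here's a crude toy sketch of temporal interpolation using OpenCV's dense optical flow. This is nothing like Nvidia's actual OFA pipeline, just the general idea of warping along motion vectors to synthesize an in-between frame:

```python
# Toy frame interpolation: estimate per-pixel motion between two frames and
# sample halfway along it to approximate the missing middle frame.
import cv2
import numpy as np

def midpoint_frame(frame_a, frame_b):
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    # Dense optical flow from frame A to frame B (Farneback method)
    flow = cv2.calcOpticalFlowFarneback(gray_a, gray_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Sample frame B halfway along each pixel's motion vector
    map_x = (grid_x + 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (grid_y + 0.5 * flow[..., 1]).astype(np.float32)
    return cv2.remap(frame_b, map_x, map_y, cv2.INTER_LINEAR)
```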

One of the things that has hindered the original Switch is that games have to work well in both portable and docked mode, and in portable mode you can't have a battery life of 30 minutes because the game is pushing the platform to its limits. DLSS 2.0 allows far more flexibility in this sphere, even for Nintendo themselves. You'll rarely have situations where resolution falls to 360p (e.g. Xenoblade 2), like we've seen on the Switch. 

My guess is that the internal resolution will always be between 720p and 1080p, and the effective resolution (after applying DLSS) will be between 1080p and 1440p when docked, with many games targeting a 60fps they otherwise wouldn't reach. If Nintendo were savvy, they'd put a VRR screen in the device, and many games could even target a stable 40fps or 50fps without much screen tearing. 
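To put rough numbers on that guess (the resolutions below are just the assumptions from the paragraph above):

```python
# How many pixels the GPU actually shades at various internal resolutions,
# relative to a 1440p DLSS output resolution.
resolutions = {"720p": (1280, 720), "1080p": (1920, 1080), "1440p": (2560, 1440)}

out_w, out_h = resolutions["1440p"]
output_pixels = out_w * out_h

for name, (w, h) in resolutions.items():
    internal = w * h
    print(f"{name} internal -> 1440p output: shading "
          f"{internal / output_pixels:.0%} of the output pixels")
# 720p  -> 25% of 1440p's pixels
# 1080p -> 56% of 1440p's pixels
```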

While PS5-level visuals/framerates is indeed hyperbole, being able to meet or even exceed the Series S in many cases (or at least having trade-offs where there is no clearly better version) is certainly doable. That is all the Switch 2 needs to do to keep up with the current generation. 



sc94597 said:
zeldaring said:

Most everything I read says it needs 1440p/50-60fps to really make a difference, and impressions are all over the place, with many saying it's useless and others saying it's great; many say it makes the image worse and the input lag sucks.

DLSS 3.0 is an entirely different beast from DLSS 2.0. DLSS 2.0 improves image quality beyond the internal render resolution, regardless of that render resolution, and is even better than the native target resolution in numerous cases. Even FSR does this, which is why there are countless videos on how best to use FSR with the Steam Deck (native resolution of 1280 x 800) to improve battery life with minimal image quality loss (if any). 

DLSS 3.0, because the goal is temporal interpolation (producing missing frames, not just missing pixels), has a few quirks that will have to be worked out. But the Switch 2 likely won't even support DLSS 3.0 anyway: Ampere's Optical Flow Accelerator (OFA) isn't fast enough. That's why the 3000 series Nvidia GPUs don't have DLSS 3.0 (or at least according to Nvidia that is why, which seemed suspect until they announced 3.5 for all RTX GPUs). 

One of the things that has hindered the original Switch is that games have to work well in both portable and docked mode, and in portable mode you can't have a battery life of 30 minutes because the game is pushing the platform to its limits. DLSS 2.0 allows far more flexibility in this sphere, even for Nintendo themselves. You'll rarely have situations where resolution falls to 360p (e.g. Xenoblade 2), like we've seen on the Switch. 

My guess is that the internal resolution will always be between 720p and 1080p, and the effective resolution (after applying DLSS) will be between 1080p and 1440p when docked, with many games targeting a 60fps they otherwise wouldn't reach. If Nintendo were savvy, they'd put a VRR screen in the device, and many games could even target a stable 40fps or 50fps without much screen tearing. 

While PS5-level visuals/framerates is indeed hyperbole, being able to meet or even exceed the Series S in many cases (or at least having trade-offs where there is no clearly better version) is certainly doable. That is all the Switch 2 needs to do to keep up with the current generation. 

Alright, let's make a bet then: most ports will be inferior to the Series S. Winner gets to put a signature of whatever he wants on the bottom of each of the loser's posts.