
VGC: Switch 2 Was Shown At Gamescom Running Matrix Awakens UE5 Demo

Oneeee-Chan!!! said:

The GPU of the PS3 is equivalent to a 7600 GT.
If there were a PS3 Pro, it would be equivalent to a 7800 GTX or 7900 GTX.

It's not. The RSX is a G70/G71 hybrid, in-between a 7800 GTX and a 7900 GT.

A PS3 Pro would have shipped with at least a Tesla-based GPU like the GTX 260. At 1.4B transistors, that is already almost as big as the Tegra X1 APU in the Switch (2B transistors) while clocking much higher.

The Switch is definitely well beyond a PS3 (whose CPU and GPU total roughly 0.55B transistors), but it's still quite far from the PS4/XB1 (~5B transistors at higher clocks). Any mid-generation console from back then would likely have outperformed it.
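To put those counts side by side, here's a quick sketch (just the rounded figures quoted above; clock speed differences are not accounted for):

```python
# Approximate transistor counts quoted above, in billions.
chips = {
    "PS3 (Cell + RSX)": 0.55,
    "GTX 260 (GT200)": 1.4,
    "Switch (Tegra X1)": 2.0,
    "PS4/XB1 APU": 5.0,
}

baseline = chips["PS3 (Cell + RSX)"]
for name, count in chips.items():
    print(f"{name}: {count}B transistors, ~{count / baseline:.1f}x the PS3")
```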

Cobretti2 said:
zeldaring said:

Yup, here is GOW using FSR 2.0 upscaling. PS5 secret sauce confirmed. I don't really understand why people are making such a big deal out of this when every single PC gamer says it doesn't look great upscaling from low resolutions. Judging off YouTube videos is useless.

I watched the video and the results are comparable; not identical, but good enough.

Look at it this way: in one scene it guessed and added too many water reflections, but how would a general gamer know it did that unless they had the PS5 version and the Switch 2 version of the game side by side, stopped moving, and compared pixel by pixel?

I'll use a TV show example. Many people are blown away by how great the Amazon 1080p cut of Buffy looks. Those who know the original source is 4:3, and that a 16:9 image was basically cropped out of it and scaled up, understand there is information missing. But the general consumer just remembers watching Buffy on blurry VHS; now it's sharp and crisp, and they don't realise it's zoomed in with detail cropped out. To make things worse, the colour corrections done in post-processing weren't reintroduced, so night scenes look almost like daytime, but again the general consumer is none the wiser.

Back to your video.

In reality there is nothing wrong with the two FSR images unless you are comparing them to the native source. OK, so in one scene the dragon on the wooden box had less detail? Big deal; most gamers don't stand around admiring the detail on a wooden box.

In the parts where he walked around they looked comparable in motion, and I'd be happy playing any of those three versions.

Now, the reason PC gamers say it doesn't look great is something called consumer bias.

Think of it like chocolate: buy a $100 block vs a $5 block. If you know which costs what, your brain will automatically bias you into thinking the $100 one tastes better. The same applies to graphics cards. In my head I just spent $1,500 on the latest and greatest, so I'm like "oh yeah, this is much better than upscaling on my old graphics card." In reality we know we have reached diminishing returns, so our brain tricks us into thinking it's much better because of the money we spent, to justify the purchase. Is the new card better? Absolutely. But to say the old one using FSR isn't comparable is just fooling yourself, as that video demonstrated they all look about the same, especially in motion. The image may not be an accurate representation of the native image, but it has managed to construct something a general gamer would be happy with, as they're none the wiser.

To each their own, but I think PC gamers not only figured out resolution's diminishing returns, they figured it out way before console gamers did. Think about it: the PS5 and Xbox are pushing for 4K, while PC gamers have long settled at 1440p and are pushing for 120 fps, because the jump from 1440p to 4K is minimal while 30 fps to 120 fps is the Grand Canyon. PC gamers are also chasing heavier effects, as seen with ray tracing.

Though perhaps we are saying the same thing. Resolution has hit a ceiling; the future is high fps, RT (when done properly), etc.
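To put rough numbers on those two jumps, here's a quick sketch (my own arithmetic, assuming 16:9 resolutions):

```python
def pixels(w: int, h: int) -> int:
    return w * h

# Resolution jump: 1440p -> 4K
ratio = pixels(3840, 2160) / pixels(2560, 1440)
print(f"4K has {ratio:.2f}x the pixels of 1440p")            # ~2.25x

# Frame rate jump: 30 fps -> 120 fps
print(f"frame time: {1000/30:.1f} ms -> {1000/120:.1f} ms")  # 33.3 ms -> 8.3 ms
```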



Chrkeller said:
Cobretti2 said:

I watched the video and the results are comparable; not identical, but good enough.

Look at it this way: in one scene it guessed and added too many water reflections, but how would a general gamer know it did that unless they had the PS5 version and the Switch 2 version of the game side by side, stopped moving, and compared pixel by pixel?

I'll use a TV show example. Many people are blown away by how great the Amazon 1080p cut of Buffy looks. Those who know the original source is 4:3, and that a 16:9 image was basically cropped out of it and scaled up, understand there is information missing. But the general consumer just remembers watching Buffy on blurry VHS; now it's sharp and crisp, and they don't realise it's zoomed in with detail cropped out. To make things worse, the colour corrections done in post-processing weren't reintroduced, so night scenes look almost like daytime, but again the general consumer is none the wiser.

Back to your video.

In reality there is nothing wrong with the two FSR images unless you are comparing them to the native source. OK, so in one scene the dragon on the wooden box had less detail? Big deal; most gamers don't stand around admiring the detail on a wooden box.

In the parts where he walked around they looked comparable in motion, and I'd be happy playing any of those three versions.

Now, the reason PC gamers say it doesn't look great is something called consumer bias.

Think of it like chocolate: buy a $100 block vs a $5 block. If you know which costs what, your brain will automatically bias you into thinking the $100 one tastes better. The same applies to graphics cards. In my head I just spent $1,500 on the latest and greatest, so I'm like "oh yeah, this is much better than upscaling on my old graphics card." In reality we know we have reached diminishing returns, so our brain tricks us into thinking it's much better because of the money we spent, to justify the purchase. Is the new card better? Absolutely. But to say the old one using FSR isn't comparable is just fooling yourself, as that video demonstrated they all look about the same, especially in motion. The image may not be an accurate representation of the native image, but it has managed to construct something a general gamer would be happy with, as they're none the wiser.

To each their own, but I think PC gamers not only figured out resolution's diminishing returns, they figured it out way before console gamers did. Think about it: the PS5 and Xbox are pushing for 4K, while PC gamers have long settled at 1440p and are pushing for 120 fps, because the jump from 1440p to 4K is minimal while 30 fps to 120 fps is the Grand Canyon. PC gamers are also chasing heavier effects, as seen with ray tracing.

Though perhaps we are saying the same thing. Resolution has hit a ceiling; the future is high fps, RT (when done properly), etc.

We are saying the same thing. I think the whole "comparable" argument is about visuals only, not performance. As we saw in that video, they look similar, and that might be what the dev meant.

Now, looking the same but running at 30, 60, or 120 fps is the difference we are actually contemplating. I don't think "comparable" even factored that aspect of the game in. Visually it may look close, as was reported. Hell, I saw another post in here earlier showing FFVII running on a Steam Deck, and that looks pretty good to me visually.

The other thing that hasn't really been talked about in depth is optimisation. What the hardware can do and what developers choose to do are two different things. The demo would have been optimised to the Nth degree to showcase the hardware for a sales pitch. One developer may choose to take the time to do it well and get visual results comparable to the PS5, but history tells us most devs (or more so their publishers) won't want to invest too much money and time into porting to a Nintendo platform; we have seen some shoddy efforts on Switch. We can only hope that with DLSS available, they put in enough effort to at least get it working well.

People also forget that the PS5 games of today aren't exactly mind-blowing, because they are cross-gen, or perhaps it's purely financial and devs don't want to invest too much into them. There were some impressive demos shown for the PS5, but most games have yet to look anything like those demos, so what's the reason? The "comparable" statement could be accurate for what is out today.

Another good example of effort can be seen in, say, H.265 video encodes. If you use the GPU's hardware acceleration to do the encoding, you lose quality versus the much slower CPU encode when targeting the same file size. That is why you often see a rushed release file that in reality could be roughly 3x smaller if encoded the slow way at a much lower bitrate.
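As a concrete illustration of that trade-off, here's a minimal sketch of the two paths with ffmpeg (my own example, not from the post; the filenames and the 3 Mbit/s target are hypothetical, and the hardware path assumes an NVIDIA GPU with NVENC):

```python
import subprocess

SRC = "episode.mkv"  # hypothetical input file

# Fast path: NVENC hardware encoder on the GPU. Quick, but at a fixed
# bitrate it generally preserves less detail than a slow software encode.
subprocess.run([
    "ffmpeg", "-i", SRC,
    "-c:v", "hevc_nvenc", "-b:v", "3M",
    "hw_encode.mkv",
], check=True)

# Slow path: libx265 on the CPU with a slow preset. Takes far longer,
# but squeezes more quality out of the same 3 Mbit/s budget.
subprocess.run([
    "ffmpeg", "-i", SRC,
    "-c:v", "libx265", "-preset", "slow", "-b:v", "3M",
    "sw_encode.mkv",
], check=True)
```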



 

 

haxxiy said:
Oneeee-Chan!!! said:

The GPU of the PS3 is equivalent to a 7600 GT.
If there were a PS3 Pro, it would be equivalent to a 7800 GTX or 7900 GTX.

It's not. The RSX is a G70/G71 hybrid, in-between a 7800 GTX and a 7900 GT.

A PS3 Pro would have shipped with at least a Tesla-based GPU like the GTX 260. At 1.4B transistors, that is already almost as big as the Tegra X1 APU in the Switch (2B transistors) while clocking much higher.

The Switch is definitely well beyond a PS3 (whose CPU and GPU total roughly 0.55B transistors), but it's still quite far from the PS4/XB1 (~5B transistors at higher clocks). Any mid-generation console from back then would likely have outperformed it.

Really?
I had heard that the GPU specs were changed just before the PS3 launch.

Besides, aren't the GPUs of the PS3 and Xbox 360 almost equivalent?

If the PS3 GPU is equivalent to a 7900 GT, then which GPU is the Xbox 360's GPU equivalent to?

I was thinking the X800.



Getting back to the actual topic, I'm going to take a shot at what I think the Switch 2 specs are, based on the Nvidia leak from over a year ago, and merge that with the news from VGC that The Matrix Awakens demo is running on said chip. So, just as a quick recap ... (you can Google this stuff yourself if you want more detail on it).

About 18+ months ago, a very, very reliable Nvidia leaker named Kopite (who has leaked multiple Nvidia GPU details that were 100% correct) said the next-gen Switch chipset was designated Tegra T239. The Tegra T239 didn't really exist publicly at that point; it wasn't an officially announced product. A few months later an Nvidia employee accidentally let it slip that a Tegra T239 does indeed exist.

Then there was a data hack of Nvidia, and sure enough the Tegra T239 was in Nvidia's files, and people were able to glean a bunch of data about the chip. For starters, the big thing was that the graphics API for this Tegra T239 chip is listed in the Nvidia data as NVN2, a new graphics API. Nvidia's graphics API for the Switch 1 (and only the Switch) is NVN. So there you go. The leak also basically shows the CPU and GPU configuration of this chip as follows:

Tegra T239, codename "Drake"
8 ARM Cortex-A78 (A78AE?) CPU cores
12 SMs, 1,536 CUDA cores (6x the Switch 1's 256)
DLSS and RTX (ray tracing) hardware support
Graphics API known as "NVN2" (the Switch 1 API is "NVN")

This is from Nvidia's own Linux kernel data leak.

This is from Famiboards: a rough outline of what the Tegra T239 would theoretically output at different GPU clock speeds (thanks to Z0m3le).

Portable
400 MHz -> 1.228 TFLOPS
500 MHz -> 1.536 TFLOPS
600 MHz -> 1.843 TFLOPS

Docked
768 MHz (Switch 1 docked clock) -> 2.36 TFLOPS
1 GHz -> 3.07 TFLOPS
1.152 GHz -> 3.539 TFLOPS
1.3 GHz -> 3.993 TFLOPS
1.5 GHz -> 4.6 TFLOPS
1.8 GHz -> 5.53 TFLOPS
2 GHz -> 6.14 TFLOPS
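For reference, those figures all come from the standard FP32 throughput formula (CUDA cores x 2 ops per clock x clock speed); a quick sketch that reproduces the table, assuming the 1,536 cores from the leak:

```python
CUDA_CORES = 1536  # 12 SMs x 128 cores each, per the T239 leak

def fp32_tflops(clock_ghz: float, cores: int = CUDA_CORES) -> float:
    # Each CUDA core can retire one fused multiply-add (2 FP32 ops) per clock.
    return cores * 2 * clock_ghz / 1000.0

for mhz in (400, 500, 600, 768, 1000, 1152, 1300, 1500, 1800, 2000):
    print(f"{mhz} MHz -> {fp32_tflops(mhz / 1000):.2f} TFLOPS")
```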

The current Tegra X1's GPU tops out at 1 GHz ... but because the 20nm version of the chip was inefficient and ran a bit hot back in 2015, Nintendo downclocked it to 768 MHz docked. The Tegra X1 in Nvidia's Shield micro-console still runs at the full 1000 MHz. When the Tegra X1 was die-shrunk to 16nm, the maximum GPU clock increased to 1.27 GHz for the Mariko Switch/Lite models. The Steam Deck, by comparison, clocks its GPU at up to 1600 MHz (1.6 GHz).

So if this Matrix Awakens report is true, then I'm guessing the Tegra T239 is running at 1 GHz docked. That was the Tegra X1's stock top clock back in the OG Switch era, and it's still lower than what the more modern Mariko Tegra X1 in the Switch OLED/Lite can do. So it's not an absurdly high clock: it's a lot lower than the Steam Deck's GPU clock, and a good deal lower than the Mariko 16nm Tegra X1's top end too.

But at 1 GHz docked you're getting about 3 teraflops ... and 3 teraflops is knocking on the door of the Xbox Series S (4 teraflops). When you add DLSS on top of that, could a chip like that run the Matrix Awakens demo at a quality that's worth demoing? With a native res of 600p or so, DLSSed to 1440p, let's say? I would say yeah. That actually sounds right on.
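For a sense of how much reconstruction DLSS would be doing in that scenario, a quick sketch (the resolutions are my own illustrative picks, not confirmed figures):

```python
def pixels(w: int, h: int) -> int:
    return w * h

internal = pixels(1066, 600)    # ~600p internal render (illustrative)
output   = pixels(2560, 1440)   # 1440p output target

print(f"internal: {internal:,} px, output: {output:,} px")
print(f"DLSS would reconstruct ~{output / internal:.1f}x the rendered pixels")
# For comparison, standard DLSS Performance mode (half resolution per axis)
# is 1280x720 -> 2560x1440, i.e. exactly 4x.
```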

Last edited by Soundwave - on 10 September 2023

Pemalite said:
sc94597 said:

1. While it is true we have no idea what the Switch 2's Tegra will be like, a low-TDP mobile 3050 level of performance (like the one in the video) is in line with the upper-end rumors, especially if they switched from Ampere to Lovelace as some of the more recent rumors allude to. Furthermore, it doesn't invalidate the point I was making: DLSS is a significant improvement even for the lowest-end Ampere chips. The 25W mobile RTX 2050 (which is technically GA107, despite the 20-series name) also benefits significantly from DLSS despite being heavily cut down relative to the 3050 mobile. It's often the difference between a game being unplayable and being lockable to 30 fps (or 60 fps).

Just trying to keep people's expectations in check.

The 3050 mobile chip still has 2,048 functional units that can boost to 1.7 GHz, backed by upwards of 192 GB/s of memory bandwidth.

That is still going to outperform a fully equipped AGX Orin 64GB, let alone a cut-down variant like the AGX Orin 32GB or the Orin NX 16GB.

Part of the reason is TDP: the 3050 can clock higher and more consistently, as it isn't sharing its power budget with a CPU cluster and has a higher default TDP to start with.

Provided Nintendo opts for Orin in the first place.

A 1.7 GHz boost is only possible on the top-power model (80W). The lower-TDP models (the one in the video was a 45W model with a max boost clock of 1.35 GHz) have much lower base and boost clocks.

https://www.notebookcheck.net/NVIDIA-GeForce-RTX-3050-Laptop-GPU-Benchmarks-and-Specs.513790.0.html

Also, the RTX 3050 mobile is VRAM-limited with only 4GB. In a game like Final Fantasy 7 Intergrade (among many other recent releases), 4GB is a significant bottleneck at 1080p.

A Lovelace Tegra with a TDP of about 15-20W and 8GB of memory available for graphics could be competitive with an RTX 3050 mobile at 35-45W, especially on a closed platform where game optimization can be targeted. I'd expect that to become even more true as games continue to become VRAM hogs.

"The GPU is expected to have between 12 and 16 Stream Multiprocessors, which for Ada Lovelace house 128 cores each, for a total of 1,536 to 2,048 CUDA cores.

All told, the Switch successor is expected to have a pair each of Arm Cortex X4 and Cortex A720 cores backed by a quartet of Cortex A520 cores, this 12-16 SM Ada Lovelace GPU, and somewhere between 12 and 16 GB of memory, which we presume to be LPDDR5. Knowing that an Apple M2 Pro MacBook Pro has 200 GB per second of memory bandwidth on a 128-bit memory bus, we're hopeful the Switch 2 will be in the ballpark of 150-200 GB/sec, assuming a 96-bit bus for 12 GB of memory or 128-bit for 16. It would be a monster compared to the current Switch while still potentially being something that could fit into the thermal and power profile of a tablet. The video mentions the SoC might be made by MediaTek."

I don't think it will necessarily happen, but it is the upper limit of possibilities.

Edit: It wouldn't be surprising at all if the performance difference between the Switch 2 and an RTX 3050 at 35W were smaller than the difference between the RTX 3050 at 35W and at 80W.
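As a rough check on the bandwidth figures in that quote, peak LPDDR5 bandwidth is just bus width times data rate; a quick sketch with a few illustrative configurations (my own picks, since the T239's bus width and memory speed are not confirmed):

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_mtps: int) -> float:
    # bytes per transfer (bus width / 8) times million transfers per second
    # gives MB/s; divide by 1000 for GB/s.
    return (bus_width_bits / 8) * data_rate_mtps / 1000.0

# Illustrative configurations only; none of these are confirmed Switch 2 specs.
for name, bits, rate in [
    ("128-bit LPDDR5-6400", 128, 6400),
    ("128-bit LPDDR5X-8533", 128, 8533),
    ("96-bit LPDDR5X-8533", 96, 8533),
]:
    print(f"{name}: {peak_bandwidth_gbs(bits, rate):.1f} GB/s")
```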

Last edited by sc94597 - on 10 September 2023

IIRC, the Matrix demo with TSR turned off rendered at a native sub-720p on Series S (vs 1620p and much higher settings on PS5 and Series X; that's like 5x the Series S's resolution). Lumen, Nanite, virtual shadow maps and crowd density were all dialed back on Series S. Unless whatever Epic did is heavy on the CPU, I just don't see what's so unrealistic or surprising about the Switch 2 running the demo at even lower settings and native resolution.

With some sacrifices, the Switch 2 could run the demo even at half the power of the Series S. I wouldn't read much into people saying it's "comparable" to PS5/Series X performance. I bet they'd think the Series S version was also comparable (it isn't). Diminishing returns are giving the average gamer the perception of a non-gamer grandma, and PS5/Series X power being spent on resolution is also helping the Series S and even last-gen consoles hold their own when not compared side by side on a high-resolution display.
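That "5x" figure checks out on raw pixel counts (a quick sketch, assuming 16:9 frames):

```python
def pixels(w: int, h: int) -> int:
    return w * h

ps5_series_x = pixels(2880, 1620)   # 1620p
series_s_cap = pixels(1280, 720)    # Series S reportedly rendered below this

print(f"1620p is {ps5_series_x / series_s_cap:.2f}x the pixels of 720p")  # ~5.06x
```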



Kyuu said:

IIRC, the Matrix demo with TSR turned off rendered at a native sub-720p on Series S (vs 1620p and much higher settings on PS5 and Series X; that's like 5x the Series S's resolution). Lumen, Nanite, virtual shadow maps and crowd density were all dialed back on Series S. Unless whatever Epic did is heavy on the CPU, I just don't see what's so unrealistic or surprising about the Switch 2 running the demo at even lower settings and native resolution.

With some sacrifices, the Switch 2 could run the demo even at half the power of the Series S. I wouldn't read much into people saying it's "comparable" to PS5/Series X performance. I bet they'd think the Series S version was also comparable (it isn't). Diminishing returns are giving the average gamer the perception of a non-gamer grandma, and PS5/Series X power being spent on resolution is also helping the Series S and even last-gen consoles hold their own when not compared side by side on a high-resolution display.

Pretty much yes. 

Most regular gamers are really not going to be able to immediately spot a difference here, and the nature of video games is that you're moving around 95% of the time; there aren't many games where you just stand still and do nothing.

I remember the days when system differences were easier to spot.



Soundwave said:

Getting back to the actual topic, I'm going to take a shot at what I think the Switch 2 specs are, based on the Nvidia leak from over a year ago, and merge that with the news from VGC that The Matrix Awakens demo is running on said chip. So, just as a quick recap ... (you can Google this stuff yourself if you want more detail on it).

About 18+ months ago, a very, very reliable Nvidia leaker named Kopite (who has leaked multiple Nvidia GPU details that were 100% correct) said the next-gen Switch chipset was designated Tegra T239. The Tegra T239 didn't really exist publicly at that point; it wasn't an officially announced product. A few months later an Nvidia employee accidentally let it slip that a Tegra T239 does indeed exist.

Then there was a data hack of Nvidia, and sure enough the Tegra T239 was in Nvidia's files, and people were able to glean a bunch of data about the chip. For starters, the big thing was that the graphics API for this Tegra T239 chip is listed in the Nvidia data as NVN2, a new graphics API. Nvidia's graphics API for the Switch 1 (and only the Switch) is NVN. So there you go. The leak also basically shows the CPU and GPU configuration of this chip as follows:

Tegra T239, codename "Drake"
8 ARM Cortex-A78 (A78AE?) CPU cores
12 SMs, 1,536 CUDA cores (6x the Switch 1's 256)
DLSS and RTX (ray tracing) hardware support
Graphics API known as "NVN2" (the Switch 1 API is "NVN")

This is from Nvidia's own Linux kernel data leak.

This is from Famiboards: a rough outline of what the Tegra T239 would theoretically output at different GPU clock speeds (thanks to Z0m3le).

Portable
400 MHz -> 1.228 TFLOPS
500 MHz -> 1.536 TFLOPS
600 MHz -> 1.843 TFLOPS

Docked
768 MHz (Switch 1 docked clock) -> 2.36 TFLOPS
1 GHz -> 3.07 TFLOPS
1.152 GHz -> 3.539 TFLOPS
1.3 GHz -> 3.993 TFLOPS
1.5 GHz -> 4.6 TFLOPS
1.8 GHz -> 5.53 TFLOPS
2 GHz -> 6.14 TFLOPS

The current Tegra X1's GPU tops out at 1 GHz ... but because the 20nm version of the chip was inefficient and ran a bit hot back in 2015, Nintendo downclocked it to 768 MHz docked. The Tegra X1 in Nvidia's Shield micro-console still runs at the full 1000 MHz. When the Tegra X1 was die-shrunk to 16nm, the maximum GPU clock increased to 1.27 GHz for the Mariko Switch/Lite models. The Steam Deck, by comparison, clocks its GPU at up to 1600 MHz (1.6 GHz).

So if this Matrix Awakens report is true, then I'm guessing the Tegra T239 is running at 1 GHz docked. That was the Tegra X1's stock top clock back in the OG Switch era, and it's still lower than what the more modern Mariko Tegra X1 in the Switch OLED/Lite can do. So it's not an absurdly high clock: it's a lot lower than the Steam Deck's GPU clock, and a good deal lower than the Mariko 16nm Tegra X1's top end too.

But at 1 GHz docked you're getting about 3 teraflops ... and 3 teraflops is knocking on the door of the Xbox Series S (4 teraflops). When you add DLSS on top of that, could a chip like that run the Matrix Awakens demo at a quality that's worth demoing? With a native res of 600p or so, DLSSed to 1440p, let's say? I would say yeah. That actually sounds right on.

Z0m3le? That guy is a clown. He was the one who hyped FP16 to the heavens and said the Switch with FP16 would basically match the Xbox or exceed it in some cases, and everyone believed him, lol. Right when I saw his name I stopped reading. About the Street Fighter 2 gif: that just looks like slightly different colour use, and comparing 30 fps to 60 fps is a much bigger jump, night and day actually if you run them side by side.

 

Last edited by zeldaring - on 10 September 2023

Chrkeller said:
Cobretti2 said:

I watched the video and the results are comparable; not identical, but good enough.

Look at it this way: in one scene it guessed and added too many water reflections, but how would a general gamer know it did that unless they had the PS5 version and the Switch 2 version of the game side by side, stopped moving, and compared pixel by pixel?

I'll use a TV show example. Many people are blown away by how great the Amazon 1080p cut of Buffy looks. Those who know the original source is 4:3, and that a 16:9 image was basically cropped out of it and scaled up, understand there is information missing. But the general consumer just remembers watching Buffy on blurry VHS; now it's sharp and crisp, and they don't realise it's zoomed in with detail cropped out. To make things worse, the colour corrections done in post-processing weren't reintroduced, so night scenes look almost like daytime, but again the general consumer is none the wiser.

Back to your video.

In reality there is nothing wrong with the two FSR images unless you are comparing them to the native source. OK, so in one scene the dragon on the wooden box had less detail? Big deal; most gamers don't stand around admiring the detail on a wooden box.

In the parts where he walked around they looked comparable in motion, and I'd be happy playing any of those three versions.

Now, the reason PC gamers say it doesn't look great is something called consumer bias.

Think of it like chocolate: buy a $100 block vs a $5 block. If you know which costs what, your brain will automatically bias you into thinking the $100 one tastes better. The same applies to graphics cards. In my head I just spent $1,500 on the latest and greatest, so I'm like "oh yeah, this is much better than upscaling on my old graphics card." In reality we know we have reached diminishing returns, so our brain tricks us into thinking it's much better because of the money we spent, to justify the purchase. Is the new card better? Absolutely. But to say the old one using FSR isn't comparable is just fooling yourself, as that video demonstrated they all look about the same, especially in motion. The image may not be an accurate representation of the native image, but it has managed to construct something a general gamer would be happy with, as they're none the wiser.

To each their own, but I think PC gamers not only figured out resolution's diminishing returns, they figured it out way before console gamers did. Think about it: the PS5 and Xbox are pushing for 4K, while PC gamers have long settled at 1440p and are pushing for 120 fps, because the jump from 1440p to 4K is minimal while 30 fps to 120 fps is the Grand Canyon. PC gamers are also chasing heavier effects, as seen with ray tracing.

Though perhaps we are saying the same thing. Resolution has hit a ceiling; the future is high fps, RT (when done properly), etc.

When I say diminishing returns, I'm talking about on a small screen: TOTK looks nice on a small screen, but on a TV not so much. At the same time, Dead Space and Hogwarts look like shit on the Steam Deck, but for the most part the Switch 2 won't see games as ugly as Witcher 3 or Doom, I hope.

Last edited by zeldaring - on 10 September 2023