
Forums - Gaming Discussion - How much do you care about the graphical leap between consoles at this point?

Soundwave said:

The current Switch, if DLSS 2.0 was possible in 2015, would be basically very close to the Xbox One in power, straight up. 

The current Switch does not and cannot have DLSS, as it lacks the Tensor cores that nVidia introduced with Volta.

Soundwave said:

394 GFLOPS to 1.2 TFLOPS, yes, but then you have to factor in the Tegra X1 only needing to render 1/4-1/15th the pixel resolution, and that gap shrinks very quickly. AMD also routinely claims a TFLOP number and is routinely outperformed by Nvidia GPUs with lower TF figures, so take AMD's teraflop claims with a grain of salt. 

Flops aren't everything. You are conflating multiple different aspects of hardware here, and it is highly erroneous.

AMD's GPUs actually beat nVidia in Tflop benchmarks. A Teraflop for AMD is the same as for nVidia: single-precision floating point, aka 32-bit FP.
The numbers you see in comparisons, however, are "theoretical". But without question, AMD beats nVidia on this front in the real world and has done so for years, especially in asynchronous compute workloads.
If you were doing any kind of intensive FP32 workload, AMD was the GPU to have, because its theoretical and real-world Teraflop numbers absolutely beat nVidia's, which is why AMD's GPUs were the ones to have during the crypto-craze era... Crypto mining is entirely compute bound, aka it uses those Teraflops and not much else.

But herein lies the issue... Games aren't all about ALU throughput, which AMD focuses on far more heavily than nVidia, who typically gives more focus to pixel/texture fillrate and polygon throughput... which are also needed for games... Ergo, nVidia is able to decisively beat AMD in gaming, despite having lower real-world and theoretical Tflop numbers.

So no... Don't take AMD's "Teraflop claims" with a grain of salt... The issue lies with individuals who propagate those numbers without any real understanding of what they mean or what they actually do for gaming.
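For what it's worth, the "theoretical" figure both vendors quote is computed the same way: FP32 shader count x clock x 2 ops per fused multiply-add. A quick sketch using the commonly cited Tegra X1 (docked Switch) and Xbox One figures; treat the specs as illustrative, not official measurements:

```python
def theoretical_tflops(fp32_shaders, clock_ghz):
    # 2 floating-point ops per shader per cycle (one fused multiply-add)
    return fp32_shaders * clock_ghz * 2 / 1000.0

# Commonly cited specs (illustrative)
switch_docked = theoretical_tflops(256, 0.768)  # Tegra X1 @ 768 MHz
xbox_one = theoretical_tflops(768, 0.853)       # Xbox One GPU @ 853 MHz

print(round(switch_docked, 3))  # 0.393 (the ~394 GFLOPS figure above)
print(round(xbox_one, 2))       # 1.31
```

The formula says nothing about fillrate, geometry throughput, or how well a game keeps those shaders fed, which is the point being made above.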

Soundwave said:

Even shit like the SSD, whoopity doo; smartphones already have NVMe drives, which are the same thing. Apple's iPhones have had this for 5 years already. UFS 3.1, which is going to be common in Android phones, is 3 GB/sec; by the time Switch 2 is out, UFS 4.0 will likely be available, which would be even faster than that. 

Not all smartphones use NVMe. Apple has used it since the iPhone 6, AFAIK.

Android typically relies on other technologies.



--::{PC Gaming Master Race}::--

Pemalite said:
Soundwave said:

The current Switch, if DLSS 2.0 was possible in 2015, would be basically very close to the Xbox One in power, straight up. 

The current Switch does not and cannot have DLSS, as it lacks the Tensor cores that nVidia introduced with Volta.

Soundwave said:

394 GFLOPS to 1.2 TFLOPS, yes, but then you have to factor in the Tegra X1 only needing to render 1/4-1/15th the pixel resolution, and that gap shrinks very quickly. AMD also routinely claims a TFLOP number and is routinely outperformed by Nvidia GPUs with lower TF figures, so take AMD's teraflop claims with a grain of salt. 

Flops aren't everything. You are conflating multiple different aspects of hardware here, and it is highly erroneous.

AMD's GPUs actually beat nVidia in Tflop benchmarks. A Teraflop for AMD is the same as for nVidia: single-precision floating point, aka 32-bit FP.
The numbers you see in comparisons, however, are "theoretical". But without question, AMD beats nVidia on this front in the real world and has done so for years, especially in asynchronous compute workloads.
If you were doing any kind of intensive FP32 workload, AMD was the GPU to have, because its theoretical and real-world Teraflop numbers absolutely beat nVidia's, which is why AMD's GPUs were the ones to have during the crypto-craze era... Crypto mining is entirely compute bound, aka it uses those Teraflops and not much else.

But herein lies the issue... Games aren't all about ALU throughput, which AMD focuses on far more heavily than nVidia, who typically gives more focus to pixel/texture fillrate and polygon throughput... which are also needed for games... Ergo, nVidia is able to decisively beat AMD in gaming, despite having lower real-world and theoretical Tflop numbers.

So no... Don't take AMD's "Teraflop claims" with a grain of salt... The issue lies with individuals who propagate those numbers without any real understanding of what they mean or what they actually do for gaming.

Soundwave said:

Even shit like the SSD, whoopity doo; smartphones already have NVMe drives, which are the same thing. Apple's iPhones have had this for 5 years already. UFS 3.1, which is going to be common in Android phones, is 3 GB/sec; by the time Switch 2 is out, UFS 4.0 will likely be available, which would be even faster than that. 

Not all smartphones use NVMe. Apple has used it since the iPhone 6, AFAIK.

Android typically relies on other technologies.

Yes, of course the current Switch doesn't have DLSS; my point is *if* that technology had been available back then and they could have implemented it, it would have increased the performance of the Switch significantly. They could render as low as 512x288 undocked (N64-level resolution) and 540p docked and achieve 720p undocked + full 1080p docked, no problem. Games like Zelda BOTW would run even above 1080p; at 900p, DLSS 2.0 could reconstruct that to maybe even a full 4K resolution, but certainly 1440p or 1800p would be doable. 
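The pixel arithmetic in that hypothetical is easy to check; worth noting that 512x288 to 720p is a 6.25x area factor, beyond the 4x of DLSS 2.0's standard performance mode (resolutions below are the ones named in the post):

```python
def pixels(w, h):
    return w * h

# Hypothetical internal render vs. output targets from the post above
undocked_in, undocked_out = pixels(512, 288), pixels(1280, 720)
docked_in, docked_out = pixels(960, 540), pixels(1920, 1080)

print(undocked_in, undocked_out)   # 147456 921600
print(docked_in, docked_out)       # 518400 2073600
print(undocked_out / undocked_in)  # 6.25  (upscale factor undocked)
print(docked_out / docked_in)      # 4.0   (upscale factor docked)
```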

AMD's GPUs have for years struggled to match the performance of Nvidia GPUs that are 1-2 years older, and they often run hotter on top of that. That's all I meant: when people see that a 2070 Super is "only" 9 TFLOPS, it may well perform equal to a PS5 (10 TF) or even XSX (12 TF)... it wouldn't surprise me. The RDNA2 architecture they have coming now is basically what Nvidia had almost two years ago with Turing.

Apple has used NVMe for about 4-5 years now. Android makers like Samsung favor UFS; UFS 3.1 can get up to 3 GB/sec, which is basically as fast as an NVMe drive. By the time Switch 2 is out there will probably be UFS 4.0 available if Nintendo wants it, and that will probably be even faster. 



Soundwave said:
Pemalite said:

The current Switch does not and cannot have DLSS, as it lacks the Tensor cores that nVidia introduced with Volta.

Flops aren't everything. You are conflating multiple different aspects of hardware here, and it is highly erroneous.

AMD's GPUs actually beat nVidia in Tflop benchmarks. A Teraflop for AMD is the same as for nVidia: single-precision floating point, aka 32-bit FP.
The numbers you see in comparisons, however, are "theoretical". But without question, AMD beats nVidia on this front in the real world and has done so for years, especially in asynchronous compute workloads.
If you were doing any kind of intensive FP32 workload, AMD was the GPU to have, because its theoretical and real-world Teraflop numbers absolutely beat nVidia's, which is why AMD's GPUs were the ones to have during the crypto-craze era... Crypto mining is entirely compute bound, aka it uses those Teraflops and not much else.

But herein lies the issue... Games aren't all about ALU throughput, which AMD focuses on far more heavily than nVidia, who typically gives more focus to pixel/texture fillrate and polygon throughput... which are also needed for games... Ergo, nVidia is able to decisively beat AMD in gaming, despite having lower real-world and theoretical Tflop numbers.

So no... Don't take AMD's "Teraflop claims" with a grain of salt... The issue lies with individuals who propagate those numbers without any real understanding of what they mean or what they actually do for gaming.

Not all smartphones use NVMe. Apple has used it since the iPhone 6, AFAIK.

Android typically relies on other technologies.

Yes, of course the current Switch doesn't have DLSS; my point is *if* that technology had been available back then and they could have implemented it, it would have increased the performance of the Switch significantly. They could render as low as 512x288 undocked (N64-level resolution) and 540p docked and achieve 720p undocked + full 1080p docked, no problem. Games like Zelda BOTW would run even above 1080p; at 900p, DLSS 2.0 could reconstruct that to maybe even a full 4K resolution, but certainly 1440p or 1800p would be doable. 

AMD's GPUs have for years struggled to match the performance of Nvidia GPUs that are 1-2 years older, and they often run hotter on top of that. That's all I meant: when people see that a 2070 Super is "only" 9 TFLOPS, it may well perform equal to a PS5 (10 TF) or even XSX (12 TF)... it wouldn't surprise me. The RDNA2 architecture they have coming now is basically what Nvidia had almost two years ago with Turing.

Apple has used NVMe for about 4-5 years now. Android makers like Samsung favor UFS; UFS 3.1 can get up to 3 GB/sec, which is basically as fast as an NVMe drive. By the time Switch 2 is out there will probably be UFS 4.0 available if Nintendo wants it, and that will probably be even faster. 

Do you have any trustworthy source that "the RDNA2 architecture they have coming now is basically what Nvidia had almost two years ago with Turing"?



duduspace11 "Well, since we are estimating costs, Pokemon Red/Blue did cost Nintendo about $50m to make back in 1996"

http://gamrconnect.vgchartz.com/post.php?id=8808363

Mr Puggsly: "Hehe, I said good profit. You said big profit. Frankly, not losing money is what I meant by good. Don't get hung up on semantics"

http://gamrconnect.vgchartz.com/post.php?id=9008994

Azzanation: "PS5 wouldn't sold out at launch without scalpers."

Soundwave said:

Yes, of course the current Switch doesn't have DLSS; my point is *if* that technology had been available back then and they could have implemented it, it would have increased the performance of the Switch significantly. They could render as low as 512x288 undocked (N64-level resolution) and 540p docked and achieve 720p undocked + full 1080p docked, no problem. Games like Zelda BOTW would run even above 1080p; at 900p, DLSS 2.0 could reconstruct that to maybe even a full 4K resolution, but certainly 1440p or 1800p would be doable. 

Switch already has games that drop to 640x360, like Wolfenstein and Doom.

Witcher 3 drops to 810x456.

So we are already dealing with really, really low gaming resolutions anyway.

DLSS, not sure if you have used it, isn't some magical silver bullet. It helps, but it's not a cure-all, and I *really* dislike how it over-sharpens everything, which brings with it some visual artifacts. It's just one of many tools at a developer's disposal to help bolster visual quality.

Volta, which has Tensor cores, came out in 2017, the same year as the Switch, so the technology did exist when the console debuted, but Nintendo didn't opt for the most powerful SoC anyway, or even go with a semi-custom design.

The issue is moot. The Switch doesn't have DLSS and probably never will... And there are no guarantees that Nintendo will use DLSS with Switch 2 either; Nintendo does what Nintendo does.

Soundwave said:

AMD's GPUs have for years struggled to match the performance of Nvidia GPUs that are 1-2 years older, and they often run hotter on top of that. That's all I meant: when people see that a 2070 Super is "only" 9 TFLOPS, it may well perform equal to a PS5 (10 TF) or even XSX (12 TF)... it wouldn't surprise me. The RDNA2 architecture they have coming now is basically what Nvidia had almost two years ago with Turing.

I actually agree. AMD is behind nVidia technologically by a couple of years.
It was just your methodology for getting to that conclusion that I didn't agree with.

Soundwave said:

Apple has used NVMe for about 4-5 years now. Android makers like Samsung favor UFS; UFS 3.1 can get up to 3 GB/sec, which is basically as fast as an NVMe drive. By the time Switch 2 is out there will probably be UFS 4.0 available if Nintendo wants it, and that will probably be even faster. 

UFS is based upon the SCSI stack; NVMe is optimized for NAND.
NVMe should in theory offer efficiency advantages over UFS.

I would assume Nintendo will continue upgrading their current NAND progressively, like they have done from Wii > Wii U > Switch. Just a low-cost solution.
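In practice, the throughput difference between these storage classes shows up as load time for a given amount of data. A trivial sketch, with illustrative ballpark throughputs (the figures are assumptions, and real loads add latency, filesystem overhead, and decompression):

```python
def load_seconds(size_gb, throughput_gb_per_s):
    # Sequential-read time only; ignores latency and decompression
    return size_gb / throughput_gb_per_s

# Illustrative throughputs, not measured figures
for name, speed in [("eMMC (Switch-class)", 0.3),
                    ("UFS 3.1", 3.0),
                    ("NVMe (PS5-class raw)", 5.5)]:
    print(f"{name}: {load_seconds(4.0, speed):.2f} s for 4 GB")
```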




ironmanDX said:

Not as substantial as OG Xbox to 360 or SNES to N64, but substantial nonetheless.

I don't know if that jump from SNES to N64 will ever really be matched, unless we hatch some amazing VR tech overnight that simultaneously reduces the barrier of entry to a modern equivalent of $149, makes a large leap in visuals over current-gen games, and makes the setup far less cumbersome.



Retro Tech Select - My Youtube channel. Covers throwback consumer electronics with a focus on "vid'ya games."

Latest Video: Top 12: Best Games on the N64 - Special Features, Episode 7

Pemalite said:
Soundwave said:

Yes, of course the current Switch doesn't have DLSS; my point is *if* that technology had been available back then and they could have implemented it, it would have increased the performance of the Switch significantly. They could render as low as 512x288 undocked (N64-level resolution) and 540p docked and achieve 720p undocked + full 1080p docked, no problem. Games like Zelda BOTW would run even above 1080p; at 900p, DLSS 2.0 could reconstruct that to maybe even a full 4K resolution, but certainly 1440p or 1800p would be doable. 

Switch already has games that drop to 640x360, like Wolfenstein and Doom.

Witcher 3 drops to 810x456.

So we are already dealing with really, really low gaming resolutions anyway.

DLSS, not sure if you have used it, isn't some magical silver bullet. It helps, but it's not a cure-all, and I *really* dislike how it over-sharpens everything, which brings with it some visual artifacts. It's just one of many tools at a developer's disposal to help bolster visual quality.

Volta, which has Tensor cores, came out in 2017, the same year as the Switch, so the technology did exist when the console debuted, but Nintendo didn't opt for the most powerful SoC anyway, or even go with a semi-custom design.

The issue is moot. The Switch doesn't have DLSS and probably never will... And there are no guarantees that Nintendo will use DLSS with Switch 2 either; Nintendo does what Nintendo does.

Soundwave said:

AMD's GPUs have for years struggled to match the performance of Nvidia GPUs that are 1-2 years older, and they often run hotter on top of that. That's all I meant: when people see that a 2070 Super is "only" 9 TFLOPS, it may well perform equal to a PS5 (10 TF) or even XSX (12 TF)... it wouldn't surprise me. The RDNA2 architecture they have coming now is basically what Nvidia had almost two years ago with Turing.

I actually agree. AMD is behind nVidia technologically by a couple of years.
It was just your methodology for getting to that conclusion that I didn't agree with.

Soundwave said:

Apple has used NVMe for about 4-5 years now. Android makers like Samsung favor UFS; UFS 3.1 can get up to 3 GB/sec, which is basically as fast as an NVMe drive. By the time Switch 2 is out there will probably be UFS 4.0 available if Nintendo wants it, and that will probably be even faster. 

UFS is based upon the SCSI stack; NVMe is optimized for NAND.
NVMe should in theory offer efficiency advantages over UFS.

I would assume Nintendo will continue upgrading their current NAND progressively, like they have done from Wii > Wii U > Switch. Just a low-cost solution.

Let's move the DLSS 2.0 point to what a hypothetical PS5 to Switch 2 comparison could very well be, because the advantages become far more apparent there. Let's say a game like Witcher 4 is on PS5. It's a pretty beefy game, let's say, and can't run at full 4K even on a PS5; it runs at 1800p. OK. 

If they really wanted to, they could then run the Switch 2 version using DLSS 2.0 at:

PS5 1800p = 5,760,000 pixels

Switch 2 undocked 1080p "DLSSed" from 640x360 = 230,400 pixels

Switch 2 docked 1440p "DLSSed" from 1024x576 = 589,824 pixels

This is a freaking monstrous disparity: the PS5 has to render over 20x the resolution of the Switch 2 undocked, and 9x docked. Do you think the PS4 would be able to run the Switch version of Witcher 3 at 20x the undocked resolution? Or even docked at 9x the resolution? Not a chance. 
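Those ratios are straightforward to verify (taking 1800p as 3200x1800, which matches the pixel count given above):

```python
ps5 = 3200 * 1800               # 5,760,000 pixels (1800p)
switch2_undocked = 640 * 360    # 230,400 pixels
switch2_docked = 1024 * 576     # 589,824 pixels

print(round(ps5 / switch2_undocked, 1))  # 25.0 (the "over 20x" figure)
print(round(ps5 / switch2_docked, 2))    # 9.77 (the "9x docked" figure)
```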

Because the Switch 2 has such a resolution overhead advantage here, things like increasing the frame rate or bumping effects settings from low to medium become much more feasible. Yes, you can quibble that the image quality isn't quite 100% (too sharp sometimes, maybe), but it's still going to be better image quality than the blur-fest of current Switch-PS4 ports, with better graphics settings possible. How much can you really complain? You can't exactly put a PS5 in your coat pocket. 

Regarding NVMe or UFS 3.1... I mean, why doesn't Nintendo just source the same NVMe part as Apple? It must be mass-produced at a huge level these days, and by 2023 Apple will have been using those drives for 8+ years. If not, UFS 3.1 is still extremely fast; 3 GB/sec is not a joke, and by 2023 that will be common and widespread across hundreds of millions of Android devices, and likely even faster UFS 4.0 will be available by then. Nintendo will have options on this issue.  



Soundwave said:
Pemalite said:

Switch already has games that drop to 640x360, like Wolfenstein and Doom.

Witcher 3 drops to 810x456.

So we are already dealing with really, really low gaming resolutions anyway.

DLSS, not sure if you have used it, isn't some magical silver bullet. It helps, but it's not a cure-all, and I *really* dislike how it over-sharpens everything, which brings with it some visual artifacts. It's just one of many tools at a developer's disposal to help bolster visual quality.

Volta, which has Tensor cores, came out in 2017, the same year as the Switch, so the technology did exist when the console debuted, but Nintendo didn't opt for the most powerful SoC anyway, or even go with a semi-custom design.

The issue is moot. The Switch doesn't have DLSS and probably never will... And there are no guarantees that Nintendo will use DLSS with Switch 2 either; Nintendo does what Nintendo does.

I actually agree. AMD is behind nVidia technologically by a couple of years.
It was just your methodology for getting to that conclusion that I didn't agree with.

UFS is based upon the SCSI stack; NVMe is optimized for NAND.
NVMe should in theory offer efficiency advantages over UFS.

I would assume Nintendo will continue upgrading their current NAND progressively, like they have done from Wii > Wii U > Switch. Just a low-cost solution.

Let's move the DLSS 2.0 point to what a hypothetical PS5 to Switch 2 comparison could very well be, because the advantages become far more apparent there. Let's say a game like Witcher 4 is on PS5. It's a pretty beefy game, let's say, and can't run at full 4K even on a PS5; it runs at 1800p. OK. 

If they really wanted to, they could then run the Switch 2 version using DLSS 2.0 at:

PS5 1800p = 5,760,000 pixels

Switch 2 undocked 1080p "DLSSed" from 640x360 = 230,400 pixels

Switch 2 docked 1440p "DLSSed" from 1024x576 = 589,824 pixels

This is a freaking monstrous disparity: the PS5 has to render over 20x the resolution of the Switch 2 undocked, and 9x docked. Do you think the PS4 would be able to run the Switch version of Witcher 3 at 20x the undocked resolution? Or even docked at 9x the resolution? Not a chance. 

Because the Switch 2 has such a resolution overhead advantage here, things like increasing the frame rate or bumping effects settings from low to medium become much more feasible. Yes, you can quibble that the image quality isn't quite 100% (too sharp sometimes, maybe), but it's still going to be better image quality than the blur-fest of current Switch-PS4 ports, with better graphics settings possible. How much can you really complain? You can't exactly put a PS5 in your coat pocket. 

Regarding NVMe or UFS 3.1... I mean, why doesn't Nintendo just source the same NVMe part as Apple? It must be mass-produced at a huge level these days, and by 2023 Apple will have been using those drives for 8+ years. If not, UFS 3.1 is still extremely fast; 3 GB/sec is not a joke, and by 2023 that will be common and widespread across hundreds of millions of Android devices, and likely even faster UFS 4.0 will be available by then. Nintendo will have options on this issue.  

I'm still waiting for you to explain: if DLSS 2.0 is such a godsend, how will they still convince people to buy 1000 USD cards when their 200 USD variant with DLSS 2.0 basically erases almost all the differences, by your own speculation?




DonFerrari said:
Soundwave said:

Let's move the DLSS 2.0 point to what a hypothetical PS5 to Switch 2 comparison could very well be, because the advantages become far more apparent there. Let's say a game like Witcher 4 is on PS5. It's a pretty beefy game, let's say, and can't run at full 4K even on a PS5; it runs at 1800p. OK. 

If they really wanted to, they could then run the Switch 2 version using DLSS 2.0 at:

PS5 1800p = 5,760,000 pixels

Switch 2 undocked 1080p "DLSSed" from 640x360 = 230,400 pixels

Switch 2 docked 1440p "DLSSed" from 1024x576 = 589,824 pixels

This is a freaking monstrous disparity: the PS5 has to render over 20x the resolution of the Switch 2 undocked, and 9x docked. Do you think the PS4 would be able to run the Switch version of Witcher 3 at 20x the undocked resolution? Or even docked at 9x the resolution? Not a chance. 

Because the Switch 2 has such a resolution overhead advantage here, things like increasing the frame rate or bumping effects settings from low to medium become much more feasible. Yes, you can quibble that the image quality isn't quite 100% (too sharp sometimes, maybe), but it's still going to be better image quality than the blur-fest of current Switch-PS4 ports, with better graphics settings possible. How much can you really complain? You can't exactly put a PS5 in your coat pocket. 

Regarding NVMe or UFS 3.1... I mean, why doesn't Nintendo just source the same NVMe part as Apple? It must be mass-produced at a huge level these days, and by 2023 Apple will have been using those drives for 8+ years. If not, UFS 3.1 is still extremely fast; 3 GB/sec is not a joke, and by 2023 that will be common and widespread across hundreds of millions of Android devices, and likely even faster UFS 4.0 will be available by then. Nintendo will have options on this issue.  

I'm still waiting for you to explain: if DLSS 2.0 is such a godsend, how will they still convince people to buy 1000 USD cards when their 200 USD variant with DLSS 2.0 basically erases almost all the differences, by your own speculation?

Well, DLSS 2.0 is only supported by RTX-range cards, so you can't use it on a $200 card. Nvidia will probably then move up to DLSS 3.0 and simply state that you must now have an Ampere-based card (3060 or better), and so on and so on. So they can cover themselves that way. 

But for a closed software/hardware ecosystem like the Switch, Nintendo can simply build it into every development kit so that it's used for basically every game. On a system like the Switch there's no benefit to not using it most of the time. 

As with most AI algorithms, they get better over time. Nvidia could quite possibly give Nintendo a custom solution that can reconstruct a high-resolution image from even lower resolutions than current DLSS 2.0 does; it could get ridiculously low. I doubt they put too much thought into using super-low resolutions like 512x288 in the current DLSS implementation, but it works anyway. 

Last edited by Soundwave - on 24 May 2020

Soundwave said:
DonFerrari said:

I'm still waiting for you to explain: if DLSS 2.0 is such a godsend, how will they still convince people to buy 1000 USD cards when their 200 USD variant with DLSS 2.0 basically erases almost all the differences, by your own speculation?

Well, DLSS 2.0 is only supported by RTX-range cards, so you can't use it on a $200 card. Nvidia will probably then move up to DLSS 3.0 and simply state that you must now have an Ampere-based card (3060 or better), and so on and so on. So they can cover themselves that way. 

But for a closed software/hardware ecosystem like the Switch, Nintendo can simply build it into every development kit so that it's used for basically every game. On a system like the Switch there's no benefit to not using it most of the time. 

Guess you missed the point. If DLSS could cover very big gaps in power, then small ones wouldn't even exist, so even the lowest-grade card with DLSS would basically remove all reason for any card above it.

And you also ignore that there are already other reconstruction techniques in use; the one on PS4 was already making it hard to notice when games rendered at under 1440p and output 4K.

And with the UE5 demo, DF couldn't really see the difference from 1440p to 4K.

Also, you are ignoring that pixel count is just a very small part of the IQ of a game, and also of the graphics budget.




Pretty much all or nothing. Incremental increases don't interest me in any way. That's why the Xbox One X was kind of a joke to me... the main appeal was slightly sharper textures that I found more or less imperceptible. All the talk of "MAH TERRYFLOPS" and being "the most powerful console in the history of mankind" meant nothing if it wasn't going to be obvious. That was literally its main draw, and it was twice the price of the base model. No thanks. Graphics have never been a huge thing for me to begin with; if a game is good, the graphics don't really play into my enjoyment of it. I honestly would not have enjoyed Astral Chain, built from the ground up on Switch, any more whatsoever had the graphics been at a higher level. I value style over graphical fidelity.

I think the next generation has the opportunity to impress with this ray tracing business... but we'll see what happens. Though, if the next Switch iteration only manages to reach PS4 Pro levels, I can't see myself being all that bothered. It mattered in the PS2/GameCube/Xbox generation, when handhelds were still at Super Nintendo levels in terms of power, but diminishing returns, thankfully, have more or less made that obsolete.



*My signature from 2011 which I'm too lazy to change*

Currently awaiting the arrivals of:
Kid Icarus Uprising
Resident Evil: Revelations
Tekken 3D: Prime Edition
Metal Gear Solid: Snake Eater 3D
Beyond the Labyrinth
Heroes of Ruin
Luigi's Mansion 2