Also, the X2 is not really that much better than the X1. The biggest improvement on the X2 is memory bandwidth, which has been doubled.
It's also using Pascal as a base, which can scale to much higher clock rates than Maxwell at the same energy cost.
Delta Colour Compression was also improved, which increases effective bandwidth further.
In addition, X2 achieves a maximum of 750 GFLOPS in single precision mode (the X1 could do a maximum of 512).
Who cares about FLOPS? They're only a relevant comparison if everything else about the two GPUs is identical.
Also: FLOPS = clock rate × 2 instructions per clock × number of shader pipelines.
The X1 was not limited to a max of 512 GFLOPS, and the X2 is not limited to a max of 750 GFLOPS.
Now Nintendo, most likely in a bid to manage the system's thermals, downclocked the GPU, so the Switch's actual performance in docked mode is about 1.3× lower (about 393 GFLOPS).
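For reference, the peak-FP32 arithmetic above can be sketched in a few lines. The core counts and clocks below are the commonly quoted figures (256 CUDA cores on both chips, ~1000 MHz for the X1, ~1465 MHz for the X2, 768 MHz docked), not official maximums:

```python
# Sketch of the peak-FP32 formula: FLOPS = clock × 2 ops/clock × shader cores.
# (An FMA counts as 2 operations per clock per core.)

def peak_gflops(clock_mhz, cuda_cores, ops_per_clock=2):
    """Peak single-precision throughput in GFLOPS."""
    return clock_mhz * 1e6 * ops_per_clock * cuda_cores / 1e9

print(peak_gflops(1000, 256))  # Tegra X1 (Maxwell): 512.0
print(peak_gflops(1465, 256))  # Tegra X2 (Pascal): ~750
print(peak_gflops(768, 256))   # Switch docked clock: ~393
```

That last line is where the ~393 GFLOPS docked figure comes from.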
And yet, thanks to the higher clocks at the same TDP, the Tegra X2 has a sizable advantage over the Tegra X1.
The higher clocks don't just increase single-precision floating-point performance, either.
Texture and pixel fillrates increase because the ROPs and TMUs operate faster, geometry performance rises as well, and more.
If we reduce the speed of the X2's GPU by the same amount, then we get about 576 GFLOPS.
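Written out, that proportional-downclock arithmetic is just this (purely illustrative; it assumes performance tracks clock speed linearly):

```python
# Proportional scaling from the figures above: 512 -> 393 GFLOPS is a
# downclock factor of ~0.768; apply the same factor to the X2's 750 GFLOPS.
# Assumes performance scales linearly with clock, which is a simplification.

x1_peak = 512.0        # X1 peak GFLOPS
switch_docked = 393.0  # Switch docked GFLOPS
scale = switch_docked / x1_peak  # ~0.768

print(round(750.0 * scale))  # ~576 GFLOPS
```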
Doesn't work like that. Every CPU and GPU architecture has a clockspeed/power consumption "sweet spot".
Once you push past that sweet spot then the efficiency of the chip gets pushed out of whack.
Vega 56 and 64, for instance, are actually pretty efficient GPUs once you underclock and undervolt them a little; AMD pushed those chips hard with little concern for power consumption.
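The sweet-spot point can be illustrated with the standard dynamic-power relation P ∝ C·V²·f: pushing the clock higher also requires raising voltage, so power grows much faster than performance. The voltage/frequency points below are made up for illustration, not real Vega data:

```python
# Toy model of why clocking past the "sweet spot" costs disproportionate power.
# Dynamic power scales roughly as P ∝ C · V² · f, and higher clocks need
# higher voltage. The DVFS points below are hypothetical, for illustration only.

points = [  # (clock in GHz, voltage in V) -- made-up curve
    (1.0, 0.80),
    (1.2, 0.90),
    (1.4, 1.05),
    (1.6, 1.25),
]

base_f, base_v = points[0]
for f, v in points:
    rel_power = (v / base_v) ** 2 * (f / base_f)  # power relative to first point
    rel_perf = f / base_f                         # perf relative to first point
    print(f"{f:.1f} GHz: perf x{rel_perf:.2f}, power x{rel_power:.2f}")
```

In this toy curve, the last step buys 60% more performance for roughly 4× the power, which is the efficiency going "out of whack".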
While that should provide some boost to certain games, it would still not be nearly enough for computationally intensive games that struggle to run even on the base Xbox One and/or PS4 to come over.
It would make a massive difference, especially for a GPU-heavy game like Doom; it's already an extremely compromised experience on Switch.
In addition, the X2's GPU microarchitecture is based on Pascal, which is more of a refinement of the Maxwell-based GPU found in the X1. The jump from an X1 to an X2 is just slightly more than the jump from the 3DS to the New 3DS.
Every GPU architecture is based upon a preceding design, and the same goes for CPUs.
Intel's 8th-gen Core i7 still has roots in the Pentium Pro from 1995, the start of Intel's P6 architecture, which saw constant refinement. Intel took a little detour from that with NetBurst, but went back to P6 with Banias and Dothan, which were used as the basis for the Core series.
There is more to the Tegra X2 than one would assume, some of which I have listed earlier in my reply. The move to 16nm FinFET is a big boon versus the 20nm planar fabrication process.
The reality is that the Switch's emphasis on portability meant Nintendo had to go with hardware found in mobile tablets (these run at around 4-10 watts), and there were very few options available at the time the Switch was being internally developed, or even now, that would allow for the creation of a $300 handheld on par with the PS4 and Xbox One.
The reality is, the Tegra X2 is a superior chip to the Tegra X1. No contest.
It doesn't need to be on par with the Xbox One or Playstation 4, just offer "good enough" performance.
I don't think the Switch can handle modern gaming up to Xbox One and PS4 standards, where those two have more in common with each other. Online is substandard on the Switch and its horsepower is stuck in last gen.
Switch's performance is a big step up from last gen; it's just that games have gotten so much more demanding on hardware.
RAM was one of last gen's big limiters; the Switch has a massive advantage there.
The Switch sits in between the Xbox 360 and Xbox One in terms of performance and capability.
The online, however, is certainly shit; hopefully Nintendo pulls its finger out and sorts it out properly.