You have gone totally over the top with the claimed efficiency improvements, and we are already getting a good idea of the Switch's performance level from what we have seen. There are compromises made with mobile chipsets.
If you think Delta Colour Compression, PolyMorph Engines, tile-based rasterization and packed math are going "over the top" then you are incorrect... And regardless of what I say, it will likely not change your mind anyway.
However... the Wii U uses an AMD VLIW architecture with a 320:16:8 core layout. The closest GPU to that is the Radeon HD 4650, but even that has twice the texture mapping units.
Now we know that Maxwell is significantly more efficient than Graphics Core Next 1.0.
How much more efficient is Graphics Core Next compared to AMD's older VLIW architectures?
Well, take the Radeon HD 5870: a 1600:80:32 core layout (shaders, texture mapping units, ROPs), 153.6GB/s of bandwidth and 2.720 Teraflops of FP32 performance.
Then take the Radeon HD 7850: a 1024:64:32 core layout, 153.6GB/s of bandwidth and 1.761 Teraflops of FP32 performance.
The 7850 should lose every time, right? Wrong. It is superior to the Radeon 5870, despite having almost a Teraflop less FP32 performance.
THAT is what efficiency does.
But don't take my word for it: http://www.anandtech.com/bench/product/1062?vs=1076
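Those FP32 figures fall straight out of the core layouts, since peak FP32 = shaders × 2 FLOPs per clock (one fused multiply-add) × clock speed. A minimal sketch, assuming the cards' reference clocks of 850MHz (HD 5870) and 860MHz (HD 7850):

```python
# Peak theoretical FP32 throughput in TFLOPs:
# shaders * 2 FLOPs per clock (one FMA counts as two ops) * clock speed.
def fp32_tflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1_000_000

print(fp32_tflops(1600, 850))  # Radeon HD 5870 -> 2.72 TFLOPs
print(fp32_tflops(1024, 860))  # Radeon HD 7850 -> ~1.761 TFLOPs
```

Same memory bandwidth, roughly a Teraflop less on paper, and the 7850 still wins: that gap is purely architectural efficiency.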
And here is the added spanner in the works: Maxwell, which the Switch's Tegra is derived from, is a more efficient architecture than even the Graphics Core Next that the Radeon 7850 uses.
But let's look at the difference between Maxwell and Graphics Core Next 1.0, shall we? That is, the GeForce 970 and the Radeon 7970. (Using the 7970 because it doesn't have newer tech like Delta Colour Compression.)
The GeForce 970 (Maxwell): 1050MHz core clock, 1664:104:56 core layout, 196GB/s of bandwidth and 3.494 Teraflops of FP32 performance.
The Radeon 7970 (Graphics Core Next 1.0): 1GHz core clock, 2048:128:32 core layout, 288GB/s of bandwidth and 4.300 Teraflops of FP32 performance.
The 7970 should in theory have the lead, right? It does have more shaders, more texture mapping units, more bandwidth, more flops.
But nope. Thanks to the efficiency that Maxwell brings, the 970 slaps it around.
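Putting the two spec sheets side by side makes the point starker. A quick sketch of the 7970's on-paper advantages, using the numbers quoted above:

```python
# On-paper specs from the comparison above (GTX 970 vs Radeon 7970).
gtx_970 = {"shaders": 1664, "tmus": 104, "bw_gbs": 196.0, "tflops": 3.494}
hd_7970 = {"shaders": 2048, "tmus": 128, "bw_gbs": 288.0, "tflops": 4.300}

# Ratio > 1.0 means the 7970 leads on that raw spec.
for key in ("shaders", "tmus", "bw_gbs", "tflops"):
    print(f"7970 {key} advantage: {hd_7970[key] / gtx_970[key]:.2f}x")
```

Roughly 23% more shaders and flops and 47% more bandwidth on paper, yet the 970 comes out ahead in real games: the difference is all architecture.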
And you somehow came to the conclusion that Tegra isn't a massive leap in efficiency over the Wii U's VLIW architecture? Can I ask... how? The evidence says otherwise.
I've a feeling that once the hype dies down, we may see that the humble Wii U can actually perform to a higher level than the Switch in some areas. That 70GB/s, 32MB pool of eDRAM, which gave the Wii U a significant graphical performance boost, may deliver some surprises against the Switch's shared memory and more minimal caches.
Considering that at the end of the Wii U's console generation it's managing to put out titles like Zelda, that's pretty great.
But that end-of-generation-game is a launch title for the Switch. Think about that for a moment.
I am sure you are aware, but as a console generation progresses, developers learn the various nuances of the hardware and extract more out of the platform. Zelda is a port; it's not built from the ground up for the Switch.
It's not going to be the best looking Switch game, but it *will* be one of the best looking Wii U games ever.
As for eDRAM, it does help, but it's not a cure for everything. Keep in mind that RAM and caches have no processing capabilities; they don't compute anything.
In a worst-case scenario, if the data the Wii U requires isn't in the eDRAM (this happens more often than you'd think), bandwidth tanks to just 12.8GB/s from main memory.
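To illustrate, effective bandwidth can be sketched as a weighted mix of eDRAM hits (70GB/s) and main-memory fallbacks (12.8GB/s). The hit rates below are purely illustrative assumptions, not measurements:

```python
# Illustrative model: blend fast eDRAM bandwidth with the slow DDR3 fallback.
EDRAM_GBS = 70.0  # Wii U's 32MB eDRAM pool
DDR3_GBS = 12.8   # Wii U's main memory

def effective_bw(hit_rate):
    # hit_rate = fraction of accesses served from eDRAM (assumed, not measured)
    return hit_rate * EDRAM_GBS + (1.0 - hit_rate) * DDR3_GBS

for rate in (1.0, 0.9, 0.5):
    print(f"eDRAM hit rate {rate:.0%}: {effective_bw(rate):.1f} GB/s")
```

The point being: the headline 70GB/s only holds while the working set actually fits in 32MB; every miss drags the average toward 12.8GB/s.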
And if these fast caches were such an amazing revelation, then the Xbox One wouldn't be struggling in the face of the Playstation 4.
It helps. But it's not a dramatic game changer.
It will be interesting to see how this all unfolds. Most people are saying how much better Zelda looks on the Switch, but as some detailed comparisons have shown, it's not all one way: the Wii U has advantages too, some of them big, with lighting effects and shadows. Whether this continues into the final retail games is uncertain.
Of course. But it's still early days, as I pointed out above.
Zelda on the Switch did have better, more stable framerates (which is good), better texturing, better sound, shorter load times, and from what I could tell, better LoD too.
And when docked, Switch is rendering it all at a higher resolution.
We will need to wait for the release and a full breakdown from Digital Foundry, with some real pixel counting, as I have zero intention of buying either console.
I still believe in the Eurogamer spec, based on the speech to developers, rather than the Foxconn leak, which could just be stress testing during manufacturing. Certainly, looking at the performance of many games, it almost feels like the Eurogamer spec is too high, and perhaps the maximum MHz of the GPU while docked will be lower at retail.
Eurogamer stated the clocks were a "theoretical maximum", so they could certainly be lower at retail, but clockrate only tells part of the story.
Surely it's in Nintendo's interest to impress us with the games on a technical level, but most look so poor and are struggling to show distance from the Wii U. It really feels like there is a bad surprise coming regarding the Switch's performance. Worse, even, than our current already-low expectations.
This we can agree on. I like hardware, and I wish Nintendo had set out to impress the world a little harder on the hardware front.
With that said, I was unimpressed with the Xbox One and Playstation 4 as well on release.
Still, remember that it is early days.