Wyrdness said:
Again, where are you getting those numbers? You're asking for sources but not giving any; the threads on this very site where people like Permite or whatever his name is debated this came up with the twice-the-power speculation. If you're basing this on EG's speculation, it's already out of date: Foxconn is now the most legit leak, as they gave details about the Switch that can't be guessed, and they had the Switch at better performance than what EG speculated. EG even admitted they sat on their own info for months. The Switch, for a start, isn't a device you can estimate from flops and such; this was debated in the other threads. For one, Nvidia flops do more work per flop than AMD's. Secondly, the Switch may have access to Nvidia's architecture, shaders and all, which the Wii U doesn't, further widening the gap. Thirdly, you're using a game that is a port; Aonuma confirmed BOTW didn't land on the Switch until last April, making it a quick port of something built for a different architecture.
Well, the original Eurogamer article on the final spec is here:
http://www.eurogamer.net/articles/digitalfoundry-2016-nintendo-switch-spec-analysis
and the analysis of the other leak is here:
http://www.neogaf.com/forum/showthread.php?t=1334549
It seems to me the original Eurogamer spec is more accurate.
While I agree the Nvidia GPU is more capable than its GFLOPS figure indicates compared to the Wii U, it's important you don't just run away with ridiculous performance claims. The same comparison was made with the Wii U over the PS3 and 360, and the end result was not that the Wii U easily beat them despite the lower GFLOPS figure; it actually struggled despite the generational difference. Remember, the GFLOPS figure is an indication of a chip's performance level once you allow for its later architecture. What was fantastic performance at 250 GFLOPS in 2005 is pretty much base level now.
Let's not forget: if the Wii U has 176 GFLOPS from its main GPU, up to a 24 GFLOPS assist from its Wii GPU, plus 70 GB/s of high-speed memory for its framebuffer, that is pretty good compared to 150 GFLOPS and 25.6 GB/s of shared memory. When I say an overall 30% increase, I'm not exactly being unfair; I'm giving the Nvidia a lot of allowance for its later architecture, possibly too much.
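To make the arithmetic behind that 30% figure explicit, here's a quick back-of-envelope sketch using only the numbers quoted in this post (the script and the derived "allowance" multiplier are mine, not from any leak):

```python
# Figures quoted in this thread
wii_u_total_gflops = 176 + 24   # main GPU plus the claimed Wii-GPU assist
switch_raw_gflops = 150         # portable-mode figure being compared

claimed_increase = 0.30         # the "overall 30% increase" concession

# How much per-flop credit does that concession actually give Nvidia?
implied_allowance = wii_u_total_gflops * (1 + claimed_increase) / switch_raw_gflops

print(f"Implied per-flop allowance for the Nvidia GPU: {implied_allowance:.2f}x")
# -> 1.73x
```

In other words, conceding a 30% overall lead over the Wii U's 200 GFLOPS total means crediting each Nvidia flop as worth roughly 1.73 AMD/Wii U flops, which is a generous assumption.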