Pemalite said:
Bofferbrauer2 said:

Not sure if using the X2 would have resulted in that much more performance (pure GPU performance was ~50% higher, but when both CPU and GPU were used, the margin between the two chips tended to be much lower, iirc), but it would have been the better choice for sure, especially with its larger memory bandwidth.

I expect NVidia to create a new Tegra chip soon. Not just for the Switch, but also for the Jetson Nano and NVidia Shield devices, which use the same chips as the Switch does. In fact, I believe that without the chip crunch NVidia would already have done it and it would have been the basis of the new OLED model.

It probably won't be much more than a die-shrunk X2 (to 12nm, down from the original 16nm), but that alone could result in up to 50% more performance compared to the current model. Most interesting would be that the memory could be increased to 8GB on the next model, which could definitely help some games.

Nah. The improvements going from Maxwell to Pascal were big in terms of performance gains.

On the CPU side you went from...
A57 Quad-Core @ 1.9GHz.
A53 Quad-Core @ 1.3GHz.

To:
Denver2 Dual-Core @ 2.0GHz.
A57 Quad-Core @ 2.0GHz.

However, Tegra Maxwell only had the A57 quad-core cluster enabled...

On Tegra Pascal you would run the super high-performance Denver cores... But the design also required the use of at least a single A57 "Core0" core for I/O and interrupt handling, plus some other tasks.

Denver2 + one ARM A57 generally provided twice the throughput in benchmarks compared to a quad-A57 cluster.

Obviously Carmel makes them both out to be a joke... But we are talking about available chips on the Switch's release.

On the GPU side of the equation...

Maxwell @ 1GHz
vs
Pascal @ 1.5GHz.

Same power envelope... 20nm vs 16nm.
Remember, TSMC's 16nm process is basically 20nm but with FinFET... Pascal from the very outset was designed to push 50% higher clock rates at the same TDP as Maxwell.
https://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/6

So even when both are on 16nm, Tegra Pascal will have a clockspeed advantage at the same TDP... As that was nVidia's original design philosophy with the Pascal design: increase clocks, keep the same number of functional units and features.
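
To put some very rough numbers on that, here's a back-of-envelope Python sketch (not official figures: both GPUs have 256 CUDA cores, and the clocks below are approximate module maximums rather than the clocks the Switch actually runs at):

# Theoretical FP32 throughput: cores * 2 ops per clock (FMA) * clock in GHz.
def gflops_fp32(cuda_cores, clock_ghz):
    return cuda_cores * 2 * clock_ghz

tx1_maxwell = gflops_fp32(256, 1.0)  # ~512 GFLOPS at ~1GHz
tx2_pascal = gflops_fp32(256, 1.5)   # ~768 GFLOPS at the 1.5GHz figure above
# (the shipping Jetson TX2 module tops out closer to ~1.3GHz, which would be ~30%)
print(f"Clock-driven uplift: ~{(tx2_pascal / tx1_maxwell - 1) * 100:.0f}%")  # ~50%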

On top of that... Tegra Pascal brings improved delta colour compression, so it has more usable bandwidth available... But it also allows for more than twice the raw memory bandwidth of Tegra Maxwell. (25.6GB/s vs 59.7GB/s)
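
For reference, both of those figures fall straight out of the memory configurations; a quick sanity check in Python, using the published X1/X2 bus widths and LPDDR4 transfer rates:

# Peak theoretical bandwidth = bus width in bytes * transfer rate.
def bandwidth_gb_s(bus_width_bits, transfer_mt_s):
    return (bus_width_bits / 8) * transfer_mt_s / 1000

tx1 = bandwidth_gb_s(64, 3200)   # 64-bit LPDDR4-3200  -> 25.6 GB/s
tx2 = bandwidth_gb_s(128, 3733)  # 128-bit LPDDR4-3733 -> ~59.7 GB/s
print(f"X1: {tx1:.1f} GB/s, X2: {tx2:.1f} GB/s ({tx2 / tx1:.2f}x)")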

And memory bandwidth is one of the biggest things holding the Switch back from achieving more than 720P in most titles, especially when a lot of heavy alpha effects are being thrown around.
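
To illustrate why alpha effects in particular eat into that budget, here's an illustrative Python sketch; the 60fps target and four layers of full-screen overdraw are assumptions, and a real frame also spends bandwidth on textures, geometry and the CPU:

# Each blended full-screen transparent layer is a read-modify-write of the framebuffer.
WIDTH, HEIGHT = 1280, 720      # 720P
BYTES_PER_PIXEL = 4            # RGBA8
FPS = 60                       # assumed target framerate
OVERDRAW_LAYERS = 4            # assumed layers of full-screen alpha effects

bytes_per_layer = WIDTH * HEIGHT * BYTES_PER_PIXEL * 2  # read + write
blend_gb_s = bytes_per_layer * OVERDRAW_LAYERS * FPS / 1e9
print(f"~{blend_gb_s:.2f} GB/s just on blending")  # ~1.77 GB/s; 1080P needs 2.25x that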

So I am probably actually being a little conservative in stating a 50% performance improvement.

The Denver cores were based upon the A72, and they weren't very good ones at that. In fact, due to the way scheduling works on them, having them working concurrently with the A57 could actually cost performance. Because Denver doesn't want to share what's in its cache, the A57 might have to calculate the whole thing all over again if it needs those results too. This might actually be the reason why Nintendo opted for the X1 instead.

Oh, and while the GPU is more powerful, it's not 50% more powerful. At Max-Q settings the TX2 has about the same performance as a TX1; at Max-P that only increases to a 20-40% performance lead over the TX1, while consuming a similar amount of power.

Either way, this discussion is a bit pointless, as the original X2 will certainly not be used in the Switch going forward, and even a future chip based on the X2 might be changed internally.