spemanig said:
Darc Requiem said:

Based on Nvidia's own slides, the Tegra X2 is roughly 50% more powerful: 768 GFLOPS (X2) versus 512 GFLOPS (Tegra X1). It's manufactured on a smaller process, so it should use less power than a Tegra X1. The Tegra X2 also has 2 to 3 times the memory bandwidth depending on whether it uses LPDDR4 (50 GB/s) or GDDR5 (80 GB/s); the Tegra X1 had 25 GB/s LPDDR4. You'd hope Nintendo would opt for GDDR5, but you never know with Nintendo.

Edit: The 1.5 TFLOP number people are touting is misleading; that's in FP16. It's half that, 768 GFLOPS, in FP32, and FP32 is what the Xbox One and PS4 GPU specs are measured in.

What does that mean in layman's terms, though? What would the X1 mean for the Switch, as opposed to the X2, in terms of practical application?

I'm a layman myself. However, the X1 is 20nm, so it would run hotter and use more power. The X1 also has really limited memory bandwidth, which is the biggest issue I see. The Wii U has really limited bandwidth too, 12.8 GB/s for the main memory, but at least it has embedded VRAM on the GPU to compensate, similar to how the Xbox One compensates for its main memory bandwidth (68 GB/s) with ESRAM. The X1 has LPDDR4 with 25.6 GB/s of bandwidth, which is 360/PS3-level bandwidth. Best case scenario is an X2 with GDDR5.
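
For anyone who wants to sanity-check the numbers, here's a quick back-of-the-envelope script. This is just my own sketch using the figures quoted in this thread (not official specs); the only "trick" is that the Tegra's FP16 rate is double its FP32 rate, which is why the 1.5 TFLOPS marketing figure halves to roughly 768 GFLOPS:

```python
# Back-of-the-envelope math for the figures quoted above.
# All GB/s and GFLOPS numbers are the ones cited in this thread, not measured values.

def fp32_gflops_from_fp16_tflops(fp16_tflops):
    """FP16 throughput is double FP32 on these chips, so halve the marketing number."""
    return fp16_tflops * 1000 / 2

# The 1.5 TFLOPS figure people quote is FP16; in FP32 terms it is roughly:
print(fp32_gflops_from_fp16_tflops(1.5))   # 750.0 -> ~768 GFLOPS as listed on the slides

# Rough FP32 compute gap, X2 vs X1:
print((768 - 512) / 512 * 100)              # 50.0 -> ~50% more

# Memory bandwidth relative to the Tegra X1's 25.6 GB/s LPDDR4:
bandwidth_gbps = {
    "Wii U main RAM":  12.8,
    "Tegra X1 LPDDR4": 25.6,
    "Tegra X2 LPDDR4": 50.0,
    "Xbox One DDR3":   68.0,
    "Tegra X2 GDDR5":  80.0,
}
for name, gbps in bandwidth_gbps.items():
    print(f"{name}: {gbps} GB/s ({gbps / 25.6:.1f}x the X1)")
```

So on those numbers the GDDR5 configuration would give a bit over 3x the X1's bandwidth, while the LPDDR4 configuration would give about 2x.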