Pemalite said:

You have twisted my statements to try and fit your narrative.

Nintendo didn't purchase the Tegra X1 as a "hand-me-down". That's your false statement.

The Tegra X1 is not "Custom designed" for any form factor. nVidia made a chipset and threw it everywhere they could... Which is why it ended up in Tablets, HTPCs, Handhelds and Automotive.
It's literally not custom designed for anything; it's a general-purpose SoC with an emphasis on GPU capability.

nVidia literally every single year... Takes a chip, adjusts its clockspeeds and sells it to a different market, form factor or price bracket. The Tegra X1 is no different.
Nintendo had to make the decision to be conservative with clocks to keep TDP and Battery life within an acceptable "worst-case scenario".
nVidia had nothing to do with that clockspeed decision, that was Nintendo, which is why Switch consoles are capable of overclocking so well.

I never said the X1 was customized for any specific form-factor. In fact, that is my original point: it was designed to be used in many different devices for many different purposes. Also, let's not become the pot calling the kettle black with "you have twisted my statements to try to fit your narrative." You have a tendency to splice statements out of their original context and reply to them as if they aren't part of a stream of thought. You do this often and to many people. Besides, I didn't "twist your statement"; I pointed out that you put words in my mouth, like the idea that hardware was sitting around somewhere and was then being used in Switches.

Pemalite said:

The cost of designing the chip was already done and dusted before the Tegra X1 even released.
The GPU is derived from their Desktop offerings, being Maxwell based.

nVidia then licensed the ARM cores from... Well. ARM.

What do you think happened? nVidia designed the CPU and GPU architectures from scratch? No. No. That didn't occur.

It is no more of a "hand-me-down" chip than what the Xbox Series X, PlayStation 5 and Switch 2 have... They are all based on already developed CPU and GPU architectures.

nVidia, AMD and Intel build "libraries" (blocks of semiconductor IP) that they can line up with other libraries in order to expedite time-to-market chip development.

This is no different than the Switch 2 SoC being based on another nVidia chip used for IoT, Cars and more.
It's based once again on ARM IP and uses the PC Ampere GPU architecture... Again, this technology is several years old at this point.

Every single design decision requires a labor force that typically makes six figures per FTE (in the U.S.) to plan, design, and test those decisions. Microarchitecture R&D is separate from the R&D for designing the actual chipsets. There were still FTEs making decisions about what the specific SoC would look like with the X1, and they cost money to employ. This isn't some binary of "R&D is done when Maxwell [or any other microarchitecture] comes out." Tons of decisions are made after that.
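To put rough numbers on that (a back-of-the-envelope sketch with purely illustrative figures, not actual NVIDIA staffing or salary data):

```python
# Purely illustrative numbers -- not real NVIDIA figures. The point is
# that even "derivative" chip work is a real, multi-million-dollar line
# item on top of the base microarchitecture R&D.

fte_count = 50           # hypothetical team sizing an SoC variant
loaded_cost = 250_000    # rough fully-loaded annual cost per engineer (USD)
years = 2                # hypothetical planning/design/validation window

chip_level_rnd = fte_count * loaded_cost * years
print(f"Chip-level R&D: ${chip_level_rnd:,}")  # Chip-level R&D: $25,000,000
```

Even with conservative assumptions like these, the chip-level work that happens after the microarchitecture exists is nowhere near free.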

And yes, planning the development of a chipset specifically with one platform and its form-factor in mind does require more R&D than just plopping in an already designed general-purpose chip, turning off a few CPU cores, and adjusting clock rates, as the sketch below shows. Are you seriously arguing otherwise here?
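Here is roughly what that kind of derivative configuration amounts to. The clocks are the approximate, widely reported Shield TV and Switch figures, not official spec sheets, and the structure is just illustrative:

```python
# Illustrative only: the same silicon, reconfigured rather than redesigned.
# Clock figures are approximate, widely reported numbers, not official specs.

shield_x1 = {
    "cpu_cores_active": 4,   # 4x Cortex-A57 (the A53 cluster goes unused)
    "cpu_clock_mhz": 1900,   # approximate Shield TV peak
    "gpu_clock_mhz": 1000,   # approximate Shield TV GPU clock
}

switch_x1 = dict(shield_x1)          # same chip, start from the same config...
switch_x1["cpu_clock_mhz"] = 1020    # ...then dial clocks down for handheld
switch_x1["gpu_clock_mhz"] = 768     # TDP and battery life (docked GPU clock)

# The "R&D" here is choosing numbers, not designing silicon.
for key in shield_x1:
    print(key, shield_x1[key], "->", switch_x1[key])
```

Compare that to years of planning, designing, and validating a chip around one device, and the difference in labor should be obvious.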

If I were to make an analogy, it's the difference between making a new video game and making a remaster of a game. In both scenarios you might use middleware like a game engine (in this analogy, the microarchitecture design that has already been done), but the latter (remaster : adjusting a few clock rates) requires a lot less work, fewer FTEs, and less money than the former (developing a new game : designing hardware to fit a specific form-factor and purpose over the course of years).

The Switch 2 SoC isn't based on the T234 in the same way the Switch's T210 was based on the Shield's T210. I'm not sure where you got that idea. It was developed adjacent to it, and its design decisions were made with the Switch 2's form-factor and intended use in mind. Some decisions might have been made for Ampere-based Tegra in general, but not all of them. That's why it doesn't, for example, include specialized edge-compute hardware but does have an FDE. Just because they both use the same basic architecture and share some design choices doesn't mean there isn't different R&D being done for these two chips beyond the microarchitecture and common chipset-family R&D.
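As a rough diff of the two chips' block lists (the exact contents here are assumptions drawn from public reporting and leaks, not official spec sheets):

```python
# Rough sketch of the point above. Block lists are assumptions from
# public reporting, not official NVIDIA documentation.

t234_orin = {"ampere_gpu", "arm_cpu_cluster", "dla", "pva"}  # DLA/PVA = edge-compute blocks
t239_switch2 = {"ampere_gpu", "arm_cpu_cluster", "fde"}      # FDE = file decompression engine

print("shared:", sorted(t234_orin & t239_switch2))      # common family IP
print("T234 only:", sorted(t234_orin - t239_switch2))   # edge-compute blocks the console doesn't need
print("T239 only:", sorted(t239_switch2 - t234_orin))   # console-specific addition
```

Deciding which blocks to drop and which to add is exactly the kind of platform-specific R&D I'm talking about.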

Last edited by sc94597 - on 03 May 2025