haxxiy said:
Blazerz said:
Troll harder. 8nm is so inefficient it was never going to happen (would actually be more efficient to get a weaker chip and overclock it). 4-5nm it is.
|
Kopite says it's 8nm, so it has to be 8nm. I'm not any more fond of it than you are, but his Nvidia stuff is basically always right.
It's an 8-inch screen hybrid with an 8nm APU that will likely be revealed in March and that's that.
|
Except he got everything wrong in his own tweets. The only thing he got right was the T239 name; from the Nvidia leak, everything else he said was wrong (wrong number of graphics cores, he actually estimated more; wrong codename, Drake is not Dane; Ada Lovelace being the architecture; etc.). How do we know that? Because the Nvidia ransomware leak, which is what you should trust since it's official internal Nvidia documentation, said otherwise. I think he has sources in Nvidia's PC GPU division but isn't so great outside of that, and even there he gets things wrong; I believe he said the 40 or 50 series GPUs would be on a certain node and then a few months later admitted he was wrong.
The problem with this whole thing is the Nvidia leak shows 12 SMs for the T239 ... that's a shit ton of graphics cores for no great reason. That's not a low-end part; it's nothing like the Wii or DS or even the Switch. It's a monstrously huge chip, we're talking, at 8nm, something just a little shy of the Xbox Series S, lol.
So that whole "this is just some cheap junk chip" argument goes out the window already. Even if you want to argue 8nm, fine, then Nintendo is investing in this massive, huge, power-sucking chip (which makes no sense, but OK, fine). That's nothing like their past chipset philosophy.
For a 12 SM chip at 8nm to have anything like reasonable battery life, it would have to be clocked ridiculously low (below 40% even). But the kicker is, if they wanted to be cheap, why even bother with 12 SMs in the first place? Why create this massive chip with all these extra cores (which will lower yields)? It makes the chip more expensive for no reason; you could cut the SMs (graphics cores) down to 6 or 8, clock those cores a little higher, and get the same performance you'd get from clocking 12 SMs that low.
12 SMs is the problem. It makes no sense, but it's there in black and white in Nvidia's documentation. Now at 5nm (TSMC 4N) it makes a lot more sense; the chip would then be about the same size as the Tegra X1 in the original Switch and have about the same battery life. At 8nm you're going to get a much bulkier system, and the chip isn't even likely to be that cheap. If Nintendo wanted the cheapest chip possible, they should have used 6 or 8 SMs; adding SMs costs die area, while clocking the existing cores higher costs nothing in silicon.
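The "fewer SMs, higher clock" point is just arithmetic: peak FP32 shader throughput scales with SM count × cores per SM × clock. A minimal sketch of that equivalence (the clock figures here are hypothetical picks for illustration, not leaked T239 specs):

```python
# Peak FP32 throughput scales linearly with SM count and clock.
# Clock values below are made up to illustrate the trade-off.

CORES_PER_SM = 128  # Ampere GPUs have 128 FP32 CUDA cores per SM

def fp32_tflops(sms: int, clock_ghz: float) -> float:
    """Peak FP32 TFLOPS: cores * 2 ops per cycle (FMA) * clock."""
    return sms * CORES_PER_SM * 2 * clock_ghz / 1000

wide_and_slow = fp32_tflops(12, 0.6)    # 12 SMs, heavily downclocked
narrow_and_fast = fp32_tflops(8, 0.9)   # 8 SMs, clocked 50% higher

print(wide_and_slow, narrow_and_fast)   # both ~1.84 TFLOPS
```

Same peak throughput either way, but the 8 SM die would be smaller and cheaper to yield, which is exactly why a heavily downclocked 12 SM design looks odd.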
The other thing is, in the Nvidia ransomware leak we see that Nvidia was testing DLSS performance for the T239, but using clock speeds way too high for a 12 SM chip to run off battery power at 8nm. It would fit almost perfectly, with about the same consumption as the OG Switch 1 units, if it were on 5nm.
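For context on why those test clocks matter: CMOS dynamic power scales roughly as C·V²·f, and voltage has to rise with clock, so power grows faster than linearly with frequency. A rough sketch with purely illustrative numbers (none of these are T239 figures):

```python
# Rough dynamic-power model: P ~ C * V^2 * f.
# Capacitance, voltages, and clocks are invented for illustration only.

def dynamic_power(cap: float, volts: float, freq_ghz: float) -> float:
    """Relative dynamic power in arbitrary units."""
    return cap * volts**2 * freq_ghz

handheld = dynamic_power(1.0, 0.70, 0.6)    # downclocked, battery-friendly point
test_clock = dynamic_power(1.0, 0.85, 1.1)  # higher clock needs higher voltage

print(round(test_clock / handheld, 1))      # roughly 2.7x the power
```

An ~80% clock bump costing ~2.7x the power is why a node shrink (better efficiency at the same clock) is the usual way to make high test clocks viable on battery.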
I think most people (including kopite) are saying 8nm because that's what the other Ampere chips were, but the Tegra X1 inside the Switch is an example of that rule not holding. The Tegra X1 is a Maxwell-based chip; all Maxwell home GPUs were 28nm, but for the Tegra X1 Nvidia used 20nm, a more modern and (supposedly) more power-efficient node. So your own darn Switch is proof positive that this line of logic doesn't hold steady. If we went by that logic, the Tegra X1 inside the Switch should have been 28nm like all the other Maxwell GPUs.
Anyway, from the sounds of it we'll know in a month or so, so I'm not going to waste a whole bunch of time on this. Everything that can be said about it has been said at this point, unless someone has a leak to share from actual Switch 2 hardware.
Last edited by Soundwave - on 09 February 2024