
Forums - Nintendo Discussion - More hints of AMD's Potential Partnership with Nintendo on the NX

This is good news for AMD since they're so desperate for some cash ...




I think people are ditching ARM too early. AMD has already stated that they have two semi-custom design wins and that those parts will be introduced in 2016: one is x86, one is ARM, and at least one will be for gaming.



Dusk said:


The X1 is emulating the 360 with its backwards compatibility. That's the difference between open-source reverse engineering and having the proprietary code, so you can design closed-source software to run it without any reverse engineering at all. BTW, you don't need an i5 to properly emulate the Wii on Dolphin; Dolphin is restricted far more by the GPU than the CPU, and even then the requirement isn't that high. 

I think that Microsoft will release X360 games like Nintendo does with Virtual Console, but you'll need the physical disc purely as an anti-piracy check (don't quote me on that). It's not simple emulation. If it were simple emulation you would just need an "emulator" app and the physical disc, with no additional downloads. But as I understand it, you're going to download every X360 game to your X1. 

From my programming background I suspect that Microsoft is rewriting some API calls, from X360 to X1, maybe using something like a proxy pattern. They need to optimise every single game because the X1 lacks raw power. They are listing and counting the API calls that each piece of software uses and making a unique, optimised "emulator" (a new layer, to be precise) for each game.
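The proxy idea described above can be sketched roughly like this. To be clear, every class and method name here is invented for illustration; these are not actual Microsoft APIs:

```python
# A per-game proxy exposes the old (X360-style) call names, forwards
# them to the new (X1-style) backend, and profiles which calls the
# game actually makes, so only those need to be implemented and tuned.

class X1Graphics:
    """Stand-in for the host (X1-side) API."""
    def draw(self, vertices):
        return f"x1-draw({len(vertices)} verts)"

class X360GraphicsProxy:
    """Per-game proxy layer over the old-style API (all names invented)."""
    def __init__(self, backend):
        self._backend = backend
        self.calls_seen = {}  # count the API calls this title uses

    def DrawPrimitives(self, vertices):  # old-style call name (invented)
        self.calls_seen["DrawPrimitives"] = self.calls_seen.get("DrawPrimitives", 0) + 1
        return self._backend.draw(vertices)

proxy = X360GraphicsProxy(X1Graphics())
result = proxy.DrawPrimitives([(0, 0), (1, 0), (0, 1)])
print(result)            # x1-draw(3 verts)
print(proxy.calls_seen)  # {'DrawPrimitives': 1}
```

The call-counting dictionary is the point of the sketch: it's how you'd find out which subset of the old API a given game touches, so the translation layer can be specialised per title.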

It doesn't matter whether they know their own code or are reverse engineering; in fact, it has nothing to do with knowing their own code. The X1 can't have a "general" X360 emulator. It lacks the raw power. You can see it by looking at the games currently available: they are all simple, undemanding games. I don't think they could make Skyrim run properly for the time being.

In the same way, the NX would need to be more powerful than the PS4 to run Wii U discs through a "general" emulator. Maybe that's why Iwata talked about "absorbing Wii U architecture". Maybe we'll get Wii U VC games instead of traditional backwards compatibility.



Soundwave said:
I kinda hope Nintendo just ditches AMD to be honest.

PowerVR can give them better performance per watt at likely a lower price.

The GT7900 could give them 800 GFLOPS on the console variant at under 10 watts and can scale up even further if they want. A mobile version of the same chip could be in the 400 GFLOP range for 5-6 watts and give them a very powerful handheld.

I doubt AMD can beat those performance ratios, if they could they would be making GPUs for tablets/phones and be a big player in that scene, but they're not.

Make smart decisions with the hardware, IBM/AMD sure as heck aren't helping Nintendo now, they're stuck with a bloated custom design that they can't lower the price on. Use a more mainstream vendor like PowerVR IMO. They make the Apple iPhone/iPad GPUs so you know they operate on a huge mass production scale.

I doubt the GT7900 consumes less than a 10W TDP, even less so at your 800 GFlops (it's stated to reach 666 GFlops @ 650MHz, and clocking it at around 800MHz WILL drastically increase consumption, if those speeds can even be reached with its architecture). Apple's A8X uses the biggest PowerVR chip to date, which only manages around 330 GFlops, half of a GT7900, yet still gets pretty hot.
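For what it's worth, the 666 GFlops @ 650MHz figure is consistent with the usual peak-FLOPS formula, assuming the commonly cited 512 FP32 ALUs for the GT7900 (an assumption on my part, not an official spec):

```python
# Peak FP32 throughput: ALUs * 2 (a fused multiply-add counts as two
# floating-point ops per cycle) * clock in GHz.
def peak_gflops(alus, ghz, flops_per_cycle=2):
    return alus * flops_per_cycle * ghz

print(peak_gflops(512, 0.650))  # 665.6 -> the quoted ~666 GFlops figure
print(peak_gflops(512, 0.800))  # 819.2 -> the clock needed for ~800 GFlops
```

And since dynamic power scales roughly with frequency times voltage squared, that ~23% clock bump would cost noticeably more than 23% extra power if the voltage has to rise along with it.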

Also, PowerVR are notoriously bad at drivers; just ask Intel about the tons of glitches and incompatibilities they caused when Intel used them in their early iGPs. And while those had GFlop capacities comparable to early mid-level AMD APUs, they still got trounced by them because they couldn't turn their theoretical power into real performance.

Also, calling a custom chip where all unnecessary parts and functions are cut out a bloated design is actually pretty funny, because it's the opposite of reality. If it has a good yield rate (which it should by now), I don't expect Nintendo to be paying much more than $20-30 for the whole package, a price the GT7900 would easily surpass on its own. Why? Because it's not their own design, so Nintendo would just be a normal client and charged as such.

And again, you're forgetting a CPU.



numberwang said:
AMD uses Globalfoundries to manufacture their chips, if I am not mistaken. So are we going with 32/28nm for the CPU/GPU in 2016? What are the implications?

Globalfoundries is adopting Samsung's 14/16nm process right now. AMD's Zen processors are slated to use it in 2016, and probably the next Radeons before that.



Bofferbrauer said:

Globalfoundries is adopting Samsung's 14/16nm process right now. AMD's Zen processors are slated to use it in 2016, and probably the next Radeons before that.

It's not like AMD will be using it to design chips for low-margin products such as consoles, so it still makes sense for Nintendo to use 28nm ...

As for Nintendo getting the latest CPU and GPU micro-architectures from AMD, that's only if they're willing to spend hundreds of millions of dollars to port them to 28nm, seeing as AMD won't be wasting time on an old process node ...



Pavolink said:
Ugh, backwards compatibility. I fear this will once again hold the console back.


I dunno, being able to play Wii U (and maybe even Wii games) on the system would be pretty nifty.

 

Besides, people were bitching that Wii U didn't have GC backwards compatibility.



Bofferbrauer said:
Soundwave said:
I kinda hope Nintendo just ditches AMD to be honest.

PowerVR can give them better performance per watt at likely a lower price.

The GT7900 could give them 800 GFLOPS on the console variant at under 10 watts and can scale up even further if they want. A mobile version of the same chip could be in the 400 GFLOP range for 5-6 watts and give them a very powerful handheld.

I doubt AMD can beat those performance ratios, if they could they would be making GPUs for tablets/phones and be a big player in that scene, but they're not.

Make smart decisions with the hardware, IBM/AMD sure as heck aren't helping Nintendo now, they're stuck with a bloated custom design that they can't lower the price on. Use a more mainstream vendor like PowerVR IMO. They make the Apple iPhone/iPad GPUs so you know they operate on a huge mass production scale.

I doubt the GT7900 consumes less than a 10W TDP, even less so at your 800 GFlops (it's stated to reach 666 GFlops @ 650MHz, and clocking it at around 800MHz WILL drastically increase consumption, if those speeds can even be reached with its architecture). Apple's A8X uses the biggest PowerVR chip to date, which only manages around 330 GFlops, half of a GT7900, yet still gets pretty hot.

Also, PowerVR are notoriously bad at drivers; just ask Intel about the tons of glitches and incompatibilities they caused when Intel used them in their early iGPs. And while those had GFlop capacities comparable to early mid-level AMD APUs, they still got trounced by them because they couldn't turn their theoretical power into real performance.

Also, calling a custom chip where all unnecessary parts and functions are cut out a bloated design is actually pretty funny, because it's the opposite of reality. If it has a good yield rate (which it should by now), I don't expect Nintendo to be paying much more than $20-30 for the whole package, a price the GT7900 would easily surpass on its own. Why? Because it's not their own design, so Nintendo would just be a normal client and charged as such.

And again, you're forgetting a CPU.


If AMD is so great, why does no one in mobile use them? I doubt AMD can give Nintendo the same power per watt for the same price PowerVR can. There's a reason Apple banks on them. The A8X does get warm, but it works fine and is in tens of millions of devices, mass produced at 20nm without too many hitches since last year. 

IMO, one of the reasons Nintendo is having problems is that they haven't been smart in their component partner choices over the last few years. Yamauchi was much more focused on getting the best price for performance, whereas Nintendo since then has become bogged down in "well, our designers only want this architecture" or something. Poor decision making. 

I'd suspect NX will probably be something like 300 GFLOPS for the handheld version and 600 GFLOPS (basically just double) for the home version at 28nm and Nintendo will likely even pay more to get that performance than Apple does.



Soundwave said:

If AMD is so great, why does no one in mobile use them? I doubt AMD can give Nintendo the same power per watt for the same price PowerVR can. There's a reason Apple banks on them. The A8X does get warm, but it works fine and is in tens of millions of devices, mass produced at 20nm without too many hitches since last year. 

If PowerVR is so great why aren't they competing in the discrete GPU market anymore ? (I do get why you'd be skeptical of AMD.)

Plus, I don't think Nintendo will use 20nm, and that relates to your question of why no one uses AMD in the mobile segment: AMD can't afford to push out too many 20nm chip designs ...

Soundwave said:

IMO, one of the reasons Nintendo is having problems is that they haven't been smart in their component partner choices over the last few years. Yamauchi was much more focused on getting the best price for performance, whereas Nintendo since then has become bogged down in "well, our designers only want this architecture" or something. Poor decision making. 

I absolutely agree right here with you but the reason why Nintendo's choices seem so poor has to do with the fact that they won't forgo backwards compatibility ...

Soundwave said:

I'd suspect NX will probably be something like 300 GFLOPS for the handheld version and 600 GFLOPS (basically just double) for the home version at 28nm and Nintendo will likely even pay more to get that performance than Apple does.

The Tegra X1 has a 50 GFlops/watt ratio on 20nm, so how do you suppose that PowerVR will be able to deliver that at 28nm?!
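The ~50 GFlops/watt figure for the Tegra X1 can be reconstructed from its commonly cited specs (256 FP32 ALUs at roughly 1GHz in a ~10W envelope; both numbers are approximations rather than measured values):

```python
def gflops_per_watt(gflops, watts):
    return gflops / watts

# Tegra X1 peak: 256 ALUs * 2 ops/cycle * 1.0 GHz = 512 GFlops.
x1_gflops = 256 * 2 * 1.0
print(gflops_per_watt(x1_gflops, 10))  # 51.2 GFlops/W on 20nm

# The claim being doubted: a 28nm PowerVR part would need a similar
# ratio to fit 600 GFlops into a small console's power budget.
print(600 / gflops_per_watt(x1_gflops, 10))  # ~11.7 W at the same ratio
```

So the skepticism amounts to: a 28nm chip matching the efficiency of a 20nm chip, which is a full node behind.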



fatslob-:O said:
Bofferbrauer said:

Globalfoundries is adopting Samsung's 14/16nm process right now. AMD's Zen processors are slated to use it in 2016, and probably the next Radeons before that.

It's not like AMD will be using it to design chips for low-margin products such as consoles, so it still makes sense for Nintendo to use 28nm ...

As for Nintendo getting the latest CPU and GPU micro-architectures from AMD, that's only if they're willing to spend hundreds of millions of dollars to port them to 28nm, seeing as AMD won't be wasting time on an old process node ...

If the chip already comes in 16nm, there's no point in porting it back to 28nm, as that would make the chip much more expensive: it would take up almost four times as much space on a wafer, and thus yield fewer chips per wafer. Since chips are sold by the wafer, this would drastically increase their cost. Besides, the bigger chip would also consume more energy, possibly too much to work at the same clock speed as the 16nm chip.
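The per-wafer cost argument can be made concrete with a toy calculation. The die sizes below are made-up round numbers, and the formula ignores edge loss and yield, so treat it as an upper bound only:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Crude upper bound: total wafer area divided by die area."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)

# Hypothetical 100 mm^2 die at 16nm vs. the same design taking almost
# four times the area when ported back to 28nm:
print(dies_per_wafer(300, 100))  # 706 dies per 300 mm wafer
print(dies_per_wafer(300, 400))  # 176 dies: roughly 4x the per-die cost
```

Since fabs charge per wafer, a quarter as many dies per wafer means roughly four times the silicon cost per chip before yield is even considered.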

If they were still using a 28nm chip from AMD, then it's probably a Puma(+) processor (an evolution of the Jaguar in the PS4/Xbox ONE), since Bulldozer-based chips would work badly (read: even worse) in a console, plus a GCN 1.1/1.2 graphics chip (PS4/Xbox ONE: 1.0). In both cases, the evolution over the hardware in the other current-gen consoles is pretty weak, meaning Nintendo won't be able to come close to PS4 power unless they more than double the power consumption of the Wii U. Since Nintendo has stated they prefer to build small, I doubt they will do that, as the extra power would need extra space for stronger cooling and heat dissipation.

So, either 14/16nm, or something along the lines of Xbox ONE power at best. Unless they'd use two separate chips for the CPU and GPU, allowing them to mix two different production processes. But that's kinda pointless if the producer delivers both in a neat all-in-one package.