
Forums - Nintendo Discussion - Who will provide the NX GPU?

 

Who is making the NX GPU

nVidia 187 41.19%
 
AMD 210 46.26%
 
Silicon Graphics Inc 16 3.52%
 
Sony (the power of the Cell!!!) 41 9.03%
 
Total: 454
Pemalite said:
Soundwave said:

What do you think two Tegra X2's in unison could accomplish (Base NX + A Hypothetical Supplement Compute Device with a second SoC)? 

I'm kinda just curious to see how far a company could take these little chips, it's sorta fascinating. 

I think the SCD would simply be a second Tegra X2, or maybe the same chip with more CUDA cores or something. Nintendo won't want to pay for an entirely separate semi-custom design, and by putting the same GPU in the SCD, it could lower Nintendo's costs by increasing mass production of the same chip.

Actually, it's kind of interesting that the Parker-based Drive PX 2 board already utilizes two Tegra X2-class SoCs in tandem. I wonder if their automotive work has forced Nvidia to become more comfortable with multi-processor setups, and maybe that's also where Nintendo's idea for the Supplemental Compute Device comes from.

I would place two Tegra X2's at roughly Xbox One levels of imagery, to be honest, but at 720p resolution.

I just want answers, clear and concise ones, from Nintendo with a full NX reveal already. Haha

I know it's not the be-all and end-all, but wouldn't two Tegra X2's in unison constitute about 1.25 TFLOPS of performance? And if we average out that Nvidia's floating point performance is generally 30% higher, that would put the two at 1.625 TFLOPS in AMD terms.

Maybe if they added, say, 24-32 MB of high-speed eDRAM onto the SCD version of the TX2... would that change things? That would kinda cancel out the XB1's memory bandwidth advantage.

Where would it be lacking vs. an XB1 in that scenario? Surely two X2 units would boost things like the poly count and fill rate?
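The arithmetic behind that claim is easy to sketch. Note that both the per-chip figure and the 30% efficiency factor are the post's assumptions, not measured numbers:

```python
# Back-of-the-envelope check of the combined-FLOPS claim above.
# Both inputs are assumptions from the post, not measured figures.
tegra_x2_tflops = 0.625        # assumed FP32 throughput of one Tegra X2
nvidia_bonus = 0.30            # assumed NVIDIA-vs-AMD "flops go further" factor

combined = 2 * tegra_x2_tflops                  # naive, perfect 2x scaling
amd_equivalent = combined * (1 + nvidia_bonus)  # "in AMD terms"

print(f"{combined:.2f} TFLOPS raw, {amd_equivalent:.3f} TFLOPS AMD-equivalent")
# 1.25 TFLOPS raw, 1.625 TFLOPS AMD-equivalent
```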



Soundwave said:
Pemalite said:

I would place two Tegra X2's at roughly Xbox One levels of imagery, to be honest, but at 720p resolution.

I just want answers, clear and concise ones, from Nintendo with a full NX reveal already. Haha

I know it's not the be-all and end-all, but wouldn't two Tegra X2's in unison constitute about 1.25 TFLOPS of performance? And if we average out that Nvidia's floating point performance is generally 30% higher, that would put the two at 1.625 TFLOPS in AMD terms.

Maybe if they added, say, 24-32 MB of high-speed eDRAM onto the SCD version of the TX2... would that change things? That would kinda cancel out the XB1's memory bandwidth advantage.

Where would it be lacking vs. an XB1 in that scenario? Surely two X2 units would boost things like the poly count and fill rate?

That's using PC numbers, with DX11.

On consoles the difference will be smaller, as it will with a different API, like DX12 or Vulkan.

The "average" flops-to-flops advantage will be a lot smaller in a console. I'd be surprised if it was more than 10-15%.

I think Pemalite is correct in saying that 2x Tegra X2 = roughly Xbox One level of performance, maybe limited by memory bandwidth and so running at a lower resolution.

 

Also, if you're going that route, your hypothetical 2x Tegra X2 system will be more expensive to make than the Xbox One.

The reason Sony and Microsoft went with AMD is that you can get everything in one chip.

Which cuts down on price, plus AMD designed their CPU and GPU blocks to fit together like Legos.

It costs them some efficiency, but it means they can easily build basically anything you want with them.

That's why they do custom SoCs: they designed the CPU and GPU to be able to function like that.

Nvidia didn't. They can't just magically make a new chip that's doubled up.

They could use two in parallel, like SLI, for the GPU part... but that usually cuts the GPU's efficiency.

Suddenly that GFLOP-to-GFLOP advantage of Nvidia's that you're talking about becomes a disadvantage.

 

And on top of all that, it would mean the NX was more expensive than the Xbox One, for less performance.

Do you want Nintendo to launch a $400 handheld hybrid?

When Sony will soon release a $249 PS4 Slim that's even more powerful? I don't think Nintendo will use two chips.

It will be one Tegra X2, and that will be it. But hopefully priced around $199-249.



JRPGfan said:
Soundwave said:

I know it's not the be-all and end-all, but wouldn't two Tegra X2's in unison constitute about 1.25 TFLOPS of performance? And if we average out that Nvidia's floating point performance is generally 30% higher, that would put the two at 1.625 TFLOPS in AMD terms.

Maybe if they added, say, 24-32 MB of high-speed eDRAM onto the SCD version of the TX2... would that change things? That would kinda cancel out the XB1's memory bandwidth advantage.

Where would it be lacking vs. an XB1 in that scenario? Surely two X2 units would boost things like the poly count and fill rate?

That's using PC numbers, with DX11.

On consoles the difference will be smaller, as it will with a different API, like DX12 or Vulkan.

The "average" flops-to-flops advantage will be a lot smaller in a console. I'd be surprised if it was more than 10-15%.

I think Pemalite is correct in saying that 2x Tegra X2 = roughly Xbox One level of performance, maybe limited by memory bandwidth and so running at a lower resolution.

Direct3D (DX) is a proprietary API from Microsoft, so neither Sony nor Nintendo will use it in their development tools. That leaves us with OpenGL/Vulkan.

Also, those theories with 2x Tegras have, in my eyes, one big flaw: they assume 100% scaling. That's something that won't happen.



Please excuse my bad English.

Currently gaming on a PC with an i5-4670k @ stock (for now), 16 GB RAM at 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

JEMC said:
JRPGfan said:

Direct3D (DX) is a proprietary API from Microsoft, so neither Sony nor Nintendo will use it in their development tools. That leaves us with OpenGL/Vulkan.

Also, those theories with 2x Tegras have, in my eyes, one big flaw: they assume 100% scaling. That's something that won't happen.

Yep, Nvidia's SLI is more like 80-85%. It's less effective than CrossFire.

I said the same in my post above (one page back). It would be a bad move by Nintendo: it would mean they're still behind the Xbox One in performance, but suddenly have to try and sell a $400 handheld hybrid.

Price was one of the things that killed the Wii U. They spent something like $80 on the GamePad and couldn't keep the console price down enough to be competitive in price/performance, which hurt its perceived value.
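The scaling point is easy to quantify. Treating the 80-85% figure as an assumed SLI efficiency, the second chip adds far less than its paper rating (the per-chip TFLOPS number is also an assumption from earlier in the thread):

```python
# Toy model of imperfect multi-GPU scaling (all numbers assumed).
single_tflops = 0.625   # assumed per-chip Tegra X2 FP32 rating
sli_efficiency = 0.85   # assumed: second GPU contributes only ~85% of itself

ideal = 2 * single_tflops                       # perfect 2x scaling
realistic = single_tflops * (1 + sli_efficiency)

print(f"ideal: {ideal:.3f} TFLOPS, with ~85% SLI scaling: {realistic:.5f} TFLOPS")
# ideal: 1.250 TFLOPS, with ~85% SLI scaling: 1.15625 TFLOPS
```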



JRPGfan said:
JEMC said:

Direct3D (DX) is a proprietary API from Microsoft, so neither Sony nor Nintendo will use it in their development tools. That leaves us with OpenGL/Vulkan.

Also, those theories with 2x Tegras have, in my eyes, one big flaw: they assume 100% scaling. That's something that won't happen.

Yep, Nvidia's SLI is more like 80-85%. It's less effective than CrossFire.

I said the same in my post above (one page back). It would be a bad move by Nintendo: it would mean they're still behind the Xbox One in performance, but suddenly have to try and sell a $400 handheld hybrid.

Price was one of the things that killed the Wii U. They spent something like $80 on the GamePad and couldn't keep the console price down enough to be competitive in price/performance, which hurt its perceived value.

I agree that price will be a key point in the success or failure of the NX.

If it's a hybrid device, it doesn't matter what Nintendo does, because most people will see it as a handheld that connects to a TV, and people aren't willing to pay 250 $/€ or more for such devices.

I'd say that Nintendo's best bet would be to sell the handheld and the "boost dock" separately... but that kind of goes against the "hybrid" selling point.




JEMC said:
dongo8 said:

I am no computer expert, but as far as I have heard and read, FLOPS are FLOPS. All of the other stuff doesn't matter, because FLOPS are literally operations performed per second, therefore a lot of the things listed would ALREADY be taken into account.

The problem with FLOPS, and the reason why Pemalite always tries to warn people not to make that mistake, is that they are only valid for one of the tasks a GPU can do, and gaming is not that task.

Let's look at an example that shows what he's talking about:

(data taken from The Tech Report)

If we only look at the TFLOPS, then the performance of those cards, from slowest to fastest, would be:

R9 380 = GTX 970 < GTX 1060 < R9 390 < GTX 1070 < RX 480

Well, let's look at the relative performance of those cards tested across 16 games:

(graph taken from The Tech Report review of the GTX 1060; that's why this card is the baseline at 100%.)

Surprise!

The GTX 970 is not only much faster than the R9 380 (its equal in FLOPS), it's almost on par with the 390 and 480. The 1060 is around 10% faster than the 480 despite having two full teraflops less, and the 1070 simply destroys them all.

That's the reason why TFLOPS are not a good way to compare performance in video consoles, even less so when they are from two different vendors and from different generations.
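One way to see the argument is to compute performance per theoretical TFLOP. The TFLOPS below are approximate public base-clock figures, and the relative-performance numbers are hypothetical stand-ins (GTX 1060 = 100) chosen to illustrate the shape of the argument, not The Tech Report's actual data:

```python
# Illustration: performance per theoretical TFLOP varies widely across vendors.
# TFLOPS are approximate public base-clock specs; relative-performance values
# are hypothetical stand-ins for illustration only (GTX 1060 = 100).
cards = {
    #  name       (approx TFLOPS, hypothetical relative perf)
    "R9 380":   (3.5, 70),
    "GTX 970":  (3.5, 90),
    "GTX 1060": (3.9, 100),
    "R9 390":   (5.1, 92),
    "GTX 1070": (5.8, 135),
    "RX 480":   (5.8, 91),
}

for name, (tflops, perf) in cards.items():
    # Equal TFLOPS clearly do not imply equal perf-per-TFLOP.
    print(f"{name:>8}: {perf / tflops:5.1f} perf points per TFLOP")
```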

TheLastStarFighter said:

Yeah, of course, I'm just speculating on the 1+1+3+3 thingy.

I'm not sure if they will do the supplemental device thing, but it does make a ton of sense if they can do it right.  It would be mostly new for consoles, but PCs use multiple GPUs all the time, and Alienware offers a supplemental GPU for their laptops now.

So, if done right, Nintendo could offer a portable system for the bulk of their customers, the "3DS audience".  They could offer a second SKU that includes the portable and the SCD docking station for home gaming.  Let's just throw out price points of $299 and $399 to start.  The "home" SKU could include the power-boosting docking station and a pro-style controller.  This console setup could appeal to those that want the home Nintendo experience, or the "Wii U" audience.

If you have both the NX and its dock with, say, an 800 GFLOPS GPU unit, that unit alone would be screaming performance for a portable.  Combined 2x, the home unit would be at acceptable levels for current consoles.

The goal here, from a company perspective, would be to sell a ton of the portable NX, get really good 3rd party support, and have extensive 1st party support focused on the single system.  Since a kid can then dock the system to their TV, up the performance, and play their favorite titles, why bother buying another home console?  This appealing setup could then lead to more third party support, which leads to more sales, which leads to the system being more in demand, and it's a continuous positive cycle.

PCs have it, but they need special drivers and they don't scale perfectly. And we're talking about a secondary GPU, because if that powered dock has another Tegra to work in tandem, that's another CPU thrown in the mix...

Also, in your example, if they make two SKUs with the NX and then dock+controller, why can't they make the dock+controller a console by itself? Why does it need the NX? It's a wasted opportunity.

Wish more people read this, as people are jumping to conclusions based on GFLOPs



Soundwave said:

I know it's not the be-all and end-all, but wouldn't two Tegra X2's in unison constitute about 1.25 TFLOPS of performance? And if we average out that Nvidia's floating point performance is generally 30% higher, that would put the two at 1.625 TFLOPS in AMD terms.

Maybe if they added, say, 24-32 MB of high-speed eDRAM onto the SCD version of the TX2... would that change things? That would kinda cancel out the XB1's memory bandwidth advantage.

Where would it be lacking vs. an XB1 in that scenario? Surely two X2 units would boost things like the poly count and fill rate?

Nope. Because there are inefficiencies added into the system by multi-GPU technologies; they don't scale linearly.
Plus the bandwidth doesn't get combined either... It's all well and good to have insane levels of performance, but if you can't squeeze enough data through the small pipes, then you aren't going anywhere.

The numbers you are looking at are only theoretical anyway.
You can't look at a couple of gaming benchmarks (which represent an entire chip) and quantify it as nVidia being 30% better at floating point (which is only a small part of a chip), when AMD can beat nVidia rather soundly in some floating point tasks. (I.e. anything using async compute is AMD's domain, i.e. anything modern and forward-looking.)

Also, you need more than just flops and bandwidth to win the graphics game; if that was all that was ever needed, then designing GPUs would be far easier. :P

The Xbox One also has more texturing power; it can handle larger and more textures at once than Tegra. Basically all the surfaces in the game with all the little details (the floor, even the sky, furniture) will look better on the Xbox One: more normal maps, everything.

There is a reason why Xbox 360 ports to the Tegra X1 still looked inferior compared to the Xbox 360, despite the Tegra X1 having almost twice the GFLOPS: it could keep up with shader effects and lighting and such, but fell behind on texturing, geometry, streaming and, in some cases, resolution, anti-aliasing and filtering.

Geometry-wise, the Xbox One would beat Tegra too. Tegra is pretty relaxed on its PolyMorph engines for good reason... So the Xbox One can have more detailed models... And, thanks to tessellation, more small "bumpy" surfaces like rocks and pebbles rather than flat surfaces without definition.

Tegra is also generally not as proficient at single and double precision floating point as Graphics Core Next, which can impact accuracy.
It also falls behind on integer and vertex work.

The eDRAM would help eat away at the Xbox One's bandwidth advantage, but I think at that point Nintendo would be better off throwing a couple of gigabytes of GDDR5X memory at the problem... And even then, it's not clear-cut who would be faster between the NX and the Xbox One.
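The bandwidth point can be sketched with a toy roofline-style model: doubling chips doubles peak compute, but each chip still only sees its own memory bus. The bytes-per-FLOP ratio and the bandwidth figure below are assumptions for illustration, not a claim about any real workload:

```python
# Toy roofline-style model: usable throughput is the lesser of peak compute
# and what the memory bus can feed. All figures are illustrative assumptions.
def effective_tflops(peak_tflops, bandwidth_gbs, bytes_per_flop=0.1):
    # Convert GB/s into the TFLOPS that bandwidth alone could sustain.
    bandwidth_cap = (bandwidth_gbs / 1000.0) / bytes_per_flop
    return min(peak_tflops, bandwidth_cap)

one_chip = effective_tflops(0.625, 59.7)  # assumed Tegra X2: ~59.7 GB/s LPDDR4
two_chips = 2 * one_chip                  # 2nd chip adds compute, not shared bandwidth

print(f"one chip: {one_chip:.3f} TFLOPS usable, two chips: {two_chips:.3f}")
# one chip: 0.597 TFLOPS usable, two chips: 1.194
```

Under these assumptions each chip is already bandwidth-bound, so adding eDRAM or faster memory (as discussed above) moves the cap, not the compute.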

Werix357 said:

Wish more people read this, as people are jumping to conclusions based on GFLOPs

I am constantly nagging people about it.

JEMC said:

Direct3D (DX) is a proprietary API from Microsoft, so neither Sony nor Nintendo will use it in their development tools. That leaves us with OpenGL/Vulkan.

Also, those theories with 2x Tegras have, in my eyes, one big flaw: they assume 100% scaling. That's something that won't happen.

That's true.
But I took into account that Pascal is generally more efficient all around than the Graphics Core Next 1.0/1.1 in the Xbox One.
It has better compression and culling, which helps.

I stand by the claim that two Tegra X2's should be roughly equivalent to the Xbox One, just forcing games to a lower 720p resolution, and that isn't based on the flops either. :P

JEMC said:

Direct3D (DX) is a proprietary API from Microsoft, so neither Sony nor Nintendo will use it in their development tools. That leaves us with OpenGL/Vulkan.


Sony and Nintendo will likely have both OpenGL and Vulkan, and possibly a third low-level API.
There are advantages to having both OpenGL and Vulkan, despite Vulkan being the technical successor to OpenGL.



--::{PC Gaming Master Race}::--

TheLastStarFighter said:
Pemalite said:

I can see costs blowing out.
nVidia isn't exactly known for being cheap.

Not only that, but nVidia's Multi-GPU technology has never been that flexible as far as I know.

It could be a mid-tier, 2 TFLOPS card or so.  It could function like the setup for cars referenced above, or like an Alienware laptop that shuts down the mobile card when connected to the SCD at home.

So how much do you think the whole setup would cost? A fast handheld (able to play home console games at reduced settings) with a good display and good battery life + a docking station + a 2 TFLOPS GPU + enough flash memory in the handheld for a few games (which won't be small)?



Conina said:
TheLastStarFighter said:

It could be a mid-tier, 2 TFLOPS card or so.  It could function like the setup for cars referenced above, or like an Alienware laptop that shuts down the mobile card when connected to the SCD at home.

So how much do you think the whole setup would cost? A fast handheld (able to play home console games at reduced settings) with a good display and good battery life + a docking station + a 2 TFLOPS GPU + enough flash memory in the handheld for a few games (which won't be small)?

A hypothetical question, but a 2-teraflop docking station for a handheld of around 0.6 teraflops?

Seems mismatched, and probably wasteful.

If that's the case, the "docking station" part would probably be around $300.

That's on top of the handheld, which is probably going to end up at $199.

That seems like a high entry price to own an NX and be able to use it as a home console.

Such a scenario would be bad for Nintendo, in my opinion, Conina.



JRPGfan said:
Conina said:

So how much do you think the whole setup would cost? A fast handheld (able to play home console games at reduced settings) with a good display and good battery life + a docking station + a 2 TFLOPS GPU + enough flash memory in the handheld for a few games (which won't be small)?

A hypothetical question, but a 2-teraflop docking station for a handheld of around 0.6 teraflops?

Seems mismatched, and probably wasteful.

If that's the case, the "docking station" part would probably be around $300.

That's on top of the handheld, which is probably going to end up at $199.

That seems like a high entry price to own an NX and be able to use it as a home console.

Such a scenario would be bad for Nintendo, in my opinion, Conina.

I'm sure Conina will answer you and point out your mistake with his post, but meanwhile I'll say something about the bolded part.

Such a dock wouldn't be 300 $/€ unless Nintendo is trying to scam us. The reason is that Nvidia's GTX 1060 can be found from board partners for $249, and that's a full card with the memory, power delivery and cooling, plus the profits of both Nvidia and the board partner. Of course, Nintendo wouldn't go for the 1060 because, at almost 4 TFLOPS, it's too much for the handheld part. But then there's the GP107, which is half the chip of that card.

So, with Nintendo going with a lower cost/performance chip and getting a better price from Nvidia (aren't they supposedly eager to enter the console market? And wouldn't they offer Nintendo a better price than their AIBs get, for a contract of maybe tens of millions of chips?), they could launch such a dock for $249 and still make a nice profit.


