
I flew too close to the 1TF sun and my Switch wings melted.

Soundwave said:
mutantsushi said:

Obviously what they did do was not viable with any significant audience.
Whether Wii U might have been more successful by dumping tablet and selling at lower price is open question.
Or using the same budget to allow stronger CPU/GPU/memory at the same cost to consumer.

It probably would've sold more, but it wouldn't have magically sold like 100 million or even 40-50 million again. 

The Wiimote was bundled with the Wii U in Japan with Wii Party and Wii Fit (don't tell me this SKU was aimed at hardcore gamers) and it still sold like crap. 

Kinect was bundled with the Xbox, and that damaged the Xbox rather than helping it. By around 2011 I think people had kinda just gotten tired of waggling their arms around... it gets boring and "same ol', same ol'" after the 50,000th time, and every system had it with the same exact types of games.

Right, and that's why I wrote "not viable with any significant audience/more successful", not "not an insane blow-out success".
And likewise why I didn't suggest a Wiimote pack-in as a route to success. That was a flash-in-the-pan fad that was over and no longer unique.



SonytendoAmiibo said:
Einsam_Delphin said:

Only two? Well here's twenty words!

That's just a bunch of old games bundled together, not Nintendo's next system that developers will be making games for.

 

You counted those words, didn't you? Don't lie, you made sure there were exactly twenty words. The mini NES is Nintendo rereleasing old tech for modern cash, and it's this holiday's system. I rest my case. Next please.

Of course I did, as did you. Now you can rest all you want, but that won't make a collection of 30 games that are 20+ years old even remotely comparable to original game systems.



SonytendoAmiibo said:
That's the real bummer. If third parties didn't want to do the extra work to support the Wii U's GamePad, they sure as heck won't make two versions of games so that docked mode runs better. Which unfortunately means most if not all third-party games will just be made to work at the portable spec. Bummer.

Not really an apt comparison IMHO. Utilizing the full power of the GPU is getting the most out of the game, whatever it is.
Not using the 2nd screen just means not choosing to base gameplay around a schtick which may or may not improve gameplay.
The fact is that it didn't offer a greater experience for very many games; faulting devs for this is absurd.
If the 2nd screen was so great, why do small second-screen peripherals for PC simply not exist?
Any functionality it can offer can be replicated by modal screens toggled with a gyro or a button, versus refocusing your head/eyes on a 2nd screen.

And furthermore, NOT utilizing it boosts the system power available for core gameplay, again showing the fault of the comparison.



jonathanalis said:
I flew into 600 Gflops docked.
I'm still OK.

 

Thank you so much for staying on topic.

   

Hey! They got SONY on my amiibo! Wait a minute. Two great gaming tastes that game great together!

Switch FC: SW-0398-8858-1969

mutantsushi said:
SonytendoAmiibo said:
That's the real bummer. If third parties didn't want to do the extra work to support the Wii U's GamePad, they sure as heck won't make two versions of games so that docked mode runs better. Which unfortunately means most if not all third-party games will just be made to work at the portable spec. Bummer.

Not really an apt comparison IMHO. Utilizing the full power of the GPU is getting the most out of the game, whatever it is.
Not using the 2nd screen just means not choosing to base gameplay around a schtick which may or may not improve gameplay.
The fact is that it didn't offer a greater experience for very many games; faulting devs for this is absurd.
If the 2nd screen was so great, why do small second-screen peripherals for PC simply not exist?
Any functionality it can offer can be replicated by modal screens toggled with a gyro or a button, versus refocusing your head/eyes on a 2nd screen.

And furthermore, NOT utilizing it boosts the system power available for core gameplay, again showing the fault of the comparison.

 

I'm not placing fault on third-party developers; I'm saying many of them didn't want to deal with the extra work.

   

Hey! They got SONY on my amiibo! Wait a minute. Two great gaming tastes that game great together!

Switch FC: SW-0398-8858-1969

curl-6 said:

Honestly, I feel like underclocking the Switch to the degree that Nintendo did was a mistake, if only for the negative PR that will result.

Gamers today do care about specs, and headlines all across the internet declaring that Switch is weaker than Tegra X1 even when docked and only 30% as strong undocked will significantly hurt the system's image.

I know "preorder cancelled" has become a widespread joke, but there are plenty of people for whom this news will mean they don't buy one.

Well, what else was Nintendo going to do with the given constraints?

They went with Nvidia for higher performance per watt, they made the handheld form factor bigger than ever before, and they tried to balance performance against thermal output so that they could hopefully have a good MTBF rating...

Would you want a similar scenario happening to the Switch, like how Microsoft struggled last generation with the Red Ring of Death?

There's a reason why Nintendo couldn't clock the Switch as high as the Tegra X1 when there's no active cooling...



Why is everyone quoting 176 Gflops (0.176Tflops) for the Wii U? 

The Latte GPU found in the Wii U is widely believed to be based on the RV770, with 320 stream processors (ALUs) clocked at 550 MHz. The AMD HD 4000/5000/6000 graphics series can perform 2 arithmetic logic unit operations per clock cycle. 320 ALUs x 550 MHz x 2 ops/cycle = 352 Gflops, or 0.352 Tflops. This was covered in detail a long time ago:

http://www.eurogamer.net/articles/df-hardware-wii-u-graphics-power-finally-revealed
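
As a sanity check on where the 176 vs. 352 Gflops figures come from (and on the docked/portable numbers being thrown around in this thread), here is a minimal sketch of the usual shaders x clock x 2 ops/cycle arithmetic. The Switch clocks used below (768 MHz docked, 307.2 MHz portable) are the rumored figures only, not confirmed specs:

# Peak single-precision throughput = shader count x clock x 2 ops/cycle (FMA).
# The Switch clocks are the rumored 768 MHz docked / 307.2 MHz portable figures
# circulating in this thread -- assumptions, not confirmed specs.

def gflops(shaders, clock_mhz, ops_per_cycle=2):
    return shaders * clock_mhz * ops_per_cycle / 1000.0

configs = {
    "Wii U Latte, 320 ALUs @ 550 MHz": (320, 550),
    "Wii U Latte, 160 ALUs @ 550 MHz (source of the 176 figure)": (160, 550),
    "Switch docked, 256 CUDA cores @ 768 MHz (rumored)": (256, 768),
    "Switch portable, 256 CUDA cores @ 307.2 MHz (rumored)": (256, 307.2),
}

for name, (shaders, clock) in configs.items():
    print(f"{name}: ~{gflops(shaders, clock):.0f} Gflops")

# Prints roughly 352, 176, 393 and 157 Gflops respectively, which is where the
# ~400 docked / ~150 portable numbers quoted in this thread come from.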

Was there new information that proves the Wii U's GPU only has 160 ALUs?! The issue with the Wii U's GPU is that it uses a VLIW-5 architecture, which predates the scalar GCN and Maxwell/Pascal architectures. The problem with VLIW-4/5 was that game engines had to be coded specifically to schedule all the wavefronts for maximum utilization of the GPU. This proved too costly and time-consuming, and it's why 3rd party multiplats were doomed from day 1 on the Wii U. Since GCN has 2 asynchronous compute engines in the Xbox One and 8 (!) in the PS4, the driver and dynamic scheduler of the XB1/PS4 were more capable of handling the unpredictable game code of next-gen games. Since it became too costly to make separate game engine optimizations for the Wii U's VLIW-5, Latte's stream processors remained idle for most of that console's life. An indication of just how difficult it became to optimize for the HD 4000-6000 VLIW architectures is that AMD stopped driver support for all of those lines not long after the GCN-based HD 7000 series launched in 2012. This is also why the load power usage of the Wii U in 3rd party games was low: the GPU was underutilized. That is why the Wii U's 0.352 Tflops would rarely translate into practice. Therefore, comparing the Wii U's Tflops to the Tegra X1's is completely misrepresentative.

It's not correct to compare aggregate graphical horsepower between completely different GPU architectures using Tflops, since that assumes both architectures are 100% ALU-limited. The only reason the Tflops comparison works for Xbox One vs. PS4 is that the HD 7790 (XB1) and HD 7850/7870 (PS4) are the same GCN 1.0 architecture, and even then the comparison only lines up by coincidence. IIRC, the XB1 has 16 ROPs, 48 TMUs and 768 stream processors against the PS4's 32 ROPs, 72 TMUs and 1152 stream processors. The massive raster output unit and texture mapping unit advantage of the Pitcairn GPU in the PS4 is what allows it to run games at 1080p when the XB1 is forced to drop down to 720-900p. Conversely, an RX 480 is 50-60% faster in modern games than the HD 7970 GHz, but the RX 480's Tflops advantage is only 36% [2304 SPs x 1266 MHz / (2048 SPs x 1050 MHz)].

The moral of the story is that you often cannot even directly compare two AMD Graphics Core Next cards based on Tflops, so how can you compare NV vs. AMD? You cannot!

Here are more examples:

Flat-out comparisons of the 1.58 Tflops Fermi GTX 580 to the 3.2 Tflops Kepler GTX 680, which is only 35-40% faster in games, long ago proved that comparing different GPU architectures using only arithmetic logic unit throughput is 100% flawed. This can be easily illustrated by the fact that the GTX 1080, with nearly 9 Tflops of power, is only 21-23% faster than the 6.5 Tflops GTX 1070.
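
Putting the paper Tflops gaps next to the in-game gaps quoted above makes the mismatch obvious. A rough sketch, using the approximate in-game percentages cited in this post rather than fresh benchmarks:

# Paper Tflops advantage vs. the in-game advantage quoted in this post.
# The in-game percentages are the rough figures cited above, not new measurements.

pairs = [
    # (matchup, faster card Tflops, slower card Tflops, quoted in-game advantage)
    ("RX 480 vs HD 7970 GHz", 2304 * 1.266 * 2 / 1000, 2048 * 1.050 * 2 / 1000, "50-60%"),
    ("GTX 680 vs GTX 580",    3.2,  1.58, "35-40%"),
    ("GTX 1080 vs GTX 1070",  8.9,  6.5,  "21-23%"),
]

for name, fast_tf, slow_tf, in_game in pairs:
    paper = (fast_tf / slow_tf - 1) * 100
    print(f"{name}: paper advantage ~{paper:.0f}%, quoted in-game advantage {in_game}")

# The paper gaps come out to roughly 36%, 103% and 37%, none of which line up
# with the quoted real-world deltas -- which is the whole point.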

I am not making any excuses for the supposed 256 CUDA core, 16 TMU, 16 ROP, 25.6 GB/sec Switch specs, but a lot of you are missing the forest for the trees: graphical capability is NOT necessarily only arithmetic-bound. Besides shaders/ALUs/stream processors, there are rasterization units, geometry units, ROPs, TMUs, delta color compression, L2 cache, static vs. dynamic compute schedulers, asynchronous compute, access to lower-level APIs such as Vulkan, etc.

NV and AMD architectures absolutely CANNOT be compared accurately strictly from a Tflops perspective. The ~9 Tflops GTX 1080 is almost 50% faster in games than the 8.6 Tflops Fury X. Again, I am in no way, shape or form defending the Switch's specs, but simply pointing out that direct comparisons of the PS4/Xbone as XYZ times more powerful than the Switch absolutely cannot be drawn solely from the Tflops figures. It's even trickier since GCN has a dynamic compute scheduler and 2-8 Asynchronous Compute Engines (think about why Uncharted 4 is so good-looking), but the Tegra X1's Maxwell architecture has neither of these features. [Maxwell has a static scheduler, and its async compute is only active in the CUDA framework/ecosystem -- it is disabled in games on all consumer Maxwell videocards.]

The biggest obstacles to 3rd party support will be the small initial install base and the launch timing. Even if the Switch were 5X more powerful, many of the games scheduled to launch in 2017-2018 started their design with no intention of ever showing up on the Switch, regardless of its hardware capabilities. Developers and publishers have limited human capital and financial resources, and cannot easily add a dedicated separate team just for the Switch when we don't even know if the console will top 5-10M unit sales in 2017.



mutantsushi said:

Obviously what they did do was not viable with any significant audience.
Whether Wii U might have been more successful by dumping tablet and selling at lower price is open question.
Or using the same budget to allow stronger CPU/GPU/memory at the same cost to consumer.

I think it does have bearing on the Switch, in revealing the consequences of cheaping out on the performance strategy.
That did not mean they could later drop their production costs enough to maximize competitiveness in the S/M market.
Too large a percentage of the production cost was tied up in components less amenable to production gains.
The Switch also commits to the obsolete planar (2D) 20nm process when future fab improvements (10nm, 7nm) will be 3D FinFET.
Rather than getting on board the same fab-node bandwagon as everybody else, even if aiming for a smaller APU at a lower clock,
Nintendo uses a last-gen fab node, which makes their design less amenable to future cost reductions from fab improvements.
It is aiming far behind the curve, and thus will always miss the largest share of the cost improvements which the mainstream gains.

But cheap as f***.

Nvidia has tons of these 3-year-old chips they made lying around that they can't sell.

Hey Nintendo, want some cheap mobile chips? Of course Nintendo says yes, and just downclocks these power-hungry chips so they can get the battery life they want.

Unless Nintendo f***ed up the hardware design costs again (hello, $80 BOM Wii U tablet), this should allow the Switch to be priced really low.

The Switch isn't a bad system if it's just cheap enough.

 

If Nintendo launch the Switch at $299 they are some greedy bastards, and deserve the poor sales it'll cause.

If they launch it at $199 they're going to be off to a great start. Handheld users will be willing to get it early/day one at that price too.



Mowco said:
onionberry said:
Why is it that people are so fucking whiny towards Nintendo, and always talk about the same power bullshit and not about the new option that Nintendo is trying to offer, the future gameplay design and games? Omg, but let's be happy because now I have more teraflops on my PlayStation and I can play shit games at 45 fps and upscaled 4K.

 

You seemed pretty bothered when you said this just a few days ago: "And btw the switch is going to be as powerful or really close to the xbox, that's what people were expecting because of the form factor." You also said you were "100% sure" it would be equivalent to an XBO in power. Why make such predictions if you don't care about power? http://gamrconnect.vgchartz.com/post.php?id=8199405

 

onionberry said:
Mowco said:

 

It's not 1.5x behind the XBO. Docked: 400 Gflops. XBO: 1.3 Tflops. It's a little less than a third. I don't think it's about people having a "boner for power"; they're not after the MOST powerful system, otherwise they'd never have been interested in the Switch even prior to this news. I was on the fence when I thought it was going to be a 600 Gflop system and expected 400 Gflops undocked. I didn't expect 400 docked and 150 undocked. It's not about wanting a powerhouse such as the 1080. It's about wanting a device that is powerful ENOUGH, that is somewhat competent. How long will this device last? Are we going to be gaming on a 400 Gflop "home console" in 2020, alongside the PS5 and the next Xbox? Such a pitiful device doesn't deserve a free pass because it's Nintendo. If this was an Xbox/PlayStation you would mock the shit out of it.

If this was Xbox or PlayStation, they would have a big VCR case or a big triple cheeseburger always plugged into a power source to get that power, and then yeah, I would mock the shit out of it. But we are talking about a device thinner than a DualShock that can produce better performance than a Wii U when undocked, and much better performance when it's docked, for $250. Now give me a better deal on a device like that and then I will understand the rage.

I blame Miguel_Zorrro for mistakes, mispredictions and miscalculations; by locking your epic thread he gimped your butt powah.



Stwike him, Centuwion. Stwike him vewy wuffly! (Pontius Pilate, "Life of Brian")
A fart without stink is like a sky without stars.
TGS, Third Grade Shooter: brand new genre invented by Kevin Butler exclusively for Natal WiiToo Kinect. PEW! PEW-PEW-PEW! 
 


BlueFalcon said:

Why is everyone quoting 176 Gflops (0.176Tflops) for the Wii U? 

The Latte GPU found in the Wii U is widely believed to be based on the RV770, with 320 stream processors (ALUs) clocked at 550 MHz. The AMD HD 4000/5000/6000 graphics series can perform 2 arithmetic logic unit operations per clock cycle. 320 ALUs x 550 MHz x 2 ops/cycle = 352 Gflops, or 0.352 Tflops. This was covered in detail a long time ago:

http://www.eurogamer.net/articles/df-hardware-wii-u-graphics-power-finally-revealed

Was there new information that proves the Wii U's GPU only has 160 ALUs?! The issue with the Wii U's GPU is that it uses a VLIW-5 architecture, which predates the scalar GCN and Maxwell/Pascal architectures. The problem with VLIW-4/5 was that game engines had to be coded specifically to schedule all the wavefronts for maximum utilization of the GPU. This proved too costly and time-consuming, and it's why 3rd party multiplats were doomed from day 1 on the Wii U. Since GCN has 2 asynchronous compute engines in the Xbox One and 8 (!) in the PS4, the driver and dynamic scheduler of the XB1/PS4 were more capable of handling the unpredictable game code of next-gen games. Since it became too costly to make separate game engine optimizations for the Wii U's VLIW-5, Latte's stream processors remained idle for most of that console's life. An indication of just how difficult it became to optimize for the HD 4000-6000 VLIW architectures is that AMD stopped driver support for all of those lines not long after the GCN-based HD 7000 series launched in 2012. This is also why the load power usage of the Wii U in 3rd party games was low: the GPU was underutilized. That is why the Wii U's 0.352 Tflops would rarely translate into practice. Therefore, comparing the Wii U's Tflops to the Tegra X1's is completely misrepresentative.

It's not correct to compare aggregate graphical horsepower between completely different GPU architectures using Tflops, since that assumes both architectures are 100% ALU-limited. The only reason the Tflops comparison works for Xbox One vs. PS4 is that the HD 7790 (XB1) and HD 7850/7870 (PS4) are the same GCN 1.0 architecture, and even then the comparison only lines up by coincidence. IIRC, the XB1 has 16 ROPs, 48 TMUs and 768 stream processors against the PS4's 32 ROPs, 72 TMUs and 1152 stream processors. The massive raster output unit and texture mapping unit advantage of the Pitcairn GPU in the PS4 is what allows it to run games at 1080p when the XB1 is forced to drop down to 720-900p. Conversely, an RX 480 is 50-60% faster in modern games than the HD 7970 GHz, but the RX 480's Tflops advantage is only 36% [2304 SPs x 1266 MHz / (2048 SPs x 1050 MHz)].

The moral of the story is that you often cannot even directly compare two AMD Graphics Core Next cards based on Tflops, so how can you compare NV vs. AMD? You cannot!

Here are more examples:

Flat-out comparisons of the 1.58 Tflops Fermi GTX 580 to the 3.2 Tflops Kepler GTX 680, which is only 35-40% faster in games, long ago proved that comparing different GPU architectures using only arithmetic logic unit throughput is 100% flawed. This can be easily illustrated by the fact that the GTX 1080, with nearly 9 Tflops of power, is only 21-23% faster than the 6.5 Tflops GTX 1070.

I am not making any excuses for the supposed 256 CUDA core, 16 TMU, 16 ROP, 25.6 GB/sec Switch specs, but a lot of you are missing the forest for the trees: graphical capability is NOT necessarily only arithmetic-bound. Besides shaders/ALUs/stream processors, there are rasterization units, geometry units, ROPs, TMUs, delta color compression, L2 cache, static vs. dynamic compute schedulers, asynchronous compute, access to lower-level APIs such as Vulkan, etc.

NV and AMD architectures absolutely CANNOT be compared accurately strictly from a Tflops perspective. The ~9 Tflops GTX 1080 is almost 50% faster in games than the 8.6 Tflops Fury X. Again, I am in no way, shape or form defending the Switch's specs, but simply pointing out that direct comparisons of the PS4/Xbone as XYZ times more powerful than the Switch absolutely cannot be drawn solely from the Tflops figures. It's even trickier since GCN has a dynamic compute scheduler and 2-8 Asynchronous Compute Engines (think about why Uncharted 4 is so good-looking), but the Tegra X1's Maxwell architecture has neither of these features. [Maxwell has a static scheduler, and its async compute is only active in the CUDA framework/ecosystem -- it is disabled in games on all consumer Maxwell videocards.]

The biggest obstacles to 3rd party support will be the small initial install base and the launch timing. Even if the Switch were 5X more powerful, many of the games scheduled to launch in 2017-2018 started their design with no intention of ever showing up on the Switch, regardless of its hardware capabilities. Developers and publishers have limited human capital and financial resources, and cannot easily add a dedicated separate team just for the Switch when we don't even know if the console will top 5-10M unit sales in 2017.

So what you're trying to tell us non-technophile Neanderthals is that using teraflops as a measurement of system power simply doesn't work, because there are just too many variables involved. The days of comparing 8-bit vs. 16-bit have been over for decades, and there is no easy way to make comparisons anymore. Maybe someone needs to come up with an easy-to-understand grading system for these chips so they are easier to compare. Thanks for the info.



   

Hey! They got SONY on my amiibo! Wait a minute. Two great gaming tastes that game great together!

Switch FC: SW-0398-8858-1969