
Rumor/Leak: Switch Hardware Specs, 1024 FLOP (Possibly 1TF) Device, Maxwell Architecture, Chat w/out Smartphone, Bluetooth Enabled...

At this point I'd rather just wait for a mad person to dismantle it and confirm the specs.



bonzobanana said:
SuperNova said:

 

At the risk of repeating myself: the manufacturer of both the 3DS and Switch cartridges announced (either in late 2015 or early 2016, I can't remember exactly) that they had gotten the manufacturing cost of 32GB carts down to the manufacturing cost of a BD.

This is probably why 32GB carts are the maximum available size as of summer 2016. They are probably working on getting 64GB down in cost as we speak, and it's not unreasonable to assume those will become available to publishers at a BD price point within the Switch's lifetime. Other than that, devs will just have to use compression. Many games these days are barely compressed and would probably fit on 32GB easily with clever compression work.

There's absolutely no way they could make a 32GB cartridge for the price of a Blu-ray disc. I've seen Blu-ray films (poor ones) designed to retail at about £1 that must cost in the region of 20p or so to make. If flash memory, or any type of random access memory, were as cheap as that, it would be a revolution in hardware prices. And why the hell is the Switch fitted with so little storage memory? The maximum size of launch Switch cartridges appears to be 8GB (64Gb) or 16GB (128Gb), but most games appear to be on much smaller cartridges.

Do you have a link for this claim that a 32GB cart equals Blu-ray manufacturing cost? And how much is a writable Blu-ray disc? They normally cost more than pressed Blu-rays and are still pretty cheap.

My guess for Blu-ray disc duplication cost would be about 30c/25p, and for a Macronix cartridge, let's say $3 for 8GB and $4.50 for 16GB. However, most Switch games seem to be on very small capacity cartridges, so probably somewhere around $1-2.

Third parties seem highly motivated to bring small games to the Switch, hence the ports of old, small games or simple ones like Bomberman. It looks like they're keeping their game sizes very small to lower their cartridge costs. There is absolutely no indication of low cartridge costs.

I looked it up for you; it was in Macronix's financial report for 2016. The gains are possible because Macronix switched from a 75nm manufacturing process (3DS carts) to 32nm (Switch carts).

This link mentions 32GB at the cost of a BD for Nintendo, presumably in bulk: https://arstechnica.com/gaming/2016/05/why-nintendo-nxs-rumored-shift-from-discs-to-cartridges-is-actually-smart/

This one is about the manufacturing process: https://mynintendonews.com/2016/05/05/rumour-nintendo-nx-to-return-to-cartridges/

There was also a whole thread about it on this very forum, but I honestly can't be arsed to find it again.

As for cost, these are ROM carts, possibly with a tiny flash portion for save files, so they are largely non-rewritable and an entirely different beast from the Switch's 32GB of internal storage, or any SD/microSD cards you might be thinking of. ROM is generally much cheaper to produce than flash.
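
Putting the guessed figures from this exchange into a quick back-of-the-envelope script (to be clear, every number here is speculation from this thread, not a confirmed cost):

```python
# Back-of-the-envelope media cost comparison.
# All figures are speculative guesses from this thread, not confirmed data.
BD_COST = 0.30                    # guessed Blu-ray duplication cost, USD
CART_COSTS = {8: 3.00, 16: 4.50}  # guessed Macronix cart cost by capacity (GB)

for size_gb, cost in CART_COSTS.items():
    premium = cost - BD_COST
    print(f"{size_gb}GB cart: ${cost:.2f} "
          f"(${cost / size_gb:.3f}/GB, ${premium:.2f} premium over a BD)")
```

Even under these pessimistic guesses, the per-unit premium over a disc is a couple of dollars, not an order of magnitude.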



So, is this good? Bad?
Can it run games? Or only light apps?



Proud to be the first cool Nintendo fan ever

Number ONE Zelda fan in the Universe

DKCTF didn't move consoles

Prediction: No Zelda HD for Wii U, quietly moved to the successor

Predictions for Nintendo NX and Mobile


JRPGfan said:
dahuman said:
1TF FP32 would be interesting, but I wouldn't hold my breath.

The Xbox One draws something like 80 watts or so to reach its ~1.31 teraflops.

The Switch is more like 20 watts.

It's just not possible.

I mean, Maxwell and later GPUs are more efficient and x86 CPUs are power hogs, but no, even accounting for that I don't see it either; that's why I said I wouldn't hold my breath, lol.
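
To put rough numbers on that argument (the power draws are the ballpark figures from this exchange; the Xbox One GPU's ~1.31 TF FP32 is its known spec):

```python
# Rough perf-per-watt sanity check using the ballpark numbers above.
XBO_TFLOPS, XBO_WATTS = 1.31, 80   # Xbox One: ~1.31 TF FP32 at roughly 80 W
SWITCH_WATTS = 20                  # rumoured Switch power budget

xbo_eff = XBO_TFLOPS * 1000 / XBO_WATTS   # ~16.4 GFLOPS per watt
needed = 1000 / SWITCH_WATTS              # 50 GFLOPS/W needed for 1 TF FP32

print(f"Xbox One: ~{xbo_eff:.1f} GFLOPS/W")
print(f"Switch would need ~{needed:.0f} GFLOPS/W to hit 1 TF FP32")
print(f"That is ~{needed / xbo_eff:.1f}x the Xbox One's efficiency")
```

A ~3x perf-per-watt jump from a newer architecture isn't impossible on paper, but it shows why 1 TF FP32 at handheld wattage is a stretch.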



Seems like an early spec sheet: the clocks are TBD, and there are odd constraints, such as the SD card, Bluetooth (*an older version than retail), and USB being disabled. At least we now know which CPU and GPU it uses. It's also interesting that there are no A53s, which makes sense.

*From what I understand, the difference between BT 4.0 and 4.1 is just a firmware update.



Ljink96 said:
JRPGfan said:

Switch: 1024 Gflops FP16 (512 Gflops FP32).

PlayStation 4: 1,840 Gflops FP32 (aka 1.84 teraflops).

The thing is, the document just says 1024 FLOPS, so I can't really assume anything at this point. I kinda doubt the Switch would be 0.5TF; we've heard 0.75 from Digital Foundry. The document is confusing: it doesn't specify whether that's megaflops or gigaflops, or whether it's FP16 or FP32, but you are most likely right.

It could be 1024 Gflops FP16, i.e. 1TF FP16 (512 Gflops FP32), or it could be 1024 Gflops FP32. But do hardware manufacturers default to FP16 when noting specs?

I think the idea is to optimize software for FP16 so that you really do have 1TF worth of compute power.

If you look at the spec sheets nVidia provides for Tegra, they list both FP16 and FP32 flops. We're used to seeing FP16 / 2 = FP32 flops, but that isn't always the case. The Tegra K1 only had ~350 gflops in either FP16 or FP32 mode, because the Kepler architecture couldn't leverage FP16 at all; only starting with the Tegra X1 did they do anything to take advantage of FP16.

People have been reeeaaally quick to write off FP16, but seldom give you valid, scientific reasons why.
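
For anyone who wants to poke at what FP16 actually gives up, here's a minimal numpy sketch; numpy's float16 is the same IEEE half-precision format these GPUs implement:

```python
import numpy as np

# IEEE half precision (float16) next to single precision (float32).
for dtype in (np.float16, np.float32):
    info = np.finfo(dtype)
    print(f"{dtype.__name__}: max={info.max:.3g}, eps={info.eps:.3g}, "
          f"mantissa bits={info.nmant}")

# Rounding shows up quickly at half precision:
print(np.float16(1000.1))  # -> 1000.0, the nearest representable half float
print(np.float32(1000.1))  # -> 1000.1
```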



I predict NX launches in 2017 - not 2016

fleischr said:
Ljink96 said:

The thing is, the document just says 1024 FLOPS, so I can't really assume anything at this point. I kinda doubt the Switch would be 0.5TF; we've heard 0.75 from Digital Foundry. The document is confusing: it doesn't specify whether that's megaflops or gigaflops, or whether it's FP16 or FP32, but you are most likely right.

It could be 1024 Gflops FP16, i.e. 1TF FP16 (512 Gflops FP32), or it could be 1024 Gflops FP32. But do hardware manufacturers default to FP16 when noting specs?

I think the idea is to optimize software for FP16 so that you really do have 1TF worth of compute power.

If you look at the spec sheets nVidia provides for Tegra, they list both FP16 and FP32 flops. We're used to seeing FP16 / 2 = FP32 flops, but that isn't always the case. The Tegra K1 only had ~350 gflops in either FP16 or FP32 mode, because the Kepler architecture couldn't leverage FP16 at all; only starting with the Tegra X1 did they do anything to take advantage of FP16.

People have been reeeaaally quick to write off FP16, but seldom give you valid, scientific reasons why.

Not an expert on the subject, but here are a few things to consider:

"To get an idea of what a difference in precision 16 bits can make, FP16 can represent 1024 values for each power of 2 between 2-14 and 215 (its exponent range). That’s 30,720 values. Contrast this to FP32, which can represent about 8 million values for each power of 2 between 2-126 and 2127. That’s about 2 billion values—a big difference."

https://devblogs.nvidia.com/parallelforall/mixed-precision-programming-cuda-8/ 
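
(The arithmetic in that quote checks out; here's a tiny script to verify it, using FP16's 10 mantissa bits and FP32's 23:)

```python
# Check the counts in the quote above: a binary float format has
# 2**mantissa_bits distinct values within each power of two.
def normal_values(mantissa_bits, min_exp, max_exp):
    binades = max_exp - min_exp + 1   # e.g. -14..15 -> 30 binades
    return binades * 2 ** mantissa_bits

print(normal_values(10, -14, 15))     # FP16: 30720
print(normal_values(23, -126, 127))   # FP32: 2130706432, ~2 billion
```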

 

Current performance king in gaming GPUs, the Titan X:

FP32: 10,157 GFLOPS

FP16: 159 GFLOPS

This is of course nVidia's way of preventing gaming cards from being bought instead of Teslas for tasks that actually benefit from FP16... but it also shows how little FP16 performance matters in games.
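
Worth spelling out what those two figures imply:

```python
# The Titan X numbers above imply a deliberately limited FP16 rate:
fp32_gflops, fp16_gflops = 10157, 159
print(f"FP16 throughput is ~1/{fp32_gflops / fp16_gflops:.0f} of FP32")
# -> ~1/64; contrast Tegra X1, where FP16 runs at 2x the FP32 rate
```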

Honestly, I'm not an expert on the subject, but the last time I recall any talk about FP16 in gaming was some 15 or so years ago. Sure, mobile GPUs have it, but the degradation in quality seems to be quite noticeable.



fleischr said:
Ljink96 said:

The thing is, the document just says 1024 FLOPS, so I can't really assume anything at this point. I kinda doubt the Switch would be 0.5TF; we've heard 0.75 from Digital Foundry. The document is confusing: it doesn't specify whether that's megaflops or gigaflops, or whether it's FP16 or FP32, but you are most likely right.

It could be 1024 Gflops FP16, i.e. 1TF FP16 (512 Gflops FP32), or it could be 1024 Gflops FP32. But do hardware manufacturers default to FP16 when noting specs?

I think the idea is to optimize software for FP16 so that you really do have 1TF worth of compute power.

If you look at the spec sheets nVidia provides for Tegra, they list both FP16 and FP32 flops. We're used to seeing FP16 / 2 = FP32 flops, but that isn't always the case. The Tegra K1 only had ~350 gflops in either FP16 or FP32 mode, because the Kepler architecture couldn't leverage FP16 at all; only starting with the Tegra X1 did they do anything to take advantage of FP16.

People have been reeeaaally quick to write off FP16, but seldom give you valid, scientific reasons why.

The difference in quality is much less noticeable at a distance.
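
A related numerical sketch of why the loss can be hard to spot on screen, at least for shading math: for colour values in [0, 1], FP16's step size is already finer than an 8-bit-per-channel display can reproduce (this assumes numpy's IEEE float16):

```python
import numpy as np

# For colour/shading values around 1.0, FP16's step size is finer than
# what an 8-bit-per-channel display can reproduce anyway.
fp16_step = float(np.spacing(np.float16(1.0)))  # ~0.000977 (2**-10)
display_step = 1 / 255                          # ~0.00392
print(f"fp16 step near 1.0: {fp16_step:.6f}")
print(f"8-bit display step: {display_step:.6f}")
print(f"the display is ~{display_step / fp16_step:.0f}x coarser than fp16")
```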



“Simple minds have always confused great honesty with great rudeness.” - Sherlock Holmes, Elementary (2013).

"Did you guys expected some actual rational fact-based reasoning? ...you should already know I'm all about BS and fraudulence." - FunFan, VGchartz (2016)

HoloDust said:
fleischr said:

I think the idea is to optimize software for FP16 so that you really do have 1TF worth of compute power.

If you look at the spec sheets nVidia provides for Tegra, they list both FP16 and FP32 flops. We're used to seeing FP16 / 2 = FP32 flops, but that isn't always the case. The Tegra K1 only had ~350 gflops in either FP16 or FP32 mode, because the Kepler architecture couldn't leverage FP16 at all; only starting with the Tegra X1 did they do anything to take advantage of FP16.

People have been reeeaaally quick to write off FP16, but seldom give you valid, scientific reasons why.

Not an expert on the subject, but here are a few things to consider:

"To get an idea of what a difference in precision 16 bits can make, FP16 can represent 1024 values for each power of 2 between 2-14 and 215 (its exponent range). That’s 30,720 values. Contrast this to FP32, which can represent about 8 million values for each power of 2 between 2-126 and 2127. That’s about 2 billion values—a big difference."

https://devblogs.nvidia.com/parallelforall/mixed-precision-programming-cuda-8/ 

 

Current performance king in gaming GPUs, the Titan X:

FP32: 10,157 GFLOPS

FP16: 159 GFLOPS

This is of course nVidia's way of preventing gaming cards from being bought instead of Teslas for tasks that actually benefit from FP16... but it also shows how little FP16 performance matters in games.

Honestly, I'm not an expert on the subject, but the last time I recall any talk about FP16 in gaming was some 15 or so years ago. Sure, mobile GPUs have it, but the degradation in quality seems to be quite noticeable.

Can you give an example? FunFan just gave one that seems to prove otherwise.



I predict NX launches in 2017 - not 2016

Pemalite said:
curl-6 said:

Yeah, I was wondering what the go was there. With so many PS2 games (including GT4 itself) failing to hit even true 480p, 1080i seemed a bridge too far.

For years people have kept bringing up GT4's supposed 1080i resolution to make the PS2 look better compared to the GameCube and Wii, but it always felt fishy.

Because people don't make the distinction between rendered resolution and output resolution.

Like back in the days when people thought 720p/sub-HD PS360 games were 1920x1080 because the back of the box said they "supported" 1080p.

Hell, several of my friends were taken aback recently when I informed them that the PS4 Pro doesn't play all, or even most, games in true 4K. One of them heard a misleading ad in EB Games claiming it would "play all your favourite games in stunning 4K" and took its word for it. (Which, to be fair, wasn't entirely his fault; that's very dishonest marketing.)
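
For reference, the raw pixel counts behind those labels (straight arithmetic, nothing rumoured):

```python
# Raw pixel counts behind the marketing labels.
modes = {"720p": (1280, 720), "1080p": (1920, 1080), "4K": (3840, 2160)}
uhd_pixels = 3840 * 2160

for name, (w, h) in modes.items():
    px = w * h
    print(f"{name:>5}: {px:>9,} pixels ({px / uhd_pixels:.0%} of 4K)")
# A 1080p render upscaled to a 4K signal still shades only 25% of the pixels.
```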