What do you guys think? The new GDDR5X standard offers double the per-pin bandwidth compared to current-generation GDDR5, without any major design or electrical changes, letting GPU makers make a seamless and cost-effective transition to it.
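For the curious, here's a quick back-of-the-envelope showing what "double the bandwidth per pin" works out to. The 256-bit bus width and the 7/14 Gbps per-pin rates are illustrative assumptions on my part, not confirmed specs for any chip:

```python
# Peak memory bandwidth = per-pin data rate x bus width.
# Bus width and per-pin rates below are assumed for illustration.

def peak_bandwidth_gbs(gbps_per_pin: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s for a given per-pin rate and bus width."""
    return gbps_per_pin * bus_width_bits / 8

BUS_WIDTH = 256  # bits; typical for a mid/high-end GPU

print(f"GDDR5  @  7 Gbps/pin: {peak_bandwidth_gbs(7, BUS_WIDTH):.0f} GB/s")   # 224 GB/s
print(f"GDDR5X @ 14 Gbps/pin: {peak_bandwidth_gbs(14, BUS_WIDTH):.0f} GB/s")  # 448 GB/s
```

Same bus, double the per-pin rate, double the total bandwidth.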
Why? If it's about bandwidth, HBM makes more sense: great bandwidth, meaning very high data transfer rates, and at the same time low power consumption.
If it's about price, they'll go for a cheaper solution like GDDR5/DDR4 or slower.
We know nothing right now, so guessing about RAM without knowing how much bandwidth is actually needed is senseless.
HBM, if they want to impress gamers and developers alike. It falls in line with their energy-saving tactics, too.
| BasilZero said: Is this confirmed or just fan-based speculation o.o? |
Nothing was confirmed, all NX threads are pointless.
Nintendo goes AMD and AMD goes HBM, end of story.
I'm an official member of VGC's Nintendo family, recognized by the one and only RolStoppable. I'm honored.
Unless Nintendo plans to go with a Fiji chip or one of the top-of-the-line chips of the next Arctic Islands, I doubt they'll use HBM.
DDR3 is EoL, and while Nintendo could use it, its price will only go up and its availability down. GDDR5 or DDR4 are more likely and, of those, DDR4 is probably their best bet: it's fast, uses less power than DDR3 (and a lot less than GDDR5), it's available in large quantities, and as it gets more common in the PC space (the latest Intel chips are already on DDR4 and the next AMD chips will also bet on DDR4) its price will also go down.
Please excuse my bad English.
Former gaming PC: i5-4670k@stock (for now), 16GB RAM 1600 MHz and a GTX 1070
Current gaming PC: R5-7600, 32GB RAM 6000MT/s (CL30) and a RX 9060XT 16GB
Steam / Live / NNID: jonxiquet. Add me if you want, but I'm a single-player gamer.
Because Nintendo is always at the forefront of embracing new technology. Seriously, have you put any thought at all into this before clicking the submit button? There is so much wrong with this "speculation" that it can't even be counted as part of a language anymore.
If you demand respect or gratitude for your volunteer work, you're doing volunteering wrong.
| JEMC said: Unless Nintendo plans to go with a Fiji chip or one of the top-of-the-line chips of the next Arctic Islands, I doubt they'll use HBM. DDR3 is EoL, and while Nintendo could use it, its price will only go up and its availability down. GDDR5 or DDR4 are more likely and, of those, DDR4 is probably their best bet: it's fast, uses less power than DDR3 (and a lot less than GDDR5), it's available in large quantities, and as it gets more common in the PC space (the latest Intel chips are already on DDR4 and the next AMD chips will also bet on DDR4) its price will also go down. |
DDR4-3200 (the fastest variant currently specified by JEDEC) is only about 50% faster than DDR3-2133 (again, the fastest variant specified by JEDEC), which would barely be enough for the Xbox One. So unless it's used in conjunction with some eDRAM (expensive), DDR4 would not be enough.
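To put numbers on that 50% figure, here's the quick math. Per-channel peak is just the transfer rate times 8 bytes (one 64-bit channel); the quad-channel line mirrors the Xbox One's 256-bit DDR3 setup:

```python
# Peak bandwidth of one 64-bit DDR channel: MT/s x 8 bytes.
def channel_gbs(mts: int) -> float:
    return mts * 8 / 1000  # GB/s

ddr3 = channel_gbs(2133)  # ~17.1 GB/s per channel
ddr4 = channel_gbs(3200)  # ~25.6 GB/s per channel
print(f"DDR3-2133: {ddr3:.1f} GB/s/channel")
print(f"DDR4-3200: {ddr4:.1f} GB/s/channel ({ddr4/ddr3 - 1:.0%} faster)")

# The Xbox One runs four such DDR3-2133 channels (a 256-bit bus) for
# ~68 GB/s, and still needs 32 MB of fast eSRAM on the side.
print(f"Quad-channel DDR4-3200: {4 * ddr4:.1f} GB/s")  # ~102 GB/s
```

Even quad-channel DDR4-3200 lands just above 100 GB/s, which backs up the point that plain DDR4 would barely clear the Xbox One's setup.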
HBM2 is good on paper, but too new and thus very expensive, and its necessary interposer keeps the price pretty high, too.
GDDR5 is probably the cheapest, but also the most power-hungry of the bunch.
GDDR5X is useless unless the GPU part reaches high-end levels. Since those cards alone consume more energy than a whole console, its sole advantage over GDDR5 would be wasted.
There is a fifth solution though: HMC, the Hybrid Memory Cube. Its bandwidth is on par with GDDR5(X) but somewhat lower than HBM (160 GB/s to 320 GB/s depending on the number of lanes). Since the huge amount of bandwidth provided by HBM isn't needed, however, HMC would largely suffice. Like HBM, it's made of stacked RAM, but with a simpler setup and thus cheaper to produce. Also, unlike HBM or GDDR, it allows simultaneous access to and from the RAM (meaning writing and reading data at the same time). Its main drawback is that the technology has yet to be implemented in a consumer device. Still, if it were up to me, that's the one I would probably choose.
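If anyone wonders where that 160-320 GB/s range comes from, here's a rough sketch. HMC moves data over full-duplex serial links, so both directions count toward the aggregate; the link/lane counts and line rates below are my reading of the early HMC spec figures, so treat them as assumptions:

```python
# HMC aggregate bandwidth: links x lanes x line rate, counted in both
# directions because the links are full duplex (reads and writes can
# stream at the same time, unlike GDDR or HBM).
def hmc_aggregate_gbs(links: int, lanes_per_dir: int, gbps: float) -> float:
    return links * lanes_per_dir * gbps * 2 / 8  # GB/s, TX + RX combined

print(f"4 links, 16 lanes @ 10 Gbps: {hmc_aggregate_gbs(4, 16, 10):.0f} GB/s")  # 160
print(f"4 links, 16 lanes @ 15 Gbps: {hmc_aggregate_gbs(4, 16, 15):.0f} GB/s")  # 240
```

Faster line rates or more lanes push it toward the upper end of that range.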