
Pemalite said: 

Bofferbrauer2 said:
Wouldn't help at all. The approach was good - for the server chips! An 8-core Ryzen actually isn't much smaller than a native Intel 8-core chip, and the difference in size mainly comes from the Intel 8-cores having much more L3 cache (the Core i7 5960X/i7 6900K have 20MB, the 7820X clocks in at 11MB, compared to the 8MB of Ryzen), since they derive from server dies. In servers, where Intel comes with a 28-core behemoth, having 4x8 cores instead is a massive manufacturing advantage, but at the size of desktop/console chips it doesn't make a difference at all. I mean, AMD could theoretically come with 4 dual-cores, each having 512kB L2 and 2MB L3 cache, but I very much doubt they will, because it wouldn't help at all in production: what is won at such a small scale is lost again through the redundancies that need to be built into each chip (which makes 4x2 cores bigger than a native 8-core).

You are mistaken. It would help.
You need to remember that wafers have a certain rate of defects: the more space a chip takes up on the wafer, the greater the chance it contains a fault, which decreases yields and thus increases costs.
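To put rough numbers on that, here's a minimal sketch using the textbook Poisson yield model, Y = exp(-A*D0); the defect density below is an assumed, illustrative figure, not a real fab number.

```python
import math

def poisson_yield(die_area_mm2: float, defect_density_per_cm2: float) -> float:
    """Fraction of dies expected to be defect-free under a simple Poisson yield model."""
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-area_cm2 * defect_density_per_cm2)

# Illustrative defect density only - real numbers are process- and fab-specific.
D0 = 0.2  # assumed defects per cm^2
for area_mm2 in (100, 200, 400):
    print(f"{area_mm2} mm^2 die -> ~{poisson_yield(area_mm2, D0):.0%} defect-free")
```

With the same (made-up) defect density, going from a 100 mm^2 die to a 400 mm^2 die drops the clean-die rate from roughly 82% to roughly 45%, which is the whole cost argument in a nutshell.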

Now companies tend to get around this to a degree by building chips to be larger than they need to be. For example, the PlayStation 4's chip actually contains more CUs than Sony uses; because such a high number of chips had defects, Sony disabled a chunk of the chip to increase yields.

Now the reason why AMD didn't take a similar approach for chips with ~8 cores and under is simple: the chips were already relatively small and had good yields, so it was mostly unnecessary - plus they were die-harvesting parts for chips with smaller core counts.
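A rough sketch of why building in spare units and disabling the bad ones helps so much, assuming each unit fails independently with some small probability (the 3% per-CU figure is made up for illustration; the 18-of-20 split is what's reported for the PS4's GPU):

```python
import math

def usable_fraction(n_units: int, p_unit_bad: float, max_bad: int) -> float:
    """Probability that at most `max_bad` of `n_units` independent units are defective."""
    return sum(
        math.comb(n_units, k) * p_unit_bad**k * (1 - p_unit_bad)**(n_units - k)
        for k in range(max_bad + 1)
    )

p = 0.03  # assumed chance that any single CU has a defect
print(f"Need all 20 CUs working:    ~{usable_fraction(20, p, 0):.0%} of dies usable")
print(f"Ship with 18 of 20 enabled: ~{usable_fraction(20, p, 2):.0%} of dies usable")
```

With these toy numbers the usable fraction jumps from roughly half the dies to nearly all of them, just by tolerating two bad CUs.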

But that can only take you so far.

If you were to start making a monolithic next-gen SoC with an 8-core Ryzen complex, a beefier memory controller and a big array of CUs for the GPU, then you start running into the same issue as Threadripper: the chip is going to be stupidly massive, you are only going to get a few chips per wafer, and costs will skyrocket.
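To illustrate the "few chips per wafer" point, here's a toy dies-per-wafer calculation combining the usual gross-die approximation with the same simple Poisson yield term as above; the 500 mm^2 and 130 mm^2 die sizes and the defect density are assumptions, not figures for any real chip.

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Rough gross dies per wafer (ignores scribe lines and reticle limits)."""
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

def good_dies_per_wafer(die_area_mm2: float, defect_density_per_cm2: float) -> float:
    """Gross dies scaled by a simple Poisson defect-free yield."""
    yield_frac = math.exp(-(die_area_mm2 / 100.0) * defect_density_per_cm2)
    return dies_per_wafer(die_area_mm2) * yield_frac

D0 = 0.2  # assumed defects per cm^2
print(f"One 500 mm^2 monolithic SoC: ~{good_dies_per_wafer(500, D0):.0f} good dies per wafer")
print(f"4 x 130 mm^2 chiplets:       ~{good_dies_per_wafer(130, D0) / 4:.0f} complete 4-die sets per wafer")
```

Even though the four small dies add up to slightly more silicon than the one big die, the toy numbers come out well over 2x in favour of the split, simply because a defect only kills a small die instead of a huge one.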

Don't worry, I took it into account. That's where I said that the advantages are being eaten up by its disadvantages: at such a small scale (in square mm), there's not much to gain by splitting the chip up. The only thing I could see coming (and actually expect) would be that it won't be an APU - not due to size, but for cooling reasons.

The Jaguar cores don't consume much power, but Ryzen, as efficient as it is, will consume more than the Jaguar cores unless the clock speed is kept very low. Add to this that the GPU part will probably also consume around 150-200W (more than that is very hard to cool in a small case like a console and prone to breakdowns - RRoD, anyone?) and the result would be a monstrous APU that is very hard to keep cool, especially when the weather is hot.
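A back-of-the-envelope power budget makes the cooling concern concrete; every wattage below is an assumption for illustration, not a spec or a leak.

```python
def soc_power(cpu_w: float, gpu_w: float, memory_io_w: float = 25.0) -> float:
    """Rough total power from the three biggest consumers in a console SoC."""
    return cpu_w + gpu_w + memory_io_w

# Assumed figures: a Jaguar-class cluster vs. Zen at desktop-ish clocks vs. Zen clocked low,
# each paired with a GPU in the 150-200W range mentioned above.
print(f"8 Jaguar-class cores (~25W) + 150W GPU: {soc_power(25, 150):.0f} W total")
print(f"8 Zen cores, desktop clocks (~65W) + 180W GPU: {soc_power(65, 180):.0f} W total")
print(f"8 Zen cores, low clocks (~40W) + 150W GPU: {soc_power(40, 150):.0f} W total")
```

Under these assumptions, only the low-clocked Zen configuration keeps the total anywhere near what a small console enclosure can realistically dissipate.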

Bofferbrauer2 said:

Fabs switched because DRAM was too cheap for too long, which drove the manufacturers to either change production or face bankruptcy. Some were big enough to do both, and those were the ones who stayed. At the time they made the switch, NAND flash was still very new and expensive, so for those who couldn't live off their DRAM anymore it was a pretty easy way out, as they didn't need to change much in terms of machinery. When the market then exploded, there were simply not enough manufacturers left in that domain, and having the biggest DRAM fab in Southeast Asia totally flooded (as in submerged under several meters of rainwater) just when demand started to rise again didn't help matters either. There is no artificial demand inflation, just the fact that demand was much lower a couple of years ago - too low to feed all the producers at the time.

If NAND really does get oversupplied, then we will have a reversal of the pre-2015 situation, where DRAM was oversupplied and NAND in short supply. This might incite the same movement as back then, just from NAND to DRAM this time around.

I have to disagree.
Hence the DRAM price fixing debacle.
https://en.wikipedia.org/wiki/DRAM_price_fixing

Right now manufacturers are playing the supply game, switching between NAND and DRAM to maximize profits.
They also try to ramp up production of a particular technology when something like the next iPhone or Samsung flagship drops, which has a flow-on effect on other markets.

You are aware that that happened over 15 years ago?

When prices were so low in the early 2010s, it was almost impossible for those companies to survive; the market was oversaturated. Let's look at the list of manufacturers that got caught up in the DRAM price fixing and what they are doing now, shall we?

- Infineon? Left the market entirely and mainly produces microcontrollers and power-controller chips nowadays.
- Elpida? Went bankrupt in 2012 because they were solely producing DRAM chips, and those had almost no margins back then. Acquired by Micron, the next one on our list.
- Micron? Makes DRAM, NAND and NOR flash memory. This broad lineup is why they survived. Most of their production is still DRAM, and their most modern fabs exclusively produce DRAM.
- Hynix? Got auctioned off for 3 billion in 2010 and was saved when SK Group bought a large part of the company, hence why it's called SK Hynix now.

The other ones who still do DRAM today are Samsung and Toshiba, two big corporations, and SanDisk, who survived mainly because of the HDD market.

tl;dr: DRAM alone wasn't enough to survive the 2009-2015 era, when prices were so low that there was hardly any margin left.

However, checking the semiconductor fab lists of some companies, it doesn't seem like DRAM production is going to be expanded: SK Hynix is building 3 fabs, but all for NAND flash. TSMC is building 3 new fabs, but I have no idea what will be produced there in the end; it could be anything with them. Samsung at least is building a fab for both DRAM and V-NAND. So it's quite possible they are not willing to let the prices drop again.