Soleron said:
OK, I agree there's some room for it to happen anyway.
Since Larrabee is so new, Intel will first put it in an expensive, high-end product and then gradually move it down-market. It'll also take a few generations to catch up to AMD and Nvidia; they're not going to be perfect on the first commercial go.
So it will be a while before we see a competitive desktop Larrabee in a mobile power envelope, and probably another generation after that before we see on-die integration.
They also have to get the Fusion-style memory controller right; that's the magic sauce that will make Llano work. Just wiring them together will get integrated-graphics-level performance; you need GDDRx-like access speeds to make a real GPU work.

Another issue is drivers: none of the Intel integrated parts have had good drivers, despite years to perfect them and a stationary target. The 965s never worked, G3x and G4x still sucked, and Clarkdale has major shortcomings (though bearable). Look at the difference between the open-source and binary drivers for AMD cards on Linux to see how much drivers matter (open is 1/10 to 1/3 of binary performance, depending on the game).
Charlie D did say, with regard to consoles, that Larrabee was a dead cert provided it hit its performance and timeline milestones. A few months later he said it had missed both and was out of the running. So it comes down to how reliable he is as a source on the subject of Intel, and my opinion is still in the 'unproven' range. (With Nvidia news it's now 'credible'.)
Charlie has probably heard it from at least two people; he always verifies his sources independently before he posts an article, which is why his hit rate was so high, especially in relation to Fermi. This means his article represents the truth as his sources see it; however, it's not infallible. I'm actually tempted to ask him what it's all about.
The reason I believe it will show up in integrated form is that Intel is aiming to integrate GPGPU and GPU functionality into their core CPU die, and because the future of the desktop market is limited relative to mobile platforms. If they released an expensive add-on board, developers could safely ignore it, since it would represent a very low fraction of overall GPU market share. But if they released it as an integrated GPU first, developers would be forced to adopt it because of the sheer market share Intel's integrated GPUs command.
Remember the design experiment? Intel was able to fit a 10-core Larrabee into the space of a Core 2 Duo die. Given the specs of the Core 2 Duo Intel used (4MB L2 cache), it appears to be a 65nm Conroe/Merom-based Core 2 Duo, with a 143 mm^2 die.
At 143 mm^2, Intel could fit 10 Larrabee-like cores, so let's double that. Now we're at 286 mm^2 (still smaller than GT200 and about the size of AMD's RV770) with 20 cores. Double that once more and we've got 40 cores on a 572 mm^2 die, virtually the same size as NVIDIA's GT200 but on a 65nm process.
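The doubling arithmetic above can be sketched in a few lines (figures are the ones quoted in the post; die sizes are approximate):

```python
# Back-of-envelope scaling of the Larrabee design experiment on 65nm,
# using the approximate figures from the post above.
base_cores = 10       # cores that fit in a Core 2 Duo footprint
base_area_mm2 = 143   # ~65nm Conroe/Merom Core 2 Duo die

for doublings in range(3):
    cores = base_cores * 2**doublings
    area_mm2 = base_area_mm2 * 2**doublings
    print(f"{cores} cores -> ~{area_mm2} mm^2")
# prints:
# 10 cores -> ~143 mm^2
# 20 cores -> ~286 mm^2
# 40 cores -> ~572 mm^2
```

The 286 mm^2 point is roughly RV770-sized and the 572 mm^2 point roughly GT200-sized, which is the comparison the post is making.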
The source link is dead btw.
If a LRB core is ~15mm^2 on 45nm, they could likely fit a bank of 8 onto any single CPU die they wanted, and possibly 16 of them and still fit within a reasonable die size on their 32nm or 22nm processes. I wouldn't call it a space issue.
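To put a rough number on the shrink argument, here's a sketch assuming ideal (linear-dimension-squared) area scaling, which real process shrinks rarely achieve in full; the ~15 mm^2-per-core figure at 45nm is taken from the post:

```python
# Hypothetical per-core LRB area under ideal process scaling:
# area shrinks with the square of the feature-size ratio.
core_45nm_mm2 = 15.0  # figure quoted in the post

def scaled_area(area_mm2: float, from_nm: float, to_nm: float) -> float:
    """Ideal area after a shrink; real shrinks usually achieve less."""
    return area_mm2 * (to_nm / from_nm) ** 2

for node_nm in (32, 22):
    per_core = scaled_area(core_45nm_mm2, 45, node_nm)
    print(f"{node_nm}nm: ~{per_core:.1f} mm^2/core, "
          f"8 cores ~{8 * per_core:.0f} mm^2, "
          f"16 cores ~{16 * per_core:.0f} mm^2")
# prints:
# 32nm: ~7.6 mm^2/core, 8 cores ~61 mm^2, 16 cores ~121 mm^2
# 22nm: ~3.6 mm^2/core, 8 cores ~29 mm^2, 16 cores ~57 mm^2
```

Even with a pessimistic real-world shrink, a bank of 8 to 16 cores stays well under typical CPU die sizes, which supports the "not a space issue" point.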
As for the direct memory access issue, well, it's something they have to work out whether it's for their standard IGPs or LRB. I don't doubt it's an issue they could solve, especially as they have a patent cross-licensing agreement with AMD, so given enough time they could simply borrow AMD's implementation if they so chose.
In regard to the consoles, the extension of the deadline for new consoles has likely given them a reprieve. If they do get a design win, they already have samples of the LRB 1.0 processor and could always make more if required. The longer the generation lasts, the more powerful their position becomes, at least relative to Nvidia and IBM, as they are one of only two companies with unified solutions. If they were to throw in cheaper flash memory, it'd probably seal the deal; at this point it's a question of where their priorities are and how far they're willing to go to ensure industry adoption of the architecture.