
Basically it is a GPU built from a great number (think 32-64) of x86 cores. Each core is smaller, cheaper, and less performant than a typical Core 2 CPU, but together they can deliver a lot of computational power for parallelizable jobs. In addition there's a small amount of specialized silicon, such as texture units, if I remember correctly.

It inherits some ideas from the Cell architecture, in that it sort of blurs the line between CPUs and GPUs, but nowadays most GPUs are approaching this same blend from the other side (i.e. morphing their shader units more and more into general-purpose cores). It also has a somewhat similar memory architecture, but each core is more independent and there's no PPE/SPE asymmetry.

So what is good about it? The idea is that instead of having hardware built around the requirements of an API (say, DirectX or OpenGL with extensions), you have a completely API-neutral processing unit.

Say that in a few years someone comes up with a new idea for effects and a new Shader Model 4.0 is proposed as a standard. With specialized silicon you would have to design a new chip that implements the new spec in hardware. On Larrabee, all you would have to do is write a new version of the drivers that uses the x86 cores to produce the desired result in pixels when the new function is called in, say, DirectX 12.

In a certain sense, it would render everything in software (as in the driver software), only spread across a battery of dedicated cores.
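To make that concrete, here is a rough sketch in plain C++ (std::thread on an ordinary multi-core PC standing in for the Larrabee cores; fancy_new_shader and run_shader_on_all_cores are made-up names for illustration, not any real driver API): a hypothetical new shading function is just an ordinary routine, and the "driver" fans it out over however many cores are available.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <thread>
#include <vector>

// Hypothetical "new effect" from a future shader model: just an ordinary
// per-pixel function the driver ships as software, no new silicon needed.
static std::uint32_t fancy_new_shader(std::uint32_t pixel) {
    std::uint32_t r = (pixel >> 16) & 0xFF;
    std::uint32_t g = (pixel >> 8) & 0xFF;
    std::uint32_t b = pixel & 0xFF;
    std::uint32_t y = (3 * r + 4 * g + b) / 8;   // stand-in effect: grayscale
    return (y << 16) | (y << 8) | y;
}

// The "driver" side: slice the framebuffer and run the shader on every core.
void run_shader_on_all_cores(std::vector<std::uint32_t>& framebuffer) {
    unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    std::size_t slice = framebuffer.size() / cores + 1;
    std::vector<std::thread> workers;
    for (unsigned c = 0; c < cores; ++c) {
        std::size_t begin = c * slice;
        std::size_t end = std::min(begin + slice, framebuffer.size());
        workers.emplace_back([&framebuffer, begin, end] {
            for (std::size_t i = begin; i < end; ++i)
                framebuffer[i] = fancy_new_shader(framebuffer[i]);
        });
    }
    for (auto& w : workers) w.join();
}
```

Nothing in that sketch is tied to a particular API or core count; supporting a new DirectX feature would just mean shipping a new function like fancy_new_shader in the driver.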

This flexibility also means that it could be used to render graphics in completely new ways, not just according to the traditional DirectX/OpenGL pipeline of triangles, textures, and shader effects. Alternative methods such as ray tracing, voxels, or hybrid approaches could run on the Larrabee x86 cores just as well; it would all be up to the software.
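To give a flavour of that, here is a toy sketch of the other approach (again plain C++, nothing Larrabee-specific, and simplified to the bone): the inner loop of a ray tracer is just a ray/sphere intersection test per pixel, exactly the kind of ordinary code those cores could run instead of a triangle rasterizer.

```cpp
#include <cstdio>

struct Vec3 { double x, y, z; };

static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Does a ray from 'origin' along 'dir' hit a sphere at 'center' of radius r?
// This per-pixel test is the heart of a ray tracer: plain arithmetic,
// no fixed-function rasterization hardware involved.
static bool hits_sphere(Vec3 origin, Vec3 dir, Vec3 center, double r) {
    Vec3 oc = { origin.x - center.x, origin.y - center.y, origin.z - center.z };
    double a = dot(dir, dir);
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - r * r;
    return b * b - 4.0 * a * c >= 0.0;   // real roots => the ray hits the sphere
}

int main() {
    const int width = 60, height = 30;
    Vec3 sphere = { 0.0, 0.0, -3.0 };    // one sphere in front of the camera
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            // One primary ray per "pixel", shot through a pinhole camera plane at z = -1.
            Vec3 dir = { (x - width / 2) / double(width),
                         (y - height / 2) / double(height),
                         -1.0 };
            std::putchar(hits_sphere({0.0, 0.0, 0.0}, dir, sphere, 1.0) ? '#' : '.');
        }
        std::putchar('\n');
    }
    return 0;
}
```

A real renderer would obviously be far more involved, but the point stands: it's all just code running on the cores.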

Another obvious good point is scalability: newer models would ramp up the number and clock speed of the cores, but the underlying structure would not change dramatically.

A third point is that sooner or later Intel would surely end up integrating the CPU and GPU on a single chip, with N cores used for general computation and M (> N) smaller, simpler cores for parallelizable work such as graphics. That's where the similarity with some ideas of the Cell design kicks in.

Mind you, the project has been delayed because the prototypes ran too hot and were not (yet) up to par with the competition when it comes to "traditional" rendering as used in today's software. Still, I think it's an interesting direction to pursue, as specialized silicon on GPUs is becoming less and less important.

If you want to delve into the technicalities, look for the articles on Ars Technica; they were quite clear.


"All you need in life is ignorance and confidence; then success is sure." - Mark Twain

"..." - Gordon Freeman