
What exactly is Intel Larrabee?

There has been a lot of buzz around Intel Larrabee, but I still don't understand what exactly it is.

 

I have heard people say it's like the Cell. Some say it's going to replace the CPU, and that the GPU is going to do all the work of the computer.

 

I am very confused, so please explain what its functions and other features are, and what its competition is.




Basically it is a GPU built from a large number (think 32-64) of x86 cores. Each core is smaller, cheaper and less powerful than a typical Core 2 CPU, but together they can provide a great deal of computational power for parallelizable jobs. In addition there's a small amount of specialized silicon, such as texture units, if I remember correctly.
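
To make the "many small cores chewing on one parallelizable job" idea concrete, here's a toy C++ sketch I put together (my own illustration, nothing to do with actual Larrabee code or drivers) that splits per-pixel work over however many cores the machine reports:

#include <algorithm>
#include <cstdint>
#include <functional>
#include <thread>
#include <vector>

// Stand-in for per-pixel work; a real shader would do far more than this.
static void shade_rows(std::vector<std::uint32_t>& framebuffer,
                       int width, int row_begin, int row_end) {
    for (int y = row_begin; y < row_end; ++y)
        for (int x = 0; x < width; ++x)
            framebuffer[y * width + x] = static_cast<std::uint32_t>((x ^ y) & 0xFF);
}

int main() {
    const int width = 1280, height = 720;
    std::vector<std::uint32_t> framebuffer(width * height);

    // Pretend each core gets one horizontal slice of the screen.
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    for (unsigned c = 0; c < cores; ++c) {
        const int begin = static_cast<int>(height * c / cores);
        const int end   = static_cast<int>(height * (c + 1) / cores);
        workers.emplace_back(shade_rows, std::ref(framebuffer), width, begin, end);
    }
    for (auto& w : workers) w.join();
}

The point is just that this kind of work partitions trivially, so throwing more cores at it scales almost linearly.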

It inherits some ideas from the Cell architecture, in that it blurs the line between CPUs and GPUs, but nowadays most GPUs are approaching this same blend from the other side (i.e. morphing their shader units more and more into general-purpose cores). It also has a somewhat similar memory architecture, but each core is more independent and there's no PPE/SPE asymmetry.

So what is good about it? The idea is that instead of having hardware built around the requirements of an API (say DirectX or OpenGL with extensions) you have a completely neutral processing unit.

Say that in a few years someone comes up with a new idea for effects and a new shader model is proposed as a standard. With specialized silicon you would have to design a new chip that implements the new specs in hardware. On Larrabee, all you would have to do is write a new version of the drivers that uses the x86 cores to produce the desired result in pixels when the new function is called in, say, DirectX 12.

In a certain sense, it would render everything in software (as in the driver software), only on a battery of many dedicated cores.
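
As a toy illustration of "new API feature = new driver code, not new silicon" (all the names here are made up by me; real driver stacks are vastly more complex):

#include <cstdio>
#include <cstdint>
#include <functional>
#include <map>
#include <string>

// A "pixel program" is just ordinary code that the x86 cores execute.
using PixelFn = std::function<std::uint32_t(int x, int y)>;

struct ToyDriver {
    std::map<std::string, PixelFn> effects;

    // Supporting a brand-new effect is a software update, not a new chip.
    void register_effect(const std::string& name, PixelFn fn) { effects[name] = fn; }

    std::uint32_t run(const std::string& name, int x, int y) const {
        return effects.at(name)(x, y);
    }
};

int main() {
    ToyDriver driver;
    driver.register_effect("checker", [](int x, int y) -> std::uint32_t {
        return ((x / 8 + y / 8) % 2 == 0) ? 0xFFFFFFFFu : 0xFF000000u;
    });
    // A hypothetical future effect: just ship another function in the driver.
    driver.register_effect("gradient", [](int x, int y) -> std::uint32_t {
        return 0xFF000000u | (std::uint32_t(x & 0xFF) << 16) | std::uint32_t(y & 0xFF);
    });
    std::printf("gradient(100,50) = 0x%08X\n",
                static_cast<unsigned>(driver.run("gradient", 100, 50)));
}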

This flexibility also means that it could be used to render graphics in completely new ways, not just via the traditional DirectX/OpenGL pipelines built from triangles, textures and shader effects. Alternative methods such as ray tracing, voxels or hybrids could run on the Larrabee x86 cores as well; it would all be up to the software.
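
For instance, a ray tracer is just ordinary code, so it can run on general-purpose cores with no fixed-function triangle pipeline at all. A bare-bones C++ sketch of the core test (my own toy example, not Intel's):

#include <cstdio>

struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

// True if a ray (origin o, direction d) hits a sphere (center c, radius r):
// solve |o + t*d - c|^2 = r^2 for t and check the discriminant.
static bool hits_sphere(const Vec3& o, const Vec3& d, const Vec3& c, double r) {
    const Vec3 oc = sub(o, c);
    const double a = dot(d, d);
    const double b = 2.0 * dot(oc, d);
    const double k = dot(oc, oc) - r * r;
    return b * b - 4.0 * a * k >= 0.0;
}

int main() {
    const Vec3 origin{0, 0, 0}, dir{0, 0, -1}, center{0, 0, -5};
    std::printf("ray hits sphere: %s\n",
                hits_sphere(origin, dir, center, 1.0) ? "yes" : "no");
}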

Another obvious good point is scalability: newer models would ramp up the number and clock speed of the cores, but the underlying structure would not change dramatically.

A third point is that sooner or later Intel would surely end up integrating CPU and GPU on a single chip, leading to N cores used for general computation and M (>N) smaller, simpler cores for parallelizable work such as graphics. That's where the similarity with some of the Cell design's ideas kicks in.

Mind you, the project has been delayed because the prototypes ran too hot and were not (yet) up to par with the competition when it comes to "traditional" rendering as used in software today. Still, I think it's an interesting avenue to pursue, as specialized silicon on GPUs is becoming less and less important.

If you want to delve into the technicalities, look for the articles on ArsTechnica; they were quite clear.


"All you need in life is ignorance and confidence; then success is sure." - Mark Twain

"..." - Gordon Freeman

Werekitten, I thought it was cancelled? It's only been delayed. AMD/ATI's Fusion CPU/GPU combo may not have the market to itself after all.



WereKitten said:

Basically it is a GPU built from a large number (think 32-64) of x86 cores. Each core is smaller, cheaper and less powerful than a typical Core 2 CPU, but together they can provide a great deal of computational power for parallelizable jobs. [...]

Very nice explanation. 



Yes, WereKitten's explanation was good.

It remains to be seen whether the overhead of x86 is a good enough tradeoff for the extra programmability. Current GPUs are quite inflexible and difficult to program for, and have a lot of dedicated hardware that can't be repurposed, but that is what makes them efficient in terms of power and die size. GPGPU is such a niche market at the moment that Larrabee's first iteration would not be competitive right now.

What Intel would eventually do with it is integrate lots of Larrabee cores onto the die of a CPU, so you would have one chip that is both a CPU and a GPU. AMD's Fusion (Llano) is the same idea, but using their own processors and Radeon GPU 'cores'.



Darc Requiem said:
Werekitten, I thought it was cancelled? It's only been delayed. AMD/ATI's Fusion CPU/GPU combo may not have the market to itself after all.

This iteration has been cancelled. A retail one will only arrive in 2011 and beyond, and the layout and ISA will be very different. But the idea of Intel having a high-performance GPU consisting of x86 cores will remain. Llano should arrive before discrete Larrabee, and well before they integrate Larrabee into a CPU die.

The first Larrabee would not replace the CPU.

 



^Have they committed to any deadline at all? I thought they just put it on hold, waiting for the silicon/fabrication techs to gain ground so that the architecture can be realized in a somewhat competitive form.



"All you need in life is ignorance and confidence; then success is sure." - Mark Twain

"..." - Gordon Freeman

WereKitten said:
^Have they committed to any deadline at all? I thought they just put it on hold, waiting for the silicon/fabrication techs to gain ground so that the architecture can be realized in a somewhat competitive form.

No, I'm just speculating. It won't be 2010 though, otherwise they wouldn't have announced the delay. And yes, that's the right reason for the hold.



Hmm, there was talk over on NintendoEverything about Intel wanting this to be in the Wii HD.



I personally think a variant of the AMD Fusion will be powering the successor to the Wii.