
GPGPU Explained: How Your GPU is Taking Over CPU Tasks

Soleron said:

 

Intel's Sandy Bridge hardware decoder, which will be on most future CPUs, outperforms very high-end GPUs on video decoding while using a hundredth of the power. That's a major task the industry hoped GPUs would be useful for, but it turns out to be better served by fixed hardware.

How can a CPU be that much faster than a GPU at decoding video?



Slimebeast said:
Soleron said:

 

Intel's Sandy Bridge hardware decoder, which will be on most future CPUs, outperforms very high-end GPUs on video decoding while using a hundredth of the power. That's a major task the industry hoped GPUs would be useful for, but it turns out to be better served by fixed hardware.

How can a CPU be that much faster than a GPU at decoding video?

Because it isn't using the general CPU hardware; it's using a tiny fixed-function piece of hardware (a hundredth of the die size or so) whose sole job is to decode.



Soleron said:
...

Because it isn't using the general CPU hardware; it's using a tiny fixed-function piece of hardware (a hundredth of the die size or so) whose sole job is to decode.

Wow I had no idea such a thing existed. I wouldn't have imagined you could have such an advantage with a specialized mini-chip (chip-within-a-chip or whatever it is).



Slimebeast said:
...

Wow I had no idea such a thing existed. I wouldn't have imagined you could have such an advantage with a specialized mini-chip (chip-within-a-chip or whatever it is).

Usually it isn't economical to design and manufacture a custom processor for one specific application that you have. Because, sure, it'll be 10x or 100x faster, smaller, or more efficient than an off-the-shelf part, but the million dollars you'd need for R&D is too much.

It makes sense for embedded devices, phones and games consoles, because they're high volume and using less power is crucial. So you see a lot of custom hardware in those segments.

The hardware also has its own special instruction set; you can't use normal x86 instructions to control it, so developers will need to target it specifically to use it. That also means it doesn't need all the instruction-decoding hardware a general-purpose processor needs, and the I/O and memory access it relies on is already on the CPU die anyway. And so on, until the actual decode block doesn't need to be very complex.
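
From the programmer's side, "targeting it specifically" ends up looking roughly like this. A minimal sketch using FFmpeg's hardware-decode hookup (names from FFmpeg's current public API; the H.264/VA-API choice, the function name and the layout are just for illustration, and error handling is mostly left out):

#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>

/* Open an H.264 decoder backed by the fixed-function hardware block. */
int open_hw_decoder(AVCodecContext **out_ctx)
{
    const AVCodec *dec = avcodec_find_decoder(AV_CODEC_ID_H264);
    AVCodecContext *ctx = avcodec_alloc_context3(dec);

    /* Ask the driver for a hardware decode device (VA-API here). The bitstream
       work then runs on the fixed-function block, not on the CPU cores and not
       on the GPU's shader units. */
    AVBufferRef *hw_dev = NULL;
    if (av_hwdevice_ctx_create(&hw_dev, AV_HWDEVICE_TYPE_VAAPI, NULL, NULL, 0) < 0)
        return -1;                        /* no usable hardware decoder */
    ctx->hw_device_ctx = av_buffer_ref(hw_dev);
    av_buffer_unref(&hw_dev);             /* the context keeps its own reference */

    if (avcodec_open2(ctx, dec, NULL) < 0)
        return -1;

    *out_ctx = ctx;
    return 0;
}

If that device-creation call fails, the player simply falls back to decoding on the CPU cores as before.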



Won't the latency problem that GPUs have in processing this kind of data be reduced when new chips that combine the CPU and GPU, like Fusion, come out?

They won't be on the PCIe bus and will have direct access to the CPU and RAM. OK, that main memory won't be as fast, but does it affect these kinds of calculations?

It would be great if you could, for example, have the Fusion GPU compute physics and let the other GPU in the PCIe slot do the graphics.
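
Roughly what I mean, as a CUDA-style sketch (made-up buffer and function names, kernel left out): on a discrete card every step pays for two trips across the PCIe bus.

#include <cuda_runtime.h>

/* One simulation step on a discrete GPU: the data crosses PCIe twice. */
void physics_step(const float *host_in, float *host_out, size_t n)
{
    float *dev_buf = NULL;
    cudaMalloc((void **)&dev_buf, n * sizeof(float));

    cudaMemcpy(dev_buf, host_in, n * sizeof(float), cudaMemcpyHostToDevice);   /* CPU -> GPU over PCIe */
    /* ... launch the physics kernel on dev_buf here ... */
    cudaMemcpy(host_out, dev_buf, n * sizeof(float), cudaMemcpyDeviceToHost);  /* GPU -> CPU over PCIe */

    cudaFree(dev_buf);
}

An on-die GPU that shares the memory controller could skip most of that copying, which is where the latency win would come from, even if the shared main memory is slower than dedicated video memory.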




Nothing new. I've been using my GPU for video encoding for years now. There are video filters out there designed to use your GPU for cleaning up images and maintaining quality while the CPU encodes the frames.

 

It's good to see this growing into more and more uses.
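
The kind of per-pixel work those filters hand off to the GPU looks roughly like this. A minimal CUDA sketch of a made-up 3x3 box blur on a grayscale frame, not any particular filter's real code:

/* Each GPU thread cleans up one pixel of the frame. */
__global__ void box_blur(const unsigned char *in, unsigned char *out,
                         int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height)
        return;

    int sum = 0, count = 0;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            int nx = x + dx, ny = y + dy;
            if (nx >= 0 && nx < width && ny >= 0 && ny < height) {
                sum += in[ny * width + nx];
                ++count;
            }
        }
    out[y * width + x] = (unsigned char)(sum / count);
}

Launched with one thread per pixel (say 16x16 blocks and a matching grid), the whole frame is filtered in one pass while the CPU cores stay free to run the encoder.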



 

 

Slimebeast said:
...

Wow I had no idea such a thing existed. I wouldn't have imagined you could have such an advantage with a specialized mini-chip (chip-within-a-chip or whatever it is).


You've seen the same thing in PC expansion cards over the years. When CPU power was at a premium, external sound cards had hardware acceleration to offload processor usage. Custom processors are quite common in the history of the PC: traditionally, functions that sat on separate expansion cards moved to the motherboard, then to either the northbridge or southbridge, and now they're being moved onto the CPU die. Of course, sometimes a hardware-accelerated function becomes a negligible performance hit for the CPU, so the specialised processing chips get dropped altogether.