GPU computing has a number of issues today; only when they're mostly resolved will it become widespread.
1. Technical - very high latency, limited flexibility and limited memory
GPUs are still designed mostly for graphics, so they are built to perform the same operation on hundreds of pixels/vertices in parallel. That works for protein folding, but GPUs are currently very inefficient at branchy code, at doing different operations on different parts of the data at once, and worse still at anything that needs regular communication with the CPU (there's a short sketch at the end of this post illustrating the branching problem).
2. No guarantee of GPU power
Most computers ship with a poorly-performing Intel IGP. If consumer software is to use GPUs widely, there must be a guarantee that the majority of the market has a graphics processor fast and flexible enough to do it.
3. Can be better served by fixed-function hardware
Intel's Sandy Bridge hardware decoder, which will be on most future CPUs, outperforms very high-end GPUs on video decoding while using a hundredth of the power. That's a major task the industry hoped GPUs would be useful for, but one that can be better served by fixed-function hardware.
4. Lock-in to a single GPU vendor
Due to Nvidia's large investment in software tools and academic projects, CUDA seems to be the only GPGPU platform in use. There's no competition, despite AMD's hardware being equally capable if companies optimised for it. A mass-market project can't be tied to a single vendor like that - at least AMD and probably Intel need to be on board with GPGPU before it's worth developing for. (The second sketch at the end shows how even trivial CUDA code is tied to Nvidia's toolchain.)
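
To make the branching problem from point 1 concrete, here's a minimal CUDA sketch (the kernel names and the 0.5f threshold are invented for illustration). In the first kernel every thread in a warp does the same work, which is what the hardware is built for; in the second, threads take data-dependent branches, and because a warp has to execute both sides of a divergent branch one after the other, much of the hardware sits idle.

    // Uniform kernel: every thread in a warp runs the same instruction,
    // which is the case GPUs are built for.
    __global__ void uniform_scale(float *data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= 2.0f;
    }

    // Divergent kernel: threads in the same warp take different branches
    // depending on the data, so the warp executes both paths one after the
    // other and throughput drops even though the work per element is similar.
    __global__ void branchy_scale(float *data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) {
            if (data[i] > 0.5f)
                data[i] = sqrtf(data[i]);
            else
                data[i] = data[i] * data[i];
        }
    }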
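
And to illustrate the lock-in from point 4: even a trivial GPGPU program is written against Nvidia-specific constructs, so it can't simply be recompiled for AMD or Intel hardware. This is only a sketch (the kernel and buffer size are made up), but the __global__ qualifier, the <<<blocks, threads>>> launch syntax and the cuda* runtime calls are all CUDA-only, so porting means rewriting both the kernel and the host code around it.

    #include <cuda_runtime.h>

    // Trivial kernel, but written in CUDA C: the __global__ qualifier and
    // the built-in blockIdx/threadIdx variables are Nvidia-specific.
    __global__ void add_one(float *data, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] += 1.0f;
    }

    int main(void) {
        const int n = 1 << 20;
        float *d = 0;
        cudaMalloc((void **)&d, n * sizeof(float));   // CUDA runtime API
        cudaMemset(d, 0, n * sizeof(float));
        add_one<<<(n + 255) / 256, 256>>>(d, n);      // CUDA launch syntax
        cudaDeviceSynchronize();
        cudaFree(d);
        return 0;
    }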







