
GPGPU Explained: How Your GPU is Taking Over CPU Tasks

Most computers have two main processors. The CPU (Central Processing Unit) does the bulk of the general number crunching, while the GPU (Graphics Processing Unit) does all the calculations necessary to put complex images on the screen. Traditionally, the GPU has been used only for graphical processing, determining the placement of pixels, vertices, lines, and other geometric constructs in 2D and 3D spaces. Thanks to the evolution of GPUs, they're increasingly finding work outside of pixel and polygon-crunching.  

When GPUs are used for non-graphical processing, it's known as GPGPU, or general-purpose computing on graphics processing units. Here's how it works.
 
CPUs are primarily serial processors. While they might have multiple cores and threads, they still function by performing many calculations very rapidly, one after the other. GPUs, on the other hand, have become increasingly parallel processors. While an Intel or AMD CPU might have 2, 4, or 8 cores, an Nvidia GeForce or ATI Radeon GPU can have hundreds, all working at the same time. The individual cores on a GPU perform relatively simple functions, but since they all work together simultaneously, they can perform some impressive mathematical feats.  
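To make that contrast concrete, here's a minimal CUDA sketch of the data-parallel model the article is describing: the same trivial operation applied to a million array elements, one lightweight GPU thread per element, where a CPU would walk the array in a single loop. (The kernel, array size, and scale factor are arbitrary illustrative choices, not anything from the article.)

    // One GPU thread per array element -- hundreds of cores update elements
    // simultaneously, where a CPU would process them one after the other.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void scale(float *data, float factor, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's element
        if (i < n) data[i] *= factor;                   // the same simple operation everywhere
    }

    int main() {
        const int n = 1 << 20;                          // one million elements
        float *d_data;
        cudaMalloc(&d_data, n * sizeof(float));
        cudaMemset(d_data, 0, n * sizeof(float));

        // Launch enough 256-thread blocks to cover the whole array.
        scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);
        cudaDeviceSynchronize();

        cudaFree(d_data);
        printf("launched %d parallel element updates\n", n);
        return 0;
    }

The hardware schedules those threads across all of the GPU's cores at once, which is why the same chip that shades pixels can also churn through large, uniform scientific workloads.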
 
The sheer processing power found in GPUs has not gone unnoticed by researchers, who have turned to GPGPU to produce a wide array of models and simulations that serial-processing CPUs either can't handle or handle far less efficiently. While video cards were once used solely for getting graphics onto your computer screen, now they're computational multitools for scientists. Their parallel architecture makes them ideal for handling hundreds of similar computations at once, exactly the sort of work scientific simulations require. However, they're also well suited to brute-force security work. Parallel processors can test thousands of passwords or keys per second, which is why they've become an ideal engine for password crackers and decryption tools.
  
GPGPU actually presents a significant security problem beyond simple brute-force hacking, according to Intel. Last year, the company made public its concerns that as GPUs become more powerful and find more uses, they will become susceptible to computer viruses. While GPUs were previously only able to crunch numbers for graphics, they're gaining capabilities like Flash acceleration and even C++ processing, opening the chips up to potential exploits.
 
If you want to take advantage of GPGPU at home, GPUradar features a catalog of software that takes advantage of your GPU's parallel processing power. The site notes that GPGPU benefits far more than obscure scientific modeling programs. Several video converters, codecs, and players benefit from GPUs, as do numerous cryptography and security tools. Even every statistics student's best friend, Mathematica, can use parallel-processing GPUs to crunch numbers as of version 7 of the software.
 

Nvidia is actively promoting the use of its GPU architecture for general processing. The company calls its parallel processing architecture CUDA, and while it's a fundamental part of its GeForce GPUs, it's also marketed as a valuable tool for researchers. According to Nvidia, its CUDA architecture has been used by SeismicCity to interpret seismic data to find potential oil wells, the University of Illinois at Urbana-Champaign to model the satellite tobacco mosaic virus, Techniscan Medical Systems to process ultrasound data, and General Mills to simulate how cheese melts.  
 
GPGPU.org keeps track of advances in and implementations of GPGPU technology. The site has cataloged GPU use in research on biology, physics, mathematics, and even computer security. Dozens of research papers have been written on how GPGPU can improve modeling, including vastly speeding up searches for protein substructures, modeling the movement of particles under the influence of gravity, and rapidly scanning computers for viruses and other security breaches.

http://www.tested.com/news/gpgpu-explained-how-your-gpu-is-taking-over-cpu-tasks/1017/



@TheVoxelman on twitter

Check out my hype threads: Cyberpunk, and The Witcher 3!


I've been aware of this for some time and have been trying to convince my supervisor to get some GPUs in to do our molecular dynamics simulations for protein binding and energy minima. He did say he'd look into CUDA, mind.

Would be nice for me if more software used GPUs as my GPU (GeForce 8800) is disproportionately faster than my CPU (3800 X2).



GPU computing has a number of issues today. Only when they're mostly resolved will it become widespread.

1. Technical - Very high latency and limited flexibility and memory

GPUs are still mostly designed for graphics, so they are designed to perform the same operation on hundreds of pixels/vertices in parallel. This works for protein folding, but when you want more branchy code, or to do different operations on different parts of the data at once, or worse if you need regular communication with the CPU, GPUs are very inefficient at present (see the sketch after this list).

2. No guarantee of GPU power

Most computers ship with a poorly-performing Intel IGP. If consumer software is to use GPUs widely, there must be a guarantee that the majority of the market has a fast and flexible enough graphics processor to do it.

3. Can be better served by fixed-function hardware

Intel's Sandy Bridge hardware decoder, which will be on most future CPUs, outperforms very high-end GPUs on video decoding while using a hundredth of the power. That's a major task the industry hoped GPUs would be useful for that could better be served by fixed hardware.

4. Lock-in to a single GPU vendor

Due to large investment in software tools and academic projects, Nvidia's CUDA seems to be the only GPGPU platform in use. There's no competition, despite AMD's hardware being equally capable if companies optimised for it. A mass market project can't be tied to a single vendor like that - at least AMD and probably Intel need to be on board with GPGPU before it's worth developing for.
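As a rough illustration of point 1, here's a hypothetical CUDA sketch (my example, not code from this thread) of the two patterns described there: a data-dependent branch that sends neighbouring threads down different paths, and a loop that shuttles data between CPU and GPU on every iteration instead of keeping it resident on the card.

    #include <cuda_runtime.h>

    __global__ void branchy(float *x, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        // Adjacent threads diverge: within one warp, both branches get executed serially.
        if (i % 2 == 0) x[i] = x[i] * x[i];
        else            x[i] = x[i] + 1.0f;
    }

    int main() {
        const int n = 1 << 16;
        float *h = new float[n]();      // host buffer, zero-initialised
        float *d;
        cudaMalloc(&d, n * sizeof(float));

        // Anti-pattern: round-tripping over the bus every step instead of
        // keeping the data on the GPU for the whole computation.
        for (int step = 0; step < 100; ++step) {
            cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);
            branchy<<<(n + 255) / 256, 256>>>(d, n);
            cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);
        }

        cudaFree(d);
        delete[] h;
        return 0;
    }

Divergent branches inside a 32-thread warp execute both sides one after the other, and every cudaMemcpy pays bus latency, so code written this way can easily end up slower on the GPU than on the CPU.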



Soleron said:

GPU computing has a number of issues today. Only when they're mostly resolved will it become widespread.

1. Technical - Very high latency and limited flexibility and memory

GPUs are still mostly designed for graphics, so they are designed to perform the same operation on hundreds of pixels/vertexes in parallel. This works for protein folding, but when you want more branchy code, or to do different operations on different parts of the data at once, or worse if you need regular communication with the CPU, GPUs are very inefficient at present.

2. No guarantee of GPU power

Most computers ship with a poorly-performing Intel IGP. If consumer software is to use GPUs widely, there must be a guarantee that the majority of the market has a fast and flexible enough graphics processor to do it.

3. Can be better served by fixed-function hardware

Intel's Sandy Bridge hardware decoder, which will be on most future CPUs, outperforms very high-end GPUs on video decoding while using a hundredth of the power. That's a major task the industry hoped GPUs would be useful for that could better be served by fixed hardware.

4. Lock-in to a single GPU vendor

Due to large investment in software tools and academic projects, Nvidia's CUDA seems to be the only GPGPU platform in use. There's no competition, despite AMD's hardware being equally capable if companies optimised for it. A mass market project can't be tied to a single vendor like that - at least AMD and probably Intel need to be on board with GPGPU before it's worth developing for.

I agree with what you're saying.

Just wondering, regarding point 4, if you think OpenCL would help to move things along?



Scoobes said:
...

I agree with what you're saying.

Just wondering, regarding point 4, if you think OpenCL would help to move things along?

It would if CUDA didn't have such a hold on the minds of developers. Nvidia has spent a LOT of money funding universities and software companies to make it happen. I'm not saying that's wrong, but it won't become mainstream until OpenCL or DirectCompute are the standard tools and AMD makes its GPGPU software as accessible. Look at PhysX. It's a good idea, but it hasn't taken off because of the vendor-specific tie, first to Ageia and then to Nvidia. A lot more games would use it if AMD had support for it.

The other problem is that Fermi is vastly inferior to Evergreen in perf/watt [and perf/watt is everything for HPC GPGPU], and even more so against Northern Islands. The Fermi chip isn't yielding in manufacturable quantities. So even if you do go for Nvidia, it's still a fragile situation because their roadmap is less than good.



Soleron said:

The Fermi chip isn't yielding in manufacturable quantities.

You've been saying that for as long as I can remember. It may have been true for the first few months of production, but no one will have problems buying a Fermi card from different manufacturers at this point.

So I think one could say that they are manufacturable enough to meet demand...



disolitude said:
Soleron said:
 
 

The Fermi chip isn't yielding in manufacturable quantities.

You've been saying that for as long as I can remember. It may have been true for the first few months of production, but no one will have problems buying a Fermi card from different manufacturers at this point.

So I think one could say that they are manufacturable enough to meet demand...

Oh, they're manufacturing them anyway. Just that no company which cares about being profitable would do it. GF100 yields remain in the 20% region, so you can't make money on them. Hence the loss last quarter and huge inventory pileup which means greater losses are coming. Intel or AMD would not attempt to sell a chip yielding that badly.

Notice that they have yet to release a full-shader (512) version. What graphics chip in history has had yields poor enough so they've had to do that? Even the super-expensive halo Tesla and Quadro parts only have 448 SPs at most, whereas if they had thousands of 512-shader candidates they would certainly release them there.

GTX 4xx demand is very low as well (evidence: how much the prices have dropped while 5xxx prices haven't really), so it masks the fact they're producing minimally. Look how fast the GTX 460 dropped from $200 to $170, and Digitimes is saying further cuts are coming. The GTS 450 was rebated from $130 to $100 even before its official release, even though the chip performs like the $140 5770 and is over 50% bigger in die size than Juniper.

I will say that they are good price/performance for the consumer, and that GF104/6/8 are yielding good enough to sell, but Nvidia can't make money on the high-end unless something changes there.

The 6xxx series is going to kill even their price/perf advantage. I'll bet heavily that the GF100 parts will be cancelled, because selling a GTX 470 at $200 is not something Nvidia can even do.



Soleron said:
 

Oh, they're manufacturing them anyway. Just that no company which cares about being profitable would do it. GF100 yields remain in the 20% region, so you can't make money on them. Hence the loss last quarter and huge inventory pileup which means greater losses are coming. Intel or AMD would not attempt to sell a chip yielding that badly.

Notice that they have yet to release a full-shader (512) version. What graphics chip in history has had yields poor enough so they've had to do that? Even the super-expensive halo Tesla and Quadro parts only have 448 SPs at most, whereas if they had thousands of 512-shader candidates they would certainly release them there.

GTX 4xx demand is very low as well (evidence: how much the prices have dropped while 5xxx prices haven't really), so it masks the fact they're producing minimally. Look how fast the GTX 460 dropped from $200 to $170, and Digitimes is saying further cuts are coming. The GTS 450 was rebated from $130 to $100 even before its official release, even though the chip performs like the $140 5770 and is over 50% bigger in die size than Juniper.

I will say that they are good price/performance for the consumer, and that GF104/6/8 are yielding good enough to sell, but Nvidia can't make money on the high-end unless something changes there.

The 6xxx series is going to kill even their price/perf advantage. I'll bet heavily that the GF100 parts will be cancelled, because selling a GTX 470 at $200 is not something Nvidia can even do.

I disagree with your whole price/performance argument... and it's because Nvidia offers things ATI doesn't: 3D Vision, PhysX, CUDA, etc. They can get away with charging a little more, just like Lexus charges more than Toyota yet has similar horsepower.

As far as demand... I am also not sure about that. I mean, yeah, the GTX 470 and 480 are not selling like hot cakes as they are expensive high-end cards. And the GTX 465 was a dud. But the GTX 460 and GTS 450 have now changed the demand wars. I can tell by the 15 new posts every day on Nvidia forums asking "will my GTX 460 run 3D Vision?" that these cards are moving...

I am not arguing that ATI isn't in a better position when it comes to market share, or even card profitability. But Nvidia is doing things right to get market share back from regular consumers...price cuts, rebates, aggressive game bundles with cards.

I do think Nvidia have the hardcore PC gamer locked, with things like amazing SLI scalability, 3D Vision, 3D Surround and PhysX. Not to mention that the 480 is still the single most powerful GPU, which is what a lot of users want (SLI has its drawbacks)... The 495 is just around the corner to compete with the 5970 for the most powerful card money can buy.



disolitude said:
Soleron said:
 

I disagree with your whole price/performance argument... and it's because Nvidia offers things ATI doesn't: 3D Vision, PhysX, CUDA, etc. They can get away with charging a little more, just like Lexus charges more than Toyota yet has similar horsepower.

You read me wrong. I think that NVIDIA HAS BETTER PRICE/PERFORMANCE.

As far as demand... I am also not sure about that. I mean, yeah, the GTX 470 and 480 are not selling like hot cakes as they are expensive high-end cards. And the GTX 465 was a dud. But the GTX 460 and GTS 450 have now changed the demand wars. I can tell by the 15 new posts every day on Nvidia forums asking "will my GTX 460 run 3D Vision?" that these cards are moving...

Let's look at the Steam survey results coming out early October. Closest we'll get to real numbers. I bet the GTX 460 and 450 won't even register against the advances of 57xx and 58xx in the same time. And the average GF104 is $200 while the average Cypress is about $280, considering all SKUs, and GF104 is bigger, so Nvidia must be making less money per chip.

I am not arguing that ATI isn't in a better position when it comes to market share, or even card profitability. But Nvidia is doing things right to get market share back from regular consumers...price cuts, rebates, aggressive game bundles with cards.

None of which are helping the bottom line. It's great for consumers but unsustainable for Nvidia. I still think they're being forced into these rebates and price cuts by lack of demand. If Nvidia could sell their GTX 460s at $250 they would. If Nvidia could have kept the GTX 480 above $450 they would have. Why else would they drop it if AMD is holding steady?

I do think Nvidia have the hardcore PC gamer locked, with things like amazing SLI scalability, 3D Vision, 3D Surround and PhysX. Not to mention that the 480 is still the single most powerful GPU, which is what a lot of users want (SLI has its drawbacks)... The 495 is just around the corner to compete with the 5970 for the most powerful card money can buy.

3D Vision is niche. Hugely niche. I'll be seriously amazed if it's above 1% of the global market who have a discrete video card. It's nice technology but the cost is so high that it doesn't matter to consumers or to Nvidia's revenue. Any penetration stats yet?

PhysX is again nice technology. It's an advantage over AMD where it exists. So few games use it for anything more than particle effects though. How many games use it for player interaction?

3D Surround is not more impressive than Eyefinity, except for the 3D.

SLI scaling is better than Crossfire, but I believe the 6xxx series will make it look like poor value.

All in all, it comes down to whether the technologies sell cards. I'm not seeing a wave of GTS 450/GTX 460 sales, but I have no sales figures. I'd like to be proven wrong if you can find some (I don't want to shoot anecdotes and speculation at you).

 

This GTX 495. What is it, and how will it perform?

Scenario: It's a dual GF104

The GTX 460 has a 150W TDP for the 768MB version. Two of those would be 300W, assuming no downclocking. Remember AMD had to make the 5970 like two 5850s to stay under 300W.

You can't enable any more shaders on GF104 or you go over the TDP limit. Hence it will perform like GTX 460 768MB SLI. Not 1GB SLI, because that would go over the TDP limit. Let's apply a typical scaling factor of 185% from a few reviews. That would perform like a 5970.

Now, a single Cayman card will outperform a 5870 by 35%. That's a few frames short of a 5970, and higher minimum framerates with no microstutter. I think people will choose Cayman over that.
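Rough arithmetic behind that scenario, with the HD 5870 as the unit of performance; treating a GTX 460 768MB as roughly $0.75$ of a 5870 is my own assumed baseline, not a figure from this post:

    \text{dual GF104} \approx 0.75 \times 1.85 \approx 1.39 \quad (\text{roughly HD 5970 level})
    \text{Cayman} \approx 1.35 \times 1.00 = 1.35

On those assumptions the single Cayman lands only a few percent behind the hypothetical dual card, which is the "few frames short of a 5970" point above, before counting the higher minimum framerates and lack of microstutter.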

Scenario: It's a full GF100.

Well, first of all I don't believe they can make one in quantity. Let's assume they can. It's 6% faster than a GTX 480, putting it behind Cayman per the above. No win.

Scenario: It's a respin

It's been six months since GF100 was available. Not enough time to do a full new architecture, and no new process node is available. It would have to be a base-layer spin of GF100 or GF104. So maximum performance does not go up, except that clockspeeds might increase to 1500MHz shaders (the design target). Still short of Cayman.

I can't think of a GPU where they managed to increase performance by enough (~30%) to beat Cayman on the same node, at the same power consumption, in only six months.

Even if they do, it'll be 530 or 730mm^2 of silicon against 380mm^2 for Cayman. Not a fight Nvidia can win on price either.



As far as I'm concerned, the 4xx series is a pretty big Nvidia screw up. I'm going to go ahead and say the real war will be in 2013 at this point, unless we get hit by a big ass solar wave.