
Forums - Gaming - What the hell is a GPU-CPU?

If we put this in terms of the human body:

The CPU is your brain, which lets you think and calculate.
The GPU is the specific part of your brain that processes what you see through your eyes.
RAM is your short-term thoughts and memories, including all the information you have observed through your senses and processed recently. (The hard drive is your long-term memory.)



fordy said:
walsufnir said:
fordy said:
It's worth adding that CPUs generally excel at integer arithmetic (numbers that do not have decimal points). The earlier 80x86 CPUs had a co-processor that handled floating point arithmetic, designated the 80x87.

GPUs excel at floating point arithmetic (numbers with a fractional part, which in graphics are often normalized to between 0 and 1), and given their highly parallel computing capabilities, they're very handy for matrix multiplication (which is what is used to transform polygons within the GPU). The GPU also comes with other things specific to getting from numbers to screen pixels, such as vertex and pixel shaders (think of them as programmable stages within the GPU that let you add additional effects before the result is output to the screen).

I said that the CPU's ALU calculates integers and an FPU, nowadays integrated into the CPU, calculates floats. We want to help the author understand, not be picky about details, as that would be too complex and wouldn't help understanding.

Too complex? You're explaining caches when he is asking about RAM. Cache is purely a performance gain; there's nothing fundamental about it. I don't think my explanations even went down to that level of detail.

You mentioned that the GPU is like a CPU, whereas it's more like a massively parallel FPU.

Also, memory is not just another hop beyond the cache. The whole purpose of cache is to be transparent: programmers cannot address the cache directly, whereas memory is manipulated explicitly by programs.


Because cache is not really different from RAM - only faster and smaller (because it's more expensive). And programmers can even access registers if they want; they can also write cache-friendly code, but that requires proficient knowledge of computers and their internals.

The point with GPUs is that they *calculate* massively in parallel - but let's not try to be picky; let's wait until deyon tells us what he understands and what he doesn't.
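
To make the matrix-multiplication point in fordy's quoted explanation concrete, here's a minimal sketch in plain C++ (the names `Mat4`, `Vec4` and `transform` are just illustrative, not any real graphics API): it applies one 4x4 transform matrix to one vertex, which is exactly the small floating-point job a GPU repeats for every vertex of every polygon, in parallel.

```cpp
#include <array>
#include <cstdio>

// A 4x4 transform matrix and a homogeneous vertex (x, y, z, w).
using Mat4 = std::array<std::array<float, 4>, 4>;
using Vec4 = std::array<float, 4>;

// One vertex transform: 16 floating-point multiplies and 12 adds.
// A GPU runs this same tiny computation for thousands of vertices at once.
Vec4 transform(const Mat4& m, const Vec4& v) {
    Vec4 out{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            out[row] += m[row][col] * v[col];
    return out;
}

int main() {
    // Translation by (2, 3, 0): a typical transform applied to a polygon's vertices.
    Mat4 translate = {{{1, 0, 0, 2},
                       {0, 1, 0, 3},
                       {0, 0, 1, 0},
                       {0, 0, 0, 1}}};
    Vec4 vertex = {1.0f, 1.0f, 0.0f, 1.0f};
    Vec4 moved = transform(translate, vertex);
    std::printf("(%.1f, %.1f, %.1f)\n", moved[0], moved[1], moved[2]);  // (3.0, 4.0, 0.0)
}
```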



walsufnir said:
Because cache is not really different from RAM - only faster and smaller (because it's more expensive). And programmers can even access registers if they want; they can also write cache-friendly code, but that requires proficient knowledge of computers and their internals.

The point with GPUs is that they *calculate* massively in parallel - but let's not try to be picky; let's wait until deyon tells us what he understands and what he doesn't.

Except registers are not part of the cache. Registers can read from and write to RAM (with or without the transparent cache in between). The cache is not part of the fundamental layout at all; it's merely an optimisation.

Cache-friendly code is incredibly rare nowadays, given that compilers handle a lot of that for you. But it's really no big deal now: the larger the cache gets, the less likely you are to hit a cache-miss penalty on recently used data.
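
For anyone following along, here's a small sketch (plain standard C++, nothing machine-specific assumed) of what "cache-friendly code" means in practice: both functions do the same arithmetic on the same data, but the loop order changes whether memory is touched sequentially or in large strides.

```cpp
#include <chrono>
#include <cstdio>
#include <vector>

constexpr int N = 4096;

// Both functions add up the same N x N matrix, stored row by row in one flat
// vector. rowMajor walks memory sequentially, so every cache line it loads
// gets fully used; colMajor jumps N floats ahead on each access and keeps
// missing the cache once the matrix is bigger than the cache itself.
double rowMajor(const std::vector<float>& a) {
    double sum = 0.0;
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j)
            sum += a[i * N + j];   // consecutive addresses: cache-friendly
    return sum;
}

double colMajor(const std::vector<float>& a) {
    double sum = 0.0;
    for (int j = 0; j < N; ++j)
        for (int i = 0; i < N; ++i)
            sum += a[i * N + j];   // stride of N floats: cache-unfriendly
    return sum;
}

int main() {
    std::vector<float> a(static_cast<std::size_t>(N) * N, 1.0f);

    auto time = [&](double (*f)(const std::vector<float>&), const char* name) {
        auto t0 = std::chrono::steady_clock::now();
        double s = f(a);
        auto t1 = std::chrono::steady_clock::now();
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
        std::printf("%-10s sum=%.0f  %lld ms\n", name, s, static_cast<long long>(ms));
    };

    time(rowMajor, "row-major");   // usually clearly faster...
    time(colMajor, "col-major");   // ...than the strided version
}
```

Actual timings depend entirely on the cache sizes of the machine it runs on; the only point here is the access pattern.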



fordy said:
Except registers are not part of the cache. Registers can read from and write to RAM (with or without the transparent cache in between). The cache is not part of the fundamental layout at all; it's merely an optimisation.

Cache-friendly code is incredibly rare nowadays, given that compilers handle a lot of that for you. But it's really no big deal now: the larger the cache gets, the less likely you are to hit a cache-miss penalty on recently used data.

There are people who say otherwise: http://www.drdobbs.com/parallel/cache-friendly-code-solving-manycores-ne/240012736 But I'm out of the coding business nowadays, so I don't know exactly.
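
And to tie the "massively parallel FPU" description back to something concrete, here's a tiny illustration in plain C++ (no GPU API involved; the loop is just standing in for what a GPU does with thousands of threads): every iteration is an independent little floating-point calculation, which is exactly the kind of work a GPU spreads across its many cores.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

int main() {
    const std::size_t n = 1'000'000;
    std::vector<float> x(n, 2.0f), y(n, 3.0f), out(n);
    const float a = 0.5f;

    // out = a*x + y for every element ("SAXPY"), the classic data-parallel job.
    // No iteration depends on any other; on a GPU each one would be handled by
    // its own lightweight thread, thousands of them running at the same time.
    for (std::size_t i = 0; i < n; ++i)
        out[i] = a * x[i] + y[i];

    std::printf("out[0] = %.1f\n", out[0]);  // 0.5*2.0 + 3.0 = 4.0
}
```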



I'll be locking this thread per Deyon's request.

And not because anyone here was baiting, flaming, or trolling...

But because you guys did such an awesome job explaining the concept. I don't think I've ever locked a thread because the discussion was too good.