curl-6 said:
Fair enough. I guess it's just that with Espresso I at least know what I'm talking about, while Latte is still so nebulous, haha.
Well, it's not your fault. Most people couldn't even begin to understand how a GPU works beyond the mantra that GPUs are massively parallel. After all, they're the chips with the shit ton of transistors, so it's only fair that they'd have the most complexity involved. You have to account for things like concurrency and how well a workload scales on a GPU. You even have to know what types of rendering methods they use and what drawbacks those pose. There are many parts to a GPU's performance: functional units like shaders, ROPs, and TMUs, plus bandwidth and memory. Oh, and that's just the surface of things!

CPUs, on the other hand, haven't grown much more complex in their design, because they benefit more from serial workloads and the laws of physics prevent them from reaching much higher frequencies. Chip designers have pretty much all stopped trying to make them significantly more powerful, except Intel, and even then Intel only does incremental boosts to CPU performance, not the 20% we used to get back in the day with a new process node! I can't even begin to start programming for GPUs yet, even though Microsoft made it easier with the C++ AMP library, because it's still too hard for me!







