
Forums - PC Discussion - nVidia's GT300 specifications revealed - it's a cGPU!

What is GT300?

Even though it shares its first two letters with the GT200 architecture [GeForce Tesla], GT300 is the first truly new architecture since SIMD [Single Instruction, Multiple Data] units first appeared in graphics processors.

The GT300 architecture groups processing cores in sets of 32 - up from 24 in the GT200 architecture. But the key difference is that GT300 parts ways with the SIMD architecture that dominates today's GPUs. GT300 cores rely on MIMD-like [Multiple Instruction, Multiple Data] functions - all the units work in MPMD mode, executing simple and complex shader and compute operations on the go. We're not sure whether we should keep using the terms "shader processor" or "shader core", as these units are now almost on equal terms with the FPUs inside the latest AMD and Intel CPUs.

GT300 itself packs 16 groups of 32 cores - yes, we're talking about 512 cores for the high-end part. That number alone raises GT300's computing power by more than 2x compared to the GT200 core. Before the chip tapes out, there is no way anybody can predict working clocks, but if the clocks remain the same as on GT200, we would have more than double the computing power.
If, for instance, nVidia gets a 2 GHz clock on the 512 MIMD cores, we are talking about no less than 3 TFLOPS in single precision. Double-precision performance is highly dependent on how efficient the MIMD-like units turn out to be, but you can count on a 6-15x improvement over GT200.
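As a rough sanity check on those numbers (a sketch only - it assumes GT200-style dual-issue, where each core can retire a MAD plus a co-issued MUL for 3 FLOPS per clock; GT300's actual per-clock throughput is unknown):

```python
def peak_tflops(cores, clock_ghz, flops_per_clock=3):
    """Theoretical single-precision peak in TFLOPS.

    flops_per_clock=3 assumes GT200-style dual-issue (MAD = 2 FLOPS
    plus a co-issued MUL = 1 FLOP); the real GT300 figure is unknown.
    """
    return cores * clock_ghz * flops_per_clock / 1000.0

print(peak_tflops(512, 2.0))    # 3.072 -> "over 3 TFLOPS" at a 2 GHz shader clock
print(peak_tflops(240, 1.476))  # ~1.06 for a GTX 285 (240 SPs at 1476 MHz)
```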

This is not the only change - cluster organization is no longer static. The scratch cache is much more granular and allows for greater interactivity between the cores inside a cluster. GPGPU, i.e. GPU computing, applications should really benefit from this architectural choice. When it comes to gaming, the obvious question is: how good can GT300 be? Do bear in mind that this 32-core cluster will be used in next-generation Tegra, Tesla, GeForce and Quadro cards.

This architectural change should result in a dramatic increase in double-precision performance, and if GT300 packs enough registers, both single- and double-precision performance might surprise all the players in the industry. Given the timeline of when nVidia began work on GT300, it looks to us like the GT200 architecture was a test run for the real thing coming in 2009.

Just like a CPU, GT300 gives direct hardware access [HAL] for CUDA 3.0, DirectX 11, OpenGL 3.1 and OpenCL. You can also program the GPU directly, although we're not sure whether developing such a solution would be financially feasible. But the point is that now you can do it. It looks like Tim Sweeney's prophecy is slowly, but surely, coming to life.

http://brightsideofnews.com/news/2009/4/22/nvidias-gt300-specifications-revealed---its-a-cgpu!.aspx

Very nice, I'm gonna have to get my hands on that some day




I read about it, it really sounds incredible. But a lot of its early performance is gonna depend on drivers...



I don't get it at all. If we are talking about 512 stream processors doing 4 FLOPS per clock cycle at 2 GHz, then it should be 4 TFLOPS instead of the 3 mentioned in the article.
I'm assuming TMUs are staying at 8 per array, so we'd have 128 now, and ROPs have been doubled from GT200's 32 up to 64...
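For what it's worth, the two figures are easy to reconcile - both assume 512 cores at a hypothetical 2 GHz, and the gap is purely in the assumed FLOPS per core per clock (quick sketch, nothing official):

```python
# Same hypothetical 512 cores at 2 GHz; only the per-clock FLOPS assumption differs.
cores, clock_ghz = 512, 2.0

for flops_per_clock, label in [(3, "MAD+MUL dual-issue, GT200-style"),
                               (4, "4 FLOPS per clock")]:
    tflops = cores * clock_ghz * flops_per_clock / 1000
    print(f"{label}: {tflops:.3f} TFLOPS")  # 3.072 vs 4.096
```

So the article's 3 TFLOPS only follows if GT300 keeps GT200's 3-FLOPS-per-clock issue rate; at 4 FLOPS per clock you would indeed get ~4 TFLOPS.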




I predict that the GT300 will be more than twice as fast as a GeForce GTX 285. To be more specific, roughly 2.3 times as fast.



Nice, I just hope it doesn't end up at $650 at launch like the GTX 280. Are you listening, nvidia?




nojustno said:
Nice, I just hope it doesn't end up at $650 at launch like the GTX 280. Are you listening, nvidia?

I doubt they're listening; they already know. Besides, they can't drop the price too much, and small cuts wouldn't make much of a difference, so they're going to maximize profits. And to some extent, a more expensive GPU might even seem more powerful.



Article is all fluff.

They don't know what they are talking about.

GT200 architecture? That isn't even an architecture in itself; it's a marketing term. The codename is different.

I don't dispute that it's coming, as launch is imminent, but the info doesn't sound right at all.



nojustno said:
Nice, I just hope it doesn't end up at $650 at launch like the GTX 280. Are you listening, nvidia?

 

That was about the release price of the 8800 GTX, too. Nothing new, really.




Yeah, the top high-end cards always cost ~$600/€500 at launch. It's been like that for a decade.

Only once did I fall for it. In spring 2006 I paid €600 (~$700) for an ATI X1900XT, and it wasn't even the worst of them all (the X1900XTX was, and cost $100 more).



haxxiy said:
nojustno said:
Nice, I just hope it doesn't end up at $650 at launch like the GTX 280. Are you listening, nvidia?

 

That was about the release price of the 8800 GTX, too. Nothing new, really.

That doesn't make it right. They should learn from ATi that pricing their cards reasonably can have a massive impact, like with the HD 4800 cards. The HD 4870 launched at $299 and was only ~10% slower than the $650 GTX 280. Made me lol.