| ssj12 said: .... CUDA allows the card to be coded directly for different workloads like you said, physics, or to be more plain, PhysX.
Because CUDA can be coded in C++, developers get more direct coding and interaction with the card. Developers will be able to tap into every ounce of RAM and power in the card. This should be basic knowledge, because this is a similar platform to what the game consoles run on. All the game console GPUs are hand-coded to run each game. This is why it takes developers years to tap into a console's power: they have to actually learn the hardware. One practical application of CUDA, for example, is that Adobe is putting an nVidia-only accelerator in Photoshop CS4. If developers coded their own accelerators for nVidia GPUs, guess what? Games like Crysis would be flying like a bird. PC hardware hasn't had this before.
You bitch about cost. Tell me, Mr. Brainiac, why does a 15% increase in performance on a SINGLE CORE GPU cost a company a ton of money to put out? Simple. It is because they are hitting the difficulties presented by Moore's Law and the universal limit. What am I talking about, a universal limit on technology? Let me use a direct quote: "The physical limits to computation have been under active scrutiny over the past decade or two, as theoretical investigations of the possible impact of quantum mechanical processes on computing have begun to make contact with realizable experimental configurations. We demonstrate here that the observed acceleration of the Universe can produce a universal limit on the total amount of information that can be stored and processed in the future, putting an ultimate limit on future technology for any civilization, including a time-limit on Moore's Law. The limits we derive are stringent, and include the possibilities that the computing performed is either distributed or local. A careful consideration of the effect of horizons on information processing is necessary for this analysis, which suggests that the total amount of information that can be processed by any observer is significantly less than the Hawking-Bekenstein entropy associated with the existence of an event horizon in an accelerating universe."
Simply put, we might be able to shrink our transistors and jam in more and more, but there is a limit. The more we push that limit, the more expensive the card. For every minor gain in power, the card itself will cost an absurd amount of money. The GTX 280 costs $649 at standard clocks because it is at the limits of its size. 1.4 billion transistors gives nVidia the most powerful single core GPU on the planet. Does the fact that they are pushing the limits of our technology and the basic physical laws mean they have outdated tech? Hell no. If they put the GTX 280 in an SLI setup like the GX2, the card would be a beast. It would probably be enough to hold its own against the Radeon 4900X2. Seriously outdated tech, lol. So a switch to more stream processors, which are still a mess of transistors and diodes, isn't outdated? Do YOU know computers? If you did you would not be trying to act like a smart-ass like you are. I obviously know the hardware better than you do.
|
Lulz
For starters, it should be obvious that CUDA will never be used for rendering, because developers have rarely taken advantage of features exclusive to one line of cards, and I can't imagine what their reaction would be to something that requires a radical reworking of their engine. But the big issue is that you can't access texture memory in CUDA. So how the hell are you going to render a game without access to the texture memory? That's the reason Nvidia hasn't been talking about CUDA being used for anything more than physics in games. Honestly, don't you think they would've talked about it if it could make Crysis "fly like a bird"? But I guess you didn't think.
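For anyone who hasn't actually looked at it, CUDA code is basically C++ compute kernels, which is exactly why it maps onto physics and not onto a rendering pipeline. Here's a minimal sketch of the kind of thing it's actually good for: a toy particle integrator. The kernel and the numbers are purely illustrative, not taken from PhysX or any real engine.

```
// Illustrative only: a toy particle-integration kernel, the sort of general
// compute / physics work CUDA is suited for. Not a rendering path.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void integrateParticles(float3 *pos, float3 *vel, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Simple Euler step with constant gravity. A real physics engine does far
    // more than this, but the programming model is the same.
    vel[i].y += -9.8f * dt;
    pos[i].x += vel[i].x * dt;
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
}

int main()
{
    const int n = 1 << 20;            // one million particles
    float3 *pos, *vel;
    cudaMalloc((void**)&pos, n * sizeof(float3));
    cudaMalloc((void**)&vel, n * sizeof(float3));
    cudaMemset(pos, 0, n * sizeof(float3));
    cudaMemset(vel, 0, n * sizeof(float3));

    int block = 256;
    int grid  = (n + block - 1) / block;
    integrateParticles<<<grid, block>>>(pos, vel, n, 1.0f / 60.0f);
    cudaDeviceSynchronize();

    cudaFree(pos);
    cudaFree(vel);
    printf("stepped %d particles\n", n);
    return 0;
}
```

Notice there is nothing in there about shaders, render targets, or the rest of the graphics pipeline. That's the whole point.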
Secondly, the problems Nvidia is currently encountering, the ones you go on and on about, stem from the fact that they are approaching GPU design completely wrong. ATI is on the right track, and they will start to reap the rewards shortly. What do I mean by this? Well, Nvidia is going with a single large die that breaks a billion transistors. Because of this they are hitting major problems with fabrication and the like, as you so eloquently put it. The solution to this, and the path ATI is going down, is to instead make multi-chip cards. But this isn't the multi-GPU solution of old. Instead you have a relatively simple core that scales easily across product segments.
So a cheap GPU would have two cores, a middle-market card would have four, and the top of the line eight. The other major difference from current cards is to have all the cores share RAM instead of giving each its own bank. Again, this is something ATI is doing. The end result of all this is that you shorten the design process because it has been radically simplified, cards are cheaper because they are much easier to produce, and other issues like thermal design become negligible. What little performance you lose from scaling across multiple cores is meaningless because the cost benefits are so great.
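The "much easier to produce" part isn't hand-waving, it's yield math. Under the standard Poisson defect model, yield falls off exponentially with die area, so one huge die costs far more per working chip than the same silicon split into smaller dies. Here's a back-of-the-envelope sketch in plain C++; the defect density is a made-up round number and the die areas are only roughly GT200-sized versus RV770-sized, so treat the output as illustrative, not as real pricing.

```
// Back-of-the-envelope yield comparison: one big monolithic die vs. smaller
// dies. Poisson defect model: yield = exp(-D * A).
// D (defects per cm^2) is an assumed round number; die areas are rough.
#include <cmath>
#include <cstdio>

int main()
{
    const double defects_per_cm2 = 0.3;   // assumed defect density
    const double big_die_cm2     = 5.76;  // roughly a GTX 280-class die
    const double small_die_cm2   = 2.56;  // roughly an RV770-class die

    double yield_big   = std::exp(-defects_per_cm2 * big_die_cm2);
    double yield_small = std::exp(-defects_per_cm2 * small_die_cm2);

    printf("big die yield:   %.1f%%\n", 100.0 * yield_big);
    printf("small die yield: %.1f%%\n", 100.0 * yield_small);

    // Cost per *working* chip scales roughly with area / yield. Assume two
    // small dies stand in for one big one (a rough stand-in, not a benchmark).
    double cost_big   = big_die_cm2   / yield_big;
    double cost_small = small_die_cm2 / yield_small;
    printf("one big die vs. two small ones, relative cost: %.1fx\n",
           cost_big / (2.0 * cost_small));
    return 0;
}
```

Plug in whatever defect density you like; the big monolithic die loses every time.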
That's essentially what I meant when I said they are outdated. The time of the single core graphics card is coming to a close, and Nvidia is behind in making a practical multi-core solution, but they are going multi-core. The die shrink of the GTX 280 may be their last monolithic GPU. Not to mention the cost/performance ratio is pitiful, and you can buy GeForce cards that perform nearly as well for hundreds less.
"If they put the GTX 280 in a SLi setup like the GX2 the card would be a beast. It would probably be enough to hold it's own against the Radeon 4900X2"
And that would be pretty sad. I can't imagine how much a GX2 like that would cost, but the X2 would likely be significantly cheaper.
You may know a thing or two about technology, but you don't seem to have a clue about where GPU design is headed. Hell, you seem to barely have a grasp of what current GPUs are. I remember that whole "I just bought an 8400 and it's amazing" bit you pulled, when you defended the card even though it was obviously a POS compared to the competition. I'm pretty sure you're just a Nvidia fanboy who picks up all his tech info second-hand on forums. Not that there is anything wrong with liking a computer parts manufacturer for no real reason, but the ego you carry with it is ridiculous.
Leo-j said: If a DVD for a PC game holds what? Crysis at 3000p or something, why in the world can't a Blu-ray disc do the same?
ssj12 said: Player-specific decoders are nothing more than specialized GPUs. Gran Turismo is the truest driving simulator of them all.
"Why do they call it the xbox 360? Because when you see it, you'll turn 360 degrees and walk away"