sieanr said:
Lulz. For starters, it should be obvious that CUDA will never be used, because developers have rarely taken advantage of features exclusive to one line of cards, and I can't imagine what their reaction would be to something that requires a radical reworking of their engine. But the big issue is that you can't access texture memory in CUDA. So how the hell are you going to render a game without access to texture memory? That's the reason Nvidia hasn't been talking about CUDA being used for anything more than physics in games. Honestly, don't you think they would've talked about it if it could make Crysis "fly like a bird"? But I guess you didn't think.

Secondly, the problems Nvidia is currently encountering, and the ones you go on and on about, stem from the fact that they are approaching GPU design completely wrong. ATI is on the right track, and they will start to reap the rewards shortly. What do I mean by this? Well, Nvidia is going with a single large die that breaks a billion transistors. Because of this they are hitting major problems with fabrication and the like, as you so eloquently put it. The solution to this, and the path ATI is going down, is to instead make multi-chip cards. But these aren't the multi-GPU solutions of old. Instead you have a relatively simple core that offers easy scaling across product segments: a cheap GPU would have 2 cores, a middle-market card would have 4, and the top of the line 8. The other major difference from current cards is to have all the cores share RAM instead of giving each its own bank. Again, this is something ATI is doing. The end result of all this is that you shorten the design process because it's been simplified radically, cards are cheaper because they are much easier to produce, and other issues like thermal design become negligible. What little performance you lose from scaling across multiple cores is meaningless since the cost benefits are so great.

That's essentially what I meant when I said they are outdated. The time of the single-core graphics card is coming to a close, and Nvidia is behind in making a practical multi-core solution, but they are going multi-core. The die shrink of the GTX 280 may be their last monolithic GPU. Not to mention its cost/performance ratio is pitiful, and you can buy GeForce cards that perform nearly as well for hundreds less.

"If they put the GTX 280 in a SLi setup like the GX2 the card would be a beast. It would probably be enough to hold it's own against the Radeon 4900X2"

And that would be pretty sad. I can't imagine how much a GX2 like that would cost, but the X2 would likely be significantly cheaper.

You may know a thing or two about technology, but you don't seem to have a clue about where GPU design is headed. Hell, you seem to barely have a grasp of what current GPUs are. I remember that whole "I just bought a 8400 and its amazing" bit you pulled, when you defended the card even though it was obviously a POS compared to the competition. I'm pretty sure that you're just a Nvidia fanboy who picks up all his tech info second-hand on forums. Not that there is anything wrong with liking a computer parts manufacturer for no real reason, but the ego you carry with you is ridiculous.
Actually, I took PC Support 1 and 2 in HS, and I've basically grown up learning computers from software to hardware: software from web development to C++, and hardware from wtf the parts do to building a PC to troubleshooting an issue.
I have a way bigger background in technology than you are giving me credit for.