
Forums - PC - New GeForce GTX 280!

ssj12 said:
Soleron said:
The Radeon 4850 completely destroys the competition and also beats cards that are $100 (9800GTX) or even $200 (GTX 260) more expensive.

http://techreport.com/articles.x/14967

Your GTX 280 ($649) is beaten by 4850 Crossfire ($398).

 

And your point? Of course Crossfire on a new generation of cards will top a high-end card that's basically a mid-generation card. Let's have a game that actually takes advantage of CUDA, since nothing does at the moment, and then let's see what happens. From what I can guess, CUDA will be nVidia's trump card.


Do you actually know anything about computers?

CUDA has almost nothing to do with gaming. I'll repeat that so you can understand it: nothing. Its purpose is to run desktop programs on the GPU, which is great if you need to run an operation that plays to a GPU's strengths. AMD/ATI has its own system for doing the same thing, called Close To Metal. Neither of these is likely to be used in games in the foreseeable future, if ever. There is a plethora of reasons why, and they should be pretty damn obvious, but given your apparent knowledge of computer tech it's not surprising that you think CUDA is some sort of magic bullet.
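
To make the distinction concrete, here is a minimal sketch of the kind of general-purpose, non-graphics work CUDA is aimed at: a data-parallel kernel launched from ordinary C/C++ host code. The kernel and array names are invented for illustration only; nothing here touches the rendering pipeline.

// Illustrative GPGPU sketch: square an array on the GPU.
// This is the sort of "desktop program" workload CUDA targets, not game rendering.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void squareKernel(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per array element
    if (i < n) data[i] = data[i] * data[i];
}

int main() {
    const int n = 1 << 20;
    float* host = new float[n];
    for (int i = 0; i < n; ++i) host[i] = static_cast<float>(i);

    float* dev = NULL;
    cudaMalloc((void**)&dev, n * sizeof(float));                      // allocate card memory
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice); // upload
    squareKernel<<<(n + 255) / 256, 256>>>(dev, n);                   // run across the GPU
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost); // download results

    printf("host[3] = %f\n", host[3]);  // prints 9.0
    cudaFree(dev);
    delete[] host;
    return 0;
}

Note how the data never feeds into Direct3D or OpenGL; the GPU is simply being used as a wide math engine.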

Havok on AMD GPUs is a different beast than CUDA altogether, although if CUDA does anything for gaming, it will be simplifying physics on the GPU. Anyway, back to AMD: basically, they're making it easy to run Havok-accelerated physics on the GPU. The problem is that, at this point, communication between the CPU and GPU tends to be slow. So if the GPU runs any physics calculation that changes the game environment (say, a pillar falling to the ground), that information needs to be sent back to the CPU, so the CPU knows the AI has to walk around the pillar, or that the player has run into it. Since this would be something of a bottleneck, it's likely that all GPU physics in the near future will be purely eye candy, and "game-changing" physics will still run on the CPU. Eventually physics will be entirely on the GPU, but not with this generation of cards.
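
A rough sketch of the round trip described above, using the CUDA runtime; the kernel, struct, and function names are invented for illustration, and the assumption is that gameplay code (AI, collision) lives on the CPU.

// Hypothetical per-frame step: the GPU integrates debris, the CPU has to read it back.
#include <cuda_runtime.h>

struct Debris { float x, y, z, vx, vy, vz; };

__global__ void integrateDebris(Debris* d, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    d[i].vy -= 9.8f * dt;        // gravity
    d[i].x  += d[i].vx * dt;
    d[i].y  += d[i].vy * dt;
    d[i].z  += d[i].vz * dt;
}

void stepFrame(Debris* devDebris, Debris* cpuCopy, int n, float dt) {
    integrateDebris<<<(n + 255) / 256, 256>>>(devDebris, n, dt);
    // This device-to-host copy over the PCIe bus is the bottleneck being described:
    // AI pathfinding and player collision run on the CPU and cannot react to the
    // fallen pillar until the results have been transferred back each frame.
    cudaMemcpy(cpuCopy, devDebris, n * sizeof(Debris), cudaMemcpyDeviceToHost);
}

If the results stayed on the GPU and only ever fed the renderer, the copy could be skipped, which is why "eye candy" physics is the easier sell.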

Oh, and the fact that Nvidia is hyping up CUDA tells you that even they know they have a bomb on their hands. They were caught complacent and are now stuck trying to push an outdated design on the public at an outrageous price. Unfortunately for Nvidia, the competition stepped up its game, designed a truly next-gen GPU, and was able to get the process down to very good yields.

I can't wait to see 4870 Crossfire, then the 4870X2. Nvidia is going to have its ass handed to it, especially at the high end.



Sorry, HD4850 GDDR5 for me ^_^



These things always go up and down every time I see benchmarking tests (the last time was some months ago). Nvidia is mostly at the top from what I've seen, though, and it seems more popular as well.



sieanr said:
Do you actually know anything about computers?

CUDA has almost nothing to do with gaming. [...] If CUDA does anything for gaming, it will be simplifying physics on the GPU. [...]

.... CUDA allows the card to be coded for directly, for different factors like you said: physics, or, to be more plain, PhysX.

Because CUDA can be coded in C++, developers can have more direct coding and interaction with the card. Developers will be able to tap into every ounce of RAM and power in the card. This should be basic knowledge, because it is a similar platform to what the game consoles run on. All the game console GPUs are hand-coded for each game, which is why it takes developers years to tap into a console's power: they have to actually learn the hardware.
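
As a hedged illustration of what "tapping into" the card looks like from C/C++, here is a small sketch that queries the CUDA runtime for what the installed card actually offers before deciding how to use it. The printed fields are standard cudaDeviceProp members; the program itself is just an example, not anything from the posts above.

// Enumerate CUDA devices and report the resources a developer could budget against.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        printf("%s: %d multiprocessors, %zu MB of global memory, compute capability %d.%d\n",
               prop.name,
               prop.multiProcessorCount,
               prop.totalGlobalMem / (1024 * 1024),
               prop.major, prop.minor);
    }
    return 0;
}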

One practical application of CUDA, for example, is that Adobe is putting an nVidia-only accelerator in Photoshop CS4. If game developers coded their own accelerators for nVidia GPUs, guess what? Games like Crysis would be flying like a bird.

PC hardware hasn't had this problem before. You bitch about cost. Tell me, Mr. Brainiac, why does a 15% increase in performance on a SINGLE-CORE GPU cost a company a ton of money to deliver? Simple: they are hitting the difficulties presented by Moore's Law and the universal limit.

What am I talking about with a universal limit on technology?

Let me use a direct quote:

"The physical limits to computation have been under active scrutiny over the past decade or two, as theoretical investigations of the possible impact of quantum mechanical processes on computing have begun to make contact with realizable experimental configurations. We demonstrate here that the observed acceleration of the Universe can produce a universal limit on the total amount of information that can be stored and processed in the future, putting an ultimate limit on future technology for any civilization, including a time-limit on Moore's Law. The limits we derive are stringent, and include the possibilities that the computing performed is either distributed or local. A careful consideration of the effect of horizons on information processing is necessary for this analysis, which suggests that the total amount of information that can be processed by any observer is significantly less than the Hawking-Bekenstein entropy associated with the existence of an event horizon in an accelerating universe."

Source
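
For reference, the Hawking-Bekenstein entropy the abstract mentions is the entropy assigned to a horizon of area A. The formula below is standard textbook physics rather than anything taken from the quoted paper:

S_{\mathrm{BH}} = \frac{k_B c^3 A}{4 \hbar G}

The paper's claim, in short, is that the total information any observer can ever process is significantly less than even this bound.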

 

Simply put, we might be able to keep shrinking our transistors and jamming in more and more, but there is a limit. The more we push that limit, the more expensive the card. For every minor gain in power, the card itself will cost a retarded amount of money.

The GTX 280 costs $649 at standard clocks because it is at the limits of its size. 1.4 billion transistors give nVidia the most powerful single-core GPU on the planet. Does the fact that they are pushing the limits of our technology and of basic physical laws mean they have outdated tech? Hell no. If they put the GTX 280 in an SLI setup like the GX2, the card would be a beast. It would probably be enough to hold its own against the Radeon 4900X2.

Seriously, outdated tech, lol. So a switch to more stream processors, which are still a mess of transistors and diodes, isn't outdated?

Do YOU know computers? If you did, you would not be trying to act like the smartass you are. I obviously know the hardware better than you do.

 



 
ssj12 said: .... CUDA allows the card to be coded for directly, for different factors like you said: physics, or, to be more plain, PhysX. [...]

 

Lulz

For starters, it should be obvious that CUDA will never be used, because developers have rarely taken advantage of features exclusive to one line of cards, and I can't imagine what their reaction would be to something that requires a radical reworking of their engine. But the big issue is that you can't access texture memory in CUDA. So how the hell are you going to render a game without access to the texture memory? That's the reason Nvidia hasn't been talking about CUDA being used for anything more than physics in games. Honestly, don't you think they would have talked about it if it could make Crysis "fly like a bird"? But I guess you didn't think.

Secondly, the problems Nvidia is currently encountering, the ones you go on and on about, stem from the fact that they are approaching GPU design completely wrong. ATI is on the right track, and they will start to reap the rewards shortly. What do I mean by this? Well, Nvidia is going with a single large die that breaks a billion transistors. Because of this, they are hitting major problems with fabrication and the like, as you so eloquently put it. The solution, and the path ATI is going down, is to instead make multi-chip cards. But these aren't the multi-GPU solutions of old; instead, you have a relatively simple core that offers easy scaling across product segments.

So a cheap GPU would have 2 cores, a middle-market card would have 4, and the top of the line 8. The other major difference from current cards is to have all the cores share RAM instead of giving each its own bank. Again, this is something ATI is doing. The end result of all this is that you shorten the design process because it has been simplified radically, cards are cheaper because they are much easier to produce, and other issues like thermal design become negligible. What little performance you lose from scaling across multiple cores is meaningless, since the cost benefits are so great.

That's essentially what I meant when I said they are outdated. The time of the single-core graphics card is coming to a close, and Nvidia is behind in making a practical multi-core solution, though they are going multi-core; the die shrink of the GTX 280 may be their last monolithic GPU. Not to mention the cost/performance ratio is pitiful, and you can buy GeForce cards that perform nearly as well for hundreds of dollars less.

"If they put the GTX 280 in a SLi setup like the GX2 the card would be a beast. It would probably be enough to hold it's own against the Radeon 4900X2"

And that would be pretty sad. I can't imagine how much a GX2 like that would cost, but the X2 would likely be significantly cheaper. 

You may know a thing or two about technology, but you don't seem to have a clue about where GPU design is headed. Hell, you barely seem to have a grasp of what current GPUs are. I remember that whole "I just bought an 8400 and it's amazing" bit you pulled, when you defended the card even though it was obviously a POS compared to the competition. I'm pretty sure you're just an Nvidia fanboy who picks up all his tech info second-hand on forums. Not that there is anything wrong with liking a computer parts manufacturer for no real reason, but the ego you carry with you is ridiculous.




sieanr said:
You may know a thing or two about technology, but you don't seem to have a clue about where GPU design is headed. [...] I'm pretty sure you're just an Nvidia fanboy who picks up all his tech info second-hand on forums.

Actually, I took PC Support 1 and 2 in high school, and I have basically grown up learning computers, from software to hardware: software from web development to C++, and hardware from wtf the parts actually do, to building a PC, to troubleshooting an issue.

I have a way bigger background in technology than you are giving me credit for.



 
Soleron said:
The Radeon 4850 completely destroys the competition and also beats cards that are $100 (9800GTX) or even $200 (GTX 260) more expensive.

http://techreport.com/articles.x/14967

Your GTX 280 ($649) is beaten by 4850 Crossfire ($398).

The article provided doesn't show 4850 Crossfire results, so how can you say it is beaten? Also, "The Radeon 4850 completely destroys the competition" is simply not true; your own article shows that in almost every test the Radeon 4850 is beaten by the GTX 260, and is completely destroyed by the GTX 280. Also, the GTX cards maintain a much more stable frame rate, while the 4850 dips much lower.

 



SSJ12, please get a decent graphics card! That way you can talk from experience rather than from what you've seen or heard.



Tease.

Squilliam said:
SSJ12, please get a decent graphics card! That way you can talk from experience rather than from what you've seen or heard.

 

Why don't you BUY me one, then? Right now I'm happy just being able to run UT3 on medium.



 
Username2324 said:
[...] your own article shows that in almost every test the Radeon 4850 is beaten by the GTX 260, and is completely destroyed by the GTX 280. Also, the GTX cards maintain a much more stable frame rate, while the 4850 dips much lower.

 

No, the Radeon HD 4850 is meant to compete with the GeForce 8800GT, 8800GTS, and 9600GT, not the GTX 260 or GTX 280.

With Crossfire, the HD 4850 does pretty well; it's definitely not being destroyed by either of the last two cards mentioned. It performs well or so-so depending on the game, though...

http://anandtech.com/video/showdoc.aspx?i=3338&p=14


