
Forums - PC Discussion - GTX 1080 unveiled; 9 teraflops

JEMC said:

I don't know how AMD could keep Fury as their high-end card with only 4GB. That's why I think (or hope) that the 490X will be faster than Fury and around GTX 1070 levels of performance.

Also, I think that AMD has Vega up and running, but they are waiting for HBM2 prices to come down before launching it. They expect that to happen early next year, but if it happens sooner, they could release Vega late this year.

The decision to go with only 4GB was made to allow the use of HBM. AMD got exclusive access to HBM and a large share of the initial production of HBM2 modules. They kind of screwed the Fury X, but managed to secure a huge share of HBM2. Nvidia probably isn't using it on Pascal because the number of modules they would be able to secure was too low, so they went with GDDR5X. It was a strategic sacrifice to "screw" Pascal. Let's see if it pays off.

JEMC said:

I agree with you that having a "halo" product improves the perception of a company, but given the financial situation of AMD, focusing on what sells best is the right choice.

Yes, they are pretty beaten up. They lost tons of share in the GPU market, and the CPU side is even worse. They have a console "monopoly", but integrated parts have slim profit margins, so that won't keep them afloat on its own. I'm a big fan of them, and I would really like to see Zen put them back in the fight. I don't want my next CPU to be from Intel, but they need to step up their game right now.



eva01beserk said:
sc94597 said:

I would say that. In a GPU-bound game like The Witcher 3, the performance is equivalent between a 750 Ti and a PS4. In CPU-heavy games, the performance favors the PC.

First parties have been mostly irrelevant since the 7th generation started.

What are you comparing relevancy with? Just the GPU, paired with any CPU? Any RAM? SSD or HDD? I don't want to bring price into this again, but price is the only way to really compare the two, because everything else isn't quantifiable in the same way for consoles and PCs.

And people who say exclusives don't matter are only kidding themselves. They might not push hardware as much as before, but exclusives do matter, mainly to push the consoles to their limits. Otherwise third-party devs will coast on the same year-one games for the rest of the gen.

The original post which you quoted was talking about what GPU the next-generation consoles would use. You said it is okay if they only use Maxwell, because consoles can get more out of the GPU than a PC due to optimization. Price doesn't matter in this discussion, because we already chose a particular GPU with particular theoretical limits, and I was disputing your argument that consoles can reach those theoretical limits better than PCs. Both the PS4's Pitcairn-class GPU and the 750 Ti have similar specifications, and by your logic the PS4 should outperform the PC. That was not true. I chose a GPU-bound game (The Witcher 3) so that we could control for external factors like RAM and CPU speed affecting performance. Price is not relevant, because the price argument is that for a console the same performance is cheaper. We were purely talking about whether or not consoles can get more out of the same theoretical performance.
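
For context, here is the back-of-the-envelope math behind that "similar specifications" claim, using the commonly quoted public specs (1152 shaders at 800 MHz for the PS4's GPU, 640 CUDA cores at roughly 1085 MHz boost for the 750 Ti). Treat it as a rough sketch only: peak FP32 throughput says nothing about bandwidth, drivers, or the rest of the system.

```c
/* Rough peak FP32 throughput: shader cores x 2 ops per clock (FMA) x clock.
 * Spec numbers are the commonly quoted public figures; real-world
 * performance also depends on bandwidth, drivers and the rest of the box. */
#include <stdio.h>

static double gflops(int cores, double clock_ghz) {
    return cores * 2.0 * clock_ghz; /* one FMA counts as 2 FP operations */
}

int main(void) {
    printf("PS4 GPU    (1152 shaders @ 0.800 GHz): %.0f GFLOPS\n", gflops(1152, 0.800));
    printf("GTX 750 Ti ( 640 cores   @ 1.085 GHz): %.0f GFLOPS\n", gflops(640, 1.085));
    return 0;
}
```

On paper the PS4 actually has a bit more raw compute, which is exactly why matching a 750 Ti in a GPU-bound title doesn't point to any huge console-side advantage.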



Very nice card. I may have to buy this on release; it will go well with my Skylake i7 :)



sc94597 said:
eva01beserk said:

What are you comparing relevancy with? Just the GPU, paired with any CPU? Any RAM? SSD or HDD? I don't want to bring price into this again, but price is the only way to really compare the two, because everything else isn't quantifiable in the same way for consoles and PCs.

And people who say exclusives don't matter are only kidding themselves. They might not push hardware as much as before, but exclusives do matter, mainly to push the consoles to their limits. Otherwise third-party devs will coast on the same year-one games for the rest of the gen.

The original post which you quoted was talking about what GPU the next-generation consoles would use. You said it is okay if they only use Maxwell, because consoles can get more out of the GPU than a PC due to optimization. Price doesn't matter in this discussion, because we already chose a particular GPU with particular theoretical limits, and I was disputing your argument that consoles can reach those theoretical limits better than PCs. Both the PS4's Pitcairn-class GPU and the 750 Ti have similar specifications, and by your logic the PS4 should outperform the PC. That was not true. I chose a GPU-bound game (The Witcher 3) so that we could control for external factors like RAM and CPU speed affecting performance. Price is not relevant, because the price argument is that for a console the same performance is cheaper. We were purely talking about whether or not consoles can get more out of the same theoretical performance.

I get what you're saying. I think you are sort of right. But I have to take big third-party devs' word over yours about optimization.



It takes genuine talent to see greatness in yourself despite your absence of genuine talent.

torok said:
 

A game like this depends on the amount of time/money spent on making it run better. You can't use a single game to claim that the platforms don't perform differently. In the same way, I could use Arkham Knight to claim that the PS4 beats a 970, which it surely does not. A Witcher 3 video only shows that the PS4 version of W3 runs similarly to the PC version on a 750 Ti, and that's it. In other games you would get other results, because it depends on each port.

It's only reasonable to assume that a closed platform with a closer-to-the-metal API will perform better than similar horsepower on an open platform with more abstract APIs. If your argument were valid, which it isn't, we could claim that DX12 is useless, because it basically tries to bring less abstract calls to DX. If they were already equal, why would MS bother spending money to develop such APIs? And why would devs spend more time/money working with a less abstract (harder to develop for) variant of an API if they got nothing in return?

You are also incorrect in assuming that the 8th gen changed anything regarding console APIs. You are assuming that they are PCs now, so they should perform similarly. The 7th gen consoles were already close to PC, since they used regular GPUs instead of the custom solutions found in previous gens. Changing the CPUs from PPC to x86 actually doesn't make a difference for anyone except the people writing the compiler (or really, just adapting an existing one). CPU optimization is more about how many cores it has, how fast they are, how the cache architecture works, and other characteristics that aren't exactly architecture-dependent.

Of course first-party titles are relevant, since we are discussing technical aspects. If we were talking about it from a sales perspective, we could forget visuals, because CoD and GTA aren't exactly powerhouses. But even in that case, exclusives like Gears, Halo and Uncharted sell pretty well and are relevant games. It's important to bring them into the discussion, since we are trying to verify how much optimization can raise the bar. Just on PS4, games like Driveclub, The Order and Uncharted show how far it can go.

I understand that you have your opinion, but professional developers state different things, and their opinion is much closer to a fact than either of us could get. One of the Metro devs quoted it as roughly a 2x advantage, which is about what you'd expect. The interview is pretty good and detailed, and worth a read just to see the differences between gens and the daily struggle that one of the most competent developers we have goes through with their budget constraints (check it here: http://www.eurogamer.net/articles/digitalfoundry-2014-metro-redux-what-its-really-like-to-make-a-multi-platform-game).

Anyway, I think we are waaaaay off-topic and starting to derail the thread (my bad, actually).

Do you think CDPR put a different amount of effort into each version? I don't think they did. The same can't be said for Batman.

I used The Witcher 3 in particular because it is a GPU-bound game (you can't say that the Jaguar in the PS4 is the limiting factor there). Other games show similar performance, though.



I said nothing about APIs, nor about the theoretical reasons why there might be better optimization. Nor am I disputing the legitimacy of the developers' claims. I even alluded to this by mentioning that it is true for first-party games, where developers have an incentive to make the game perform better. I spoke solely of what we ACTUALLY see. Last generation, a GPU equivalent to those found in consoles did not last you as long; after one or two years you'd have to upgrade in order to keep up. This generation, the 750 Ti and R9 270X are keeping up with the PS4 and still greatly outperforming the XBO. The entire scope of the discussion until now has been only about GPUs, so I don't know where you got the idea that I was discussing microarchitecture. I was mostly talking about unified game engines that make porting easier (Unity, UE4, etc.) and platform architecture in general: you don't have a crazy Cell with SPEs that makes multiprocessing a nightmare to relearn if you want to make low-level optimizations (not even talking about a compiler here, just running costly loops in assembly and such).

We can also mention how PC APIs have advanced this generation.
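
To make the API point concrete, here is a toy model (not any real graphics API; every number is made up purely for illustration) of why lower per-call driver overhead matters: with a thick, abstract driver, the CPU-side cost of submitting thousands of draw calls can exceed the GPU work itself, while an explicit, pre-recorded submission path keeps the GPU as the bottleneck.

```c
/* Toy model, NOT a real API: per-frame cost of submitting draw calls through
 * an "abstract" driver path vs. an "explicit" pre-recorded path.
 * Every number below is hypothetical, chosen only to illustrate the idea. */
#include <stdio.h>

int main(void) {
    const int draws_per_frame = 5000;
    const double gpu_us_per_draw          = 2.0; /* hypothetical GPU cost per draw */
    const double abstract_cpu_us_per_draw = 8.0; /* thick driver validates state per call */
    const double explicit_cpu_us_per_draw = 1.0; /* thin driver, commands recorded up front */

    printf("GPU work per frame:            %.1f ms\n",
           draws_per_frame * gpu_us_per_draw / 1000.0);
    printf("CPU submit cost, abstract API: %.1f ms\n",
           draws_per_frame * abstract_cpu_us_per_draw / 1000.0);
    printf("CPU submit cost, explicit API: %.1f ms\n",
           draws_per_frame * explicit_cpu_us_per_draw / 1000.0);
    return 0;
}
```

With those made-up figures, the abstract path spends 40 ms of CPU time per frame on 10 ms of GPU work; closing that kind of gap is the whole pitch behind DX12/Vulkan-style APIs and the console APIs before them.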



eva01beserk said:
sc94597 said:

The original post which you quoted was talking about what GPU the next-generation consoles would use. You said it is okay if they only use Maxwell, because consoles can get more out of the GPU than a PC due to optimization. Price doesn't matter in this discussion, because we already chose a particular GPU with particular theoretical limits, and I was disputing your argument that consoles can reach those theoretical limits better than PCs. Both the PS4's Pitcairn-class GPU and the 750 Ti have similar specifications, and by your logic the PS4 should outperform the PC. That was not true. I chose a GPU-bound game (The Witcher 3) so that we could control for external factors like RAM and CPU speed affecting performance. Price is not relevant, because the price argument is that for a console the same performance is cheaper. We were purely talking about whether or not consoles can get more out of the same theoretical performance.

I get what you're saying. I think you are sort of right. But I have to take big third-party devs' word over yours about optimization.

Fair enough. I'd rather look at the evidence we can see in the real world than at lofty claims by developers about theoretical optimizations. So far, I can't think of a multiplatform game that had proper work put into every version and still showed huge performance differences on similar hardware.



torok said:
JEMC said:

I don't know how AMD could keep Fury as their high-end card with only 4GB. That's why I think (or hope) that the 490X will be faster than Fury and around GTX 1070 levels of performance.

Also, I think that AMD has Vega up and running, but they are waiting for HBM2 prices to come down before launching it. They expect that to happen early next year, but if it happens sooner, they could release Vega late this year.

The decision to go with only 4GB was made to allow the use of HBM. AMD got exclusive access to HBM and a large share of the initial production of HBM2 modules. They kind of screwed the Fury X, but managed to secure a huge share of HBM2. Nvidia probably isn't using it on Pascal because the number of modules they would be able to secure was too low, so they went with GDDR5X. It was a strategic sacrifice to "screw" Pascal. Let's see if it pays off.

They got the rights to use HBM because they co-developed it with Hynix, and used it on Fury both to show off what they had done and because HBM uses less power than GDDR5, and they needed that margin with the beast that is Fury.

The thing with HBM2 is that Samsung also makes the modules and Nvidia has access to them. But it's too expensive to use for now, which is why neither AMD nor Nvidia are using it.



Please excuse my bad English.

Currently gaming on a PC with an i5-4670k @ stock (for now), 16GB RAM at 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

New info about the GTX 1080

The Geforce GTX 1080 Graphics Card Can Do Asynchronous Compute

http://wccftech.com/nvidia-gtx-1080-asynchronous-compute/#ixzz4858gdBA3

The Async Compute problem is probably one of the most controversial issues surrounding the older generation of GeForce graphics cards from Nvidia. Something very interesting, however, is present in the press release that they sent out to, well, the press. According to the official statement, the GTX 1080 is fully capable of performing Async Compute. If this turns out to be true, it will negate a major edge that Radeon graphics cards from AMD have enjoyed this past year.

If you read the press release, it is mentioned under the Five Marvels of Pascal:

  • Superb Craftsmanship. Increases in bandwidth and power efficiency allow the GTX 1080 to run at clock speeds never before possible -- over 1700 MHz -- while consuming only 180 watts of power. New asynchronous compute advances improve efficiency and gaming performance. And new GPU Boost™ 3 technology supports advanced overclocking functionality.
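
For anyone wondering what "asynchronous compute" actually asks of an application, here is a minimal sketch (assuming a C compiler and the Vulkan SDK; link with -lvulkan) that lists each GPU's queue families and reports whether a compute-only queue family is exposed, which is the usual entry point for submitting compute work alongside rendering. It only shows what the driver advertises, not how well the hardware overlaps the two workloads.

```c
/* Enumerate Vulkan devices and check for a queue family that supports
 * compute but not graphics -- the usual prerequisite for "async compute". */
#include <stdio.h>
#include <stdlib.h>
#include <vulkan/vulkan.h>

int main(void) {
    VkApplicationInfo app = { 0 };
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.apiVersion = VK_API_VERSION_1_0;

    VkInstanceCreateInfo ci = { 0 };
    ci.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    ci.pApplicationInfo = &app;

    VkInstance instance;
    if (vkCreateInstance(&ci, NULL, &instance) != VK_SUCCESS) {
        fprintf(stderr, "No Vulkan driver available.\n");
        return 1;
    }

    uint32_t devCount = 0;
    vkEnumeratePhysicalDevices(instance, &devCount, NULL);
    VkPhysicalDevice *devs = malloc(devCount * sizeof(*devs));
    vkEnumeratePhysicalDevices(instance, &devCount, devs);

    for (uint32_t d = 0; d < devCount; d++) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(devs[d], &props);

        uint32_t qCount = 0;
        vkGetPhysicalDeviceQueueFamilyProperties(devs[d], &qCount, NULL);
        VkQueueFamilyProperties *qf = malloc(qCount * sizeof(*qf));
        vkGetPhysicalDeviceQueueFamilyProperties(devs[d], &qCount, qf);

        int hasDedicatedCompute = 0;
        for (uint32_t i = 0; i < qCount; i++) {
            if ((qf[i].queueFlags & VK_QUEUE_COMPUTE_BIT) &&
                !(qf[i].queueFlags & VK_QUEUE_GRAPHICS_BIT))
                hasDedicatedCompute = 1;
        }
        printf("%s: dedicated compute queue family: %s\n",
               props.deviceName, hasDedicatedCompute ? "yes" : "no");
        free(qf);
    }

    free(devs);
    vkDestroyInstance(instance, NULL);
    return 0;
}
```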

 

NVIDIA GeForce GTX 1080 reviews out on May 17th

http://videocardz.com/59695/nvidia-geforce-gtx-1080-reviews-on-may-17th

Even though the official ‘hard launch’ is set for May 27th, NVIDIA decided to let reviewers release their benchmarks sooner. Of course, it is no secret that some reviewers already have their samples. Unfortunately, not all information related to the Pascal architecture was released at launch, so we are still waiting for confirmation of TMUs, ROPs and other GP104-specific data.

According to a tweet that has since been removed by its author, NVIDIA will lift the NDA for reviews on May 17th:

FYI: Embargo for all the PASCAL info we’re getting today and @nvidia GTX 1080 reviews is May 17th

NVIDIA has been very generous during Editors' Day. Attendees received two cards for SLI reviews, so I'm confident there won't be any problem finding reviews before the official launch date.




GTX1080 + Vulkan API = Doom @ 200fps

Just amazing!!!

http://www.gamespot.com/articles/see-doom-running-on-nvidias-gtx-1080-reaches-200fp/1100-6439598/

I'd have created a topic on this, but I can't yet. If someone does, it'd be great! Just pass the information along.
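
For a sense of scale, 200fps works out to a 5 ms frame budget; a quick comparison against the usual refresh-rate targets:

```c
/* Frame-time budget at common frame-rate targets. */
#include <stdio.h>

int main(void) {
    const double fps[] = { 60.0, 144.0, 200.0 };
    for (int i = 0; i < 3; i++)
        printf("%3.0f fps -> %5.2f ms per frame\n", fps[i], 1000.0 / fps[i]);
    return 0;
}
```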



taikamya said:
GTX1080 + Vulkan API = Doom @ 200fps

Just amazing!!!

http://www.gamespot.com/articles/see-doom-running-on-nvidias-gtx-1080-reaches-200fp/1100-6439598/

I'd have created a topic on this, but I can't yet. If someone does, it'd be great! Just pass the information along.

Vasto did that yesterday: http://gamrconnect.vgchartz.com/thread.php?id=216435


