
GTX 1080 unveiled; 9 teraflops

If we get reviews on the 17th it might not be a paper launch after all.

Gonna be really tough for me to hold back and wait for the 1080ti or Titan.



If you demand respect or gratitude for your volunteer work, you're doing volunteering wrong.

vivster said:
If we get reviews on the 17th it might not be a paper launch after all.

Gonna be really tough for me to hold back and wait for the 1080ti or Titan.

Hopefully it's a hard launch.
Hopefully nVidia beating AMD's Vega to market will make AMD push Vega a little harder. :)

haxxiy said:
Pemalite said:

7nm is expensive because it can't yet be used for large, complex chips; it's reserved for NAND until they improve the situation.
Thus that "3 times better" can and will change.

However... nVidia and AMD consistently manage to throw out faster and faster GPUs every year, even on the same node. E.g. AMD managed to more than double performance between the Radeon 7970 and Fury; both were 28nm, and both were considered monolithic, low-yield, expensive chips.

Thus... even if we were to be stuck on 16/14nm for the next 4 years, expect more than double the performance, as AMD and nVidia are both entering this feature size conservatively with their initial batch of processors.

Nope? You need to conform to an increasingly complex set of rules when designing smaller chips, and use more advanced tools, processes and IP. There is no way around the need for about 500 man-years to design a mid-range SoC on 7nm, no matter how cheap the foundries make these pieces of silicon out to be. I'm not making this up, by the way; you can search for the sources of anything I'm saying. So it cannot and will not change, since we can mathematically predict where we will end up on more advanced nodes. And those predictions are best-case scenarios in themselves, since the foundries don't want to look bad on their own roadmaps, do they?

Now, on to architecture considerations. AMD didn't quite manage to double the performance. Fiji is about 50% more efficient than the Tahiti chips, and that's factoring in the HBM. Maybe if you compare the worst-reviewed first batch of Tahiti chips with an R9 Nano (which itself was an act of desperation to look good on power efficiency, selling an underclocked 8.9B transistor chip to take on mini GTX 970s) you can sort of claim the power efficiency has doubled. The 28nm GCN chips were themselves terrible on power efficiency, beating the 40nm Evergreen chips by less than 50% in most instances. In fact, 28nm only did what it was supposed to do and doubled 40nm on efficiency with the more recent (GCN 1.2 and Maxwell) architectures, a statement on how early GCN and Kepler sort of sucked. Again, I'm not making anything up; the search engines are your friend here.

 

Unfortunately, history does not back up your claim.

New feature sizes are always more expensive until the fabs get a better handle on power characteristics, how the materials react, and improvements to the patterning and lithography. Yields then increase and costs decrease.

7nm is expensive right now and is thus only used for simple structures with minimal leakage, like NAND; that will change over time.

Every other node has ALWAYS been expensive when it debuted; 28nm, 40nm, 32nm, 90nm all brought with them issues related to costs and yields until the fabs got a better handle on the process involved.

Here: http://www.anandtech.com/show/2937/7
As you can see, even 40nm was plagued with issues; today it is extremely cheap to use, to a point.

AMD did double performance, or close enough. Here are the benchmarks, argument over.

http://www.anandtech.com/bench/product/1495?vs=1513

Performance has changed since 2015 though; Fury has certainly gotten a bigger focus in later driver releases to take advantage of the newer GCN nuances.
http://wccftech.com/amd-r9-fury-x-performance-ahead-nvidia-980-ti-latest-drivers/

Power efficiency gains weren't as pronounced between 40nm and 28nm because die sizes kinda exploded. AMD, for instance, went from 2.6 billion transistors with the Radeon 6970 to 4.3 billion with the 7970.
The 6970, built on 40nm, is about as fast in gaming tasks as the 7850 built on 28nm, and both have a similar number of transistors, yet the 7850 will use a good 100-150 watts less power.

http://anandtech.com/bench/product/1076?vs=1061
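To put rough numbers on that (the board power and performance figures below are ballpark values of the kind launch reviews quoted, not exact measurements), here's a quick perf-per-watt sketch:

```python
# Back-of-envelope perf-per-watt: two cards with similar gaming performance,
# one drawing roughly 250W (HD 6970, 40nm) and one roughly 130W (HD 7850, 28nm).
# The power figures are illustrative ballpark numbers, not measurements.

def perf_per_watt(relative_perf: float, board_power_w: float) -> float:
    """Performance units delivered per watt of board power."""
    return relative_perf / board_power_w

ppw_6970 = perf_per_watt(1.0, 250)   # 40nm part
ppw_7850 = perf_per_watt(1.0, 130)   # 28nm part, similar performance

print(f"HD 6970: {ppw_6970:.4f} perf/W")
print(f"HD 7850: {ppw_7850:.4f} perf/W")
print(f"Improvement: {ppw_7850 / ppw_6970:.2f}x")   # roughly 1.9x
```

So even at similar performance, the node shrink (plus the newer architecture) shows up as a near doubling of perf-per-watt under those assumptions.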

JEMC said:
Pemalite said:

AMD's Fury is high-end. AMD will abandon the high-end (or just keep Fury around) until Vega.

Vega also might drop late this year. Depends how the cards (Pun intended) drop.

I don't know how AMD could keep Fury as their high-end card with only 4GB. That's why I think (or hope) that the 490X will be faster than Fury and around GTX 1070 levels of performance.

Also, I think that AMD has Vega up and running, but they are waiting for HBM2 prices to come down before launching it. They expect that to happen early next year, but if it happens sooner, they could release Vega late this year.

Having 4GB is certainly a hindrance over the long term.
However... they were limited technically. HBM was new, AMD was on the cutting edge, and they wanted to repeat the success they had with the Radeon 4000 series; unfortunately HBM wasn't there all the way.
With HBM2 things will change: larger capacities, more bandwidth... And then in 2017/2018 we should see the successor to HBM.

AMD will also be releasing a 200-300W APU with HBM, which should run rings around the PS4.

torok said:
JEMC said:

I agree with you that having a "halo" product improves the perception of a company, but given the financial situation of AMD, focusing on what sells best is the right choice.

Yes, they are pretty beaten up. They lost tons of share in the GPU market, and the CPU one is even worse. They have a console "monopoly", but integrated parts have slim profit margins, so that won't keep them afloat. I'm a big fan of them and would really like to see Zen put them back in the fight. I don't want my next CPU to be from Intel, but they need to step up their game right now.

Although the consoles aren't enough to make AMD profitable... they do give them one thing: cash flow. Cash flow can be just as important as profit, since it can be used to leverage new financing, and it also gives a company a degree of financial stability.

Zen though, although I wish it would beat Intel... probably won't. It will make up a lot of lost ground though... To be fair, AMD doesn't really need to beat Intel, they just need to be "good enough" at the right price... And right now, IMHO, anyone who buys an AMD CPU in 2016 is insane; Zen should change that.

JEMC said:
torok said:

The decision to go with only 4GB was made to allow the use of HBM. AMD got exclusive access to HBM and a high share of the initial production of the HBM2 modules. They kind of screwed the Fury X, but managed to get a huge share of HBM2. Nvidia isn't using it on Pascal, probably because the number of modules they would be able to secure was too low, so they went with GDDR5X. It was a strategic sacrifice to "screw" Pascal. Let's see if it pays off.

They got the rights to use HBM because they co-developed it with Hynix, and used it on Fury both to show off what they had done and because HBM uses less power than GDDR5, and they needed that margin with a beast like Fury.

The thing with HBM2 is that Samsung also makes the modules and Nvidia has access to them. But it's too expensive to use for now, which is why neither AMD nor Nvidia are using it.

It also gives AMD experience with that technology, experience that might hopefully give them a leg-up over nVidia with an HBM2 implementation.

And the reason it isn't being used is that nothing is ready to use it. :P You can't just drop it onto our current GPUs and call it a day, unfortunately; you need to make modifications to the memory controller... PCB layouts need to change to account for the different traces and the interposer... Which is why it is releasing with the new enthusiast-level cards.
GDDR5X will also have a bandwidth bump over GDDR5, so cards lower down the stack won't need HBM and nVidia/AMD/partners can take a bit more profit home.
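As a rough illustration of that bandwidth bump (using the commonly quoted per-pin data rates of the time, 7 Gbps GDDR5 vs 10 Gbps GDDR5X on a 256-bit bus; the specific cards are just examples):

```python
# Peak memory bandwidth = (bus width in bits / 8 bits per byte) * data rate per pin.
# Data rates are the commonly quoted launch figures; treat them as illustrative.

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

gddr5  = bandwidth_gb_s(256, 7.0)    # e.g. GTX 980 class: 224 GB/s
gddr5x = bandwidth_gb_s(256, 10.0)   # e.g. GTX 1080 class: 320 GB/s

print(f"GDDR5  @ 7 Gbps on 256-bit:  {gddr5:.0f} GB/s")
print(f"GDDR5X @ 10 Gbps on 256-bit: {gddr5x:.0f} GB/s")
```

That's roughly 40% more bandwidth on the same bus width, which is why the mid- and high-end cards can skip HBM for now.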




www.youtube.com/@Pemalite

Pemalite said:

...

JEMC said:

I don't know how AMD could keep Fury as their high-end card with only 4GB. That's why I think (or hope) that the 490X will be faster than Fury and around GTX 1070 levels of performance.

Also, I think that AMD has Vega up and running, but they are waiting for HBM2 prices to come down before launching it. They expect that to happen early next year, but if it happens sooner, they could release Vega late this year.

Having 4GB is certainly a hindrance over the long term.
However... they were limited technically. HBM was new, AMD was on the cutting edge, and they wanted to repeat the success they had with the Radeon 4000 series; unfortunately HBM wasn't there all the way.
With HBM2 things will change: larger capacities, more bandwidth... And then in 2017/2018 we should see the successor to HBM.

AMD will also be releasing a 200-300W APU with HBM, which should run rings around the PS4.

The big problem with HBM is that the 1024-bit bus per stack takes up so many traces that AMD and Nvidia are only able to use 4 modules. Even the GP100 is only using 4 stacks, though of HBM2.
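Just to sketch why the stack count matters so much (the per-pin data rates below are the commonly quoted ballpark figures, so treat the exact numbers as illustrative):

```python
# Each HBM stack brings its own 1024-bit interface, so aggregate bandwidth
# scales with the number of stacks the interposer can host.
# Per-pin rates: ~1 Gbps for HBM1 (Fury X class), ~1.4 Gbps for HBM2 (GP100 class).

def hbm_bandwidth_gb_s(stacks: int, bits_per_stack: int, gbps_per_pin: float) -> float:
    """Aggregate peak bandwidth in GB/s across all stacks."""
    return stacks * bits_per_stack / 8 * gbps_per_pin

print(f"4x HBM1: {hbm_bandwidth_gb_s(4, 1024, 1.0):.0f} GB/s")   # ~512 GB/s
print(f"4x HBM2: {hbm_bandwidth_gb_s(4, 1024, 1.4):.0f} GB/s")   # ~717 GB/s
```

So being capped at 4 stacks caps both capacity and total bus width, which is the trade-off being discussed here.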

I wonder what that HBM successor might be. Has anyone heard anything about it?

That APU you talk about... I find it hard to believe, to be honest. Not only because the first mention of it came from Fudzilla (at least, if you're talking about that 16 Zen cores + Greenland + HBM rumor), but because not even AMD would launch such a power-hungry APU. It would need to come bundled with a CLC unit, and that would make it too expensive to be worth buying over a separate CPU+GPU, which could use less power and be even faster.



Please excuse my bad English.

Former gaming PC: i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Current gaming PC: R5-7600, 32GB RAM 6000MT/s (CL30) and a RX 9060XT 16GB

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.


JEMC said:
Pemalite said:

...

Having 4GB is certainly a hindrance over the long term.
However... they were limited technically. HBM was new, AMD was on the cutting edge, and they wanted to repeat the success they had with the Radeon 4000 series; unfortunately HBM wasn't there all the way.
With HBM2 things will change: larger capacities, more bandwidth... And then in 2017/2018 we should see the successor to HBM.

AMD will also be releasing a 200-300W APU with HBM, which should run rings around the PS4.

The big problem with HBM is that the 1024-bit bus per stack takes up so many traces that AMD and Nvidia are only able to use 4 modules. Even the GP100 is only using 4 stacks, though of HBM2.

I wonder what that HBM successor might be. Has anyone heard anything about it?

That APU you talk about... I find it hard to believe, to be honest. Not only because the first mention of it came from Fudzilla (at least, if you're talking about that 16 Zen cores + Greenland + HBM rumor), but because not even AMD would launch such a power-hungry APU. It would need to come bundled with a CLC unit, and that would make it too expensive to be worth buying over a separate CPU+GPU, which could use less power and be even faster.

The traces aren't really the problem. That is what the interposers are for.
However... interposers are also built at 65nm and they do cost a fair chunk of coin. You could technically have two stacks of HBM per interposer position for a total of 8 if the interposer was designed to accommodate it, but they aren't. It's one thing to do HBM version 2; doing an interposer version 2 on a different fabrication process in order to accommodate multiple stacks is a different game entirely.

But then costs will also blow out; there's only so much you can do per market segment, and HBM is already expensive.

As for HBM's successor, AMD has just labelled it "Next Gen Memory". I would hazard a guess that, since HBM is seen as "wide", AMD/JEDEC/memory companies might take advantage of HBM's inherent advantages and drive up the speed instead.

As for the APU:
Anandtech at one point asked its reader base what users wanted to see out of AMD, and AMD was actually reading and responding to some of the comments... Without question, everyone wanted a beefy APU with built-in GDDR5 or better RAM. AMD had actually done something similar with the 780 chipsets, where they would bundle 64MB-128MB of DDR3 RAM for the motherboard's graphics.
So AMD knows there is demand, at least in the enthusiast community.

Now, the APU in question isn't for your everyday consumer; it is for the HPC market, aka server-grade stuff that already uses 16+ CPU cores and beefy GPUs for compute. It just happens that AMD currently sells both separately, one as Opteron and the other as FireGL.
AMD themselves have already confirmed the existence of such a product, but what kind of hardware it packs remains to be seen. Keep in mind it is for the HPC market, so it's likely to be extremely chunky; they don't typically do low-end stuff. ;)

Thus... cooling isn't going to be an issue; these will likely come complete as a "compute add-in board" or an all-in-one motherboard with tons of ports for drives.

And even then, you don't need liquid cooling to deal with 200-300W; a motherboard allows for a larger surface area than a GPU, and GPUs have been cooling such wattages for years thanks to heat pipes, vapour chambers and similar designs.




www.youtube.com/@Pemalite

JEMC said:
taikamya said:
GTX1080 + Vulkan API = Doom @ 200fps

Just amazing!!!

http://www.gamespot.com/articles/see-doom-running-on-nvidias-gtx-1080-reaches-200fp/1100-6439598/

I'd have created a topic on this, but I can't yet. If someone does, it'd be great! Just pass the information along.

Vasto did yesterday: http://gamrconnect.vgchartz.com/thread.php?id=216435

Awesome then! I don't log in every day, so I can't keep track of topics. Thanks!



Pemalite said:
JEMC said:

The big problem with HBM is that the 1024-bit bus per stack takes up so many traces that AMD and Nvidia are only able to use 4 modules. Even the GP100 is only using 4 stacks, though of HBM2.

I wonder what that HBM successor might be. Has anyone heard anything about it?

That APU you talk about... I find it hard to believe, to be honest. Not only because the first mention of it came from Fudzilla (at least, if you're talking about that 16 Zen cores + Greenland + HBM rumor), but because not even AMD would launch such a power-hungry APU. It would need to come bundled with a CLC unit, and that would make it too expensive to be worth buying over a separate CPU+GPU, which could use less power and be even faster.

The traces aren't really the problem. That is what the interposers are for.
However... interposers are also built at 65nm and they do cost a fair chunk of coin. You could technically have two stacks of HBM per interposer position for a total of 8 if the interposer was designed to accommodate it, but they aren't. It's one thing to do HBM version 2; doing an interposer version 2 on a different fabrication process in order to accommodate multiple stacks is a different game entirely.

But then costs will also blow out; there's only so much you can do per market segment, and HBM is already expensive.

As for HBM's successor, AMD has just labelled it "Next Gen Memory". I would hazard a guess that, since HBM is seen as "wide", AMD/JEDEC/memory companies might take advantage of HBM's inherent advantages and drive up the speed instead.

As for the APU:
Anandtech at one point asked its reader base what users wanted to see out of AMD, and AMD was actually reading and responding to some of the comments... Without question, everyone wanted a beefy APU with built-in GDDR5 or better RAM. AMD had actually done something similar with the 780 chipsets, where they would bundle 64MB-128MB of DDR3 RAM for the motherboard's graphics.
So AMD knows there is demand, at least in the enthusiast community.

Now, the APU in question isn't for your everyday consumer; it is for the HPC market, aka server-grade stuff that already uses 16+ CPU cores and beefy GPUs for compute. It just happens that AMD currently sells both separately, one as Opteron and the other as FireGL.
AMD themselves have already confirmed the existence of such a product, but what kind of hardware it packs remains to be seen. Keep in mind it is for the HPC market, so it's likely to be extremely chunky; they don't typically do low-end stuff. ;)

Thus... cooling isn't going to be an issue; these will likely come complete as a "compute add-in board" or an all-in-one motherboard with tons of ports for drives.

And even then, you don't need liquid cooling to deal with 200-300W; a motherboard allows for a larger surface area than a GPU, and GPUs have been cooling such wattages for years thanks to heat pipes, vapour chambers and similar designs.

Cooling is an issue when all those 200 or 300W of heat come from a single chip the size of an APU. There are no CPU heatsinks designed to tame such a monster, which is why I talked about AMD having to go with a CLC unit to cool such a thing.

But it doesn't really matter, because such an APU for the HPC market won't go into retail for us to buy.

That said, there was a rumor a couple of months ago of AMD working on a Bristol Ridge APU with 16 CUs, which would put it almost on the same level as an Xbox One (though with DDR4, the bandwidth would be quite a bit lower). For something more powerful, we'll have to wait for Zen and Pascal, well into next year.

*Edit*

I found the rumor: http://wccftech.com/amd-bristol-ridge-16-cu-apu/



Please excuse my bad English.

Former gaming PC: i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Current gaming PC: R5-7600, 32GB RAM 6000MT/s (CL30) and a RX 9060XT 16GB

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

taikamya said:
JEMC said:

Vasto did yesterday: http://gamrconnect.vgchartz.com/thread.php?id=216435

Awesome then! I don't log in every day, so I can't keep track of topics. Thanks!

Well you could if you were using the Latest Topics ;)

One day is just about a page and a half.



If you demand respect or gratitude for your volunteer work, you're doing volunteering wrong.

Ashes of the Singularity benchmarks for the GTX 1080 and other unreleased Nvidia and AMD cards have been leaked.

 

NVIDIA GeForce GTX 1080 DirectX 12 Benchmarks in Ashes of The Singularity Revealed

http://wccftech.com/nvidia-geforce-gtx-1080-dx12-benchmarks/#ixzz48BBQJtVh

 

NVIDIA GTX 1080, AMD Polaris 10/11 Ashes of Singularity DirectX12 benchmarks leaked

http://videocardz.com/59725/nvidia-gtx-1080-polaris-10-11-directx12-benchmarks

 

I don't know how the AotS benchmark works, but feel free to comment if you can understand it.

 

*Edit*

Here's a ghetto unboxing

NVIDIA GeForce GTX 1080 'Founder's Edition' (Ghetto) Unboxing

http://www.tweaktown.com/articles/7700/nvidia-geforce-gtx-1080-founders-edition-ghetto-unboxing/index.html



Please excuse my bad English.

Former gaming PC: i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Current gaming PC: R5-7600, 32GB RAM 6000MT/s (CL30) and a RX 9060XT 16GB

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

This is starting to get exciting; I can't wait for the new NV and AMD cards to hit the market. At this point, obviously, we have to keep in mind that the claims we've heard are PR crap and nothing more - there's half a page of fine print after every "twice as powerful as the Titan". Still, I hope the new cards will not only be more power efficient, but also more powerful. I'm so happy I waited, and I can't wait to see the benchmarks and finally get my hands on my new dream GPU.



Wii U is a GCN 2 - I called it months before the release!

My Vita to-buy list: The Walking Dead, Persona 4 Golden, Need for Speed: Most Wanted, TearAway, Ys: Memories of Celceta, Muramasa: The Demon Blade, History: Legends of War, FIFA 13, Final Fantasy HD X, X-2, Worms Revolution Extreme, The Amazing Spiderman, Batman: Arkham Origins Blackgate - too many no-gaemz :/

My consoles: PS2 Slim, PS3 Slim 320 GB, PSV 32 GB, Wii, DSi.

sc94597 said:

Do you think CDPR put a different amount of effort in each version? I don't think they did. The same can't be said for Batman.

I used The Witcher 3 in particular because it is a GPU-bound game (you can't say that the Jaguar in the PS4 is the limiting factor here). Other games show similar performance, though.

I said nothing about APIs, nor about the theoretical reasons why there might be better optimization. Nor am I disputing the legitimacy of the developers' claims. I even alluded to this by mentioning that it is true for first-party games, where developers have an incentive to make the game perform better. I solely spoke of what we ACTUALLY see. Last generation, a GPU equivalent to those found in consoles did not last you as long; one or two years and you'd have to upgrade in order to keep up. This generation, the 750 Ti and R9 270X are keeping up with the PS4 and still greatly outperforming the XBO. The entire scope of the discussion until now has been only about GPUs, so I don't know where you got the idea that I was discussing microarchitecture. I was mostly talking about unified game engines that make porting easier (Unity, UE4, etc.) and platform architecture in general (you don't have a crazy Cell with SPEs that makes multiprocessing a nightmare to relearn if you want to make low-level optimizations; not even talking about a compiler here, just running costly loops in assembly and such).

We can also mention how PC APIs have advanced this generation.

Oh, if you are talking about how devs now have access to similar APIs on all platforms, and to multiplatform developer tools, so they basically make one build for all platforms, then you are correct. Exclusives wouldn't even be about incentives, but simply about getting a more specialized build instead of a generic one.

I believe that, this gen, the ports are much better, and I'm talking about PC -> consoles and consoles -> PC. The APIs are more similar and multiplatform dev tools are way better, so everyone will have more games in the end. But that's more a reflection of companies figuring out that if dev costs went up again like they did last gen, the costs would be prohibitive. So devs can now focus on a "single" build and just tune minor aspects for each version. And that's OK. The only way to push visuals now is having direct funding to do so (first party) or having direct funding for that purpose (Star Citizen).