| vivster said: If we get reviews on the 17th it might not be a paper launch after all. Gonna be really tough for me to hold back and wait for the 1080ti or Titan. |
Hopefully it's a hard launch.
Hopefully nVidia beating AMD's Vega to market will make AMD push Vega a little harder. :)
haxxiy said:
Nope? You need to conform to an increasingly complex set of rules when designing smaller chips, along with more advanced tools, processes and IP. There is no way around the need for roughly 500 man-years to design a mid-range SoC on 7nm, no matter how cheap the foundries make these pieces of silicon out to be. I'm not making this up, by the way; you can search for the sources of anything I'm saying. So it cannot and will not change, since we can mathematically predict where we will end up on more advanced nodes. And those predictions are best-case scenarios in themselves, since the foundries don't want to look bad on their own roadmaps, do they?

Now, on to considerations of architecture. AMD didn't quite manage to double the performance. Fiji is about 50% more efficient than the Tahiti chips, and that's factoring in the HBM. Maybe if you compare the worst-reviewed first batch of Tahiti chips with an R9 Nano (which itself was an effort of desperation to look good on power efficiency, selling an underclocked 8.9B transistor chip to take on mini GTX 970s) you can sort of claim the power efficiency has doubled. The 28nm GCN chips were themselves terrible on power efficiency, beating the 40nm Evergreen chips by less than 50% in most instances. In fact, 28nm only did what it was supposed to do and doubled 40nm efficiency with the more recent (GCN 1.2 and Maxwell) architectures, a statement on how early GCN and Kepler sort of sucked. Again, I'm not making anything up; the search engines are your friend here.
|
Unfortunately, history does not back up your claim.
New feature sizes are always more expensive until the fabs get a better handle on power characteristics, how the materials react, and even improvements to the patterning and lithography. Yields will then increase, and costs will decrease.
7nm is expensive right now and is thus only used for simple structures with minimal leakage, like NAND; that will change over time.
Every other node has ALWAYS been expensive when it debuted. 28nm, 40nm, 32nm, 90nm: they all brought with them issues related to cost and yield until the fabs got a better understanding of the process involved.
Here: http://www.anandtech.com/show/2937/7
As you can see, even 40nm was plagued with issues; today it is extremely cheap to use, to a point.
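To put the yield point in concrete terms, here's a toy per-die cost model. The wafer cost, die count and yield figures below are invented purely for illustration; they don't reflect real foundry pricing:

```python
# Toy model: the cost of a good die is the wafer cost amortized
# over the dies that actually work. All numbers are made up.
def cost_per_good_die(wafer_cost, dies_per_wafer, yield_rate):
    return wafer_cost / (dies_per_wafer * yield_rate)

# Early in a node's life, yields are poor...
early = cost_per_good_die(wafer_cost=5000, dies_per_wafer=200, yield_rate=0.40)
# ...and improve as the fab learns the process.
mature = cost_per_good_die(wafer_cost=5000, dies_per_wafer=200, yield_rate=0.85)

print(f"early node: ${early:.2f} per good die")    # $62.50
print(f"mature node: ${mature:.2f} per good die")  # $29.41
```

Same wafer, same die size: just raising yield from 40% to 85% more than halves the per-die cost, which is the dynamic behind every node getting cheaper with maturity.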
AMD did double performance, or close enough. Here are the benchmarks; argument over.
http://www.anandtech.com/bench/product/1495?vs=1513
Performance has changed since 2015 though, Fury has certainly gotten a bigger focus in later driver releases to take advantage of the newer GCN nuances.
http://wccftech.com/amd-r9-fury-x-performance-ahead-nvidia-980-ti-latest-drivers/
Power efficiency gains weren't as pronounced between 40nm and 28nm because die sizes kind of exploded. AMD, for instance, went from 2.6 billion transistors with the Radeon 6970 to 4.3 billion with the 7970.
The 6970, built on 40nm, is about as fast in gaming tasks as the 7850 built on 28nm, and both have a similar number of transistors; the 7850, though, will use a good 100-150 watts less power.
http://anandtech.com/bench/product/1076?vs=1061
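That gap is easy to put in numbers. A quick sketch using the cards' rated board TDPs (250W for the 6970, 130W for the 7850); real-world draw varies, so treat the exact ratio as approximate:

```python
# Perf-per-watt at roughly equal gaming performance.
# TDPs are rated board figures; actual power draw varies per game.
perf = 1.0       # both cards land at about the same performance
tdp_6970 = 250   # 40nm Radeon HD 6970
tdp_7850 = 130   # 28nm Radeon HD 7850

gain = (perf / tdp_7850) / (perf / tdp_6970)
print(f"28nm perf/W gain over 40nm: {gain:.2f}x")  # 1.92x
```

So at similar performance the 28nm part lands at roughly 1.9x the efficiency, which is close to the "node doubles efficiency" expectation once you compare like for like.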
JEMC said:
I don't know how could AMD keep Fury as their high-end cards with only 4GB. That's why I think (or hope) that the 490X will be faster than Fury and around GTX 1070 levels of performance. Also, I think that AMD has Vega up and running, but they are waiting on HBM2 prices to fall down to launch it. They expect it will happen early next year, but if it happens before, they could release Vega late this year. |
Having 4GB is certainly a hindrance over the long term.
However... They were limited technically. HBM was new and AMD was on the cutting edge; they wanted to repeat the success they had with the Radeon 4000 series, but unfortunately HBM wasn't quite there yet.
With HBM2 things will change: larger capacities, more bandwidth... And then in 2017/2018 we should see the successor to HBM.
AMD will also be releasing a 200-300W APU with HBM, which should run rings around the PS4.
torok said:
Yes, they are pretty beaten up. They lost tons of share in the GPU market, and the CPU side is even worse. They have a console "monopoly", but integrated parts have slim profit margins, so that wouldn't keep them afloat. I'm a big fan of them, and I would really like to see Zen put them back in the fight. I don't want my next CPU to be from Intel, but they need to step up their game right now. |
Although the consoles aren't enough to make AMD profitable, they do give AMD one thing: cash flow. Cash flow can be just as important as profit, since it can be used to leverage new financing, and it gives a company a degree of financial stability.
As for Zen... although I wish it would beat Intel, it probably won't. It will make up for lost ground, though. To be fair, AMD doesn't really need to beat Intel; they just need to be "good enough" at the right price. And right now, IMHO, anyone who buys an AMD CPU in 2016 is insane. Zen should change that.
JEMC said:
They got the rights to use HBM because they co-developed it with Hynix, and used it on Fury both to show off what they had done and because HBM uses less power than GDDR5, and they needed that margin with the beast that is Fury. The thing with HBM2 is that Samsung also makes the modules and Nvidia has access to them. But it's too expensive to use for now, which is why neither AMD nor Nvidia are using it. |
It also gives AMD experience with that technology, experience that will hopefully give them a leg-up over nVidia with their HBM2 implementation.
And the reason it isn't being used is that nothing is ready to use it. :P You can't just drop it onto our current GPUs and call it a day, unfortunately: you need to make modifications to the memory controller, and PCB layouts need to change to account for the different traces and the interposer... which is why it is releasing with the new enthusiast-level cards.
GDDR5X will also bring a bandwidth bump over GDDR5, so cards with less hardware won't need HBM, and nVidia/AMD/partners can take a bit more profit home.
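For a rough sense of that bump, peak bandwidth is just bus width times per-pin data rate. The data rates below are ballpark figures for GDDR5 versus early GDDR5X, not any specific card:

```python
# Peak memory bandwidth in GB/s = (bus width in bits / 8) * data rate in Gbps.
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gbs(256, 8))   # GDDR5 at 8 Gbps on a 256-bit bus  -> 256.0
print(bandwidth_gbs(256, 10))  # GDDR5X at 10 Gbps, same bus width -> 320.0
```

Same bus width, higher per-pin rate: that's how GDDR5X buys mid-range cards more bandwidth without the cost of an interposer.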

www.youtube.com/@Pemalite