
Leaked benchmarks for AMD Radeon RX 480: hits minimum VR spec for $199

Looks like this card is going to be even better value once developers add proper Vulkan support:

https://www.youtube.com/watch?v=WOaHpZjQ73M

The RX 480 now easily beats a GTX 980 in Doom and the Fury X beats a GTX 1070; even the R9 Nano is faster at 1080p.




Saw that earlier. Nice rise from the dead for the Fury X, which has always sold above GTX 1070 prices, so it's great to see it finally pulling its weight for its cost.

The question is whether there will be a better example than Doom where this matters. Getting 60 fps in Doom at high quality settings isn't really an issue for many cards.

I would like to know how many upcoming games are set to be developed from the ground up on Vulkan, besides id Tech games.

 



PC I i7 3770K @4.5GHz I 16GB 2400MHz I GTX 980Ti FTW

Consoles I PS4 Pro I Xbox One S 2TB I Wii U I Xbox 360 S

AnthonyW86 said:

Looks like this card is going to be even better value once developers add proper Vulkan support:

https://www.youtube.com/watch?v=WOaHpZjQ73M

The RX 480 now easily beats a GTX 980 in Doom and the Fury X beats a GTX 1070; even the R9 Nano is faster at 1080p.

Just to put things in context, AMD has released a new driver optimization for Doom and Vulkan while Nvidia still hasn't. That means that, right now, AMD cards have an advantage.



Please excuse my bad English.

Currently gaming on a PC with an i5-4670K @stock (for now), 16 GB RAM at 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

Pemalite said:

[...]

Alby_da_Wolf said:

Probably once the new production process is better refined, and firmware and drivers too, improving power efficiency, AMD will be able to increase the RX 470's performance a little, and the RX 480's a little more, to differentiate them more from each other and also deliver a more significant increase over the previous generation.

What production process? It's 14nm like the rest of the Polaris 10 lineup.


Alby_da_Wolf said:

About the 460, I expect the same from it, and more, such as being used together with the Zen CPU architecture to make APUs a lot more powerful than they are now, while still keeping power consumption within reasonable limits.


APUs aren't built around the concept of taking desktop GPUs and throwing them together with a CPU.
What AMD does is reserve a percentage of the die space in terms of transistor count and drop in a GPU that will "fit" into that space to meet various price/performance targets.

AMD *will* have Graphics Core Next 4 APUs, using the same technology as Polaris/the Radeon RX 480, but scaled differently.
Not only that, but they will be limited by dual-channel DDR configurations unless AMD allows motherboard companies to implement a Sideport-like concept, with memory soldered onto the motherboard and dedicated to the GPU.
Otherwise a fast GPU is essentially wasted.
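To put rough numbers on that bandwidth gap, here is a minimal sketch of the peak-bandwidth arithmetic; the DDR4-2400 and 8 GT/s GDDR5 figures are illustrative assumptions for comparison, not the spec of any particular APU:

```cpp
#include <cstdio>

// Peak theoretical bandwidth in GB/s: bus width (bits) * transfer rate (GT/s) / 8.
double peak_gb_per_s(int bus_width_bits, double giga_transfers_per_s) {
    return bus_width_bits * giga_transfers_per_s / 8.0;
}

int main() {
    // Dual-channel DDR4-2400: 2 x 64-bit channels at 2.4 GT/s (illustrative),
    // and shared between the CPU and the integrated GPU.
    double apu_bw  = peak_gb_per_s(2 * 64, 2.4);   // ~38 GB/s
    // An RX 480-class discrete card for comparison: 256-bit GDDR5 at 8 GT/s,
    // all of it dedicated to the GPU.
    double dgpu_bw = peak_gb_per_s(256, 8.0);      // 256 GB/s
    printf("Dual-channel DDR4-2400 : %.0f GB/s (shared)\n", apu_bw);
    printf("256-bit GDDR5 @ 8 GT/s : %.0f GB/s (dedicated)\n", dgpu_bw);
    return 0;
}
```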

AMD has also laid out plans to build a ~300 W APU with a ton of GPU hardware aimed at the HPC market, which should be faster than the PlayStation 4.

Alby_da_Wolf said:


About drivers, AMD (but NVidia too with SLI) should work to make CrossFire drivers more transparent and allow game devs to use multiple GPUs as if they were a single, more powerful GPU, requiring them to write specific code only if they want to go to a lower level and push the limits higher; that should be considered necessary only for the most graphically demanding titles. Surely mid-range CrossFire and SLI solutions, like using multiple cheap GPUs, or the most basic Hybrid CrossFire scheme of adding a single discrete GPU to an onboard or APU one, are not extreme configs and shouldn't require specific coding to be used in games (and other graphics software).


Game developers can and do just that. AMD and nVidia cannot force developers to do anything, but they do work with all major developers and publishers to build multi-GPU "profiles" for their drivers.
Pretty much every major PC release these days has support for multiple GPUs... unless of course you buy your games from the Windows Store, where they are confined to the Universal Windows Platform. (Ugh.)

What you are describing, having two GPUs seen as one, has actually been tried before. Known as Lucid's "Hydra Engine", it was a dedicated chip on some motherboards... It even allowed you to pair an AMD GPU with an nVidia GPU; the way it worked was by intercepting DirectX/OpenGL calls and dividing the work up that way.
However, it was buggy, had artifacts, delivered less than optimal performance, and was expensive.

There are pros and cons to AMD and nVidia's approach, though. They can retain a degree of quality, fix issues and provide optimal performance; the downside, of course, is compatibility, but nVidia and AMD are pretty proactive on that front. I've been running multi-GPU setups since the late '90s, and things are certainly fantastic compared to then; most games today support CrossFire/SLI, or will soon after a game's release.
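To make the frame-splitting idea concrete, here is a toy sketch of alternate-frame rendering, the scheme most of those CrossFire/SLI profiles use; `render_on_gpu` is a hypothetical stand-in for whatever the driver actually does, not a real API:

```cpp
#include <cstdio>

// Hypothetical stand-in for submitting one frame's rendering work to one GPU.
void render_on_gpu(int gpu_index, int frame_index) {
    printf("frame %d -> GPU %d\n", frame_index, gpu_index);
}

int main() {
    const int num_gpus = 2;
    // Alternate-frame rendering: frames are dealt out to the GPUs round-robin,
    // and the driver (via a game profile) hides the split from the game itself.
    for (int frame = 0; frame < 8; ++frame) {
        render_on_gpu(frame % num_gpus, frame);
    }
    return 0;
}
```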

I think the next step to making CrossFire/SLI more feasible is to allow the community to build, test and optimize profiles for games, have the community vote for them, and then propagate them via the cloud to everyone.

 

About the process, yes, I know, and I meant that maybe the whole range could still receive tweaks. We know that the 480's power problem was one of power management, drawing too much power from the PCIe slot instead of from the auxiliary power connector, but even after fixing that, we see that power efficiency is quite a bit lower than promised. While it's quite normal for real-world performance to be lower than the ideal, maybe we can still expect improvements not only from drivers and improved board design, but also from the hardware itself, if the production process can still be tweaked.
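For context on the numbers behind that power issue, a back-of-the-envelope sketch: the 75 W slot and 6-pin figures are PCIe specification limits, while the ~165 W board power is an approximate reviewer measurement of the reference card, not an official spec.

```cpp
#include <cstdio>

int main() {
    // PCIe power-delivery limits (specification values).
    const double slot_limit_w = 75.0;                      // x16 slot
    const double six_pin_w    = 75.0;                      // 6-pin auxiliary connector
    const double budget_w     = slot_limit_w + six_pin_w;  // 150 W total for a 6-pin card

    // Approximate measured board power of a reference RX 480 under load (reviewer figure).
    const double measured_w = 165.0;

    printf("Budget: %.0f W, measured: ~%.0f W, over by ~%.0f W\n",
           budget_w, measured_w, measured_w - budget_w);
    // The reported issue was not just the total but the split: more than 75 W
    // was drawn through the slot, which the later driver fix rebalanced
    // toward the 6-pin connector.
    return 0;
}
```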

About APUs, I wasn't precise: I should have written "the use of the latest AMD GPU cores in them", not literally putting a 460 in a Zen APU, but simply putting the GPU power of a 460, obtained with the latest cores available, into it. Yes, a sideport for GDDR could be a nice idea to avoid wasting the GPU power of the higher-end APUs, although faster DDR4, organised in a quad- or, even better, eight-channel architecture, could at least mitigate the problem.

About drivers and games, thanks for all the explanations. My doubts come from the existence of games that cannot use more than one GPU, or cannot without causing more problems than benefits; maybe it could be possible to transparently offer them at least part of the benefits of multiple GPUs, with APIs that keep the low-level stuff where it belongs, at a level better managed by those who designed the hardware than by game devs.



Stwike him, Centuwion. Stwike him vewy wuffly! (Pontius Pilate, "Life of Brian")
A fart without stink is like a sky without stars.
TGS, Third Grade Shooter: brand new genre invented by Kevin Butler exclusively for Natal WiiToo Kinect. PEW! PEW-PEW-PEW! 
 


JEMC said:
shikamaru317 said:

Just speculating based on the previous gen, where there were the 390 and two main Fury cards, as well as the fact that AMD will probably want three competitors for the 1070, 1080, and 1080 Ti respectively. While there are only two Vega chips in production as you said, there is room for a cut-down version of the bigger Vega chip imo.

Last gen AMD had way too many cards on the market; the Fury, R9 Nano and R9 390X, for example, basically competed against each other. AMD would be stupid to make the same mistake again.

But I agree with you that between the 2304 SPs of the 480 and the rumored 4096 SPs of the full Vega chip, there's enough room for another two cards with 2816 and 3584 SPs respectively (those are the specs of the 390X and Fury).

The entire 300 series was rebadges, though, with Fury added on top as a "halo" product and a test vehicle for mass-produced HBM, and the Nano reserved for a more niche market.

The 390 and 390X were just re-badged Radeon R9 290 and 290X parts, with the bulk of the 200 series being rebadged 7000-series parts.

AMD has simply stagnated, not just in terms of rebadged hardware but also prices, and that is reflected in their market share. They are turning that around... but it will sadly take another couple of years to see the fruits of AMD's new strategy.
Even the 400 series will likely be derived from a substantial amount of rebadged hardware, especially at the low end.

If Vega doesn't launch until next year, it might not launch under the 400 series lineup but rather the 500 series instead, alongside Polaris and general GCN 1.0/1.1/1.2 rebadges.


JEMC said:
shikamaru317 said:

Same here. 490 needs to be able to compete against the 1070 in the $350-$400 range, and while a dual 480 card would be able to match the 1070 based on 480 Crossfire benchmarks, it would also likely cost more than the 1070 and use nearly double the power, not to mention the micro-stuttering issues you get in some games with Crossfire, and the fact that some games (such as Rise of the Tomb Raider) currently don't support Crossfire at all.

I know our discussion is a few days old, but I've realized something that may refute the idea that the 490 is a dual GPU card with Polaris 10 chips.

And it comes from AMD's new naming scheme, and the slide that came with it:

Notice that the tier number is determined by the memory bus width:

9: >256-bit
8 - 7: 256-bit
6 - 5: 128-bit
4: 64-bit

A hypothetical card with two 480 GPUs would still have the same 256-bit controllers, so by AMD's own rule it couldn't be named RX 490. If anything, it could be the RX 480X2.
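As a quick aside, that rule can be written down as a tiny lookup; a sketch using thresholds read off the slide, nothing official from AMD:

```cpp
#include <cstdio>

// Rough encoding of the tier rule from AMD's naming slide: the second digit of
// "RX 4x0" tracks the memory bus width. Returns the top tier of each band.
int tier_from_bus_width(int bus_width_bits) {
    if (bus_width_bits > 256)  return 9;  // 9: wider than 256-bit
    if (bus_width_bits >= 256) return 8;  // 8-7: 256-bit
    if (bus_width_bits >= 128) return 6;  // 6-5: 128-bit
    return 4;                             // 4: 64-bit
}

int main() {
    // A dual-Polaris card still has two independent 256-bit controllers,
    // so by this rule it would not qualify for the 9x tier.
    printf("256-bit (RX 480 / dual 480): tier %d\n", tier_from_bus_width(256));
    printf(">256-bit (e.g. HBM2 Vega)  : tier %d\n", tier_from_bus_width(4096));
    return 0;
}
```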

AMD *could* also launch two dual-GPU cards.
With the Radeon 3000 and 4000 series, AMD actually released multiple X2 cards, one based on their fastest GPU and the other on a slower GPU, namely the Radeon 3850 X2 and 3870 X2, and the 4850 X2 and 4870 X2. It could be a return to form, and we could end up with both.

Alby_da_Wolf said:

 

About the process, yes, I know, and I meant that maybe the whole range could still receive tweaks. We know that the 480's power problem was one of power management, drawing too much power from the PCIe slot instead of from the auxiliary power connector, but even after fixing that, we see that power efficiency is quite a bit lower than promised. While it's quite normal for real-world performance to be lower than the ideal, maybe we can still expect improvements not only from drivers and improved board design, but also from the hardware itself, if the production process can still be tweaked.

Well, it goes without saying that there will be some refinement in production as time goes on that should result in some reduction in power consumption after a few revisions.
But it's not going to be anything significant...

For something more significant AMD will likely need to do another respin and that won't happen for a long time yet, if ever.
Basically, the best refinement we can expect is from custom cards.

Alby_da_Wolf said:

 

About APUs, I wasn't precise: I should have written "the use of the latest AMD GPU cores in them", not literally putting a 460 in a Zen APU, but simply putting the GPU power of a 460, obtained with the latest cores available, into it. Yes, a sideport for GDDR could be a nice idea to avoid wasting the GPU power of the higher-end APUs, although faster DDR4, organised in a quad- or, even better, eight-channel architecture, could at least mitigate the problem.



Quad-channel and octa-channel carry increased costs due to a substantial jump in the PCB traces required, which means more PCB layers and thus more engineering to route everything properly.
It's not gonna happen. :P Not in a consumer-grade APU, anyway.

DDR4 and the bandwidth-saving technology AMD has implemented in GCN 1.3/4.0 will help, but it's not going to give you high-end GPU performance; mid-range maybe, low-end certainly.

Alby_da_Wolf said:

 

About drivers and games, thanks for all the explanations. My doubts come from the existence of games that cannot use more than one GPU, or cannot without causing more problems than benefits; maybe it could be possible to transparently offer them at least part of the benefits of multiple GPUs, with APIs that keep the low-level stuff where it belongs, at a level better managed by those who designed the hardware than by game devs.



That is where Microsoft is taking DirectX 12. Game developers can build their games to support their own multi-GPU implementation; this is what nVidia has backed when it comes to SLI support for more than two cards in games.

But... leaving it in the hands of developers means it's never going to catch on: considering that the vast majority of PCs use a single GPU and consoles all use a single GPU, for developers it's a waste of resources.

At the moment, though, if you want more than two GPUs for gaming, AMD is where it's at.
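For anyone curious what "leaving it to developers" looks like in practice, here is a minimal sketch of the first step of DX12 explicit multi-adapter: enumerating every GPU and creating a device per adapter with standard DXGI/D3D12 calls. How work is then scheduled across those devices is entirely up to the application and is omitted here.

```cpp
#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

#pragma comment(lib, "dxgi.lib")
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    // Walk every adapter (GPU) in the system. Under explicit multi-adapter the
    // application creates one D3D12 device per adapter and splits work itself.
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue;  // skip the software (WARP) adapter

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            wprintf(L"GPU %u: %s, %zu MB of dedicated VRAM\n", i, desc.Description,
                    desc.DedicatedVideoMemory / (1024 * 1024));
        }
    }
    return 0;
}
```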



--::{PC Gaming Master Race}::--

Pemalite said:

 

JEMC said:

I know our discussion is a few days old, but I've realized something that may refute the idea that the 490 is a dual GPU card with Polaris 10 chips.

And it comes from AMD's new naming scheme, and the slide that came with it:

Notice that the tier number is determined by the memory bus width:

9: >256-bit
8 - 7: 256-bit
6 - 5: 128-bit
4: 64-bit

A hypothetical card with two 480 GPUs would still have the same 256-bit controllers, so by AMD's own rule it couldn't be named RX 490. If anything, it could be the RX 480X2.

AMD *could* also launch two dual-GPU cards.
With the Radeon 3000 and 4000 series, AMD actually released multiple X2 cards, one based on their fastest GPU and the other on a slower GPU, namely the Radeon 3850 X2 and 3870 X2, and the 4850 X2 and 4870 X2. It could be a return to form, and we could end up with both.

There's no need to go that far back: AMD launched the 295X2 with two Hawaii XT chips, and this year they finally launched the Radeon Pro Duo with two Fiji XT processors.

A dual-Polaris card wouldn't surprise anyone; it's only the name that's open to discussion. RX 480X2? RX 480 Duo?



Please excuse my bad English.

Currently gaming on a PC with an i5-4670K @stock (for now), 16 GB RAM at 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

3DMark's Time Spy DX12 demo is out today, and on that API the RX 480 gets near a GTX 980 without async compute, and above that high-end GPU with it enabled. Really nice for a $240 GPU. Here are some tests:

http://www.pcworld.com/article/3095301/components-graphics/3dmark-time-spy-tested-we-pit-radeon-vs-geforce-in-this-major-new-dx12-benchmark.html



 

DoYou Want DOZENS OF NO GAEMZ?! then... Visit the Official PlayStation Vita Tread

shikamaru317 said:

More signs pointing to RX 490 releasing this year. Question is, is it a dual 480 card, a secret Polaris chip, or the first of the Vega chips? Tweaktown and WCCF Tech both seem to think it's a dual GPU card.

http://www.tweaktown.com/news/52997/amd-radeon-rx-490-teased-higher-end-card-launching-late-year/index.html
http://wccftech.com/amd-rx-490-mystery-4k-gaming-gpu-listed-sapphire/

I wouldn't put too much faith in that TweakTown article, even more so when it contains bits like this one: "We know from AMD's revised naming system for the Radeon series that the RX 490 will feature a 256-bit memory bus and will be aimed at 4K and VR gaming."

Looking at AMD's new naming slide, any 490 card should have a memory bus wider than 256-bit, not a 256-bit one. They are clearly presenting their thoughts/guesses as facts.

 

Now, on another note:

SK Hynix to Ship HBM2 Memory by Q3-2016

https://www.techpowerup.com/224149/sk-hynix-to-ship-hbm2-memory-by-q3-2016

Korean memory and NAND flash giant SK Hynix announced that it will have HBM2 memory ready for order within Q3 2016 (July-September). The company will ship 4-gigabyte HBM2 stacks in the 4-Hi stack (4-die stack) form factor, in two speeds: 2.00 Gbps (256 GB/s per stack), bearing model number H5VR32ESM4H-20C; and 1.60 Gbps (204 GB/s per stack), bearing model number H5VR32ESM4H-12C. With four such stacks over a 4096-bit HBM2 interface, graphics cards with 16 GB of total memory can be built.
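Those per-stack figures follow directly from the 1024-bit stack interface and the quoted pin speeds; a quick sketch of the arithmetic, including the four-stack total:

```cpp
#include <cstdio>

int main() {
    const int    bits_per_stack = 1024;  // HBM2 stack interface width
    const double pin_speed_gbps = 2.0;   // per-pin data rate (H5VR32ESM4H-20C)
    const int    stacks         = 4;
    const int    gb_per_stack   = 4;     // 4-Hi stack, 4 GB

    // Bandwidth per stack: interface width (bits) * pin speed (Gbps) / 8 bits per byte.
    double per_stack_gbs = bits_per_stack * pin_speed_gbps / 8.0;  // 256 GB/s
    printf("Per stack: %.0f GB/s\n", per_stack_gbs);
    printf("4 stacks (4096-bit): %.0f GB/s, %d GB total\n",
           per_stack_gbs * stacks, gb_per_stack * stacks);
    return 0;
}
```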



Please excuse my bad English.

Currently gaming on a PC with an i5-4670K @stock (for now), 16 GB RAM at 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

JEMC said:

 

Now, on another note:

SK Hynix to Ship HBM2 Memory by Q3-2016

https://www.techpowerup.com/224149/sk-hynix-to-ship-hbm2-memory-by-q3-2016

Korean memory and NAND flash giant SK Hynix announced that it will have HBM2 memory ready for order within Q3 2016 (July-September). The company will ship 4-gigabyte HBM2 stacks in the 4-Hi stack (4-die stack) form factor, in two speeds: 2.00 Gbps (256 GB/s per stack), bearing model number H5VR32ESM4H-20C; and 1.60 Gbps (204 GB/s per stack), bearing model number H5VR32ESM4H-12C. With four such stacks over a 4096-bit HBM2 interface, graphics cards with 16 GB of total memory can be built.

 

*Wets Pants*

Vega here we come!



--::{PC Gaming Master Race}::--

DemoniOtaku said:
3DMark's Time Spy DX12 demo is out today, and on that API the RX 480 gets near a GTX 980 without async compute, and above that high-end GPU with it enabled. Really nice for a $240 GPU. Here are some tests:

http://www.pcworld.com/article/3095301/components-graphics/3dmark-time-spy-tested-we-pit-radeon-vs-geforce-in-this-major-new-dx12-benchmark.html

Yeah, the Maxwell-gen cards will lose a lot of ground against competing AMD cards in newer APIs, as they were extremely fine-tuned for DX11 performance.