
Carzy Zarx’s PC Gaming Emporium - Catch Up on All the Latest PC Gaming Related News

QUAKECore89 said:

http://wccftech.com/amd-radeon-rx-vega-mining-performance-great/

https://youtu.be/XvuM3DjvYf0?t=1m14s

C'mon, at least post something positive, even if it comes from TweakTown

 

AMD Radeon RX Vega 56 leaked benchmarks: GTX 1070 killer
http://www.tweaktown.com/news/58635/amd-radeon-rx-vega-56-leaked-benchmarks-gtx-1070-killer/index.html
An industry source of mine has provided me with some raw benchmark numbers on Radeon RX Vega 56, which looks to be the new $400 mainstream king, beating the GTX 1070 in some of the biggest games on the market. My source said that the RX Vega 56 card was running on an Intel Core i7-7700K @ 4.2GHz, had 16GB of DDR4-3000MHz RAM, and was running Windows 10.

The benchmarks were run at 2560x1440 with the AMD Radeon RX Vega 56 easily beating NVIDIA's GeForce GTX 1070 in Battlefield 1, DOOM, Civilization 6, and even Call of Duty: Infinite Warfare. My source said that Battlefield 1 was run on Ultra settings, Civ 6 was on Ultra with 4x MSAA, DOOM was at Ultra with 8x TSAA enabled, and COD:IW was running on its High preset.

Radeon RX Vega 56 benchmark results:
  • Battlefield 1: 95.4FPS (GTX 1070: 72.2FPS)
  • Civilization 6: 85.1FPS (GTX 1070: 72.2FPS)
  • DOOM: 101.2FPS (GTX 1070: 84.6FPS)
  • COD:IW: 99.9FPS (GTX 1070: 92.1FPS)

As you can see, the Radeon RX Vega 56 is quite the potent monster at $399... where it not only trades blows with the GeForce GTX 1070, but slaps it around considerably. It looks like we can expect the RX Vega 56 to be a huge 32% faster than the GTX 1070 in BF1 at 1440p.
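>>For what it's worth, the claimed gap checks out against the listed figures; computing the per-game gains purely from the numbers above:

```cpp
#include <cstdio>

int main() {
    // Leaked 1440p averages from the list above: {Vega 56, GTX 1070}.
    const struct { const char* game; double vega, gtx; } r[] = {
        {"Battlefield 1",   95.4, 72.2},
        {"Civilization 6",  85.1, 72.2},
        {"DOOM",           101.2, 84.6},
        {"COD:IW",          99.9, 92.1},
    };
    for (const auto& x : r)  // BF1 works out to ~32%, matching the claim
        printf("%-15s +%.0f%%\n", x.game, 100.0 * (x.vega / x.gtx - 1.0));
    return 0;
}
```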



Please excuse my bad English.

Currently gaming on a PC with an i5-4670k@stock (for now), 16GB RAM 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

JEMC said:
QUAKECore89 said:

http://wccftech.com/amd-radeon-rx-vega-mining-performance-great/

https://youtu.be/XvuM3DjvYf0?t=1m14s

C'mon, at least post something positive, even if it comes from TweakTown

 

AMD Radeon RX Vega 56 leaked benchmarks: GTX 1070 killer
http://www.tweaktown.com/news/58635/amd-radeon-rx-vega-56-leaked-benchmarks-gtx-1070-killer/index.html

My plan was the Vega 64, because I want to play games at 4K 60fps. :/

Sigh... Guess I'll buy a GTX 1080Ti. :sob:



Friday (yay!) news:

 

SALES & "SALES"/DEALS

Not here...

 

SOFTWARE

Destiny 2 will not be compatible with EVGA Precision, MSi Afterburner, FRAPS, OBS and XSplit
http://www.dsogaming.com/news/destiny-2-will-not-be-compatible-with-evga-precision-msi-afterburner-fraps-obs-and-xsplit/
Bungie has announced that Destiny 2 won’t be compatible with a lot of well-known third-party monitoring programs. The dev team noted that it will resist attempts by third-party applications to insert code into the game client, in order to prevent hackers from getting access to it. As such, programs like FRAPS, EVGA Precision and MSi Afterburner will not be supported.

 

Valve makes changes to Steam groups to fight spammers and simplify signups
http://www.pcgamer.com/valve-makes-changes-to-steam-groups-to-fight-spammers-and-simplify-signups/
The most recent round of Valve's Steam-tinkering brings changes to Steam community groups, aimed at simplifying the process of joining them and cutting down on their misuse by spammers. Valve said the current system was fine when Steam "was a smaller and simpler place," but now neither of those things is true and it's all a bit of a schmozzle.

 

Steam has an average of 14 million concurrent users every day
http://www.pcgamer.com/steam-has-an-average-of-14-million-concurrent-users-every-day/
It's a truth universally acknowledged that Steam is huge, but sometimes it takes cold hard figures to really know how big something is. And thanks to a recent Valve presentation at the Casual Connect conference in Seattle earlier this week (via Geekwire), we have some interesting numbers.

 

MODS/EMULATORS

Quake 2 looks absolutely stunning – and noisy as hell – with real-time GPU pathtracing renderer
http://www.dsogaming.com/news/quake-2-absolutely-stunning-real-time-gpu-pathtracing-renderer/
Path tracing and ray tracing are two rendering techniques that we strongly believe will benefit video games within a decade or two. Yeah, we are talking about something that some people may never experience; however, real-time pathtracing is capable of producing incredible reflections, shadows, and global illumination effects. And gamers can get an idea of what pathtracing is all about, as Edd Biddulph has been working on a GPU pathtracer for Quake 2.

>>There's a video showing how it looks but, if you want to experience it yourself, you can do so from here.

 

GAMING NEWS

F1 2017 – Career mode expanded and detailed
http://www.dsogaming.com/news/f1-2017-career-mode-expanded-and-detailed/
Codemasters and Deep Silver have today released the latest gameplay trailer for F1 2017, the official videogame of the 2017 FIA FORMULA ONE WORLD CHAMPIONSHIP, showcasing the extensive Career Mode that makes it the most complete game in the franchise’s history.

 

Blizzard Working on New IPs and Is Incubating New Ideas
http://www.dsogaming.com/news/blizzard-working-on-new-ips-and-is-incubating-new-ideas/
Blizzard has begun working on new IPs for its upcoming games, but will not rush any new titles, as there are many such ideas incubating within the developer.

 

Ys VIII: Lacrimosa of Dana – New trailer showcases a new cast of characters
http://www.dsogaming.com/videotrailer-news/ys-viii-lacrimosa-of-dana-new-trailer-showcases-a-new-cast-of-characters/
NIS America has released a new trailer for Ys VIII: Lacrimosa of Dana, introducing a new cast of colorful characters who aid Adol on his journey. The trailer shows the young noblewoman Laxia, as well as fisherman Sahad, transporter Hummel, and the isle native Ricotta.

 

Highlander and Gladiator Class Added to For Honor Season 3
http://www.dsogaming.com/news/highlander-and-gladiator-class-added-to-for-honor-season-3/
Now that season three is on its way, Ubisoft has begun to discuss what it will be adding to the new season to change things up a bit. New Highlander and Gladiator classes, gear and maps are just some of the specialties in the coming August update.

 

Path of Exile Expansion The Fall of Oriath Launches Today
http://www.dsogaming.com/news/path-of-exile-expansion-the-fall-of-oriath-today/
Tonight Path of Exile will release its new expansion, The Fall of Oriath, adding Act Five to complete the story, as well as five more acts based on the original five that replace the higher difficulty levels. The update goes live tonight, but the patch notes are available now and can be viewed here: v3.0.0 patch notes




Part 2 of today's news, and the last for this week:

 

PUBG's new car horns have turned stream-snipers into 'stream-honkers'
http://www.pcgamer.com/pubgs-new-car-horns-have-turned-stream-snipers-into-stream-honkers/
One of the new features in the imminent update to PlayerUnknown's Battlegrounds is that cars will now have horns. The patch, due today, has been available to try on PUBG's public test servers where it has already become somewhat of an irritation to certain streamers, because some stream-snipers have become, well, stream-honkers.

 

Mount & Blade 2: Bannerlord dev promises more transparency, kicks off a weekly blog
http://www.pcgamer.com/mount-blade-2-bannerlord-dev-promises-more-transparency-kicks-off-a-weekly-blog/
We've been looking forward to Mount & Blade 2: Bannerlord for a long time now—a really long time. It's not that we haven't seen progress on it. We got some promising hands-on time with it just a couple of months ago at E3, in fact. But on the whole, developer TaleWorlds Entertainment hasn't been overly communicative with its fan base about the new game, a shortcoming it acknowledged—and promised to address—in a message posted today on Steam.

 

Wizards of the Coast unveils Magic: The Gathering - Arena
http://www.pcgamer.com/wizards-of-the-coast-unveils-magic-the-gathering-arena/
In June, Perfect World Entertainment and Neverwinter developer Cryptic Studios announced a new, "truly unique AAA game" based on the collectible card game Magic: The Gathering. About a week after that, Magic publisher Wizards of the Coast said that another Magic-branded game was also in development, as part of its "Magic Digital Next" program. Today, it revealed the title.

 

The Evil Within 2 pushes psychological horror 'much harder' than original
http://www.pcgamer.com/the-evil-within-2-pushes-psychological-horror-much-harder-than-original/
Announced at E3 earlier this year and due in October—on Friday the 13th, no less—The Evil Within 2 revisits the seemingly perpetual struggles of intrepid cop-turned-fallen hero Sebastian Castellanos. Weaving a more personalised tale this time round, Shinji Mikami's frightful sequel will adopt a more psychological horror-leaning guise than its forerunner.

 

Alexis Kennedy's part in next Dragon Age will 'immediately look familiar to anyone who's played Fallen London, Sunless Sea'
http://www.pcgamer.com/alexis-kennedys-part-in-next-dragon-age-will-immediately-look-familiar-to-anyone-whos-played-fallen-london-and-sunless-sea/
In February, we learned that Failbetter Games founder Alexis Kennedy is working on something within the Dragon Age franchise. "It's a bit of lore which has not been addressed much to date in Dragon Age," Kennedy told Eurogamer at the time. What exactly that is remains to be seen, but those familiar with the writer and game designer's back catalogue can expect something typically odd.

 

Daniel Licht, the Dexter and Dishonored composer, dies at 60
http://www.pcgamer.com/daniel-licht-the-dexter-and-dishonored-composer-dies-at-60/
Daniel Licht, the composer responsible for the scores of Dishonored and Dishonored 2, has died of cancer at the age of 60.

 

With the news now out of the way, it's time to take a look at what GOG and Steam have in store for us:

+GOG

We have the same two deals as at the start of the week:

+Steam

We have three deals at Steam to take advantage of:

 

And now we're finally done. Have a happy gaming weekend.




JEMC said:

I don't know, it's true, but we know that AMD has been using the 1080 in all its Vega demonstrations, which gives us an idea of what segment of the market they're going after.

Also, do you really, really think that AMD has the power to get into a battle with Nvidia using proprietary tech? C'mon, they're in a position where, if they do that, it could cause them more harm than good... while losing a lot of the goodwill they've earned promoting open formats over the years.

The 680/7970 case was an oddity. The 7xx0 cards were the first GCN cards and AMD had a lot of work to do with their drivers, and the 7970 also needed an extra bump in speed to 1,000 MHz to top the Nvidia card.

You're overthinking it ... (It's HOW you use the silicon that matters, and from that perspective Vega 10 does not even begin to compare with GP104 in that department: 484mm^2 vs 314mm^2)

Sure AMD has the power to get into a battle with Nvidia in terms of proprietary tech usage ... (take a look at this list of hardware features)

(These are just official DX12 extensions; this doesn't even count other hardware features that are inaccessible in most APIs.)

Goodwill? I think AMD is playing dumb with its most loyal followers when HARDWARE FEATURES such as Async Compute or Rapid Packed Math (double-rate FP16) are by their very definition proprietary!

Nobody seems to have an issue with the above features being used, so I don't know why people are so against the idea of AMD bringing their own competitor to GameWorks when they could stand to make their competitor's top performer look slower by as much as 30% on a good day, depending on the gains or performance characteristics of these features ... (Isn't Quantum Break's DX12 path a good example of this, where the competing Maxwell architecture cratered in performance compared to AMD's own microarchitecture? If every modern AAA game engine were built and designed like the Northlight Engine, we wouldn't have to bear seeing AMD agonizing so much.)

Why stop at Async Compute or FP16 for AMD when there are lots of possibilities out there?! (This mentality of frowning upon 'proprietary' technology is just an excuse to make AMD's offerings less competitive, and then people wonder why buyers don't follow up on purchasing AMD hardware when they're less than impressed with the performance of current software.)

I mean, how many of you guys would be buying Nvidia hardware today if AMD were able to consistently deliver above-GTX 1070 performance TODAY with Polaris 10 by using the 'proprietary' technology that everyone shames? (It's clear enough that nobody wants AMD to deliver the goods tomorrow when we want the goods NOW, as seen with Ryzen, even though it's arguably less future-proof with its half-rate AVX while Skylake-X will get AVX-512.)
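Since Rapid Packed Math keeps coming up: the claim behind "double-rate FP16" is simple throughput arithmetic, since packing two FP16 operations into each 32-bit lane doubles peak FLOPS. A rough back-of-the-envelope sketch, assuming Vega 64's 4096 stream processors and an assumed ~1.5GHz boost clock (2 ops per FMA); the figures are illustrative, not official:

```cpp
#include <cstdio>

int main() {
    // Peak throughput = shaders * 2 ops per FMA * clock (assumed figures).
    const double shaders   = 4096;  // Vega 64 stream processors
    const double clock_ghz = 1.5;   // assumed boost clock, for illustration
    const double fp32_tflops = shaders * 2 * clock_ghz / 1000.0;

    // Rapid Packed Math: two FP16 ops per 32-bit lane -> double the peak rate.
    const double fp16_tflops = 2 * fp32_tflops;

    printf("FP32: ~%.1f TFLOPS, FP16 (packed): ~%.1f TFLOPS\n",
           fp32_tflops, fp16_tflops);
    return 0;
}
```

Real shaders only see that doubling on FP16-heavy math, which is why it matters where (and whether) developers actually use it.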



fatslob-:O said:
JEMC said:

I don't know, it's true, but we know that AMD has been using the 1080 in all its Vega demonstrations, which gives us an idea of what segment of the market they're going after.

Also, do you really, really think that AMD has the power to get into a battle with Nvidia using proprietary tech? C'mon, they're in a position where, if they do that, it could cause them more harm than good... while losing a lot of the goodwill they've earned promoting open formats over the years.

The 680/7970 case was an oddity. The 7xx0 cards were the first GCN cards and AMD had a lot of work to do with their drivers, and the 7970 also needed an extra bump in speed to 1,000 MHz to top the Nvidia card.

You're overthinking it ... (It's HOW you use the silicon that matters, and from that perspective Vega 10 does not even begin to compare with GP104 in that department: 484mm^2 vs 314mm^2)

Sure AMD has the power to get into a battle with Nvidia in terms of proprietary tech usage ... (take a look at this list of hardware features)

(These are just official DX12 extensions; this doesn't even count other hardware features that are inaccessible in most APIs.)

Goodwill? I think AMD is playing dumb with its most loyal followers when HARDWARE FEATURES such as Async Compute or Rapid Packed Math (double-rate FP16) are by their very definition proprietary!

Nobody seems to have an issue with the above features being used, so I don't know why people are so against the idea of AMD bringing their own competitor to GameWorks when they could stand to make their competitor's top performer look slower by as much as 30% on a good day, depending on the gains or performance characteristics of these features ... (Isn't Quantum Break's DX12 path a good example of this, where the competing Maxwell architecture cratered in performance compared to AMD's own microarchitecture? If every modern AAA game engine were built and designed like the Northlight Engine, we wouldn't have to bear seeing AMD agonizing so much.)

Why stop at Async Compute or FP16 for AMD when there are lots of possibilities out there?! (This mentality of frowning upon 'proprietary' technology is just an excuse to make AMD's offerings less competitive, and then people wonder why buyers don't follow up on purchasing AMD hardware when they're less than impressed with the performance of current software.)

I mean, how many of you guys would be buying Nvidia hardware today if AMD were able to consistently deliver above-GTX 1070 performance TODAY with Polaris 10 by using the 'proprietary' technology that everyone shames? (It's clear enough that nobody wants AMD to deliver the goods tomorrow when we want the goods NOW, as seen with Ryzen, even though it's arguably less future-proof with its half-rate AVX while Skylake-X will get AVX-512.)

You accuse me of overthinking? You, who wrote a whole post making Microsoft's DX12 look like some kind of AMD proprietary tech? Yeah, right...

The only thing AMD has done is give their architectures enough capabilities to support DX 12 better than Nvidia. And now that we're talking about it: we've had DX 12 for over two years now, and neither AMD nor Nvidia has made a product that's fully DX 12 capable. They don't seem to have much interest in it.

And what AMD needed to do was make the 480/Polaris 10 a 40 CU part to give it a proper edge over the 1060 and the 470. They focused so much on the "mainstream" market that all their products overlapped with each other, and launching 4 and 8GB versions was an even dumber move.




JEMC said:

You accuse me of overthinking? You, who wrote a whole post making Microsoft's DX12 look like some kind of AMD proprietary tech? Yeah, right...

The only thing AMD has done is give their architectures enough capabilities to support DX 12 better than Nvidia. And now that we're talking about it: we've had DX 12 for over two years now, and neither AMD nor Nvidia has made a product that's fully DX 12 capable. They don't seem to have much interest in it.

And what AMD needed to do was make the 480/Polaris 10 a 40 CU part to give it a proper edge over the 1060 and the 470. They focused so much on the "mainstream" market that all their products overlapped with each other, and launching 4 and 8GB versions was an even dumber move.

I meant no ill intention with it and apologize if that's how it came across ...

And my point was not about DX12, it's that AMD has proprietary technology whether people like it or not ... (DX12 is simply an interface that exposes those proprietary hardware extensions)

A 40 CU part doesn't change the fundamentals (it still has the perf/area issue); it's that AMD needs to follow through with ISV support for current games and games in the near future so that they can capitalize on that proprietary technology to give AMD the upper hand ... (Doom is an example of this, and I imagine even more so for Wolfenstein 2 with the addition of FP16)

What's arguably dumb is AMD designing their hardware and never making use of it, since that's wasted silicon, and different SKUs exist to serve different segments ... (If AMD can't win in benchmarks today, what they should do is get games built with future hardware features in mind, so that their new hardware releases can take advantage of current software by then! Nobody wants this 'FineWine' a year later, they want to see it on launch day, and I bet most people here would too!)



fatslob-:O said:

Goodwill? I think AMD is playing dumb with its most loyal followers when HARDWARE FEATURES such as Async Compute or Rapid Packed Math (double-rate FP16) are by their very definition proprietary!

nVidia delivered Packed Math first with its Tegra; AMD followed second.
The implementation is proprietary, but the concept is not; the same goes for Asynchronous Compute.
Mobile heavily leverages FP16 because of how much cheaper it is on the hardware, not just for performance but for power consumption as well, though it does come with a ton of caveats.

Asynchronous Compute is a part of the Direct X 12 specification, so it's not really "proprietary", as it's an open standard that can be used by everyone adhering to that specification. - Other APIs also expose the functionality.

One thing to keep in mind is that nVidia and AMD approach Asynchronous Compute differently.
It's all about running Compute and Graphics workloads concurrently, and AMD tends to excel here thanks to its ACE units.

The ACE units can just keep dishing out new work threads with very little latency impact.
nVidia's approach requires the CPU to do a heap of the initial heavy lifting that AMD's ACE units would typically do.

In short, if there are a ton of work threads, AMD's hardware doesn't stall while nVidia's will, which is why AMD's hardware tends to excel in demanding Asynchronous Compute scenarios.
In fact, during the early Maxwell days, if you pushed Asynchronous Compute too hard on nVidia hardware, the driver would stall and Windows would be forced to kill it.
With lighter Asynchronous Compute loads, nVidia's hardware was actually faster than AMD's.
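For reference, the "multi-engine" model being described here is exactly how D3D12 exposes it: the application creates separate graphics and compute queues, and the driver/hardware decides how concurrently they actually run. A minimal sketch, assuming an already-created ID3D12Device:

```cpp
#include <d3d12.h>

// Create one graphics ("direct") queue and one compute queue on an existing
// device. Work on the compute queue *may* overlap with graphics work; how
// much concurrency you actually get is up to the hardware scheduler
// (e.g. AMD's ACE units vs. Nvidia's more CPU-assisted scheduling).
HRESULT CreateEngineQueues(ID3D12Device* device,
                           ID3D12CommandQueue** graphicsQueue,
                           ID3D12CommandQueue** computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};

    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
    HRESULT hr = device->CreateCommandQueue(&desc, IID_PPV_ARGS(graphicsQueue));
    if (FAILED(hr)) return hr;

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy only
    return device->CreateCommandQueue(&desc, IID_PPV_ARGS(computeQueue));
}
```

Synchronization between the two queues is then done explicitly with fences, which is where the scheduling and latency differences described above become visible.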

fatslob-:O said:

You're overthinking it ... (It's HOW you use the silicon that matters, and from that perspective Vega 10 does not even begin to compare with GP104 in that department: 484mm^2 vs 314mm^2)

I would like AMD to return to its small-core strategy, the one that is serving nVidia so well now and that made AMD competitive with the Radeon HD 4000/5000 series.
They did so well during that era.

fatslob-:O said:

Nobody seems to have an issue with the above features being used, so I don't know why people are so against the idea of AMD bringing their own competitor to GameWorks when they could stand to make their competitor's top performer look slower by as much as 30% on a good day, depending on the gains or performance characteristics of these features ... (Isn't Quantum Break's DX12 path a good example of this, where the competing Maxwell architecture cratered in performance compared to AMD's own microarchitecture? If every modern AAA game engine were built and designed like the Northlight Engine, we wouldn't have to bear seeing AMD agonizing so much.)

Well, the difference here is that nVidia is allowed to take an AMD-styled approach to Asynchronous Compute, one that is compatible with AMD's implementation.

A lot of the "features" in GameWorks, such as PhysX, are walled off from AMD.
AMD pushed for things like TressFX, which is open source and available to everyone. - nVidia, however, built its own proprietary standard and walled it off.

That is ultimately the difference between the two companies' approaches.

It's like during the Direct X 10 era, when nVidia refused to adopt Direct X 10.1, which led games to not bother supporting Direct X 10.1 at all, even though it would have allowed AMD's hardware to shine even better.
Heck, some games actually released with Direct X 10.1 support and were later patched to remove it.

JEMC said:

And what AMD needed to do was make the 480/Polaris 10 a 40 CU part to give it a proper edge over the 1060 and the 470. They focused so much on the "mainstream" market that all their products overlapped with each other, and launching 4 and 8GB versions was an even dumber move.

Polaris uses oddball counts of functional units. It is clearly a design that was compromised in order to reduce costs and price.

AMD probably needed more than 40 CUs though.
It would have brought the hardware from:
* 2304 Shaders - 144 Texture Mapping Units - 32 ROPs.
To
* 2560 Shaders - 160 Texture Mapping Units - 32 ROPs.

Or roughly an 11% increase in compute and an 11% increase in fillrate.

I think a 48 CU design would have been better. It would have meant:
* 3072 Shaders - 192 Texture Mapping Units - 32 ROPs.

That would have meant a good 33% increase in compute and texture fillrate, which would have made it far more attractive against the GeForce 1060.

The caveat to this is... Everyone would have fapped twice as hard over its mining potential.
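Those CU numbers follow directly from GCN's fixed per-CU resources (64 stream processors and 4 texture units per CU, with the ROP count decoupled from the CUs). A quick sketch that reproduces the percentages:

```cpp
#include <cstdio>

// GCN: each Compute Unit carries 64 stream processors and 4 texture units;
// ROPs are tied to the backend configuration, not the CU count.
constexpr int ShadersPerCU = 64;
constexpr int TmusPerCU    = 4;

void Report(int baseCUs, int cus) {
    const double gain = 100.0 * (cus - baseCUs) / baseCUs;
    printf("%d CUs: %d shaders, %d TMUs (+%.0f%% over %d CUs)\n",
           cus, cus * ShadersPerCU, cus * TmusPerCU, gain, baseCUs);
}

int main() {
    Report(36, 40);  // RX 480 (36 CUs) -> hypothetical 40 CU part: ~+11%
    Report(36, 48);  // -> hypothetical 48 CU part: ~+33%
    return 0;
}
```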

fatslob-:O said:

A 40 CU part doesn't change the fundamentals (it still has the perf/area issue); it's that AMD needs to follow through with ISV support for current games and games in the near future so that they can capitalize on that proprietary technology to give AMD the upper hand ... (Doom is an example of this, and I imagine even more so for Wolfenstein 2 with the addition of FP16)

We need Direct X 12 and Vulkan to become the de facto APIs already. Then AMD's hardware would look a little more favourable overall.

It's hilariously almost the opposite issue to AMD's older VLIW5 architecture.

VLIW5 was designed to provide excellent performance in older workloads from the Direct X 9/10 era, and wasn't very good at more modern workloads like Direct X 11.

Graphics Core Next is shit at older/current workloads, but excels in newer games that leverage its architecture's strengths.



--::{PC Gaming Master Race}::--

fatslob-:O said:
JEMC said:

You accuse me of overthinking? You, who wrote a whole post making Microsoft's DX12 look like some kind of AMD proprietary tech? Yeah, right...

I meant no ill intention with it and apologize if that's how it came across ...

Oh, don't worry about that, because I didn't take it in a bad way. I was just surprised by you telling me that I was overthinking, only to follow suit by mixing proprietary hardware, proprietary software and DX 12. My answer was more like "you should look at yourself in the mirror before saying that to someone".

No biggie.

fatslob-:O said:

And my point was not about DX12, it's that AMD has proprietary technology whether people like it or not ... (DX12 is simply an interface that exposes those proprietary hardware extensions)

A 40 CU part doesn't change the fundamentals (it still has the perf/area issue); it's that AMD needs to follow through with ISV support for current games and games in the near future so that they can capitalize on that proprietary technology to give AMD the upper hand ... (Doom is an example of this, and I imagine even more so for Wolfenstein 2 with the addition of FP16)

What's arguably dumb is AMD designing their hardware and never making use of it, since that's wasted silicon, and different SKUs exist to serve different segments ... (If AMD can't win in benchmarks today, what they should do is get games built with future hardware features in mind, so that their new hardware releases can take advantage of current software by then! Nobody wants this 'FineWine' a year later, they want to see it on launch day, and I bet most people here would too!)

Again, DX 12 is MSoft's API, not proprietary software from AMD. Did they just happen to have better hardware for it? Absolutely, but that doesn't make it their own software.

Also, I'm sure MSoft is pushing devs to move on from DX 11 to DX 12, mostly because of the X1, so there's little else AMD can do there as well. Devs simply don't seem to care that much for DX 12.

Pemalite said:

 

JEMC said:

And what AMD needed to do was make the 480/Polaris 10 a 40 CU part to give it a proper edge over the 1060 and the 470. They focused so much on the "mainstream" market that all their products overlapped with each other, and launching 4 and 8GB versions was an even dumber move.

Polaris uses oddball counts of functional units. It is clearly a design that was compromised in order to reduce costs and price.

AMD probably needed more than 40 CUs though.
It would have brought the hardware from:
* 2304 Shaders - 144 Texture Mapping Units - 32 ROPs.
To
* 2560 Shaders - 160 Texture Mapping Units - 32 ROPs.

Or roughly an 11% increase in compute and an 11% increase in fillrate.

I think a 48 CU design would have been better. It would have meant:
* 3072 Shaders - 192 Texture Mapping Units - 32 ROPs.

That would have meant a good 33% increase in compute and texture fillrate, which would have made it far more attractive against the GeForce 1060.

The caveat to this is... Everyone would have fapped twice as hard over its mining potential.

Well, my comment about 40 CUs was more about the performance of the 480 compared to both Nvidia's 1060 and AMD's own R9 390 cards.

When the new Polaris cards launched, there were a lot of games that performed better on the older Hawaii/Grenada GPUs, which happened to have 40 CUs. Now, I know that one is an x90 part and the other is an x80 part, but we all expect newer cards to perform better than the previous ones and move the performance bar one step further. That the new 480 couldn't beat the older 390 put the 480 in a bad spot and disappointed a lot of people, especially considering what Nvidia had managed to do with Pascal, and that Nvidia's third-tier card, the 1060, was faster than it in DX 11 games.

A 40 CU Polaris, while not setting the world on fire, would have avoided all of that, making the 480 clearly faster than the cards it was replacing while on par with or faster than the 1060 in DX 11 games, and much, much faster than it in DX 12/Vulkan games, putting AMD in a better position in front of us, the consumers.




Pemalite said:

nVidia delivered Packed Math first with its Tegra; AMD followed second.
The implementation is proprietary, but the concept is not; the same goes for Asynchronous Compute.
Mobile heavily leverages FP16 because of how much cheaper it is on the hardware, not just for performance but for power consumption as well, though it does come with a ton of caveats.

Asynchronous Compute is a part of the Direct X 12 specification, so it's not really "proprietary", as it's an open standard that can be used by everyone adhering to that specification. - Other APIs also expose the functionality.

One thing to keep in mind is that nVidia and AMD approach Asynchronous Compute differently.
It's all about running Compute and Graphics workloads concurrently, and AMD tends to excel here thanks to its ACE units.

The ACE units can just keep dishing out new work threads with very little latency impact.
nVidia's approach requires the CPU to do a heap of the initial heavy lifting that AMD's ACE units would typically do.

In short, if there are a ton of work threads, AMD's hardware doesn't stall while nVidia's will, which is why AMD's hardware tends to excel in demanding Asynchronous Compute scenarios.
In fact, during the early Maxwell days, if you pushed Asynchronous Compute too hard on nVidia hardware, the driver would stall and Windows would be forced to kill it.
With lighter Asynchronous Compute loads, nVidia's hardware was actually faster than AMD's.

AMD still has the competitive advantage, since Packed Math isn't built into any desktop Nvidia GPUs. FP16 and Async Compute implementations being proprietary was my point! Just because your idea is openly available does not make it usable by anyone ... (Conservative rasterization is patented, and just because all 3 DirectX graphics hardware vendors support it doesn't mean it's open, since it's not available to every other graphics hardware vendor, such as Qualcomm, ARM, and ImgTec. Actually, scratch that, since Qualcomm has a patent for it.)

Async Compute is not part of the DX12 spec; it's 'multi-engine' that is in the DX12 spec, and how vendors choose to expose it is up to them, much like anisotropic filtering. Also, DX12 is NOT an open standard: the runtimes, graphics kernel, certification, and the spec are all determined by Microsoft. Much the same goes for Vulkan, since its spec is determined by the Khronos Group's Architecture Review Board; you can only have an open implementation of Vulkan ...

AMD tends to excel at async because they have a rasterizer bottleneck ... (There are probably very few other reasons for it, since AMD highly recommends running a compute shader during shadow map rendering, which is coincidentally geometry-throughput intensive. Nvidia doesn't need async compute all that much because they have very good triangle throughput.)

Pascal still has limits in its async compute implementation, but who cares, since the architecture performs well in current games, compared to AMD hardware carrying mostly dead silicon because the rest of the AAA industry doesn't bother with either DX12 or Vulkan outside of consoles ...
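As an aside, whether a given card exposes these features isn't a matter of marketing; it's directly queryable in D3D12. A small sketch, assuming an already-created ID3D12Device, that reads the conservative rasterization tier and the 16-bit minimum-precision support this discussion keeps circling around:

```cpp
#include <d3d12.h>
#include <cstdio>

// Query the feature bits relevant to this discussion on a created device.
void PrintFeatureSupport(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(
            D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts))))
    {
        // Tier 0 = not supported; higher tiers add guarantees, up to
        // inner-coverage (underestimation) input at tier 3.
        printf("Conservative rasterization tier: %d\n",
               (int)opts.ConservativeRasterizationTier);

        // Non-zero when the driver can run min16float/min16int shader math
        // at reduced precision (how FP16 support is surfaced in DX12 here).
        printf("16-bit min-precision support: 0x%x\n",
               (unsigned)opts.MinPrecisionSupport);
    }
}
```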

Pemalite said:

I would like AMD to return to its small-core strategy, the one that is serving nVidia so well now and that made AMD competitive with the Radeon HD 4000/5000 series.

They did so well during that era.

Even if it means having fewer hardware features?

Pemalite said:

Well, the difference here is that nVidia is allowed to take an AMD-styled approach to Asynchronous Compute, one that is compatible with AMD's implementation.

A lot of the "features" in GameWorks, such as PhysX, are walled off from AMD.
AMD pushed for things like TressFX, which is open source and available to everyone. - nVidia, however, built its own proprietary standard and walled it off.

That is ultimately the difference between the two companies' approaches.

It's like during the Direct X 10 era, when nVidia refused to adopt Direct X 10.1, which led games to not bother supporting Direct X 10.1 at all, even though it would have allowed AMD's hardware to shine even better.
Heck, some games actually released with Direct X 10.1 support and were later patched to remove it.

I don't know about that, might have to check some patents on that ... 

AMD can also have their own walled garden, such as 'shader intrinsics' and 'rapid packed math' or even 'async compute', so that Nvidia doesn't benefit from those optimizations when they are AMD-specific code paths that depend on driver extensions ... (It'd be nice if AMD could get devs to use underestimate conservative rasterization for GPU occlusion culling to gain an optimization advantage, since their competitor doesn't currently offer that hardware feature.)

Even nicer if AMD could get exclusive graphics features built around these hardware features ...

Pemalite said:

We need Direct X 12 and Vulkan to become the de facto APIs already. Then AMD's hardware would look a little more favourable overall.

It's hilariously almost the opposite issue to AMD's older VLIW5 architecture.

VLIW5 was designed to provide excellent performance in older workloads from the Direct X 9/10 era, and wasn't very good at more modern workloads like Direct X 11.

Graphics Core Next is shit at older/current workloads, but excels in newer games that leverage its architecture's strengths.

Not only that, but those extensive hardware features should also be used to make AMD look more favourable ...

VLIW5 was much better received than Vega. Vega is like the R600: the R600 could pass with a few DX11 features, and I imagine Vega could be a prototype for DX13, but both are no good because the hardware features aren't being used ...

I wonder how many people would prefer AMD more than they do now if it had just made a bare-minimum DX12 video card with better performance than its competitor in many current games?