Pemalite said:

Microsoft may not need S3 Graphics anymore, but it is still supporting their technologies even in DirectX 12.
Besides, you are missing the entire point of this.

I was providing examples of where hardware manufacturers contributed to the DirectX specification. Nitpicking at a few edge-case scenarios doesn't change that fact.

As for Qualcomm, they likely license AMD's technologies and patents anyway.
Adreno is a wordplay on "Radeon" - and Qualcomm bought the technology from AMD in the first place.

As for patenting computer graphics technology? Nope. Still happening. G-Sync, PhysX, HairWorks and so on are examples. It might not be happening at the same rate as it did during the 3D boom, but it's still occurring.

My point was never that IHVs did not contribute to the DirectX spec, so that's just a strawman ...

My argument was that Microsoft has ABSOLUTE control over the DirectX specification, and that remains true no matter the circumstances ...

G-Sync is not patented. (How else would the HDMI Forum be able to standardize adaptive refresh rates without Nvidia's approval? Nvidia knows this too, since they're one of the HDMI Forum members!)

As for patenting software such as PhysX or HairWorks, that's nearly impossible, since tons of physics simulation software had already been written, or the relevant patents had already expired, before Nvidia rolled out its own solutions ... (Better luck next time for them?)

I think the age of patenting computer graphics technology is really over once we head for real-time physically based global illumination solutions like path tracing or other light transport methods, because by then the field of real-time computer graphics will have largely been *solved* ... (the only motivation behind creating patents is to capitalize on solutions to unsolved problems)

Pemalite said:

It is a big issue.


AMD hasn't created "4x more graphics architectures since GCN".
Graphics Core Next 2, 3, 4, 5 and so on are all just iterative updates of GCN 1.0. They aren't brand-new top-to-bottom designs.

Not only that, but AMD is rebadging GCN 1.0 parts from 2012 even today with the Radeon RX 520 and 530.

And by rebadging those parts, they miss out on some vital functionality... Their tessellation performance is laughable, they have no Delta Colour Compression, no TrueAudio, no power-saving and power-state enhancements, no HEVC decoding, no HDMI 2.0... And if I remember correctly, also no 10-bit colour support and no Virtual Super Resolution... I could go on, but I've made my point.


As for Vega... Vega is just the GPU on top of the Radeon RX 500 stack. It is what Fury was to the Radeon 300 series.

I don't see the issue with it. That's what the vast majority of chip designers in this industry do when creating *new* microarchitectures ... (Why throw away millions of man-hours, or years of valid research, the majority of which can be perfectly reused? Heck, AMD's Zen reused portions of their own in-house existing logic design while incorporating what they reverse-engineered from their competitors' designs. There are benefits to reusing logic design too, like fixing CPU hardware bugs and, believe it or not, GPU hardware bugs too.)

The 520 and the 530 are OEM-exclusive and cannot be purchased individually, so that's not an issue for customers who are going to buy new graphics cards. (By then Raven Ridge APUs will be suitable replacements for them.)

Tessellation performance is mostly linked to lower geometry performance, so it's no secret why AMD GPUs compare badly with their Nvidia counterparts. DCC is a performance-enhancing feature, so the lowest-end parts lacking it is no big loss, and TrueAudio never took off, so that's not a loss either. HDR10 support only requires WDDM 2.1 drivers (even the original GCN can do HDR10, since that only requires changes to the content and software backends), and VSR can be programmed in the game instead, or AMD could just choose to give the feature to the original GCN, since there's no good reason the hardware couldn't do it ...
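To make the VSR point concrete: VSR is essentially driver-side supersampling, so a game can do the same thing itself by rendering to a larger target and filtering it down to the display resolution. A minimal sketch of that idea in Python (my own illustration of the general technique, not AMD's driver code):

```python
def downsample_2x(img):
    """Box-filter a 2W x 2H image (2D list of floats) down to W x H.

    Each output pixel is the average of a 2x2 block of input pixels,
    which is exactly what naive 2x supersampling resolves to.
    """
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(w)]
            for y in range(h)]

# A 4x2 "render" downsampled to 2x1:
src = [[1.0, 1.0, 0.0, 0.0],
       [1.0, 1.0, 0.0, 0.0]]
print(downsample_2x(src))  # [[1.0, 0.0]]
```

A real engine would do this on the GPU with a resolve pass or a better filter, but the principle is the same, which is why it doesn't strictly need per-generation hardware support.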

You might have a point with HDMI 2.0, and maybe even HEVC (an open alternative like AV1, which is getting support from the biggest companies, could supplant it in the end) ...

Pemalite said:


Agreed.


But part of the issue is... AMD only made the Radeon group an independent entity again a couple of years ago, and they are still abiding by their old plan; we won't see the fruits of AMD's separation of the Radeon group for another year or two.

AMD needs to invest. Not just iterate. 

Nvidia has done an excellent job innovating even without any real competition... and that has paid off for them.

I bet we won't see the fruits until software developers start using these hardware features ... (You're right that AMD needs to invest, but it's not in the hardware like you think, it's in the games.)

Pemalite said:


Just because they support FP16 doesn't mean there will be any gains with it over FP32.

The number of GPUs available with double-rate FP16 is minuscule. And if you compare those GPUs against Steam's hardware statistics... well, you get the point.

That depends: if your game is ALU-bound or has lots of register pressure, you will almost certainly see a gain with FP16 ...

FP16-capable hardware need not be double-rate, and FP16 is gaining traction fast in hardware ... (AMD APUs with GCN 3, discrete AMD GPUs with GCN 3/4, Intel Broadwell/Skylake/Kaby Lake CPUs, and soon we'll have Vega, Raven Ridge, and Coffee Lake, so FP16 support in hardware is far from 'minuscule'.)
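The register-pressure argument is easy to see from the storage side: a half-precision value is 2 bytes instead of 4, so the same set of shader values fits in half the register and bandwidth budget even on hardware that isn't double-rate. A quick illustration with Python's stdlib binary16 packing (the 1024-value buffer is just a made-up example, not data from any game):

```python
import struct

# 1024 arbitrary shader-constant-like values.
values = [0.5, 1.25, -3.0, 2.75] * 256

fp32 = struct.pack(f'<{len(values)}f', *values)  # IEEE 754 binary32
fp16 = struct.pack(f'<{len(values)}e', *values)  # IEEE 754 binary16

print(len(fp32), len(fp16))  # 4096 2048 - exactly half the bytes
```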

Pemalite said:

Then the performance gains will likely not be worth it for the significantly reduced precision. 

How would you know that? Do you have any data to back up that claim? (Precision issues are handled by the developers if there are going to be concerns about graphical quality, just like last time.)
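For context on what "reduced precision" actually means here: IEEE 754 binary16 keeps a 10-bit mantissa, so roughly three decimal digits. A quick round-trip check with Python's stdlib 'e' struct format (just an illustration of the format's limits, not data from any title):

```python
import struct

def to_fp16(x: float) -> float:
    # Round-trip a Python float through IEEE 754 binary16
    # ('e' struct format, available since Python 3.6).
    return struct.unpack('<e', struct.pack('<e', x))[0]

# 0.1 is not exactly representable; the nearest half-float is:
print(to_fp16(0.1))                  # 0.0999755859375

# The spacing between adjacent half-floats near 1.0 is 2**-10 (~0.00098),
# so a smaller offset rounds away entirely:
print(to_fp16(1.0 + 0.0003) == 1.0)  # True
```

Whether that matters depends on the shader: colour math and normalized vectors usually survive it fine, while things like world-space positions generally stay in FP32, which is the kind of per-case judgment developers make.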

Pemalite said:

On the contrary. That isn't what I think.

Apple isn't doing high-end AAA games to any great extent anyway.

That's changing with their Metal gfx API (which also has FP16 support) gaining traction with Windows ports. 

There are 3 gfx APIs with FP16 support (DX12, GNM/X, Metal), or make that 4 if you count Vulkan with AMD's extension! (AMD, Apple, Intel, Microsoft and Sony are all in this together!)

If I had to bet on one thing, it would be FP16 getting more traction than async compute (even though async compute is featured in every modern API) ...