fatslob-:O said:
Pemalite said:

nVidia and AMD describe Asynchronous Compute differently anyway.

If it's exposed in DirectX 12, regardless of how extensive, it's part of the DirectX 12 spec.

It really isn't; multi-engine goes far beyond just async compute. Just as multi-engine exposes the concept of stateless compute (async compute), it comes with a number of other things, like stateless resource transfers (async copy through the GPU's DMA engines) and near-stateless graphics (in the future), but it also takes into account hardware that doesn't have a stateless GPU design ... 

 

Ergo... Asynchronous Compute is exposed in DirectX 12. It doesn't matter how extensive that exposure is.
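For what it's worth, the multi-engine model is plainly visible in the API surface: D3D12 lets you create separate DIRECT, COMPUTE, and COPY command queues, which map onto the graphics, async compute, and DMA engines being argued about above. A minimal sketch (the helper name CreateEngineQueues is mine and error handling is omitted; whether work on these queues actually runs concurrently still depends on the hardware, per the point about non-stateless GPU designs):

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Create one queue per engine type that D3D12's multi-engine model exposes.
// "device" is an already-created ID3D12Device.
void CreateEngineQueues(ID3D12Device* device,
                        ComPtr<ID3D12CommandQueue>& gfx,
                        ComPtr<ID3D12CommandQueue>& compute,
                        ComPtr<ID3D12CommandQueue>& copy)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};

    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics (+ compute + copy)
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&gfx));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // "async compute": compute + copy
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&compute));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COPY;     // copy only, maps onto the DMA engines
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&copy));
}
```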

fatslob-:O said:

Microsoft still gets the final say in the end, regardless of AMD's, Intel's, or Nvidia's contributions ... (I don't imagine NEC ever contributed to the DirectX specifications when they were just a chip manufacturer for ImgTec, much like TSMC is for Nvidia.) 

And yes, Microsoft can say "get out" to S3 regardless of whether or not they adopted patented technology from them; plus, S3 holding S3TC hostage isn't going to be an issue any longer ... (the patent for S3TC is going to expire this October, just like the patent for anisotropic filtering did a couple of years ago)


Of course Microsoft sits on top of the food chain.

Also, NEC used to build graphics chips; Intel even licensed them at one point, same with Number Nine. They hold multiple patents in graphics technologies, including parts of the rendering pipeline that DirectX leans upon.

And no, they can't just tell S3 to "take a hike", because of patents and licensing agreements.
Even successive technologies like DirectX Texture Compression are licensed from S3, as DXTC was based on and improved upon S3 Texture Compression.

And you are right, S3TC does expire soon.
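For anyone curious what the S3TC/DXTC lineage actually covers: BC1 (the DXT1 format inherited from S3TC) stores each 4x4 texel block as two RGB565 endpoint colours plus sixteen 2-bit palette indices. A rough decoder sketch, handling only the opaque four-colour mode for brevity (the function names are mine; real decoders also replicate the high bits when expanding 5/6-bit channels):

```cpp
#include <array>
#include <cstdint>

struct RGB { uint8_t r, g, b; };

// Expand a 5:6:5-packed colour to 8-bit channels (simple shift for clarity).
static RGB expand565(uint16_t c) {
    return { uint8_t((c >> 11) << 3),
             uint8_t(((c >> 5) & 0x3F) << 2),
             uint8_t((c & 0x1F) << 3) };
}

// Decode one 64-bit BC1 block: two RGB565 endpoints, then 16 x 2-bit indices
// into a 4-colour palette. Assumes the opaque (c0 > c1) mode only.
std::array<RGB, 16> decodeBC1(const uint8_t block[8]) {
    uint16_t c0 = uint16_t(block[0] | (block[1] << 8));
    uint16_t c1 = uint16_t(block[2] | (block[3] << 8));

    RGB p[4] = { expand565(c0), expand565(c1) };
    p[2] = { uint8_t((2 * p[0].r + p[1].r) / 3),   // 2/3 c0 + 1/3 c1
             uint8_t((2 * p[0].g + p[1].g) / 3),
             uint8_t((2 * p[0].b + p[1].b) / 3) };
    p[3] = { uint8_t((p[0].r + 2 * p[1].r) / 3),   // 1/3 c0 + 2/3 c1
             uint8_t((p[0].g + 2 * p[1].g) / 3),
             uint8_t((p[0].b + 2 * p[1].b) / 3) };

    uint32_t indices = block[4] | (block[5] << 8) | (block[6] << 16)
                     | (uint32_t(block[7]) << 24);
    std::array<RGB, 16> texels;
    for (int i = 0; i < 16; ++i)
        texels[i] = p[(indices >> (2 * i)) & 0x3];  // 2 bits per texel
    return texels;
}
```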

fatslob-:O said:

But they need it in order to realistically compete ... (Until IHVs can agree upon a common ISA for GPUs, it's just going to be a fact of life. Heck, Nvidia is using proprietary technology and it's making them boatloads of money.) 


AMD can compete without it just fine. Hell. They have been.

AMD's lack of a walled garden isn't the reason they are in their current situation. It's because they have been rebadging graphics hardware for over half a decade instead of innovating.

fatslob-:O said:

But the fact that the PS4 Pro ALONE is already translating FP16 optimizations to two AAA games on the PC side is amazing, and it means future games built on the latest idTech 6 or Dunia 2 engine will automatically have FP16 optimizations ... (Maybe even more, since the latest Frostbite 3 engine already supports FP16 on PS4 Pro, and I imagine that will soon translate to PC as well. AMD doesn't have to target wide-spread industry use, just AAA games in general, and not necessarily AAA Japanese games either, since the vast majority of them are lost causes, so that they can win in the most difficult benchmarks.) 

Lower precision is fine as long as the devs can control it ...

Even on PC, FP16 is a rarity on the hardware side, which you have alluded to.
Until it reaches mass adoption across the industry, its support will be relatively niche.
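To make the precision trade-off concrete, here's a tiny round-trip through FP16 using DirectXMath's packed-vector helpers (this just illustrates the roughly 3 decimal digits a 10-bit mantissa buys you; it isn't tied to any particular game engine):

```cpp
#include <DirectXPackedVector.h>
#include <cstdio>

int main() {
    using namespace DirectX::PackedVector;

    // Round-trip a value through FP16 to show the precision loss that
    // developers have to stay in control of.
    float original = 3.14159265f;
    HALF  packed   = XMConvertFloatToHalf(original);
    float restored = XMConvertHalfToFloat(packed);

    std::printf("FP32: %.8f  ->  FP16 round-trip: %.8f\n", original, restored);
    // FP16's 10-bit mantissa gives roughly 3 decimal digits, so the
    // restored value (3.140625) is only approximately equal to the original.
}
```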



--::{PC Gaming Master Race}::--