
Forums - General Discussion - Navi Made in Collab with Sony, MS Still Using It?

 

Pricing of Xbox vs PS5

Xbox +$150 > PS5: 0 votes (0%)
Xbox +$100 > PS5: 5 votes (14.71%)
Xbox +$50 > PS5: 4 votes (11.76%)
PS5 = Xbox with slight performance boost: 7 votes (20.59%)
PS5 = Xbox with no performance boost: 2 votes (5.88%)
Xbox will not have the pe...: 3 votes (8.82%)
Still too early, wait for MS PR: 13 votes (38.24%)

Total: 34
eva01beserk said:
@thismentiel
What does more power really mean at TF counts higher than 12.9? Not more pixels, since I doubt games will push more than 4K. I don't think framerate either, as more than 60fps is useless on consoles. They might focus on raytracing, but even now, show most people a comparison and they can't tell it apart.

There was a confirmation from Sony that they are pushing to eliminate loading screens, and the key is SSDs.

I think power won't mean anything next gen.

There are still loading times even with a pair of NVMe SSDs in RAID on PC...
They should be able to minimize load times rather substantially though.

Ray Tracing is going to be the big key focus going forwards... And Ray Tracing is inherently compute limited, so that is where a developer can spend an inordinate amount of theoretical flops.

And it seems like 3D Positional Audio might be making a comeback?



--::{PC Gaming Master Race}::--


Seems like Sony did flash its dick around back then. Not as much as MS did. But hopefully neither will push that much next gen.



It takes genuine talent to see greatness in yourself despite your absence of genuine talent.

Wasn't Lockhart meant to be a streaming-focused device? Which is why they didn't need that much power. They probably don't even need that 4TF, but it was there so it could at least play 1080p/60 and people could still have a low-end choice. I mean, there is no guarantee the X or the S will still have compatible games next gen.



It takes genuine talent to see greatness in yourself despite your absence of genuine talent.

@Pemalite
I really don't see the appeal of ray tracing. Maybe next-gen console games will really put it to good use and I'll see what's happening. But just reflections, as it's being used now, seems very lackluster for how much power it needs.
Sound might be interesting. I could really see a good stealth game with it.

I really hope that next gen offers some other feature. Maybe VR finally breaks into the mainstream. That would be great.



It takes genuine talent to see greatness in yourself despite your absence of genuine talent.

Pemalite said:

Bandwidth is insanely important... Especially for Graphics Core Next and especially at higher resolutions.
Graphics Core Next, being a highly compute-oriented architecture, generally cannot get enough bandwidth.

In saying that... There is a point of diminishing returns... Despite the fact that Vega 7 increased bandwidth by 112% and compute by 9%... Performance only jumped by a modest 30-40% depending on game... So the "sweet spot" in terms of bandwidth is likely between Vega 64 and Vega 7. Maybe 768GB/s?

Vega 7's inherent architectural limitations tend to stem not from Compute or Bandwidth though... So when you overclock the RAM by an additional 20% (1.2TB/s!) you might only get a couple of % points of performance... But bolstering the core clock will net an almost linear increase, so it's not bandwidth starved by any measure.

Radeon VII had far better sustained boost clocks than the Vega 64 did. A Radeon VII could reach a maximum of 2GHz while Vega 64 was at most 1.7GHz when both were OC'd. I imagine there was at least a 20% uplift in compute performance in comparison to the Vega 64. The Radeon VII probably doesn't need 1 TB/s since it's a gaming GPU. The only way I can reason why the Radeon VII has as much bandwidth as it does is that it's meant to be competitive in machine learning applications with top-end hardware, nearly all of which is sporting HBM memory modules one way or another ... (also, the Radeon VII was closer to 20-30% faster than the Vega 64 rather than 30-40%, because by the time the Radeon VII released, the Vega 64 was already marginally ahead of the 1080)

Vega 64 was an increase in performance in comparison to the Fury X despite regressing in memory bandwidth, so I don't think the Radeon VII needs 1 TB/s when just 640 GB/s could probably do the job just as effectively, giving the chip nearly the same performance uplift ...

In fact, I don't think I've ever seen a benchmark where the Radeon VII ended up being 2x faster than the Vega 64 ... 
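For reference, here is a quick back-of-the-envelope sketch of the theoretical numbers being argued about, using the standard GCN throughput formula. The clock and bandwidth figures below are approximate reference boost specs I'm plugging in purely for illustration, not numbers taken from this thread:

# Rough GCN compute sketch: FP32 TFLOPS = CUs * 64 lanes per CU * 2 ops per clock (FMA) * clock in GHz / 1000
# Boost clocks and memory bandwidths are approximate reference specs (assumptions for illustration).
cards = {
    # name: (CUs, rated boost clock in GHz, memory bandwidth in GB/s)
    "Vega 64":    (64, 1.55, 484),
    "Radeon VII": (60, 1.80, 1024),
}

for name, (cus, clock_ghz, bw_gbs) in cards.items():
    tflops = cus * 64 * 2 * clock_ghz / 1000
    print(f"{name}: ~{tflops:.1f} TFLOPS, ~{bw_gbs / tflops:.0f} GB/s per TFLOP")

That works out to roughly 12.7 vs 13.8 TFLOPS (the ~9% compute bump mentioned above) against a ~112% bandwidth jump, which is why the extra bandwidth alone doesn't translate into a proportional performance gain.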

Pemalite said:

Not to mention rolling out a version of Direct X 12 for Windows 7.

-----------------------------------------------------------------------------------------------------------------------------------------------------------

EA has proven to be pretty flexible though. They worked with AMD to introduce Mantle... Which was a white elephant... AMD eventually gave up on it... And then Khronos used it for Vulkan for better or worse.

In short though, without a doubt nVidia does get more support in engines on the PC side of the equation over AMD... Despite the fact AMD has had its hardware in the majority of consoles over the last few generations. (Wii, WiiU, Xbox 360, Xbox One, Playstation 4.)

Part of that is nVidias collaboration with developers... Which has been a thing for decades.

ATI did start meeting nVidia head on back in the R300 days though... Hence the battle-lines between Doom 3 and Half Life 2, but nothing of that level of competitiveness has been seen since.

They didn't take advantage of this during last generation. The Wii used an overclocked Flipper GPU which was arguably a DX7/8 feature set design and the X360 is exactly like the Adreno 2XX(!) according to an emulator developer rather than either ATI's R500/R600 ... 

AMD only really started taking advantage of low level GPU optimizations during this generation ... 

Pemalite said:

nVidia can also afford to spend more time and effort on upkeep.

Both AMD and nVidia's drivers are more complex than some older Windows/Linux Kernels.

Far more so for Nvidia than AMD because with the latter they just stop updating extremely dissimilar architectures very quickly ... (this is why OpenGL support sucks for pre-GCN GPUs like the HD 5000/6000 series) 

To this day, Nvidia still managed to release WDDM 2.x/DX12 drivers for Fermi ... 

Pemalite said:

Actually I do! But it's not as extensive as you portray it to be.
I.E. Pascal and Maxwell share a significant amount of similarities from top to bottom... Kepler and Fermi could be grouped together also. Turing is a significant deviation from prior architectures, but shares a few similarities from Volta.

Even then AMD isn't as clean cut either... They have GCN 1.0, 2.0, 3.0, 4.0, 5.0 and soon 6.0.

With Pascal and Maxwell, the other day I heard from Switch emulator developers that their shaders were NOT compatible and that emulating half precision instructions on Pascal broke things. I VERY much doubt you can group Kepler with Fermi because you don't even have bindless texture handles or support for subgroup operations on Fermi ... 

Things were worse on the CUDA side, where Nvidia publicly decided to deprecate a feature known as "warp-synchronous programming" on Volta, and this led to real-world breakage in applications that relied on previous hardware behaviour. Even with their OWN APIs and their humongous intermediate representation (PTX intrinsics) backend, Nvidia CAN'T promise that their sample code or features will actually be compatible with future versions of the CUDA SDK!

At least with AMD and their GCN iterations, developers won't have to worry about application breakage no matter how tiny AMD's driver teams may be ... 

Pemalite said:

Back before this re-badging... Performance used to increase at a frantically rapid rate even on the same node.

Think about it this way ... 

If AMD weren't burdened by maintaining their software stack such as their drivers, they could be using those resources instead to SOLELY improve their GCN implementations much like how Intel has been evolving x86 for over 40 years!

Pemalite said:

nVidia is an ARM licensee. They can use ARM's design instead of Denver... From there they really aren't going to be that different from any other ARM manufacturer that uses vanilla ARM cores.

For mobile your point about power is relevant, but for a fixed console... Not so much. You have orders of magnitude more TDP to play with.
An 8-core ARM SoC with a Geforce 1060 would give an Xbox One X with it's 8-core Jaguars a run for it's money.

They could but there's no point since ARM's designs are much too low power/performance for Nvidia's tastes so they practically have to design their own "high performance" ARM cores just like every other licensee especially if they want to compete in home consoles. Nvidia's CPU designs are trash that compiler backend writers have to work around ...  

I doubt Nvidia will be able to offer backwards compatibility as well which is another hard requirement ... 

Pemalite said:

Your claim doesn't hold water. nVidia increased margins by only 4.9%, but revenues still shot up far more.

nVidia is diversifying, as you alluded to... Their Console and PC gaming customer base isn't really growing, hence the other segments are where they are seeing the bulk of their gains.
nVidia certainly does have a future, they aren't going anywhere soon... They have Billions in their war chest.

Nvidia's newer report seems to paint a much darker picture than it did over 6 months ago so their growth is hardly organic ... 

Plus Nvidia spent nearly $7B to defend Mellanox from an Intel takeover just to protect their own cloud/data center business LOL ... 

Nvidia's acquisition of Mellanox is at the mercy of Chinese regulators as well just like Qualcomm's acquisition of NXP. If the deal falls apart (likely because of China), what other 'friends' do Nvidia have to fallback to ? What happens if AMD or Intel get more ambitious with their APUs and start targeting GTX 1080 levels of graphics performance ? (possible with DDR5 and 7nm EUV) 

Pemalite said:

I don't think even good drivers could actually solve the issues some of their IGP's have had... Especially parts like the x3000/x3100 from old.

Their Haswell and up lineup is fine. Certainly, older Intel parts had tons of hardware issues but that was well in the past so all they need is good drivers ...

Pemalite said:

Xe has me excited. Legit. But I am remaining optimistically cautious... Because just like all their other claims to fame in regards to Graphics and Gaming... It has always resulted in a product that was stupidly underwhelming or ended up cancelled.


But like I said... If any company has the potential, it's certainly Intel.

Meh, I'm not as optimistic as you are unless they use another foundry to manufacture Xe because I don't trust that they'll actually launch 10nm ... 

Pemalite said:

Well... It was a game built for 7th gen hardware first and foremost.
However... Considering it's one of the largest selling games in history... Is played by millions of gamers around the world... And actually still pretty demanding even at 4k, it's a relevant game to add to any benchmark in my opinion.

It's one data point though, you do need others in a benchmark "suite" so you can get a comprehensive idea how a part performs in newer and older titles, better or worse.

It being specifically built for last generation is exactly why we should dump it ... 

Crysis is relatively demanding even for today's hardware, but no sane benchmark suite will include it because of its flaw of relying heavily on single-threaded performance ...

"Demanding" is not a sign of technical excellence like we saw with ARK Survival Evolved. A benchmark suite should be designed to represent the workload demands of current generation AAA game graphics, not last generation AAA game graphics ... 



eva01beserk said:
@Pemalite
I really don't see the appeal of ray tracing.

There is a ton of appeal to Ray Tracing. Reflections, Lighting, Shadowing... All see marked improvements.

Before the current "variants" of Ray Tracing (which is all the rage thanks to nVidia's RTX), we were heading down the road of Global Illumination Path Tracing, which is a variant of Ray Tracing, even as far back as the 7th generation of consoles. (Especially towards the end of the generation, and especially so in the PC releases.)

So despite you not seeing much "appeal" for the technology... You have actually been seeing it for years.

eva01beserk said:

Sound might be interesting. I could really see a good stealth game with it.

We actually had 3D positional audio before... The PC was pioneering it with Aureal's A3D... And the Original Xbox with its SoundStorm solution.
Then we sort of went backwards/stagnated for years in all markets.

It's not just Stealth games that see big benefits from it, it just makes everything feel more surreal and dynamic.

eva01beserk said:

I really hope that next gen offers some other feature. Maybe VR finally breaks into the mainstream. That would be great.

I doubt VR will ever gain much more traction. After all these years... It never materialized into anything substantive like motion controls did with the Wii.
But hey. Happy to be proven wrong.

fatslob-:O said:

Radeon VII had far better sustained boost clocks than the Vega 64 did. A Radeon VII could reach a maximum of 2GHz while Vega 64 was at most 1.7GHz when both were OC'd. I imagine there was at least a 20% uplift in compute performance in comparison to the Vega 64. The Radeon VII probably doesn't need 1 TB/s since it's a gaming GPU.

Overall the difference isn't that pronounced.
https://www.tomshardware.com/reviews/amd-radeon-vii-vega-20-7nm,5977-7.html
https://www.tomshardware.com/reviews/amd-radeon-rx-vega-64,5173-18.html

Vega 7 has a clockspeed advantage sure, but it's not a significant one... And Vega 64 makes up for it with its extra CUs, meaning the difference in overall compute isn't that dramatic.

fatslob-:O said:

The Radeon VII probably doesn't need 1 TB/s since it's a gaming GPU. The only way I can reason why the Radeon VII has as much bandwidth as it does is that it's meant to be competitive in machine learning applications with top-end hardware, nearly all of which is sporting HBM memory modules one way or another ... (also, the Radeon VII was closer to 20-30% faster than the Vega 64 rather than 30-40%, because by the time the Radeon VII released, the Vega 64 was already marginally ahead of the 1080)

Vega 7 is based on the Radeon Instinct MI50, which is meant for deep-learning/machine-learning/GPGPU workloads.

There is also a successor part, aka the MI60, with the full 64 CU complement and higher clocks.

AMD had no answer to nVidia's high-end, so they took the Instinct GPU and rebadged it as Vega 7.

There is always deviation in benchmarks from different time frames and even sources. - 30-40% is a rough ballpark as is; no need to delve into semantics, the original point still stands.

fatslob-:O said:

They didn't take advantage of this during last generation.

And they didn't take advantage of it this generation either.

So how many generations do we give AMD before your statement can be regarded as true?

fatslob-:O said:

The Wii used an overclocked Flipper GPU which was arguably a DX7/8 feature set design and the X360 is exactly like the Adreno 2XX(!) according to an emulator developer rather than either ATI's R500/R600 ... 

Functionally the Wii/Gamecube GPU's could technically do everything (I.E. From an effects perspective) the Xbox 360/Playstation 3 can do via TEV.
However due to the sheer lack of horsepower, such approaches were generally not considered.

As for the Xbox 360... It's certainly an R500 derived semi-custom part that adopted some features and ideas from R600/Terascale, I wouldn't say it closely resembled Adreno though... Because Adreno was originally derived from Radeon, so of course there are intrinsic similarities from the outset.

Why reinvent the wheel?

fatslob-:O said:

AMD only really started taking advantage of low level GPU optimizations during this generation ... 

As a Radeon user dating back over a decade... I haven't generally seen it.

fatslob-:O said:

Far more so for Nvidia than AMD because with the latter they just stop updating extremely dissimilar architectures very quickly ... (this is why OpenGL support sucks for pre-GCN GPUs like the HD 5000/6000 series) 

To this day, Nvidia still managed to release WDDM 2.x/DX12 drivers for Fermi ... 

Actually, OpenGL support wasn't too bad for the Radeon 5000/6000 series, even when I was running quad-Crossfire Radeon 6950's unlocked into 6970's. - You might recall I did some bandwidth scaling benchmarks of those cards years ago.

The Radeon 5870 was certainly a fine card that stood the test of time though, more so than my 6950 cards. (I actually have one sitting on my shelf next to me!)

fatslob-:O said:

With Pascal and Maxwell, the other day I heard from Switch emulator developers that their shaders were NOT compatible and that emulating half precision instructions on Pascal broke things. I VERY much doubt you can group Kepler with Fermi because you don't even have bindless texture handles or support for subgroup operations on Fermi ... 

Things were worse on the CUDA side where Nvidia publicly decided to deprecate a feature known as "warp-synchronous programming" on Volta and this lead to real world breakage in applications that relied on previous hardware behaviour. Nvidia even with their OWN APIs and their humongous intermediate representation (PTX instrinsics) backend, they CAN'T even promise that their sample codes or features will actually be compatible with future versions CUDA SDKs! 

At least with AMD and their GCN iterations, developers won't have to worry about application breakage no matter how tiny AMD's driver teams may be ... 

Nah. Maxwell and Pascal can be lumped together.
https://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/4

Kepler and Fermi too.

Volta never really made a big appearance on Desktop... So Turing is the start of a fresh lineup.

And just like if you were building software targeting specific features in AMD's Next-Gen Compute unit... Compatibility will likely break with older variants of Graphics Core Next. It's just the nature of the game.

AMD re-badging hardware obviously does result in this happening less often... But I want more performance. AMD isn't delivering.

fatslob-:O said:

Think about it this way ... 

If AMD weren't burdened by maintaining their software stack such as their drivers, they could be using those resources instead to SOLELY improve their GCN implementations much like how Intel has been evolving x86 for over 40 years!

I am actually not disagreeing with you. I actually find AMD's drivers to actually be the best in the industry right now. - Which was typically the crown nVidia held during the early Graphics Core Next and prior years.

I do run hardware from both companies so I can anecdotally share my own experiences.

fatslob-:O said:

They could but there's no point since ARM's designs are much too low power/performance for Nvidia's tastes so they practically have to design their own "high performance" ARM cores just like every other licensee especially if they want to compete in home consoles. Nvidia's CPU designs are trash that compiler backend writers have to work around ...  

I doubt Nvidia will be able to offer backwards compatibility as well which is another hard requirement ... 

ARM has higher performing cores. Certainly ones that would beat Denver.

fatslob-:O said:

Nvidia's newer report seems to paint a much darker picture than it did over 6 months ago so their growth is hardly organic ... 

Plus Nvidia spent nearly $7B to defend Mellanox from an Intel takeover just to protect their own cloud/data center business LOL ... 

Nvidia's acquisition of Mellanox is at the mercy of Chinese regulators as well just like Qualcomm's acquisition of NXP. If the deal falls apart (likely because of China), what other 'friends' do Nvidia have to fallback to ? What happens if AMD or Intel get more ambitious with their APUs and start targeting GTX 1080 levels of graphics performance ? (possible with DDR5 and 7nm EUV) 

DDR5 will not offer the appropriate levels of bandwidth to allow APU's to equal a Geforce 1080. Especially at 7nm... That claim is without any basis in reality.

Either way, Anandtechs analysis isn't painting nVidia outlook as bleak. They are growing, they are diversifying, they have billions in the bank, they are actually doing well. Which is a good thing for the entire industry, competition is a must.

fatslob-:O said:
Their Haswell and up lineup is fine. Certainly, older Intel parts had tons of hardware issues but that was well in the past so all they need is good drivers ..

Minus the average performance the hardware puts out... And the lack of long term support.

You would literally need to pay me to use Intel Decelerator Graphics.

fatslob-:O said:
Meh, I'm not as optimistic as you are unless they use another foundry to manufacture Xe because I don't trust that they'll actually launch 10nm ... 

Intel's 7nm seems to still be on track with its original timeline. I am optimistic, but I am also realistic. Intel and Graphics have a shitty history, but you cannot deny that some of the features Intel is touting are whetting the nerd appetite?

fatslob-:O said:

It being specifically built for last generation is exactly why we should dump it ... 

Crysis is relatively demanding even for today's hardware but no sane benchmark suite will include it because of it's flaw in relying heavily on single threaded performance ... 

"Demanding" is not a sign of technical excellence like we saw with ARK Survival Evolved. A benchmark suite should be designed to represent the workload demands of current generation AAA game graphics, not last generation AAA game graphics ... 

Crysis is demanding for today's hardware because it wasn't designed for today's hardware.
Crysis was designed at a time when clockrates were ever-increasing and Crytek thought that trajectory would continue indefinitely.

Today CPU's have gotten wider with more cores, which means that newer successive Cry Engines tend to be more performant.

In saying that, GTA 5 isn't as old as Crysis anyway and certainly has an orders-of-magnitude greater player base.

I believe that benchmarks should represent the games that people are playing today. - If a game is older and based in a Direct X 9/11 era, so be it, it's still representative of what gamers are playing... In saying that, they also need newer titles to give an idea of performance in newer titles.

It's like when Terascale was on the market, it had amazing Direct X 9 performance... But it wasn't able to keep pace in newer Direct X 11 titles, hence why AMD architectured VLIW 4, to give a better balance leaning towards Direct X 11 titles as each unit was able to be utilized more consistently.

So having a representation of mixed older and newer games is important in my opinion as it gives you a more comprehensive data point to base everything on.



--::{PC Gaming Master Race}::--

Pemalite said:

And they didn't take advantage of it this generation either.


So how many generations do we give AMD before your statement can be regarded as true?

Maybe until the next generation arrives ? It'll be way more pronounced from then on ... 

Pemalite said:

Functionally the Wii/Gamecube GPU's could technically do everything (I.E. From an effects perspective) the Xbox 360/Playstation 3 can do via TEV.
However due to the sheer lack of horsepower, such approaches were generally not considered.

As for the Xbox 360... It's certainly an R500 derived semi-custom part that adopted some features and ideas from R600/Terascale, I wouldn't say it closely resembled Adreno though... Because Adreno was originally derived from Radeon, so of course there are intrinsic similarities from the outset.

Why reinvent the wheel?

No, they really couldn't. The Wii's TEV was roughly the equivalent of Shader Model 1.1 for pixel shaders, so they were still missing programmable vertex shaders. The Flipper still used hardware-accelerated T&L for vertex and lighting transformations, so you couldn't just add the feature as easily, since it required redesigning huge parts of the graphics pipeline. Both the 360 and the PS3 came with a bunch of their own goodies as well. For the former, you had a GPU that was completely 'bindless', and the 'memexport' instruction was a nice precursor to compute shaders. With the latter, you have SPUs, which are amazingly flexible like the Larrabee concept, and you effectively had the equivalent of compute shaders, but it even supported hardware transactional memory(!) as featured on Intel's Skylake CPU architecture ... (X360 and PS3 had forward-looking hardware features that were far ahead of their time)

The Xbox 360 is definitely closer to an Adreno 2XX design than the R5XX design, which is not a bad thing. Xbox 360 emulator developers especially seek out Adreno 2XX devices for reverse engineering purposes, since it helps further their goals ...

Pemalite said:

Actually OpenGL support wasn't to bad for the Radeon 5000/6000 series, even when I was running quad-Crossfire Radeon 6950's unclocked into 6970's. - You might recall I did some bandwidth scaling benchmarks of those cards years ago.


The Radeon 5870 was certainly a fine card that stood the test of time though, more so than my 6950 cards. (I actually have one sitting on my shelf next to me!)

That's not what I heard among the community. AMD's OpenGL drivers are already slow in content creation, professional visualization, and games, and I consistently hear about how broken they are in emulation ... (for Blender, AMD's GL drivers weren't even on the same level as the GeForce 200 series until GCN came along)

The last high-end games I remember using OpenGL were No Man's Sky and Doom, but until a Vulkan backend came out, AMD got smoked by its Nvidia counterparts ...

OpenGL on AMD's pre-GCN arch is a no go ... 

Pemalite said:

Nah. Maxwell and Pascal can be lumped together.
https://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/4

Kepler and Fermi too.

Volta never really made a big appearance on Desktop... So Turing is the start of a fresh lineup.

And just like if you were building software targeting specific features in AMD's Next-Gen Compute unit... Compatibility will likely break with older variants of Graphics Core Next. It's just the nature of the game.

AMD re-badging hardware obviously does result in less occurrence of this happening... But I want more performance. AMD isn't delivering.

CUDA documentation seems to disagree that you could just group Kepler and Fermi together. Kepler has FCHK, IMADSP, SHF, SHFL, LDG, STSCUL, and a totally new set of surface memory instructions. The other things Kepler deprecated from Fermi were LD_LDU, LDS_LDU, STUL, STSUL, LONGJMP, PLONGJMP, and LEPC, but all of this is just on the surface(!) since NVPTX assembly is not Nvidia's GPUs' native ISA. PTX is just a wrapper that keeps Nvidia GPUs' true ISA hidden, so there could be many other changes going on underneath ... (people have to use envytools to reverse engineer Nvidia's blob so that they can make open source drivers)

Even with Turing, Nvidia does not group it together with Volta since Turing has specialized uniform operations ... 

@Bold No developers are worried about new software breaking compatibility with old hardware but just about everyone is worried about new hardware breaking compatibility with old software and that is becoming unacceptable. AMD does NOT actively do this in comparison to Nvidia ... 

These are the combinatorial configurations that Nvidia has to currently maintain ... 

APIs: CUDA, D3D11/12, Metal (Apple doesn't do this for them), OpenCL, OpenGL, OpenGL ES, and Vulkan 

Platforms: Android (still releasing quality drivers despite zero market share), Linux, MacOS, and Windows (all of them have different graphics kernel architecture)

Nvidia has to support ALL of that on at least 3 major instruction encodings which are Kepler, Maxwell/Pascal, Volta, Turing and they only MAKE ENDS MEET in terms of compatibility with more employees than AMD by comparison which only have to maintain the following ... 

APIs: D3D11/12, OpenGL (half-assed effort over here), and Vulkan (Apple makes the Metal drivers for AMD's case plus AMD stopped developing OpenCL and OpenGL ES altogether) 

Platforms: Linux and Windows 

AMD have 2 major instruction encodings at most with GCN all of which are GCN1/GCN2/Navi (Navi is practically a return to consoles judging from LLVM activity) and GCN3/Vega (GCN4 shares the same exact ISA as GCN3) but despite focusing on less APIs/platforms they STILL CAN'T match Nvidia's OpenGL driver quality ... 

Pemalite said:

I am actually not disagreeing with you. I actually find AMD's drivers to actually be the best in the industry right now. - Which was typically the crown nVidia held during the early Graphics Core Next and prior years.

I do run hardware from both companies so I can anecdotally share my own experiences.

AMD drivers are pretty great but they'd be even better if they didn't have to deal with maintaining D3D11 drivers and by extension especially OpenGL drivers ... 

Pemalite said:

DDR5 will not offer the appropriate levels of bandwidth to allow APU's to equal a Geforce 1080. Especially at 7nm... That claim is without any basis in reality.

Either way, Anandtechs analysis isn't painting nVidia outlook as bleak. They are growing, they are diversifying, they have billions in the bank, they are actually doing well. Which is a good thing for the entire industry, competition is a must.

I don't see why not? A single-channel DDR4 module at 3.2GHz can deliver 25.6 GB/s. An octa-channel DDR5 setup clocked at 6.4GHz can bring a little under 410 GB/s, which is well above a 1080, and with 7nm you get to play around with more transistors in your design ...

Anandtech may not paint their outlook as bleak but Nvidia's latest numbers and attempt at acquisition doesn't bode well for them ... 

Pemalite said:

Intel's 7nm seems to still be on track with its original timeline. I am optimistic, but I am also realistic. Intel and Graphics have a shitty history, but you cannot deny that some of the features Intel is touting are whetting the nerd appetite?

I doubt it, Intel are not even committing to a HARD date for the launch of their 10nm products so no way am I going to trust them anytime soon with 7nm unless they've got a contingency plan in place with either Samsung or TSMC ... 

Pemalite said:

Crysis is demanding for today's hardware because it wasn't designed for today's hardware.
Crysis was designed at a time when clockrates were ever-increasing and Crytek thought that trajectory would continue indefinitely.

Today CPU's have gotten wider with more cores, which means that newer successive Cry Engines tend to be more performant.

In saying that, GTA 5 isn't as old as Crysis anyway and certainly has an orders-of-magnitude greater player base.

I believe that benchmarks should represent the games that people are playing today. - If a game is older and based in a Direct X 9/11 era, so be it, it's still representative of what gamers are playing... In saying that, they also need newer titles to give an idea of performance in newer titles.

It's like when Terascale was on the market, it had amazing Direct X 9 performance... But it wasn't able to keep pace in newer Direct X 11 titles, hence why AMD architectured VLIW 4, to give a better balance leaning towards Direct X 11 titles as each unit was able to be utilized more consistently.

So having a representation of mixed older and newer games is important in my opinion as it gives you a more comprehensive data point to base everything on.

We should not use old games because, exactly as you said, "it wasn't designed for today's hardware", which is why we shouldn't skew a benchmark suite to be heavily weighted in favour of past workloads ...



Guys, a new REAL leak. videocardz.com is very reliable, and as I have been predicting, Navi can reach 2GHz+.

All those youtube rumors are fake and this one is real. It's Navi 10, and PS5 will have a cut-down Navi 10 = 36 CUs clocked at 1.8GHz, giving 8.3TF :) it's almost over now hehe.

Edit: Looked at PC gaming benchmarks to see where performance should land. The PS5 should have around Geforce 1080/Vega 64 performance if this information is correct.
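For what it's worth, the leaked 8.3TF figure is at least internally consistent with the standard GCN throughput formula. A quick Python check, using the rumoured 36 CU / 1.8GHz configuration as the (unconfirmed) inputs:

# Theoretical FP32 throughput for a GCN-style GPU:
# TFLOPS = CUs * 64 shader lanes per CU * 2 ops per clock (FMA) * clock in GHz / 1000
cus = 36          # rumoured cut-down Navi 10 configuration
clock_ghz = 1.8   # rumoured clock speed

tflops = cus * 64 * 2 * clock_ghz / 1000
print(f"{cus} CUs @ {clock_ghz} GHz -> {tflops:.2f} TFLOPS")  # prints ~8.29 TFLOPS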

Last edited by Trumpstyle - on 14 May 2019

6x master league achiever in starcraft2

Beaten Sigrun on God of war mode

Beaten DOOM ultra-nightmare with NO endless ammo-rune, 2x super shotgun and no decoys on ps4 pro.

1-0 against Grubby in Wc3 frozen throne ladder!!

@Pemalite

And just like that, Avalanche Studio's Apex engine made the jump!  

AMD cards are doing very well in RAGE 2, better than their NVIDIA counterparts. For example, the Radeon VII is usually around 10% behind the GeForce RTX 2080, but in RAGE 2, it matches it almost exactly. The same is happening with Vega 64, which is usually 20% behind the RTX 2070, yet manages to trade blows with it in RAGE 2. For the lower end, the picture is similar: we'd expect the RX 590 to be a few percentage points behind the GTX 1660, but here, it delivers almost the same performance.

Not too long ago when they released Just Cause 4, AMD was significantly trailing behind Nvidia but with Avalanche's first release on Vulkan the tables are turning ... 

Only a couple of key engines left are holding out from this trend ... 



Trumpstyle said:

Guys, a new REAL leak. videocardz.com is very reliable, and as I have been predicting, Navi can reach 2GHz+.

All those youtube rumors are fake and this one is real. It's Navi 10, and PS5 will have a cut-down Navi 10 = 36 CUs clocked at 1.8GHz, giving 8.3TF :) it's almost over now hehe.

Edit: Looked at PC gaming benchmarks to see where performance should land. The PS5 should have around Geforce 1080/Vega 64 performance if this information is correct.

It's an anonymous source... We have no idea if it holds any credibility. Sub-40 CUs seems about what I would expect with high clockspeeds though; mainstream part and all that.
The "Radeon Maxwell sauce" bit is bullshit though. Polaris and Vega brought many Maxwell-style efficiency improvements to the table anyway... Plus Maxwell is getting old, which shows how many years AMD is behind in the aspects that make nVidia's GPUs so efficient for gaming.

Also, there is a larger performance gap between Polaris and the 2070 than just 30%... And the jump between the RX 580 and Vega 64 is a good 45-65% depending on application... Navi with 40 CUs will likely come up short against even Vega 56, and definitely against Vega 64 and the RTX 2070.

Either way, like all rumors, grain of salt and all of that, don't take it as gospel until AMD does a revealing or we can see benchmarks from legitimate outlets like Anandtech.

fatslob-:O said:

Maybe until the next generation arrives ? It'll be way more pronounced from then on ... 

Three console generations is a bit of a stretch... Either way, my stance is that we can only base things on the information we have today, not future hypotheticals.
Thus the statement that consoles will drive AMD's GPU performance/efficiency on PC is a little optimistic when there are four consoles on the market today with AMD hardware and the needle hasn't really shifted in AMD's favor.

Not to say that AMD hasn't gained a slight boost out of it with development pipelines, but it just means sweet bugger all in the grand scheme of things... Which is certainly not a good thing.
I would like AMD to be toe-to-toe with nVidia; that is when innovation is at its best and prices are at their lowest.

fatslob-:O said:

No, they really couldn't. The Wii's TEV was roughly the equivalent of Shader Model 1.1 for pixel shaders, so they were still missing programmable vertex shaders. The Flipper still used hardware-accelerated T&L for vertex and lighting transformations, so you couldn't just add the feature as easily, since it required redesigning huge parts of the graphics pipeline. Both the 360 and the PS3 came with a bunch of their own goodies as well. For the former, you had a GPU that was completely 'bindless', and the 'memexport' instruction was a nice precursor to compute shaders. With the latter, you have SPUs, which are amazingly flexible like the Larrabee concept, and you effectively had the equivalent of compute shaders, but it even supported hardware transactional memory(!) as featured on Intel's Skylake CPU architecture ... (X360 and PS3 had forward-looking hardware features that were far ahead of their time)

You can't really compare TEV to your standard shader model... TEV has more in common with nVidia's register combiners.

The Gamecube/Wii had vertex shaders... But the transform and lighting engine was semi-programmable too; I would even argue more programmable than AMD's Morpheus parts back in the day.

In saying that... It's not actually an ATI-designed part, it is an ArtX-designed part, so certain approaches will deviate.
For such a tiny chip, she could pull her weight, especially when you start doing multiple passes and leverage its texturing capabilities.

Sadly, due to its lack of overall performance relative to the Xbox 360, the effects did have to be pared back rather substantially.

And I am not denying that the Xbox 360 and Playstation 3 came with their own bunch of goodies either... The fact that the Xbox 360, for example, has Tessellation is one of them.

fatslob-:O said:

The Xbox 360 is definitely closer to an Adreno 2XX design than the R5XX design, which is not a bad thing. Xbox 360 emulator developers especially seek out Adreno 2XX devices for reverse engineering purposes, since it helps further their goals ...

Don't think I will ever be able to agree with you on this point, not with the work I did on R500/R600 hardware.

And like I said before... Adreno was based upon AMD's desktop GPU efforts, so it will obviously draw similarities with parallel GPU releases for other markets... But Xenos is certainly based upon R500 with features taken from R600... Which would just mean that Adreno was also based on R500.

fatslob-:O said:

That's not what I heard among the community. AMD's OpenGL drivers are already slow in content creation, professional visualization, and games, and I consistently hear about how broken they are in emulation ... (for Blender, AMD's GL drivers weren't even on the same level as the GeForce 200 series until GCN came along)

The last high-end games I remember using OpenGL were No Man's Sky and Doom, but until a Vulkan backend came out, AMD got smoked by its Nvidia counterparts ...

OpenGL on AMD's pre-GCN arch is a no go ... 

When I say "wasn't to bad" I didn't mean industry leading or perfect. They just weren't absolutely shite.

Obviously AMD has always pushed it's Direct X capabilities harder than OpenGL... Even going back to the Radeon 9000 vs Geforce FX war with Half Life 2 (Direct X) vs Doom 3 (OpenGL.)

OpenGL wasn't to bad on the later Terascale parts.

I mean... Take Wolfenstein via OpenGL on id tech... AMD pretty much dominated nVidia on this title with it's Terascale parts.
Granted, it's an engine/game that favored AMD hardware, but the fact that it was OpenGL is an interesting aspect.

https://www.anandtech.com/show/4061/amds-radeon-hd-6970-radeon-hd-6950/22

fatslob-:O said:

CUDA documentation seems to disagree that you could just group Kepler and Fermi together. Kepler has FCHK, IMADSP, SHF, SHFL, LDG, STSCUL, and a totally new set of surface memory instructions. The other things Kepler deprecated from Fermi were LD_LDU, LDS_LDU, STUL, STSUL, LONGJMP, PLONGJMP, and LEPC, but all of this is just on the surface(!) since NVPTX assembly is not Nvidia's GPUs' native ISA. PTX is just a wrapper that keeps Nvidia GPUs' true ISA hidden, so there could be many other changes going on underneath ... (people have to use envytools to reverse engineer Nvidia's blob so that they can make open source drivers)

Even with Turing, Nvidia does not group it together with Volta since Turing has specialized uniform operations ...

I think you are nitpicking a little too much.

Because even with successive updates to Graphics Core Next there are some deviations in various instructions, features and other aspects related to the ISA.
They aren't 1:1 with each other.

Same holds true for nVidia.

But from an overall design principle, Fermi and Kepler are related, just like Maxwell and Pascal.

fatslob-:O said:

@Bold No developers are worried about new software breaking compatibility with old hardware but just about everyone is worried about new hardware breaking compatibility with old software and that is becoming unacceptable. AMD does NOT actively do this in comparison to Nvidia ... 

These are the combinatorial configurations that Nvidia has to currently maintain ... 

APIs: CUDA, D3D11/12, Metal (Apple doesn't do this for them), OpenCL, OpenGL, OpenGL ES, and Vulkan 

Platforms: Android (still releasing quality drivers despite zero market share), Linux, MacOS, and Windows (all of them have different graphics kernel architecture)

Nvidia has to support ALL of that on at least 3 major instruction encodings which are Kepler, Maxwell/Pascal, Volta, Turing and they only MAKE ENDS MEET in terms of compatibility with more employees than AMD by comparison which only have to maintain the following ... 

APIs: D3D11/12, OpenGL (half-assed effort over here), and Vulkan (Apple makes the Metal drivers for AMD's case plus AMD stopped developing OpenCL and OpenGL ES altogether) 

Platforms: Linux and Windows 

AMD have 2 major instruction encodings at most with GCN all of which are GCN1/GCN2/Navi (Navi is practically a return to consoles judging from LLVM activity) and GCN3/Vega (GCN4 shares the same exact ISA as GCN3) but despite focusing on less APIs/platforms they STILL CAN'T match Nvidia's OpenGL driver quality ... 

End of the day... The little Geforce 1030 I have is still happily playing games from the DOS era... Compatibility is fine on nVidia hardware, developers don't tend to target for specific hardware most of the time anyway on PC.

nVidia can afford more employees than AMD anyway, so the complaint on that aspect is moot.

As for OpenGL, that is being deprecated in favor of Vulkan anyway for next gen. (Should have happened this gen, but I digress.)

fatslob-:O said:

I don't see why not? A single-channel DDR4 module at 3.2GHz can deliver 25.6 GB/s. An octa-channel DDR5 setup clocked at 6.4GHz can bring a little under 410 GB/s, which is well above a 1080, and with 7nm you get to play around with more transistors in your design ...

Anandtech may not paint their outlook as bleak but Nvidia's latest numbers and attempt at acquisition doesn't bode well for them ... 

DDR3 1600mhz on a 64bit bus = 12.8GB/s.
DDR4 3200mhz on a 64bit bus = 25.6GB/s.
DDR5 6400mhz on a 64bit bus = 51.2GB/s.

DDR3 1600mhz on a 128bit bus = 25.6GB/s.
DDR4 3200mhz on a 128bit bus = 51.2GB/s.
DDR5 6400mhz on a 128bit bus = 102.4GB/s.

DDR3 1600mhz on a 256bit bus = 51.2GB/s.
DDR4 3200mhz on a 256bit bus = 102.4GB/s.
DDR5 6400mhz on a 256bit bus = 204.8GB/s.

I wouldn't be so bold to assume that motherboards will start coming out with 512bit busses to drive 400GB/s+ of bandwidth. That would be prohibitively expensive.
The Xbox One X is running with a 384-bit bus... but that is a "premium" console... Even then that would mean...

DDR3 1600mhz on a 384bit bus = 76.8GB/s.
DDR4 3200mhz on a 384bit bus = 153.6GB/s.
DDR5 6400mhz on a 384bit bus = 307.2GB/s.

That's an expensive implementation for only 307GB/s of bandwidth, considering that ends up being less than the Xbox One X... And that is using GDDR5X... And no way is that approaching Geforce 1080 levels of performance.
AMD would need to make some catastrophic leaps in efficiency to get there... And while we are still shackled to Graphics Core Next... Likely isn't happening anytime soon.

You would simply be better off using GDDR6 on a 256bit bus.
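As a sanity check on the numbers above (and on the octa-channel figure quoted earlier), here is the same peak-bandwidth arithmetic in Python. It treats the quoted memory speeds as effective transfer rates in MT/s and counts one 64-bit channel per 64 bits of bus width; these are theoretical peaks, not sustained figures:

# Peak theoretical bandwidth (GB/s) = transfer rate (MT/s) * bus width in bytes / 1000
def peak_bandwidth_gbs(transfer_rate_mts, bus_width_bits):
    return transfer_rate_mts * (bus_width_bits / 8) / 1000

for name, rate in [("DDR3-1600", 1600), ("DDR4-3200", 3200), ("DDR5-6400", 6400)]:
    for bus_bits in (64, 128, 256, 384, 512):
        channels = bus_bits // 64  # one 64-bit channel per 64 bits of bus width
        bw = peak_bandwidth_gbs(rate, bus_bits)
        print(f"{name} on a {bus_bits}-bit bus ({channels} channels): {bw:.1f} GB/s")

DDR5-6400 only reaches ~409.6 GB/s on a 512-bit (8-channel) bus, which is where the "little under 410 GB/s" figure comes from, while a 384-bit bus tops out at 307.2 GB/s as listed above.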

fatslob-:O said:
I doubt it, Intel are not even committing to a HARD date for the launch of their 10nm products so no way am I going to trust them anytime soon with 7nm unless they've got a contingency plan in place with either Samsung or TSMC ... 

They stuffed up with 10nm... And a lot of work has had to go into making that line usable.
However, the team that is working on 7nm at Intel hasn't had the setbacks that the 10nm team has... And for good reason.

I remain cautiously optimistic though.

fatslob-:O said:
We should not use old games because, exactly as you said, "it wasn't designed for today's hardware", which is why we shouldn't skew a benchmark suite to be heavily weighted in favour of past workloads ...

We should if said games are the most popular games in history and are actively played by millions of gamers.

Those individuals who are upgrading their hardware probably want to know how it will handle their favorite game... And considering we aren't at a point yet where low-end parts or IGPs can drive GTA5 at 1080P ultra... Well. There is still relevance to including it in a benchmark suite.

Plus it gives a good representation of how the hardware handles older APIs/workloads.

fatslob-:O said:

@Pemalite

And just like that, Avalanche Studio's Apex engine made the jump!  

AMD cards are doing very well in RAGE 2, better than their NVIDIA counterparts. For example, the Radeon VII is usually around 10% behind the GeForce RTX 2080, but in RAGE 2, it matches it almost exactly. The same is happening with Vega 64, which is usually 20% behind the RTX 2070, yet manages to trade blows with it in RAGE 2. For the lower end, the picture is similar: we'd expect the RX 590 to be a few percentage points behind the GTX 1660, but here, it delivers almost the same performance.

Not too long ago when they released Just Cause 4, AMD was significantly trailing behind Nvidia but with Avalanche's first release on Vulkan the tables are turning ... 

Only a couple of key engines left are holding out from this trend ... 

Good to hear. Fully expected though. But doesn't really change the landscape much.

Last edited by Pemalite - on 14 May 2019

--::{PC Gaming Master Race}::--