Navi Made in Collab with Sony, MS Still Using It?


Poll: Pricing of Xbox vs PS5

Xbox +$150 > PS5: 0 (0.00%)
Xbox +$100 > PS5: 5 (16.67%)
Xbox +$50 > PS5: 3 (10.00%)
PS5 = Xbox with slight performance boost: 6 (20.00%)
PS5 = Xbox with no performance boost: 2 (6.67%)
Xbox will not have the pe...: 3 (10.00%)
Still too early, wait for MS PR: 11 (36.67%)

Total: 30
EricHiggin said:
eva01beserk said:
@thismeintiel
I don't remember Sony pushing "the most powerful" when they had the advantage. I could be wrong, but I believe only MS did, and it was after most people came to accept that MS had problems with exclusive quantity and quality. I don't think that would fly in the 9th gen anymore, so it might be OK to say it a few times, but using it like now, where it's in basically every ad, would be stupid.

I don't think PS really used it for base PS4, but at that time PS themselves weren't really pushing the power narrative. Yet PS used it for Pro, after XB already started using it for Project Scorpio and will use it again if Anaconda has weaker performance on paper. After the marketing push MS had behind XB1X using this slogan, PS will surely want to throw it in the face of MS since that's one of the things their customers have been using to justify remaining on the XB platform.

It's hard to believe MS will go above $499, but if the two SKU approach is true, they could very well get away with it, especially if PS5 is in fact $499 at launch. If you have a $499 PS5, then MS can launch Lockhart anywhere from $299-$399, and they can launch Anaconda at $599. If Lockhart were 6TF give or take, at $399, it would fall right into place where XB1X would have dropped to next, which is the "sweet spot". That should lead to much better sales for Lockhart in comparison to XB1, which means it doesn't really matter who buys Anaconda, just as long as enough are sold to make the upper tier worth it going forward. Even if Anaconda didn't sell much, it would still allow MS the bragging rights and marketing to be able to say we have the strongest hardware on the market, even if it's only say 10% of total next gen sales.

If this is the case, it will be interesting to see what PS does, and whether or not they drop $50 or so to try and better compete in terms of price with Lockhart, because there won't be anything they can do to compete with a $599 Anaconda aside from large multi game bundles.

On the other hand, if Lockhart was $299-$349, and Anaconda is $500-$600, then it very well may be in the best interest of PS to keep the price of PS5 higher at $499 if possible. This would be done to make customers question Lockhart. If you're buying a next gen console, and one is $299, and the other two are $499 or higher, you're going to ask yourself which one is out of place and whether it belongs. If PS only has one console at $499, and MS also has one around that price, but they also have a $299 SKU, is it really going to cut it? Is it going to lead to buying $100 or more in accessories down the line to make up for the initial low price? If PS5 is expensive enough to manufacture that they can't bring the single SKU price down to compete with Lockhart, then keeping the price up around Anaconda will give them the best chance of consumers passing up on Lockhart for the next step up in price, which would be PS5.

If they aim to copy the features that Sony announced, such as load times being near nonexistent and ray tracing HW, in both SKUs, my guess is the Lockhart is going to be $399. This means that the PS5 is going to be just $100 more for a system that is 3x the power, if leaks are accurate. And with PS5 being B/C with the PS4, I see most people who gamed on the PS4 sticking with the PS5, since they can sell their PS4s and get a system that plays both last gen and next gen games. In this case, I really don't see the Lockhart selling any better than the XBO. There were times when the XBO was on sale for $50+ less than the PS4, yet that still didn't change its fortunes, or even cause it to outsell the PS4 in the XBO's strongest market, the US. And that was against a system that was ~40% more powerful. This would be one that is 200% more powerful if the 4 TFLOPS leak is accurate. God help them if Sony takes a bigger hit and sells for $449. Or if by some magic they still hit that $399 price.
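As a quick sanity check on those ratios, here is some napkin math (Python). The 1.31 TF / 1.84 TF figures for XBO / PS4 are the commonly cited specs, and the 4 TF / 12 TF figures are just the leak numbers being discussed here, not confirmed hardware:

```python
# Napkin math only; teraflop figures are commonly cited specs / thread rumours.
xbo_tf = 1.31        # Xbox One (commonly cited spec)
ps4_tf = 1.84        # PS4 (commonly cited spec)
lockhart_tf = 4.0    # rumoured Lockhart figure discussed above
ps5_tf = 12.0        # implied by "3x the power" of a 4 TF Lockhart

print(f"PS4 vs XBO: {ps4_tf / xbo_tf - 1:.0%} more compute")            # ~40%
print(f"PS5 vs Lockhart: {ps5_tf / lockhart_tf - 1:.0%} more compute")  # 200%
```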

The only way I think MS can sell the Lockhart at $299 is if they either take a huge loss, which MS has shown this gen they really don't have much interest in, or it lacks quite a few features that Sony is making standard for next gen consoles, i.e. ray tracing HW and an SSD to practically eliminate load times, as well as 4K (at least 4K CB for the more graphics-intensive games). At that point, you are correct, and people are going to start wondering what's lacking in the Lockhart to make it that low. Of course, that will be the general consumer audience. The core gamers that make up the majority of early adopters will already know the deal with the Lockhart and will avoid it like the plague. They will want a real jump over this gen, not a half-step that seems almost pointless.



thismeintiel said:

If they aim to copy the features that Sony announced, such as load times being near nonexistent and ray tracing HW, in both SKUs, my guess is the Lockhart is going to be $399. This means that the PS5 is going to be just $100 more for a system that is 3x the power, if leaks are accurate. And with PS5 being B/C with the PS4, I see most people who gamed on the PS4 sticking with the PS5, since they can sell their PS4s and get a system that plays both last gen and next gen games. In this case, I really don't see the Lockhart selling any better than the XBO. There were times when the XBO was on sale for $50+ less than the PS4, yet that still didn't change its fortunes, or even cause it to outsell the PS4 in the XBO's strongest market, the US. And that was against a system that was ~40% more powerful. This would be one that is 200% more powerful if the 4 TFLOPS leak is accurate. God help them if Sony takes a bigger hit and sells for $449. Or if by some magic they still hit that $399 price.

The only way I think MS can sell the Lockhart at $299 is if they either take a huge loss, which MS has shown this gen they really don't have much interest in, or it lacks quite a few features that Sony is making standard for next gen consoles, i.e. ray tracing HW and an SSD to practically eliminate load times, as well as 4K (at least 4K CB for the more graphics-intensive games). At that point, you are correct, and people are going to start wondering what's lacking in the Lockhart to make it that low. Of course, that will be the general consumer audience. The core gamers that make up the majority of early adopters will already know the deal with the Lockhart and will avoid it like the plague. They will want a real jump over this gen, not a half-step that seems almost pointless.

Well a $399 and $499 MS strategy could make sense, like the $299 and $399 PS4 strategy this gen. That would allow consumers who wanted a better SKU but not something too expensive, to move up to PS5 or Anaconda at $499. It does leave the door open for PS to subsidize PS5 and potentially drop it $50-$100, but you would likely assume that MS would have anticipated this and would just make sure to keep a $100 gap between Lockhart and PS5 at all times.

The big problem XB1 had by the time it got worthy price cuts and better looking hardware is that its name was tarnished by then. PS4 already had the momentum and there was little MS could do at that point. The bundles in addition to the price cuts that didn't help much further prove that. Once a new gen starts though, assuming XB1X and maybe even XB1S get scrapped, a quality low priced SKU with the games to back it up will have another chance to shake things up next gen.

I've always thought the 4TF $299 model just didn't make sense. If XB1X is 6TF I find it hard to believe that Lockhart would be any less than that. Now if it has all the bells and whistles, then maybe 4TF was necessary to keep the price down below $399, which shouldn't matter all that much if it's clearly marketed as a 1080p/60 unit. If Lockhart ends up below $399, I don't see any logic in making Anaconda more than $499. Too much gap will also make PS5 more appealing.

Now if PS5 can hit $499 at its leaked specs, then you could also guess that Lockhart at $399 could very well be more than 6TF. The better Lockhart's performance at that price, the tougher it would be for PS5 at $499. However, the closer Lockhart is to PS5, the less room either company may have to drop the price, so if PS bites the bullet and takes another $50 hit, MS may not have the wiggle room to move Lockhart down $50 as well to keep the $100 gap.

Adding a third console to the mix really complicates things, because this can play out in so many different ways with that combo. The advantage PS has is that unless the 'XB2s' launch much later than PS5, PS should be able to sit back and wait for MS to announce first, and can plan around it like they did with the Pro. They can't completely change the console, but they can make adjustments so PS5 can fit in wherever they think works best for them.




eva01beserk said:
@thismeintiel
What does more power really mean at TF counts higher than 12.9? Not more pixels, since I doubt games will push more than 4K. I don't think framerate either, as more than 60fps is useless on consoles. They might focus on ray tracing, but even now most people get shown a comparison and they can't tell it apart.

There was a confirmation from Sony that they are pushing to eliminate loading screens, and the key is SSDs.

I think power won't mean anything next gen.

There are still loading times even with a pair of NVMe SSDs in RAID on a PC...
They should be able to minimize load times rather substantially though.

Ray Tracing is going to be the key focus going forward... And Ray Tracing is inherently compute-limited, so that is where a developer can spend an inordinate amount of theoretical flops.
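Some very rough napkin math on why that is; every number below is an assumption for illustration, not a measured cost:

```python
# Back-of-the-envelope only: every figure here is an assumption for illustration.
width, height, fps = 3840, 2160, 60   # 4K at 60 fps
rays_per_pixel = 2                    # e.g. one reflection ray + one shadow ray
flops_per_ray = 5_000                 # assumed BVH traversal + shading cost per ray

rays_per_second = width * height * fps * rays_per_pixel
flops_needed = rays_per_second * flops_per_ray

print(f"{rays_per_second / 1e9:.1f} billion rays/s")     # ~1.0
print(f"~{flops_needed / 1e12:.0f} TFLOPS just on rays")  # ~5, vs a ~10 TF GPU budget
```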

And it seems like 3D Positional Audio might be making a comeback?



Seems like Sony did flash its dick around back then. Not as much as MS did. But still, hopefully neither will push that much next gen.




Wasn't Lockhart meant to be a streaming-focused device? Which is why they didn't need that much power. They probably don't even need that 4TF, but it was just enough to at least play 1080p/60 so people could still have a low end choice. I mean, there is no guarantee the X or the S will still have games compatible next gen.





@Pemalite
I really don't see the appeal of ray tracing. Maybe next gen console games will really put it to good use and I'll see what's happening. But just reflections, as it's being used now, seems very lackluster for so much power needed.
Sound might be interesting. I could really see a good stealth game with it.

I really hope that next gen offers some other feature. Maybe VR finally breaks into the mainstream. That would be great.




Pemalite said:

Bandwidth is insanely important... Especially for Graphics Core Next and especially at higher resolutions.
Graphics Core Next being a highly compute orientated architecture generally cannot get enough bandwidth.

In saying that... There is a point of diminishing returns... Despite the fact that Vega 7 increased bandwidth by 112% and compute by 9%... Performance only jumped by a modest 30-40% depending on game... So the "sweet spot" in terms of bandwidth is likely between Vega 64 and Vega 7. Maybe 768GB/s?

Vega 7's inherent architectural limitations tend to stem not from Compute or Bandwidth though... So when you overclock the RAM by an additional 20% (1.2TB/s!) you might only get a couple of % points of performance... But bolstering core clock will net an almost linear increase, so it's not bandwidth starved by any measure.

Radeon VII had far better sustained boost clocks than the Vega 64 did. A Radeon VII could reach a maximum of 2GHz while a Vega 64 was at most 1.7GHz when both were OC'd. I imagine there was at least a 20% uplift in compute performance in comparison to the Vega 64. The Radeon VII probably doesn't need 1 TB/s since it's a gaming GPU. The only way I can see why the Radeon VII has as much bandwidth as it does is that it's meant to be competitive in machine learning applications against top-end hardware, nearly all of which is sporting HBM memory modules one way or another ... (also, the Radeon VII was closer to 20-30% faster than the Vega 64 rather than 30-40%, because by the time the Radeon VII released, the Vega 64 was already marginally ahead of the 1080)

Vega 64 was an increase in performance in comparison to the Fury X despite regressing in memory bandwidth so I don't think the Radeon VII needs 1 TB/s when just 640 GB/s could probably do the job just as effectively in giving the chip nearly the same performance uplift ... 

In fact, I don't think I've ever seen a benchmark where the Radeon VII ended up being 2x faster than the Vega 64 ... 
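For reference, here is how those bandwidth/compute percentages fall out of the public board specs (a sketch using peak paper clocks, so sustained numbers will differ):

```python
# Paper specs: Vega 64 (4096 SPs @ ~1546 MHz boost, 483.8 GB/s HBM2) vs
# Radeon VII (3840 SPs @ ~1800 MHz peak, 1024 GB/s HBM2).
vega64  = {"shaders": 4096, "clock_ghz": 1.546, "bw_gbs": 483.8}
radeon7 = {"shaders": 3840, "clock_ghz": 1.800, "bw_gbs": 1024.0}

def tflops(gpu):
    # FP32 throughput = shaders * 2 ops per clock (FMA) * clock
    return gpu["shaders"] * 2 * gpu["clock_ghz"] / 1000

print(f"Compute:   {tflops(radeon7) / tflops(vega64) - 1:+.0%}")      # roughly +9%
print(f"Bandwidth: {radeon7['bw_gbs'] / vega64['bw_gbs'] - 1:+.0%}")  # roughly +112%
```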

Pemalite said:

Not to mention rolling out a version of Direct X 12 for Windows 7.

-----------------------------------------------------------------------------------------------------------------------------------------------------------

EA has proven to be pretty flexible though. They worked with AMD to introduce Mantle... Which was a white elephant... AMD eventually gave up on it... And then Khronos used it for Vulkan for better or worse.

In short though, without a doubt nVidia does get more support in engines on the PC side of the equation over AMD... Despite the fact AMD has had its hardware in the majority of consoles over the last few generations. (Wii, WiiU, Xbox 360, Xbox One, Playstation 4.)

Part of that is nVidia's collaboration with developers... Which has been a thing for decades.

ATI did start meeting nVidia head on back in the R300 days though... Hence the battle-lines between Doom 3 and Half Life 2, but nothing of that level of competitiveness has been seen since.

They didn't take advantage of this during the last generation. The Wii used an overclocked Flipper GPU which was arguably a DX7/8 feature-set design, and the X360 is, according to an emulator developer, exactly like the Adreno 2XX(!) rather than either ATI's R500/R600 ...

AMD only really started taking advantage of low level GPU optimizations during this generation ... 

Pemalite said:

nVidia can also afford to spend more time and effort on upkeep.

Both AMD and nVidia's drivers are more complex than some older Windows/Linux Kernels.

Far more so for Nvidia than AMD because with the latter they just stop updating extremely dissimilar architectures very quickly ... (this is why OpenGL support sucks for pre-GCN GPUs like the HD 5000/6000 series) 

To this day, Nvidia still managed to release WDDM 2.x/DX12 drivers for Fermi ... 

Pemalite said:

Actually I do! But it's not as extensive as you portray it to be.
I.E. Pascal and Maxwell share a significant amount of similarities from top to bottom... Kepler and Fermi could be grouped together also. Turing is a significant deviation from prior architectures, but shares a few similarities with Volta.

Even then AMD isn't as clean cut either... They have GCN 1.0, 2.0, 3.0, 4.0, 5.0 and soon 6.0.

With Pascal and Maxwell, the other day I heard from Switch emulator developers that their shaders were NOT compatible and that emulating half precision instructions on Pascal broke things. I VERY much doubt you can group Kepler with Fermi because you don't even have bindless texture handles or support for subgroup operations on Fermi ... 

Things were worse on the CUDA side, where Nvidia publicly decided to deprecate a feature known as "warp-synchronous programming" on Volta, and this led to real-world breakage in applications that relied on previous hardware behaviour. Even with their OWN APIs and their humongous intermediate representation (PTX intrinsics) backend, Nvidia CAN'T even promise that their sample code or features will actually be compatible with future versions of the CUDA SDK!

At least with AMD and their GCN iterations, developers won't have to worry about application breakage no matter how tiny AMD's driver teams may be ... 

Pemalite said:

Back before this re-badging... Performance used to increase at a frantically rapid rate even on the same node.

Think about it this way ... 

If AMD weren't burdened by maintaining their software stack such as their drivers, they could be using those resources instead to SOLELY improve their GCN implementations much like how Intel has been evolving x86 for over 40 years!

Pemalite said:

nVidia is an ARM licensee. They can use ARM's design instead of Denver... From there they really aren't going to be that different from any other ARM manufacturer that uses vanilla ARM cores.

For mobile your point about power is relevant, but for a fixed console... Not so much. You have orders of magnitude more TDP to play with.
An 8-core ARM SoC with a Geforce 1060 would give an Xbox One X with its 8-core Jaguars a run for its money.

They could but there's no point since ARM's designs are much too low power/performance for Nvidia's tastes so they practically have to design their own "high performance" ARM cores just like every other licensee especially if they want to compete in home consoles. Nvidia's CPU designs are trash that compiler backend writers have to work around ...  

I doubt Nvidia will be able to offer backwards compatibility as well which is another hard requirement ... 

Pemalite said:

Your claim doesn't hold water. nVidia increased margins by only 4.9%, but revenues still shot up far more.

nVidia is diversifying, as you alluded to... Their console and PC gaming customer base isn't really growing, hence diversification is where they are seeing the bulk of their gains.
nVidia certainly does have a future, they aren't going anywhere soon... They have Billions in their war chest.

Nvidia's newer report seems to paint a much darker picture than it did over 6 months ago so their growth is hardly organic ... 

Plus Nvidia spent nearly $7B to defend Mellanox from an Intel takeover just to protect their own cloud/data center business LOL ... 

Nvidia's acquisition of Mellanox is at the mercy of Chinese regulators as well just like Qualcomm's acquisition of NXP. If the deal falls apart (likely because of China), what other 'friends' do Nvidia have to fallback to ? What happens if AMD or Intel get more ambitious with their APUs and start targeting GTX 1080 levels of graphics performance ? (possible with DDR5 and 7nm EUV) 

Pemalite said:

I don't think even good drivers could actually solve the issues some of their IGP's have had... Especially parts like the x3000/x3100 from old.

Their Haswell and up lineup is fine. Certainly, older Intel parts had tons of hardware issues but that was well in the past so all they need is good drivers ...

Pemalite said:

Xe has me excited. Legit. But I am remaining optimistically cautious... Because just like with all their other claims to fame in regards to Graphics and Gaming... it has always resulted in a product that was stupidly underwhelming or ended up cancelled.


But like I said... If any company has the potential, it's certainly Intel.

Meh, I'm not as optimistic as you are unless they use another foundry to manufacture Xe because I don't trust that they'll actually launch 10nm ... 

Pemalite said:

Well... It was a game built for 7th gen hardware first and foremost.
However... Considering it's one of the largest selling games in history... Is played by millions of gamers around the world... And actually still pretty demanding even at 4k, it's a relevant game to add to any benchmark in my opinion.

It's one data point though, you do need others in a benchmark "suite" so you can get a comprehensive idea how a part performs in newer and older titles, better or worse.

It being specifically built for last generation is exactly why we should dump it ... 

Crysis is relatively demanding even for today's hardware, but no sane benchmark suite will include it because of its flaw of relying heavily on single-threaded performance ...

"Demanding" is not a sign of technical excellence like we saw with ARK Survival Evolved. A benchmark suite should be designed to represent the workload demands of current generation AAA game graphics, not last generation AAA game graphics ... 



eva01beserk said:
@Pemalite
I really don't see the appeal of ray tracing.

There is a ton of appeal to Ray Tracing. Reflections, Lighting, Shadowing... All see marked improvements.

Before the current "variants" of Ray Tracing (which is all the rage thanks to nVidia's RTX) we were heading down the road of Global Illumination Path Tracing, which is a variant of Ray Tracing, even as far back as the 7th generation of consoles. (Especially towards the end of the generation, especially so on the PC releases.)

So despite you not seeing much "appeal" for the technology... You have actually been seeing it for years.

eva01beserk said:

Sound might be interesting. I could really see a good stealth game with it.

We actually had 3D positional Audio before... The PC was pioneering it with Aureal's A3D... And the Original Xbox with its SoundStorm solution.
Then we sort of went backwards/stagnated for years in all markets.

It's not just Stealth games that see big benefits from it, it just makes everything feel more surreal and dynamic.

eva01beserk said:

I really hope that next gen offers some other feature. Maybe VR finally breaks into the mainstream. That would be great.

I doubt VR will ever gain much more traction. After all these years... It never materialized into anything substantive like motion controls did with the Wii.
But hey. Happy to be proven wrong.

fatslob-:O said:

Radeon VII had far better sustained boost clocks than the Vega 64 did. A Radeon VII could reach a maximum of 2Ghz while Vega 64 was at most 1.7Ghz when both were OC'd. I imagine that there was at least a 20% uplift in compute performance in comparison to the Vega 64. The Radeon VII probably doesn't need 1 TB/s since it's a gaming GPU.

Overall the difference isn't that pronounced.
https://www.tomshardware.com/reviews/amd-radeon-vii-vega-20-7nm,5977-7.html
https://www.tomshardware.com/reviews/amd-radeon-rx-vega-64,5173-18.html

Vega 7 has a clockspeed advantage sure, but it's not a significant one... And Vega 64 makes up for it with its extra CUs, meaning the difference in overall compute isn't that dramatic.

fatslob-:O said:

The Radeon VII probably doesn't need 1 TB/s since it's a gaming GPU. The only way I can reason why the Radeon VII has as much bandwidth as it does is meant to be competitive in machine learning applications with top end hardware nearly all of which are sporting HBM memory modules one way or another ... (also the Radeon VII was closer to 20-30% faster than the Vega 64 rather than 30-40% because once the Radeon VII released, the Vega 64 was already marginally ahead of the 1080)

Vega 7 is based on Radeon Instinct MI50 which is meant for deep-learning/machine-learning/GPGPU workloads.

There is also a successor part, aka the MI60, with the full 64 CU complement and higher clocks.

AMD had no answer to nVidia's high-end, so they took the Instinct GPU and rebadged it as Vega 7.

There is always a deviation in benchmarks from different time frames and even sources. - 30-40% is a rough ballpark as is, no need to delve into semantics, the original point still stands.

fatslob-:O said:

They didn't take advantage of this during last generation.

And they didn't take advantage of it this generation either.

So how many generations do we give AMD before your statement can be regarded as true?

fatslob-:O said:

The Wii used an overclocked Flipper GPU which was arguably a DX7/8 feature-set design, and the X360 is, according to an emulator developer, exactly like the Adreno 2XX(!) rather than either ATI's R500/R600 ...

Functionally, the Wii/GameCube GPUs could technically do everything (i.e. from an effects perspective) that the Xbox 360/Playstation 3 can do, via TEV.
However due to the sheer lack of horsepower, such approaches were generally not considered.

As for the Xbox 360... It's certainly an R500 derived semi-custom part that adopted some features and ideas from R600/Terascale, I wouldn't say it closely resembled Adreno though... Because Adreno was originally derived from Radeon, so of course there are intrinsic similarities from the outset.

Why reinvent the wheel?

fatslob-:O said:

AMD only really started taking advantage of low level GPU optimizations during this generation ... 

As a Radeon user dating back over a decade... I haven't generally seen it.

fatslob-:O said:

Far more so for Nvidia than AMD because with the latter they just stop updating extremely dissimilar architectures very quickly ... (this is why OpenGL support sucks for pre-GCN GPUs like the HD 5000/6000 series) 

To this day, Nvidia still managed to release WDDM 2.x/DX12 drivers for Fermi ... 

Actually OpenGL support wasn't too bad for the Radeon 5000/6000 series, even when I was running quad-Crossfire Radeon 6950s unlocked into 6970s. - You might recall I did some bandwidth scaling benchmarks of those cards years ago.

The Radeon 5870 was certainly a fine card that stood the test of time though, more so than my 6950 cards. (I actually have one sitting on my shelf next to me!)

fatslob-:O said:

With Pascal and Maxwell, the other day I heard from Switch emulator developers that their shaders were NOT compatible and that emulating half precision instructions on Pascal broke things. I VERY much doubt you can group Kepler with Fermi because you don't even have bindless texture handles or support for subgroup operations on Fermi ... 

Things were worse on the CUDA side, where Nvidia publicly decided to deprecate a feature known as "warp-synchronous programming" on Volta, and this led to real-world breakage in applications that relied on previous hardware behaviour. Even with their OWN APIs and their humongous intermediate representation (PTX intrinsics) backend, Nvidia CAN'T even promise that their sample code or features will actually be compatible with future versions of the CUDA SDK!

At least with AMD and their GCN iterations, developers won't have to worry about application breakage no matter how tiny AMD's driver teams may be ... 

Nah. Maxwell and Pascal can be lumped together.
https://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/4

Kepler and Fermi too.

Volta never really made a big appearance on Desktop... So Turing is the start of a fresh lineup.

And just like if you were building software targeting specific features in AMD's Next-Gen Compute unit... Compatibility will likely break with older variants of Graphics Core Next. It's just the nature of the game.

AMD re-badging hardware obviously does result in fewer occurrences of this happening... But I want more performance. AMD isn't delivering.

fatslob-:O said:

Think about it this way ... 

If AMD weren't burdened by maintaining their software stack such as their drivers, they could be using those resources instead to SOLELY improve their GCN implementations much like how Intel has been evolving x86 for over 40 years!

I am not disagreeing with you. I actually find AMD's drivers to be the best in the industry right now - which was typically the crown nVidia held during the early Graphics Core Next and prior years.

I do run hardware from both companies so I can anecdotally share my own experiences.

fatslob-:O said:

They could but there's no point since ARM's designs are much too low power/performance for Nvidia's tastes so they practically have to design their own "high performance" ARM cores just like every other licensee especially if they want to compete in home consoles. Nvidia's CPU designs are trash that compiler backend writers have to work around ...  

I doubt Nvidia will be able to offer backwards compatibility as well which is another hard requirement ... 

ARM has higher performing cores. Certainly ones that would beat Denver.

fatslob-:O said:

Nvidia's newer report seems to paint a much darker picture than it did over 6 months ago so their growth is hardly organic ... 

Plus Nvidia spent nearly $7B to defend Mellanox from an Intel takeover just to protect their own cloud/data center business LOL ... 

Nvidia's acquisition of Mellanox is at the mercy of Chinese regulators as well just like Qualcomm's acquisition of NXP. If the deal falls apart (likely because of China), what other 'friends' do Nvidia have to fallback to ? What happens if AMD or Intel get more ambitious with their APUs and start targeting GTX 1080 levels of graphics performance ? (possible with DDR5 and 7nm EUV) 

DDR5 will not offer the appropriate levels of bandwidth to allow APUs to equal a Geforce 1080. Especially at 7nm... That claim is without any basis in reality.

Either way, Anandtech's analysis isn't painting nVidia's outlook as bleak. They are growing, they are diversifying, they have billions in the bank, they are actually doing well. Which is a good thing for the entire industry, competition is a must.

fatslob-:O said:
Their Haswell and up lineup is fine. Certainly, older Intel parts had tons of hardware issues but that was well in the past so all they need is good drivers ..

Minus the average performance the hardware puts out... And the lack of long term support.

You would literally need to pay me to use Intel Decelerator Graphics.

fatslob-:O said:
Meh, I'm not as optimistic as you are unless they use another foundry to manufacture Xe because I don't trust that they'll actually launch 10nm ... 

Intel's 7nm seems to still be on track with its original timeline. I am optimistic, but I am also realistic; Intel and graphics have a shitty history, but you cannot deny that some of the features Intel is touting are whetting the nerd appetite?

fatslob-:O said:

It being specifically built for last generation is exactly why we should dump it ... 

Crysis is relatively demanding even for today's hardware but no sane benchmark suite will include it because of it's flaw in relying heavily on single threaded performance ... 

"Demanding" is not a sign of technical excellence like we saw with ARK Survival Evolved. A benchmark suite should be designed to represent the workload demands of current generation AAA game graphics, not last generation AAA game graphics ... 

Crysis is demanding for today's hardware because it wasn't designed for today's hardware.
Crysis was designed at a time when clockrates were ever-increasing and Crytek thought that trajectory would continue indefinitely.

Today CPU's have gotten wider with more cores, which means that newer successive Cry Engines tend to be more performant.

In saying that, GTA 5 isn't as old as Crysis anyway and certainly has an orders-of-magnitude greater player base.

I believe that benchmarks should represent the games that people are playing today. - If a game is older and based in a Direct X 9/11 era, so be it, it's still representative of what gamers are playing... In saying that, they also need newer titles to give an idea of performance in newer titles.

It's like when Terascale was on the market, it had amazing Direct X 9 performance... But it wasn't able to keep pace in newer Direct X 11 titles, hence why AMD architectured VLIW 4, to give a better balance leaning towards Direct X 11 titles as each unit was able to be utilized more consistently.

So having a representation of mixed older and newer games is important in my opinion as it gives you a more comprehensive data point to base everything on.



Pemalite said:

And they didn't take advantage of it this generation either.


So how many generations do we give AMD before your statement can be regarded as true?

Maybe until the next generation arrives? It'll be way more pronounced from then on ...

Pemalite said:

Functionally the Wii/Gamecube GPU's could technically do everything (I.E. From an effects perspective) the Xbox 360/Playstation 3 can do via TEV.
However due to the sheer lack of horsepower, such approaches were generally not considered.

As for the Xbox 360... It's certainly an R500 derived semi-custom part that adopted some features and ideas from R600/Terascale, I wouldn't say it closely resembled Adreno though... Because Adreno was originally derived from Radeon, so of course there are intrinsic similarities from the outset.

Why reinvent the wheel?

No, they really couldn't. The Wii's TEV was roughly the equivalent of Shader Model 1.1 for pixel shaders, so they were still missing programmable vertex shaders. The Flipper still used hardware-accelerated T&L for vertex and lighting transformations, so you just couldn't add the feature as easily since it required redesigning huge parts of the graphics pipeline. Both the 360 and the PS3 came with a bunch of their own goodies as well. For the former, you had a GPU that was completely 'bindless', and the 'memexport' instruction was a nice precursor to compute shaders. With the latter, you have SPUs, which are amazingly flexible like the Larrabee concept; you effectively had the equivalent of compute shaders, and it even supported hardware transactional memory(!) as featured on Intel's Skylake CPU architecture ... (X360 and PS3 had forward-looking hardware features that were far ahead of their time)

The Xbox 360 is definitely closer to an Adreno 2XX design than an R5XX design, which is not a bad thing. Xbox 360 emulator developers especially seek out Adreno 2XX devices for reverse engineering purposes, since it helps further their goals ...

Pemalite said:

Actually OpenGL support wasn't to bad for the Radeon 5000/6000 series, even when I was running quad-Crossfire Radeon 6950's unclocked into 6970's. - You might recall I did some bandwidth scaling benchmarks of those cards years ago.


The Radeon 5870 was certainly a fine card that stood the test of time though, more so than my 6950 cards. (I actually have one sitting on my shelf next to me!)

That's not what I heard from the community. AMD's OpenGL drivers are slow in content creation, professional visualization, and games, and I consistently hear about how broken they are in emulation ... (for Blender, AMD's GL drivers weren't even on the same level as the GeForce 200 series until GCN came along)

The last high-end games I remember using OpenGL were No Man's Sky and Doom, but until a Vulkan backend came out, AMD got smoked by their Nvidia counterparts ...

OpenGL on AMD's pre-GCN arch is a no go ... 

Pemalite said:

Nah. Maxwell and Pascal can be lumped together.
https://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/4

Kepler and Fermi too.

Volta never really made a big appearance on Desktop... So Turing is the start of a fresh lineup.

And just like if you were building software targeting specific features in AMD's Next-Gen Compute unit... Compatibility will likely break with older variants of Graphics Core Next. It's just the nature of the game.

AMD re-badging hardware obviously does result in less occurrence of this happening... But I want more performance. AMD isn't delivering.

CUDA documentation seems to disagree that you could just group Kepler and Fermi together. Kepler has FCHK, IMADSP, SHF, SHFL, LDG, STSCUL, and a totally new set of surface memory instructions. The other things Kepler deprecated from Fermi were LD_LDU, LDS_LDU, STUL, STSUL, LONGJMP, PLONGJMP, LEPC, but all of this is just on the surface(!) since NVPTX assembly is not Nvidia GPUs' native ISA. PTX is just a wrapper that keeps Nvidia GPUs' true ISA hidden, so there could be many other changes going on underneath ... (people have to use envytools to reverse engineer Nvidia's blob so that they can make open source drivers)

Even with Turing, Nvidia does not group it together with Volta since Turing has specialized uniform operations ... 

@Bold No developers are worried about new software breaking compatibility with old hardware but just about everyone is worried about new hardware breaking compatibility with old software and that is becoming unacceptable. AMD does NOT actively do this in comparison to Nvidia ... 

These are the combinatorial configurations that Nvidia has to currently maintain ... 

APIs: CUDA, D3D11/12, Metal (Apple doesn't do this for them), OpenCL, OpenGL, OpenGL ES, and Vulkan 

Platforms: Android (still releasing quality drivers despite zero market share), Linux, MacOS, and Windows (all of them have different graphics kernel architecture)

Nvidia has to support ALL of that on at least 4 major instruction encodings, namely Kepler, Maxwell/Pascal, Volta, and Turing, and they only MAKE ENDS MEET in terms of compatibility with more employees than AMD, which by comparison only has to maintain the following ...

APIs: D3D11/12, OpenGL (half-assed effort over here), and Vulkan (Apple makes the Metal drivers for AMD's case plus AMD stopped developing OpenCL and OpenGL ES altogether) 

Platforms: Linux and Windows 

AMD has at most 2 major instruction encodings with GCN, which are GCN1/GCN2/Navi (Navi is practically a return to consoles judging from LLVM activity) and GCN3/Vega (GCN4 shares the same exact ISA as GCN3), but despite focusing on fewer APIs/platforms they STILL CAN'T match Nvidia's OpenGL driver quality ...
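Just tallying the combinations named above (counts only, which obviously says nothing about how hard each one is to support):

```python
# Counts are taken straight from the API/platform/ISA lists above; nothing more scientific.
nvidia = {
    "apis": ["CUDA", "D3D11/12", "Metal", "OpenCL", "OpenGL", "OpenGL ES", "Vulkan"],
    "platforms": ["Android", "Linux", "MacOS", "Windows"],
    "isa_encodings": ["Kepler", "Maxwell/Pascal", "Volta", "Turing"],
}
amd = {
    "apis": ["D3D11/12", "OpenGL", "Vulkan"],
    "platforms": ["Linux", "Windows"],
    "isa_encodings": ["GCN1/GCN2/Navi", "GCN3/GCN4/Vega"],
}

for name, stack in (("Nvidia", nvidia), ("AMD", amd)):
    combos = len(stack["apis"]) * len(stack["platforms"]) * len(stack["isa_encodings"])
    print(f"{name}: {combos} API x platform x ISA combinations")
# Nvidia: 112, AMD: 12
```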

Pemalite said:

I am actually not disagreeing with you. I actually find AMD's drivers to actually be the best in the industry right now. - Which was typically the crown nVidia held during the early Graphics Core Next and prior years.

I do run hardware from both companies so I can anecdotally share my own experiences.

AMD drivers are pretty great but they'd be even better if they didn't have to deal with maintaining D3D11 drivers and by extension especially OpenGL drivers ... 

Pemalite said:

DDR5 will not offer the appropriate levels of bandwidth to allow APU's to equal a Geforce 1080. Especially at 7nm... That claim is without any basis in reality.

Either way, Anandtechs analysis isn't painting nVidia outlook as bleak. They are growing, they are diversifying, they have billions in the bank, they are actually doing well. Which is a good thing for the entire industry, competition is a must.

I don't see why not? A single-channel DDR4 module at 3.2GHz can deliver 25.6 GB/s. Octa-channel DDR5 clocked at 6.4GHz can bring a little under 410 GB/s, which is well above a 1080, and with 7nm you get to play around with more transistors in your design ...
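That is just the standard transfer rate x bus width x channel count arithmetic; a quick sketch of the theoretical peaks (assuming 64-bit channels and ignoring real-world efficiency losses):

```python
# Theoretical peak bandwidth; assumes 64-bit channels and ignores efficiency losses.
def ddr_bandwidth_gbs(mt_per_s, channels, bus_bits=64):
    return mt_per_s * (bus_bits // 8) * channels / 1000

print(ddr_bandwidth_gbs(3200, channels=1))  # DDR4-3200, single channel -> 25.6 GB/s
print(ddr_bandwidth_gbs(6400, channels=8))  # DDR5-6400, octa channel   -> 409.6 GB/s
# For comparison, a GTX 1080 sits at 320 GB/s (256-bit GDDR5X at 10 Gbps).
```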

Anandtech may not paint their outlook as bleak, but Nvidia's latest numbers and attempt at an acquisition don't bode well for them ...

Pemalite said:

Intels 7nm seems to still be on track with it's original timeline. I am optimistic, but I am also realistic, Intel and Graphics have a shitty history, but you cannot deny that some of the features Intel is touting is wetting the nerd appetite?

I doubt it, Intel are not even committing to a HARD date for the launch of their 10nm products so no way am I going to trust them anytime soon with 7nm unless they've got a contingency plan in place with either Samsung or TSMC ... 

Pemalite said:

Crysis is demanding for today's hardware because it wasn't designed for today's hardware.
Crysis was designed at a time when clockrates were ever-increasing and Crytek thought that trajectory would continue indefinitely.

Today CPU's have gotten wider with more cores, which means that newer successive Cry Engines tend to be more performant.

In saying that, GTA 5 isn't as old as Crysis anyway and certainly has an orders-of-magnitude greater player base.

I believe that benchmarks should represent the games that people are playing today. - If a game is older and based in a Direct X 9/11 era, so be it, it's still representative of what gamers are playing... In saying that, they also need newer titles to give an idea of performance in newer titles.

It's like when Terascale was on the market, it had amazing Direct X 9 performance... But it wasn't able to keep pace in newer Direct X 11 titles, hence why AMD architectured VLIW 4, to give a better balance leaning towards Direct X 11 titles as each unit was able to be utilized more consistently.

So having a representation of mixed older and newer games is important in my opinion as it gives you a more comprehensive data point to base everything on.

We should not use old games because it's exactly as you said, "it wasn't designed for today's hardware", which is why we shouldn't skew a benchmark suite to be heavily weighted in favour of past workloads ...



Guys, a new REAL leak. videocardz.com is very reliable, and as I have been predicting, Navi can reach 2GHz+.

All those YouTube rumors are fake and this one is real. It's Navi 10, and PS5 will have a cut-down Navi 10 = 36 CUs clocked at 1.8GHz, giving 8.3TF :) it's almost over now hehe.

Edit: Looked at PC gaming benchmarks to see where performance should land. The PS5 should have around GeForce 1080/Vega 64 performance if this information is correct.
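For what it's worth, the 8.3TF figure is just the usual shaders x 2 ops x clock arithmetic, assuming 64 stream processors per CU as on GCN (leaked numbers, nothing confirmed):

```python
# Leaked/rumoured numbers only; 64 SPs per CU assumed, 2 FP32 ops per clock (FMA).
cus, sp_per_cu, clock_ghz = 36, 64, 1.8
tflops = cus * sp_per_cu * 2 * clock_ghz / 1000
print(f"{tflops:.1f} TF")  # -> 8.3 TF
```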

Last edited by Trumpstyle - on 14 May 2019

"Donald Trump is the greatest president that god has ever created" - Trumpstyle

6x master league achiever in starcraft2

Beaten Sigrun on God of war mode

Beaten DOOM ultra-nightmare with NO endless ammo-rune, 2x super shotgun and no decoys on ps4 pro.

1-0 against Grubby in Wc3 frozen throne ladder!!