Navi Made in Collab with Sony, MS Still Using It?


Poll: Pricing of Xbox vs PS5

Xbox +$150 > PS5: 0 (0.00%)
Xbox +$100 > PS5: 5 (14.71%)
Xbox +$50 > PS5: 4 (11.76%)
PS5 = Xbox with slight performance boost: 7 (20.59%)
PS5 = Xbox with no performance boost: 2 (5.88%)
Xbox will not have the pe...: 3 (8.82%)
Still too early, wait for MS PR: 13 (38.24%)

Total: 34
fatslob-:O said:
Pemalite said:

In Crysis's case, it's actually pretty awesome; it's just a mod for a game released in 2007. Are there better approaches? Sure.

But considering how amazing Crysis can look with path tracing via the depth buffer and a heap of graphics mods... The game can look jaw-droppingly gorgeous despite being 12+ years old.

Trust me, you do not want to know the horrors of how hacky the mod is ... 

The mod does not trace according to lighting information; it traces according to the brightness of each pixel, so bounce lighting even in screen space is already incorrect. If you want proper indirect lighting as well, then you need a global scene representation data structure such as an octree, a BVH, or a k-d tree for correct ray traversal. Using a local scene representation data structure such as a depth buffer will cause a lot of issues once the rays "go outside" the data structure ... 

As decent as Crysis looks today, it hurts painfully that it's still not physically based ...

Which is why I stipulated it's "pretty awesome" for a game from "2007".
If a game were released today, I would expect a different approach.

It's no less "hacky" than say... ENB anyway.
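The depth-buffer limitation described in this exchange can be sketched in a few lines. This is an illustrative toy (made-up buffer and screen-space units, not the Crysis mod's actual code): a ray marched through a depth buffer can only hit surfaces the buffer stores, so any ray that leaves the viewport returns nothing.

```python
def ssr_trace(depth, origin, direction, steps=64, thickness=0.05):
    """March a ray through a screen-space depth buffer (a *local* scene
    representation). Unlike a BVH/octree/k-d tree over the whole scene,
    the buffer holds no geometry outside the viewport, so rays that
    leave the screen simply miss."""
    h, w = len(depth), len(depth[0])
    x, y, z = origin                      # screen-space position (px, px, depth)
    dx, dy, dz = (c / steps for c in direction)
    for _ in range(steps):
        x, y, z = x + dx, y + dy, z + dz
        px, py = int(x), int(y)
        if not (0 <= px < w and 0 <= py < h):
            return None                   # ray "goes outside" the data structure
        if depth[py][px] <= z <= depth[py][px] + thickness:
            return (px, py)               # hit a stored surface
    return None

# An 8x8 buffer storing a flat surface at depth 0.5.
depth = [[0.5] * 8 for _ in range(8)]
print(ssr_trace(depth, (1, 1, 0.0), (0, 0, 1)))      # straight in: hits (1, 1)
print(ssr_trace(depth, (1, 1, 0.0), (12, 0, 0.5)))   # exits the screen: misses
```

The second ray would intersect scene geometry in a global structure like a BVH, but here it walks off the edge of the buffer first, which is exactly why screen-space bounce lighting loses off-screen contributions.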

fatslob-:O said:

AMD not being able to keep pace with Nvidia is mostly down to the latter releasing bigger dies. A Radeon VII is nearly like for like to the RTX 2080 in performance given both of their transistor counts ... (it's honestly not as bad as you make it out to be) 

It is as bad as I make it out to be.
The Radeon VII is packaged with far more expensive HBM2 memory... And despite being built at 7nm, it will still consume 40W+ more power during gaming.

Not an ideal scenario for AMD to be in... Which is why they couldn't undercut nVidia's already high-priced 2080 to lure gamers in. In short... It's a bad buy.

In saying that, I have to give credit where credit is due... Radeon VII is an absolute compute monster.

fatslob-:O said:

Things have been changing but PCs lag consoles by a generation in terms of graphics programming. DX11 wasn't the standard until the PS4/X1 released and it's likely DX12 will end up being the same. The situation is fine as it is but things can improve if a couple of engines make the jump like the Dunia Engine 2.0, AnvilNEXT 2.0, and especially Bethesda's Creation Engine ... (it would help even more if reviewers didn't use outdated titles like GTA V or Crysis 3 for their benchmark suite) 

Some more extensions in DX12 would help like OoO raster and rectangle primitive ... 

Things are always changing. The Xbox One has DirectX 12 and some developers use its features... But any serious developer will target the low-level APIs anyway.

fatslob-:O said:

By targeting a standardized API like DX11? Sure. Targeting low-level details of their hardware? Not so, because Nvidia rarely values compatibility, so optimizations can easily break. The Switch is an exception since it's a fixed hardware design, so developers can be bothered to invest somewhat ... (Switch software is not nearly as investment-heavy in comparison to current home consoles, so developers might not care all that much if its successor isn't backwards compatible)

The Switch isn't as fixed as we think it is... Considering its plethora of performance states... But I digress. Maxwell is pretty easy to target anyway.

nVidia has most engines onboard... And this has been a long historical trend, hence their "nVidia, the way it's meant to be played" campaign: CryEngine, Unreal Engine, Unity... The list goes on.
They work closely with a lot of industry bodies, more so than AMD has historically done... Which has both its pros and cons.

It does mean that AMD is less likely to engage in building up technologies which are exclusive to their hardware.

fatslob-:O said:
Nvidia dedicates far more resources on maintaining their entire software stack rather than focusing on working with developers. When they release a new architecture, they need to make a totally different shader compiler but they waste a lot of other engineering resources as well on non-gaming things such as CUDA and arguably OpenGL ... 

AMD does the same, hence why they cut off TeraScale support in their drivers a couple of years after they were still releasing TeraScale-based APUs.
There are obviously pros and cons to each company's approach.

fatslob-:O said:

This 'recycling' has its advantages, as seen in x86. Hardware designers get to focus on what's really important, which are the hardware features, and software developers get to keep compatibility ...

If AMD can't dominate PC gaming performance then they just need to exceed it with higher console performance, so hopefully we can see high-end console SKUs at $700 or maybe even up to $1000 to truly take on Nvidia in the gaming space ... 

The recycling results in stagnation... It's as simple as that. AMD has stagnated for years, and nVidia stagnated when they were recycling hardware.
The other issue is... It's not a good thing for the consumer. When you buy a new series of GPUs, you are hoping for something new, not something old with a different sticker... It's far from a good thing.

fatslob-:O said:

Both consoles and PCs are taking notes from each other. Consoles are getting more features from PCs like backwards compatibility while PCs are becoming more closed platforms (we don't get to choose our OS or CPU ISA anymore) than ever before ...

Agreed. There is still room for things to become disrupted in the console space though, if IBM, Intel, or nVidia etc. offer a compelling solution to Sony or Microsoft, but the chances of that are pretty slim to nonexistent anyway.
No one is able to offer such high-performing graphics with a capable CPU other than nVidia... And nVidia is expensive, meaning not ideal for a cost-sensitive platform.

fatslob-:O said:

Nvidia may very well have been focused on cloud computing, but the future won't be GPU compute or closed APIs like CUDA anymore. The future of cloud is going to be offloading from x86 or designing specialized AI ASICs, so Nvidia's future is relatively fickle if they can't maintain long-term developer partnerships, and they're also at the mercy of other CPU ISAs like x86 or POWER ... 

nVidia does have some options. They don't need x86 or POWER to remain relevant; ARM is making inroads into the cloud computing/server space, albeit slowly.
I mean, ARM was such a serious threat that AMD even invested in it.
https://www.amd.com/en/amd-opteron-a1100

nVidia is also seeing substantial growth in the datacenter environment, with an 85% increase in revenue.
https://www.anandtech.com/show/13235/nvidia-announces-q2-fy-2019-results-record-revenue

So I wouldn't discount them just yet... They have some substantial pull.

fatslob-:O said:

Nvidia is just as non-existent as AMD is in the mobile space. In fact, graphics technology is not all that important, given that driver quality on Android makes Intel look amazing by comparison! The last time Nvidia had a 'design win' in the 'mobile' (read: phones) space was with the Tegra 4i? 

Indeed. Although parts like the MX110/MX150 got a TON of design wins in notebooks, which were devices that went up against AMD's Ryzen APUs and often had the advantage in terms of graphics performance.

Mobile is a very fickle space... You have Qualcomm, and that is it... Apple, Huawei, and Samsung all build their own SoCs, so there is very little market for nVidia to latch onto... I guess AMD made the right decision years ago to spin off Adreno to Qualcomm.

And even Chinese manufacturers like Xiaomi are entering the SoC game for their budget handsets... Meaning the position of the likes of MediaTek and so on probably looks tenuous over the long term.

However, Tegra isn't done and dusted yet; nVidia is seeing growth in vehicles, IoT and so on.

fatslob-:O said:

Honestly, if anyone has good graphics technology in the mobile space then it is Apple, because their GPU designs are amazing. It doesn't hurt that the Metal API is a much simpler alternative to either Vulkan or OpenGL ES while also being nearly as powerful as the other modern gfx APIs (DX12/Vulkan), so developers will happily port their games over to Metal. Connectivity is more important though, as the latest settlement between Apple and Qualcomm showed us. Despite Apple being a superior graphics system architect in comparison to the Adreno team owned by Qualcomm, the former capitulated to the latter since they couldn't design state-of-the-art mobile 5G modems. 5G is more important than superior graphics performance in the mobile space ... 

Apple not only has impressive graphics technology... But equally impressive energy efficiency.
Even their CPU cores tend to be extremely efficient... But also have substantial performance ceilings; it's actually impressive what they achieve.

In saying that... They do own everything from top to bottom, so they are able to garner some efficiency advantages that Android just cannot match.

fatslob-:O said:

Intel's graphics hardware designs aren't the biggest problem IMO. It's that nearly no developers prioritize Intel's graphics stack, so the poor end-user experience is mostly the fault of poor drivers and poor developer relations ... (sure, their hardware designs are on the more underwhelming side, but what kills it for people is that the drivers DON'T WORK)

Intel's graphics have historically been shit as well.
Even when things played out in Intel's favour and they had optimized their graphics for games like Half-Life... They still trailed the likes of ATI/AMD/nVidia.

Even back in the late 90's/early 2000's I would have opted for an S3/Matrox part over an Intel solution... And that says something... And they were arguably more competitive back then!

But drivers are probably Intel's largest Achilles' heel. They are investing more on that front... And they absolutely must if they wish to be a force in the PC gaming market.

fatslob-:O said:

Older Intel integrated graphics hardware designs sure stunk, but Haswell/Skylake changed this dramatically, and they look to be ahead of either AMD or Nvidia from a feature-set standpoint, but whether it'll come in handy in the face of the other aforementioned problems is another matter entirely ... 

Haswell was a big step up, but still pretty uninspiring... Haswell's Iris Pro did manage to double the performance of AMD's Trinity mobile APUs in some instances... But you would hope so with a chunky amount of eDRAM and without the TDP restrictions.

A large portion of Haswell's advantage in the integrated graphics space back then was also partly attributable to Intel's vastly superior CPU capability... Which is partly why the 5800K was starting to catch the Haswell Iris Pro, thanks to a dramatic uplift in CPU performance.

However, AMD then pretty much left Intel's decelerator graphics in the dust going forward... Not to mention better 99th percentiles, frame pacing, and game compatibility with AMD's solutions.

I would take Vega 10/Vega 11 integrated graphics over any of Intel's efforts currently.

fatslob-:O said:

More importantly, when are we EVER going to see the equivalent brand/library optimization of either AMD's Gaming Evolved/GPUOpen or Nvidia's TWIMTBP/GameWorks from Intel?

They are working on it!
https://www.anandtech.com/show/14117/intel-releases-new-graphics-control-panel-the-intel-graphics-command-center

They have years worth of catching up to do, but they are making inroads... If anyone can do it though, Intel probably can.

fatslob-:O said:

An X1X demolishes the 1060 in SWBF II and yikes, most of Anandtech's benchmarks are using DX11 titles, especially the dreaded GTA V ... 

An RX 470/570 is nowhere near as bad against the 1060 in DX12 or Vulkan titles ... 

Benchmark suite testing design is a big factor in terms of performance comparisons ... 

Depends on the 470... The 1060 is superior in every meaningful metric, with an advantage of upwards of 50%.
https://www.anandtech.com/bench/product/1872?vs=1771

Anandtech does need to update its benchmark suite... But even with dated titles like Grand Theft Auto V... That game is still played heavily by millions of gamers, so I suppose it's important to retain for a while longer yet... Plus it's still a fairly demanding title at 4K, all things considered.

At the end of the day, a GeForce 1060 is a superior choice for gaming over a Radeon RX 470 or 570, unquestionably.

fatslob-:O said:
I don't see any benchmarks specific to a 1060 in those links that suggest a 1060 is actually up to par with the X1X ... 

There wasn't supposed to be; I was pointing out that Forza 7 had a patch to fix performance issues.

fatslob-:O said:
Would it be a very shit PC port if a 580 somehow matched a 1070 ? 

In short, yes. The 1070 is a step up over an RX 580.

fatslob-:O said:
Sooner or later, a 580 or an X1X will definitively pull ahead of a 1060 by a noticeably bigger margin than they do now ... 

By then, the RX 580 and Xbox One X will be irrelevant anyway, with next-gen GPUs and consoles in our hands.

No point playing "what-ifs" on hypotheticals; we can only go by the information we have today.




Leakers keep saying it is a lock that the PS5 will be $500. That makes the 56 compute units more realistic. If the PS5 is $500 and Anaconda is supposed to be better, how much more could it cost? If it was $600 I think it might price itself out of the market, especially since I doubt the benefits that extra $100 could make at that higher bracket.



It takes genuine talent to see greatness in yourself despite your absence of genuine talent.

Pemalite said:

It is as bad as I make it out to be.
The Radeon VII is packaged with far more expensive HBM2 memory... And despite being built at 7nm, it will still consume 40W+ more power during gaming.

Not an ideal scenario for AMD to be in... Which is why they couldn't undercut nVidia's already high-priced 2080 to lure gamers in. In short... It's a bad buy.

In saying that, I have to give credit where credit is due... Radeon VII is an absolute compute monster.

I'm not sure if memory bandwidth is all that much of an issue with the Radeon VII? Maybe it has an overabundance of memory bandwidth? Honestly, all that memory bandwidth is probably more useful for machine learning frameworks like TensorFlow or PyTorch ... 

Pemalite said:

Things are always changing. The Xbox One has DirectX 12 and some developers use its features... But any serious developer will target the low-level APIs anyway.

The X1 has a pretty customized version of DirectX altogether, so it's definitely a no-go on PC, but all those serious developers don't want to lose Windows 7 compatibility either, so they delay making changes to their code base, and as a result AMD graphics hardware suffers for it ... 

Things need to change faster ... 

Pemalite said:

The Switch isn't as fixed as we think it is... Considering its plethora of performance states... But I digress. Maxwell is pretty easy to target anyway.

nVidia has most engines onboard... And this has been a long historical trend, hence their "nVidia, the way it's meant to be played" campaign: CryEngine, Unreal Engine, Unity... The list goes on.
They work closely with a lot of industry bodies, more so than AMD has historically done... Which has both its pros and cons.

It does mean that AMD is less likely to engage in building up technologies which are exclusive to their hardware.

It has different performance states, but it's still pretty much fixed hardware, just like how "boost clocks" on PC don't change the underlying hardware components. Rest assured, on a console like the Switch developers can afford to keep Maxwell-specific shader optimizations ... 

Nvidia may be able to spend more on optimizations, but they can't afford to control every engine developer out there, so more often than not engine developers will prioritize consoles and, to a lesser extent, AMD hardware as well, like we see with Snowdrop, Frostbite (despite RTX integration), RE Engine, Dunia, and ForzaTech ... (Nvidia literally cannot control every aspect of engine design because there are at least thousands of employees in the industry who contribute here) 

Pemalite said:

AMD does the same, hence why they cut off TeraScale support in their drivers a couple of years after they were still releasing TeraScale-based APUs.
There are obviously pros and cons to each company's approach.

I think you took that out of context. Nvidia's driver stack is nowhere near as extensive as AMD's ... (no way in hell does AMD spend the amount of time Nvidia does on the upkeep)

An Nvidia 'driver' could be like discovering a new operating system altogether in comparison to AMD's drivers. Nvidia has a nauseating amount of hacks in their drivers to maintain just to keep their performance wins ... 

I don't think you understand just how much of a maintenance burden Nvidia's approach is in comparison to AMD's ... (keeping the entire evolving software stack up to date with new exotic hardware releases EVERY 15 MONTHS is utter insanity)

Pemalite said:

The recycling results in stagnation... It's as simple as that. AMD has stagnated for years, and nVidia stagnated when they were recycling hardware.
The other issue is... It's not a good thing for the consumer. When you buy a new series of GPUs, you are hoping for something new, not something old with a different sticker... It's far from a good thing.

It's really not, since AMD has to meet its end of the obligations with other partners, and I don't believe Nvidia has ever stagnated a whole lot, even with releases of less-than-desirable architectures ... 

It's for consumers to decide what they want to expect, whether that be totally new or "old with a different sticker" (a massive amount of hyperbole), so it's not our business to demand what they truly desire ... 

Pemalite said:

Agreed. There is still room for things to become disrupted in the console space though, if IBM, Intel, or nVidia etc. offer a compelling solution to Sony or Microsoft, but the chances of that are pretty slim to nonexistent anyway.
No one is able to offer such high-performing graphics with a capable CPU other than nVidia... And nVidia is expensive, meaning not ideal for a cost-sensitive platform.

I doubt Nvidia can meet the needs of either Sony or Microsoft, and Nvidia is the second-worst if not the worst CPU designer out of all the major corporations ... (their ARM cores, just like their GPUs in the mobile space, guzzle a lot of power, but what's more, they are buggy as well)

Pemalite said:

nVidia does have some options. They don't need x86 or POWER to remain relevant; ARM is making inroads into the cloud computing/server space, albeit slowly.
I mean, ARM was such a serious threat that AMD even invested in it.
https://www.amd.com/en/amd-opteron-a1100

nVidia is also seeing substantial growth in the datacenter environment, with an 85% increase in revenue.
https://www.anandtech.com/show/13235/nvidia-announces-q2-fy-2019-results-record-revenue

So I wouldn't discount them just yet... They have some substantial pull.

Most of Nvidia's YoY revenue growth comes down to raising their prices ... 

They are not growing their customer base nearly as much as they used to. Without x86 or POWER, Nvidia has no future ... (they should've settled with Intel for x86 patents instead of some cash, because now they're locked out of a race where only AMD and Intel may compete) 

ARM is only ever truly competitive in the mobile space, and we all know how that turned out ... 

Pemalite said:

Indeed. Although parts like the MX110/MX150 got a TON of design wins in notebooks, which were devices that went up against AMD's Ryzen APUs and often had the advantage in terms of graphics performance.

Mobile is a very fickle space... You have Qualcomm, and that is it... Apple, Huawei, and Samsung all build their own SoCs, so there is very little market for nVidia to latch onto... I guess AMD made the right decision years ago to spin off Adreno to Qualcomm.

And even Chinese manufacturers like Xiaomi are entering the SoC game for their budget handsets... Meaning the position of the likes of MediaTek and so on probably looks tenuous over the long term.

However, Tegra isn't done and dusted yet; nVidia is seeing growth in vehicles, IoT and so on.

It was probably for the best that AMD sold off Adreno, because Nvidia figured out the hard way that just having good graphics technology is not enough to succeed. Having to design ARM cores AND modems was more than what AMD or Nvidia were willing to chew on ... (Qualcomm instills a massive amount of fear in even giants like Intel)

Tegra is done for, man. It was initially branded for mobile, but plans changed since Nvidia couldn't design LTE modems for crap like Qualcomm did, so Nvidia rebranded it for automotive. But if a struggling auto manufacturer like Tesla is able to competently source its own chip designs, then I have no doubt in my mind that Nvidia can easily be dumped in this sector as well ... 

IoT? Nvidia's solutions are even less compelling ever since they killed off Icera and stopped developing wireless technology like 5G ... 

Pemalite said:

Apple not only has impressive graphics technology... But equally impressive energy efficiency.
Even their CPU cores tend to be extremely efficient... But also have substantial performance ceilings; it's actually impressive what they achieve.

In saying that... They do own everything from top to bottom, so they are able to garner some efficiency advantages that Android just cannot match.

Except for wireless technology like modems (Qualcomm), image sensors/cameras (Sony), and flash memory and DRAM (Samsung) ... (a lot of corporations would kill to be in Apple's enviable position, in which they have strong control over their own ecosystem)

Apple is what we could call a semi-vertically integrated business, since they don't own all of the supply chain ... 

Pemalite said:

Intel's graphics have historically been shit as well.
Even when things played out in Intel's favour and they had optimized their graphics for games like Half-Life... They still trailed the likes of ATI/AMD/nVidia.

Even back in the late 90's/early 2000's I would have opted for an S3/Matrox part over an Intel solution... And that says something... And they were arguably more competitive back then!

But drivers are probably Intel's largest Achilles' heel. They are investing more on that front... And they absolutely must if they wish to be a force in the PC gaming market.

I'd be okay with Intel graphics for low end gaming if their drivers weren't so bad ...

Pemalite said:

Haswell was a big step up, but still pretty uninspiring... Haswell's Iris Pro did manage to double the performance of AMD's Trinity mobile APUs in some instances... But you would hope so with a chunky amount of eDRAM and without the TDP restrictions.

A large portion of Haswell's advantage in the integrated graphics space back then was also partly attributable to Intel's vastly superior CPU capability... Which is partly why the 5800K was starting to catch the Haswell Iris Pro, thanks to a dramatic uplift in CPU performance.

However, AMD then pretty much left Intel's decelerator graphics in the dust going forward... Not to mention better 99th percentiles, frame pacing, and game compatibility with AMD's solutions.

I would take Vega 10/Vega 11 integrated graphics over any of Intel's efforts currently.

Haswell was a really good improvement IMO, since it was a first for Intel to come with its own unique advantages in comparison to either AMD or Nvidia, and they managed to standardize some of that stuff in DX12 as well! (particularly their work on ROVs and conservative rasterization) 

Skylake went to a whole new level and Gen 11/Xe will only take that further ... 

Pemalite said:

They are working on it!
https://www.anandtech.com/show/14117/intel-releases-new-graphics-control-panel-the-intel-graphics-command-center

They have years worth of catching up to do, but they are making inroads... If anyone can do it though, Intel probably can.

Not exactly what I'm looking for. Intel needs a catchy graphics-optimized platform 'slogan' like 'GE' or 'TWIMTBP', and I want Intel-specific graphics solutions like Nvidia's GameWorks libraries, such as HairWorks, etc ... 

Pemalite said:

Depends on the 470... The 1060 is superior in every meaningful metric, with an advantage of upwards of 50%.
https://www.anandtech.com/bench/product/1872?vs=1771

Anandtech does need to update its benchmark suite... But even with dated titles like Grand Theft Auto V... That game is still played heavily by millions of gamers, so I suppose it's important to retain for a while longer yet... Plus it's still a fairly demanding title at 4K, all things considered.

At the end of the day, a GeForce 1060 is a superior choice for gaming over a Radeon RX 470 or 570, unquestionably.

GTA V is an awful benchmark since it doesn't match up to the capabilities of modern hardware or modern games ... 

Pemalite said:

By then, the RX 580 and Xbox One X will be irrelevant anyway, with next-gen GPUs and consoles in our hands.

No point playing "what-ifs" on hypotheticals; we can only go by the information we have today.

Touché!



fatslob-:O said:

I'm not sure if memory bandwidth is all that much of an issue with the Radeon VII? Maybe it has an overabundance of memory bandwidth? Honestly, all that memory bandwidth is probably more useful for machine learning frameworks like TensorFlow or PyTorch ... 

Bandwidth is insanely important... Especially for Graphics Core Next, and especially at higher resolutions.
Graphics Core Next, being a highly compute-oriented architecture, generally cannot get enough bandwidth.

In saying that... There is a point of diminishing returns... Despite the fact that the Radeon VII increased bandwidth by 112% and compute by 9%... Performance only jumped by a modest 30-40% depending on the game... So the "sweet spot" in terms of bandwidth is likely between Vega 64 and the Radeon VII. Maybe 768GB/s?

The Radeon VII's inherent architectural limitations tend not to stem from compute or bandwidth though... So when you overclock the RAM by an additional 20% (1.2TB/s!) you might only get a couple of percentage points of performance... But bolstering the core clock will net an almost linear increase, so it's not bandwidth-starved by any measure.
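Those scaling numbers can be eyeballed with a naive roofline model. The peak figures below are rough public specs and the arithmetic-intensity values are made up for illustration; the point is just that once a workload is compute-bound, 112% more bandwidth buys almost nothing and the ~9% compute bump sets the ceiling.

```python
def attainable_tflops(peak_tflops, bandwidth_gbs, flops_per_byte):
    """Naive roofline: throughput is capped by whichever of peak compute
    or (bandwidth x arithmetic intensity) runs out first."""
    return min(peak_tflops, bandwidth_gbs * flops_per_byte / 1000.0)

# Rough public specs: Vega 64 ~12.7 TFLOPS / 484 GB/s,
# Radeon VII ~13.8 TFLOPS / 1024 GB/s.
vega64 = dict(peak_tflops=12.7, bandwidth_gbs=484)
radeon7 = dict(peak_tflops=13.8, bandwidth_gbs=1024)

for ai in (8, 16, 32):  # FLOPs executed per byte moved (illustrative values)
    a = attainable_tflops(flops_per_byte=ai, **vega64)
    b = attainable_tflops(flops_per_byte=ai, **radeon7)
    print(f"AI={ai:>2}: Vega 64 {a:5.1f} TF | Radeon VII {b:5.1f} TF (+{100 * (b / a - 1):.0f}%)")
```

At low intensity the gap mirrors the bandwidth gap (+112%); at high intensity it collapses to the +9% compute gap, which is roughly the diminishing-returns pattern described above.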

fatslob-:O said:

The X1 has a pretty customized version of DirectX altogether, so it's definitely a no-go on PC, but all those serious developers don't want to lose Windows 7 compatibility either, so they delay making changes to their code base, and as a result AMD graphics hardware suffers for it ... 

Things need to change faster ... 

Not to mention rolling out a version of DirectX 12 for Windows 7.

fatslob-:O said:

It has different performance states, but it's still pretty much fixed hardware, just like how "boost clocks" on PC don't change the underlying hardware components. Rest assured, on a console like the Switch developers can afford to keep Maxwell-specific shader optimizations ... 

Nvidia may be able to spend more on optimizations, but they can't afford to control every engine developer out there, so more often than not engine developers will prioritize consoles and, to a lesser extent, AMD hardware as well, like we see with Snowdrop, Frostbite (despite RTX integration), RE Engine, Dunia, and ForzaTech ... (Nvidia literally cannot control every aspect of engine design because there are at least thousands of employees in the industry who contribute here) 

EA has proven to be pretty flexible though. They worked with AMD to introduce Mantle... Which was a white elephant... AMD eventually gave up on it... And then Khronos used it as the basis for Vulkan, for better or worse.

In short though, without a doubt nVidia does get more support in engines on the PC side of the equation than AMD... Despite the fact that AMD has had its hardware in the majority of consoles over the last few generations (Wii, Wii U, Xbox 360, Xbox One, PlayStation 4).

Part of that is nVidia's collaboration with developers... Which has been a thing for decades.

ATI did start meeting nVidia head-on back in the R300 days though... Hence the battle lines between Doom 3 and Half-Life 2, but nothing of that level of competitiveness has been seen since.

fatslob-:O said:

I think you took that out of context. Nvidia's driver stack is nowhere near as extensive as AMD's ... (no way in hell does AMD spend the amount of time Nvidia does on the upkeep)

An Nvidia 'driver' could be like discovering a new operating system altogether in comparison to AMD's drivers. Nvidia has a nauseating amount of hacks in their drivers to maintain just to keep their performance wins ... 

nVidia can also afford to spend more time and effort on upkeep.

Both AMD's and nVidia's drivers are more complex than some older Windows/Linux kernels.

fatslob-:O said:
I don't think you understand just how much of a maintenance burden Nvidia's approach is in comparison to AMD's ... (keeping the entire evolving software stack up to date with new exotic hardware releases EVERY 15 MONTHS is utter insanity)

Actually, I do! But it's not as extensive as you portray it to be.
I.E. Pascal and Maxwell share a significant amount of similarities from top to bottom... Kepler and Fermi could be grouped together also. Turing is a significant deviation from prior architectures, but shares a few similarities with Volta.

Even then, AMD isn't as clean-cut either... They have GCN 1.0, 2.0, 3.0, 4.0, 5.0 and soon 6.0.

fatslob-:O said:

It's really not, since AMD has to meet its end of the obligations with other partners, and I don't believe Nvidia has ever stagnated a whole lot, even with releases of less-than-desirable architectures ... 

It's for consumers to decide what they want to expect, whether that be totally new or "old with a different sticker" (a massive amount of hyperbole), so it's not our business to demand what they truly desire ... 

Back before this re-badging... Performance used to increase at a frantically rapid rate even on the same node.

fatslob-:O said:
I doubt Nvidia can meet the needs of either Sony or Microsoft, and Nvidia is the second-worst if not the worst CPU designer out of all the major corporations ... (their ARM cores, just like their GPUs in the mobile space, guzzle a lot of power, but what's more, they are buggy as well)

nVidia is an ARM licensee. They can use ARM's designs instead of Denver... From there they really aren't going to be that different from any other ARM manufacturer that uses vanilla ARM cores.

For mobile your point about power is relevant, but for a fixed console... Not so much. You have orders of magnitude more TDP to play with.
An 8-core ARM SoC with a GeForce 1060 would give an Xbox One X with its 8-core Jaguars a run for its money.

fatslob-:O said:

Most of Nvidia's YoY revenue growth comes down to raising their prices ... 

They are not growing their customer base nearly as much as they used to. Without x86 or POWER, Nvidia has no future ... (they should've settled with Intel for x86 patents instead of some cash, because now they're locked out of a race where only AMD and Intel may compete) 

ARM is only ever truly competitive in the mobile space, and we all know how that turned out ... 

Your claim doesn't hold water. nVidia increased margins by only 4.9%, but revenues still shot up far more.

nVidia is diversifying, as you alluded to... Their console and PC gaming customer base isn't really growing, hence the other segments are where they are seeing the bulk of their gains.
nVidia certainly does have a future; they aren't going anywhere soon... They have billions in their war chest.

fatslob-:O said:

It was probably for the best that AMD sold off Adreno, because Nvidia figured out the hard way that just having good graphics technology is not enough to succeed. Having to design ARM cores AND modems was more than what AMD or Nvidia were willing to chew on ... (Qualcomm instills a massive amount of fear in even giants like Intel)

Tegra is done for, man. It was initially branded for mobile, but plans changed since Nvidia couldn't design LTE modems for crap like Qualcomm did, so Nvidia rebranded it for automotive. But if a struggling auto manufacturer like Tesla is able to competently source its own chip designs, then I have no doubt in my mind that Nvidia can be easily dumped in this sector as well ... 

IoT? Nvidia's solutions are even less compelling ever since they killed off Icera and stopped developing wireless technology like 5G ... 

nVidia did figure it out the hard way. But the lessons weren't lost.
A lot of the effort that went into making Tegra more efficient... Paid off for Maxwell and Pascal... And we know how superior those parts are compared to their AMD equivalents at every single turn.

Plus they are seeing massive gains in the automotive industry.

fatslob-:O said:

Except for wireless technology like modems (Qualcomm), image sensors/cameras (Sony), flash memory and DRAM (Samsung) ... (a lot of corporations would kill to be in Apple's envious position in which they have strong control over their own ecosystem)

Apple is what we could say to be a semi-vertically integrated business since they don't own all of the supply chain ... 

Indeed.

fatslob-:O said:
I'd be okay with Intel graphics for low end gaming if their drivers weren't so bad ...

I don't think even good drivers could actually solve the issues some of their IGPs have had... Especially parts like the X3000/X3100 of old.

fatslob-:O said:

Haswell was a really good improvement IMO, since it was a first for Intel to come with its own unique advantages in comparison to either AMD or Nvidia, and they managed to standardize some of that stuff in DX12 as well! (particularly their work on ROVs and conservative rasterization) 

Skylake went to a whole new level and Gen 11/Xe will only take that further ... 

Xe has me excited. Legit. But I am remaining cautiously optimistic... Because all of their other claims to fame in regards to graphics and gaming have resulted in products that were stupidly underwhelming or ended up cancelled.

But like I said... If any company has the potential, it's certainly Intel.

fatslob-:O said:
Not exactly what I'm looking for. Intel needs a catchy graphics-optimized platform 'slogan' like 'GE' or 'TIMTBP', and I want Intel-specific graphics solutions like Nvidia's GameWorks library, such as HairWorks, etc ... 

Still early days yet.

fatslob-:O said:
GTA V is an awful benchmark since it doesn't match up to the capabilities of modern hardware or modern games ... 

Well... It was a game built for 7th-gen hardware first and foremost.
However... Considering it's one of the best-selling games in history... Is played by millions of gamers around the world... And is actually still pretty demanding even at 4K, it's a relevant game to add to any benchmark in my opinion.

It's one data point though; you do need others in a benchmark "suite" so you can get a comprehensive idea of how a part performs in newer and older titles, better or worse.
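As a side note on how suites like that are usually summarized: review outlets typically report a geometric mean across titles rather than an arithmetic average, so one outlier game can't dominate the overall score. A minimal sketch, with made-up fps numbers purely for illustration:

```python
from math import prod

# Hypothetical average-fps results for one GPU across a small suite.
# (These numbers are invented for illustration, not real benchmarks.)
results = {"GTA V": 92.0, "Newer Title A": 61.0, "Newer Title B": 74.0}

# Geometric mean: the nth root of the product of n values. It is less
# sensitive to a single outlier title than an arithmetic mean, which is
# why benchmark suites commonly use it as the overall score.
geomean = prod(results.values()) ** (1 / len(results))
print(round(geomean, 1))
```

With the numbers above, the geometric mean lands a bit below the arithmetic average, because the weakest title pulls it down proportionally rather than linearly.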



eva01beserk said:
Leakers keep saying it's a lock that the PS5 will be $500. That makes the 56 compute units more realistic. If the PS5 is $500 and Anaconda is supposed to be better, how much more could it be? If it was $600, I think it might price itself out of the market. Especially since I doubt the benefits $100 could make in that higher bracket.

Which, if Gonzalo is the PS5's chip, means 12.9 TFLOPs, according to DF. I highly doubt MS is going to top that, as I thought the rumor for Anaconda was 12 TFLOPs. Even if they do, I doubt it will be by even 1 whole TFLOP. Xbox is going to have a tough time if it can't tout being vastly more powerful.
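For anyone curious where a figure like 12.9 TFLOPs could come from, here's a rough back-of-the-envelope sketch. It assumes the usual GCN/RDNA layout of 64 shader lanes per compute unit and 2 FLOPs per lane per clock (fused multiply-add); the ~1.8 GHz clock is a back-solved assumption on my part, not a confirmed spec:

```python
# Rough FP32 throughput estimate for a GCN/RDNA-style GPU.
# Assumes 64 stream processors per compute unit and 2 FLOPs per
# clock per lane (one fused multiply-add), standard for AMD parts.
def tflops(compute_units: int, clock_ghz: float,
           shaders_per_cu: int = 64, flops_per_clock: int = 2) -> float:
    return compute_units * shaders_per_cu * flops_per_clock * clock_ghz / 1000

# The rumored 56-CU part at a hypothetical ~1.8 GHz:
print(round(tflops(56, 1.8), 1))  # -> 12.9
```

As a sanity check, plugging in the Xbox One X's known 40 CUs at 1.172 GHz gives the familiar ~6.0 TFLOPs figure, so the formula lines up with shipping hardware.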




@thismeintiel
What does more power really mean at TFLOPs higher than 12.9? Not more pixels, since I doubt games will push more than 4K. I don't think framerate either, as more than 60fps is useless on consoles. They might focus on ray tracing, but even now, show most people a comparison and they can't tell them apart.

There was a confirmation from Sony that they are pushing to eliminate loading screens, and the key is SSDs.

I think power won't mean anything next gen.



It takes genuine talent to see greatness in yourself despite your absence of genuine talent.

eva01beserk said:
@thismeintiel
What does more power really mean at TFLOPs higher than 12.9? Not more pixels, since I doubt games will push more than 4K. I don't think framerate either, as more than 60fps is useless on consoles. They might focus on ray tracing, but even now, show most people a comparison and they can't tell them apart.

There was a confirmation from Sony that they are pushing to eliminate loading screens, and the key is SSDs.

I think power won't mean anything next gen.

Well, I'm sure whoever gets the crown will still try to push the "most powerful console ever made" point. Of course, if the difference is only ~10% and they are the same price, it won't make much of a difference. PS5 will still have the advantage there.

It will be interesting to see what MS uses to push the XB2. If they don't copy Sony's SSD solution for next gen, they are going to be at a huge disadvantage. And if they do, will they even be able to stick it in the cheaper SKU?



@thismeintiel
I don't remember Sony pushing the "most powerful" angle when they had the advantage. I could be wrong, but I believe only MS did, and it was after most people came to accept that MS had problems with exclusive quantity and quality. I don't think that would fly in the 9th gen anymore, so it might be okay to say it a few times, but using it like now, where it's in basically every ad, would be stupid.



It takes genuine talent to see greatness in yourself despite your absence of genuine talent.

eva01beserk said:
@thismeintiel
I don't remember Sony pushing the "most powerful" angle when they had the advantage. I could be wrong, but I believe only MS did, and it was after most people came to accept that MS had problems with exclusive quantity and quality.

PS4: https://www.gamespot.com/articles/ps4-the-world-s-most-powerful-console-according-to/1100-6427532/

PS4 Pro: https://wccftech.com/playstation-4-pro-most-powerful-console/



thismeintiel said:
eva01beserk said:
Leakers keep saying it's a lock that the PS5 will be $500. That makes the 56 compute units more realistic. If the PS5 is $500 and Anaconda is supposed to be better, how much more could it be? If it was $600, I think it might price itself out of the market. Especially since I doubt the benefits $100 could make in that higher bracket.

Which, if Gonzalo is the PS5's chip, means 12.9 TFLOPs, according to DF. I highly doubt MS is going to top that, as I thought the rumor for Anaconda was 12 TFLOPs. Even if they do, I doubt it will be by even 1 whole TFLOP. Xbox is going to have a tough time if it can't tout being vastly more powerful.

eva01beserk said:
@thismeintiel
I don't remember Sony pushing the "most powerful" angle when they had the advantage. I could be wrong, but I believe only MS did, and it was after most people came to accept that MS had problems with exclusive quantity and quality. I don't think that would fly in the 9th gen anymore, so it might be okay to say it a few times, but using it like now, where it's in basically every ad, would be stupid.

I don't think PS really used it for the base PS4, but at that time PS themselves weren't really pushing the power narrative. Yet PS used it for the Pro, after XB had already started using it for Project Scorpio, and they will use it again if Anaconda has weaker performance on paper. After the marketing push MS had behind the XB1X using this slogan, PS will surely want to throw it in the face of MS, since that's one of the things their customers have been using to justify remaining on the XB platform.

It's hard to believe MS will go above $499, but if the two-SKU approach is true, they could very well get away with it, especially if the PS5 is in fact $499 at launch. If you have a $499 PS5, then MS can launch Lockhart anywhere from $299-$399 and Anaconda at $599. If Lockhart were 6TF, give or take, at $399, it would fall right into place where the XB1X would have dropped to next, which is the "sweet spot". That should lead to much better sales for Lockhart in comparison to the XB1, which means it doesn't really matter who buys Anaconda, just as long as enough are sold to make the upper tier worth it going forward. Even if Anaconda didn't sell much, it would still give MS the bragging rights and marketing to be able to say they have the strongest hardware on the market, even if it's only, say, 10% of total next-gen sales.

If this is the case, it will be interesting to see what PS does, and whether or not they drop $50 or so to try and better compete on price with Lockhart, because there won't be anything they can do to compete with a $599 Anaconda aside from large multi-game bundles.

On the other hand, if Lockhart were $299-$349 and Anaconda $500-$600, then it may well be in PS's best interest to keep the price of the PS5 higher, at $499, if possible. This would be done to make customers question Lockhart. If you're buying a next-gen console, and one is $299 while the other two are $499 or higher, you're going to ask yourself which one is out of place and whether it belongs. If PS only has one console at $499, and MS also has one around that price but also a $299 SKU, is the cheap one really going to cut it? Is it going to lead to buying $100 or more in accessories down the line to make up for the initial low price? If the PS5 is expensive enough to manufacture that Sony can't bring the single-SKU price down to compete with Lockhart, then keeping the price up near Anaconda gives them the best chance of consumers passing up Lockhart for the next step up in price, which would be the PS5.

Last edited by EricHiggin - on 12 May 2019

The Canadian National Anthem According To Justin Trudeau

 

Oh planet Earth! The home of native lands, 
True social law, in all of us demand.
With cattle farts, we view sea rise,
Our North sinking slowly.
From far and snide, oh planet Earth, 
Our healthcare is yours free!
Science save our land, harnessing the breeze,
Oh planet Earth, smoke weed and ferment yeast.
Oh planet Earth, ell gee bee queue and tee.