fatslob-:O said:

I'm not sure if memory bandwidth is all that much of an issue with the Radeon VII ? Maybe it has an overabundance of memory bandwidth ? Honestly, all that memory bandwidth is probably more useful for machine learning frameworks like Tensorflow or PyTorch ... 

Bandwidth is insanely important... Especially for Graphics Core Next and especially at higher resolutions.
Being a highly compute-oriented architecture, Graphics Core Next generally cannot get enough bandwidth.

That said... There is a point of diminishing returns... Despite the fact that the Radeon VII increased bandwidth by 112% over Vega 64 while only increasing compute by 9%... Performance only jumped by a modest 30-40% depending on the game... So the "sweet spot" in terms of bandwidth is likely somewhere between Vega 64 and the Radeon VII. Maybe 768GB/s?

The Radeon VII's inherent architectural limitations tend not to stem from compute or bandwidth though... So when you overclock the HBM2 by an additional 20% (1.2TB/s!) you might only gain a couple of percentage points of performance... But bolstering the core clock nets an almost linear increase, so it's not bandwidth starved by any measure.
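
To put rough numbers on that scaling argument, here's a back-of-the-envelope sketch in Python. The spec figures are approximate published numbers, and the performance range is just the 30-40% mentioned above, not a measured average:

```python
# Back-of-the-envelope scaling comparison, Vega 64 vs Radeon VII.
# Spec figures are approximate published numbers, used for illustration only.
vega_64    = {"bandwidth_gb_s": 484,  "fp32_tflops": 12.7}
radeon_vii = {"bandwidth_gb_s": 1024, "fp32_tflops": 13.8}

bw_gain   = radeon_vii["bandwidth_gb_s"] / vega_64["bandwidth_gb_s"] - 1
fp32_gain = radeon_vii["fp32_tflops"] / vega_64["fp32_tflops"] - 1

print(f"Bandwidth: +{bw_gain:.0%}, FP32 compute: +{fp32_gain:.0%}")
print("Observed performance: roughly +30% to +40% depending on the game")
# Performance moved far more than compute but far less than bandwidth,
# which is why a "sweet spot" somewhere around ~768 GB/s seems plausible.
```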

fatslob-:O said:

The X1 has a pretty customized version of DirectX altogether so definitely a no go on PC but all those serious developers don't want to lose Windows 7 compatibility as well so they delay making changes to their code base and as a result AMD graphics hardware suffers for it ... 

Things need to change faster and quicker ... 

Not to mention rolling out a version of DirectX 12 for Windows 7.

fatslob-:O said:

It has different performance states, but it's still pretty much fixed hardware, just like how "boost clocks" on PC don't change the underlying hardware components. Rest assured, on a console like the Switch, developers can afford to keep Maxwell-specific shader optimizations ... 

Nvidia may be able to spend more on optimizations, but they can't afford to control every engine developer out there, so more often than not engine developers will prioritize consoles and, to a lesser extent, AMD hardware as well, like we see with Snowdrop, Frostbite (despite RTX integration), RE Engine, Dunia, and ForzaTech ... (Nvidia literally cannot control every aspect of engine design because there are at least thousands of employees in the industry contributing here) 

EA has proven to be pretty flexible though. They worked with AMD to introduce Mantle... Which turned out to be a white elephant... AMD eventually gave up on it... And then Khronos used it as the basis for Vulkan, for better or worse.

In short though, without a doubt nVidia does get more support in engines on the PC side of the equation than AMD... Despite the fact AMD has had its hardware in the majority of consoles over the last few generations. (Wii, Wii U, Xbox 360, Xbox One, PlayStation 4.)

Part of that is nVidia's collaboration with developers... Which has been a thing for decades.

ATI did start meeting nVidia head-on back in the R300 days though... Hence the battle lines drawn between Doom 3 and Half-Life 2, but nothing at that level of competitiveness has been seen since.

fatslob-:O said:

I think you took that out of context. AMD's driver stack is nowhere near as extensive as Nvidia's ... (no way in hell does AMD spend the amount of time Nvidia does on the upkeep)

An Nvidia 'driver' could be like discovering a new operating system altogether in comparison to AMD drivers. Nvidia has a nauseating amount of hacks in their drivers to maintain just to keep their performance wins ... 

nVidia can also afford to spend more time and effort on upkeep.

Both AMD's and nVidia's drivers are more complex than some older Windows/Linux kernels.

fatslob-:O said:
I don't think you understand just how much of a maintenance burden Nvidia's approach is in comparison to AMD's ... (keeping the entire evolving software stack up to date with new exotic hardware releases EVERY 15 MONTHS is utter insanity)

Actually I do! But it's not as extensive as you portray it to be.
For example, Pascal and Maxwell share a significant amount of similarities from top to bottom... Kepler and Fermi could be grouped together also. Turing is a significant deviation from prior architectures, but it shares a few similarities with Volta.

Even then, AMD isn't all that clean-cut either... They have GCN 1.0, 2.0, 3.0, 4.0, 5.0 and soon 6.0.
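
As a purely hypothetical sketch of why that grouping matters (this is not how either vendor actually organizes their driver code; the family names below are made up for illustration):

```python
# Hypothetical mapping of GPU architectures onto shared driver code paths.
# Family names and groupings are illustrative only, not real driver internals.
ARCH_TO_CODE_PATH = {
    # Nvidia
    "Fermi":   "fermi_kepler_path",
    "Kepler":  "fermi_kepler_path",
    "Maxwell": "maxwell_pascal_path",
    "Pascal":  "maxwell_pascal_path",
    "Volta":   "volta_turing_path",
    "Turing":  "volta_turing_path",
    # AMD GCN revisions
    "GCN 1.0": "gcn_path",
    "GCN 2.0": "gcn_path",
    "GCN 3.0": "gcn_path",
    "GCN 4.0": "gcn_path",
    "GCN 5.0": "gcn_path",
}

# Eleven architectures, but only four distinct code paths to keep up to date.
distinct_paths = len(set(ARCH_TO_CODE_PATH.values()))
print(f"{len(ARCH_TO_CODE_PATH)} architectures -> {distinct_paths} maintained code paths")
```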

fatslob-:O said:

It's really not, since AMD has to meet its end of the obligations with other partners, and I don't believe Nvidia has ever stagnated a whole lot even with releases of less-than-desirable architectures ... 

It's for consumers to decide what they want to expect, whether that be totally new or "old with a different sticker" (a massive amount of hyperbole), so it's not our business to demand what they truly desire ... 

Back before this re-badging... Performance used to increase at a frantic rate even on the same node.

fatslob-:O said:
I doubt Nvidia can meet the needs of either Sony or Microsoft, and Nvidia are the second worst if not the worst CPU designer out of all the major corporations ... (their ARM cores, just like their GPUs in the mobile space, guzzle a lot of power, but what's more is that they are buggy as well)

nVidia is an ARM licensee. They can use ARM's stock core designs instead of Denver... From there they really aren't going to be that different from any other ARM manufacturer that ships vanilla ARM cores.

For mobile your point about power is relevant, but for a fixed console... Not so much. You have well over an order of magnitude more TDP to play with.
An 8-core ARM SoC paired with a GeForce GTX 1060 would give an Xbox One X with its 8-core Jaguars a run for its money.
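
A rough power-budget sketch of that comparison (all wattages below are approximate public figures or ballpark assumptions, and the ARM-plus-GTX-1060 box is purely hypothetical):

```python
# Approximate power budgets in watts -- ballpark figures for illustration only.
phone_soc_sustained_w  = 5     # typical sustained smartphone SoC budget (assumed)
xbox_one_x_system_w    = 175   # rough whole-console draw under gaming load
gtx_1060_board_power_w = 120   # Nvidia's rated board power for the GTX 1060
arm_cpu_cluster_w      = 15    # ballpark guess for an 8-core ARM cluster

hypothetical_box_w = gtx_1060_board_power_w + arm_cpu_cluster_w
print(f"Console vs phone power budget: ~{xbox_one_x_system_w / phone_soc_sustained_w:.0f}x")
print(f"Hypothetical ARM + GTX 1060 box: ~{hypothetical_box_w} W, "
      f"in the same class as the Xbox One X's ~{xbox_one_x_system_w} W")
```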

fatslob-:O said:

Most of Nvidia's YoY revenue growth comes down to raising their prices ... 

They are not growing their customer base nearly as much as they used to. Without x86 or POWER, Nvidia has no future ... (they should've settled with Intel for x86 patents instead of cash, because now they're locked out of a race where only AMD or Intel may compete) 

ARM is only ever truly competitive in the mobile space and we all know how that turned out ... 

Your claim doesn't hold water. nVidia's margins only increased by 4.9%, yet revenue still shot up by far more than that.

nVidia is diversifying... As you alluded to... Their console and PC gaming customer base isn't really growing, so those newer markets are where they're seeing the bulk of their gains.
nVidia certainly does have a future and they aren't going anywhere soon... They have billions in their war chest.
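
To illustrate why price hikes alone don't square with that pattern, here's a toy gross-margin decomposition. None of these figures are nVidia's actual financials; they just show how price-led versus volume-led growth moves margins differently:

```python
# Toy decomposition of revenue growth into price vs volume effects.
# All figures are made up for illustration -- not Nvidia's real financials.
def revenue_and_margin(units, price, unit_cost):
    revenue = units * price
    margin = (revenue - units * unit_cost) / revenue
    return revenue, margin

base_rev, base_m   = revenue_and_margin(units=100, price=400, unit_cost=160)
price_rev, price_m = revenue_and_margin(units=100, price=480, unit_cost=160)  # price-led
vol_rev, vol_m     = revenue_and_margin(units=120, price=400, unit_cost=160)  # volume-led

print(f"Base:       margin {base_m:.1%}")
print(f"Price-led:  revenue +{price_rev / base_rev - 1:.0%}, margin {price_m:.1%}")
print(f"Volume-led: revenue +{vol_rev / base_rev - 1:.0%}, margin {vol_m:.1%}")
# Price-led growth drags gross margin up sharply (60% -> ~67%), while volume-led
# growth leaves it flat. Modest margin movement alongside strong revenue growth
# is what you'd expect when volume or new markets, not just price, drive gains.
```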

fatslob-:O said:

It was probably for the best that AMD sold off Adreno, because Nvidia figured out the hard way that just having good graphics technology is not enough to succeed. Having to design ARM cores AND modems was more than what AMD or Nvidia were willing to chew on ... (Qualcomm instills a massive amount of fear in even giants like Intel)

Tegra is done for, man. It was initially aimed at mobile, but plans changed since Nvidia couldn't design LTE modems for crap like Qualcomm did, so Nvidia rebranded it for automotive. But if a struggling auto manufacturer like Tesla is able to competently source their own chip designs, then I have no doubt in my mind that Nvidia can easily be dumped in this sector as well ... 

IoT? Nvidia's solutions are even less compelling ever since they killed off Icera and stopped developing wireless technology like 5G ... 

nVidia did figure it out the hard way. But the lessons weren't lost.
A lot of the effort that went into making Tegra more efficient... Paid off for Maxwell and Pascal... And we know how far ahead those parts ended up over their AMD equivalents, especially in performance-per-watt.

Plus they are seeing massive gains in the automotive industry.

fatslob-:O said:

Except for wireless technology like modems (Qualcomm), image sensors/cameras (Sony), and flash memory and DRAM (Samsung) ... (a lot of corporations would kill to be in Apple's enviable position, in which they have strong control over their own ecosystem)

Apple is what you could call a semi-vertically integrated business, since they don't own all of the supply chain ... 

Indeed.

fatslob-:O said:
I'd be okay with Intel graphics for low end gaming if their drivers weren't so bad ...

I don't think even good drivers could actually solve the issues some of their IGPs have had... Especially parts like the GMA X3000/X3100 of old.

fatslob-:O said:

Haswell was a really good improvement IMO, since it was a first for Intel to come with its own unique advantages in comparison to either AMD or Nvidia, and they managed to standardize some of that stuff in DX12 as well! (particularly their work on ROVs and conservative rasterization) 

Skylake went to a whole new level and Gen 11/Xe will only take that further ... 

Xe has me excited. Legit. But I'm remaining cautiously optimistic... Because all of Intel's other claims to fame in regards to graphics and gaming... Have always resulted in products that were stupidly underwhelming or ended up cancelled.

But like I said... If any company has the potential, it's certainly Intel.

fatslob-:O said:
Not exactly what I'm looking for. Intel needs a catchy graphics-optimized platform 'slogan' like 'GE' or 'TWIMTBP', and I want Intel-specific graphics solutions like Nvidia's GameWorks library, HairWorks and the like ... 

Still early days yet.

fatslob-:O said:
GTA V is an awful benchmark since it doesn't match up to the capabilities of modern hardware or modern games ... 

Well... It was a game built for 7th-gen hardware first and foremost.
However... Considering it's one of the best-selling games in history... Is played by millions of gamers around the world... And is actually still pretty demanding even at 4K, it's a relevant game to add to any benchmark in my opinion.

It's one data point though; you do need others in a benchmark "suite" so you can get a comprehensive idea of how a part performs across newer and older titles, for better or worse.
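
As a sketch of how such a suite is usually rolled up into one number (the titles and fps figures below are placeholders; the geometric mean is just the common choice so no single outlier dominates):

```python
from statistics import geometric_mean  # Python 3.8+

# Placeholder average-fps results for one GPU across a mixed benchmark suite.
suite_fps = {
    "GTA V (older, still demanding at 4K)": 92,
    "New AAA title A": 61,
    "New AAA title B": 74,
    "Older lightweight title C": 144,
}

# The geometric mean stops one very high (or very low) result from dominating,
# which matters when old, light games sit alongside new, heavy ones.
print(f"Suite score: {geometric_mean(suite_fps.values()):.1f} fps (geometric mean)")
```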



--::{PC Gaming Master Race}::--