fatslob-:O said:
Pemalite said:

In Crysis's case. It's actually pretty awesome, it's just a mod for a game released in 2007. - Are there better approaches? Sure.

But considering how amazing Crysis can look with the Path Tracing via the Depth Buffer and a heap of graphics mods... The game can look jaw-droppingly gorgeous, despite being 12+ years old.

Trust me, you do not want to know the horrors of how hacky the mod is ... 

The mod does not trace according to lighting information; it traces according to the brightness of each pixel, so bounce lighting even in screen space is already incorrect. If you want proper indirect lighting as well, then you need a global scene representation data structure such as an octree, BVH, or kd-tree for correct ray traversal. Using a local scene representation data structure such as a depth buffer will cause a lot of issues once the rays "go outside" the data structure ...

As decent as Crysis looks today, it hurts painfully that it's still not physically based ...

Which is why I stipulated it's "pretty awesome" for a game that is from "2007".
If a game were released today, I would expect a different approach.

It's no less "hacky" than say... ENB anyway.
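To illustrate the "local scene representation" limitation being described above, here's a rough, hypothetical C++ sketch (not the mod's actual code; the names and structures are my own): a screen-space march against the depth buffer simply runs out of data the moment a ray leaves the screen, whereas a global structure like a BVH/octree over the real geometry would keep tracing.

```cpp
// Minimal sketch of screen-space ray marching against a depth buffer, and why
// it breaks down: the depth buffer only describes surfaces visible from the
// camera, so a ray that walks off-screen has nothing left to intersect.
#include <cstdio>
#include <vector>

struct Vec2 { float x, y; };

struct DepthBuffer {
    int width = 0, height = 0;
    std::vector<float> depth;  // linear view-space depth per pixel
    float at(int x, int y) const { return depth[y * width + x]; }
    bool inBounds(int x, int y) const { return x >= 0 && x < width && y >= 0 && y < height; }
};

struct TraceResult {
    bool hit = false;         // ray intersected something we know about
    bool leftScreen = false;  // ray exited the depth buffer: the "scene" simply ends here
    Vec2 hitPixel{0, 0};
};

// March a ray in screen space, comparing the ray's depth against the depth buffer.
TraceResult traceScreenSpace(const DepthBuffer& db, Vec2 originPx, float originDepth,
                             Vec2 dirPx, float depthPerStep, int maxSteps)
{
    TraceResult result;
    Vec2 p = originPx;
    float rayDepth = originDepth;

    for (int i = 0; i < maxSteps; ++i) {
        p.x += dirPx.x;
        p.y += dirPx.y;
        rayDepth += depthPerStep;

        int px = static_cast<int>(p.x), py = static_cast<int>(p.y);
        if (!db.inBounds(px, py)) {
            // The fundamental limitation: once the ray "goes outside" the depth
            // buffer there is nothing to intersect. A global scene representation
            // (BVH / octree / kd-tree over the actual geometry) would keep tracing;
            // here we can only fall back to a guess (sky colour, ambient, etc.).
            result.leftScreen = true;
            return result;
        }

        // "Hit" when the ray dips behind the stored depth. The stored surface only
        // carries what the screen already shows (final pixel brightness), not real
        // material/lighting data, so any "bounce" shading derived from it is an
        // approximation rather than physically based lighting.
        if (rayDepth >= db.at(px, py)) {
            result.hit = true;
            result.hitPixel = {p.x, p.y};
            return result;
        }
    }
    return result;  // ran out of steps: also unknown
}

int main() {
    DepthBuffer db;
    db.width = 8; db.height = 8;
    db.depth.assign(db.width * db.height, 10.0f);  // flat wall at depth 10

    // A ray that walks off the right edge of the screen never finds the wall.
    TraceResult r = traceScreenSpace(db, {4, 4}, 1.0f, {1.0f, 0.0f}, 0.5f, 32);
    std::printf("hit=%d leftScreen=%d\n", r.hit, r.leftScreen);
}
```

That's also why the "bounce" light in the mod ends up derived from whatever brightness is already on screen, rather than from actual lighting data.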

fatslob-:O said:

AMD not being able to keep pace with Nvidia is mostly down to the latter releasing bigger dies. A Radeon VII is nearly like for like to the RTX 2080 in performance given both of their transistor counts ... (it's honestly not as bad as you make it out to be) 

It is as bad as I make it out to be.
The Radeon VII is packaged with far more expensive HBM2 memory... And despite being built on 7nm, it still consumes 40W+ more power during gaming.

Not an ideal scenario for AMD to be in... Which is why they couldn't undercut nVidia's already high-priced 2080 to lure gamers in. In short... It's a bad buy.

In saying that, I have to give credit where credit is due... Radeon VII is an absolute compute monster.

fatslob-:O said:

Things have been changing but PCs lag consoles by a generation in terms of graphics programming. DX11 wasn't the standard until the PS4/X1 released and it's likely DX12 will end up being the same. The situation is fine as it is but things can improve if a couple of engines make the jump like the Dunia Engine 2.0, AnvilNEXT 2.0, and especially Bethesda's Creation Engine ... (it would help even more if reviewers didn't use outdated titles like GTA V or Crysis 3 for their benchmark suite) 

Some more extensions in DX12 would help like OoO raster and rectangle primitive ... 

Things are always changing. The Xbox One has DirectX 12, and some developers use it and its features... But any serious developer will target the low-level APIs anyway.

fatslob-:O said:

By targeting a standardized API like DX11 ? Sure. Targeting low level details of their hardware ? Not so, because Nvidia rarely values compatibility, so optimizations can easily break; the Switch is an exception to this since it's a fixed hardware design, so developers can be bothered to invest some ... (Switch software is not nearly as investment heavy in comparison to current home consoles so developers might not care all that much if its successor isn't backwards compatible)

The Switch isn't as fixed as we think it is... Considering its plethora of performance states... But I digress. Maxwell is pretty easy to target anyway.

nVidia has most engines onboard... And this has been a long historical trend, hence their "nVidia, the way it's meant to be played" campaign, CryEngine, Unreal Engine, Unity... The list goes on.
They work closely with a lot of industry bodies, more so than AMD has historically done... Which has both its pros and cons.

It does mean that AMD is less likely to engage in building up technologies which are exclusive to their hardware.

fatslob-:O said:
Nvidia dedicates far more resources on maintaining their entire software stack rather than focusing on working with developers. When they release a new architecture, they need to make a totally different shader compiler but they waste a lot of other engineering resources as well on non-gaming things such as CUDA and arguably OpenGL ... 

AMD does the same, hence why they cut off Terascale support in their drivers a couple of years after they were still releasing Terascale-based APUs.
There are obviously pros and cons to each company's approach.

fatslob-:O said:

This 'recycling' has its advantages as seen in x86. Hardware designers get to focus on what's really important, which are the hardware features, and software developers get to keep compatibility ...

If AMD can't dominate PC gaming performance then they just need to exceed it with higher console performance so hopefully we can see high-end console SKUs at $700 or maybe even up to $1000 to truly take on Nvidia in the gaming space ... 

The recycling results in stagnation... It's as simple as that. AMD has stagnated for years; nVidia stagnated when they were recycling hardware.
The other issue is... It's not a good thing for the consumer. When you buy a new series of GPUs, you are hoping for something new, not something old with a different sticker... It's far from a good thing.

fatslob-:O said:

Both consoles and PCs are taking notes from each other. Consoles are getting more features from PCs like backwards compatibility while PCs are becoming more closed platforms (we don't get to choose our OS or CPU ISA anymore) than ever before ...

Agreed. There is still room for things to become disrupted in the console space though, if IBM, Intel or nVidia etc. offer a compelling solution to Sony or Microsoft, but the chances of that are pretty slim to non-existent anyway.
No one other than nVidia is able to offer such high-performing graphics with a capable CPU... And nVidia is expensive, meaning not ideal for a cost-sensitive platform.

fatslob-:O said:

Nvidia may very well have been focused on cloud computing but the future won't be GPU compute or closed APIs like CUDA anymore. The future of the cloud is going to be about being able to offload from x86 or design specialized AI ASICs, so Nvidia's future is relatively fickle if they can't maintain long-term developer partnerships, and they're also at the mercy of other CPU ISAs like x86 or POWER ...

nVidia does have some options. They don't need x86 or POWER to remain relevant; ARM is making inroads into the cloud compute/server space, albeit slowly.
I mean, ARM was such a serious threat that AMD even invested in it.
https://www.amd.com/en/amd-opteron-a1100

nVidia is also seeing substantial growth in the datacenter environment, with revenue increases of 85%.
https://www.anandtech.com/show/13235/nvidia-announces-q2-fy-2019-results-record-revenue

So I wouldn't discount them just yet... They have some substantial pull.

fatslob-:O said:

Nvidia is just as non-existent as AMD are in the mobile space. In fact, graphics technology is not all that important given that the driver quality over at Android makes Intel look amazing by comparison! The last time Nvidia had a 'design win' in the 'mobile' (read phones) space was with the Tegra 4i ? 

Indeed. Although parts like the MX110/MX150 got a TON of design wins in notebooks, which were devices that went up against AMD's Ryzen APUs and often had the advantage in terms of graphics performance.

Mobile is a very fickle space... You have Qualcomm. And that is it... Apple, Huawei, Samsung all build their own SoCs, so there is very little market for nVidia to latch onto... I guess AMD made the right decision years ago to sell off Adreno to Qualcomm.

And even Chinese manufacturers like Xiaomi are entering the SoC game for their budget handsets... Meaning the likes of MediaTek and so on probably look tenuous over the long term.

However, Tegra isn't done and dusted just yet; nVidia is seeing growth in vehicles, IoT and so on.

fatslob-:O said:

Honestly, if anyone has good graphics technology in the mobile space then it is Apple because their GPU designs are amazing and it doesn't hurt that the Metal API is a much simpler alternative to either Vulkan or OpenGL ES while also being nearly as powerful as the other (DX12/Vulkan) modern gfx APIs so developers will happily port their games over to Metal. Connectivity is more important like the latest settlement between Apple and Qualcomm showed us. Despite Apple being a superior graphics system architect in comparison to the Adreno team which is owned by Qualcomm, the former capitulated to the latter since they couldn't design state of the art mobile 5G modems. 5G is more important than superior graphics performance in the mobile space ... 

Apple not only has impressive graphics technology... But equally impressive energy efficiency.
Even their CPU cores tend to be extremely efficient... But they also have substantial performance ceilings; it's actually impressive what they achieve.

In saying that... They do own everything from top to bottom, so they are able to garner some efficiency advantages that Android just cannot match.

fatslob-:O said:

Intel graphics hardware designs aren't the biggest problems IMO. It's that nearly no developers prioritize Intel's graphics stack, so the poor end user experience is mostly down to poor drivers and poor developer relations ... (sure, their hardware designs are on the more underwhelming side but what kills it for people is that the drivers DON'T WORK)

Intel's graphics have historically been shit as well.
Even when things played out in Intel's favour and they had optimized their graphics for games like Half-Life... They still trailed the likes of ATI/AMD/nVidia.

Even back in the late '90s/early 2000s I would have opted for an S3/Matrox part over an Intel solution... And that says something... And they were arguably more competitive back then!

But drivers are probably Intel's largest Achilles' heel. They are investing more on that front... And they absolutely must if they wish to be a force in the PC gaming market.

fatslob-:O said:

Older Intel integrated graphics hardware designs sure stunk but Haswell/Skylake changed this dramatically and they look to be ahead from a feature set standpoint compared to either AMD or Nvidia, but whether it'll come in handy in the face of the other aforementioned problems is another matter entirely ...

Haswell was a big step up, but still pretty uninspiring... Haswell's Iris Pro did manage to double the performance of AMD's Trinity mobile APUs in some instances... But you would hope so with a chunky amount of eDRAM and without the TDP restrictions.

A large portion of Haswell's advantage in the integrated graphics space back then was also partly attributable to Intel's vastly superior CPU capability... Which is partly why the 5800K was starting to catch the Haswell Iris Pro thanks to a dramatic uplift in CPU performance.

However, AMD then pretty much left Intel's decelerator graphics in the dust going forward... Not to mention the better 99th percentiles, frame pacing and game compatibility with AMD's solutions.

I would take Vega 10/Vega 11 integrated graphics over any of Intel's efforts currently.

fatslob-:O said:

More importantly, when are we EVER going to see the equivalent brand/library optimization of either AMD's Gaming Evolved/GPUOpen or Nvidia's TWIMTBP/GameWorks from Intel ?

They are working on it!
https://www.anandtech.com/show/14117/intel-releases-new-graphics-control-panel-the-intel-graphics-command-center

They have years' worth of catching up to do, but they are making inroads... If anyone can do it though, Intel probably can.

fatslob-:O said:

An X1X demolishes the 1060 in SWBF II and yikes, most of Anandtech's benchmarks are using DX11 titles, especially the dreaded GTA V ...

An RX 470/570 is nowhere near as bad against the 1060 in DX12 or Vulkan titles ... 

Benchmark suite testing design is a big factor in terms of performance comparisons ... 

Depends on the 470... The 1060 is superior in every meaningful metric with an advantage of upwards of 50%.
https://www.anandtech.com/bench/product/1872?vs=1771

Anandtech does need to update its benchmark suite... But even with dated titles like Grand Theft Auto 5... That game is still played heavily by millions of gamers, so I suppose it's important to retain for a while longer yet... Plus it's still a fairly demanding title at 4K, all things considered.

At the end of the day, a GeForce 1060 is a superior choice for gaming over a Radeon RX 470 or 570, unquestionably.

fatslob-:O said:
I don't see any benchmarks specific to a 1060 in those links that suggests a 1060 is actually up to par with the X1X ... 

There wasn't supposed to be. I was pointing out that Forza 7 had a patch to fix performance issues.

fatslob-:O said:
Would it be a very shit PC port if a 580 somehow matched a 1070 ? 

In short, yes. The 1070 is a step up over an RX 580.

fatslob-:O said:
Sooner or later, a 580 or an X1X will definitively pull ahead of a 1060 by a noticeably bigger margin than they do now ...

By then, the RX 580 and Xbox One X will be irrelevant anyway, with next-gen GPUs and consoles in our hands.

No point playing "what-ifs" on hypotheticals; we can only go by the information we have today.



--::{PC Gaming Master Race}::--