Pemalite said:

In Crysis's case. It's actually pretty awesome, it's just a mod for a game released in 2007. - Are there better approaches? Sure.

But considering how amazing Crysis can look with the Path Tracing via the Depth Buffer and a heap of graphics mods... The game can look jaw-droppingly gorgeous, despite being 12+ years old.

Trust me, you do not want to know the horrors of how hacky the mod is ... 

The mod does not trace according to actual lighting information; it traces according to the brightness of each pixel, so the bounce lighting is already incorrect even in screen space. If you want proper indirect lighting as well, then you need a global scene representation data structure such as an octree, a BVH, or a kd-tree for correct ray traversal. Using a local scene representation data structure such as a depth buffer will cause a lot of issues once the rays "go outside" the data structure ... 
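
To make the "rays go outside the data structure" point concrete, here's a minimal sketch of marching a bounce ray against a depth buffer (plain C++ with made-up names, not the mod's actual code). The ray can only ever hit geometry the camera already sees, and the moment it leaves the screen there is nothing left to intersect:

```cpp
#include <optional>
#include <vector>

struct Vec2 { float x, y; };

struct DepthBuffer {
    int width = 0, height = 0;
    std::vector<float> depth;                       // row-major linear depth values
    float at(int px, int py) const { return depth[py * width + px]; }
};

// March a bounce ray in screen space against the depth buffer.
// Returns the hit pixel, or nothing once the ray leaves the buffer --
// the failure mode a scene-wide BVH/octree/kd-tree would not have.
std::optional<Vec2> marchScreenSpace(const DepthBuffer& buf,
                                     Vec2 pixel, float rayDepth,
                                     Vec2 stepPx, float stepDepth,
                                     int maxSteps = 64)
{
    for (int i = 0; i < maxSteps; ++i) {
        pixel.x += stepPx.x;
        pixel.y += stepPx.y;
        rayDepth += stepDepth;

        // Ray exited the screen: the depth buffer simply has no data here,
        // so the bounce must be faked (fallback cubemap, constant ambient, ...).
        if (pixel.x < 0 || pixel.y < 0 ||
            pixel.x >= buf.width || pixel.y >= buf.height)
            return std::nullopt;

        // Ray went behind the surface visible at this pixel: treat it as a hit,
        // even though the true hit point may be something the camera never saw.
        if (rayDepth >= buf.at((int)pixel.x, (int)pixel.y))
            return pixel;
    }
    return std::nullopt;  // no intersection found within the step budget
}
```

A global structure like a BVH or octree doesn't have that failure mode, since traversal continues through the full scene whether or not it's visible on screen.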

As decent as Crysis looks today, it still hurts that it's not physically based ... 

Pemalite said:

Sure. New features can make a difference.
But AMD has been bolting on new features to Graphics Core Next since the beginning and simply has not been able to keep pace with nVidia's efforts.

But to state that something may change and that AMD's long-term efforts might come to fruition on the next console hardware cycle is being a little disingenuous, developers have had how many years with the current Graphics Core Next hardware? Fact of the matter is, we have no idea if the state of things is going to be changing at all in AMD's favor or if the status quo will continue.

AMD not being able to keep pace with Nvidia is mostly down to the latter releasing bigger dies. A Radeon VII is nearly like for like with the RTX 2080 in performance given both of their transistor counts ... (it's honestly not as bad as you make it out to be) 

Things have been changing, but PCs lag consoles by a generation in terms of graphics programming. DX11 wasn't the standard until the PS4/X1 released, and it's likely DX12 will end up the same. The situation is fine as it is, but things can improve if a couple of engines make the jump, like the Dunia Engine 2.0, AnvilNEXT 2.0, and especially Bethesda's Creation Engine ... (it would help even more if reviewers didn't use outdated titles like GTA V or Crysis 3 in their benchmark suites) 

Some more extensions in DX12 would help too, like OoO raster and the rectangle primitive ... 

Pemalite said:

It's actually extremely easy to develop for nVidia hardware though... I mean, the Switch is also a testament to that very fact, outside of the lack of pixel pushing power Tegra has, developers have been praising the Maxwell derived hardware since the very beginning.

Obviously there are some Pro's and Con's to whichever path AMD and nVidia take, nVidia does tend to work with Developers, Publishers, Game Engines far more extensively than what AMD has historically done... Mostly that is due to a lack of resources on AMD's behalf.

By targeting a standardized API like DX11? Sure. Targeting the low-level details of their hardware? Not so much, because Nvidia rarely values compatibility, so those optimizations can easily break. The Switch is an exception since it's a fixed hardware design, so developers can be bothered to invest somewhat ... (Switch software isn't nearly as investment-heavy as current home console software, so developers might not care all that much if its successor isn't backwards compatible)

Nvidia dedicates far more resources to maintaining their entire software stack than to working with developers. Every time they release a new architecture they need to build a totally different shader compiler, and they waste a lot of other engineering resources on non-gaming things such as CUDA and arguably OpenGL ... 

Pemalite said:

The Pros and Cons of AMD and nVidia is something I have been weighing for decades, often AMD's Pros outweigh its Cons for my own PC builds for various reasons. (Compute, Price and features like Eyefinity and so on.)

Don't take me for someone who only favors nVidia hardware, that would be extremely far from the truth.

I am just at that point where AMD has been recycling the same architecture for an extremely long time... And has been trailing nVidia for a long while, that I just don't have any faith in AMD's hardware efforts until their next-gen hardware comes along, aka. Not Navi.

One thing is for sure... AMD's design wins in the console space are a good thing for the company, they're certainly the counterbalance to nVidia in the video game development community as nVidia dominates the PC landscape... And they've also helped AMD's bottom line significantly over the years to keep them in the game and viable as a company. Competition is a good thing.

This 'recycling' has its advantages, as seen with x86. Hardware designers get to focus on what's really important, which is hardware features, and software developers get to keep compatibility ...

If AMD can't dominate PC gaming performance, then they just need to exceed it with higher console performance. Hopefully we can see high-end console SKUs at $700, or maybe even up to $1000, to truly take on Nvidia in the gaming space ... 

Pemalite said:

The thing is... Console and PC landscapes aren't that different from a gamer's point of view anymore, there is significant overlap there, consoles are becoming more PC-like.
You can bet that nVidia is keeping a close eye on AMD as AMD takes design wins in the console and cloud spaces... nVidia has been very focused on the cloud for a very very long time, hence Titan/Tesla... And have seen substantial growth in that sector.

The other issue is that mobile is one of the largest sectors in gaming... Where AMD is non-existent and nVidia has a couple of wet toes, having leveraged its lessons learned in the mobile space and implemented those ideas into Maxwell/Pascal for great strides in efficiency.

Sure... You have Adreno which is based upon AMD's older efforts, but it's certainly not equivalent to Graphics Core Next in features or capability, plus AMD doesn't own that design anymore anyway.

Both consoles and PCs are taking notes from each other. Consoles are getting more features from PCs, like backwards compatibility, while PCs are becoming more closed as platforms than ever before (we don't get to choose our OS or CPU ISA anymore) ...

Nvidia may very well have been focused on cloud computing, but the future there won't be GPU compute or closed APIs like CUDA anymore. The future of the cloud is being able to offload from x86 or to design specialized AI ASICs, so Nvidia's position is relatively fickle if they can't maintain long-term developer partnerships, and they're also at the mercy of other CPU ISAs like x86 or POWER ... 

Nvidia is just as non-existent as AMD is in the mobile space. In fact, graphics technology is not all that important there given that driver quality on Android makes Intel look amazing by comparison! When was the last time Nvidia had a 'design win' in the 'mobile' (read: phones) space, the Tegra 4i? 

Honestly, if anyone has good graphics technology in the mobile space, it's Apple. Their GPU designs are amazing, and it doesn't hurt that the Metal API is a much simpler alternative to either Vulkan or OpenGL ES while being nearly as powerful as the other modern gfx APIs (DX12/Vulkan), so developers will happily port their games over to Metal. Connectivity is more important, though, as the latest settlement between Apple and Qualcomm showed us. Despite Apple being a superior graphics system architect compared to the Adreno team owned by Qualcomm, the former capitulated to the latter since they couldn't design state-of-the-art mobile 5G modems. 5G is more important than superior graphics performance in the mobile space ... 

The likes of Huawei, Qualcomm, and Samsung are destined to reap the vast majority of the rewards in the mobile space since they have independent 5G technology. The likes of Intel (they couldn't make 5G modems) and Nvidia (their GPUs are too power hungry) have already deserted the mobile space, and others like Apple will have to settle for scraps (even if the most profitable ones) and sit this one out until they can figure out how to make their own 5G modems ... 

Pemalite said:

Intel historically hasn't reserved the same amount of die-space as AMD was willing to go in regards to its Integrated Graphics... There are probably some good reasons for that, AMD markets its APUs as being "capable" of gaming, Intel hasn't historically gone to similar lengths in its graphics marketing.

Intel's efforts in graphics have historically been the laughing stock of the industry as well. i740? Yuck. Larrabee? Failure.
Extreme Graphics? Eww. GMA? No thanks. Intel HD/Iris? Pass.

That doesn't mean Intel isn't capable of some good things, their EDRAM approach proved interesting and also benefited the CPU side of the equation in some tasks... But Intel and decent graphics is something I will need to "see to believe" because honestly... Intel has been promising things for decades and simply hasn't delivered. - And that is before I even touch upon the topic of drivers...

I have done a lot of work prior in getting Intel parts like the Intel 940 running games like Oblivion/Fallout due to various lacking hardware features, so Intel's deficiencies aren't lost on me in the graphics space. Heck, even their x3100 had to have a special driver "switch" to move TnL from being hardware accelerated to being performed on the CPU on a per-game basis, as Intel's hardware implementation of TnL performed extremely poorly.

So when it comes to Intel Graphics and gaming... I will believe it when I see it... Plus AMD and nVidia have invested far more man hours and money into their graphics efforts than Intel has over the decades, that's not a small gap to jump across.

Intel's graphics hardware designs aren't the biggest problem IMO. It's that nearly no developers prioritize Intel's graphics stack, so the poor end-user experience is mostly down to poor drivers and poor developer relations ... (sure, their hardware designs are on the underwhelming side, but what kills it for people is that the drivers DON'T WORK)

Older Intel integrated graphics hardware designs sure stunk, but Haswell/Skylake changed this dramatically, and they look to be ahead of either AMD or Nvidia from a feature-set standpoint. Whether that will come in handy in the face of the other aforementioned problems is another matter entirely ... 

More importantly, when are we EVER going to see Intel's equivalent of the brand/library optimization programs like AMD's Gaming Evolved/GPUOpen or Nvidia's TWIMTBP/GameWorks?

Pemalite said:

I am being cautious with Xe. Intel has promised big before and hasn't delivered. But some of the ideas being shouted like "Ray Tracing" has piqued my interest.

I doubt AMD will let that go without an answer though, nVidia is one thing, but Integrated Graphics has been one of AMD's biggest strengths for years, even during the Bulldozer days.

------------------------------------------------------------------------------------------------------------------------------------------------

Yeah. We definitely have different views on how Ray Tracing is supposed to be approached... And that is fine.
I am just looking at the past mistakes nVidia has made with the Geforce FX and to an extent... Turing.

Consoles are going this route regardless, so everybody including AMD and Intel will have it ... 

Pemalite said:

Either way. The Xbox One X is punching around the same level as a 1060, even if the 1060 is a couple frames under 30, the Xbox gets away with lower API and driver overheads.

--------------------------------------------------------------------------------------------------------------------------------------------------

Like what has been established prior... Some games will perform better on AMD hardware than nVidia and vice-versa, that has always been the case. Always.
But... In 2 years time I would certainly prefer a Geforce 1060 6Gb over a Radeon RX 470... The 1060 is in another league entirely with performance almost 50% better in some titles.
https://www.anandtech.com/bench/product/1872?vs=1771

Modern Id Tech powered games love their VRAM, it's been one of the largest Achilles' heels of nVidia's hardware in recent years... Which is ironic because if you go back to the Doom 3 days, it ran best on nVidia hardware.

An X1X demolishes the 1060 in SWBF II and yikes, most of Anandtech's benchmarks are using DX11 titles, especially the dreaded GTA V ... 

An RX 470/570 is nowhere near as bad against the 1060 in DX12 or Vulkan titles ... 

Benchmark suite design is a big factor in performance comparisons ... 

Pemalite said:

Forza 7's performance issues were notorious in its early days before they got patched out. (Which greatly improved the 99th percentile benches.)

https://www.game-debate.com/news/23926/forza-motorsport-7s-stuttering-appears-to-be-fixed-by-windows-10-fall-creators-update

You are right of course that drivers also improved things substantially as well.
https://www.hardocp.com/article/2017/10/16/forza_motorsport_7_video_card_performance_update/3

In short, a Geforce 1060 6GB can do Forza 7 at 4k with a similar experience to that of the Xbox One X.

I don't see any benchmarks specific to a 1060 in those links that suggest a 1060 is actually up to par with the X1X ... 

Pemalite said:

It would have to be a very shit PC port for it to equal or better a Geforce 1070. No doubt about it.

----------------------------------------------------------------------------------------------------------------------------------------------------

No way. A 1070 at the end of the day is going to provide you with a far better experience, especially once you dial up the visual settings.

Would it be a very shit PC port if a 580 somehow matched a 1070? 

Pemalite said:

A 1060 is overrated. But so is the Xbox One X.

The 1060, RX 580, Xbox One X are all in the same rough ballpark on expected capability.
Of course because the Xbox One is a console, it does have the advantage of having developers optimize for the specific hardware and it's software base, but the fact that the Geforce 1060 is still able to turn in competitive results to the Xbox One X is a testament to that specific part.

And if I was in a position again to choose a Radeon RX 580 or a Geforce 1060 6Gb... It would be the RX 580 every day, which is the Xbox One X equivalent for the most part.

Sooner or later, a 580 or an X1X will definitively pull ahead of a 1060 by a noticeably bigger margin than they do now ...