Pemalite said:
vivster said:
That's a dangerous stance when it comes to innovation. You have to start at some point. It's basically the perfect time now that AMD does literally nothing. It paves the way for devs to get familiar with the tech and eggs on AMD to provide similar solutions. And come next generation we can start making actual use of it.
Speaking of wasted silicon, how the fuck does Nvidia not have a feature like ZeroCore yet?
|
I meant that it's wasted silicon because nothing is using it right now... I recognize the chicken and egg scenario happening here.
There are also other ways you can implement Ray Tracing whilst not sacrificing rasterization capability; we will just have to wait and see how Intel and AMD respond. (Which could be years from now.)
In fact, hybrid Ray Tracing + Rasterization has been a "thing" for years now anyway, so it's only natural we continue along that path; it's just that nVidia is willing to "bet" on it by dedicating die area to the specific problem.
|
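For anyone who hasn't seen how that hybrid split fits together, here's a minimal CPU-side sketch: a raster-style pass fills a G-buffer with primary visibility, then a ray-traced pass adds hard shadows on top of it before compositing. Everything here is invented purely for illustration (the ground-plane "raster" stand-in, the single sphere occluder, the tiny ASCII framebuffer); it's meant to show the pipeline shape, not anyone's actual implementation.

// Minimal CPU-side sketch of a hybrid pipeline: primary visibility comes from
// a (stand-in) raster pass that fills a G-buffer, then a ray-traced pass adds
// hard shadows per pixel. All scene data is made up for illustration.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
struct GBufferTexel { Vec3 position; Vec3 normal; Vec3 albedo; };

constexpr int W = 8, H = 4;                       // tiny "framebuffer"
const Vec3 kLightDir = {0.577f, 0.577f, 0.577f};  // directional light (unit-ish)
const Vec3 kSphereCenter = {0.0f, 1.0f, 0.0f};    // occluder for shadow rays
constexpr float kSphereRadius = 0.75f;

// Stand-in for the raster pass: pretend every pixel sees a ground plane at y=0.
GBufferTexel rasterize(int x, int y) {
    Vec3 p = { (x - W / 2) * 0.5f, 0.0f, (y - H / 2) * 0.5f };
    return { p, {0.0f, 1.0f, 0.0f}, {0.8f, 0.8f, 0.8f} };
}

// Ray-traced pass: shadow ray from the G-buffer position toward the light.
// Returns true if the sphere occludes the light.
bool shadowRayHitsSphere(const Vec3& origin) {
    Vec3 oc = { origin.x - kSphereCenter.x,
                origin.y - kSphereCenter.y,
                origin.z - kSphereCenter.z };
    float b = oc.x * kLightDir.x + oc.y * kLightDir.y + oc.z * kLightDir.z;
    float c = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - kSphereRadius * kSphereRadius;
    float disc = b * b - c;
    if (disc < 0.0f) return false;
    float t = -b - std::sqrt(disc);
    return t > 0.0f;  // hit in front of the origin -> light is blocked
}

int main() {
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            GBufferTexel g = rasterize(x, y);                  // raster pass
            bool shadowed = shadowRayHitsSphere(g.position);   // RT pass
            float nDotL = g.normal.x * kLightDir.x +
                          g.normal.y * kLightDir.y +
                          g.normal.z * kLightDir.z;
            float shade = shadowed ? 0.1f : std::fmax(nDotL, 0.0f);
            std::printf("%c", shade > 0.5f ? '#' : '.');       // composite
        }
        std::printf("\n");
    }
}

The point is the structure: rasterization still does the cheap bulk work of deciding what's visible, and rays are only spent on the effects rasterization handles poorly, which is roughly the split that dedicated RT hardware is there to accelerate.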
Considering that RT is the single most important graphics advancement since going 3D, I'm willing to sacrifice a bit of silicon for it. It seems like the perfect thing to make dedicated hardware for, seeing how it's so simple and so easy to parallelize.
It'd be cool to have whole dedicated GPUs just for RT, but that's probably too inconvenient to ever reach the mass market. Just imagine all the Gigarays on a chip as big as Turing filled exclusively with RT cores.
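On the "easy to parallelize" point: the reason is that each pixel's ray is an independent computation with no shared mutable state, so the work fans out across threads (or, on a GPU, across thousands of cores) with no coordination beyond each worker writing its own slice of the framebuffer. A toy example, with every constant invented just for illustration:

// Toy illustration of why ray tracing parallelizes so well: every pixel's ray
// is a pure function of its inputs, so rows (or pixels, or tiles) can be handed
// to separate threads with no synchronization beyond distinct output slots.
#include <cmath>
#include <cstdio>
#include <thread>
#include <vector>

constexpr int W = 32, H = 16;

// Trace one primary ray through pixel (x, y) against a hard-coded unit sphere
// at the origin; returns a brightness value.
float tracePixel(int x, int y) {
    float u = (x + 0.5f) / W * 2.0f - 1.0f;
    float v = (y + 0.5f) / H * 2.0f - 1.0f;
    float dx = u, dy = -v, dz = 1.0f;        // ray direction through the pixel
    float len = std::sqrt(dx * dx + dy * dy + dz * dz);
    dx /= len; dy /= len; dz /= len;
    float ox = 0.0f, oy = 0.0f, oz = -3.0f;  // camera position, looking toward +z
    float b = ox * dx + oy * dy + oz * dz;
    float c = ox * ox + oy * oy + oz * oz - 1.0f;
    float disc = b * b - c;
    return disc < 0.0f ? 0.0f : 1.0f;        // hit -> bright, miss -> dark
}

int main() {
    std::vector<float> framebuffer(W * H);
    std::vector<std::thread> workers;

    // One thread per row: each thread touches only its own slice of the buffer.
    for (int y = 0; y < H; ++y) {
        workers.emplace_back([y, &framebuffer] {
            for (int x = 0; x < W; ++x)
                framebuffer[y * W + x] = tracePixel(x, y);
        });
    }
    for (auto& t : workers) t.join();

    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x)
            std::printf("%c", framebuffer[y * W + x] > 0.5f ? '#' : '.');
        std::printf("\n");
    }
}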