Pemalite said:
vivster said:
You cannot really expect devs to code for hardware that doesn't exist. When has that ever been the case? And yes, for now they are gimmicks, but they're starting a framework. I'd rather devs get familiar with it now and learn techniques to make their stuff more efficient than later, when they finally have the hardware power but write such inefficient code that it won't matter anyway. We should be embracing both Nvidia and the devs who pioneer so early with technology that will inevitably become the future. We lose a few percent of performance in this and the next few gens, but we'll gain so much more in the long run.
|
When AMD started to design TeraScale... Aka Very Long Instruction Word, 5-way... Aka VLIW5... It was an architecture that was highly optimized for DirectX 9.0 titles... And it showed: it gave nVidia a run for its money. With the advent of the Radeon 5000 series, AMD started to bolt on more advanced features like tessellation... However, it was becoming clear that VLIW was not an architecture well suited to the increasing compute demands of the future... So a more balanced approach was born with VLIW4... But that was just a stop-gap solution to get more performance out of a node whilst not blowing out transistor budgets.
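To make the utilisation argument concrete, here's a toy sketch (my own illustration with assumed numbers, not the real R600 ISA): a VLIW5 shader core issues a 5-lane bundle per cycle, so a DX9-style pixel shader full of vec4 colour math with a co-issued scalar op fills all five lanes, while a compute kernel built around a dependent scalar chain leaves four of five lanes idle each cycle.

```python
# Toy model of VLIW5 lane utilisation (hypothetical numbers, not real hardware).
# Each entry is how many of the 5 lanes do useful work in one issue cycle.

def utilization(lanes_used_per_cycle, width=5):
    """Fraction of available VLIW lanes doing useful work."""
    cycles = len(lanes_used_per_cycle)
    return sum(lanes_used_per_cycle) / (cycles * width)

# DX9-era pixel shader: vec4 op + co-issued scalar fills all 5 lanes.
dx9_like = [5, 5, 5, 5]
# Dependent scalar compute chain: only 1 lane busy per cycle.
compute_like = [1, 1, 1, 1]

print(utilization(dx9_like))      # 1.0 -> the workload TeraScale was built for
print(utilization(compute_like))  # 0.2 -> why VLIW struggled with compute
```

The real scheduling was done by the compiler packing independent operations into bundles, but the gist is the same: when the workload stops looking like wide vector graphics math, a 5-wide issue slot goes mostly unused.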
Eventually GCN came along, and the rest is history.
The point is... AMD's TeraScale was designed for good performance in the titles of the day... Features were introduced as they were ratified in DirectX, but not in so comprehensive a fashion as to take away from the chip's capabilities elsewhere that games of the era were demanding.
And despite the fact that the Radeon 5000 series had absolutely terrible geometry performance, that part still held up extremely well (more so than nVidia's direct counterpart, that's for sure. - Just ask JEMC.), as AMD didn't spend as much transistor space on its geometry engines as nVidia did with its PolyMorph Engines.
The point I am making is that there are certainly better ways than dedicating such a large silicon area to features that don't really have much use at the moment. Ramp it up when the demand arises.
vivster said:
I really hope AMD will not go completely conservative with their next GPU set to compete with Nvidia's flagship and at least take the first steps to integrate the new features.
|
It's going to be conservative. Navi is just an iterative update to Graphics Core Next, which we have been seeing on the market for half a decade or more.
|
Demand won't arise if nobody takes the first step. This is that first step. After this gen, people will certainly demand more from the next gen, and also more from the games. Nvidia also cannot afford to lose its edge in conventional rendering, so they will keep a careful eye on that. If it so happens that AMD is marginally beating them, they will certainly rethink their design. We should be thankful that AMD's lack of competition allows Nvidia this freedom.
Do you expect Navi to beat the 2080 Ti? Because that's why I said "the next AMD GPU that tries to compete with Nvidia's flagship". Navi doesn't seem to do that.
Last edited by vivster - on 21 August 2018