vivster said:

When was the last time the performance jump was groundbreaking? 15-20% gen to gen seems moderate.

nVidia Maxwell had gains of about 40-55% over Kepler.
https://www.anandtech.com/show/9306/the-nvidia-geforce-gtx-980-ti-review/8

Kepler had gains of 45-75% over Fermi.
https://www.anandtech.com/show/5805/nvidia-geforce-gtx-690-review-ultra-expensive-ultra-rare-ultra-fast/11

So... It happens more often than we think?
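(For anyone who wants to sanity-check figures like that, the arithmetic is trivial. Here's a quick Python sketch; the frame rates in it are made-up placeholders, not AnandTech's actual benchmark results.)

```python
# How gen-over-gen uplift percentages are computed.
# These frame rates are invented placeholders, NOT AnandTech's numbers.
def uplift_percent(new_fps: float, old_fps: float) -> float:
    """Relative gain of the new card over the old one, in percent."""
    return (new_fps / old_fps - 1.0) * 100.0

kepler_fps = 42.0   # hypothetical GTX 780 Ti result in some title
maxwell_fps = 62.0  # hypothetical GTX 980 Ti result in the same title
print(f"Maxwell over Kepler: {uplift_percent(maxwell_fps, kepler_fps):.0f}%")  # ~48%
```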

vivster said:

You cannot really expect devs to code for hardware that doesn't exist.

I never said anything to the contrary.

vivster said:

You cannot really expect devs to code for hardware that doesn't exist. When has that ever been the case? And yes, for now they are gimmicks, but they're starting a framework. I'd rather devs start to get familiar with it now and learn techniques to make their stuff more efficient than later when they do get the hardware power but put out such inefficient code that it won't matter anyway. We should be embracing both Nvidia and the devs who pioneer so early with technology that will inevitably become the future. We lose a few percent of performance in this and the next few gens but we'll gain so much more in the long run.


When AMD started to design Terascale... Aka. Very Long Instruction Word, 5-way... Aka. VLIW5... It was an architecture that was highly optimized for DirectX 9.0 titles... And it showed, it gave nVidia a run for its money.
With the advent of the Radeon 5000 series, AMD started to bolt on more advanced features like Tessellation... However, it was becoming clear that VLIW was not an architecture well suited to the increasing compute demands of the future... So a more balanced approach was born with VLIW4... But that was just a stop-gap solution to get more performance out of a node without blowing out transistor budgets.
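To give a rough feel for why VLIW5 loved DX9-style shader code but struggled with compute, here's a toy Python sketch. It isn't real shader compilation; it just models the one constraint that mattered: the compiler had to statically fill five ALU slots per bundle with independent operations, or those slots went to waste.

```python
# Toy model of VLIW5 slot packing (illustration only, not real compilation).
# Each VLIW5 bundle has 5 ALU slots; slots the compiler can't fill with
# independent operations issue as no-ops and are wasted.
BUNDLE_WIDTH = 5

def slot_utilization(independent_ops: int) -> float:
    """Fraction of the 5 slots doing useful work in a given cycle."""
    return min(independent_ops, BUNDLE_WIDTH) / BUNDLE_WIDTH

# A DX9-era pixel shader: a vec4 colour operation plus a scalar op gives
# 5 independent operations, so every slot is busy.
print(f"vec4 + scalar graphics work: {slot_utilization(5):.0%}")  # 100%

# A dependent compute chain: each op needs the previous result, so only
# 1 op can issue per cycle and 4 of 5 slots sit idle.
print(f"serial compute work:         {slot_utilization(1):.0%}")  # 20%
```

Narrowing the bundle to four slots meant less hardware sat idle on compute-style workloads, which is the "more balanced approach" VLIW4 took.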

Eventually GCN came along, and the rest is history.

The point is... AMD's Terascale was designed for good performance in the titles of the day... Features were introduced as they were ratified in DirectX, but they weren't done in such a comprehensive fashion as to take away from the chip's capabilities elsewhere, capabilities that games of the era were actually demanding.

And despite the fact that the Radeon 5000 series had absolutely terrible geometry performance, that part still held up extremely well (more so than nVidia's direct counterpart, that's for sure. Just ask JEMC.), as AMD didn't spend as much transistor space on its geometry engines as nVidia did on its PolyMorph Engines.

The point I'm making is that there are certainly better ways than dedicating such a large silicon area to features that don't really have much use at the moment.
Ramp it up when the demand arises.

vivster said:


I really hope AMD will not go completely conservative with their next GPU set to compete with Nvidia's flagship and at least take the first steps to integrate the new features.

It's going to be conservative. Navi is just an iterative update to Graphics Core Next, which has been on the market for half a decade or more.


vivster said:

Question: How feasible are large chips? Is there like an upper limit where we reach the ceiling of what's possible with engineering, or is it a cost issue? How far are we with stacked chips?

There is most certainly an upper limit.
nVidia just has such large profit margins and prices that it's less of an issue... They can build monolithic chips and die-harvest them into more premium price points like the 2080 Ti and 2080... Eventually, when they have enough fully functional chips stockpiled, they will sell them as a Titan.
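A back-of-the-envelope yield model shows why big monolithic dies get expensive fast and why die harvesting helps. This is a simple Poisson model with an assumed, round defect density; it's illustrative only, not a foundry figure.

```python
import math

# Simple Poisson yield model: P(zero defects) = exp(-area * defect_density).
DEFECT_DENSITY = 0.1  # defects per cm^2 (an assumed round number)

def perfect_die_fraction(die_area_mm2: float) -> float:
    """Fraction of dies that come out with zero defects."""
    return math.exp(-(die_area_mm2 / 100.0) * DEFECT_DENSITY)

for area_mm2 in (200, 400, 750):  # small, mid-size, and roughly TU102-sized dies
    print(f"{area_mm2:>4} mm^2 -> {perfect_die_fraction(area_mm2):.0%} fully working")
# ~82%, ~67%, ~47%: the bigger the die, the fewer perfect chips per wafer,
# so selling partially disabled dies as lower tiers (die harvesting) pays off.
```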

Stacked chips won't be around for a long time yet; cooling tends to be a bit of an issue currently, plus leakage.
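The cooling problem is easy to see with some rough arithmetic. All the numbers below are assumed round figures, purely for illustration:

```python
# Why stacking logic dies is hard to cool: same footprint, double the heat.
# All figures are assumed round numbers for illustration only.
die_area_mm2 = 500.0
power_per_die_w = 250.0

planar_w_per_mm2 = power_per_die_w / die_area_mm2
stacked_w_per_mm2 = (2 * power_per_die_w) / die_area_mm2  # two dies stacked

print(f"planar:  {planar_w_per_mm2:.1f} W/mm^2")   # 0.5
print(f"stacked: {stacked_w_per_mm2:.1f} W/mm^2")  # 1.0
# Twice the power has to escape through the same top surface, and the buried
# die runs hotter, which also makes leakage current worse, not better.
```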



