Pemalite said:
vivster said:

I think people are overlooking the fact that they didn't actually compromise much space in the chip and instead just made it bigger to compensate. The 2080Ti still has a considerable upgrade in CUDA cores.

The point is, they could have had more CUDA cores for the same die space.

The 2080Ti is 18.6 Billion transistors large.
The 1080Ti is 12 Billion transistors large.
That is an increase of 55% in transistor count.

The 2080Ti has 4352 CUDA cores.
The 1080Ti has 3584 CUDA cores.
That is an increase of 21%.

We should have been looking at closer to 5,500 CUDA cores.

See the problem now?
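Spelled out as a quick back-of-the-envelope sketch (assuming, purely for illustration, that CUDA cores could have scaled linearly with transistor count, which ignores the other logic those transistors pay for):

```python
# Naive linear scaling of CUDA cores with transistor count,
# 1080Ti -> 2080Ti. Illustration only; real dies spend transistors
# on more than just CUDA cores.
transistors_1080ti = 12.0e9
transistors_2080ti = 18.6e9
cores_1080ti = 3584
actual_cores_2080ti = 4352

scale = transistors_2080ti / transistors_1080ti        # ~1.55, i.e. +55%
expected_cores = cores_1080ti * scale                  # ~5,555

print(f"Transistor scaling: +{scale - 1:.0%}")         # +55%
print(f"Linear-scaling cores: ~{expected_cores:,.0f}") # ~5,555
print(f"Actual cores: {actual_cores_2080ti} "
      f"(+{actual_cores_2080ti / cores_1080ti - 1:.0%})")  # +21%
```

The gap between ~5,500 and the actual 4,352 is roughly the transistor budget that went to RT and Tensor cores instead.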

Combine that with a slight reduction in clocks between the 1080Ti and 2080Ti... I don't expect the performance gains in typical rasterized scenarios to be generationally groundbreaking.

vivster said:


Also, the RT and Tensor cores aren't like PhysX, i.e. a very niche and proprietary feature. They work together with DirectX and are broadly applicable to games without much special input from the devs. RT is without a doubt the future of gaming, and Tensor cores can help with a variety of tasks. For now they're gimmicks because the hardware behind them is still too weak to properly deliver what they promise, but you have to start somewhere. The sooner devs get familiar with these new opportunities, the better.

Now we just have to hope that the RT and AI functionality that will eventually pop up in AMD GPUs are similar enough in framework to that in Nvidia chips.

We don't know if the GPU is going to be any good in ray-traced scenarios anyway. This is essentially the "baseline" for next-gen graphics.
Chicken and Egg and all that.
I would prefer hardware that works well in games today, not years later.

So until we actually have games and enough hardware on the market for these features to prove themselves, they are all gimmicks at this stage.

When was the last time the performance jump was groundbreaking? 15-20% gen to gen seems moderate.

You cannot really expect devs to code for hardware that doesn't exist. When has that ever been the case? And yes, for now they are gimmicks, but they're building a framework. I'd rather devs start getting familiar with it now and learn techniques to make their stuff more efficient, rather than later, when they finally have the hardware power but write code so inefficient that it won't matter anyway. We should be embracing both Nvidia and the devs who pioneer this early with technology that will inevitably become the future. We lose a few percent of performance in this and the next few gens, but we'll gain so much more in the long run.

I really hope AMD won't go completely conservative with their next GPU set to compete with Nvidia's flagship, and that they at least take the first steps toward integrating the new features.

Question: How feasible are large chips? Is there an upper limit where we hit the ceiling of what's possible with engineering, or is it a cost issue? How far along are we with stacked chips?
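(For what it's worth: there is a hard engineering ceiling, the lithography reticle limit of roughly 800-850 mm² per die on current tooling, but cost usually bites first because yield falls as dies grow. Here's a toy sketch using the simple Poisson yield model; the defect density and the smaller die sizes are made-up illustration numbers, while ~754 mm² is roughly the real TU102 die in the 2080Ti:

```python
import math

# Toy model of why very large dies get expensive: at a fixed defect
# density, the share of good dies falls exponentially with die area.
# Defect density and wafer handling are made-up illustration numbers,
# not real foundry figures.
defect_density = 0.1                      # defects per cm^2 (assumed)
wafer_area = math.pi * (30.0 / 2) ** 2    # 300 mm wafer in cm^2, edge losses ignored

for die_mm2 in (300, 500, 754):           # 754 mm^2 ~ TU102 (2080Ti)
    die_cm2 = die_mm2 / 100.0
    good_fraction = math.exp(-defect_density * die_cm2)  # Poisson yield model
    dies = wafer_area / die_cm2                          # crude upper bound
    print(f"{die_mm2} mm^2: yield ~{good_fraction:.0%}, "
          f"~{dies * good_fraction:.0f} good dies/wafer")
```

Fewer good dies per wafer at the same wafer cost is a big part of why monster dies carry monster price tags.)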


