EricHiggin said:
JEMC said:

If AMD manages to bring something 10-15% faster than the 3070 for the same price, it could disrupt Nvidia's business plan because they've left room for a 3080Ti, but not for a 3070Ti.

But Nvidia maybe (probably) knows more than we do, and the gaps are placed where they need them.

If Nvidia believes the highest-performing Big Navi SKU will outperform the 3080 by a significant margin, that would explain why they left not just room but a huge price gap, so they can land wherever they need to when they respond with a 3080Ti. Probably sooner rather than later. Below that, there isn't as much wiggle room in terms of pricing.

Offer 3070Ti and 3080Ti performance at 3070 and 3080 pricing with Big Navi. That would certainly make things interesting. 3070- and 3080-like performance for $50-$100 less would be the next best thing.

After multiple gens of overpricing, Nvidia didn't just all of a sudden decide to be generous for no reason this gen. These prices should scream that worthy competition is coming. Below the 3090, anyway.

I'd be surprised if AMD manages not only to beat the 3080 but even to match it. I'm not saying it can't happen, but I'd certainly be cautious about it, especially given the latest rumors about Big Navi (from AMD being surprised by Ampere's performance jump, to Big Navi not being taped out until recently, meaning that all previous rumors were false, to the latest kopite tweet comparing it to the GA104 of the 3070).

And when it comes to the price of the new cards, we also have to keep in mind that, because of COVID, the whole world is in the middle of an economic crisis, so Nvidia can't charge as much as they want without risking lost sales from people who can't afford the new cards.

Also, looks like I could be wrong about the 3070Ti... (see below)

vivster said:

Let's talk about CUDA cores. It looks like the seemingly massive increase in shader count isn't the true story, and neither are the TFLOPS. It has been noticed that the performance of the new cards does not scale linearly with the core count, as it usually does.

So the deal is that Nvidia has basically invented hyperthreading for shaders and is selling it as double the shader count, which I find incredibly misleading. Two calculations per clock in the same shader just don't scale as well as two separate shaders. Yet they also use that "doubled" shader count to calculate TFLOPS. That means that in real-world performance, Nvidia's shader count and TFLOPS are now worth less than they were with Turing, and probably even less than AMD's.

But there is another theory I have I'd like some input on.

I believe that applications may not yet be able to fully utilize the massively increased logical shader count, since you can only parallelize so much. That's why I believe performance on Ampere, and on any card that uses the new shaders, will slowly improve to close the efficiency gap over the next 5-10 years.

Where did you get that info about the "fake" shader count? Just curious; I'd like to read more about it, because videocardz has an article about Lenovo spoiling the existence of a 3070Ti that says this:

NVIDIA GeForce RTX 3070 Ti spotted with 16GB GDDR6 memory https://videocardz.com/newz/nvidia-geforce-rtx-3070-ti-spotted-with-16gb-gddr6-memory

Interestingly, Lenovo also confirmed that their Legion T7 system will feature the GeForce RTX 3070 Ti model. This SKU has not been announced or even teased by NVIDIA in any form. Though, it aligns with the rumors that RTX 3070 series will be offered with both 8GB and 16GB memory. What remains unclear is whether the model is really called 3070 Ti or 3070 SUPER, we have heard both names in private conversations with AIBs.

(...)

There is, however, something to consider. NVIDIA clearly did not inform the partners with the full specifications until the very last moment. We have heard that the final BIOS for the Ampere series was provided only recently. The doubled FP32 SM (Cuda) count has also not been communicated clearly to partners until just a few days ago. Hence, some AIBs still list incorrect CUDA core counts (5248/4352/2944) on their websites. What this means is that Lenovo may still rely on old data, which could’ve changed over the past few days.

They seem to think that the shader core number is real.
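For what it's worth, the headline TFLOPS number is just arithmetic over that shader count, which is why the "doubling" flows straight into the spec sheet. A quick sketch using Nvidia's published RTX 3080 figures (8704 CUDA cores, 1.71 GHz boost); the doubled Ampere shader count is already baked into the core number, and the extra factor of 2 is the usual FMA credit (one multiply plus one add per clock):

```python
def tflops(cuda_cores: int, boost_clock_ghz: float, flops_per_clock: int = 2) -> float:
    """Theoretical FP32 throughput in TFLOPS.

    cores x FLOPs-per-core-per-clock x clock (GHz) gives GFLOPS;
    divide by 1000 for TFLOPS. The x2 is the standard FMA factor.
    """
    return cuda_cores * flops_per_clock * boost_clock_ghz / 1000

# RTX 3080 official specs: 8704 CUDA cores, 1.71 GHz boost clock
print(round(tflops(8704, 1.71), 1))  # ~29.8 TFLOPS, matching Nvidia's marketing figure
```

Count the cores the Turing way (4352) and the same formula gives half that, which is the gap between the marketing number and how the cards actually scale.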



Please excuse my bad English.
