Captain_Yuri said:
Exclusive: GeForce RTX 30 series cards tight supply until end of year

https://www.tweaktown.com/news/74915/exclusive-geforce-rtx-30-series-cards-tight-supply-until-end-of-year/index.html?utm_source=dlvr.it&utm_medium=twitter&utm_campaign=tweaktown

Not really surprising, given that it happens with every GPU launch.

EricHiggin said:
JEMC said:

I'd be surprised if AMD manages not only to beat the 3080 but even to come in on par with it. I'm not saying it can't happen, but I'd certainly be cautious about it, especially given the latest rumors about Big Navi (from AMD being surprised by Ampere's performance jump, to Big Navi not being taped out until recently, which would mean all the previous rumors were false, to the latest kopite tweet comparing it to the GA104 of the 3070).

And when it comes to the price of the new cards, we also have to keep in mind that, because of COVID, the whole world is in the middle of an economic crisis and, as such, Nvidia can't charge as much as it wants, because it would risk losing sales from people unable to afford the new cards.

Also, looks like I could be wrong about the 3070Ti... (see below)

Where did you get that info about the "fake" shader count? Just curious, I'd like to read more about it, because videocardz has an article about Lenovo leaking the existence of a 3070 Ti, and it says this:

NVIDIA GeForce RTX 3070 Ti spotted with 16GB GDDR6 memory https://videocardz.com/newz/nvidia-geforce-rtx-3070-ti-spotted-with-16gb-gddr6-memory

Interestingly, Lenovo also confirmed that their Legion T7 system will feature the GeForce RTX 3070 Ti model. This SKU has not been announced or even teased by NVIDIA in any form. Though, it aligns with the rumors that RTX 3070 series will be offered with both 8GB and 16GB memory. What remains unclear is whether the model is really called 3070 Ti or 3070 SUPER, we have heard both names in private conversations with AIBs.

(...)

There is, however, something to consider. NVIDIA clearly did not inform the partners with the full specifications until the very last moment. We have heard that the final BIOS for the Ampere series was provided only recently. The doubled FP32 SM (Cuda) count has also not been communicated clearly to partners until just a few days ago. Hence, some AIBs still list incorrect CUDA core counts (5248/4352/2944) on their websites. What this means is that Lenovo may still rely on old data, which could’ve changed over the past few days.

They seem to think that the shader core number is real.

I wouldn't expect a Radeon 3080 (Ti) competitor to go blow for blow with its direct GeForce competition, but that's not to say it couldn't be better in some aspects while worse in others.

A 3070 Ti wouldn't be as warranted based on the pricing layout, but considering a Ti version is the norm, it's not entirely unexpected. Having a Ti/Super edition out sooner rather than later would make it even tougher on AMD, though. Perhaps this is a stronger indicator of where Nvidia thinks the top-tier Radeon cards will land: flood those tiers with reasonably priced models, price-gapped for upselling at that, to try to keep people from buying AMD. If correct, this would mean AMD mostly has to rely on cheaper pricing if it wants to gain market share.

RTG's silence really does make me wonder, as explained below.

So you're like haxxiy and think that AMD could potentially beat Nvidia's Ampere in pure rasterization but lose in RT and the like. We'll see.

In any case, in order to put up a fight, AMD needs to be competitive in both performance and price. The 5700 series was very competitive against Nvidia's cards but, given that they were priced very closely, Nvidia still managed to sell more units because of its brand name.

EricHiggin said:
haxxiy said:

From the performance figures they've given, Ampere has 98% more flops per watt than Turing but only 21% more performance, on average. That means one needs roughly 1.6 Ampere flops to equal the performance of 1 Turing flop, and about 1.5 Ampere flops to equal 1 RDNA 1.0 flop.
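As a quick sanity check, the ratio above can be reproduced from the quoted figures alone. This is just a sketch of the arithmetic; the exact result depends on the underlying benchmark numbers, which aren't given in the post.

```python
# Relative figures quoted in the post (Turing = 1.00 baseline):
ampere_flops_per_watt = 1.98  # "98% more flops per watt than Turing"
ampere_perf_per_watt = 1.21   # "only 21% more performance"

# Ampere flops needed to match the performance of 1 Turing flop:
# (flops/W) / (perf/W) = flops per unit of performance.
flops_per_turing_flop = ampere_flops_per_watt / ampere_perf_per_watt
print(round(flops_per_turing_flop, 2))  # ~1.64, close to the ~1.6 quoted
```

With these rounded inputs the ratio comes out near 1.64; the post's slightly lower figure presumably comes from the unrounded benchmark data.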

It seems clear to me each shader was effectively cut in half before some architectural improvements, or perhaps it was the increased number of FP32 engines themselves that increased the performance relative to Turing.

With RDNA 2.0 apparently focusing on IPC, it would seem Nvidia and AMD have more or less switched places relative to what their GPU design philosophies historically used to be. Ampere is very Terascale-like (lots of shaders, lower clocks and per-shader performance) while RDNA 2.0 is kind of Fermi-like (higher clocks and IPC but fewer shaders).

An Ampere CUDA core also has some similarities with Bulldozer modules, in that a second unit (integer in the case of Bulldozer, floating point in the case of Ampere) was added to each processing core to increase performance and also to make it into those PR slides with twice the number of cores.

So, I don't think it's feasible to expect there's more performance left in future drivers (the same way that magical expectation wasn't feasible with Terascale or GCN).

Didn't AMD get sued over this not all that long ago, for marketing more CPU cores than those chips 'legitimately had'?

Yeah, they got sued over Bulldozer and had to pay $12 million: https://www.anandtech.com/show/14804/amd-settlement



Please excuse my bad English.

Currently gaming on a PC with an i5-4670K @ stock (for now), 16GB RAM at 1600MHz, and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.