haxxiy said:
Samsung's 8 nm has a 10 nm BEOL versus a 20 nm BEOL on TSMC's 12 nm process... huge difference right there. Going by feature size, that's at least a node and a half of improvement. Perhaps AMD is using a better node right now, perhaps they aren't... but we can't compare them, since we have literally nothing to go on. I'm just pointing out there was almost no improvement even with a new architecture on top of the newer node. That was the point of the Radeon VII comparison. Vega was bad compared to Turing, yes. But at least it demonstrates a response to a more advanced node, with the same architecture to boot, as immature as 7 nm was back then. As for the number of people gaming with 2080s and above, that's a fair point. But do remember that those cards were horribly overpriced and underperformed accordingly on the market. Perhaps Nvidia will have more success with Ampere, I don't know.
Yeah, that's a good point. It will be interesting to see where GPUs go as node shrinks become harder and harder.
Yeah, they did, and rightfully so, but if you go look at other GPUs that are $700+, they're also at very low percentages. Not to mention, the 1080p results get skewed by the laptop crowd, since the majority of laptops ranging from $400 to $3,000 have 1080p screens. It's rare for people to get a 4K screen with their laptop, and 1440p is hella rare. It's for reasons like this that I don't bother paying attention to Steam's hardware survey, as the results are always heavily skewed. Personally, I have three computers: my desktop with a 3440x1440 display and two laptops, both 1080p, and I have Steam on all three.
PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850