
Carzy Zarx’s PC Gaming Emporium - Catch Up on All the Latest PC Gaming Related News

Pemalite said:

You can't reason with individuals like that.

In saying that... if you buy a lot of games, and some of us do, with collections numbering in the thousands of titles, PC works out significantly cheaper over the long term. But they conveniently ignore all of that.

I know. It's just annoying to see a personality like that with a YT channel, who sells merch and acts insanely arrogant and snarky while pretending he's a saint, a "neutral", only to toss all that away when it doesn't suit him in a pinch. It's people like that who give gaming its bad takes, and it's people like him who keep the not-so-tech-savvy in the dark and ignorant.

That's the thing about us getting it better over the long term: they'll just bring up the price argument. No matter how much we actually save, they count the base, day-one RRP as if it's the be-all and end-all. To them it doesn't matter that we're getting this insane new boon of tech and software from Nvidia, because it "doesn't cost the same as an XSX" (which, ironically, is the exact phrasing Colt uses in his YT vids).



Step right up come on in, feel the buzz in your veins, I'm like a chemical electrical right into your brain and I'm the one who killed the Radio, soon you'll all see

So pay up motherfuckers you belong to "V"

Chazore said:
Pemalite said:

You can't reason with individuals like that.

In saying that... if you buy a lot of games, and some of us do, with collections numbering in the thousands of titles, PC works out significantly cheaper over the long term. But they conveniently ignore all of that.

I know. It's just annoying to see a personality like that with a YT channel, who sells merch and acts insanely arrogant and snarky while pretending he's a saint, a "neutral", only to toss all that away when it doesn't suit him in a pinch. It's people like that who give gaming its bad takes, and it's people like him who keep the not-so-tech-savvy in the dark and ignorant.

That's the thing about us getting it better over the long term: they'll just bring up the price argument. No matter how much we actually save, they count the base, day-one RRP as if it's the be-all and end-all. To them it doesn't matter that we're getting this insane new boon of tech and software from Nvidia, because it "doesn't cost the same as an XSX" (which, ironically, is the exact phrasing Colt uses in his YT vids).

The other aspect is... we don't need to throw out our PCs every six years when new hardware comes out either. PC doesn't change in big giant leaps where we have to throw out the old and replace it with the new.

I've still got the old Core i7 3930K PC: 6 cores/12 threads @ 5GHz, 32GB of RAM and a Radeon RX 580 (I upgrade the GPU every few years). It does 1080p-1440p gaming perfectly fine in the games room... and that chip came out during the Xbox 360 era (2011).
And it will still be gaming during the Xbox Series X era. 3 console generations, one PC. 100% free online.

But they conveniently ignore that as well... I would like to see an Xbox 360 run a Series X game.

But yeah, those kinds of people really tar the entire gaming community with the same brush... It happens on this forum as well, but less so recently.



--::{PC Gaming Master Race}::--

Let's talk about CUDA cores. It looks like the seemingly massively increased shader count isn't the whole story, and neither are the TFLOPS. People have noticed that the new cards' performance doesn't scale linearly with the core count the way it usually does.

So the deal is that Nvidia basically invented hyperthreading for shaders and is selling it as double the shader count, which I find incredibly misleading. Two calculations per clock in the same shader just don't scale as well as two separate shaders. Yet they also use that "doubled" shader count to calculate TFLOPS. That means in real-world performance, Nvidia's shader counts and TFLOPS are now worth less than they were with Turing, and probably even less than AMD's.
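To put rough numbers on that, here's a back-of-the-envelope Python sketch. It's not how a real SM schedules work: the 64+64 lane layout per SM and the ~36 INT per 100 FP instruction mix are Nvidia's own published figures, but the scheduling model itself is my simplification.

```python
# Toy model: peak vs. effective FP32 throughput, Turing vs. Ampere.
# Assumptions (simplified): per Turing SM, 64 FP32 lanes + 64 INT32 lanes;
# per Ampere SM, 64 dedicated FP32 lanes + 64 lanes shared by FP32/INT32.

def peak_tflops(shaders: int, boost_ghz: float) -> float:
    """Paper TFLOPS: shaders x 2 ops/clock (FMA) x clock speed."""
    return shaders * 2 * boost_ghz / 1000

print(peak_tflops(8704, 1.71))   # RTX 3080   -> ~29.8 "marketing" TFLOPS
print(peak_tflops(4352, 1.545))  # RTX 2080 Ti -> ~13.4 TFLOPS

def effective_fp32_per_clock(fp: float, ints: float, dedicated: int,
                             shared: int, shared_runs_fp: bool) -> float:
    """FP32 ops per SM-clock for a workload with the given FP:INT mix."""
    if shared_runs_fp:
        # Ampere-style: INT32 must run on the shared lanes, displacing FP32.
        clocks = max((fp + ints) / (dedicated + shared), ints / shared)
    else:
        # Turing-style: INT32 has its own lanes; FP32 keeps all of its own.
        clocks = max(fp / dedicated, ints / shared)
    return fp / clocks

# Nvidia's Turing-era claim: games average ~36 INT32 ops per 100 FP32 ops.
print(effective_fp32_per_clock(100, 36, 64, 64, True))   # Ampere: ~94 of 128 peak
print(effective_fp32_per_clock(100, 36, 64, 64, False))  # Turing: 64 of 64 peak
```

Even in this toy model Ampere still gets roughly 47% more FP32 work done per SM per clock than Turing (~94 vs 64), which is a real gain, just nowhere near the 2x the doubled core count implies.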

But there's another theory I have that I'd like some input on.

I believe it's possible that applications can't yet fully utilize the massively increased logical shader count, since you can only parallelize so much. That's why I believe performance on Ampere, and on any card that uses the new shaders, will slowly improve and close the efficiency gap over the next 5-10 years.
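For what it's worth, that "you can only parallelize so much" intuition is basically Amdahl's law. A quick illustrative sketch (the 95% parallel fraction is a made-up assumption, purely to show the shape of the curve):

```python
# Amdahl's law: speedup from n parallel units when only fraction p
# of the work can actually be spread across them.
def amdahl_speedup(p: float, n: int) -> float:
    return 1 / ((1 - p) + p / n)

# Hypothetical: 95% of a frame's shader work parallelizes perfectly.
for n in (4352, 8704):  # 2080 Ti-class vs 3080-class "shader" counts
    print(n, round(amdahl_speedup(0.95, n), 2))
# 4352 -> 19.91x, 8704 -> 19.96x: doubling the lanes barely moves the
# needle until the serial share shrinks, i.e. until software catches up.
```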

Last edited by vivster - on 02 September 2020

If you demand respect or gratitude for your volunteer work, you're doing volunteering wrong.

Yeah, I noticed that as well, so we'll have to see how it performs in real-world benches instead of assuming the scaling will work the way we think it will.

GN said the architecture day is coming in 1-2 days, so we should have multiple deep dives soon enough.

With that being said, we do already have some real-world comparison percentages from DF, and those should be factually accurate.



Whether this will apply to every game is another story entirely. I do think the performance gains are real, as demonstrated by DF, but the way Nvidia presented some things, such as those CUDA cores, certainly makes this a mess. Unless there's some weird shit going on in the background with DF as well.




PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

I also trust the DF numbers, though the selection of games is horrible. But I see a way bigger problem here. With the shaders and TFLOPS already being totally messed up, we now have to throw DLSS into the mix, which massively improves performance. So what I expect to happen is that when the big wave of tests comes out, people will be underwhelmed by the numbers, because they don't scale well in games without RTX or DLSS.

Then we'll have Big Navi coming out, which will no doubt be an absolute rasterization beast; it will come close to the 3080 and in some games might even eclipse it. The red trolls will have a field day, and it will put a damper on everything, despite Ampere being a giant leap forward and completely trouncing AMD in any game with RTX and DLSS.



If you demand respect or gratitude for your volunteer work, you're doing volunteering wrong.


I mean, both Borderlands 3 and Doom Eternal are pure rasterization. It's not like Nvidia is making people buy the 3000 series right away. There are deep dives, reviews and a whole lot of other crap coming out. If the 3000 series had issues with rasterization, Nvidia would have made people start pre-ordering today and done shady shit with reviews and such. They probably would have delayed architecture day too.

They aren't stupid enough to go: we're gonna give y'all this shit-tier GPU that can't rasterize worth shit, and we'll hand out review samples two weeks to a month before anyone can buy it, so reviewers can tell you well beforehand whether or not it's shit. It doesn't make any sense.

What this tells me is that, as always, judging a card solely on teraflops and CUDA cores is a bad idea. Even more so with Ampere. The reviews will tell the whole story.




PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

Deep dives won't matter when people can find game benchmarks where Big Navi is very close to a 3080 or better.

I'm not saying rasterization performance will be bad, just that it won't scale with the numbers on paper. Which will then leave people disappointed and blaming Nvidia for misleading marketing.



If you demand respect or gratitude for your volunteer work, you're doing volunteering wrong.

I wouldn't be so sure about that. If the performance is real like DF suggests, then Big Navi is still gonna have one hell of a time competing... even against Ampere's weirdness... Just because Ampere's architecture is different doesn't suddenly mean Big Navi is gonna be on par. Obviously people should wait for RDNA 2 if they're worried. It would be fun to see Nvidia lose for once, though.

And if people look only at the CUDA cores/teraflops and not at reviews and performance numbers, then they deserve to be misled. Luckily word of mouth and social media are strong, so that shouldn't happen.




PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

Think of it like this.

You look at the two consoles right now and see what they're doing. They're playing a game of chicken with pricing and pre-orders. Why is that? It's 'cause they have no confidence in their product vs the other's. They're like: you go first. No, you go first.

What did Nvidia do? They are like, come at me bro. Here's our product. Here's our price. Bring it.

Now the ball is in AMD's court. We knew for almost a goddamn month when Ampere was gonna be announced. Did AMD say anything? Nope. Now, that doesn't mean shit, because the question now is: will AMD say something? Let's say they're playing the smart game and waiting for the reviews to go out. Cool. But will they tease us in the meantime? Or something? That right there will tell me how AMD feels about RDNA 2, 'cause with Zen they were all over Intel. They weren't quiet; they were effectively saying: you lazy idiots, you're about to get rekt.




PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

The good thing is that I don't have to wait. Even if Big Navi performs better at pure rasterization than a 3080, it's still better to go Nvidia thanks to their quite advanced RT and Tensor cores. I know absolutely nothing about AMD's RT capabilities, but I already know they won't be as good. Even if we pretend AMD can go toe to toe on RT, I very much doubt they'll sacrifice as much die real estate for it as Nvidia has. And on the AI side of things, AMD is basically dead. I assume they'll show some stuff, but I doubt they'll have many specialized cores for it.

So not only will Nvidia have the performance crown, but their cards will also age better as more and more games adopt RT and DLSS. So for me there's absolutely no point in waiting for AMD. The only thing I'm waiting on right now is PCIe benchmarks, and then I'll pull the trigger on my new build. The way Nvidia talked about the PCIe advantages actually makes me think it's a substantial difference and they're only downplaying it for their best buddy Intel.



If you demand respect or gratitude for your volunteer work, you're doing volunteering wrong.