
Forums - PC Discussion - Carzy Zarx’s PC Gaming Emporium - Catch Up on All the Latest PC Gaming Related News

Chazore said:
Captain_Yuri said:
Think of it like this.

You look at the two consoles right now and see what they are doing. They are playing a game of chicken with pricing and pre-orders. Why is that? It's cause they have no confidence in their product vs the other. They are like, you go first, no you go first.

What did Nvidia do? They are like, come at me bro. Here's our product. Here's our price. Bring it.

Now the ball is in AMD's court. We knew for almost a god damn month when Ampere was gonna get announced. Did AMD say anything? Nope. Now that doesn't mean shit because the question now will be, will AMD say something? Let's say they are playing the smart game and waiting for the reviews to go out. Cool. But will they tease us in the meantime? Or something? That right there will tell me what AMD is feeling about RDNA 2 cause for Zen, they were all over Intel. They weren't quiet, they were effectively saying, you lazy idiots, you are about to get rekt.

Yeah, that's why I just don't think AMD is gonna bring it this time around. There are even people on Era who think AMD is gonna somehow trounce the 3080/3090 and come up with a superior version of DLSS (same thing going on in that PS5 or 3080 thread). All this tells me is that some people are sweating that AMD isn't talking its big game like it did with Ryzen.

Pretty much.

Personally, if AMD manages to win, that would be fap worthy for all of us. Cause think about what they have to do: they have to provide a card that is on par with a 3070 for less than $500, a 3080 for less than $700, and a 3090 for less than $1500. If it's the same price and performance, then Nvidia wins on drivers and features, as I highly doubt AMD is even thinking about a DLSS answer right now.

I wish, at the very least, they could tease us. Have an intern go into MS Word, type Big Navi in WordArt, and post it on Twitter. That's all anyone even needs.

Last edited by Jizz_Beard_thePirate - on 02 September 2020

                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

JEMC said:
EricHiggin said:

If Nvidia thinks the highest-performing Big Navi SKU will outperform the 3080 by a significant margin, that's why they would want to leave not just room but a huge price gap, so they can land wherever they need to when they respond with the 3080 Ti, probably sooner rather than later. Below that, there isn't as much wiggle room in terms of pricing.

Offer 3070 Ti and 3080 Ti performance for 3070 and 3080 pricing with Big Navi. That would certainly make things interesting. 3070- and 3080-like performance for $50-$100 less would be the next best thing.

After multiple gens of overpricing, Nvidia didn't just all of a sudden decide to be generous for no reason this gen. These prices should scream that worthy competition is coming. Below the 3090, anyway.

I'd be surprised if AMD manages not only to beat the 3080, but even to come in on par with it. I'm not saying it can't happen, but I'd certainly be cautious about that, especially given the latest rumors about Big Navi (from AMD being surprised by Ampere's performance jump, to Big Navi not being taped out until recently, meaning that all the previous rumors were untrue, to the latest kopite tweet comparing it to the GA104 of the 3070).

And when it comes to the price of the new cards, we also have to keep in mind that, because of COVID, the whole world is in the middle of an economic crisis and, as such, Nvidia can't charge as much as they want, because they'd risk losing sales from people not being able to afford the new cards.

Also, looks like I could be wrong about the 3070Ti... (see below)

vivster said:

Let's talk about CUDA cores. It looks like the seemingly massive increase in shader count isn't the true story, and neither are the TFLOPS. It has been noticed that performance of the new cards does not scale linearly with the core count, as it usually does.

So the deal is that Nvidia basically invented hyperthreading for shaders and is selling it as double the shader count, which I find incredibly misleading. Two calculations per clock in the same shader just don't scale as well as two separate shaders. Yet they also use that "doubled" shader count for the calculation of TFLOPS. That means that in real-world performance, Nvidia's shader count and TFLOPS are now worth less than they were with Turing, and probably even less than AMD's.
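The arithmetic behind that headline TFLOPS figure can be sketched quickly. A minimal check in Python, using the RTX 3080's published specs (8704 CUDA cores as marketed, i.e. already including the doubling, and a 1.71 GHz boost clock):

```python
# TFLOPS as Nvidia quotes it: marketed core count x 2 FMA ops per clock
# x boost clock. The 8704 figure already includes the doubled FP32 path.
cuda_cores = 8704          # RTX 3080 marketed CUDA core count
boost_clock_hz = 1.71e9    # RTX 3080 boost clock
tflops = cuda_cores * 2 * boost_clock_hz / 1e12
print(round(tflops, 1))    # ~29.8, matching the spec-sheet figure
```

The complaint above is about the core count, not the formula: because the marketed count itself was doubled, the headline TFLOPS doubles too, even though games don't scale the same way.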

But there is another theory I have I'd like some input on.

I believe it's possible that applications are not yet able to fully utilize the massively increased logical shader count, as you can only parallelize so much. Which is why I believe that performance on Ampere, and on any card that uses the new shaders, will slowly increase to close the efficiency gap over the next 5-10 years.

Where did you get that info about the "fake" shader count? Just curious; I'd like to read more about it, because videocardz has an article about Lenovo spoiling the existence of a 3070 Ti that says this:

NVIDIA GeForce RTX 3070 Ti spotted with 16GB GDDR6 memory https://videocardz.com/newz/nvidia-geforce-rtx-3070-ti-spotted-with-16gb-gddr6-memory

Interestingly, Lenovo also confirmed that their Legion T7 system will feature the GeForce RTX 3070 Ti model. This SKU has not been announced or even teased by NVIDIA in any form. Though, it aligns with the rumors that RTX 3070 series will be offered with both 8GB and 16GB memory. What remains unclear is whether the model is really called 3070 Ti or 3070 SUPER, we have heard both names in private conversations with AIBs.

(...)

There is, however, something to consider. NVIDIA clearly did not inform the partners with the full specifications until the very last moment. We have heard that the final BIOS for the Ampere series was provided only recently. The doubled FP32 SM (Cuda) count has also not been communicated clearly to partners until just a few days ago. Hence, some AIBs still list incorrect CUDA core counts (5248/4352/2944) on their websites. What this means is that Lenovo may still rely on old data, which could’ve changed over the past few days.

They seem to think that the shader core number is real.
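For what it's worth, the "incorrect" counts the article mentions are exactly half of the figures Nvidia announced. A quick sanity check, assuming the launch specs (3090: 10496, 3080: 8704, 3070: 5888):

```python
# Stale AIB-listed counts vs Nvidia's announced (doubled) CUDA core counts.
announced = {"3090": 10496, "3080": 8704, "3070": 5888}
aib_listed = {"3090": 5248, "3080": 4352, "3070": 2944}  # from the article
for card, old_count in aib_listed.items():
    # each stale listing is exactly the pre-doubling figure
    assert announced[card] == 2 * old_count
    print(f"{card}: {old_count} x 2 = {announced[card]}")
```

That halving is consistent with partners simply not having been told about the doubled FP32 datapath until late, as the article suggests.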

I wouldn't expect a Radeon 3080 (Ti) competitor to go blow for blow with its direct GeForce competition, but that's not to say it couldn't be better in some aspects while worse in others.

A 3070 Ti wouldn't be as warranted based on the pricing layout, but considering a Ti version is the norm, it's not entirely unexpected. Having a Ti/Super edition out sooner rather than later would make it even tougher on AMD though. This is perhaps a stronger indicator of where Nvidia thinks the top-tier Radeon cards will land, so flood those tiers with reasonably priced models, price-gapped for upselling at that, to try and keep people from buying AMD. If correct, this would mean AMD mostly has to rely on cheaper pricing if it wants to gain market share.

RTG's silence really does make me wonder, as explained below.

haxxiy said:
JEMC said:

It can also mean that the shaders aren't being fully used. Some years ago, I don't remember if it was with the Fury or the Vega cards, AMD had that problem: those cards had close to double the shaders of the regular, mainstream cards, but didn't offer twice the performance because the chip didn't scale well and not all the shaders could be kept busy. Something similar could have happened to Nvidia this time, only to a lesser extent.

Another option would be that the drivers still need to mature and can't yet make full use of the new hardware.

From the performance figures they've given, Ampere has 98% more flops per watt than Turing but only 21% more performance, on average. That means one needs about 1.6 Ampere flops to equal the performance of 1 Turing flop, and about 1.5 Ampere flops to equal 1 RDNA 1.0 flop.
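That ratio follows directly from the two quoted figures; a quick check (the rounded inputs come out at about 1.64):

```python
# Effective worth of an Ampere flop vs a Turing flop, derived from the
# quoted perf-per-watt figures (both relative to Turing = 1.0).
ampere_flops_per_watt = 1.98   # +98% flops per watt vs Turing
ampere_perf_per_watt = 1.21    # +21% performance per watt vs Turing
flops_per_turing_flop = ampere_flops_per_watt / ampere_perf_per_watt
print(round(flops_per_turing_flop, 2))  # ~1.64
```

In other words, if both gains are measured per watt, the flops advantage that doesn't show up as performance must be absorbed by each flop being worth less.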

It seems clear to me each shader was effectively cut in half before some architectural improvements, or perhaps it was the increased number of FP32 engines themselves that increased the performance relative to Turing.

With RDNA 2.0 apparently focusing on IPC, it would seem like Nvidia and AMD have more or less switched places concerning what their GPU design philosophies historically used to be. Ampere is very Terascale-like (lots of shaders, lower clocks and per-shader performance) while RDNA 2.0 is kind of Fermi-like (higher clocks and IPC but fewer shaders).

An Ampere CUDA core also has some similarities with Bulldozer modules, in that a second unit (integer in the case of Bulldozer, floating point in the case of Ampere) was added to each processing core to increase performance, and also to make it into those PR slides with twice the number of cores.

So, I don't think it's feasible to expect there's more performance left in future drivers (the same way that magical expectation wasn't feasible with Terascale or GCN).

Didn't AMD get sued over this not all that long ago, for marketing more CPU cores than those chips 'legitimately had'?

Captain_Yuri said:
Think of it like this.

You look at the two consoles right now and see what they are doing. They are playing a game of chicken with pricing and pre-orders. Why is that? It's cause they have no confidence in their product vs the other. They are like, you go first, no you go first.

What did Nvidia do? They are like, come at me bro. Here's our product. Here's our price. Bring it.

Now the ball is on AMD's court. We knew almost for a god damn month when Ampere was gonna get announced. Did AMD say anything? Nope. Now that doesn't mean shit because the question now will be, will AMD say something? Let's say they are playing the smart game and waiting for the reviews to go out. Cool. But will they tease us in the mean time? Or something? That right there will tell me what AMD is feeling about RDNA 2 cause for Zen, they were all over Intel. They weren't quiet, they were effectively saying, you lazy idiots, you are about to get rekt.

The console market is a bit different. SNY doesn't have the same general mind share that Nvidia does. If XBSX is the right price, along with Lockhart if it's coming, it will be more of a problem for SNY than if Radeon (6000?) can compete with GeForce 3000. Nvidia may lose some ground but it won't be that worrisome. That's at least partially why they went ahead. If AMD can beat them on performance and/or price, there isn't that much Nvidia can do about it, so getting out ahead and selling as many cards as possible before Big Navi gets announced wouldn't be the worst idea.

MS did the same thing with XBSX. Come at us bro. Well SNY did, and PS5 quickly became the heavy favorite, so. Consoles and PC's are different beasts though.

I wouldn't look at it as AMD as a whole here, because AMD tends to go more with the CPU's, and RTG with GPU's, though RTG isn't as commonly used.

AMD was super loud with Zen, but it's RTG that you should be focusing on. RTG has been marketing up a storm for years now, going back to Polaris, but unfortunately has been overhyping its products. Polaris wasn't quite up to par though fairly decent, but everything since has been a letdown of sorts. RTG has been screaming from the rooftops about a storm coming that never quite hits, or completely misses. Usually well in advance, at that.

So why oh why have they been so silent all of a sudden? Everyone is seeing clouds forming and the skies darkening, so what gives? Is Big Navi even worse off than Vega, but they've learned by now and are trying to sweep it under the rug? Or perhaps they're building a well armed horde, planning a 'surprise' attack on the king?

AMD knows Nvidia hasn't gotten sloppy like Intel. They may have gotten greedier, but they aren't lacking with their tech. AMD/RTG has to be much smarter about dealing with Nvidia than they were with Intel if they want to make up any ground in the GPU market.



green_sky said:
Pemalite said:

There are advantages of having games @4k.
On console they Super-sample down to 1080P which provides a very clean and crisp image even at 1080P.

Thanks. Pemalite, always with the good wisdom.

My hardware focus is more on a PC monitor. A lot of stuff is sold out right now. I'm looking at a 144 Hz IPS or VA monitor. :)

Did a Ryzen 3600X build; still using my old GPU (RX 580), and it does well for the games I play.

I have always been a bit of an IPS fanatic, going back to the Dell U2711 27" 1440P IPS panels ~10 years ago, when I was running 3x of those panels in Eyefinity for a 7680x1440 resolution. Crysis was amazing on that.

Today I am running a single LG 32" VA, 1440P, 144Hz panel... and I'm officially a convert.
I was always worried about "motion blur", which is a very typical characteristic of VA panels, but the higher refresh rates do mitigate that significantly, unless you scale back your display's response time.

The black levels of VA are generally a step up over IPS, very inky blacks, and good uniformity. And I don't seem to notice anything like the typical IPS glow that is very common in e-IPS panels; there is some "clouding", it's just not as much of an issue.

At 1440P the RX 580 can still hold its own; get a panel with FreeSync so a drop under 144fps isn't going to be a pain in the ass to manage.

To me, 1440P, 120Hz+ and FreeSync are the minimum for a decent panel in 2020 for any PC gamer; there's no reason not to have that.



--::{PC Gaming Master Race}::--

EricHiggin said:
Captain_Yuri said:
Think of it like this.

You look at the two consoles right now and see what they are doing. They are playing a game of chicken with pricing and pre-orders. Why is that? It's cause they have no confidence in their product vs the other. They are like, you go first, no you go first.

What did Nvidia do? They are like, come at me bro. Here's our product. Here's our price. Bring it.

Now the ball is in AMD's court. We knew for almost a god damn month when Ampere was gonna get announced. Did AMD say anything? Nope. Now that doesn't mean shit because the question now will be, will AMD say something? Let's say they are playing the smart game and waiting for the reviews to go out. Cool. But will they tease us in the meantime? Or something? That right there will tell me what AMD is feeling about RDNA 2 cause for Zen, they were all over Intel. They weren't quiet, they were effectively saying, you lazy idiots, you are about to get rekt.

The console market is a bit different. SNY doesn't have the same general mind share that Nvidia does. If XBSX is the right price, along with Lockhart if it's coming, it will be more of a problem for SNY than if Radeon (6000?) can compete with GeForce 3000. Nvidia may lose some ground but it won't be that worrisome. That's at least partially why they went ahead. If AMD can beat them on performance and/or price, there isn't that much Nvidia can do about it, so getting out ahead and selling as many cards as possible before Big Navi gets announced wouldn't be the worst idea.

MS did the same thing with XBSX. Come at us bro. Well SNY did, and PS5 quickly became the heavy favorite, so. Consoles and PC's are different beasts though.

I wouldn't look at it as AMD as a whole here, because AMD tends to go more with the CPU's, and RTG with GPU's, though RTG isn't as commonly used.

AMD was super loud with Zen, but it's RTG that you should be focusing on. RTG has been marketing up a storm for years now, going back to Polaris, but unfortunately has been overhyping its products. Polaris wasn't quite up to par though fairly decent, but everything since has been a letdown of sorts. RTG has been screaming from the rooftops about a storm coming that never quite hits, or completely misses. Usually well in advance, at that.

So why oh why have they been so silent all of a sudden? Everyone is seeing clouds forming and the skies darkening, so what gives? Is Big Navi even worse off than Vega, but they've learned by now and are trying to sweep it under the rug? Or perhaps they're building a well armed horde, planning a 'surprise' attack on the king?

AMD knows Nvidia hasn't gotten sloppy like Intel. They may have gotten greedier, but they aren't lacking with their tech. AMD/RTG has to be much smarter about dealing with Nvidia than they were with Intel if they want to make up any ground in the GPU market.

Yeah, that's true. I do hope RTG brings it with RDNA 2, cause it's partly due to them and the consoles that we can get prices like this. All around the internet there's so much excitement because of Ampere's pricing, and I do believe it's because of the threat of RDNA 2. While I don't think they will be able to beat the 3080 in rasterization + ray tracing, hopefully they can be very competitive.

Here's hoping we get some announcements soon!



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

Captain_Yuri said:
EricHiggin said:

The console market is a bit different. SNY doesn't have the same general mind share that Nvidia does. If XBSX is the right price, along with Lockhart if it's coming, it will be more of a problem for SNY than if Radeon (6000?) can compete with GeForce 3000. Nvidia may lose some ground but it won't be that worrisome. That's at least partially why they went ahead. If AMD can beat them on performance and/or price, there isn't that much Nvidia can do about it, so getting out ahead and selling as many cards as possible before Big Navi gets announced wouldn't be the worst idea.

MS did the same thing with XBSX. Come at us bro. Well SNY did, and PS5 quickly became the heavy favorite, so. Consoles and PC's are different beasts though.

I wouldn't look at it as AMD as a whole here, because AMD tends to go more with the CPU's, and RTG with GPU's, though RTG isn't as commonly used.

AMD was super loud with Zen, but it's RTG that you should be focusing on. RTG has been marketing up a storm for years now, going back to Polaris, but unfortunately has been overhyping its products. Polaris wasn't quite up to par though fairly decent, but everything since has been a letdown of sorts. RTG has been screaming from the rooftops about a storm coming that never quite hits, or completely misses. Usually well in advance, at that.

So why oh why have they been so silent all of a sudden? Everyone is seeing clouds forming and the skies darkening, so what gives? Is Big Navi even worse off than Vega, but they've learned by now and are trying to sweep it under the rug? Or perhaps they're building a well armed horde, planning a 'surprise' attack on the king?

AMD knows Nvidia hasn't gotten sloppy like Intel. They may have gotten greedier, but they aren't lacking with their tech. AMD/RTG has to be much smarter about dealing with Nvidia than they were with Intel if they want to make up any ground in the GPU market.

Yeah, that's true. I do hope RTG brings it with RDNA 2, cause it's partly due to them and the consoles that we can get prices like this. All around the internet there's so much excitement because of Ampere's pricing, and I do believe it's because of the threat of RDNA 2. While I don't think they will be able to beat the 3080 in rasterization + ray tracing, hopefully they can be very competitive.

Here's hoping we get some announcements soon!

The 3000 series is impressive so far, no doubt about it, but more hands-on time is needed to really know if it's everything it seems to be. I'm excited that Nvidia has shot for the moon with performance but is coming back down to Earth with their pricing.

I also hope that if this isn't just due to the consoles, and has something to do with Big Navi, that RTG sees a worthy consumer response. I fear that if Big Navi shows up to the party this time but still can't get enough attention, they will just throw in the towel, sober up, and let things go back to the way they have been for a couple of generations. I'd have a hard time blaming them as well, if Big Navi can hold its booze and give Jensen a run for his money at beer pong.

I also don't see AMD/RTG saying anything for a while yet, unless they're also ready to launch soon. Under Lisa they've been much more professional and classy, so I don't see them raining on Nvidia's parade, even if they could.




At this point we'll have to desperately hope that Big Navi is good, so that there are more Nvidia cards left to go around for the rest of us.



If you demand respect or gratitude for your volunteer work, you're doing volunteering wrong.

Some people are already starting to question the 3080's VRAM size, asking for more. I feel like some of them are treating VRAM as if it's the end-all be-all, just like when MS was touting the XSX's VRAM and the little squabbles began; then, sometime after, MS stopped touting VRAM.

Yes, I know it's 2GB less than my 1080 Ti, but it's using a new memory architecture and has already been shown to be more efficient anyway. I'm planning to game at 1440p until 4K is an utter breeze (and texture quality for literally every object in every new game actually catches up, because I'm fed up with character models being 4K res but rocks/objects being far less). Until then, 10GB for the 3080 will be fine for me, unless Nvidia offers a 3080 Ti for not that much more in price (which is highly doubtful; it wouldn't be worth me paying more than £100 extra, because that's my absolute limit).



Step right up come on in, feel the buzz in your veins, I'm like an chemical electrical right into your brain and I'm the one who killed the Radio, soon you'll all see

So pay up motherfuckers you belong to "V"

I'm just gonna hope that the 3090 is not gonna be eclipsed too soon. Gimme at least 2 years at the top :(



If you demand respect or gratitude for your volunteer work, you're doing volunteering wrong.




Captain_Yuri said:
Exclusive: GeForce RTX 30 series cards tight supply until end of year

https://www.tweaktown.com/news/74915/exclusive-geforce-rtx-30-series-cards-tight-supply-until-end-of-year/index.html?utm_source=dlvr.it&utm_medium=twitter&utm_campaign=tweaktown

Not really surprising, given that it happens with every GPU launch.

EricHiggin said:
JEMC said:

I'd be surprised if AMD manages not only to beat the 3080, but even to come in on par with it. I'm not saying it can't happen, but I'd certainly be cautious about that, especially given the latest rumors about Big Navi (from AMD being surprised by Ampere's performance jump, to Big Navi not being taped out until recently, meaning that all the previous rumors were untrue, to the latest kopite tweet comparing it to the GA104 of the 3070).

And when it comes to the price of the new cards, we also have to keep in mind that, because of COVID, the whole world is in the middle of an economic crisis and, as such, Nvidia can't charge as much as they want, because they'd risk losing sales from people not being able to afford the new cards.

Also, looks like I could be wrong about the 3070Ti... (see below)

Where do you get that info about the "fake" shader count? Just curious, I'd want to read more about it because videocardz has an article about Lenovo spoiling the existence of a 3070Ti and says this:

NVIDIA GeForce RTX 3070 Ti spotted with 16GB GDDR6 memory https://videocardz.com/newz/nvidia-geforce-rtx-3070-ti-spotted-with-16gb-gddr6-memory

Interestingly, Lenovo also confirmed that their Legion T7 system will feature the GeForce RTX 3070 Ti model. This SKU has not been announced or even teased by NVIDIA in any form. Though, it aligns with the rumors that RTX 3070 series will be offered with both 8GB and 16GB memory. What remains unclear is whether the model is really called 3070 Ti or 3070 SUPER, we have heard both names in private conversations with AIBs.

(...)

There is, however, something to consider. NVIDIA clearly did not inform the partners with the full specifications until the very last moment. We have heard that the final BIOS for the Ampere series was provided only recently. The doubled FP32 SM (Cuda) count has also not been communicated clearly to partners until just a few days ago. Hence, some AIBs still list incorrect CUDA core counts (5248/4352/2944) on their websites. What this means is that Lenovo may still rely on old data, which could’ve changed over the past few days.

They seem to think that the shader core number is real.

I wouldn't expect a Radeon 3080 (Ti) competitor to go blow for blow with its direct GeForce competition, but that's not to say it couldn't be better in some aspects while worse in others.

A 3070 Ti wouldn't be as warranted based on the pricing layout, but considering a Ti version is the norm, it's not entirely unexpected. Having a Ti/Super edition out sooner rather than later would make it even tougher on AMD though. This is perhaps a stronger indicator of where Nvidia thinks the top-tier Radeon cards will land, so flood those tiers with reasonably priced models, price-gapped for upselling at that, to try and keep people from buying AMD. If correct, this would mean AMD mostly has to rely on cheaper pricing if it wants to gain market share.

RTG's silence really does make me wonder, as explained below.

So you're like haxxiy and think that AMD could potentially beat Nvidia's Ampere in pure rasterization but lose in RT and the like. We'll see.

In any case, in order to put up a fight, AMD needs to be competitive in both performance and price. The 5700 series was very competitive against Nvidia's cards but, given that they were priced very close, Nvidia still managed to sell more units because of brand name.

EricHiggin said:
haxxiy said:

From the performance figures they've given, Ampere has 98% more flops per watt than Turing but only 21% more performance, on average. That means one needs about 1.6 Ampere flops to equal the performance of 1 Turing flop, and about 1.5 Ampere flops to equal 1 RDNA 1.0 flop.

It seems clear to me each shader was effectively cut in half before some architectural improvements, or perhaps it was the increased number of FP32 engines themselves that increased the performance relative to Turing.

With RDNA 2.0 apparently focusing on IPC, it would seem like Nvidia and AMD have more or less switched places concerning what their GPU design philosophies historically used to be. Ampere is very Terascale-like (lots of shaders, lower clocks and per-shader performance) while RDNA 2.0 is kind of Fermi-like (higher clocks and IPC but fewer shaders).

An Ampere CUDA core also has some similarities with Bulldozer modules, in that a second unit (integer in the case of Bulldozer, floating point in the case of Ampere) was added to each processing core to increase performance, and also to make it into those PR slides with twice the number of cores.

So, I don't think it's feasible to expect there's more performance left in future drivers (the same way that magical expectation wasn't feasible with Terascale or GCN).

Didn't AMD get sued over this not all that long ago, for marketing more CPU cores than those chips 'legitimately had'?

Yeah, they got sued for Bulldozer and had to pay 12 million dollars: https://www.anandtech.com/show/14804/amd-settlement



Please excuse my bad English.

Currently gaming on a PC with an i5-4670k@stock (for now), 16GB RAM 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.