
Carzy Zarx’s PC Gaming Emporium - Catch Up on All the Latest PC Gaming Related News

About a week ago there was an article at Videocardz about that rumor of a September launch for Ampere, with some sites even saying August: https://videocardz.com/newz/nvidia-geforce-rtx-30-series-expected-to-launch-in-september

It makes sense for September because that was when Cyberpunk 2077 was supposed to launch (after one delay and before another), and what better way to enjoy a new game that promises to tax your PC than with a new range of GPUs?

Also, that MI100 article... well, that will be the first card to use AMD's new CDNA architecture, which will be different from the RDNA architecture used for the gaming parts. Therefore, we shouldn't draw any conclusions about Navi based on it.

Also, from the Videocardz article:

The website further claims that the MI100 will feature 120 Compute Units. Assuming that the CDNA architecture features 64 Processors per Cluster, it would mean that the accelerator has 7680 cores in total (if each Compute Unit had 64 cores). We are intentionally not calling them Stream Processors, because we are unsure if this is the exact name for this architecture.

What does not make sense, however, is the 42 TFLOPs claim on the slide. This would make the MI100 more than twice as fast as NVIDIA's Ampere A100 (19.5 TFLOPs). To achieve 42 TF, it would require either 7680 cores running at 2.75 GHz or 15360 cores running at 1350 MHz. The latter would suggest that each CU contains 128 cores, not 64.
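
As a quick sanity check of that arithmetic (a rough sketch only, assuming peak FP32 throughput = cores x 2 FLOPs per clock for an FMA x clock speed; the core counts and clocks are the rumored figures from the quote, not confirmed specs):

def peak_fp32_tflops(cores, clock_ghz):
    # 2 FLOPs per core per clock assumes one fused multiply-add (FMA) each cycle
    return cores * 2 * clock_ghz / 1000.0

print(peak_fp32_tflops(7680, 2.75))   # ~42.2 TFLOPs
print(peak_fp32_tflops(15360, 1.35))  # ~41.5 TFLOPs (the 128-cores-per-CU scenario)

Either combination lands at roughly the 42 TF on the slide, which is why the core count vs. clock question matters.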



Please excuse my bad English.

Currently gaming on a PC with an i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.


i still think that, while DLSS is a wonderful thing when looking at screenshots and static images, it triggers me while moving



 "I think people should define the word crap" - Kirby007

Join the Prediction League http://www.vgchartz.com/predictions

Instead of seeking to convince others, we can be open to changing our own minds, and seek out information that contradicts our own steadfast point of view. Maybe it’ll turn out that those who disagree with you actually have a solid grasp of the facts. There’s a slight possibility that, after all, you’re the one who’s wrong.

NVIDIA RTX 3090 Will Allegedly Offer A Massive 50% Performance Increase. Salt plox

NVIDIA RTX 3090 will allegedly score close to 10000 points in Time Spy Extreme benchmark. Salt plox x2

https://wccftech.com/nvidia-rtx-3090-will-allegedly-offer-a-massive-50-performance-increase/
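
For context on where that ~50% figure comes from, some back-of-the-envelope arithmetic (the ~6,700 graphics score for a stock 2080 Ti in Time Spy Extreme is an assumed ballpark on my part, not a verified number):

print(10000 / 6700)  # ~1.49, i.e. roughly the claimed ~50% uplift over a 2080 Ti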



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

That's not too terrible.

Though we shouldn't forget that newer GPUs always show their biggest improvements in the most extreme scenarios. In moderate loads it's probably not much more than 30%.



If you demand respect or gratitude for your volunteer work, you're doing volunteering wrong.

kirby007 said:
i still think that, while DLSS is a wonderful thing when looking at screenshots and static images, it triggers me while moving

With more advanced algorithms, tensor cores and processing power... image quality should scale higher even in motion, with fewer artifacts. I would think Nvidia is working on a DLSS 2.0 successor.

And oh boy, the 3090 is gonna be a beast. Just like its pricing.

Last edited by hinch - on 30 July 2020

vivster said:

That's not too terrible.

Though we shouldn't forget that newer GPUs always show their biggest improvements in the most extreme scenarios. In moderate loads it's probably not much more than 30%.

And we should also remember that the 2080 Ti was actually CPU limited at anything lower than 4K by any processor except the 9900K, and the newer, more powerful cards could face the same problem, probably even worse.



Please excuse my bad English.

Currently gaming on a PC with an i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

Captain_Yuri said:
AMD Radeon Instinct MI100 ‘CDNA GPU’ Alleged Performance Numbers Show It’s Faster Than NVIDIA’s A100 in FP32 Compute, Impressive Perf/Value

https://wccftech.com/amd-radeon-instinct-mi100-cdna-gpu-alleged-performance-benchmarks-leak-faster-than-nvidia-ampere-a100/

CDNA is basically just an enhanced GCN part geared even more towards compute.
GCN was always amazing at compute operations.

Captain_Yuri said:

Looks to be the Crysis of this gen, jeez. Brings a 2080 Ti + 3900X to its knees at 4K. Overall it looks to be Intel + Nvidia favoured. This is still in beta, so there could be performance improvements on release. They will apparently add even more visual enhancements, such as ray tracing, and this is probably the first implementation of the "cloud" that actually makes sense. It is very scalable on both GPU and CPU though.

Steam page is also up:

https://store.steampowered.com/app/1250410/Microsoft_Flight_Simulator/

It made me moist, let's put it that way.

Literally "Petabytes" worth of texture and mesh data exists and is loaded in via the cloud... Compression can only take you so far when dealing with such expansive amounts of data. *looks at console SSDs*

JEMC said:

Also, that MI100 article... well, that will be the first card to use AMD's new CDNA architecture, which will be different from the RDNA architecture used for the gaming parts. Therefore, we shouldn't draw any conclusions about Navi based on it.

Also, from the Videocardz article:

The website further claims that the MI100 will feature 120 Compute Units. Assuming that the CDNA architecture features 64 Processors per Cluster, it would mean that the accelerator has 7680 cores in total (if each Compute Unit had 64 cores). We are intentionally not calling them Stream Processors, because we are unsure if this is the exact name for this architecture.

What does not make sense, however, is the 42 TFLOPs claim on the slide. This would make the MI100 more than twice as fast as NVIDIA's Ampere A100 (19.5 TFLOPs). To achieve 42 TF, it would require either 7680 cores running at 2.75 GHz or 15360 cores running at 1350 MHz. The latter would suggest that each CU contains 128 cores, not 64.

Or it has 240 Compute Units @ 64 stream processors each.

Or it could be none of the above.

192 CUs @ 1.72 GHz could get you there as well... That would essentially be 3x a Fury or Vega 64.

We need to remember that CDNA is ditching a lot of the rasterization baggage and dialing up the compute; that's a ton of freed-up transistors that can be dedicated to more shader cores.

This will be a big win for Cryptocurrency though.
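
For what it's worth, those alternative configurations land in the same ballpark under the same cores x 2 FLOPs x clock assumption (illustrative figures only, nothing confirmed):

print(240 * 64 * 2 * 1.35e9 / 1e12)  # 240 CUs x 64 SPs @ 1.35 GHz -> ~41.5 TFLOPs
print(192 * 64 * 2 * 1.72e9 / 1e12)  # 192 CUs x 64 SPs @ 1.72 GHz -> ~42.3 TFLOPs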

JEMC said:
vivster said:

That's not too terrible.

Though we shouldn't forget that newer GPUs will always have much higher improvements on the most extreme scenarios. In moderate loads it's probably not much higher than 30%.

And we should also remember that the 2080 Ti was actually CPU limited at anything lower than 4K by any processor except the 9900K, and the newer, more powerful cards could face the same problem, probably even worse.

Zen 3 is coming this year as well.






--::{PC Gaming Master Race}::--

Bofferbrauer2 said:
EricHiggin said:

It's just branding, but I like it. As soon as I saw it, big little came to mind. It should be memorable to consumers.

Though I'd say AMD's Ryzen and Epyc branding is still better and still feels new and lively.

What came to my mind was Gateway, not big.LITTLE. 

A little ya. I take it you mean like this one?



Pemalite said:
JEMC said:

Also, that MI100 article... well, that will be the first card to use AMD's new CDNA architecture, which will be different from the RDNA architecture used for the gaming parts. Therefore, we shouldn't draw any conclusions about Navi based on it.

Also, from the Videocardz article:

The website further claims that the MI100 will feature 120 Compute Units. Assuming that the CDNA architecture features 64 Processors per Cluster, it would mean that the accelerator has 7680 cores in total (if each Compute Unit had 64 cores). We are intentionally not calling them Stream Processors, because we are unsure if this is the exact name for this architecture.

What does not make sense, however, is the 42 TFLOPs claim on the slide. This would make the MI100 more than twice as fast as NVIDIA's Ampere A100 (19.5 TFLOPs). To achieve 42 TF, it would require either 7680 cores running at 2.75 GHz or 15360 cores running at 1350 MHz. The latter would suggest that each CU contains 128 cores, not 64.

Or it has 240 Compute Units @ 64 stream processors each.

Or it could be none of the above.

192 CUs @ 1.72 GHz could get you there as well... That would essentially be 3x a Fury or Vega 64.

We need to remember that CDNA is ditching a lot of the rasterization baggage and dialing up the compute; that's a ton of freed-up transistors that can be dedicated to more shader cores.

This will be a big win for Cryptocurrency though.

And, with all due respect, none of those extra options matter given that the original rumor from AdoredTV mentions 120 CUs.

Pemalite said:
JEMC said:

And we should also remember that the 2080 Ti was actually CPU limited at anything lower than 4K by any processor except the 9900K, and the newer, more powerful cards could face the same problem, probably even worse.

Zen 3 is coming this year as well.

We saw how games still preferred the faster-clocked Intel CPUs over the more capable (IPC-wise) Zen 2, so, despite all the changes the new CPUs will bring, like the unified L3, unless AMD manages to make the new Zen 3 processors at least 500 MHz faster than Zen 2, we'll still be in the same situation.

They'll be closer, but Intel will still win in games.
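
As a very rough way to frame that trade-off (purely hypothetical numbers, not benchmarks; per-core throughput here is modelled as just IPC x clock, which ignores memory latency and other factors that also favour Intel in games):

intel = 1.00 * 5.0   # baseline IPC x ~5.0 GHz gaming clock (hypothetical)
zen2  = 1.10 * 4.4   # ~10% higher IPC x ~4.4 GHz boost clock (hypothetical)
print(intel, zen2)   # 5.0 vs ~4.84 -> the clock deficit roughly cancels the IPC lead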



Please excuse my bad English.

Currently gaming on a PC with an i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

JEMC said:
Pemalite said:

Or it has 240 Compute Units @ 64 stream processors each.

Or it could be none of the above.

192 CUs @ 1.72 GHz could get you there as well... That would essentially be 3x a Fury or Vega 64.

We need to remember that CDNA is ditching a lot of the rasterization baggage and dialing up the compute; that's a ton of freed-up transistors that can be dedicated to more shader cores.

This will be a big win for Cryptocurrency though.

And, with all due respect, none of those extra options matter given that the original rumor from AdoredTV mentions 120 CUs.

That is precisely my point. AdoredTV is just rehashing a "rumor". It's no more or less credible than the arbitrary spec numbers I just vomited out.

JEMC said:
Pemalite said:

Zen 3 is coming this year as well.

We saw how games still preferred the faster-clocked Intel CPUs over the more capable (IPC-wise) Zen 2, so, despite all the changes the new CPUs will bring, like the unified L3, unless AMD manages to make the new Zen 3 processors at least 500 MHz faster than Zen 2, we'll still be in the same situation.

They'll be closer, but Intel will still win in games.

Part of that is due to Intel having dominated for so long; compilers and development pipelines were optimized for those particular architectures.

Don't expect much, if any, of a clock-rate boost for Zen 3; we aren't seeing dramatic transistor improvements, perhaps 10% at most.

Zen will definitely age better over the long term than the Intel equivalents, in large part due to how disruptive it's been to the gaming community.



--::{PC Gaming Master Race}::--