
Forums - PC Discussion - Carzy Zarx’s PC Gaming Emporium - Catch Up on All the Latest PC Gaming Related News

Captain_Yuri said:

AMD Unveils Instinct MI300 APUs In MI300A & MI300X Flavors: CDNA 3 GPU, Up To 24 Zen 4 Cores, 192 GB HBM3, 153 Billion Transistors

https://wccftech.com/amd-unveils-instinct-mi300-apus-mi300a-mi300x-flavors-cdna-3-gpu-up-to-24-zen-4-cores-192-gb-hbm3-153-billion-transistors/

If you were watching the press conference, AMD's approach to comparing their products against Intel and Nvidia was quite fascinating. Against Intel, the Epyc pitch was all about performance numbers: AMD's Epyc CPUs are X times faster, X times more efficient, and so on. Against Nvidia, on the other hand, their spin was essentially: thanks to our chiplet design, we can add more cores and VRAM, which gives you a better total cost of ownership, since Nvidia only gives you 80GB on their Hopper GPUs while we are giving you 192GB. There were no performance comparisons at all, not even against Nvidia's previous-gen GPUs. Really goes to show the state of things.

I think that's likely because the thing is still far from ready. They compared the MI210/250s against the A100 with no problem, albeit only using the instructions that looked favorable to them, like high-precision floats, but still.

AMD claims it has '8 times the AI performance of the MI250'. That's of course a vague statement, but it obviously applies to low-precision instructions with sparsity that CDNA 2 can't do natively, hence the huge, if somewhat misleading, gains.

RTX 4090: 512 TCs x 2520 MHz x 1024 OPs = 1321 sparse INT8 TOPS

H100 SXM: 528 TCs x 1830 MHz x 2048 OPs = 1979 sparse INT8 TOPS

MI300: 'MI250X x 8' = either 1532 or 3064 sparse INT8 TOPS, depending on how exactly they are counting.

Mind, real-life models will be using instructions that deliver a quarter or so of these theoretical numbers. Anyway, I think both numbers make sense given the huge transistor count and the uncertain, but likely very high, TDP.
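The figures above are just execution units × clock × operations per clock. A minimal Python sanity check (the function name is illustrative; sparsity is already folded into the per-clock OPs as quoted):

```python
def peak_tops(units: int, clock_mhz: float, ops_per_clock: int) -> float:
    """Peak throughput in TOPS: units * clock * per-unit ops per clock."""
    return units * clock_mhz * 1e6 * ops_per_clock / 1e12

print(round(peak_tops(512, 2520, 1024)))  # RTX 4090 -> 1321
print(round(peak_tops(528, 1830, 2048)))  # H100 SXM -> 1979
```

Both figures match the quoted sparse INT8 numbers, so the disagreement over MI300 comes down entirely to which MI250X baseline the 'x 8' multiplier is applied to.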



haxxiy said:

I think that's likely because the thing is still far from ready. They compared the MI210/250s against the A100 with no problem, albeit only using the instructions that looked favorable to them, like high-precision floats, but still.

AMD claims it has '8 times the AI performance of the MI250'. That's of course a vague statement, but it obviously applies to low-precision instructions with sparsity that CDNA 2 can't do natively, hence the huge, if somewhat misleading, gains.

RTX 4090: 512 TCs x 2520 MHz x 1024 OPs = 1321 sparse INT8 TOPS

H100 SXM: 528 TCs x 1830 MHz x 2048 OPs = 1979 sparse INT8 TOPS

MI300: 'MI250X x 8' = either 1532 or 3064 sparse INT8 TOPS, depending on how exactly they are counting.

Mind, real-life models will be using instructions that deliver a quarter or so of these theoretical numbers. Anyway, I think both numbers make sense given the huge transistor count and the uncertain, but likely very high, TDP.

Well, Nvidia rates their H100 SXM at 3,958 TOPS with sparse INT8.

https://resources.nvidia.com/en-us-tensor-core/nvidia-tensor-core-gpu-datasheet

But yeah, I do agree that it will take some time for CDNA to be ready to compete against Nvidia. At least Epyc is slaughtering Intel in the meantime.



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

Captain_Yuri said:

Simulated Radeon RX 7800 XT GPU ends up 4% to 13% faster than RX 6800 XT

https://videocardz.com/newz/simulated-radeon-rx-7800-xt-gpu-ends-up-4-to-13-faster-than-rx-6800-xt

NVIDIA GeForce RTX 4060 8 GB Graphics Card Confirmed To Launch on 29th June For $299 US

https://wccftech.com/nvidia-geforce-rtx-4060-8-gb-graphics-card-confirmed-launch-29th-june-299-usd/

*slide*

Most likely it will be better to get the 3060 12GB for less, or the RDNA 2 offerings. I would focus on the "without frame gen" section, but even then, I'd take it with a grain of salt until real reviews come out.

I really hope the 7800XT ends up being faster than that, otherwise it will be a disappointment unless AMD prices it really well.

As for the 4060... yeah, I remember the slides Nvidia provided with the 4060Ti. Therefore, I have zero faith in that one.

Captain_Yuri said:

PCIe Express 7.0 Specifications Draft Released, 512 GB/s Speeds & Expected by 2027

https://wccftech.com/pcie-express-7-0-specifications-draft-released-512-gb-s-speeds-expected-by-2027/

Jeez, the advancements have been nuts.

I don't think we'll see that being used on desktop for a very long time. Gen 5 SSDs are expensive, run very hot, and only bring improvements in sequential tasks, and GPUs still haven't made the jump to Gen 5 yet.
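For reference, each PCIe generation roughly doubles per-lane bandwidth, which is where the 512 GB/s headline number comes from. A quick sketch, assuming the usual ~0.25 GB/s per lane for Gen 1 and ignoring encoding-overhead differences between generations (the function name is illustrative):

```python
def pcie_x16_gbs(gen: int) -> float:
    """Approximate PCIe x16 bandwidth in GB/s, one direction.
    Gen 1 is ~0.25 GB/s per lane; each generation doubles it."""
    per_lane = 0.25 * 2 ** (gen - 1)
    return per_lane * 16

print(pcie_x16_gbs(5))  # Gen 5 x16 -> 64.0 GB/s per direction
print(pcie_x16_gbs(7))  # Gen 7 x16 -> 256.0 GB/s, i.e. 512 GB/s bidirectional
```

So the 512 GB/s in the headline is the bidirectional x16 figure, consistent with the draft spec's doubling over Gen 6.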

Captain_Yuri said:

AMD Discloses Zen 4C Architecture Details: Same Design As Zen 4, 35% Smaller, 2x Density & Cores

https://wccftech.com/amd-zen-4c-architecture-detailed-same-design-as-zen-4-35-percent-smaller-2x-density-cores/

I'm actually surprised the only difference between the regular Zen 4 and Zen 4c is halving the L3 cache. It's nothing like the efficiency core I thought it would be.



Please excuse my bad English.

Currently gaming on a PC with an i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

Captain_Yuri said:


Well, Nvidia rates their H100 SXM at 3,958 TOPS with sparse INT8.

https://resources.nvidia.com/en-us-tensor-core/nvidia-tensor-core-gpu-datasheet

But yeah, I do agree that it will take some time for CDNA to be ready to compete against Nvidia. At least Epyc is slaughtering Intel in the meantime.

Oops, that's correct. I forgot Hopper has TMA, which allows it to do two 8-bit inferences simultaneously. CUDA can't do that (yet - it might be available later).

That being said, the power consumption is very high with TMA, to the point that it's not a bad idea to disregard it when normalizing against other GPGPUs.

As for AMD, assuming MI300's tensor performance is competitive, it'll likely still take at least a couple of years to build up some support around ROCm. Until then, Nvidia has a monopoly.

Last edited by haxxiy - on 14 June 2023


green_sky said:

What are some games you guys liked at this E3-that-wasn't-an-E3? Would be interesting to see what tickled your fancy.

I thought Xbox and Ubi had decent stuff. Life is busy, so none of the games are day 1 purchases, but there were some decent things.

I'm gonna cut down on PvP a bit. It eats too many brain cells, but when I have half an hour, I end up playing that, since it's hard to get into single-player mode.

I'll need some time to go through it all and come to some conclusions.

For starters, CGI or in-engine trailers that don't show gameplay or even tell us anything about the story of the game are meaningless to me.

I haven't played Cyberpunk, and so the DLC doesn't interest me. I'm not a fan of Bethesda so, while Starfield looks promising, I'm not sold on it (yet).

Cities: Skylines 2 could be tempting, even if I'm not in love with the new realistic style, but the amount of DLC the first one had (I didn't know how bad Paradox was with DLCs) and how much they changed the game makes me wary about it.

I would have loved Alan Wake 2 five years ago. As I said in the past, I got tired of waiting and moved on. I'll still check on it after it launches and goes on sale, of course, but it's low on my priority list.

AC Mirage looks ok and the Star Wars game, while a bit generic, is also promising (I hope the stealthy approach they showed in the gameplay goes deeper than that), but both are Ubisoft games and that makes them unpredictable.

The new Forza also looks great, but the gameplay I posted today makes me wonder if it will require a constant online connection, and also makes me worry about how much unnecessary stuff they're adding to it.

I still have a lot of videos to go through.




JEMC said:

I really hope the 7800XT ends up being faster than that, otherwise it will be a disappointment unless AMD prices it really well.

As for the 4060... yeah, I remember the slides Nvidia provided with the 4060Ti. Therefore, I have zero faith in that one.

It will be interesting to see what AMD's pitch will be to sell the 7800XT. The 6800XT can be had for $500, and I somehow doubt AMD will sell the 7800XT for $500. I think the lowest would be $600-$650, and against the 4070, it would be, let's say, 13% faster for more power usage while having 4 more GB of VRAM. But it does feel like buying a 6800XT for $500 would be the better buy, since you also get 16GB of VRAM and both have the same FSR support. The 7800XT really only has the DP2.1 + AV1 encoding advantages.
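For what it's worth, a rough perf-per-dollar check under those speculative numbers (the $600 price and the ~13% uplift over the $500 6800XT are the assumptions discussed above, not benchmarks, and the helper function is purely illustrative):

```python
# Hypothetical value comparison; prices and uplift are speculative
# figures from the discussion, not measured results.
def perf_per_dollar(relative_perf: float, price_usd: float) -> float:
    return relative_perf / price_usd

value_6800xt = perf_per_dollar(1.00, 500)  # baseline
value_7800xt = perf_per_dollar(1.13, 600)  # assumed +13% at $600

print(round(value_7800xt / value_6800xt, 2))  # 0.94 -> worse value at these prices
```

Under these assumptions the 7800XT would deliver about 6% less performance per dollar than the discounted 6800XT, which is the point being made.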



                  


This looks pretty decent too. Got to play the first one first. 



Captain_Yuri said:

It will be interesting to see what AMD's pitch will be to sell the 7800XT. The 6800XT can be had for $500, and I somehow doubt AMD will sell the 7800XT for $500. I think the lowest would be $600-$650, and against the 4070, it would be, let's say, 13% faster for more power usage while having 4 more GB of VRAM. But it does feel like buying a 6800XT for $500 would be the better buy, since you also get 16GB of VRAM and both have the same FSR support. The 7800XT really only has the DP2.1 + AV1 encoding advantages.

Well, there's something like a 30% difference between a 6800XT and a 7900XT, so AMD can't push the 7800XT too much or it will end up too close to its bigger sibling to make sense. And let's not forget that the 7900XT has dropped quite a bit in price since launch, putting AMD under even more pressure.

There's also another thing to note. We have the Navi 31 chip powering the two 7900 cards, and Navi 33 powering the 7600 (and likely the 7600XT). That leaves both the 7800 and 7700 series, but there's only Navi 32 left, so how will AMD do it? Will they use a further cut-back Navi 31 for the 7800XT and power the other three with Navi 32 chips? Or will AMD take a simpler approach and launch only two cards, the 7800XT and 7700XT, both powered by Navi 32?

Whatever happens, AMD is going to have a hard time.




JEMC said:

I really hope the 7800XT ends up being faster than that, otherwise it will be a disappointment unless AMD prices it really well.

As for the 4060... yeah, I remember the slides Nvidia provided with the 4060Ti. Therefore, I have zero faith in that one.

If it comes with just 70 CUs, which is 2 fewer than its predecessor, then I don't expect a big jump either, but I do expect a few more percentage points here and there.

Also, considering the disparity between 1080p and 4K is pretty big, I'm not sure I can really trust their methodology all that much. Those 4% at 1080p were enough to beat the 6900XT too, btw - PCGH has a 12% difference between the 6800XT and the 6950XT at 1080p, so having the 6900XT within 3% of the 6800XT seems very dubious to me.



Cyberpunk’s expansion totally overhauls the original game - CD Projekt Red explains how Phantom Liberty is rebuilding perks, skill trees, vehicle combat, AI, police and more

https://www.videogameschronicle.com/features/cyberpunks-expansion-totally-overhauls-the-original-game/

Well, if the overhaul is as extensive as they are claiming and the changes apply to the original game as well, I can certainly see why the CPU requirements have gone up drastically. But I also can't trust them to have great optimization after the initial launch of Cyberpunk. Hopefully the new and improved AI, police system and such deliver on the promises, and it might be worth replaying the original if what they say is legit.



                  
