
Forums - PC Discussion - Carzy Zarx’s PC Gaming Emporium - Catch Up on All the Latest PC Gaming Related News

Captain_Yuri said:

haxxiy said:

Well, a 13600K is competitive in gaming and faster in multicore than a 7700X. Even during the recent sales, the lowest price I saw the 7700X go for was $348, while the retail price of the 13600K without any discounts is $330. If you add V-Cache to the 7700X, it will be faster in gaming but also more expensive, while still being behind in multicore. If you add more CCDs and more V-Cache to fix both, the CPU will be a lot more expensive, to the point of being in a different price class. So for Intel, the hybrid approach looks to be more effective than CCDs when it comes to price to performance.

AFAIK, that's a 257 mm2 monolithic chip vs. a 72 mm2 CCD + 125 mm2 I/O. The die yield would be >70% greater for the single CCD chips and >30% greater for double CCDs if they have comparable defect rates. So the Raptor Lake chip used in the 13600 and above is more comparable with the double CCD Zen 4 chips in terms of manufacturing costs, even assuming a 50% premium on the 5 nm node.

Someone was/is either overpricing or playing very aggressively here...
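The yield comparison above can be sketched with the classic Poisson die-yield model. The defect density below (0.3 defects/cm², assumed equal for both nodes) is an illustrative figure, not a number from the post:

```python
import math

# Classic Poisson die-yield model: Y = exp(-D0 * A).
D0 = 0.3  # assumed defects per cm^2, equal for both nodes (illustrative)

def die_yield(area_mm2: float, d0: float = D0) -> float:
    """Fraction of dies with zero defects (Poisson model)."""
    return math.exp(-d0 * area_mm2 / 100.0)  # convert mm^2 -> cm^2

mono = die_yield(257)   # Raptor Lake, monolithic 257 mm^2
ccd = die_yield(72)     # one 72 mm^2 Zen 4 CCD
dual = ccd ** 2         # two good CCDs needed (ignores harvesting/binning)

print(f"257 mm^2 monolithic yield: {mono:.2f}")
print(f"72 mm^2 CCD yield:         {ccd:.2f}")
print(f"Single-CCD advantage: {ccd / mono - 1:.0%}")   # > 70%
print(f"Dual-CCD advantage:   {dual / mono - 1:.0%}")  # > 30%
```

With that assumed defect density the single-CCD yield advantage lands around 74% and the dual-CCD advantage around 40%, in line with the ">70%" and ">30%" figures in the post; the exact numbers move with whatever defect density you plug in.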

haxxiy said:
Captain_Yuri said:

Well, a 13600K is competitive in gaming and faster in multicore than a 7700X. Even during the recent sales, the lowest price I saw the 7700X go for was $348, while the retail price of the 13600K without any discounts is $330. If you add V-Cache to the 7700X, it will be faster in gaming but also more expensive, while still being behind in multicore. If you add more CCDs and more V-Cache to fix both, the CPU will be a lot more expensive, to the point of being in a different price class. So for Intel, the hybrid approach looks to be more effective than CCDs when it comes to price to performance.

AFAIK, that's a 257 mm2 monolithic chip vs. a 72 mm2 CCD + 125 mm2 I/O. The die yield would be >70% greater for the single CCD chips and >30% greater for double CCDs if they have comparable defect rates. So the Raptor Lake chip used in the 13600 and above is more comparable with the double CCD Zen 4 chips in terms of manufacturing costs, even assuming a 50% premium on the 5 nm node.

Someone was/is either overpricing or playing very aggressively here...

Unless the small cores are very cheap to produce because they don't need to clock high or support Hyper-Threading, whereas AMD needs to make sure all of its performance cores meet the higher clock specifications and support SMT.



                  

PC Specs: CPU: 7800X3D || GPU: Strix 4090 || RAM: 32GB DDR5 6000 || Main SSD: WD 2TB SN850

haxxiy said:
Captain_Yuri said:

Well, a 13600K is competitive in gaming and faster in multicore than a 7700X. Even during the recent sales, the lowest price I saw the 7700X go for was $348, while the retail price of the 13600K without any discounts is $330. If you add V-Cache to the 7700X, it will be faster in gaming but also more expensive, while still being behind in multicore. If you add more CCDs and more V-Cache to fix both, the CPU will be a lot more expensive, to the point of being in a different price class. So for Intel, the hybrid approach looks to be more effective than CCDs when it comes to price to performance.

AFAIK, that's a 257 mm2 monolithic chip vs. a 72 mm2 CCD + 125 mm2 I/O. The die yield would be >70% greater for the single CCD chips and >30% greater for double CCDs if they have comparable defect rates. So the Raptor Lake chip used in the 13600 and above is more comparable with the double CCD Zen 4 chips in terms of manufacturing costs, even assuming a 50% premium on the 5 nm node.

Someone was/is either overpricing or playing very aggressively here...

That someone is most probably TSMC - and AMD, as they still want to make money from those chips.

Intel, meanwhile, is using a process family that has been in production since Cannon Lake, so the costs are negligible compared to 5 nm at TSMC.

haxxiy said:
JEMC said:

Meanwhile, AMD still has only 4 Zen 4 CPUs on the market and no signs of new lower-end parts, handing Intel the huge pool of systems AMD lived off for years.
Intel has come back, and AMD doesn't seem able to respond. Let's hope the rumors about Ryzen 8/9000 featuring hybrid cores are true so they can get on par with Intel on that front, and that they finally move their asses and start designing CPUs for the whole market, top to bottom, including (not poorly gimped) low-end parts.

Is it worth it, though?

For that extra die space, you might as well just toss another entire CCD (~72 mm2) into the package and get the performance of full cores, especially considering the efficiency gap is so large (a 13600K consumes more power than a 5600X and a 7600X put together!).

A bit over 3 Gracemont cores fit in the area of one Raptor Cove core. Considering the latter has SMT, the gain in threads per unit of die space is actually pretty small. And if you compare the performance of both, you probably end up about even for the die space taken. In other words, you gain practically nothing in overall throughput: some programs run well on Gracemont, others don't, so it evens out.
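A quick back-of-the-envelope version of that threads-per-area argument. The 3.2x area ratio follows the post's "a bit over 3"; the throughput factors (0.45 per Gracemont thread, 1.3x total for a Raptor Cove core with SMT) are assumed illustrative values, not measurements from the thread:

```python
GRACEMONT_PER_RC = 3.2   # "a bit over 3" E-cores per Raptor Cove footprint
RC_THREADS = 2           # Raptor Cove runs two SMT threads

# Threads per Raptor-Cove-sized unit of die area:
e_threads_per_area = GRACEMONT_PER_RC * 1   # 3.2
p_threads_per_area = RC_THREADS             # 2.0

# Assumed throughput factors (illustrative): a lone Raptor Cove
# thread = 1.0, SMT adds ~30% total, and one Gracemont thread
# delivers ~0.45 of a lone Raptor Cove thread.
rc_throughput_per_area = 1.3
e_throughput_per_area = 0.45 * GRACEMONT_PER_RC  # ~1.44

# The two land close together, matching the "about even per die
# area" observation; real workloads swing the result either way.
print(rc_throughput_per_area, e_throughput_per_area)
```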

The problem with Raptor Cove, however, is power consumption. Imagine 16 Raptor Cove cores at 6 GHz: that chip would be pulling 500 W! This is why Intel needs hybrid cores and why AMD so far doesn't, as they could and can (Eco Mode, in the absence of non-X chips) run their 16 cores much more efficiently.
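The power point can be sketched with the usual dynamic-power rule P ∝ C·V²·f: since voltage has to rise roughly with frequency near the top of the V/f curve, per-core power grows roughly with f³. The 25 W per core at 5.1 GHz baseline below is an assumed illustrative figure, not a measured one:

```python
def scaled_power(p0_watts: float, f0_ghz: float, f_ghz: float) -> float:
    """Scale per-core power assuming P grows with f^3 (V rises with f)."""
    return p0_watts * (f_ghz / f0_ghz) ** 3

# Assumed baseline: one Raptor Cove core at 5.1 GHz drawing 25 W.
per_core = scaled_power(25.0, 5.1, 6.0)
total = 16 * per_core
print(f"16 P-cores at 6.0 GHz: ~{total:.0f} W")
```

Even with these rough assumptions the total lands in the same several-hundred-watt territory as the 500 W figure above, which is the point: an all-P-core chip at those clocks isn't workable.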

As for AMD working on a big.LITTLE design, I doubt it will come to consumer hardware. It looks to me like this is the blueprint for Zen 4c and Zen 5c, aimed at workstations and servers, which need high core counts but less per-core performance. AMD has already stated that those will not come to consumer hardware (save maybe Threadripper), so I wouldn't hold my breath for AMD adopting this technology anytime soon. Plus, it would drain AMD's already limited resources, since you would now need to develop and maintain two CPU architectures instead of one.



Here's a bit of something to keep you busy:

*** NEW CONTEST ***

Wccftech Gives Away Plenty of 2K’s Video Games (for PC)
https://wccftech.com/wccftech-gives-away-plenty-of-2k-video-games-for-pc/

Today we're kicking off a special Holiday giveaway raffle featuring two PC codes for each of the following 2K video games:

  • New Tales from the Borderlands Standard Edition
  • The Quarry Deluxe Edition
  • Tiny Tina's Wonderlands Chaotic Great Edition
  • NBA 2K23 Jordan Edition
  • WWE 2K22 nWo Edition
  • PGA Tour 2K23 Deluxe Edition

There's no mention of country restrictions or anything like that. Also, while the article says that the contest will run until December 4th (which is impossible, since it launched yesterday), it will last another 11 days, with the winners announced Saturday the 24th.

Also, before the official 7900 reviews go live later today, here's some leaks and info:

First Alleged AMD Radeon RX 7900-series Benchmarks Leaked
https://www.techpowerup.com/301992/first-alleged-amd-radeon-rx-7900-series-benchmarks-leaked
With only a couple of days to go until the AMD RX 7900-series benchmarks go live, some alleged benchmarks of both the RX 7900 XTX and RX 7900 XT have leaked on Twitter. The two cards are compared to an NVIDIA RTX 4080 in no less than seven game titles, all running at 4K resolution. The games are God of War, Cyberpunk 2077, Assassin's Creed Valhalla, Watch Dogs: Legion, Red Dead Redemption 2, Doom Eternal and Horizon Zero Dawn. The cards were tested on a system with a Core i9-12900K CPU paired with 32 GB of RAM of unknown type.

Update Dec 11th: The original tweet has been removed, for unknown reasons. It could be because the numbers were fake, or because they were in breach of AMD's NDA.

AMD Allegedly Has 200,000 Radeon RX 7900 Series GPUs for Launch Day
https://www.techpowerup.com/302067/amd-allegedly-has-200-000-radeon-rx-7900-series-gpus-for-launch-day
AMD is preparing the launch of the Radeon RX 7900 series of graphics cards for December 13th. And, of course, with recent launches being coated in uncertainty regarding availability, we are getting more rumors about what the availability could look like. According to Kyle Bennett, founder of HardOCP, we have information that AMD is allegedly preparing 200,000 Radeon RX 7900 SKUs for launch day. If the information is truthful, among the 200,000 launch-day SKUs, there should be 30,000 Made-by-AMD (MBA) cards, while the rest are AIB partner cards. This number indicates that AMD's market research has shown that there will be a great demand for these new GPUs and that the scarcity problem should be long gone.

A few days ago, we reported that the availability of the new AMD Radeon generation is reportedly scarce, with Germany receiving only 3,000 MBA designs and the rest of the EMEA region getting only 7,000 MBA SKUs as well. With today's rumor going around, we would like to know if this is correct and if more SKUs will circulate. America's region could receive most of the MBA designs, and AIB partners will take care of other regions. Of course, we must wait for tomorrow's launch and see how AMD plans to execute its strategy.



Please excuse my bad English.

Currently gaming on a PC with an i5-4670k@stock (for now), 16Gb RAM 1600 MHz and a GTX 1070

Steam / Live / NNID : jonxiquet    Add me if you want, but I'm a single player gamer.

Oh ya, forgot to mention, I started playing Final Fantasy VI (the mobile remake) on Steam.

I'm liking the game so far except for one thing....the Battle UI is atrocious lol.

They should have kept the UI that FFV mobile remake had.




Radeon 7900XTX/XT Reviews:

https://www.techspot.com/review/2588-amd-radeon-7900-xtx/

https://www.techpowerup.com/review/amd-radeon-rx-7900-xtx/32.html

AMD fucked up. They fucked up badly.

Last edited by Jizz_Beard_thePirate - on 12 December 2022


I saw Linus's and Hardware Unboxed's reviews. And yeah, the high-end 7900 cards are just okay in raster and underwhelming in everything else: around a 4080 in raster and a 3090 in RT, as expected.

The power consumption is a fail, though. In LTT's review, they showed a comparison of idle power draw that's drastically worse than Nvidia: around 15 W for the 4080 vs. 50 W+ for the 7900 XTX, and it scales badly with different monitor setups, going up to something like 170 W (lol wtf). Add to that that the AMD reference models' fans never spin down and are always on. Meh release tbh.



Muh efficiency

Absolute shit of an architecture




So, the 7900 XTX is about 4080-level in raster performance, which is good, especially at a clear price advantage. But RT performance is about one generation behind, which was also expected. At the enthusiast level, RT performance is very important, so I don't think this card will damage Nvidia as much as some might think. I'm also disappointed with the card's power efficiency; this was touted as perhaps its greatest strength in this battle.
All in all, I'm still rather impressed, and glad to see AMD once more being an actual option in this upper-range space. Here's hoping this all forces Nvidia's hand on prices. I think I'll still go with the green team for my new rig, though; I want that RT performance if I'm to shell out 4-5k on a new build.



Mummelmann said:

So, the 7900 XTX is about 4080-level in raster performance, which is good, especially at a clear price advantage. But RT performance is about one generation behind, which was also expected. At the enthusiast level, RT performance is very important, so I don't think this card will damage Nvidia as much as some might think. I'm also disappointed with the card's power efficiency; this was touted as perhaps its greatest strength in this battle.
All in all, I'm still rather impressed, and glad to see AMD once more being an actual option in this upper-range space. Here's hoping this all forces Nvidia's hand on prices. I think I'll still go with the green team for my new rig, though; I want that RT performance if I'm to shell out 4-5k on a new build.

Well, the issue is that the 4080 uses the AD103 die this time around, so AMD isn't actually competing against the real upper range, which is AD102. Last gen, the 6900 XT got very close to the full-fat 3090 and its GA102. So AMD has actually regressed against Nvidia's competition gen on gen.


