haxxiy said:
Captain_Yuri said:

Well, a 13600k is competitive in gaming and faster in multicore than a 7700X. Even during the recent sales, the lowest I saw the 7700X go for was $348, while the retail price of the 13600k without any discounts is $330. If you add V-Cache to the 7700X, it will be faster in gaming but also more expensive, while still being behind in multicore. If you fix both by adding more CCDs plus V-Cache, the CPU becomes so much more expensive that it lands in a different price class. So for Intel, the hybrid approach looks to be more effective than CCDs when it comes to price to performance.

AFAIK, that's a 257 mm² monolithic chip vs. a 72 mm² CCD + 125 mm² I/O die. The die yield would be >70% greater for the single-CCD chips and >30% greater for double-CCD ones if they have comparable defect rates. So the Raptor Lake die used in the 13600 and above is more comparable to the double-CCD Zen 4 chips in terms of manufacturing cost, even assuming a 50% premium on the 5 nm node.
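The yield comparison above can be sanity-checked with the classic Poisson yield model, Y = exp(-D·A). The defect density used below is an assumed, purely illustrative figure (real numbers for Intel 7 and TSMC N5 aren't public), so treat this as a sketch of the method rather than of the actual percentages:

```python
import math

def poisson_yield(area_mm2: float, defects_per_cm2: float) -> float:
    """Classic Poisson yield model: Y = exp(-D * A), with A in cm^2."""
    return math.exp(-defects_per_cm2 * area_mm2 / 100.0)

D = 0.2  # assumed defect density in defects/cm^2 -- illustrative only

y_mono = poisson_yield(257, D)  # monolithic Raptor Lake die
y_ccd  = poisson_yield(72, D)   # single Zen 4 CCD
y_iod  = poisson_yield(125, D)  # Zen 4 I/O die (older node, so D is likely lower in reality)

print(f"monolithic yield: {y_mono:.2%}")
print(f"CCD yield:        {y_ccd:.2%}")
print(f"single-CCD yield advantage over monolithic: {y_ccd / y_mono - 1:.0%}")
```

The exact advantage swings a lot with the assumed defect density, which is why estimates like ">70%" depend heavily on how mature each node is.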

Someone was/is either overpricing or playing very aggressively here...

That someone is most probably TSMC - and AMD, as they still want to make money from those chips.

Intel, meanwhile, is using a process that's been in production since 2018 with Cannon Lake, so its costs are far lower compared to 5 nm at TSMC.

haxxiy said:
JEMC said:

Meanwhile, AMD still has only four Zen 4 CPUs on the market and no sign of new lower-end parts, handing Intel the huge pool of budget systems AMD lived off for years.
Intel has come back, and AMD doesn't seem able to respond. Let's hope the rumors about Ryzen 8000/9000 featuring hybrid cores are real so that they can get on par with Intel on that front, and that they finally move their asses and start designing CPUs for the top to the bottom of the market, including (not badly gimped) low-end parts.

Is it worth it, though?

For that extra die space, you might as well just toss another entire CCD (~72 mm²) into the package and get the performance of full cores, especially considering the efficiency gap is so large (a 13600k consumes more power than a 5600X and a 7600X put together!).

A bit over three Gracemont cores fit in the area of one Raptor Cove core. Considering the latter has SMT, the area saving relative to thread count is actually pretty small. And if you compare the performance of both, you probably end up roughly even for the die space taken. In other words, you gain practically nothing in overall throughput: some programs run well on Gracemont, others don't, so it evens out.
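The thread-density argument above boils down to simple arithmetic. The ~3.2 E-cores-per-P-core-area ratio below is my own assumption based on the "a bit over 3" in the post, not an official figure:

```python
# Back-of-the-envelope: threads per unit of die area, P-cores vs E-cores.
# Assumption (from the post): ~3.2 Gracemont (E) cores fit in the area of
# one Raptor Cove (P) core; the P-core runs 2 threads via SMT.
e_cores_per_p_area = 3.2                  # assumed ratio, not an official figure
p_threads_per_area = 2                    # one P-core with SMT enabled
e_threads_per_area = e_cores_per_p_area   # one thread per E-core, no SMT

advantage = e_threads_per_area / p_threads_per_area
print(f"E-core thread density advantage: {advantage:.1f}x")
# If per-area performance is roughly even (as the post argues), the E-cores'
# win shows up mainly in thread count, not in total throughput.
```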

The problem with Raptor Cove, however, is power consumption. Imagine 16 Raptor Cove cores at 6 GHz; that chip would be pulling around 500 W! This is why Intel needs such hybrid cores and why AMD doesn't so far, as they could and can (ECO mode, in the absence of non-X chips) run their 16 cores much more efficiently.

As for AMD working on a big.LITTLE design, I doubt it will come to consumer hardware. To me this looks like the blueprint for Zen 4c and Zen 5c, i.e. workstations and servers that need high core counts but less per-core performance. AMD has already stated that those will not come to consumer hardware (save maybe Threadripper), so I wouldn't hold my breath for AMD adopting this technology anytime soon. Plus, it would drain AMD's already limited resources, as you would now need to develop and maintain two CPU architectures instead of just one.