haxxiy said: That's probably also the reason why Intel/AMD don't bother having Altera/Xilinx develop dedicated upscaling/ray tracing FPGAs for their GPUs. They could likely come up with industry-leading stuff in that regard but it simply wouldn't be worth the investment.
GPU chiplets are super super hard.
You can't turn the GPU cores into chiplets because you need low latency and high bandwidth... Both things that chiplet interconnects heavily compromise.
CPU cores can get away with it because they aren't transferring upwards of several terabytes of data per second.
AMD's approach was to break up the memory interface into many smaller ones instead.
So instead of a single 384-bit interface connecting directly to memory, AMD made 12x 32-bit interfaces.
Each memory controller chiplet (MCD) thus houses 2x 32-bit interfaces that connect directly to the DRAM, adding up to a cumulative 384-bit bus across all six chiplets.
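To make the arithmetic concrete, here's a quick back-of-the-envelope sketch in Python, assuming the Navi 31 (RX 7900 XTX) configuration described here with its 20 Gbps GDDR6; the constants are mine, for illustration only:

```python
# Navi 31-style memory layout: 6 MCDs, each carrying 2x 32-bit GDDR6 channels.
MCDS = 6
CHANNELS_PER_MCD = 2
BITS_PER_CHANNEL = 32

bus_width = MCDS * CHANNELS_PER_MCD * BITS_PER_CHANNEL  # 384 bits total

# Assuming 20 Gbps/pin GDDR6 (the rate used on the RX 7900 XTX).
GBPS_PER_PIN = 20
dram_bandwidth = bus_width * GBPS_PER_PIN / 8  # bits -> bytes

print(f"Cumulative bus width: {bus_width}-bit")      # 384-bit
print(f"DRAM bandwidth: {dram_bandwidth:.0f} GB/s")  # 960 GB/s
```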
It also means that instead of a single fabric link running from one large memory controller to the CCD, as on a Ryzen CPU, there are 6x fabric links, each capable of roughly 900 GB/s of bi-directional traffic.
Suddenly there's more than enough bandwidth to connect the GPU cores to the memory interface... But still not enough to link multiple GPU core chiplets together.
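Same sketch for the fanout-link side, using the ~900 GB/s per-link figure above (that's the figure quoted here, not an official spec):

```python
# 6 fabric links, one per MCD, at ~900 GB/s each (figure from above).
LINKS = 6
GBPS_PER_LINK = 900

fabric_bandwidth = LINKS * GBPS_PER_LINK  # ~5400 GB/s aggregate

# Comfortably more than the ~960 GB/s the GDDR6 itself can deliver,
# so the links aren't the bottleneck for core-to-memory traffic.
print(f"Aggregate fabric bandwidth: {fabric_bandwidth / 1000:.1f} TB/s")  # 5.4 TB/s
```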
It also means there is very little room to attach additional chiplets to AMD's GPUs, as the memory controller chiplets occupy most of the edge area around the core die itself.
But I could see AMD integrating that kind of logic into the memory controller chiplets at a later date... Sadly, I don't think we will ever see the holy grail of multiple GPU core chiplets, given the bandwidth limits of the Infinity links.
But I do see a future where we have stacked GPU chiplet dies, just like how we stack cache on CPUs now.