TomaTito said:

Some future insights:

I’d say make a new architecture around it instead.

New CPU: give it direct access to this memory. Throw out the L2/L3 cache and use its space for a lot of registers. More registers make it easier to run multiple processes on the same core and make virtualization easier too. Cache, on the other hand, is only needed to prefetch from the relatively slow RAM, so it isn't needed anymore. Throw out branch prediction, prefetch, long pipelines and queues; we only needed those to hide the stalls while waiting for memory, which isn't a problem here. Use a simple RISC architecture instead, and make several cores that share the entire memory, with hardware memory protection. RISC can do more instructions per clock.

L2, L3, branch prediction, prefetch and advanced pipelining will not be replaced, even if the transfer rates of storage exceed cache speed.

Cache is on-die. Its big advantage is latency: a CPU wastes processing cycles waiting for a request to be sent to storage and for the data to come back.
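If you want to see that gap yourself, a pointer-chasing loop makes it obvious: every load depends on the previous one, so out-of-order execution and prefetch can't hide the round trip. Rough sketch below, not a proper benchmark; the array size, stride and timer are just assumptions for illustration.

```c
/* Pointer-chase sketch (my own illustration, numbers are assumptions):
 * every load depends on the previous one, so nothing hides the latency.
 * With a working set far bigger than L3, each hop pays roughly a full
 * DRAM round trip instead of a few cycles from cache. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N     (32u * 1024 * 1024)   /* 32M entries * 8 bytes = 256 MB, way past L3 */
#define HOPS  (10u * 1000 * 1000)   /* number of dependent loads to time */

int main(void) {
    size_t *next = malloc((size_t)N * sizeof *next);
    if (!next) return 1;

    /* One big ring with a large, odd stride so the hardware prefetcher
     * gets little help and most hops miss the cache. */
    for (size_t i = 0; i < N; i++)
        next[i] = (i + 4099) % N;

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);   /* POSIX timer, assumed available */

    size_t p = 0;
    for (size_t i = 0; i < HOPS; i++)
        p = next[p];                       /* serialized chain of loads */

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("~%.1f ns per dependent load (p=%zu)\n", ns / HOPS, p);

    free(next);
    return 0;
}
```

Shrink N until it fits in L2 or L3 and the per-load time drops by an order of magnitude or more; that difference is exactly the latency the cache is there to hide.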

Also, internally CPUs these days are basically RISC anyway. Intel and AMD will take a complex instruction and break it down into simpler internal operations (micro-ops).
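For example, a single memory-destination add on x86-64 gets cracked into a load, an ALU op and a store inside the core. The split shown in the comments below is a typical, assumed decomposition for illustration, not anything documented for a specific chip.

```c
/* Illustrative only: the exact micro-op split is microarchitecture-specific
 * and not something Intel or AMD publish in this form. */
void add_to_counter(long *counter, long delta) {
    *counter += delta;   /* x86-64 compilers often emit one CISC instruction:
                          *     add QWORD PTR [rdi], rsi
                          * which the front end typically cracks into
                          * RISC-like micro-ops, roughly:
                          *     load   tmp <- [rdi]      ; read old value
                          *     add    tmp <- tmp + rsi  ; do the arithmetic
                          *     store  [rdi] <- tmp      ; write it back */
}
```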

Prediction is important as it can help hide the latency even when accessing the L2 cache.
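Rough sketch of what that means in practice, with made-up sizes and threshold: the branch below depends on data the core may still be waiting on, and the predictor is what lets it keep running ahead instead of stalling.

```c
/* Sketch of a data-dependent branch (sizes and threshold are assumptions).
 * If the predictor guesses the branch direction correctly, the core keeps
 * fetching and executing speculatively while the loads are still in flight;
 * if the data is random, mispredictions flush the pipeline over and over. */
#include <stdio.h>
#include <stdlib.h>

#define N (1 << 20)

long sum_large(const int *v, int n) {
    long sum = 0;
    for (int i = 0; i < n; i++)
        if (v[i] >= 128)        /* branch depends on loaded data */
            sum += v[i];
    return sum;
}

int main(void) {
    int *v = malloc(N * sizeof *v);
    if (!v) return 1;
    for (int i = 0; i < N; i++)
        v[i] = rand() & 255;    /* random values: ~50% taken, unpredictably */
    printf("%ld\n", sum_large(v, N));
    /* Sorting v first makes the same branch almost perfectly predictable,
     * and the loop usually runs noticeably faster. */
    free(v);
    return 0;
}
```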



--::{PC Gaming Master Race}::--