curl-6 said:
walsufnir said:


A general answer is that more L2$ means less time stalled waiting on memory in certain cases, but if your data is too big to fit, it won't matter much. Increasing cache size can of course make a CPU perform better, but that is not generally the case, especially when you consider it's shared among 3 cores.

Even split between 3, it's still 8 times as much as Broadway for the main core, and twice as much for the two secondaries.

So what kinds of workloads consist of data small enough to benefit most from L2 increases?


You can't count it that way, because you don't know which core is computing what. It can happen that one core invalidates the cached data another core wants to read, or that one core's data gets overwritten by another core's. Cache-coherence protocols were introduced to manage this, but I don't know which one the Wii U uses.

Take a look at this: https://en.wikipedia.org/wiki/Cache_coherence 
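Not from the article, just a minimal sketch of the ping-pong effect I mean (the struct layout and iteration count are made up for illustration): two threads on different cores update counters that happen to sit on the same cache line, so each write invalidates the other core's copy of that line and forces it to be re-fetched, even though the threads never touch each other's variable.

```c
/* Toy false-sharing demo: compile with  cc -O2 -pthread falseshare.c */
#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 100000000UL

static struct {
    volatile long a;   /* updated only by thread 1 */
    volatile long b;   /* updated only by thread 2, but shares a's cache line */
} shared;

static void *bump_a(void *arg) {
    (void)arg;
    for (unsigned long i = 0; i < ITERATIONS; i++)
        shared.a++;            /* each write invalidates the line on the other core */
    return NULL;
}

static void *bump_b(void *arg) {
    (void)arg;
    for (unsigned long i = 0; i < ITERATIONS; i++)
        shared.b++;            /* same line, so the cores ping-pong it back and forth */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, bump_a, NULL);
    pthread_create(&t2, NULL, bump_b, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("a=%ld b=%ld\n", shared.a, shared.b);
    return 0;
}
```

Padding the two counters onto separate cache lines makes the contention disappear, which is exactly the kind of thing the coherence protocol has to deal with when you can't control which core touches what.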

In gaming contexts I'd guess small local (sub-)routines benefit most; anything graphics-related should be far too big, and since it's the CPU cache, it's more about game logic than fancy graphics stuff.
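To make the "fits in cache or not" point concrete, here's a rough sketch of my own (not anything Wii U specific; the buffer sizes are just placeholders): the same total amount of summing is done over a ~256 KB working set and a ~32 MB one. The small set stays resident in a 2 MB L2 after the first pass, the large one never does, and the timing gap is roughly where extra cache actually pays off.

```c
/* Toy working-set demo: compile with  cc -O2 workingset.c */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Sum the buffer 'passes' times and return the elapsed CPU time. */
static double sum_passes(const int *buf, size_t n, int passes) {
    volatile long total = 0;               /* volatile keeps the loop from being optimized away */
    clock_t start = clock();
    for (int p = 0; p < passes; p++)
        for (size_t i = 0; i < n; i++)
            total += buf[i];
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void) {
    size_t small = 256 * 1024 / sizeof(int);        /* ~256 KB: fits in a 2 MB L2 */
    size_t large = 32 * 1024 * 1024 / sizeof(int);  /* ~32 MB: far too big for L2 */
    int *buf = calloc(large, sizeof(int));
    if (!buf) return 1;

    /* Same total work in both runs: passes * elements is kept equal. */
    printf("small working set: %.2fs\n", sum_passes(buf, small, 512));
    printf("large working set: %.2fs\n", sum_passes(buf, large, 4));
    free(buf);
    return 0;
}
```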

 

Edit: An interesting part about the first Celeron:

https://en.wikipedia.org/wiki/Celeron#Covington

Intel went cheap and didn't even include an L2$ in that processor :)