Hynad said:
Pemalite said:
eyeofcore said: Cell was not a failure when it came to GPU tasks, yet it was an utter failure at CPU tasks... Most games on Xbox 360 and PlayStation 3 were effectively doing multi-GPU rendering, because the CPU could handle GPU tasks well enough and could then communicate with the GPU. I hope that PlayStation 4 and Xbox One don't have a single very large L2 cache in their CPUs, because then the CPU would need more time to find the code it needs to execute, so latency would be longer; 4MB of L2 cache shared across 8 cores is not good if it is unified for various purposes. That is why Nintendo's Wii U has Core 0 and Core 2 with 512KB of L2 cache each, while Core 1 has 2MB of L2 cache for tasks that benefit from very large caches, e.g. AI and the like, plus rendering levels, basically polygon work. |
That's so far off base. It's funny.
|
I've been reading a lot of similar comments from him in recent days. How misinformed can someone be?
|
The reason why a lot of games on Xbox 360 and PlayStation 3 have screen tearing is that game developers use the GPU and also the CPU as a GPU, so it is basically a dual-GPU configuration, aka CrossFire/SLI. Xenon and Cell were good as "GPUs", yet they were terrible at CPU tasks because of their 32-40 stage pipeline and small amount of L2 cache, and having it unified made things worse. A 32-40 stage pipeline is basically the longest of any CPU, just above the Pentium D's 31-stage pipeline, and those chips were utterly destroyed by AMD's own dual cores.
I am misinformed? Okay...
"Cache serves essentially the same purpose as the system RAM as it is a temporary storage location for data. Since L# cache is on the CPU itself however, it is much faster for the CPU to access than the main system RAM. The amount of cache available on a CPU can impact performance very heavily especially in environments with heavy multitasking.
The cache on a CPU is divided into different levels indicating the hierarchy of access. L1 is the first place the CPU looks for data and is the smallest, but also the fastest cache level. The amount of L1 cache is generally given per core and is in the range of 32KB to 64KB per core. L2 cache is the second place that the CPU looks and while larger than L1 cache is also slightly slower. L2 cache can range anywhere from 256KB to 1MB (1024KB) per core.
The reason that you do not simply make the size of the L1 cache larger instead of adding a whole new level of cache is that the larger the cache, the longer it takes for the CPU to find the data it needs. This is also the reason that it cannot be said that the more L2 cache the better. In a focused environment with only a few applications running, to a certain extent, the more cache the better. Once multitasking comes into play however, the larger cache sizes will result in the CPU having to take longer to search through all of the additional cache. For this reason, it is very difficult to say whether more L2 cache is better or not as it depends heavily on the computer's intended usage.
In general however, more L2 cache is better for the average user. In specialized applications where large amounts of small data are continuously accessed (where the total data is smaller than the total L2 cache available), less L2 cache may actually have a performance advantage over more L2 cache."
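The trade-off the quoted article describes can be put into numbers with the standard average-memory-access-time (AMAT) formula. Here is a rough Python sketch, where all the latencies and hit rates are illustrative assumptions rather than figures for any real chip:

```python
# Toy average-memory-access-time (AMAT) calculation for a two-level cache.
# All latencies (in CPU cycles) and hit rates below are made-up,
# illustrative values, not measurements of any real processor.

L1_LATENCY = 4      # cycles for an L1 hit
L2_LATENCY = 12     # extra cycles when an L1 miss hits in L2
RAM_LATENCY = 200   # extra cycles when an L2 miss falls through to RAM

def amat(l1_hit_rate: float, l2_hit_rate: float) -> float:
    """Average cycles per memory access for the given hit rates."""
    l1_miss = 1.0 - l1_hit_rate
    l2_miss = 1.0 - l2_hit_rate
    return L1_LATENCY + l1_miss * (L2_LATENCY + l2_miss * RAM_LATENCY)

# A workload whose working set mostly fits in cache...
print(amat(0.95, 0.90))   # few accesses ever reach RAM
# ...versus one that misses constantly and keeps going to RAM.
print(amat(0.50, 0.50))
```

The point is only the shape of the result: once a workload's working set spills out of the caches and the hit rates drop, the RAM penalty dominates the average, which is why the article says cache size can matter so much under heavy multitasking.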
"L3 cache is the third level of onboard cache and as such is the third place the CPU looks for data after first looking in the L1 and L2 cache. L3 cache is much larger than L2 or L1 cache (up to 20MB on some CPUs) but is also slower. Compared to the system RAM however, it is still much faster for the CPU to access.
L3 cache is also different in that it is almost exclusively shared across all of the cores in the CPU. So if there is data in the L3 cache, it is available for all of the cores to use, unlike the core-specific L1 and L2 cache. In general, L3 cache is less concerned about speed than L1 or L2 cache, so in almost all instances more L3 cache is better."
So: Jaguar CPUs are in the Xbox One and PlayStation 4, and their L2 cache is shared by all cores. Since it is L2 cache, that may or may not hurt performance in CPU tasks; if each core had its own separate/dedicated L2 cache pool, some tasks would gain and others would lose. And if there is no L3 cache, then the L2 cache also has to play the role of an L3 cache. That is a compromise made for easier programming, yet it can considerably decrease the CPU's performance in some tasks.
The larger the cache, the lower the speed, because with a bigger size the CPU needs longer to find the code stored in the cache; the same holds for eDRAM/eSRAM and RAM. As size increases, so does latency. I am not a geek, but at least I understand what I am saying and what I read.
If Sony's and Microsoft's consoles have separate caches, say cores 0-3 share their own 2MB L2 cache and cores 4-7 share another, then latency will be lower and the CPU will need less time to find the necessary code to execute. Just think about this scenario:
You have two small boxes (L1 cache, 32KB + 32KB) and one large box (L2 cache, 2MB), and you need to find something. You will very easily find what you need in the two small boxes, yet when you search the large box you will of course need more time, especially if another 3 people (cores) are searching it at the same time, so it could take even longer to find the thing you need to finish your work.
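One caveat to the box analogy: a real cache does not rummage through its contents one entry at a time; the hardware checks the relevant tags in parallel (larger caches are slower mainly for physical reasons, not because of a sequential search). What is true is the capacity effect: a working set that fits in the cache keeps hitting, and one that does not keeps missing. A toy direct-mapped cache model in Python, with made-up sizes, shows that effect:

```python
# Toy direct-mapped cache model: count misses for repeated sequential
# sweeps over a working set. Sizes are in "cache lines", and the model
# ignores associativity, line fills, and real timing. It illustrates
# only the capacity effect: fit in cache and you hit, overflow and you miss.

def sweep_misses(cache_lines: int, working_set_lines: int, passes: int = 4) -> int:
    cache = [None] * cache_lines          # one stored tag per cache line
    misses = 0
    for _ in range(passes):
        for addr in range(working_set_lines):
            index = addr % cache_lines    # direct-mapped placement
            if cache[index] != addr:      # tag mismatch -> miss, fill the line
                cache[index] = addr
                misses += 1
    return misses

small = sweep_misses(cache_lines=512, working_set_lines=256)   # fits in cache
large = sweep_misses(cache_lines=512, working_set_lines=1024)  # twice the cache size
print(small, large)  # → 256 4096
```

With a 512-line cache, the 256-line working set misses only on the first pass (256 misses across 4 passes), while the 1024-line working set has every access evict the line the next pass needs, so all 4096 accesses miss.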
Hopefully I explained that properly, and hopefully I properly understood the articles that I read and learned from (hopefully).