
SONY to reveal The PS4 Processor November 13th at AMD APU13

Alby_da_Wolf said:

Also, both MS and Sony must have asked AMD, as part of their deals, to keep their APUs the hottest ones in their segment, at least for the launch and Christmas window.


Nah.
It all comes down to costs, power and die sizes.

Basically, the Xbox One and PlayStation 4 have the worst x86 CPUs money can buy; they're tiny and cheap, and all the extra transistor and power budget is then spent on the GPU.

On the PC, however, AMD has a different design philosophy for APUs: the CPU and GPU are given roughly equal treatment in terms of transistor count.
Power-wise, PC APUs need to fit into a TDP that ranges from 5 to 100W; the consoles are probably anywhere from 50 to 100% above a 100W TDP, if not more. (Remains to be seen.)

You would probably need a couple more die shrinks, plus AMD/board partners including GDDR5 (à la Side-Port memory), for PC APU graphics to ever be on a level playing field with the consoles' APUs; DDR4 won't change things for a while yet.

Basically you *could* buy an APU system, then drop in a cheap/mid-range discrete secondary card and enable Hybrid CrossFire; the second GPU will only kick in when there is a need.
Or you could say "to hell with it" and go AM3 or the faster Intel 1150/2011 platforms, paired up with a Radeon 7850-7870 on the cheap.



--::{PC Gaming Master Race}::--

Madword said:
Please, no more secret sauce... it was so overused these last few months - I don't want to hear that term ever again.

So I wonder why Sony would announce this; it seems a little strange. I wonder if they have upped the processor from 1.6 to 1.7 or whatever the Xbox One is at, so they can match it. Otherwise, why bother talking about it... I suppose it's another bit of PR, but only if it's good PR...

How about 11 herbs and spices?



 

Really not sure I see any point of consoles over PCs since Kinect, Wii and other alternative ways to play have been abandoned.

Top 50 'most fun' game list coming soon!

 

Tell me a funny joke!

Pemalite said:
SNIP

Thanks for the suggestions, and yes, this is what I gather from the current AMD line-up and the info available. If you're right about the PS4 and XBOne APUs' TDP, I guess I'll have to choose a different config too, and an APU + discrete GPU Hybrid CrossFire setup could be the best alternative: more expensive, but also a lot more powerful than my minimum specs. Anyway, I guess I'll have to wait until January to decide, as on the 13th of November we'll get only half of the info needed to build a PC with the newest AMD components while making sure it equals or exceeds the 8th-gen consoles' specs. Doing it with existing components would probably be easier, but I want to see what's new in AMD's low-power offer: I hate fan noise, and high power with liquid cooling isn't a solution because, environmental considerations aside, electricity is very expensive in Italy, so there's a double incentive to save it.



Stwike him, Centuwion. Stwike him vewy wuffly! (Pontius Pilate, "Life of Brian")
A fart without stink is like a sky without stars.
TGS, Third Grade Shooter: brand new genre invented by Kevin Butler exclusively for Natal WiiToo Kinect. PEW! PEW-PEW-PEW! 
 


Alby_da_Wolf said:

SNIP


I hear you on the power thing; Australia's energy prices are even higher than Italy's.

With Hybrid CrossFire though, if you're not running a game, the second card generally shuts off. :)
Performance when you need it, power consumption lowered when you don't.

With that in mind though, you could actually save more energy by going with an Intel Core i3 or i5 over AMD's APUs - AMD's CPUs, in comparison, are power hogs for the performance you get. More so with the Core i5, because it's significantly faster and can "hurry up and idle" far sooner.
Plus, you shouldn't need to upgrade it as often either; just drop in a new GPU next time you think you need an upgrade. (Unless you do something CPU-heavy like lots of transcoding to multiple devices.)
Drop in a Noctua air cooler and you would have silence.

You could even do a Mini-ITX build; Intel seems to have a far larger choice of Mini-ITX motherboards these days.



--::{PC Gaming Master Race}::--

Hynad said:
Pemalite said:
eyeofcore said:
Cell was not a failure when it came to GPU tasks, yet it was an utter failure at CPU tasks...

Most games on Xbox 360 and PlayStation 3 were actually doing multi-GPU, because the CPU could do GPU tasks well enough and then communicate with the GPU.

I hope that the PlayStation 4 and Xbox One don't have a single, very large L2 cache in their CPUs, because that is bad: the CPU would need more time to find the necessary code to execute, so longer latency, and 4MB of L2 cache across 8 cores is not good if it's unified for various purposes. That is why Nintendo's Wii U has Core 0 and Core 2 with 512KB of L2 cache while Core 1 has 2MB of L2 cache, for tasks that benefit from very large caches, e.g. AI and the like, plus rendering levels - basically polygons and so on.


That's so far off base. It's funny.


I've been reading a lot of similar comments from him in recent days. How misinformed can someone be?


The reason a lot of games on Xbox 360 and PlayStation 3 have screen tearing is that game developers use the GPU and also the CPU as a GPU, so it is basically a dual-GPU configuration, aka CrossFire/SLI. Xenon and Cell were good as "GPUs", yet they were terrible at CPU tasks because of their 32-40 stage pipeline and small amount of L2 cache, and it being unified made it worse. A 32-40 stage pipeline is basically the longest pipeline of any CPU in the world, just above the Pentium D's 31-stage pipeline, and those were utterly destroyed by AMD's own dual cores.

I am misinformed? Okay...

"Cache serves essentially the same purpose as the system RAM as it is a temporary storage location for data. Since L# cache is on the CPU itself however, it is much faster for the CPU to access than the main system RAM. The amount of cache available on a CPU can impact performance very heavily especially in environments with heavy multitasking.

The cache on a CPU is divided into different levels indicating the hierarchy of access. L1 is the first place the CPU looks for data and is the smallest, but also the fastest cache level. The amount of L1 cache is generally given per core and is in the range of 32KB to 64KB per core. L2 cache is the second place that the CPU looks and while larger than L1 cache is also slightly slower. L2 cache can range anywhere from 256KB to 1MB (1024KB) per core.

The reason that you do not simply make the L1 cache larger instead of adding a whole new level of cache is that the larger the cache, the longer it takes for the CPU to find the data it needs. This is also the reason that it cannot be said that more L2 cache is always better. In a focused environment with only a few applications running, to a certain extent, the more cache the better. Once multitasking comes into play however, the larger cache sizes will result in the CPU having to take longer to search through all of the additional cache. For this reason, it is very difficult to say whether more L2 cache is better or not as it depends heavily on the computer's intended usage.

In general however, more L2 cache is better for the average user. In specialized applications where large amounts of small data is continuously accessed (where the total data is smaller than the total L2 cache available), less L2 cache may actually have a performance advantage over more L2 cache."

"L3 cache is the third level of onboard cache and as such is the third place the CPU looks for data after first looking in the L1 and L2 cache. L3 cache is much larger than L2 or L1 cache (up to 20MB on some CPUs) but is also slower. Compared to the system RAM however, it is still much faster for the CPU to access.

L3 cache is also different in that it is almost exclusively shared across all of the cores in the CPU. So if there is data in the L3 cache, it is available for all of the cores to use, unlike the core-specific L1 and L2 cache. In general, L3 cache is less speed-sensitive than L1 or L2 cache, so in almost all instances more L3 cache is better."
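If you want to actually see those levels, one rough way is to chase a randomised pointer chain through working sets of different sizes and watch the time per access jump as the data falls out of L1, then L2, then L3. Here is a minimal C sketch of that idea (my own toy code, not from the article; the sizes are just guesses at typical cache capacities, nothing console-specific):

/* latency_ladder.c - minimal sketch: chase a random pointer chain through
   working sets of a few sizes and print the average nanoseconds per access.
   Build with: cc -O2 latency_ladder.c */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static volatile size_t g_sink;   /* keeps the chase loop from being optimised away */

static double chase_ns(size_t bytes, size_t hops)
{
    size_t n = bytes / sizeof(size_t);
    size_t *next = malloc(n * sizeof(size_t));
    for (size_t i = 0; i < n; i++) next[i] = i;
    /* Sattolo shuffle: builds one big random cycle so the prefetcher can't guess it */
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (((size_t)rand() << 15) ^ (size_t)rand()) % i;
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }
    struct timespec a, b;
    size_t p = 0;
    clock_gettime(CLOCK_MONOTONIC, &a);
    for (size_t i = 0; i < hops; i++) p = next[p];   /* each hop depends on the previous one */
    clock_gettime(CLOCK_MONOTONIC, &b);
    g_sink = p;
    free(next);
    return ((b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec)) / (double)hops;
}

int main(void)
{
    size_t sizes[] = { 16u << 10, 256u << 10, 4u << 20, 64u << 20 };  /* ~L1, ~L2, ~L3, RAM */
    for (int i = 0; i < 4; i++)
        printf("%6zu KB working set: %5.1f ns per access\n",
               sizes[i] >> 10, chase_ns(sizes[i], 20u * 1000 * 1000));
    return 0;
}

On a typical desktop CPU you would expect a staircase: a few nanoseconds while the chain fits in L1/L2, tens of nanoseconds in L3, and on the order of a hundred nanoseconds once it spills into system RAM.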

So, since Jaguar CPUs are in the Xbox One and PlayStation 4 and the L2 cache is shared by all cores, and since it is L2 cache, it may or may not have a negative influence on performance in CPU tasks. If each core had its own separate/dedicated L2 cache pool, that would also have some pros and cons for performance in certain tasks given there is no L3 cache; since the L2 cache is also taking on the role of an L3 cache, it is a compromise for easier programming, yet it can considerably decrease CPU performance in some tasks.

The larger the cache, the lower the speed: because of its size, the CPU needs longer to find the code stored in the cache, and the same holds for eDRAM/eSRAM and RAM. As the size increases, so does the latency. I am not a geek, but at least I understand what I am saying and what I read.

If Sony's and Microsoft's consoles have separate caches - say cores 0-3 with their own 2MB L2 cache and cores 4-7 with their own - then latency will be lower and the CPU will need less time to find the necessary code to execute. Just think about this scenario:

You have two small boxes (L1 cache, 32 + 32KB) and one large box (L2 cache, 2MB), and you need to find something. You will very easily find what you need in the two small boxes, yet when you search the larger box you will of course need more time, especially if another 3 people (cores) are searching it too, so it could potentially increase the time needed to find the thing you need to finish your work.

Hopefully I explained this properly, and hopefully I properly understood the articles that I read and learned from.



bananaking21 said:
so is there any secret sauce?

 

hUMA technology; we know it's in there, but it hasn't been confirmed by Sony yet (AMD talked about this).



Predictions for end of 2014 HW sales:

 PS4: 17m   XB1: 10m    WiiU: 10m   Vita: 10m

 

eyeofcore said:
SNIP

I don't even know where to begin, but your understanding of cache doesn't match reality.

For starters, more cache isn't slower; it never has been.

Let's take the Pentium III Katmai for instance: it had 512KB of L2 cache, however the cache ran at 1/2 to 1/4 the speed of the processor, which was a massive performance bottleneck.

Now take the Pentium III Coppermine, an evolutionary improvement over the Katmai core: Intel brought the L2 cache on-die, and while it was half the amount, it ran at the same speed as the processor, which brought massive gains in performance - not because of the size reduction, but because of the speed. (Doubling/quadrupling cache speed has massive advantages.)

Now, if you were to take the Athlon 64 3000+ with 512KB of L2 cache up against the 3200+ with 1024KB of L2 cache, both with identical clock speeds and everything else, guess which one had the advantage? The 3000+ never managed to beat the 3200+ under any circumstance.

My CPU has 12MB of total L3 cache, 1.5MB of L2 cache and 64KB of L1 cache.
Guess what? The largest cache is also the slowest and the smallest is the fastest. This is by design, so the CPU can use its predictors to fetch the data it needs ahead of time and store it in the L3 cache, then the L2 cache and then the L1 cache, depending on how far along the processing chain it is, to hide the bandwidth and latency hit of travelling down to system memory.
The more cache, the more the CPU can keep close by to prevent a cache miss forcing it to travel all the way down to system memory to grab the data that needs to be processed; a massive number of CPU cycles would sit idle and go to waste if that happened.

Soon we will have L4 caches too. (In fact, Intel already has one on a couple of CPUs.)

The other advantage of having a cache hierarchy is cost, in terms of transistor count and die size: L1 cache is stupidly expensive, L2 cache is less so but still expensive, and L3 is pretty darn cheap in the grand scheme of things. In fact, a massive portion of a CPU die is actually cache.

Using the Wii U's CPU as an example, though, is a pretty poor choice; the Wii U's CPU is old and slow, it's designed to be fabricated cheaply, and it's got a below-average branch predictor amongst other things.
But considering that some Intel CPUs have 140MB+ of "cache memory" in the form of eDRAM, L1, L2 and L3 caches, and considering those would pretty much dominate the paltry Wii U CPU at the same clock... well, you get the idea.

The reason for a shared L2 cache is coherency, which brings its own advantages; however, the general consensus between Intel and AMD is for the L3 cache to be shared across all cores, whilst the L2 feeds 1-2 cores/threads.

Seriously though, Intel and AMD spend billions on R&D; they know more than either of us when it comes to cache, and they both have the same ideas on what's what. Nintendo, however, isn't in the CPU-building game, and IBM is essentially relegated to last-century stuff.

If you want, I could go into other parts like uops, registers and such.

In the end though, it's better to have as much data as you can next to the CPU rather than forcing the CPU to go to system memory; that's the fundamental reason cache exists in the first place. More is always better, as it's faster and lower latency, with better associativity, than RAM.
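To put a rough number on how much those trips to system memory hurt, here's a minimal C sketch (my own example, nothing to do with any particular console): it sums the same large array twice, once walking memory in order so the caches and prefetchers can do their job, and once jumping a whole row ahead on every access so nearly every read misses.

/* locality.c - minimal sketch of cache-miss cost. The 8192x8192 float matrix
   (~256 MB) is an arbitrary size chosen to be far larger than any CPU cache.
   Build with: cc -O2 locality.c */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 8192

static double now(void)
{
    struct timespec t;
    clock_gettime(CLOCK_MONOTONIC, &t);
    return t.tv_sec + t.tv_nsec * 1e-9;
}

int main(void)
{
    float (*m)[N] = malloc(sizeof(float) * N * N);
    if (!m) return 1;
    for (size_t i = 0; i < N; i++)
        for (size_t j = 0; j < N; j++)
            m[i][j] = 1.0f;

    volatile float sum = 0.0f;   /* volatile so the loops aren't optimised away */

    double t0 = now();
    for (size_t i = 0; i < N; i++)        /* row by row: walks memory in order */
        for (size_t j = 0; j < N; j++)
            sum += m[i][j];
    double t1 = now();

    for (size_t j = 0; j < N; j++)        /* column by column: jumps 32 KB per access */
        for (size_t i = 0; i < N; i++)
            sum += m[i][j];
    double t2 = now();

    printf("row-major:    %.2f s\ncolumn-major: %.2f s\n", t1 - t0, t2 - t1);
    free(m);
    return 0;
}

Same arithmetic, same data; on a typical desktop the column-by-column pass takes several times longer, purely because almost every access has to go out past the caches.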



--::{PC Gaming Master Race}::--

Pemalite said:
SNIP


Okay... Thanks for the lesson...

I have a question about the Wii U's CPU: I don't really agree that it is that slow, and can you compare it to the Xbox 360 and PlayStation 3 CPUs?

I know that the Wii U's CPU is a PowerPC 750CL, so it is an old CPU, yet I would not underestimate it that easily. It has a 4-stage pipeline, which is really short and should have little to no "bubbles", compared to the atrocious Xbox 360/PlayStation 3 CPUs with their 32-40 stage pipelines that are also in-order, versus the out-of-order GameCube/Wii/Wii U CPU, even though it is somewhat limited, as I read on some forums.

Is there a difference between the Xbox 360/PlayStation 3 L2 cache and the Wii U's L2 cache, which is eDRAM, and its configuration (Core 0: 512KB, Core 1: 2MB, Core 2: 512KB) compared to the unified L2 cache of 1MB in the Xbox 360 and 768KB in the PlayStation 3?

I asked Marcan42 on Twitter if the Wii U's CPU can use the Wii U GPU's eDRAM pool as L3 cache, and he said it is system RAM and that Espresso can use it for whatever it wants, so I assume it can use it - maybe directly, or only by issuing commands?

I read this article and it seems that the Wii U's CPU, Espresso, can directly access and use the eDRAM; maybe I am wrong:

http://hdwarriors.com/general-impression-of-wii-u-edram-explained-by-shinen/

The Wii U has a DSP while the Xbox 360/PlayStation 3 don't, so audio is done on one of their CPU cores, right? So only two cores are really for the game while a third one acts as a DSP, and the OS also partially runs on one of those cores - compared to the Wii U, which as rumoured has 2 ARM cores used as "background" cores, plus another ARM core for backward compatibility with the Wii, so that could also be used.

I know that the Xbox 360 had a bottleneck involving RAM: it was GDDR3 at 22.8GB/s, yet the FSB or whatever that kind of link is called could only push 10.8GB/s, and the PlayStation 3 also had some sort of bottleneck. The Wii U, meanwhile, doesn't have any kind of bottleneck and uses DDR3-1600, so it has 12.8GB/s like most computers nowadays; it also has 1GB for games, so it has almost 3 times more memory to temporarily store game assets/data. DDR3 has much lower latency than GDDR3, so it is great for the OS and games, right?
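(As a rough sanity check on those bandwidth figures, assuming the usual 64-bit bus for the Wii U's DDR3 and the 360's 128-bit GDDR3 bus: DDR3-1600 is 1600 MT/s x 8 bytes per transfer = 12.8 GB/s, and GDDR3 at 700 MHz is 1400 MT/s x 16 bytes = roughly 22.4 GB/s, which is in the same ballpark as the figure above.)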



eyeofcore said:

 

I have a question about the Wii U's CPU: I don't really agree that it is that slow, and can you compare it to the Xbox 360 and PlayStation 3 CPUs?

No, they can't be compared directly, for multiple reasons.
Mostly due to the fact that it's out-of-order execution, with better integer and floating-point performance, more cache, etc.
I wouldn't be surprised if it ate the Xbox 360 and PlayStation 3's CPUs for lunch.

At the instruction set level they can be compared.

eyeofcore said:

I know that the Wii U's CPU is a PowerPC 750CL, so it is an old CPU, yet I would not underestimate it that easily. It has a 4-stage pipeline, which is really short and should have little to no "bubbles", compared to the atrocious Xbox 360/PlayStation 3 CPUs with their 32-40 stage pipelines that are also in-order, versus the out-of-order GameCube/Wii/Wii U CPU, even though it is somewhat limited, as I read on some forums.

The number of pipeline stages is only a problem if everything else is kept equal.
A processor with a longer pipeline but with lots of cache, a uop cache, a loop buffer, lots of low-latency bandwidth to system RAM and a really good branch predictor (something the Xbox 360 and PlayStation 3 lack) can make it all a non-issue; plus, a longer pipeline can assist in reaching a higher frequency, for an overall larger performance benefit.
My i7 3930K, for instance, has "up to" a 19-stage pipeline and is one of the fastest CPUs money can buy; because of all the other benefits, it's certainly significantly faster than that PowerPC.
I'm not arguing that the Wii U isn't more powerful than the Xbox 360/PlayStation 3, but it certainly isn't as fast as the Xbox One or PlayStation 4 as far as the physical processors in those machines go.
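A quick way to see why the branch predictor matters so much on a deep pipeline is to run the exact same data-dependent branch over random bytes twice: once unsorted, where the predictor guesses wrong roughly half the time and the pipeline keeps getting flushed, and once sorted, where it's almost always right. Here's a minimal C sketch of that (my own toy code, not tied to any of these consoles):

/* branchy.c - minimal sketch of branch misprediction cost. 16 MB of random
   bytes is an arbitrary choice. Build with: cc -O1 branchy.c
   (higher optimisation levels may turn the branch into a branchless
   conditional move or vector code and hide the effect). */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1u << 24)

static int cmp(const void *a, const void *b)
{
    return *(const unsigned char *)a - *(const unsigned char *)b;
}

static double timed_sum(const unsigned char *v)
{
    struct timespec a, b;
    long long sum = 0;
    clock_gettime(CLOCK_MONOTONIC, &a);
    for (int pass = 0; pass < 10; pass++)
        for (size_t i = 0; i < N; i++)
            if (v[i] >= 128)              /* the branch under test */
                sum += v[i];
    clock_gettime(CLOCK_MONOTONIC, &b);
    printf("(sum=%lld) ", sum);           /* use the result so the loop isn't optimised away */
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) * 1e-9;
}

int main(void)
{
    unsigned char *v = malloc(N);
    if (!v) return 1;
    for (size_t i = 0; i < N; i++) v[i] = (unsigned char)rand();
    double unsorted = timed_sum(v);
    qsort(v, N, 1, cmp);
    double sorted = timed_sum(v);
    printf("\nunsorted: %.2f s   sorted: %.2f s\n", unsorted, sorted);
    free(v);
    return 0;
}

Same instructions, same data, same amount of work; the only thing that changes is how often the predictor is wrong and the pipeline has to be flushed and refilled - and the deeper the pipeline, the more each of those flushes costs.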

eyeofcore said:

I read this article and it seems that the Wii U's CPU, Espresso, can directly access and use the eDRAM; maybe I am wrong:

It can, you aren't wrong.

 

eyeofcore said:

The Wii U has a DSP while the Xbox 360/PlayStation 3 don't, so audio is done on one of their CPU cores, right? So only two cores are really for the game while a third one acts as a DSP, and the OS also partially runs on one of those cores - compared to the Wii U, which as rumoured has 2 ARM cores used as "background" cores, plus another ARM core for backward compatibility with the Wii, so that could also be used.

I know that the Xbox 360 had a bottleneck involving RAM: it was GDDR3 at 22.8GB/s, yet the FSB or whatever that kind of link is called could only push 10.8GB/s, and the PlayStation 3 also had some sort of bottleneck. The Wii U, meanwhile, doesn't have any kind of bottleneck and uses DDR3-1600, so it has 12.8GB/s like most computers nowadays; it also has 1GB for games, so it has almost 3 times more memory to temporarily store game assets/data. DDR3 has much lower latency than GDDR3, so it is great for the OS and games, right?

 

Up to a point. The Xbox 360, for instance, had a DAP that could offload audio processing for "up to" 256 channels of 48 kHz, 16-bit tracks.
Thus, if you wanted to do 24-bit or 32-bit audio, you would have to use CPU time.
Conversely, the Xbox 360's GPU could also offload some audio tasks if a developer saw fit; it's "just" flexible enough to do so. (More so than the PlayStation 3, that's for sure.)

As for bottlenecks: every computer system, be it a PC, gaming console or phone, has some form of bottleneck, whether it's storage, graphics, processor, system memory, etc.
Essentially the bottleneck is whatever limitation the developers run into first; usually they build within the limitations of the hardware, but bottlenecks can change from one frame to the next whilst rendering a scene, because the data being processed is always changing.

For example, a big bottleneck on all the current and next-generation consoles, if they "theoretically" ran a game like StarCraft II, would actually be the CPU, due to the sheer number of units that can be on screen at any one time.
If you fired up a game of Battlefield 4, you would find that it's more GPU-limited, due to the heavy effects that the game employs.

Or if you could run Civilization IV, you would be GPU-limited whilst playing your turn, but when you finish your turn and the computer players take theirs, you would quickly find yourself CPU-limited.

As for memory latency: both GDDR3 and GDDR5 typically have higher latency than DDR3, however it's not significant - you're looking at 20-30% tops - and even then it's going to make a negligible performance difference anyway, due in part to caches, eDRAM/eSRAM and all their variations.
Plus, consoles are typically more GPU-oriented, and GPUs really don't care about memory latency; bandwidth is the determining factor.
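The flip side is easy to see too: a big sequential copy doesn't care much about the latency of any single access, because lots of transfers are in flight at once, so what you end up measuring is bandwidth. Minimal C sketch (buffer size is an arbitrary choice, and CPU-side memcpy is only a crude stand-in for what a GPU's memory controller sees):

/* copy_bw.c - minimal sketch of a streaming bandwidth measurement.
   Build with: cc -O2 copy_bw.c */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define BYTES (512u << 20)   /* 512 MB per buffer */
#define PASSES 10

static volatile char g_sink;

int main(void)
{
    char *src = malloc(BYTES), *dst = malloc(BYTES);
    if (!src || !dst) return 1;
    memset(src, 1, BYTES);   /* touch every page so allocation cost isn't timed */
    memset(dst, 0, BYTES);

    struct timespec a, b;
    clock_gettime(CLOCK_MONOTONIC, &a);
    for (int i = 0; i < PASSES; i++)
        memcpy(dst, src, BYTES);
    clock_gettime(CLOCK_MONOTONIC, &b);
    g_sink = dst[BYTES - 1];  /* read dst so the copies can't be thrown away as dead stores */

    double secs = (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) * 1e-9;
    /* each pass reads BYTES and writes BYTES, so 2 * PASSES * BYTES bytes move */
    printf("effective copy bandwidth: ~%.1f GB/s\n", 2.0 * PASSES * BYTES / secs / 1e9);
    free(src);
    free(dst);
    return 0;
}

Run the pointer-chase sketch from earlier in the thread next to this one and you get both halves of the picture: latency is what a CPU stalls on, sustained bandwidth is what a GPU chews through.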

 

 



--::{PC Gaming Master Race}::--

Pemalite said:
SNIP

 

 


Thanks, I know what a bottleneck is, and I know that every device has one - even our bodies. LMAO

How much faster is the Wii U's CPU than the Xbox 360 and PlayStation 3 CPUs on average, if you could make a rough estimation/approximation?!

Also, core for core, how does the Wii U's CPU compare to the CPUs in the Xbox One/PlayStation 4?

Do you have a rough idea of the benefits of the Wii U's CPU being able to access the Wii U GPU's eDRAM directly? Could it have a similar effect to AMD's hUMA in a way - not the same implementation or anything, I mean the effect on performance and programming, if it is easier than the regular route on UMA systems?

StarCraft II is only bottlenecked in the CPU department because a lot of things are done on the CPU and it also only uses 2 cores; if it used 4 cores natively and properly, then it would run okay on next-generation systems. Battlefield 4 is GPU-intensive primarily because of DirectX 11 effects and other things.

I have been investigating the Wii U's GPU, and I find it silly that the Wii U's GPU is supposedly just 320 shaders - basically something like a Radeon HD 5550 - or, as some people have tried to insist, a ridiculous 160 shaders; yet I found that a GPU like the Radeon HD 6630M would fit, and AMD's GPUs are made on a relatively cheap process in TSMC's fabs while the Wii U's GPU is also made in TSMC's fabs but on a CMOS process or something like that.

So I assume it is made on a better process/silicon or something, and that would allow higher density and other things.