
"PS4 is easier to develop for than PC and built around the Share button"

Never mind... just imagine the Cell with GDDR5.



Ex Graphics Whore.

ethomaz said:
Kasz216 said:

First off, you completely made up those numbers.

Secondly, your numbers are backwards. I thought you said you were a developer in another thread. How do you not know the difference between latency and bandwidth?

Latency is the time it takes for a single request to be completed.

Bandwidth is the amount of data that can be moved at once.

GDDR5 has better bandwidth at the cost of latency.

So using your made-up numbers it would be...

 

GDDR5 can carry 10 "data" per movement

DDR3 can carry 1 "data" per movement

GDDR5 can do 12 movements per second

DDR3 can do 170 movements per second

GDDR5: 10 x 12 = 120 per second

DDR3: 1 x 170 = 170 per second

No. Latency is the delay between one data access and the next... DDR3 can do more accesses in the same period than GDDR5, but GDDR5 can carry more data per access.

My example was bad because I mixed up the words, but the point is still valid for gamers... there is no way DDR3 is better for games than GDDR5.

Low latency is better for a CPU that acts in a linear fashion (execute instruction A, then B, then C, etc.)... the CPU keeps requesting the next instruction and the data it needs, and in that case the low latency of DDR3 gives you a fast response... and you can partly hide the worse latency by increasing the number of CPU cores, so you can split the linear work between the cores (that is not the best scenario, but it helps when you have low latency).
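To make the two metrics concrete, here is a minimal sketch using the made-up numbers from the post above (the figures and the helper names are purely illustrative, not real DDR3/GDDR5 specs):

```python
# Toy model: effective throughput = data per transfer x transfers per second.
# The numbers are the made-up ones from the post, not real memory specs.

def sustained_throughput(data_per_transfer, transfers_per_second):
    """Bandwidth-style metric: how much data moves per second overall."""
    return data_per_transfer * transfers_per_second

def time_for_one_access(transfers_per_second):
    """Latency-style metric: how long a single dependent access takes."""
    return 1.0 / transfers_per_second

gddr5 = sustained_throughput(10, 12)   # 120 "data" per second
ddr3  = sustained_throughput(1, 170)   # 170 "data" per second

print("GDDR5 streams", gddr5, "per second; one access takes", time_for_one_access(12))
print("DDR3  streams", ddr3,  "per second; one access takes", time_for_one_access(170))
```

With real hardware the bandwidth numbers obviously favour GDDR5, but the sketch shows why the two sides of the trade-off are measured differently.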

If by "mixed the words" you mean you said everything completely backwards.

"Low latency is better for a CPU that acts in a linear fashion." You mean like...

When a computer AI needs to move 30 units one after another, where the outcome of each move's battle rolls could mean completely changing the strategy after each individual move?

Or really... any complicated AI based on contingencies? (Or in general, any AI that relies on triggers.)

Sure, texture C doesn't have to wait for texture A to load. However, move 5 needs to wait for move 1, because move 5 doesn't know what it is until move 1 happens.

Out-of-order processing doesn't really help when the data can't be out of order.

It's not really an issue for a game where the bad guys always pop up in exactly the same place... but it is for a game like Civilization, or even Left 4 Dead.

Being able to process more doesn't really help when your platform doesn't know WHAT it's supposed to be processing.
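A rough sketch of that dependency problem (the unit loop and the resolve_move function are made up for illustration, not from any real engine): each result feeds the next decision, so the moves cannot be resolved out of order no matter how much raw throughput you have.

```python
import random

# Illustrative only: each move's outcome changes the state the next move
# depends on, so the chain is inherently serial - extra parallel hardware
# cannot reorder it.

def resolve_move(state, unit):
    roll = random.randint(1, 4)              # battle roll for this unit
    state = state + roll if roll > 2 else state - roll
    return state                             # next unit's decision depends on this

state = 10
for unit in range(30):                       # 30 units moving one after another
    state = resolve_move(state, unit)        # move N+1 must wait for move N

print("final battle state:", state)
```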

 

You may not realize this, but the 360's RAM was entirely GDDR3.

Which had the same advantages over DDR3 that GDDR5 does, just not to as great a degree.

Yet Microsoft is going with DDR3 plus ESRAM. Why? I'm guessing they are expecting some CPU-intensive stuff.



Kasz216 said:

Yet Microsoft is going with DDR3 plus ESRAM. Why? I'm guessing they are expecting some CPU-intensive stuff.

I guess (like the rumors say) Microsoft is not making a game-focused machine... it is focused on being a general living room machine... a lot of Windows apps will run in the background.

ethomaz said:
Kasz216 said:

Yet Microsoft is going with DDR3 plus ESRAM. Why? I'm guessing they are expecting some CPU-intensive stuff.

I guess (like the rumors say) Microsoft is not making a game-focused machine... it is focused on being a general living room machine... a lot of Windows apps will run in the background.


I feel like you aren't grasping the core concepts of this and of complicated AIs... so I'm going to approach this from a different direction.

Why is DDR3 better for a non-game-focused machine? What is it about the 720's RAM that will make it superior to the PS4's when it comes to multiple apps?



Windows ain't so bad, freaking Norton is the PC killer lol




5 > 3



Ex Graphics Whore.

Troll_Whisperer said:
Two minute buffer, does that mean it records the last two minutes? I thought it was 15, or that's what the rumours said.

Maybe it refers to the streaming feature. So your stream keeps a window of 2 minutes, so that people can also see what you did before (2 minutes before). I dunno really.



Kasz216 said:


I feel like you aren't grasping the core concepts of this and of complicated AIs... so I'm going to approach this from a different direction.

Why is DDR3 better for a non-game-focused machine? What is it about the 720's RAM that will make it superior to the PS4's when it comes to multiple apps?

Complicated AIs run better on a GPU with GDDR memory (CUDA or OpenCL) than on a CPU... that is where I disagree with you. Every kind of processing that exists in a game can be parallelized... so the memory latency is not that important, but the bandwidth is.

http://what-when-how.com/artificial-intelligence/ia-algorithm-acceleration-using-gpus-artificial-intelligence

Using the stream programming model as well as resources provided by graphics hardware, Artificial Intelligence algorithms can be parallelized and therefore computing-accelerated. The parallel and high-intensive computing nature of this kind of algorithms makes them good candidates for being implemented on the GPU.

It is not only AI... machine learning, computer science, mathematics, statistics, etc... everything that can be parallelized runs better on a GPU with HIGH bandwidth than on a CPU with LOW latency... It's because when you parallelize a process you need big chunks of the same data to work on... lol, that's the whole concept of GDDR over DDR.

Now to answer your question... a non-game machine or general-use machine has a lot of small processes doing random tasks that can't be parallelized... these processes work in a linear fashion... so you have Skype, a music player, video chat, voice commands, social notifications, e-mail notifications, downloads, etc. etc. running at the same time, requesting different kinds of data one after the other... so the fast response of low latency works better here.

So general applications running on a CPU need low-latency memory, because different regions of memory are randomly accessed every instant... not one big chunk of the same data processed with the same operation (GPU parallelized tasks).

Game processing (graphics, AI, physics, etc.) is parallelized... small and general PC apps are linear.

Games work better with high bandwidth.
General PC apps work better with low latency.

Of course they are trying to create the best of both worlds with DDR4 (low latency with high bandwidth), but right now what we have that best fits the game world is GDDR5... Microsoft and Nintendo are trying to avoid the huge DDR3 bottleneck for gaming by using fast eDRAM and eSRAM.

That's what I think.
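A rough way to picture the two access patterns being described here (purely illustrative, not a real benchmark): a bandwidth-style workload streams one big chunk of data and applies the same operation to all of it, while a latency-style workload hops to a different address each step and cannot know the next address until the current read finishes.

```python
import random

N = 200_000
data = [random.random() for _ in range(N)]

# Bandwidth-style pattern: one big chunk, same operation on every element.
# Order doesn't matter, so it parallelizes and benefits from raw throughput.
streamed_total = sum(x * 0.5 for x in data)

# Latency-style pattern: each step's address depends on the previous result,
# like many small apps poking at unrelated bits of memory one after another.
next_index = [random.randrange(N) for _ in range(N)]
i, hops = 0, 0
for _ in range(50_000):
    i = next_index[i]        # cannot start this read until the last one returned
    hops += 1

print(streamed_total, hops)
```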

ethomaz said:
Kasz216 said:


I feel like you aren't grasping the core concepts of this and of complicated AIs... so I'm going to approach this from a different direction.

Why is DDR3 better for a non-game-focused machine? What is it about the 720's RAM that will make it superior to the PS4's when it comes to multiple apps?

Complicated AIs run better on a GPU with GDDR memory (CUDA or OpenCL) than on a CPU... that is where I disagree with you. Every kind of processing that exists in a game can be parallelized... so the memory latency is not that important, but the bandwidth is.

http://what-when-how.com/artificial-intelligence/ia-algorithm-acceleration-using-gpus-artificial-intelligence

Using the stream programming model as well as resources provided by graphics hardware, Artificial Intelligence algorithms can be parallelized and therefore computing-accelerated. The parallel and high-intensive computing nature of this kind of algorithms makes them good candidates for being implemented on the GPU.

It is not only AI... machine learning, computer science, mathematics, statistics, etc... everything that can be parallelized runs better on a GPU with HIGH bandwidth than on a CPU with LOW latency... It's because when you parallelize a process you need big chunks of the same data to work on... lol, that's the whole concept of GDDR over DDR.

Now to answer your question... a non-game machine or general-use machine has a lot of small processes doing random tasks that can't be parallelized... these processes work in a linear fashion... so you have Skype, a music player, video chat, voice commands, social notifications, e-mail notifications, downloads, etc. etc. running at the same time, requesting different kinds of data one after the other... so the fast response of low latency works better here.

So general applications running on a CPU need low-latency memory, because different regions of memory are randomly accessed every instant... not one big chunk of the same data processed with the same operation (GPU parallelized tasks).

Game processing (graphics, AI, physics, etc.) is parallelized... small and general PC apps are linear.

Games work better with high bandwidth.
General PC apps work better with low latency.

Of course they are trying to create the best of both worlds with DDR4 (low latency with high bandwidth), but right now what we have that best fits the game world is GDDR5... Microsoft and Nintendo are trying to avoid the huge DDR3 bottleneck for gaming by using fast eDRAM and eSRAM.

That's what I think.

Well, first off... using the GPU for AI and physics is going to greatly lower your processing budget for... well, graphics effects, which is going to greatly cut into the PS4's graphics capabilities. Also, the CPU will just be sort of wasted at that point. Doing so more or less squanders the bandwidth advantage, making using GDDR5 in the first place... useless.

Aside from which, according to your own link, it doesn't:

"First, not all the algorithms fit for the GPU’s programming model, because GPUs are designed to compute high-intensive parallel algorithms"

Civ 5's AI, for example, wouldn't work well with parallel processing because, well, its AI can't be processed effectively in parallel, as later problems rely on earlier ones... in other words, there just aren't that many independent variables to process separately, since most things are dependent.

For example: a Mongolian horse archer attacks a Greek phalanx, roll 1d4 for damage. If HP ends up > 10, pull back the cavalry and push the spearmen forward. If lower than 10, attack with the cavalry; roll another 1d4, if >= 2 attack with the spearmen, if < 2 move the spearmen onto the valuable resource.

Now imagine that times about 20 more units needing to be positioned. You can only parallel-process so much, since the MAJORITY of moves require input from other moves, meaning each move has to be calculated separately. You can get part of each problem done, but they have to be finished in order anyway, making it fairly moot.

I mean, instead of a computer... think of it as either having one really smart guy who is quick at solving problems (DDR3), or a group of 8 or so guys (GDDR5). (Numbers pulled out of my ass, but largely irrelevant.)

 

Sure, the 8 guys can probably solve 30 separate problems faster than the 1 guy.

However, give them 8 parts of the same algebra problem and it doesn't really help; they'll fall behind, since guy 2 needs to find out what X is from guy 1, guy 3 needs to find Y from guy 2, etc.

Sure, they can shave off a little time by simplifying their parts down to just needing the variables... but they still need the variables, and until then they just sit there... waiting for the guys at the front of the line to hand them the new info. And since these guys are split up, they have to walk over to tell each other what each number is, and each of them is individually slower than the first guy at figuring things out in the first place.

Once you build up enough dependent variables for a complicated AI in a strategy game... the smart guy has a clear advantage.

 

GPUs are workers... not thinkers. Now, if you're offloading ALL the work onto your workers... and your thinkers aren't doing anything... not even the thinking...

Well, that's bad programming right there. Your workers will get overworked, or you will have to cut back on their normal duties.
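The "8 guys vs. one smart guy" picture is basically Amdahl's law: the part of the turn that has to run in order caps how much the extra workers can help. A minimal sketch with made-up numbers (the 8 workers and the serial fractions are just the figures from the analogy, not real hardware measurements):

```python
def amdahl_speedup(serial_fraction, workers):
    """Best-case speedup when only the non-serial part can be split up."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

workers = 8  # the "group of 8 or so guys"

# 30 independent problems: almost nothing is serial, so the group shines.
print("independent work :", round(amdahl_speedup(0.05, workers), 2), "x faster")

# One algebra problem split into dependent steps: mostly serial, the group stalls.
print("dependent chain  :", round(amdahl_speedup(0.90, workers), 2), "x faster")
```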



Kasz216 said:

Well, first off... using the GPU for AI and physics is going to greatly lower your processing budget for... well, graphics effects, which is going to greatly cut into the PS4's graphics capabilities. Also, the CPU will just be sort of wasted at that point. Doing so more or less squanders the bandwidth advantage, making using GDDR5 in the first place... useless.

Aside from which, according to your own link, it doesn't:

"First, not all the algorithms fit for the GPU’s programming model, because GPUs are designed to compute high-intensive parallel algorithms"

Civ 5's AI, for example, wouldn't work well with parallel processing because, well, its AI can't be processed effectively in parallel, as later problems rely on earlier ones... in other words, there just aren't that many independent variables to process separately, since most things are dependent.

For example: a Mongolian horse archer attacks a Greek phalanx, roll 1d4 for damage. If HP ends up > 10, pull back the cavalry and push the spearmen forward. If lower than 10, attack with the cavalry; roll another 1d4, if >= 2 attack with the spearmen, if < 2 move the spearmen onto the valuable resource.

Now imagine that times about 20 more units needing to be positioned. You can only parallel-process so much, since the MAJORITY of moves require input from other moves, meaning each move has to be calculated separately. You can get part of each problem done, but they have to be finished in order anyway, making it fairly moot.

I mean, instead of a computer... think of it as either having one really smart guy who is quick at solving problems (DDR3), or a group of 8 or so guys (GDDR5). (Numbers pulled out of my ass, but largely irrelevant.)

 

Sure, the 8 guys can probably solve 30 separate problems faster than the 1 guy.

However, give them 8 parts of the same algebra problem and it doesn't really help; they'll fall behind, since guy 2 needs to find out what X is from guy 1, guy 3 needs to find Y from guy 2, etc.

Sure, they can shave off a little time by simplifying their parts down to just needing the variables... but they still need the variables, and until then they just sit there... waiting for the guys at the front of the line to hand them the new info. And since these guys are split up, they have to walk over to tell each other what each number is, and each of them is individually slower than the first guy at figuring things out in the first place.

Once you build up enough dependent variables for a complicated AI in a strategy game... the smart guy has a clear advantage.

 

GPUs are workers... not thinkers. Now, if you're offloading ALL the work onto your workers... and your thinkers aren't doing anything... not even the thinking...

Well, that's bad programming right there. Your workers will get overworked, or you will have to cut back on their normal duties.

AI is parallel processing... you take all the variables you have, work on them in parallel to evaluate as many possible actions as you can, and choose one... a GPU does that... if any variable changes, a new AI pass is made.

You are just not understanding what parallel AI processing means... it is already done in parallel on CPUs, because it needs to be processed that way to be fast and responsive.

The AI processes one decision in parallel with the variables it has at that moment... if a new variable (a later problem) appears, a new parallel AI pass is made to make another decision... and that is done for every object on the screen... so, in parallel.

There is no linear or waiting decision in AI processing. Everything is done in milliseconds (or less).

The more parallel processing power you have, the better the AI, because you can evaluate more options for the final decision, making the AI even more unpredictable.

Just google "artificial intelligence parallel processing"... the basis of AI is to process everything in parallel (a GPU task).
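As a rough picture of that "score every candidate in parallel, then choose one" pattern (the score_action function, the candidate moves, and the game-state fields are all made up for illustration): each candidate's score depends only on the current snapshot of the state, so the scores are independent of each other, which is exactly the shape of work that maps well to a GPU; when the state changes, the whole pass is simply run again.

```python
# Illustrative sketch only: utility-style AI that scores every candidate action
# against a snapshot of the game state and picks the best one.

def score_action(action, state):
    """Each score uses only the shared state snapshot, so every call is
    independent - this map could be farmed out to parallel hardware unchanged."""
    dist = abs(action["target_x"] - state["enemy_x"])
    return action["damage"] - dist * state["terrain_penalty"]

state = {"enemy_x": 12, "terrain_penalty": 0.5}
candidates = [
    {"name": "charge",  "target_x": 12, "damage": 6},
    {"name": "flank",   "target_x": 15, "damage": 8},
    {"name": "retreat", "target_x": 2,  "damage": 0},
]

# Data-parallel step: one independent score per candidate.
scores = [score_action(a, state) for a in candidates]

# Single reduction at the end: pick the best-scoring action.
best = candidates[max(range(len(candidates)), key=scores.__getitem__)]
print("chosen action:", best["name"])

# If a variable changes (a unit moves, damage lands), rebuild the state
# snapshot and run the same pass again.
```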