ethomaz said:
Kasz216 said:


I feel like you aren't grasping the core concepts of this and of complicated AI... so I'm going to approach this from a different direction.

Why is DDR3 better for a non-game-focused machine? What is it about the 720's RAM that will make it superior to the PS4's when it comes to multiple apps?

Complicated AI runs better on a GPU with GDDR memory (via CUDA or OpenCL) than on a CPU... that is where I disagree with you. Every kind of processing in a game can be parallelized... so memory latency is not that important, but bandwidth is.

http://what-when-how.com/artificial-intelligence/ia-algorithm-acceleration-using-gpus-artificial-intelligence

Using the stream programming model as well as resources provided by graphics hardware, Artificial Intelligence algorithms can be parallelized and therefore computing-accelerated. The parallel and high-intensive computing nature of this kind of algorithms makes them good candidates for being implemented on the GPU.

It is not only AI... machine learning, mathematics, statistics, computer science in general, etc... every kind of processing that can be parallelized runs better on a GPU with HIGH bandwidth than on a CPU with LOW latency... It's because when you parallelize a process you need a big chunk of the same data to work on... lol, that's the whole point of GDDR over DDR.
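To make the "big chunk of the same data" point concrete, here's a rough sketch I put together (my own illustration, not from the article; it uses NumPy on the CPU just as a stand-in for what a GPU kernel would be doing, and the array sizes are made up):

```python
import numpy as np

# One big contiguous chunk of data: a million particle positions and velocities.
positions  = np.random.rand(1_000_000, 3).astype(np.float32)
velocities = np.random.rand(1_000_000, 3).astype(np.float32)

def integrate(positions, velocities, dt=0.016):
    # Every element gets the identical operation; on a GPU this would be one
    # kernel launch over a million threads, and GDDR bandwidth is what feeds it.
    return positions + velocities * dt

positions = integrate(positions, velocities)
```

Every element gets the same operation, so the limit is how fast memory can stream the chunk in... a bandwidth problem, not a latency one.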

Now to answer your question... a non-game or general-use machine has a lot of small processes doing random tasks that can't be parallelized... these processes work linearly... so you have Skype, a music player, video chat, voice commands, social notifications, e-mail notifications, downloads, etc. etc. etc. running at the same time, each requesting a different kind of data one after the other... so the quick response you get from low latency works better here.

So general applications running on a CPU need low-latency memory, because different regions of memory are being randomly accessed every instant... not one big chunk of the same data being processed with the same operation (GPU parallelized tasks).
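Here's a rough sketch of that opposite pattern too (the app names and fields are hypothetical, just for illustration): lots of tiny, unrelated tasks each poking a different piece of memory, so what matters is how quickly each individual access comes back, not how many bytes per second you can move.

```python
import random

# Hypothetical app states, just for illustration.
app_state = {
    "skype":     {"unread": 0},
    "music":     {"track_pos": 0.0},
    "email":     {"pending": 3},
    "downloads": {"progress": 0.4},
}

def tick():
    # Each task is tiny, touches a different structure, and acts on what it
    # just read: a chain of small, scattered accesses with no big chunk to stream.
    app = random.choice(list(app_state))
    for key, value in app_state[app].items():
        app_state[app][key] = value   # scattered read/write, nothing to batch

for _ in range(10_000):
    tick()
```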

Game processing (graphics, AI, physics, etc.) is parallelized... small, general PC apps are linear.

Games work better with high bandwidth.
General PC apps work better with low latency.

Of course they are trying to create the best of both worlds with DDR4 (low latency with high bandwidth), but right now what we have that best fits gaming is GDDR5... Microsoft and Nintendo are trying to work around the huge bottleneck of DDR3 for gaming by using fast eDRAM and eSRAM.

That's what I think.

Well, first off... using the GPU for AI and physics is going to greatly lower your processing budget for... well, graphics effects, which is going to greatly cut into the PS4's graphics capabilities. Also, the CPU will just be sort of wasted at that point. Doing so more or less squanders the bandwidth advantage, making the use of GDDR5 in the first place... useless.

Aside from which, according to your own link, it doesn't.

"First, not all the algorithms fit for the GPU’s programming model, because GPUs are designed to compute high-intensive parallel algorithms"

Take Civ 5's AI: it wouldn't work well for parallel processing because, well, its decisions can't be processed effectively in parallel, since later problems rely on earlier ones... in other words, there just aren't that many independent variables to process separately, since most things are dependent.

For example: a Mongolian Horse Archer attacks a Greek Phalanx. Roll 1D4 for damage; if the Phalanx's HP ends up > 10, pull back the cavalry and push the spearmen forward. If it ends up lower than 10, attack with the cavalry and roll another 1D4: > 2, attack with the spearmen; < 2, move the spearmen onto a valuable resource.


Now imagine that times about 20 more units needing to be positioned. You can only parallel-process so much, since the MAJORITY of moves require input from other moves, meaning each move has to be calculated one after another. You can get part of each problem done, but they have to be finished in order anyway, making it fairly moot.
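Here's roughly what that example looks like written out (the unit names, dice, and thresholds are just mine from the example above, not anything from actual Civ 5 code): every step needs the result of the step before it, so there's nothing independent to hand out to thousands of GPU threads.

```python
import random

def resolve_turn(phalanx_hp):
    damage = random.randint(1, 4)         # step 1: horse archer rolls 1d4 for damage
    phalanx_hp -= damage

    if phalanx_hp > 10:                   # step 2: branch on step 1's result
        return "pull back the cavalry, push the spearmen forward"

    follow_up = random.randint(1, 4)      # step 3: cavalry attacks, another 1d4
    if follow_up > 2:                     # step 4: branch on step 3's result
        return "attack with the spearmen"
    return "move the spearmen onto the valuable resource"

print(resolve_turn(phalanx_hp=12))
```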

I mean, instead of a computer... think of it as either having one really smart guy who's quick at solving problems (DDR3), or a group of 8 or so guys (GDDR5). (Numbers pulled out of my ass, but largely irrelevant.)

 

Sure, the 8 guys can probably solve 30 separate problems faster than the 1 guy.

However, give them 8 parts of the same algebra problem and it doesn't really help; they'll fall behind. Guy 2 needs to find out what X is from guy 1, guy 3 needs to find Y from guy 2, etc.

Sure, they can shave off a little time by simplifying their part down to just needing the variables... but they still need the variables, and then they just sit there... waiting for the guys at the front of the line to hand them the new info. And since these guys are split up, they have to walk over to each other to pass along each number, and each of them is individually slower than the first guy at figuring stuff out in the first place.
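If you want to put toy numbers on it (the figures are made up, like I said, and this is just my own back-of-the-envelope sketch):

```python
FAST_GUY_TIME = 1.0   # time per step for the one quick solver
SLOW_GUY_TIME = 1.5   # time per step for each member of the group of 8

def independent(problems, time_per_problem, workers):
    # Independent problems split cleanly across workers.
    return problems / workers * time_per_problem

def chained(steps, time_per_step):
    # Step i can't start until step i-1 finishes, so workers beyond the first
    # have nothing to do: total time is just the length of the chain.
    return steps * time_per_step

print(independent(30, FAST_GUY_TIME, workers=1))  # 30.0 -- one guy grinds through them
print(independent(30, SLOW_GUY_TIME, workers=8))  # ~5.6 -- the group wins easily
print(chained(8, FAST_GUY_TIME))                  # 8.0  -- the smart guy on the chain
print(chained(8, SLOW_GUY_TIME))                  # 12.0 -- the group is slower, 7 guys idle
```

On 30 independent problems the group wins easily; on an 8-step chain, the length of the chain sets the time and the extra 7 guys just stand around.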

Once you build up enough dependent variables for a complicated AI in a strategy game... the smart guy has a clear advantage.

 

GPUs are workers... not thinkers. Now if you're offloading ALL the work onto your workers... and your thinkers aren't doing anything... not even the thinking...

Well, that's bad programming right there.  Your workers will get overworked, or you will have to cut back on their normal duties.