
Forums - PC - So what comes after parallel processing?

Speaking as a software engineer and an amateur game developer, I can tell you that the VAST majority of computation for games proceeds in fixed increments, i.e. the frames of the game. You are still throttled by the slowest thread.

The only thing I can think of off the top of my head that could be put on a thread and run without interdependency on other threads' output is non-dynamic music.

I didn't say that increased parallel processing will do nothing, but it's really bumping up against the edge of what is feasible for games. It will come down to making each thread faster, not adding more of them.
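The "throttled by the slowest thread" point is essentially Amdahl's law: the serial fraction of a frame caps the speedup no matter how many cores you add. A minimal Python sketch (the 95% figure is illustrative, not from any real engine):

```python
# Amdahl's law: speedup from n cores when a fraction p of the
# work can be parallelized (the rest stays on one thread).
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of a frame parallelizable, 64 cores give well
# under a 64x speedup; the serial 5% dominates, and the curve
# flattens toward a hard ceiling of 1/(1-p) = 20x.
for n in (2, 8, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 2))
```

This is why adding cores eventually stops paying off for workloads with any serial dependency chain, as frame-based game logic has.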

Better shader compilers will probably help a bit, allowing programming of the GPU as extensively as the CPU, but you still will not see any huge leap in gaming capabilities until it moves off silicon, or some major breakthrough is found to exponentially increase the number of transistors you can put on a chip.



I am a Gauntlet Adventurer.

I strive to improve my living conditions by hoarding gold, food, and sometimes keys and potions. I love adventure, fighting, and particularly winning - especially when there's a prize at stake. I occasionally get lost inside buildings and can't find the exit. I need food badly. What Video Game Character Are You?

Mega Man 9 Challenges: 74%

Waltz Tango Jitterbug Bust a move Headbanging
Bunny Hop Mr. Trigger Happy Double Trouble Mr. Perfect Invincible
Almost Invincible No Coffee Break Air Shoes Mega Diet Encore
Peacekeeper Conservationist Farewell To Arms Gamer's Day Daily Dose
Whomp Wiley! Truly Addicted! Truly Hardcore! Conqueror Vanquisher
Destroyer World Warrior Trusty Sidearm Pack Rat Valued Customer
Shop A Holic Last Man Standing Survivor Hard Rock Heavy Metal
Speed Metal Fantastic 9 Fully Unloaded Blue Bomber Eco Fighter
Marathon Fight Quick Draw G Quick Draw C Quick Draw S Quick Draw H
Quick Draw J Quick Draw P Quick Draw T Quick Draw M Quick Draw X
Nickelbackro said:
Well, an increase in threads per core is one idea; I know Intel is returning Hyper-Threading to the Core line.

 

Hyper-threading is a nice idea borrowed from the PPC line (AltiVec/Velocity Engine), but it requires developers to code specifically to take advantage of it... and unless you see developers doing that (à la Adobe with the Velocity Engine), the tech will be abandoned, much like the PPC in personal computing. Even the PPC in consoles (Apple/MS) does not use AltiVec/Velocity Engine. The Cell offspring uses parts of these technologies to direct traffic, but not the standard as stated in the PPC guidelines.



come play minecraft @  mcg.hansrotech.com

minecraft name: hansrotec

XBL name: Goddog

You can never have too many cores. Each AI can run on its own core. Each pixel can be part of a giant ray tracer, where you dedicate one core per pixel, and it has the refresh time of the video display to generate its next dot. Until you get to 4 trillion cores, an experienced developer can find a way to use them all... You have one core managing a group of cores, each of those managers managed by another, etc... Being able to have thousands of cores all doing a little bit is the only way to get exponential improvements in what can be done.
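The "one core per pixel" idea can be sketched with a worker pool; `shade` here is a hypothetical stand-in for a real per-pixel ray-trace computation, not actual renderer code:

```python
# Sketch of the "one worker per pixel" idea using a process pool.
# Each pixel's value is computed independently, so the work maps
# cleanly onto as many cores as the machine has.
from concurrent.futures import ProcessPoolExecutor

WIDTH, HEIGHT = 4, 3

def shade(pixel):
    # Hypothetical stand-in for a per-pixel ray-trace.
    x, y = pixel
    return (x * 31 + y * 17) % 256  # toy brightness value

if __name__ == "__main__":
    pixels = [(x, y) for y in range(HEIGHT) for x in range(WIDTH)]
    with ProcessPoolExecutor() as pool:
        # map preserves pixel order, so the frame assembles correctly.
        frame = list(pool.map(shade, pixels))
    print(frame[:4])  # -> [0, 31, 62, 93]
```

Per-pixel independence is exactly what makes ray tracing "embarrassingly parallel"; the game-logic threads discussed earlier in the thread are the hard case, not this one.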

Programmers just have to think differently about how they implement things.

@crashman: The number of transistors has increased exponentially since 1965, doubling every two years on average, and is expected to continue doing so for at least another 10-20 years. In fact, as programmers learn to take advantage of massive numbers of cores, some of the extra circuitry for optimizing a single thread will be removed to make room for more cores. For example, a single thread may run 20% slower, but you can get 64 cores instead of 16...

Haven't you heard of Moore's Law? It is far more difficult to make a single thread go faster now, which is why the number of cores has been increasing recently.
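The trade described above (each thread 20% slower, but 64 cores instead of 16) is easy to put numbers on, assuming the idealized best case of perfectly parallel work:

```python
# Back-of-envelope throughput for trading single-thread speed
# for core count, assuming embarrassingly parallel work (an
# idealized best case; real workloads fall short of this).
fast_thread = 1.0          # relative speed of the tuned core
slow_thread = 0.8          # 20% slower per-core speed

throughput_16 = 16 * fast_thread   # 16 tuned cores
throughput_64 = 64 * slow_thread   # 64 simpler, slower cores

print(throughput_16, throughput_64, throughput_64 / throughput_16)
# 16 tuned cores -> 16.0 units; 64 slower cores -> 51.2 units (3.2x)
```

Of course, whether a game can actually use that 3.2x is exactly the point of contention in this thread.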

One thing I think we will see more of is memory built into the chips. Drop the "cache" on the chip and actually run everything inside the CPU. If additional external memory is used, let it be swapped into the chip in a more intelligent manner instead of cached, which is just a kludge anyway.



Of course I have heard of Moore's Law (though it is really a misnomer; it's not a 'law'), and even in the circuits classes I took in college 5 years ago, the prof was talking about how manufacturers have recently been trying to keep up with Moore's Law, rather than Moore's Law predicting the natural flow of development.

From all the hardware people in the know I have ever spoken to, Moore's Law will not keep holding for decades.

And it is not a matter of whether the raw processing power is there; it's whether the architecture fits the software, which massive parallel processing does NOT for games. There is no getting around that.



CrashMan said:
Of course I have heard of Moore's Law (though it is really a misnomer; it's not a 'law'), and even in the circuits classes I took in college 5 years ago, the prof was talking about how manufacturers have recently been trying to keep up with Moore's Law, rather than Moore's Law predicting the natural flow of development.

From all the hardware people in the know I have ever spoken to, Moore's Law will not keep holding for decades.

And it is not a matter of whether the raw processing power is there; it's whether the architecture fits the software, which massive parallel processing does NOT for games. There is no getting around that.

 

You know, this is exactly what people have been saying about Moore's Law since it was published; you can find articles saying what you posted here almost every year.





It's been true for the last 43 years (4 decades already! So much for the people you thought were in the know), and it may stop or slow eventually, but probably not any time soon... Besides, they could always start doing multiple layers on a chip and design 3D chips, not to mention new areas such as:

http://news.cnet.com/8301-10787_3-10096966-60.html


No offense to your professor, but this is what Intel was saying 5 years ago:

http://news.cnet.com/2100-1001-984051.html

Which basically said they were covered for at least a decade and expected to have ideas for the next step before that decade passed.




^That's the whole point. They are working to keep up with Moore's Law. Moore's Law is not a mirror of the natural flow of the technology.

There is no getting around the fact that silicon chips (as they are now) are not the future of the computing industry. For a real leap (like the one from tubes to transistors, or from transistors to ICs) a whole new technology will need to be created.




After parallel processing comes quantum processing, and then we'll get phase engines and go quantum speed...



No one said there wouldn't be work involved. A real leap in technology is not going to be accomplished with any less work.

They haven't even really started stacking layers (sort of they do, for wires, in a 2D way). Lots more cooling issues would be involved, but that could be another decade... 2, 4, 8, 16, 32 layers...

I would much prefer a 100 GHz single-core CPU to 128 2 GHz cores. One fast CPU is much easier to program for, and it can always emulate lots of slower cores. That said, I expect to see 128 2 GHz cores on a single commodity chip before I see a commodity single-chip 100 GHz CPU. By commodity, say



jlauro said:
With enough cores, you could design the CPU such that, instead of parallelizing at the normal thread level, you could have functions run inside cores. As one function calls another, it can literally be processing the values as they are coming in. With a single core (or only a few cores) you are pushing all the values onto the stack, switching to the function, having the function pull them back off to operate on, pausing the calling function, and then returning once the processing is done. With tons of cores, the function can begin processing the data as it arrives while the calling function keeps calculating the next values to process; both cores run concurrently. Some of that could be done by the compilers, even for problems that don't naturally lend themselves to massive numbers of cores directly. As functions go hundreds of levels deep, think of the speedups that are possible.

 

The speedup you're talking about assumes that you're not getting any cache misses and that your functions can be called without relying on data from another function. Whenever those things happen, you're just going to leave one of your cores idling, and with that happening hundreds or thousands of times, you're really just going to rack up hundreds of billions of wasted cycles. You can also only add so many cores before the speed of light limits how well they can communicate, even with a 3D chip layout. In theory you can shrink the fab process to reduce the distances, but then you eventually run into Heisenberg problems. This means you're going to hit a maximum number of cores per chip, at which point adding more cores slows the whole thing down.
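For what it's worth, the scheme quoted above (a callee consuming values while the caller is still producing them) can be sketched with two threads and a queue; threads here are just a stand-in for the dedicated cores the post imagines:

```python
# Sketch of the quoted scheme: a "caller" streams values to a
# "callee" through a queue, so both run concurrently instead of
# the caller pausing while the callee works.
import queue
import threading

def caller(q, n):
    for i in range(n):
        q.put(i * i)       # hand each value off as it is computed
    q.put(None)            # sentinel: no more data

def callee(q, results):
    total = 0
    while (item := q.get()) is not None:
        total += item      # consume each value as it arrives
    results.append(total)

q, results = queue.Queue(), []
t1 = threading.Thread(target=caller, args=(q, 10))
t2 = threading.Thread(target=callee, args=(q, results))
t1.start(); t2.start()
t1.join(); t2.join()
print(results[0])  # sum of squares 0..9 -> 285
```

Note that the callee blocks in `q.get()` whenever the caller falls behind, which is exactly the idle-core problem raised in the reply above.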

 



Proud member of the Sonic Support Squad