Played_Out said:
sc94597 said:

Don't post if you don't know what quantum computers are. When quantum computers become mainstream in about 20 years (my estimate), how do you think that will affect gaming in terms of graphics, AI, and physics? I already know that the AI would rival the human brain when quantum computers are produced. Graphics would look almost super realistic imo, and physics would be a lot more powerful than what we have now, probably 1000x, but not real-world physics, seeing as real-world physics requires 1,000-10,000 computers to generate, and millions if you are doing extreme physics such as determining how supernovae work.

http://en.wikipedia.org/wiki/Quantum_computer - the Wikipedia article on quantum computers.


That is absolute bollocks.

Even if it were theoretically possible, it would still require someone to actually program the AI routines. It's not going to happen in the scientific field within the next 20 years and it sure as shit ain't gonna happen in the field of videogames!


Uhm, not quite...

Neural networks are already to the point where a simple setup can allow a computer to learn complex rules based on simple input. The principle is similar to how a child learns what is acceptable social behavior: the computer is given input, produces output, and then receives feedback on how it did.

As a very simple example: suppose the input is a picture and the computer is asked to return a value equal to the number of faces in the picture.

So pictures are given as the inputs and the computer essentially spits out garbage the first several thousand times... but as it is told wrong wrong wrong wrong... right... wrong wrong wrong wrong wrong... right... it actually begins to learn what it's doing right and wrong through use of an algorithm (more on that in a sec) that takes advantage of the feedback it is being given. Now to a human this would be ridiculously tedious because of the completely clueless level we are starting from, and it would appear to take forever to accomplish anything at all... but for a computer that can examine 2 pictures a second, after only an hour it will have looked at 7,200 pictures and received 7,200 hints about what it is supposed to do. After a week it will have ~1.21m hints about how to do its task... and it can keep learning from there, although it is worth noting that, from what I know, the learning curve is logarithmic. For those mathematically challenged who don't know what a logarithmic graph looks like...
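
(If you want to see what that feedback loop looks like in practice, here is a rough Python sketch. It's purely illustrative and nothing like how a real face detector is built: the "pictures" are just 8-element binary vectors and the "number of faces" is the number of lit pixels, because the point is the wrong/wrong/right feedback cycle, not image recognition.)

import numpy as np

rng = np.random.default_rng(0)

# toy data: 8-pixel "pictures", label = how many pixels are lit
X = rng.integers(0, 2, size=(5000, 8)).astype(float)
y = X.sum(axis=1)

# one hidden layer of weights, randomly initialised (the "clueless" starting point)
W1 = rng.normal(0, 0.5, size=(8, 16))
W2 = rng.normal(0, 0.5, size=(16, 1))

def forward(x):
    h = np.tanh(x @ W1)         # hidden layer
    return h, (h @ W2).ravel()  # the network's guess at the count

lr = 0.01
for step in range(2000):
    i = rng.integers(0, len(X), size=32)   # look at a handful of pictures
    h, guess = forward(X[i])
    err = guess - y[i]                     # the wrong/right feedback
    # nudge the weights in the direction that reduces the error
    gW2 = h.T @ err[:, None] / len(i)
    gh = (err[:, None] @ W2.T) * (1 - h ** 2)
    gW1 = X[i].T @ gh / len(i)
    W2 -= lr * gW2
    W1 -= lr * gW1
    if step % 500 == 0:
        print(f"step {step}: average miss = {abs(err).mean():.2f}")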

Ignore the binary search bit; the shape of the graph is what I'm trying to illustrate (although a binary search is Big-O of log n as well).

For those who still don't get it: the computer will learn quickly at first, but progress will always come with diminishing returns in terms of time invested versus improvement returned. But since a snapshot of a network can be taken and put into use while another instance of it continues to learn, it really doesn't hurt to let it keep going if you give the computer time to do so. They also continue to learn while being used anyway, so over time they just get better.
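
(The snapshot idea is nothing fancy, by the way; it really is just copying the current weights out and letting the original keep learning. A toy illustration, with made-up weight shapes and the "training" stubbed out as random nudges:)

import copy
import numpy as np

rng = np.random.default_rng(0)
live = {"W1": rng.normal(size=(8, 16)), "W2": rng.normal(size=(16, 1))}

snapshot = copy.deepcopy(live)   # this frozen copy is what you'd put into use

for _ in range(100):             # meanwhile the live network keeps "training"
    live["W1"] += rng.normal(scale=0.01, size=live["W1"].shape)
    live["W2"] += rng.normal(scale=0.01, size=live["W2"].shape)

print(np.allclose(snapshot["W1"], live["W1"]))   # False -- they've diverged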

As for the algorithm, its job is basically to look at the input, the output, and the feedback it's given at the end of each attempt, and determine how to adjust the network's decision making to get a better result... and then the next go-around it determines "OK, that was better" or "OK, that was worse" based on how many neurons were getting negative/positive feedback. Now the algorithm is usually generic and not designed for the specific problem. Its goal is to focus, without preconceived notions, on what seems to be working, and if it has exhausted that method and is still bad at the task, it will pick a radically different approach and start from scratch.
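
(Here's a toy illustration of that adjust-and-retry loop. To be clear, this is just random nudging with a restart rule, not the actual algorithms researchers use -- backpropagation and friends are much smarter about how they nudge -- but the loop structure is the part I'm describing. The feedback() scoring function is made up.)

import numpy as np

rng = np.random.default_rng(1)

def feedback(w):
    """Stand-in scoring function: lower is better (the wrong/right signal)."""
    target = np.array([3.0, -1.0, 0.5])
    return float(np.sum((w - target) ** 2))

w = rng.normal(size=3)             # completely random initial approach
best, stalled = feedback(w), 0

for attempt in range(5000):
    candidate = w + rng.normal(scale=0.1, size=3)   # try a small adjustment
    score = feedback(candidate)
    if score < best:               # "ok, that was better" -- keep it
        w, best, stalled = candidate, score, 0
    else:                          # "ok, that was worse" -- note the failure
        stalled += 1
    if stalled > 500:              # this approach is exhausted:
        w = rng.normal(scale=5.0, size=3)           # start over somewhere new
        best, stalled = feedback(w), 0

print("final weights:", np.round(w, 2), "error:", round(best, 4))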

Then all you do is build one network and train it on 10 separate computers, and for each one you train you will get a different result, because it uses a bit of random seeding to make decisions in the early stages and the initial approach is usually completely random. In terms of efficiency it doesn't learn like a person does as of yet, but there is already work being done to build a "super-algorithm", so to speak, through use of a neural network... in other words, they are teaching a computer how to learn better... which imo is absolutely the right direction.
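
(A tiny illustration of the "train it 10 times, get 10 different results" point; the train() stub here is made up and just stands in for a full training run with a given random seed:)

import numpy as np

def train(seed, steps=200):
    rng = np.random.default_rng(seed)
    target = np.array([1.0, -2.0, 0.5])
    w = rng.normal(scale=3.0, size=3)   # random initial approach
    for _ in range(steps):
        # noisy steps toward a solution; the noise stands in for the
        # randomness of real training
        w -= 0.05 * (w - target) + rng.normal(scale=0.02, size=3)
    return w

# same "network", ten machines, ten seeds, ten slightly different results
for seed in range(10):
    print(f"seed {seed}: {np.round(train(seed), 3)}")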

Now the reason it's referred to as a neural network is because it actually attempts to emulate the effect of a person's neural network built into their brain. The brain (like this method, but the brain is WAY better at it) attempts to take tons and tons of shortcuts and weed out unnecessary steps. A great example that many people are familiar with is the e-mail going aurond taht sowhs you taht you dnot aualclty need msot of the iofrmatoin that the middle letters of a word provide, and actually read based on length and the first and last letter of a word. You might notice that it's misspelled, but if you learned to read like most folks it shouldn't really prevent you from getting the message.

Now once you understand that a computer can learn on its own, you only have to realize that the next step is layering that complexity. If you have 100 networks, each with their own separate tasks they've been trained for, you can combine them by simply building a control neural network which receives a small set of inputs, and train it to recognize what task it's being asked to do and to select the correct type of network to utilize... and even to recognize that it doesn't know what to do and ask for help.
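
(Rough sketch of that layering idea; the specialist networks are stand-in functions and the names count_faces/read_plate are made up, but the routing-plus-"ask for help" structure is the point:)

from typing import Callable, Dict

def count_faces(picture) -> int:
    return 0        # placeholder for a trained face-counting network

def read_plate(picture) -> str:
    return "???"    # placeholder for a trained licence-plate reader

SPECIALISTS: Dict[str, Callable] = {
    "count_faces": count_faces,
    "read_plate": read_plate,
}

def controller(task: str, data):
    """The 'control network': pick the right specialist, or admit defeat."""
    handler = SPECIALISTS.get(task)
    if handler is None:
        return f"I don't know how to '{task}' -- asking for help"
    return handler(data)

print(controller("count_faces", "holiday_photo.jpg"))
print(controller("translate", "bonjour"))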

The big thing holding neural networks back is computing time on some of the bad boys of the supercomputer world. If we move into quantum computers in the next 20 years, we are talking about a huge leap in the power available to any researcher who wants to train a network. And even without quantum computers, we will still be using computers that are approximately 10,321 times faster in 20 years according to Moore's law (assuming one doubling every 18 months)... which even Kaku says still has another 20 years or so.
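
(In case anyone wonders where the 10,321x number comes from, it's just compound doubling under that 18-month rule of thumb:)

doublings = 20 / 1.5      # 20 years at one doubling every 18 months
speedup = 2 ** doublings
print(round(speedup))     # ~10321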

 

If you think about all of this, it means that, similar to how we train people to do all of these tasks, we will train computers... it takes a bit longer, sure... but they don't forget, and they can easily explain it to other computers within a few minutes by simply transferring the information on how the network is built... even better is that the networks themselves can be described in relatively small files, and with a sort of "Neural Interpreter" (so to speak) these files could be opened and the network built on the spot according to specification whenever that task needed to be done, and then once completed it would unload the network to free up system resources for other tasks.
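
(The "small file + interpreter" bit is also less exotic than it sounds; trained weights are just a few arrays, so something like this would do. The filename and the W1/W2 layout are just carried over from the earlier toy face-counting sketch:)

import numpy as np

def save_network(path, W1, W2):
    np.savez(path, W1=W1, W2=W2)        # the whole network in one small file

def load_and_run(path, picture):
    weights = np.load(path)             # rebuild the network on the spot...
    h = np.tanh(picture @ weights["W1"])
    guess = (h @ weights["W2"]).item()
    weights.close()                     # ...then unload it again
    return guess

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(8, 16)), rng.normal(size=(16, 1))
save_network("face_counter.npz", W1, W2)
print(load_and_run("face_counter.npz", np.ones(8)))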

Now it is important to keep in mind that I oversimplified the tasks a bit...there is no reason why a single network couldn't learn to handle all of the visual recognition including association of names to faces...provided it was given the computing time needed to learn that.

In conclusion, you're wrong that each routine needs to be coded tediously, but you are correct that 20 years is probably a bit of an optimistic timeline.

PS - The majority of my info comes from conversations with friends who are grad students... some is second-hand info... some is third-hand... so while I am fairly confident that the basic concepts and overall idea are accurate, please please do not assume that my specifics are dead on, as I am NOT claiming to be an expert. Consider this fair warning.

edit: PPS - If you happen to know for a fact that something I've said is wrong, please PM me, as I would be glad to not only learn more about it but correct the mistake here as well. I just don't want to clutter this thread with an off-topic discussion.



To Each Man, Responsibility