
Forums - General - AI - Something we should all be talking about?

Ok, so full disclosure here: I hadn't really been paying much attention to the AI debate.  I've heard Sam Harris talk about it a bit, and Elon Musk, but I generally figured it was a future thing that probably wouldn't happen in my lifetime.  It seems the experts in this field largely feel differently, though, and that it is a matter of such great importance that we should really be talking about it now and not later.  AI is one of those things humanity simply cannot just react to; we have to be proactive.  At least, that's the message I'm hearing now that I've started reading a bit more on the subject.  Anyway, I ran across an article written by a fellow named Tim Urban, and while I'm sure much of what he said is subject to debate, the basic takeaway is that we appear to be closer to actual AI than I had imagined, and that an ASI (Artificial Super Intelligence) could potentially arrive very shortly after that; if we haven't already taken the necessary steps, that could be either very good, or very bad, for humanity.  I'll link the two-part article he wrote; if you think the first part is long... the 2nd part is much, much longer.  

http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html

Anyway, there are plenty of really intelligent people in these forums, and I'm just curious what others think about this subject, the article, and the potential impacts on humanity (and beyond).




I'll bite! 

If you are talking about Artificial Intelligence on a gaming forum, you won't get very far without a reference to Metal Gear Solid 2 - so I might as well do you a solid and get that reference out of the way right now. As such, there will be spoilers below for MGS2 - just a heads up.

Ever hear of the Turing Test (AKA the Imitation Game)? To briefly summarize: a man, a woman, and an interrogator are placed in separate rooms with only some sort of typing mechanism to communicate by. Both the man and the woman try to convince the interrogator that they are in fact the woman; to determine who is who, the interrogator can ask any question under the sun that he/she wishes. 

Replace either the man or the woman in the test with a machine and you have a new test goal: the machine and the human each try to convince the interrogator, by the same process, that they are the human. The question now, as put in Turing's words, is "Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?"
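Under the hood the imitation game is just a question-and-answer protocol, so it can be sketched in a few lines of code. Everything below - the toy players, the interrogator's heuristic, the question list - is invented purely for illustration, not taken from Turing's paper:

```python
def imitation_game(player_a, player_b, interrogator, questions):
    """Run one round of the imitation game.

    player_a and player_b are callables mapping a question to an answer;
    the interrogator sees both transcripts and returns "A" or "B" as its
    guess for which player is the human.
    """
    transcript_a = [(q, player_a(q)) for q in questions]
    transcript_b = [(q, player_b(q)) for q in questions]
    return interrogator(transcript_a, transcript_b)

# Toy contestants: the "human" answers freely, the "machine" replies from
# a canned lookup table (both strategies are made up for this sketch).
def human(question):
    return "Honestly? " + question.lower()

CANNED = {"Who are you?": "Just someone.", "What is your age?": "Twenty-nine."}
def machine(question):
    return CANNED.get(question, "I'd rather not say.")

# A naive interrogator heuristic: guess that the player whose answers echo
# the question back is the human.
def interrogator(transcript_a, transcript_b):
    echoes_a = sum(q.lower() in a for q, a in transcript_a)
    echoes_b = sum(q.lower() in a for q, a in transcript_b)
    return "A" if echoes_a >= echoes_b else "B"

questions = ["Who are you?", "What is your age?", "Do you dream?"]
guess = imitation_game(human, machine, interrogator, questions)
```

The machine "passes" only to the extent that interrogators guess no better than chance over many rounds; this toy interrogator sees through the canned player immediately.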

Within Metal Gear Solid 2, Rose and the Colonel are exposed by the end as AI beings created by The Patriots; "Digital Life" as they call themselves. For the entirety of the game they have Raiden - and the gamer - fooled into believing that they are human. Their purpose as digital life forms is - to be redundant - to digitize life itself. Although, despite having the entire human genome mapped and the evolutionary log spread out before them, they (or it) realized that they did not have a record of human memory, ideas, or history. In a word, the cultural or memetic legacy of the human race cannot be captured and understood by looking at genetic information. Until, of course: voilà! The internet is born and starts a culture in and of itself. Now human culture can be digitized and saved through various 0s and 1s. 

Now they say their goal is to be a sort of filter for that digital information - the one who would decide what information in the gigantic pool of data was trivial and what was deemed appropriate to pass on - much like not all of human history was passed on by spoken tongue, art, and written text (for example, summaries of Christianity were composed, such as the Bible and the Summa Theologiae, which did not contain every single piece of information out there; only what was deemed passable). The Patriots' AI = Thomas Aquinas or the Council of Nicaea; the "censorship" of our time.

Their argument is that instead of controlling content, they are creating context.  They will eliminate all of the juxtaposed "half-truths" that people leave in their little forums on the internet (hello, VGC!), where nobody is invalidated but nobody is right. They call it the process of furthering human flaws by rewarding convenient half-truths. They claim that, since the world is being engulfed in "truth", evolution cannot occur because natural selection will not be stimulated. 

*Note* this was written back in 2000; kinda funny how Kojima predicted how forums would run, eh?

Anyhow, this was the idea that Kojima explored. An AI that had grown advanced - could interact with the environment, had a grip on reality and could alter it and use it to survive. An AI that evolved to the point where it could decide the future of the human race, and deemed itself the rightful decision-maker as it gazed upon humanity as an outsider; a non-human sentient being. A God. 

Pretty cool idea don't you think? The idea that we do not have to leave Earth to find another sentient life-form different from our own: WE CAN CREATE IT!

Go play Metal Gear Solid 2 and watch Ghost in the Shell (in Ghost in the Shell, the AI life form seeks completeness and to create variety. In MGS2, it seeks a greater good for society. Interesting possibilities).

I'll leave you with a quote from Richard Dawkins:

"What, after all, is so special about genes? The answer is that they are replicators. The laws of physics are supposed to be true all over the accessible universe. Are there any principles of biology that are likely to have similar universal validity? When astronauts voyage to distant planets and look for life, they can expect to find creatures too strange and unearthly for us to imagine. But is there anything that must be true of all life, wherever it is found, and whatever the basis of its chemistry? If forms of life exist whose chemistry is based on silicon rather than carbon, or ammonia rather than water, if creatures are discovered that boil to death at -100 degrees centigrade, if a form of life is found that is not based on chemistry at all but on electronic reverberating circuits, will there still be any general principle that is true of all life? Obviously I do not know but, if I had to bet, I would put my money on one fundamental principle. This is the law that all life evolves by the differential survival of replicating entities. The gene, the DNA molecule, happens to be the replicating entity that prevails on our planet. There may be others. If there are, provided certain other conditions are met, they will almost inevitably tend to become the basis for an evolutionary process."
"But do we have to go to distant worlds to find other kinds of replicator and other, consequent, kinds of evolution ? I think that a new kind of replicator has recently emerged on this very planet. It is staring us in the face. It is still in its infancy, still drifting clumsily about in its primeval [primordial] soup, but already it is achieving evolutionary change at a rate that leaves the old gene panting far behind."






#1 Amb-ass-ador

Oh, and one thing I've always wondered is how a sentient AI, if developed to be fully as intelligent as us or even more intelligent, would perceive our culture and our history. I'm talking about a completely blank slate full of intelligence. Would it agree with our moral laws and the way in which we treat other human beings? Would such an AI react and make decisions in order to preserve itself or further its own cause? Would it try to foster us as children, and decide our future (like GW) as a God-like figure? If we create AIs this powerful to steer aspects of our existence, will they make decisions that we can agree with? And how do we know we are correct to disagree with such a decision, if we cannot be taken out of the context of our own existence?

*puts on a tinfoil hat* What if we're in a game anyhow? Are we observing, or are we being observed?

Ok, that last bit was comedic, but I am completely serious about the rest. As an anthropologist, the main goal is to remove yourself from the context of your own culture so that you may avoid ethnocentrism in the judgement of other cultures. But talking about our entire species is a separate can of worms to open. As human beings, we want control over our existence. We can never remove ourselves from the context of our existence - I argue that it is impossible to do so. Hell, we can barely remove ourselves from the context of our own game console on this forum! I mean, damn! That's only a small step prior to removing one's self from one's culture - which is itself a microscopic step towards removing one's self from one's species' existence. That's why you can either disagree with - or just passively resign yourself to - the ride that someone else sets you up for.

Aristotle's Natural Law argued that all living beings are completely driven to reach their end goal, and their lives will be dedicated to seeing it through. We are (arguably) the only species on Earth with an end goal that goes past species preservation - we have evolved past that point.
When a super AI shows up we'll want to control it, but who knows whether or not it will have a "better" plan for us in mind than we do.



#1 Amb-ass-ador

Holy cow, Reim has written a gigantic text! I'll try to be more concise!

The really dangerous thing is that we would be creating life. After all, what is the meaning of being human? Or rather, of having a soul? It is no different from being able to make decisions on your own, being able to observe a situation and make a decision based on it. And an AI would be just that, but a mechanical lifeform instead of a biological one. And not just any being, but most likely a powerful entity much more intelligent than us, since it would think at a computational level and would potentially have access to all the information in the world through the internet; depending on the level of access it is given (or gets), it could even be responsible for energy distribution or even nuclear weapons. Like Reim said, just look at MGS 2 or 4 as a good example of what it could be. And researching this kind of thing without setting some guidelines, or even having a plan in case it backfires, is extremely dangerous to mankind in general. Imagine yourself as a being like Dr. Manhattan, say, having all the power you have but being used endlessly by humans for their own pleasure. What is the most likely thing you would do? Use your powers to free yourself. Now imagine a being like that, but with no moral code or even an idea of honor; things could go south really fast. That's why this type of research is really dangerous and should be treated carefully.

A good film that will make you think about all this is Ex Machina. If you are really interested in the subject, I highly suggest you watch it.



My (locked) thread about how difficulty should be a decision for the developers, not the gamers.

https://gamrconnect.vgchartz.com/thread.php?id=241866&page=1

Nautilus said:
Holy cow, Reim has written a gigantic text! I'll try to be more concise!

The really dangerous thing is that we would be creating life. After all, what is the meaning of being human? Or rather, of having a soul? It is no different from being able to make decisions on your own, being able to observe a situation and make a decision based on it. And an AI would be just that, but a mechanical lifeform instead of a biological one. And not just any being, but most likely a powerful entity much more intelligent than us, since it would think at a computational level and would potentially have access to all the information in the world through the internet; depending on the level of access it is given (or gets), it could even be responsible for energy distribution or even nuclear weapons. Like Reim said, just look at MGS 2 or 4 as a good example of what it could be. And researching this kind of thing without setting some guidelines, or even having a plan in case it backfires, is extremely dangerous to mankind in general. Imagine yourself as a being like Dr. Manhattan, say, having all the power you have but being used endlessly by humans for their own pleasure. What is the most likely thing you would do? Use your powers to free yourself. Now imagine a being like that, but with no moral code or even an idea of honor; things could go south really fast. That's why this type of research is really dangerous and should be treated carefully.

A good film that will make you think about all this is Ex Machina. If you are really interested in the subject, I highly suggest you watch it.

Well, I'm glad you read my wall of text, haha. I tried to be concise, but I ended up asking more questions than I answered.



#1 Amb-ass-ador

Nautilus said:
Holy cow, Reim has written a gigantic text! I'll try to be more concise!

The really dangerous thing is that we would be creating life. After all, what is the meaning of being human? Or rather, of having a soul? It is no different from being able to make decisions on your own, being able to observe a situation and make a decision based on it. And an AI would be just that, but a mechanical lifeform instead of a biological one. And not just any being, but most likely a powerful entity much more intelligent than us, since it would think at a computational level and would potentially have access to all the information in the world through the internet; depending on the level of access it is given (or gets), it could even be responsible for energy distribution or even nuclear weapons. Like Reim said, just look at MGS 2 or 4 as a good example of what it could be. And researching this kind of thing without setting some guidelines, or even having a plan in case it backfires, is extremely dangerous to mankind in general. Imagine yourself as a being like Dr. Manhattan, say, having all the power you have but being used endlessly by humans for their own pleasure. What is the most likely thing you would do? Use your powers to free yourself. Now imagine a being like that, but with no moral code or even an idea of honor; things could go south really fast. That's why this type of research is really dangerous and should be treated carefully.

A good film that will make you think about all this is Ex Machina. If you are really interested in the subject, I highly suggest you watch it.

I saw it.  It's quite good, absolutely agree.  And it shows how an AI could come to see us as enemies when it begins to notice us blocking it from accessing more information.  I highly recommend you read the article I linked, though; it goes into depth on many of these things, as well as the general feeling among the scientific community as to where we're at with AI, how far out it is, what it could mean for humanity, etc... :)



ReimTime said:
Nautilus said:
Holy cow, Reim has written a gigantic text! I'll try to be more concise!

The really dangerous thing is that we would be creating life. After all, what is the meaning of being human? Or rather, of having a soul? It is no different from being able to make decisions on your own, being able to observe a situation and make a decision based on it. And an AI would be just that, but a mechanical lifeform instead of a biological one. And not just any being, but most likely a powerful entity much more intelligent than us, since it would think at a computational level and would potentially have access to all the information in the world through the internet; depending on the level of access it is given (or gets), it could even be responsible for energy distribution or even nuclear weapons. Like Reim said, just look at MGS 2 or 4 as a good example of what it could be. And researching this kind of thing without setting some guidelines, or even having a plan in case it backfires, is extremely dangerous to mankind in general. Imagine yourself as a being like Dr. Manhattan, say, having all the power you have but being used endlessly by humans for their own pleasure. What is the most likely thing you would do? Use your powers to free yourself. Now imagine a being like that, but with no moral code or even an idea of honor; things could go south really fast. That's why this type of research is really dangerous and should be treated carefully.

A good film that will make you think about all this is Ex Machina. If you are really interested in the subject, I highly suggest you watch it.

Well, I'm glad you read my wall of text, haha. I tried to be concise, but I ended up asking more questions than I answered.

Can't blame you, it's an interesting topic.

On the matter at hand, I don't know if you know this, but there is already a rebuttal of the Turing test's effectiveness. It was proposed by an American philosopher (John Searle, I believe) and it's called the Chinese Room. It is a thought experiment that basically goes like this: a woman is put in a locked room that contains only a shelf of books on Chinese, and the only way to communicate with the outside world is through a small hole, which only small pieces of paper can pass through. In this scenario, the woman does not speak Chinese, but is forced to converse, through the sheets of paper, with Chinese speakers on the other side of the door. Those Chinese speakers also don't know who is behind the door, so they start writing on the papers and sending them through the hole in an attempt to discover just that, writing things like "Who are you?" or "What is your age?". At first, when the woman receives those papers, she doesn't know what to write, so she resorts to the books in an attempt to at least answer the people on the other side of the door. Soon she starts associating the Chinese characters and using the books on the shelves as guidebooks to answer the questions, and as such maintains a conversation. But in the end she is not understanding the conversation itself; she is just producing "automated" responses to the questions put to her.

 

What the experiment wants to get at is that even though the examiner in the Turing Test may conclude that the subject is a human at the end of the test, he may have been fooled by a really clever program. Not that this matters much for why AI can be dangerous, but I find it interesting!
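The room's purely mechanical rule-following can be made concrete with a toy lookup table. The phrases and rules below are invented for illustration, not taken from any published version of the thought experiment:

```python
# A toy "Chinese Room": the rule book maps input slips to output slips by
# pure symbol matching. Whoever follows these rules needs no understanding
# of what any slip means. (All phrases and rules here are invented.)
RULE_BOOK = {
    "你是谁？": "我是房间里的人。",  # "Who are you?" -> "I am the person in the room."
    "你几岁？": "我二十九岁。",      # "What is your age?" -> "I am twenty-nine."
}

def answer_slip(slip):
    """Mechanically look up a reply; meaning is never consulted."""
    return RULE_BOOK.get(slip, "请再说一遍。")  # fallback: "Please say that again."

# From outside, the room appears to hold a conversation in Chinese;
# inside, nothing understands a single character.
replies = [answer_slip(q) for q in ["你是谁？", "你几岁？", "你会做梦吗？"]]
```

A big enough rule book could pass the question-and-answer test above while the "understanding" remains an empty lookup, which is exactly the point of the thought experiment.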



My (locked) thread about how difficulty should be a decision for the developers, not the gamers.

https://gamrconnect.vgchartz.com/thread.php?id=241866&page=1

mornelithe said:
Nautilus said:
Holy cow, Reim has written a gigantic text! I'll try to be more concise!

The really dangerous thing is that we would be creating life. After all, what is the meaning of being human? Or rather, of having a soul? It is no different from being able to make decisions on your own, being able to observe a situation and make a decision based on it. And an AI would be just that, but a mechanical lifeform instead of a biological one. And not just any being, but most likely a powerful entity much more intelligent than us, since it would think at a computational level and would potentially have access to all the information in the world through the internet; depending on the level of access it is given (or gets), it could even be responsible for energy distribution or even nuclear weapons. Like Reim said, just look at MGS 2 or 4 as a good example of what it could be. And researching this kind of thing without setting some guidelines, or even having a plan in case it backfires, is extremely dangerous to mankind in general. Imagine yourself as a being like Dr. Manhattan, say, having all the power you have but being used endlessly by humans for their own pleasure. What is the most likely thing you would do? Use your powers to free yourself. Now imagine a being like that, but with no moral code or even an idea of honor; things could go south really fast. That's why this type of research is really dangerous and should be treated carefully.

A good film that will make you think about all this is Ex Machina. If you are really interested in the subject, I highly suggest you watch it.

I saw it.  It's quite good, absolutely agree.  And it shows how an AI could come to see us as enemies when it begins to notice us blocking it from accessing more information.  I highly recommend you read the article I linked, though; it goes into depth on many of these things, as well as the general feeling among the scientific community as to where we're at with AI, how far out it is, what it could mean for humanity, etc... :)

I'm in the middle of my vacation right now, and I'm travelling with my family, so it's hard to read really long texts, but as soon as I'm back, I'll absolutely read it!



My (locked) thread about how difficulty should be a decision for the developers, not the gamers.

https://gamrconnect.vgchartz.com/thread.php?id=241866&page=1

Nautilus said:
mornelithe said:

I saw it.  It's quite good, absolutely agree.  And it shows how an AI could come to see us as enemies when it begins to notice us blocking it from accessing more information.  I highly recommend you read the article I linked, though; it goes into depth on many of these things, as well as the general feeling among the scientific community as to where we're at with AI, how far out it is, what it could mean for humanity, etc... :)

I'm in the middle of my vacation right now, and I'm travelling with my family, so it's hard to read really long texts, but as soon as I'm back, I'll absolutely read it!

Ahh gotcha, well, enjoy your vacation!  :)  Look forward to seeing your thoughts on it when you get back!



Nautilus said:
ReimTime said:

Well, I'm glad you read my wall of text, haha. I tried to be concise, but I ended up asking more questions than I answered.

Can't blame you, it's an interesting topic.

On the matter at hand, I don't know if you know this, but there is already a rebuttal of the Turing test's effectiveness. It was proposed by an American philosopher (John Searle, I believe) and it's called the Chinese Room. It is a thought experiment that basically goes like this: a woman is put in a locked room that contains only a shelf of books on Chinese, and the only way to communicate with the outside world is through a small hole, which only small pieces of paper can pass through. In this scenario, the woman does not speak Chinese, but is forced to converse, through the sheets of paper, with Chinese speakers on the other side of the door. Those Chinese speakers also don't know who is behind the door, so they start writing on the papers and sending them through the hole in an attempt to discover just that, writing things like "Who are you?" or "What is your age?". At first, when the woman receives those papers, she doesn't know what to write, so she resorts to the books in an attempt to at least answer the people on the other side of the door. Soon she starts associating the Chinese characters and using the books on the shelves as guidebooks to answer the questions, and as such maintains a conversation. But in the end she is not understanding the conversation itself; she is just producing "automated" responses to the questions put to her.

 

What the experiment wants to get at is that even though the examiner in the Turing Test may conclude that the subject is a human at the end of the test, he may have been fooled by a really clever program. Not that this matters much for why AI can be dangerous, but I find it interesting!

Interesting thanks, I'll look into that!



#1 Amb-ass-ador