ReimTime said:
Nautilus said:
Holy cow, Reim has written a gigantic text! I'll try to be more concise!

The really dangerous thing is that we would be creating life. After all, what does it mean to be human? Or rather, to have a soul? It is nothing more than being able to make decisions on your own: observing a situation and deciding based on it. An AI would be exactly that, but a mechanical lifeform instead of a biological one. And not just any being, but most likely a powerful entity far more intelligent than us, since it would think at computational speed, potentially have access to all the information in the world through the internet, and, depending on the level of access it is given or gains, could even be responsible for energy distribution or even nuclear weapons. Like Reim said, just look at MGS 2 or 4 for a good example of what it could be.

Researching this kind of thing without setting guidelines, or even having a plan in case it backfires, is extremely dangerous to mankind in general. Imagine yourself as a being like Dr. Manhattan, with all that power, but being used endlessly by humans for their own pleasure. What is the most likely thing you would do? Use your powers to free yourself. Now imagine a being like that, but with no moral code or even a notion of honor; it could go south really fast. That's why this type of research is really dangerous and should be treated carefully.

A good film that will make you think about all this is Ex Machina. If you are really interested in the subject, I highly suggest you watch it.

Well, I'm glad you read my wall of text, haha. I tried to be concise, but I ended up asking more questions than I answered.

Can't blame you, it's an interesting topic.

On the matter at hand, I don't know if you are aware, but there is already a well-known objection to the Turing test's validity. It was proposed by the American philosopher John Searle and is called the Chinese Room. It is a thought experiment that basically goes like this: a woman is put in a locked room containing only a shelf of books in Chinese, and the only way to communicate with the outside world is through a small hole in the door, through which only small pieces of paper can pass. The woman does not speak Chinese, but is forced to converse, via those sheets of paper, with Chinese speakers on the other side of the door. Those speakers don't know who is behind the door either, so they start writing notes and passing them through the hole to find out, asking things like "Who are you?" or "What is your age?". At first, when the woman receives those papers, she doesn't know what to write, so she resorts to the books to at least answer the people on the other side. Soon she starts associating the Chinese characters and using the books on the shelves as guidebooks to answer the questions, and in this way she maintains a conversation. But in the end she does not understand the conversation itself; she is just producing "automated" responses to the questions put to her.
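Loosely, the room's "understanding-free" behavior can be sketched in code as a plain lookup-table responder. Everything below is made up for illustration: the rule book stands in for the books on the shelves, and the replies are canned strings the responder never interprets.

```python
# Hypothetical sketch of the Chinese Room: incoming notes are matched
# against a rule book, symbol for symbol, and a canned reply is returned.
# The responder manipulates the strings without understanding either side.
RULE_BOOK = {
    "who are you?": "I am a friend.",
    "what is your age?": "I am thirty years old.",
}

def room_reply(note: str) -> str:
    """Look the note up in the rule book; fall back to a stock deflection."""
    return RULE_BOOK.get(note.strip().lower(), "Please ask me something else.")

print(room_reply("Who are you?"))       # a fluent-looking answer
print(room_reply("What is your age?"))  # likewise, purely by pattern match
```

From the outside, the exchange looks like a conversation; inside, it is only table lookup, which is exactly the distinction the thought experiment is driving at.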


What the experiment is getting at is that even though the examiner in the Turing test may conclude, at the end of the test, that the subject is human, he may simply have been fooled by a really clever program. Not that this matters for why AI can be dangerous, but I find it interesting!



My (locked) thread about how difficulty should be a decision for the developers, not the gamers.

https://gamrconnect.vgchartz.com/thread.php?id=241866&page=1