Nautilus said:
ReimTime said:

Well, I'm glad you read my wall of text, haha. I tried to be concise, but I ended up asking more questions than I answered.

Can't blame you, it's an interesting topic.

On the matter at hand, I don't know if you've heard of it, but there is already a rebuttal of the Turing test's effectiveness. It was proposed by an American (I believe) and it's called the Chinese Room. It is a thought experiment that basically goes like this: a woman is put in a locked room that only has a shelf with books in Chinese, and the only way to communicate with the outside world is through a small hole, which only small pieces of paper can pass through. In this scenario, the woman does not speak Chinese, but is forced to converse, through the sheets of paper, with Chinese speakers on the other side of the door. Those Chinese speakers also don't know who is behind the door, so they start writing on the papers and sending them through the hole in an attempt to find out, asking things like "Who are you?" or "What is your age?".

At first, when the woman receives those papers, she doesn't know what to write, so she resorts to the books in an attempt to at least answer the people on the other side of the door. Soon she starts associating the characters and uses the books on the shelves as guidebooks to answer the questions, and as such, maintains a conversation. But in the end she does not understand the conversation itself; she is just making "automated" responses to the questions put to her.

 

What the experiment wants to get at is that even though the examiner in the Turing test may conclude at the end that the subject is a human, he may simply have been fooled by a really clever program. Not that this matters much for why AI can be dangerous, but I find it interesting!
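The "automated responses" part is easy to picture in code, by the way. Here's a minimal sketch (Python, with made-up questions and canned answers purely for illustration, not anything from the original thought experiment) of the kind of syntactic lookup the woman is doing: the program can keep a small conversation going, yet nothing in it understands a word of the exchange.

```python
# Illustrative sketch only: the "room" as a lookup table.
# The rulebook maps incoming symbols (questions) to outgoing symbols
# (answers) without attaching any meaning to either side.

RULEBOOK = {
    "who are you?": "I am a friend on the other side of the door.",
    "what is your age?": "I am twenty-seven years old.",
}


def room_reply(note: str) -> str:
    """Reply by matching the note against the rulebook.

    There is no model of what the words mean, only symbol lookup,
    plus a stock fallback when no rule applies.
    """
    return RULEBOOK.get(note.strip().lower(), "Could you ask that another way?")


if __name__ == "__main__":
    for note in ["Who are you?", "What is your age?", "Do you like tea?"]:
        print(note, "->", room_reply(note))
```

A real program that could fool a Turing test examiner would need vastly more rules (or a statistical model standing in for them), but the point stands either way: producing sensible-looking answers doesn't require understanding them.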

Interesting, thanks, I'll look into that!


