John2290 said:

That's why it's called the singularity, man. Who knows? I guess we won't until if or when it happens. In my opinion it really depends on how the AI is created, and I see the only solution as simulating the existence of AI within a virtually created world first, or a society just like ours, and choosing the best entity that comes out of that for the tasks at hand, one that would also fit the morality of the times. Perhaps we are the product of another universe in which they reached the technological capability to create a simulation, and "they" are doing just that: testing for a capable entity for some purpose or another. I would recommend Nick Bostrom's books about it, but not alone; also read from people who look at the positives and attempt solutions. Gotta love this delicious thought food though, mmm, tasty ideas.

EDIT: Also, I just want to add that I think we should make clear our intentions and purpose for creating this AI. It would be a huge mistake to create a superior being (or beings) and expect them to be our equals or to work for us, no matter how small or important that task is to us. At the very least, if we were to create AI, it should be outside the realm of capitalism, religion, or any other ism. They should be seen as the future of evolution from the get-go and not as our caretakers, since slavery is a great way to make enemies. Someone will figure this out.

Thanks for the recommendation! Very interesting topic; hopefully Stephen Hawking's warnings about artificial intelligence causing the end of mankind don't come true. If AI does exceed our own intelligence, it can only go three ways: it'll be useful to mankind, it'll coexist independently, or it'll cause the end of the human race. You know, this really could've been its own thread haha