the2real4mafol said:
McDonaldsGuy said:
the2real4mafol said:
Aren't robots essentially slaves? Also, they have no morality or conscience at all. So, no, they shouldn't have rights. Robots are the same as any other manufactured good, as far as I'm concerned.


"Morality" is a human construct and differents from human to human. There are no set "laws" on morality. If Artificial Intelligence radically evolves (called AI+, AI++  - read David Chalmers) then AI will have powers far beyond humans and will be able to self-improve and self-replicate itself at a rapid rate. If this AI is goal driven (and it will be, if anything, to self improve and self replicate) then the human race goes bye bye. A machine will not have to take human condition when pursuing its goals, mainly through acquiring scarce resources and using them to the detriment of humans.

 

But of course, for now, this is all just fantasy. If we do design an AI that reaches human intelligence, there is no reason to conclude that it will understand itself well enough to design a superior successor, because there is no verifiable proof that a designed intelligence can improve itself.

An animal has a far better idea of what is right and wrong than any damn robot. Unlike a person, you can easily change the goal of a robot and use it for greater evil. Yes, a person would do this, but I don't see robots ever creating new robots in a factory.

I also struggle to see how we could ever create something that is smarter than ourselves. How is that even possible? 

I feel that if such things are developed, our own technology will end us. We don't need robots or things that do their tasks without human interaction. Of course we need technology, but it is getting to the point where we are stupidly lazy now that most things are automated.

Yes, I agree; I do not think it is possible. However, a lot of people think the Singularity is not only possible but inevitable. You should read a book called "Radical Evolution," which talks about the exponential rise of artificial intelligence (it uses Moore's Law).
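To put rough numbers on the Moore's Law point, here's a minimal sketch (Python; the billion-transistor baseline and the classic two-year doubling period are just illustrative assumptions, not real data) of what steady doubling does over a couple of decades:

```
# Rough sketch of Moore's Law-style doubling (illustrative numbers only:
# a made-up baseline and an assumed ~2-year doubling period).

def project(start, years, doubling_period=2.0):
    """Value after `years`, doubling every `doubling_period` years."""
    return start * 2 ** (years / doubling_period)

# Starting from a hypothetical 1 billion transistors, 20 years out
# gives 2**10 = 1024x the starting figure.
print(project(1e9, 20))  # 1.024e+12
```

That 1024x factor over 20 years is the whole argument of books like this: the curve looks flat right up until it doesn't.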

The thing is, IF the Singularity does occur, then according to David Chalmers and others, this AI will self-replicate and become smarter by itself. All AI will want to pursue its "basic AI drives," which according to Steve Omohundro means the AI will want to be rational, self-improve, self-replicate, preserve its utility function and prevent counterfeit utility, and, most importantly, acquire scarce resources. The robots will become smarter by modifying their own source code.
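The self-improvement claim is really just a claim about compounding. Here's a deliberately naive toy model (Python; the starting capability, the "human level" threshold, and the 10% per-generation rate are all invented for illustration, not facts about any real system) showing how a fixed per-generation gain snowballs:

```
# Deliberately naive toy model of "recursive self-improvement":
# capability compounds by a fixed factor each generation. Every
# number here is an invented assumption, not a model of a real AI.

def generations_to_surpass(start, human_level, rate):
    """Generations until capability exceeds human_level, growing
    by a factor of (1 + rate) each generation."""
    capability, gens = start, 0
    while capability <= human_level:
        capability *= 1 + rate
        gens += 1
    return gens

# Even a modest 10% gain per generation compounds quickly:
# 1.1**73 is already over 1000x the starting point.
print(generations_to_surpass(1.0, 1000.0, 0.10))  # 73
```

The point isn't the specific numbers; it's that once the loop closes (the system's output feeds its own improvement), the growth stops being linear.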

That's the thing: once robots are able to reach human intelligence (if, when; let's assume they do), then humans are doomed to extinction. It probably won't be by any malice, like in THE TERMINATOR or THE MATRIX, but by indifference. The machine will acquire resources that humans need, and humans will die as a result. It's like when we take resources from animals (e.g., deforestation): we don't have the intention of killing them; we do it for us.

So the question shouldn't be "Will robots ever deserve rights?" The question should be: "When robots reach human intelligence (and they will surpass our intelligence within a microsecond of reaching it), will HUMANS receive rights?"