
Forums - General Discussion - Will robots ever deserve rights?

 

Answer the Damn Question!

Yes 32 41.03%
 
No 46 58.97%
 
Total: 78
Veracity said:

1. You miss the point that programming a directive is not the same thing as rationalization.

2. You can simulate thought, but you cannot create it.

3. You can teach a computer what emotions are, and a computer can simulate emotion, but it will never truly emote. The same goes for consciousness.

4. I'd rather not go into proving humans have consciousness. That is a settled dispute.

5. You don't need to speak to have consciousness. You say low awareness, not NO awareness, so you answer your own question.

1. A few definitions.

  • Rationalize: to employ reason; think in a rational or rationalistic manner. 
  • Rational: agreeable to reason; reasonable; sensible.
  • Reason: to form conclusions, judgments, or inferences from facts or premises.

Please explain how programming would be incapable of any of these.

2. Definition of think: to have a conscious mind, to some extent of reasoning, remembering experiences, making rational decisions, etc.

We are still debating "to have a conscious mind". But the other criteria can all be performed by a robot.

3. Are emotions required for consciousness? Even if they are, what proof do you have that other humans have emotions that cannot be applied to robots?

4. Oh really? Educate me.

5. This was in response to your earlier statement, where you said "Because every human, when presented with the question of "do you exist" responds in the same affirmative manner." to prove that other humans have consciousness. I have refuted this because some humans have consciousness, yet lack the ability to respond to the question "do you exist?" I'm speaking about severely mentally handicapped people here.



SxyxS said:
Jay520 said:


1. Uhhh...sure, I guess you could say that.

2. This is true. Do you think we will never understand consciousness?

3. Pain and fear are effective feelings to help beings avoid danger. I don't see how it's irrational.

4. Okay.

2) No, never.

We have some really high-tech machines and several other interesting things. We can watch stars billions of miles away, we can run experiments and find out what happens to atoms in 1/10,000,000th of a second, etc., but we don't have the slightest clue how consciousness works.

It seems very likely that consciousness works at the level of quantum physics (sadly my English is too bad, but I'll try to explain).

There is a physical law called "Heisenberg's uncertainty principle". According to this principle, it is 100% impossible to observe an experiment at the subatomic level without changing the process = you will always get the wrong result. Wrong result = reproduction of the original process is impossible (do some research of your own on Heisenberg's principle; you'll find better explanations than mine).
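For readers who want the precise statement being referenced: the principle puts a hard lower bound on how exactly position and momentum can be known at the same time,

```latex
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
```

where \(\Delta x\) and \(\Delta p\) are the standard deviations of position and momentum, and \(\hbar\) is the reduced Planck constant.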

Another problem that most people don't care about, and that no one has been able to explain, though it is less complicated than human consciousness:

How was the first living being on planet Earth able to analyse itself? (DNA strings are 100% worthless for a living being as long as that DNA does not have the structure to reproduce and evolve itself; but before creating this "structure", the first living being must have first analysed itself, then realised that it needed to reproduce itself, then built this structure into its DNA, etc.) Without this talent for analysing and reproducing, life would have occurred, disappeared, and never come back.

That's the question: how was the first living being able to analyse itself, to build a (working) structure for reproduction into its DNA, and to reproduce itself (no reproduction, no life)? You can create trillions of unicellular organisms, endless numbers, but as long as you don't install reproduction DNA they will all die. They don't know that they live, and they don't know that they should reproduce themselves.

Who told this first living being that it lives (it was only one cell, no brain), and why? Why the hell was it interested in reproducing itself? This is extremely advanced intelligence and consciousness; sadly we ignore this fact.

 

Find the answer to this question, and then try something more complex like human consciousness.

3) It is absolutely not necessary to code fear and pain.

Most people, ideologies, theologies, etc. are trying to overcome fear (and pain and suffering). Why the hell should somebody make machines suffer (= create more suffering) while religions etc. are trying to minimise suffering, and while every living being is trying to avoid suffering?

A code that makes a machine "react" the right way to avoid damage or cause damage is 100% enough. There is no need for such "emotions", and no need for traumas or neuroses, etc. Nature coded us with fear and pain because there was no other way for nature to show us that some things are dangerous and no good.

When you are in danger, it is always better not to be in fear and not to overreact = no need for fear (like the guy called Data in Star Trek).

4) Maybe some parts of my answers sounded strange: 3 and 4 are both quotes from the movie Blade Runner (book title: "Do Androids Dream of Electric Sheep?", a book trying to find an answer to your question and go a little bit further: "do androids have a soul?").

The male android was complaining about fear and pain. Watch the movie and you will know that he is right.

2. Let's make this easier. What is your definition of consciousness?

3. Physical pain discourages people from things which cause bodily harm. Fear is an emotion induced by a perceived threat, which causes entities to quickly pull far away from it and usually hide. It's a basic survival instinct that has helped our species. Fear does not mean you have to overreact. It's simply a sensation that makes a being seek shelter.

4. I will watch this movie for you.



the2real4mafol said:
McDonaldsGuy said:
the2real4mafol said:
Aren't robots essentially slaves? Also, they have no morality or conscience at all. So, no, they shouldn't have rights. Robots are the same as any other manufactured good as far as I'm concerned.


"Morality" is a human construct and differs from human to human. There are no set "laws" of morality. If artificial intelligence radically evolves (called AI+, AI++ - read David Chalmers), then AI will have powers far beyond humans and will be able to self-improve and self-replicate at a rapid rate. If this AI is goal-driven (and it will be, if anything, to self-improve and self-replicate), then the human race goes bye-bye. A machine will not have to take the human condition into account when pursuing its goals, mainly acquiring scarce resources and using them to the detriment of humans.

 

But of course, for now, this is all just fantasy. If we do design an AI that reaches human intelligence, there is no reason to conclude that it will understand itself well enough to design a superior successor, because there is no verifiable proof that a designed intelligence can improve itself.

An animal has a far better idea of what is right and what is wrong than any damn robot. Unlike a person, you can easily change the goal of a robot and use it for greater evil. Yes, a person would do this, but I don't see robots ever creating new robots in a factory.

I also struggle to see how we could ever create something that is smarter than ourselves. How is that even possible? 

I feel that if such things are developed, our own technology will end us. We don't need robots or things that do their tasks without human interaction. Of course we need technology, but it is getting to the point where we are stupidly lazy now, as most things are automated.

Yes, I agree. I do not think it is possible; however, a lot of people think the Singularity not only is possible, but will happen. You should read a book called "Radical Evolution," which talks about the exponential rise of artificial intelligence (it uses Moore's Law).

The thing is, IF the Singularity does occur, then according to David Chalmers and others, this AI will self-replicate and become smarter by itself. All AI will want to pursue its "basic AI drives," which, according to Steve Omohundro, means the AI will want to be rational, self-improve, self-replicate, preserve its utility functions and prevent counterfeit utility, and most importantly: acquire scarce resources. The robots will become smarter by modifying their own source code.

That's the thing - once robots are able to reach human intelligence (if, when - let's assume they do), then humans are doomed to extinction. It probably won't be by any malice, like in THE TERMINATOR or THE MATRIX, but by indifference. The machine will acquire resources that humans need, and humans will die as a result. It's like when we acquire resources from animals (i.e. deforestation): we don't have the intention of killing them, we do it for us.

So, the question shouldn't be "Will robots ever deserve rights?" The question should be: "When robots reach human intelligence (and they will surpass our intelligence within a microsecond of reaching it), will HUMANS receive rights?"



To those questioning how we can "code" a machine to be intelligent or advanced: you're looking at this wrong. The answer is to build a machine that will learn and develop; you give it the building blocks to improve itself rather than telling it each step of its development. The question is, given all the ethical questions, would we ever allow an AI to develop far enough to be considered conscious? We could solve a lot of transplant problems by aggressively pursuing human cloning, but there are so many ethical issues that it is not considered viable. I suspect robot advances will be a similar taboo.

From my point of view, if a robot ever achieves a level where it can debate for its rights without influence from humans, then it would probably deserve those rights.






Only if they shout loud enough.



 

Yes.

Fetuses have rights, animal species have rights, a frozen piece of rock has rights (Antarctica), so robots will get rights too.

Sentient computer programs won't begin with robots, though. And why assume that robots will have individual minds? A hive-mind scenario might be more plausible. Humans never had the benefit of being always online; with robots, that will be natural. So will Skynet get rights?

We can already make programs that can learn.
http://venturebeat.com/2012/12/18/numenta-grok/
http://www.technologyreview.com/featuredstory/513696/deep-learning/
A sentient program will first be made using the cloud.
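As a toy illustration of what "a program that can learn" means (nothing like the Numenta or deep-learning systems in the links above, and all names below are made up for the sketch), here is a single perceptron that learns the logical AND function purely from examples; the AND rule itself is never written into the code:

```python
# A minimal sketch of a "program that learns": a single perceptron
# trained on examples of logical AND. Illustrative only.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for a two-input perceptron from (inputs, target) pairs."""
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Predict with a step activation over the weighted sum.
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            # Nudge the weights toward the target: this is the "learning" step.
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# The truth table of AND serves as training data; the rule is never coded.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

After training, `predict` reproduces the AND table even though only a weight-update rule was ever programmed. Scaled up by many orders of magnitude, that is the same basic idea behind the learning systems in the links.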



Of course they will; you must be stupid, ignorant and pretentiously skeptical not to think so.

It's inevitable that someday, through various methods, an AI will be able to think by itself and feel. It's inevitable that if it can, it'll either surpass humans or reach some kind of threshold below the full capacity of a human mind, the same way humans try to reach perfection in the image of their god but never can.

Then the question will be asked: do these thinking, intelligent, sensitive, emotional robots deserve rights? Most likely it'll simply follow the same path as the Jews in Egypt before they were freed, or Africans before they were freed, etc...



The rights of dolphins are being debated, so it is very possible: http://www.bbc.co.uk/news/world-17116882

Humans are complex carbon-based machines; in the future, other complex machines may also have similar attributes that warrant rights. It may even come to the point where humans aren't the ones to dictate who has rights.

The future is a vast expanse of time and events before us; anything could happen that is beyond our own imaginings.



Hell no. They all belong stuffed on my wall.