
Will robots ever deserve rights?

 

Answer the Damn Question!

Yes 32 41.03%
 
No 46 58.97%
 
Total: 78

Top-down AI, when it exists, will need rights.

For those who don't think software can have consciousness, do your homework: search for "Steve Grand" and "Grandroids" on Google.

 

Edit: Dogs have rights, and they have the same level of self-consciousness as the creatures in "Creatures" (I don't remember their names, sorry)




No. If they have rights, how will I use them as sex slaves?



Aren't robots essentially slaves? Also, they have no morality or conscience at all. So, no, they shouldn't have rights. Robots are the same as any other manufactured good as far as I'm concerned.




Jay520 said:
Veracity said:

1. Because you are only addressing single points and pretending that's my entire argument. I've made many statements here, and they should be taken as a continuous response.

2. A computer is a slave to its programming. If it is asked about its existence, it can check whether its memory bank of responses pertains to the question, but it cannot rationalize a response of its own volition.

3. You also completely ignored my animal example, despite it being directly applicable to humans.

4. If you are referring to vegetables, then they do not have consciousness.

5. You don't acknowledge the scientific fact that consciousness comes from the brain?

1. I only responded to what I felt needed a response. But just for your sake, I will repost your post and respond to all of the points.

Veracity said:

(a) Because every human, when presented with the question "do you exist?", responds in the same affirmative manner. (b) An extension to animals would be their ability to avoid predators, realizing that "they" exist and need to avoid threats.

(c) It is a byproduct of a brain. (d) Trees are not conscious at the macro level.

(a) I responded to this, and that conversation is ongoing.

(b) I can give the same answer to this as I gave for (a). Computers can be programmed to do it.

(c) What is your point? Are you arguing that because something is a byproduct of the brain, it cannot be produced by other means? Moreover, I addressed this in the OP, so I didn't feel it needed a response.

(d) This is a true statement. Not sure why you brought this up.

2. Humans also need to look back on their memory bank of facts, experiences, etc. to rationalize. Sure, they don't have preset answers to questions. But I have no reason to believe that robots won't advance far enough that they, too, will look back on their memory bank of facts and experiences to rationalize, without just responding with preset answers.

3. It's basically the same as your human example with a different application. I don't see why both need a response.

4. I'm talking about mentally retarded humans who are unable to speak and have low awareness. Do you believe they have rights?

5. No, I don't deny that. Just because something comes from the brain, that doesn't mean it can only come from the brain.

You miss the point: programming a directive is not the same thing as rationalization. You can simulate thought, but you cannot create it.

You can teach a computer what emotions are, and a computer can simulate an emotion, but it will never truly emote. The same goes for consciousness.

I'd rather not go into proving humans have consciousness. That is a settled dispute.

You don't need to speak to have consciousness. You say low awareness, not NO awareness, so you answer your own question.



John Searle is famous for the Chinese Room thought experiment - his claim is that even though the person in the room handles syntax (grammar), they do not understand semantics (meaning). I believe this is wrong, because what is the definition of "understanding something"? In the Chinese Room, you receive Chinese characters and then look up the proper response symbols without knowing Chinese. The question is: what is a proper response?

For example, if the Chinese characters given as input were "What's your favorite food?", the system must understand the CONCEPTS of favorites and food in order to give a proper reply. The fact that a person looks up symbols in a Chinese rulebook is unimportant to the intelligence that operates in making that decision. (This is known as the "systems reply".)
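To make the rulebook picture concrete, here is a minimal sketch in Python (the two rulebook entries are made up for illustration). The lookup function itself knows nothing about food or existence; whatever "understanding" there is belongs to the system as a whole, rulebook included - which is exactly the systems reply.

```python
# A minimal Chinese Room: the operator mechanically maps input symbols to
# output symbols using a rulebook it does not understand. The two entries
# below are hypothetical; a real rulebook would need an entry for every
# possible conversation, which is where the hidden "understanding" lives.
RULEBOOK = {
    "你最喜欢的食物是什么？": "我最喜欢饺子。",  # "What's your favorite food?" -> "Dumplings."
    "你存在吗？": "是的，我存在。",              # "Do you exist?" -> "Yes, I exist."
}

def chinese_room(symbols: str) -> str:
    """Return the rulebook's canned response; the operator matches strings only."""
    # No fallback reasoning: unknown input gets a blank shrug, because the
    # person in the room has no semantics to fall back on.
    return RULEBOOK.get(symbols, "？")

print(chinese_room("你存在吗？"))  # prints the canned affirmative
```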

Here, this will make more sense: I am thinking of how to reply. The abstract thoughts then get translated in my mind into English words, and into the individual letters that compose those words. The knowledge of which letters compose the words directs my fingers on the keyboard.

Searle is focused on the keyboard - a minor part of the process. Intelligence is the decision as to WHICH concepts you can convey through the keyboard.

If a being says it's conscious, it is. Since it is sapient, it deserves the same rights as humans. If anything, we had better hope it doesn't turn on us and become a "paperclip maximizer."



the2real4mafol said:
Aren't robots essentially slaves? Also, they have no morality or conscience at all. So, no, they shouldn't have rights. Robots are the same as any other manufactured good as far as I'm concerned.


"Morality" is a human construct and differents from human to human. There are no set "laws" on morality. If Artificial Intelligence radically evolves (called AI+, AI++  - read David Chalmers) then AI will have powers far beyond humans and will be able to self-improve and self-replicate itself at a rapid rate. If this AI is goal driven (and it will be, if anything, to self improve and self replicate) then the human race goes bye bye. A machine will not have to take human condition when pursuing its goals, mainly through acquiring scarce resources and using them to the detriment of humans.

 

But of course, for now, this is all just fantasy. Even if we do design an AI that reaches human intelligence, there is no reason to conclude that it will understand itself well enough to design a superior successor, because there is no verifiable proof that a designed intelligence can improve itself.



Jay520 said:
SxyxS said:
1) Even (most) animals have almost no rights, but you ask for rights for a fucking piece of silicon and metal? You really care about the important problems.

2) First we have to find out what consciousness is. You can't code and simulate something you don't understand.

3) You must be a 100% pervert bastard to create an artificial being and code a program that makes it suffer.
As the famous Nexus-6 Roy Batty once said (or was it Pris?): "All this pain and fear was so irrational."

4) There will be rights for androids as soon as they start to dream of electric sheep.

4b) Considering the perverts who run the USA, they will give rights to androids - not out of empathy, but as a way to protect their war-machine robots, to make them more accepted by the population AND to protect the military. If such a robot goes crazy, the military has to pay and some guy (e.g. a general, coder, manufacturer, or the guy who remote-controls the robot) has to go to jail; but as soon as a robot has rights, the robot itself will be accused and sentenced to prison.


1. Uhhh...sure, I guess you could say that.

2. This is true. Do you think we will never understand consciousness?

3. Pain and fear are effective feelings to help beings avoid danger. I don't see how it's irrational.

4. Okay.

2) No! Never.

We have some really high-tech machines and several other interesting things - we can watch stars billions of miles away, we can run experiments and find out what happens to atoms in 1/10,000,000th of a second, etc. - but we don't have the slightest clue how consciousness works.

It very much seems that consciousness works at the level of quantum physics (sadly my English is too bad, but I'll try to explain).

There is a physical law called "Heisenberg's uncertainty principle". According to this principle, it is 100% impossible to observe an experiment at the subatomic level without changing the process - you will never measure the original state exactly. No exact measurement = no exact reproduction of the original process. (Do some research of your own on Heisenberg's principle; you'll find better explanations than mine.)
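For reference, the standard statement of the principle, where \(\Delta x\) is the uncertainty in position, \(\Delta p\) the uncertainty in momentum, and \(\hbar\) the reduced Planck constant:

\[
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
\]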

Another problem that most people don't care about, and that no one has been able to explain, though it is less complicated than human consciousness:

How was the first living being on planet Earth able to analyse itself? (DNA strings are 100% worthless for a living being as long as that DNA does not have the structure to reproduce and evolve itself; but before creating this "structure", the first living being must first have analysed itself, then realised that it needed to reproduce itself, then built this structure into its DNA, etc.) Without this talent for analysing and reproducing, life would have occurred, disappeared, and never come back.

That's the question: how was the first living being able to analyse itself, to build a (working) structure for reproduction inside its DNA, and to reproduce itself (no reproduction, no life)? You can create trillions of single-cell organisms, endless numbers, but as long as you don't install reproduction DNA they will all die. They don't know that they are alive, and they don't know that they should reproduce themselves.

Who told this first living being that it was alive (it was only one cell, no brain), and why? Why the hell was it interested in reproducing itself? This implies extremely advanced intelligence and consciousness - sadly, we ignore this fact.

 

Find the answer to this question, and then try something more complex, like human consciousness.

3) It is absolutely not necessary to code fear and pain.

Most people, ideologies, theologies, etc. are trying to overcome fear (and pain and suffering). Why the hell should somebody make machines suffer (= create more suffering) while religions etc. are trying to minimise suffering, and while every living being is trying to avoid suffering?

Code that makes a machine "react" the right way to avoid damage, or to cause it, is 100% enough. There is no need for such "emotions", and no need for traumas or neuroses, etc. Nature coded us with fear and pain because there was no other way for nature to show us that some things are dangerous and no good.

When you are in danger, it is always better not to be afraid and not to overreact = no need for fear (like the guy called Data in Star Trek).
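As a minimal sketch of that point (in Python, with hypothetical sensor and action names): a bare threshold rule is enough to make a machine avoid damage, with no fear state anywhere in the loop.

```python
# A reactive damage-avoidance rule with no "fear" variable: the machine just
# maps a sensed hazard level to an action. The sensor reading and the action
# names are hypothetical, for illustration only.

def choose_action(hazard_level: float, damage_threshold: float = 0.7) -> str:
    """Retreat once the sensed hazard crosses the threshold; otherwise continue."""
    if hazard_level >= damage_threshold:
        return "retreat"  # proportionate response: no panic, no overreaction
    return "continue"

print(choose_action(0.9))  # -> retreat
print(choose_action(0.2))  # -> continue
```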

4) Maybe some parts of my answers sounded strange: 3 and 4 both quote the movie Blade Runner (based on the book "Do Androids Dream of Electric Sheep?", a book that tries to find an answer to your question and goes a little bit further: "do androids have a soul?").

And the male android was complaining about fear and pain. Watch the movie and you will know that he is right.



As long as they can ask for rights, they can be considered more than objects and gain individual rights - but I think that would come only after a social conflict or something like that, in which they would fight to be considered equal.



McDonaldsGuy said:
the2real4mafol said:
Aren't robots essentially slaves? Also, they have no morality or conscience at all. So, no, they shouldn't have rights. Robots are the same as any other manufactured good as far as I'm concerned.


"Morality" is a human construct and differents from human to human. There are no set "laws" on morality. If Artificial Intelligence radically evolves (called AI+, AI++  - read David Chalmers) then AI will have powers far beyond humans and will be able to self-improve and self-replicate itself at a rapid rate. If this AI is goal driven (and it will be, if anything, to self improve and self replicate) then the human race goes bye bye. A machine will not have to take human condition when pursuing its goals, mainly through acquiring scarce resources and using them to the detriment of humans.

 

But of course, for now, this is all just fantasy. Even if we do design an AI that reaches human intelligence, there is no reason to conclude that it will understand itself well enough to design a superior successor, because there is no verifiable proof that a designed intelligence can improve itself.

An animal has a far better idea of what is right and what is wrong than any damn robot. Unlike with a person, you can easily change the goal of a robot and use it for greater evil. Yes, a person would do this, but I don't see robots ever creating new robots in a factory.

I also struggle to see how we could ever create something that is smarter than ourselves. How is that even possible? 

I feel that if such things are developed, our own technology will end us. We don't need robots or things that do their tasks without human interaction. Of course we need technology, but it is getting to the point where we are stupidly lazy now that most things are automated.




I think it's theoretically possible to replicate human intellect in a machine to the point where it gains self-awareness. After all, we, like them, are limited by our material nature (the brain), and in their case we, as humans, can change that nature to the point where they can obtain our capacities.

Even if you go by the natural-law side of it, you can't prove humans have a soul that determines our nature. In reality, all we have is our material body, which gives us all of those capacities that make us "human" and which, by positive law, grants us rights.