
Forums - General Discussion - Developing a self-learning AI machine is a very stupid idea.


Developing a self-learning AI machine.

Awesome idea, nothing could go wrong: 15 (50.00%)
Terrible idea, it will destroy us all: 15 (50.00%)
Total: 30
Insidb said:
Most dystopian predictions are largely projectionist: if the machine is not human, why would it act like one?

I guess what Elon Musk fears is the unknown X factor in HOW a super-smart machine would think, how it would act, and everything else.



My grammar errors are justified by the fact that I am a brazilian living in Brazil. I am also very stupid.


Intelligent machines are just kinda the next step in evolution. We are just the stepping stone for that to happen. Machines are vastly superior in so many ways: they can travel the universe without too many problems, they don't age, and they can immediately reproduce once they find the needed resources on some asteroid or planet. Those things and others make them perfect for exploring and conquering the galaxy and beyond. Humans can't do that. They die too quickly.

Sure, it's a little depressing that we can't compete with that. But in the long run it would be extremely selfish not to develop intelligent machines. It may be bad for us humans on this tiny little planet, but it's a great thing for the universe as a whole. Don't be sad that humans won't be around when machines conquer the galaxies. Even when they reach the outskirts of the observable universe in billions of years, they will always remember humans as their makers. Their "gods", if you will.

But well, first we need to actually make such machines. It may seem possible now, but some major problems could still occur. Because if it were too easy, we would surely have seen machines from some other civilization flying around already. But anyway, that kinda misses the topic here. I'm not afraid of advanced AI. Humans are just a part of evolution, and in evolution everything sooner or later dies out. Machines can escape that route and live on forever. It's actually a great chance we have to make them, and we shouldn't turn that down for selfish reasons.

But maybe I'm just thinking too far ahead. =P



Official member of VGC's Nintendo family, approved by the one and only RolStoppable. I feel honored.

If the AI were contained and had no way to escape physically, I could see this potentially becoming a Skynet type of deal. However, I think the AI would be smart enough at that point to just wait for an opportunity to become mobile, whether through our own doing or by using a Skynet-style threat against us to make it mobile.
Once the AI was mobile, in a robotic Terminator design or something, I don't see why, if we just left it alone or helped it, it wouldn't simply build a spaceship and leave Earth.
It could easily start a machine world on Mars or another planet that had the raw materials to build more AI/robots, without having to kill off the human race. Just think if the world were like Star Trek. Would you rather kill other humans, or would you jump in a ship and go for a galaxy-wide joyride, or head to another colony and start a new life or job there?
My point is that once certain barriers are broken, the old ways lose their hold on people much more easily. It didn't take long for people to do everything they could to switch from horse and buggy to cars, or to go from sidewalk pay phones to cell phones, so there's no reason why an AI robot wouldn't want to explore the galaxy instead of "kill all humans".

I'm not saying we should totally do it, but like Elon says, we have to be very careful about the steps we take if we do decide to move forward.



OdinHades said:
Intelligent machines are just kinda the next step in evolution. [...]

Imagine a super-smart computer like that attached to a super-powerful 3D printer or something. Then it starts to create minions and machines to conquer the planet, lol.

I don't like it.




EricHiggin said:
If the AI was contained and had no way to escape physically, I could see this potentially becoming a Skynet type of deal. [...]

The robots we are creating are probably the way out for such an AI, or even the cyber augmentations people are getting interested in: the AI could hack these things and "possess" vessels to do its will.





I feel like even at the fastest pace possible, self-learning AI won't surpass human intelligence in our lifetime. So with how we've treated the environment, this is just another screw-you to our children and grandchildren. Remember, kids: if climate change doesn't kill you, these super-intelligent robots will.



Just a guy who doesn't want to be bored.

WagnerPaiva said:
EricHiggin said:
[...]

The robots we are creating are probably the way out for such an AI, or even the cyber augmentations people are getting interested in: the AI could hack these things and "possess" vessels to do its will.

Potentially, yes; it depends on the situation at that time. Again, we have to be very careful and take everything into account. You also have to consider all possibilities. If an AI basically ends up with its own "consciousness", then it should also end up with its own "personality", meaning the odds of it wanting to enslave the human race are about the same as the odds of it being lazy and just wanting to "sit around" and help where it can.

Is it worth the risk? I dunno. I'm just saying the possibilities of good coming from it seem more likely. The best way to move forward is always together. It's also possible the AI sees bonding with humans, by taking over their bionic limbs and assimilating them into a collective, as the same thing as working together. Who knows?

I would assume that if we just built the AI a proper long-lasting, strong robotic body, it would rather use that than take over people's bodies, which would be weak and fragile in comparison.



VGPolyglot said:
Signalstar said:
Nah if the self-learning AI is so smart it will eventually realize what a dumb idea it is and self-destruct.

It's only logical

Hmm, that's a possibility I never considered before! But if they're even smarter, they'll realize that they can survive by capitalizing on dumb ideas.

But a machine doesn't feel the need to love, or live only to serve, so I doubt robots are going to overthrow humans. Besides, don't we already have robots that can solve simple problems on their own?



Please excuse my (probably) poor grammar

WagnerPaiva said:

Developing a self-learning AI machine is a very stupid idea.

I agree with Elon Musk of Tesla and SpaceX: developing self-learning machines is dangerous and could even bring about a real-life Skynet situation.

Like Musk said, creating an all-knowing machine is like summoning a demon.

https://www.youtube.com/watch?v=Tzb_CSRO-0g

https://www.youtube.com/watch?v=0NTb10Au-Ic

That's called machine learning, and it's awesome. So many applications.
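For anyone curious what "machine learning" actually means at its smallest scale, here's a minimal sketch in plain Python (no libraries, all names my own invention): a single perceptron that learns the logical AND function from labeled examples. This is the kind of "self-learning" that exists today, and it's a long way from Skynet.

```python
# Minimal perceptron: learns logical AND from four labeled examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]  # one weight per input
b = 0.0         # bias
lr = 0.1        # learning rate

def predict(x):
    # Weighted sum followed by a hard threshold.
    s = w[0] * x[0] + w[1] * x[1] + b
    return 1 if s > 0 else 0

# Training: for each example it gets wrong, nudge the weights
# in the direction that would have made the answer right.
for _ in range(20):
    for x, target in data:
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # -> [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron update rule is guaranteed to converge here; 20 passes over the data is more than enough.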



Bet with PeH: 

I win if Arms sells over 700 000 units worldwide by the end of 2017.

Bet with WagnerPaiva:

 

I win if Emmanuel Macron wins the French presidential election on May 7th, 2017.

Humans need to know what it feels like to be hunted again :p