Forums - Politics Discussion - Elon Musk and Stephen Hawking calling for a ban of artificially intelligent weapons

The idea of intelligent weapons is certainly not a pleasant one. The idea of intelligent weapons in the hands of politically or religiously motivated idiots is outright terrifying.

-CraZed- said:
SvennoJ said:
I don't really want the military with AI weapons either...
It's a sad state of affairs that advancements in AI are most likely to come from military research. Instead of autonomous robots to farm, build solar plants, reforest or explore other planets, we'll first get smart weapons and autonomous surveillance.


Hate to break it to ya but there is a laundry list of important advancements that have come from military applications.

Nuclear fission, Teflon, large-scale computing, composites, radar, integrated circuits, spread-spectrum technology, rocket technology (which we use to launch satellites), GPS, microwave technology, drone technology, plus many other contributions in civil engineering. Not to mention that many of the medical advancements we have today were largely driven by military research.

But as for a ban on AI weapons: why? It's science unfettered. I mean, where does morality come into play here? And besides, we can control it. What could go wrong?

 

Yes, and it's about time for that to change. Why are the biggest innovations byproducts of finding better ways to kill each other? Well, this is why:

It's from 2011, but I doubt it has changed much: over 50 years of NASA spending adds up to less than 1 year of military spending.

As for morality: we're nowhere near self-aware AIs yet, but do you really want that to kick off with weapon systems? A little glitch can set back international relations for decades; for example: http://www.washingtonpost.com/news/worldviews/wp/2013/10/16/the-forgotten-story-of-iran-air-flight-655/ Maybe that would not have happened with an AI in control, yet it's humans that make the AI. When it comes to software, Murphy's law always applies: anything that can go wrong eventually will.
Let's start building non-deadly AIs first.



We all know no one will stop until the worst case scenario happens.