Jumpin said:
The AI we have today is an algorithmic tool used by humans. The term “AI” is a marketing term; it’s a very different sort of technology from Skynet, Star Trek, or Asimov’s positronic brain, and there is no technology that resembles Ultron. As I’ll note, Asimov’s AI was incredibly helpful to humanity. Asimov also considered the matter much more deeply than James Cameron and others did; Cameron was looking for a villain for his assassin story. Cameron has also said that the danger of AI lies in how humans use it, much like any other technology.

Anyway, in the Robot series, I, Robot begins near the dawn of the positronic brain. It goes on to describe the products, mainly robots, that use the positronic brain to better the lives of humanity: ending loneliness, and making exploration and industry work unlike anything before. It’s all told through stories linked to robopsychologist Susan Calvin, following the decades of her life during the 21st century and ending with an AI developing the FTL drive that allows humanity to travel to other planets.

Fast forward 3,000 years to the Spacer Trilogy/Robot Trilogy, and we have 50 utopian planets where people are virtually immortal thanks to the technology developed. There’s a planet called Solaria where every human owns their own barony with tens of thousands of robots employed. Crime is virtually non-existent in the Spacer worlds; the stories that follow involve extremely rare crimes. Ten thousand years later, humanity rules the galaxy.

So, not all of science fiction agrees that AI = evil, including the guy who virtually put AI on the map.
I love Asimov to my bones, and I would love to see his vision come to life, just as I would love to see Star Trek: TNG's vision. But those had a very positive way of looking at technology. Our capital-based "deciders" do not think of other humans as peers unless they are roughly as rich and/or powerful as they are. They do not care about humanity's well-being. Non-rich people are cannon fodder to this small group, just numbers, and their losses are "collateral damage" (to use the words of a huge douchebag who thankfully died this very week). I don't see these deciders doing anything even remotely close to following the First Law of Robotics (a robot may not injure a human being or, through inaction, allow a human being to come to harm). If needed, they WILL program machines to harm others and even incentivize them to do so.
My 1000th post: https://gamrconnect.vgchartz.com/post.php?id=9368779