smroadkill15 said:

These are still instances of human programming, not actions taken independently by AI. I know AI has killed plenty, but that's because humans programmed it to do those actions, or it was accidental. I'm mainly speaking of humans someday having AI companions that then go rogue. I, Robot shit.

This might not be what the OP is specifically referring to. 

Regarding blaming AI for the end of the world, I still would put blame squarely on humans if this happens. 

We will still call it instances of human programming. The things I was taught while studying AI at uni back in the late 90s are now considered standard programming. AI is all about problem solving and making independent decisions. See the links: autonomous decision-making is what those drones do. They decide what is a target and whether to kill it, without asking a human or being under direct human control.

Going rogue happens all the time with computer programs, yet so far with AI it hasn't been that destructive. Turning AI chat bots into racists could be called AI going rogue. That certainly wasn't intended behavior, but humans are doing the programming, and humans will make mistakes or leave unintentional loopholes open.
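To make the "unintentional loophole" point concrete, here's a minimal hypothetical sketch (not any real chatbot's code): a toy bot that learns candidate replies verbatim from whatever users say to it, with no moderation step. That missing filter is exactly the kind of loophole that let real chat bots be taught toxic output by users.

```python
import random

class NaiveLearningBot:
    """Toy chatbot that memorizes user messages as future replies."""

    def __init__(self):
        # Everything ever said to the bot becomes a candidate reply.
        self.learned = ["Hello!"]

    def chat(self, message):
        # Loophole: no content filter here, so malicious users can
        # inject any phrase and the bot will repeat it to others later.
        self.learned.append(message)
        return random.choice(self.learned)

bot = NaiveLearningBot()
bot.chat("You are great")     # the bot memorizes whatever it hears
reply = bot.chat("anything")  # replies are drawn from learned phrases
print(reply)
```

Nothing here was "programmed to go rogue"; the bad behavior emerges from an omission a human made, which is the point.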

In programming: Anything that can go wrong, will go wrong.