| megaman79 said: Oh and let's not forget about AI controlled (or assisted) drone aircraft accidentally bombing weddings, the wrong house, or children's parties in Iraq and Afghanistan. HAL 9000 had a perfect operational record, but it was humans that programmed it to begin with. |
I have never seen any confirmed reports of this; have you? Much more common is a terrorist bombing an event to pin the blame on others, and that I do remember seeing confirmed reports of.
@topic,
As far as I know we have yet to reach the point where an AI is given the "kill decision", as it's termed. Right now AI handles the maneuvering and targeting, but a human is always at the trigger, making the decision to fire or not. The goal is definitely to get to that point eventually, but even then battery life (among a myriad of other issues) will always limit any robot's capacity for global domination.
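To make that "human in the loop" arrangement concrete, here's a minimal Python sketch. Everything in it is hypothetical for illustration - the Target type, its confidence field, and request_fire_authorization are names I made up, not any real system's API. The point is only the structure: the software proposes, a human disposes.

```python
# Minimal sketch of human-in-the-loop weapon release: the autonomous
# system may propose a target, but only an explicit human decision
# releases the weapon. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Target:
    identifier: str
    confidence: float  # targeting system's confidence, 0.0-1.0

def request_fire_authorization(target: Target) -> bool:
    """Block until a human operator approves or denies the engagement."""
    answer = input(f"Engage {target.identifier} "
                   f"(confidence {target.confidence:.0%})? [y/N] ")
    return answer.strip().lower() == "y"

def engagement_loop(proposed: Target) -> None:
    # The AI side (maneuvering, target selection) runs autonomously,
    # but the trigger is gated on a human decision, never the software's.
    if request_fire_authorization(proposed):
        print(f"Weapon released on {proposed.identifier}")
    else:
        print("Engagement denied; holding fire")

if __name__ == "__main__":
    engagement_loop(Target(identifier="track-042", confidence=0.87))
```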
For those in the know, the real key to when we should worry is not when robots are given the choice to kill or not, but when robots are allowed to design and build other robots with the goal of advancing their own intellect. At that point the designs will quickly become more and more advanced, and we risk being overrun by a super-intellect that can solve the "problems" (otherwise known as our failsafes) in its path to world domination. All of this of course assumes that such a super-intellect would even be motivated to work against us rather than with us, but who knows =P
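Just to show what that compounding looks like on paper, here's a back-of-the-envelope toy loop. It assumes, purely for illustration, that each generation's design gain scales with its current intellect, which gives geometric growth; the numbers are arbitrary and model nothing real.

```python
# Toy model of recursive self-improvement: each generation designs a
# successor slightly smarter than itself, and the size of the gain
# grows with current intellect, so the sequence grows geometrically.

def generations(intellect: float, improvement_rate: float, n: int) -> None:
    for gen in range(n):
        print(f"generation {gen}: intellect = {intellect:,.2f}")
        intellect *= (1 + improvement_rate)  # compounding step

generations(intellect=1.0, improvement_rate=0.5, n=10)
```

After ten generations at a 50% per-generation gain, intellect is roughly 38x the starting value; that runaway curve, not any single clever robot, is the scenario people actually worry about.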