
The immediate danger is that we're training AI on our (historic) actions. That has already proven problematic, with systemic bias creeping into neural network algorithms. Right at a time when the West-East divide is only growing bigger, are we going to end up with opposing AIs as well? Training an AI on biased examples pretty much guarantees a biased AI.

That's all we do so far with AI: feed it millions of examples and scenarios and train it to derive the 'proper' response from all that data. Nothing like the Hollywood versions or Isaac Asimov's laws of robotics. AI algorithms are already used to kill anyway, from autonomous combat drones to mass-scale target selection.

So will we end up with benevolent AI in the mold of Gandhi, Nelson Mandela, Martin Luther King, César Chávez, and Volodymyr Zelenskyy, or with AI that thinks like Trump, Putin, Netanyahu, Pol Pot, Hitler...

God created man in his image, the Bible says. Maybe that's the biggest warning against creating AI...

As for a more pragmatic answer: AI can't be trusted because we don't trust what we don't understand, and AI is basically a black box with 'unpredictable' outcomes. The question should also be: can we trust the people creating and feeding these AI systems? It would be easy to say, just set the goal for AI to optimize life for all humans. Yet we don't even trust the 97% of scientists who agree on climate change, and we certainly don't want to listen to them for solutions. Why would humans listen to an AI?

A self-thinking AI will not be trusted, but AI can be useful as a tool for many things: optimizing trade flows, managing traffic, directing air traffic. Until something goes wrong, that is, and then who is going to find out what caused the glitch? So far glitches have been pretty harmless, yet the more responsibility we give to AI algorithms, the more harmful any potential glitch becomes.

It's also a question of systems. If human drivers crash, we carry on: human error. If self-driving cars crash, do we ground all cars until the glitch has been found? Or do we accept that AI will be fallible as well, hoping it will at least do better than human judgement? The advantage of AI is that it doesn't die; you can keep teaching it, unlike humans. But that can also be a negative, making it harder to 'fix' AI problems.