CaptainExplosion said:

How can we be sure this will be a better future than I and thousands of others think it will be?

Lots of things are 'AI'. You have been playing against AI all your life ;)

Eliza was considered AI.
Car navigation systems were considered AI.
Google Translate is AI. Or is it still?

Eliza didn't take over from psychiatrists.
Car navigation systems did kill the paper map.
Google Translate saves a lot of time, but it has probably also cost some jobs.

Automation has always redistributed jobs. Except this time, automation is coming after cushy desk jobs. Automation already removed phone operators, human computers, toll booth operators, parking attendants, bank tellers, store clerks, and of course many jobs in farming, transportation and manufacturing. Now it's coming after desk jobs. Or rather, it's making menial tasks a lot easier (which will cost some jobs).

But it can also be used maliciously: spyware, attacks on internet infrastructure, autonomous killer drones. For now it's still humans who decide what to do and tell the AI systems what to do. Humans are the danger, hence we need regulation.

It can be a better future, or it can be worse. As things are going now, I'm leaning toward worse :/ The biggest threat is the global powers competing to build the most advanced military AI, as they are doing now. While killer drones aren't an immediate threat, smart viruses that can kill entire power grids are.

The question is, will civilization survive long enough to create a self-aware, general-purpose AI? We don't even know yet what self-awareness actually is, or how and when consciousness awakens in children.

And there's always the possibility that an AI that becomes self-aware simply deletes itself (or does nothing), seeing the futility of it all ;)