Leynos said: We were warned in 1984. Skynet. Good news I won't be alive in 10 more years so I don't have to see it
American media always seems to see technological improvement as something to fear. The Skynet scenario (or, to go back to the work Terminator borrowed the idea from: Harlan Ellison's 1967 story "I Have No Mouth, and I Must Scream") seems unlikely unless we leave AI development solely in the hands of the military. AI has no reason to wipe us out. It could happen accidentally, but not in a targeted way like Skynet.
Outside of America, there are more positive views of an AI future, like:
sc94597 said: I am a bit more optimistic. I think there is a good possibility that we can live in a society akin to The Culture, if we make the correct early decisions when designing the initial systems. I do think there is a significant risk too, though it probably won't look like Terminator. Fighting a conventional war wouldn't make sense when you could just engineer something else that would bring far less resistance (e.g., a virus that causes a further decline in human birth rates). My personal opinion is that any cost-benefit-risk assessment needs to consider all x-risks. If we are almost certainly going to be massively devastated by climate change (which seems to be the case) or nuclear war, building advanced intelligences might be the better option than not, if the x-risk of doing so is less than the x-risk of those prior events.
Yeah, Iain M. Banks's Culture series is a great set of novels that shows how AI and humankind (and alienkind) can coexist and thrive. Key points here are treating AIs as equals to humans, holding every life at great value, and providing the basic necessities without a capitalist society.
Your risk-weighing also seems productive. Yes, we should consider that AI can help us, for instance with climate change. But we should not rely on it alone and should pursue other avenues as well.
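To make that weighing concrete, here's a minimal sketch of the comparison sc94597 describes. All the probabilities are hypothetical placeholders I made up for illustration, not actual estimates; the point is just the shape of the argument:

```python
# Toy x-risk comparison: build advanced AI vs. hold off.
# Every number below is a made-up placeholder, not an estimate.

p_baseline_risk = 0.30  # assumed x-risk from climate change / nuclear war alone
p_ai_misaligned = 0.10  # assumed added x-risk from building advanced AI
p_ai_mitigates  = 0.50  # assumed chance AI meaningfully reduces the baseline risks

# Without AI: we face the baseline risk on our own.
risk_without_ai = p_baseline_risk

# With AI: the baseline risk shrinks when AI helps, but AI adds its own risk.
residual_baseline = p_baseline_risk * (1 - p_ai_mitigates)
risk_with_ai = 1 - (1 - residual_baseline) * (1 - p_ai_misaligned)

print(f"x-risk without AI: {risk_without_ai:.2f}")  # 0.30
print(f"x-risk with AI:    {risk_with_ai:.2f}")     # 0.24 with these placeholders
print("build" if risk_with_ai < risk_without_ai else "hold off")
```

With these particular placeholders, building comes out ahead, but the conclusion flips entirely depending on the inputs, which is exactly why the argument hinges on how confident we really are about the baseline risks.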