Leynos said:
sc94597 said:

We are probably 2-10 years away from AI with real reasoning and creative abilities in the sense that humans are capable of (and beyond). That's about when we'll have Level 3/Level 4 agents that are quite capable of out-of-distribution generalization. 

In that scenario, humans (until robotics catches up with ML, which might not be long) will be assisting AI agents, rather than vice versa. AI agents will probably take on the role of art directors. 

Quite honestly, we should start worrying about how people are going to survive once the mass of human labor is outmoded. In a system designed around the bulk of the human population laboring to survive, that system needs to be outmoded as well. 

We were warned in 1984: Skynet. Good news: I won't be alive in 10 more years, so I don't have to see it.

I am a bit more optimistic. I think there is a good possibility that we could live in a society akin to The Culture, if we make the correct early decisions when designing the initial systems. I do think there is a significant risk too, though it probably won't look like Terminator. Fighting a conventional war wouldn't make sense when you could just engineer something that would meet far less resistance (e.g. a virus that causes a further decline in human birth rates). 

My personal opinion is that any cost-benefit-risk assessment needs to consider all x-risks. If we are almost certainly going to be massively devastated by climate change (which seems to be the case) or nuclear war, building advanced intelligences might be the better option, provided the x-risk of doing so is less than the x-risk of those prior events.