Mnementh said:
sc94597 said:
We are probably 2-10 years away from AI with real reasoning and creativity abilities in the sense that humans are capable of (and beyond). That's about when we'll have Level 3/Level 4 agents that will be quite capable of out-of-distribution generalization.
In that scenario, humans (until robotics catches up with ML, which might not be long) will be assisting AI agents, rather than vice versa. AI agents will probably take the role of art directors.
Quite honestly, we should start worrying about how people are going to survive when the mass of human labor is outmoded. A system designed around the bulk of the human population laboring to survive needs to be outmoded as well.
|
Hmm, there are strong indications that current AI has run out of training data, because the models have already absorbed *everything* humankind has produced over the millennia. That's why OpenAI trained a speech-to-text AI (Whisper), so that they can access the content of videos too. That doesn't mean there aren't some improvements possible, but we might well have hit another ceiling for the moment and need a new breakthrough, just as this paper was the breakthrough that led to the current wave of AI development.
But yes, we need to rethink how we organize labor. There are two scenarios: one where everyone is poor except a few tech billionaires, and one where AI is put to work producing wealth for everyone. And surprisingly, both scenarios were already thought through about a decade ago, in the story Manna by Marshall Brain. I highly recommend that read if you're thinking about the AI future.
|
Yes, scaling by adding more data likely won't work anymore, BUT we are starting to see scaling along a new axis: search and refinement at test time, i.e., spending more compute at inference. That is what OpenAI is doing, quite roughly in this first iteration, with its o1 model series.
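To make that concrete, the simplest form of test-time scaling is best-of-N sampling: generate many candidate solutions and keep the one a verifier likes most. (This is just a toy sketch in Python; `generate_candidate` and `score_candidate` are made-up stubs standing in for an LLM sampler and a verifier model. OpenAI hasn't published o1's actual method, which presumably involves much more elaborate chain-of-thought search.)

```python
import random

# Made-up stubs: in a real system these would call an LLM to propose a
# solution and a verifier/reward model to score it. Stubbed out here so
# the loop actually runs.
def generate_candidate(prompt: str, temperature: float = 0.8) -> str:
    return f"candidate solution #{random.randint(0, 999)} for: {prompt}"

def score_candidate(prompt: str, answer: str) -> float:
    return random.random()  # placeholder verifier score in [0, 1]

def best_of_n(prompt: str, n: int = 16) -> str:
    """Spend more compute at inference time: sample n candidates and
    keep the one the verifier ranks highest."""
    candidates = [generate_candidate(prompt) for _ in range(n)]
    return max(candidates, key=lambda ans: score_candidate(prompt, ans))

print(best_of_n("Design an O(n log n) algorithm for ..."))
```

The key property is that answer quality now scales with n (the inference budget) rather than with the size of the training set.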
I've fed o1-preview some graduate-level algorithm-design exams, and it consistently scores in the 90-100% range on them. A few of those algorithm designs are not something it would have been trained on. That is enough for me to think it is capable of some rudimentary reasoning in a way previous LLMs (which would get 50-60% when I fed them the same exams) could not. We're practically at Level 2 (Reasoners) in OpenAI's roadmap.
Also, there is still no fully realized multi-modal model. The current "multi-modal" models are really a stitching together of various specialized models (see the toy sketch below), so advancement can be made in that direction as well. And of course LLMs aren't the be-all and end-all of AI research; LeCun, for example, is working on his JEPA architecture.
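Here's what I mean by "stitching together" (all functions are trivial stubs I made up, not any vendor's actual API):

```python
# A pipeline of specialists: each hand-off flattens the signal, so
# paralinguistic information (tone, pauses, emphasis) is lost the
# moment audio becomes a text transcript.
def speech_to_text(audio: bytes) -> str:
    return "transcript of the audio"       # stand-in for an ASR model

def language_model(text: str) -> str:
    return f"reply to: {text}"             # stand-in for a text-only LLM

def text_to_speech(text: str) -> bytes:
    return text.encode()                   # stand-in for a TTS model

def stitched_assistant(audio: bytes) -> bytes:
    return text_to_speech(language_model(speech_to_text(audio)))

print(stitched_assistant(b"..."))
```

A natively multi-modal model would instead consume and emit tokens of every modality end-to-end, with nothing lost at the seams.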
So there is still a lot that can be done to advance things, even before getting into topics like neuro-symbolic hybrids (which AlphaGo, AlphaGeometry, and AlphaProteo technically are) and neuromorphic computing in general.
I don't think we'll have another AI winter. The difference between now and previous eras is that past AI winters were mostly caused by academic funding waves drying up and by hardware limits. AI research is now an industry in itself (which it really wasn't in the past), and that is unlikely to end. There might be bear markets for AI development, but they'll follow the regular business cycle, and advancements will still happen during them.
Thanks for the Manna link, I'll definitely check it out.
Mnementh said:
sc94597 said:
I am a bit more optimistic. I think there is a good possibility that we can live in a society akin to The Culture, if we make the correct early decisions when designing the initial systems. I do think there is a significant risk too, though it probably won't look like Terminator. Fighting a conventional war wouldn't make sense when you could just engineer something that would meet far less resistance (e.g., a virus that causes a further decline in human birth rates).
My personal opinion is that any cost-benefit-risk assessment needs to consider all x-risks. If we are almost certainly going to be massively devastated by climate change (which seems to be the case) or nuclear war, building advanced intelligences might be the better option, provided the x-risk of doing so is less than the x-risk of those events.
|
Yeah, the Culture series is a great set of novels that shows how AI and humankind (and alienkind) can coexist and thrive. The key points are treating AIs as equals to humans, holding every life in high regard, and providing the basic necessities outside a capitalist society.
Your risk-weighing also seems productive. Yes, we should consider that AI can help us with, for instance, climate change. But we shouldn't rely on it alone and should pursue other avenues as well.
|
Yes, our future AI-driven economy definitely needs to be post-capitalist (and probably stateless as well).