Mnementh said:
Hmm, there is a strong indication that current AI has run out of training data, because these models have already absorbed *everything* humankind has produced over the millennia. That's why OpenAI trained a speech-to-text AI, so that they can access the content of videos. That doesn't mean there aren't some improvements possible, but we may well have hit another ceiling for the moment and need a new breakthrough, just as this paper was the breakthrough that led to the current AI development. But yes, we need to rethink how to organize labor. There are two scenarios: one where everyone is poor except a few tech billionaires, and one where AI is put to work producing wealth for everyone. And surprisingly, these scenarios were already thought through about a decade ago, in the story Manna by Marshall Brain. I highly recommend that read if you are thinking about the AI future. |
Yes, scaling by adding more data likely won't work anymore, BUT we are starting to see scaling using search-refinement in test-time compute. That is what OpenAI is doing, quite roughly in this first iteration, with their o1 model series.
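The core idea behind that kind of test-time scaling can be shown in a few lines: sample many candidate answers and let a verifier pick the best one, so quality grows with compute spent at inference rather than with training data. This is a minimal best-of-n sketch with toy stand-ins for the model and the verifier (`toy_generate` and `toy_score` are placeholders I made up, not OpenAI's actual method):

```python
import random

def best_of_n(prompt, n, generate, score):
    """Sample n candidate answers and return the highest-scoring one.

    Spending more test-time compute (a larger n, or a deeper search)
    improves the expected answer without retraining the model.
    """
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

# Toy stand-ins: a "model" that emits noisy numeric answers and a
# verifier that scores them. Placeholders only, not a real LLM API.
rng = random.Random(42)
def toy_generate(prompt):
    return rng.gauss(0.0, 1.0)

def toy_score(answer):
    return answer  # pretend larger means better

# With 64 draws, the selected answer is the best of 64 noisy samples.
best = best_of_n("design a shortest-path algorithm", 64, toy_generate, toy_score)
```

Real systems refine this with process reward models or tree search over reasoning steps, but the compute-for-quality trade-off is the same shape.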
I've fed o1-preview some graduate-level algorithm design exams, and it consistently scores in the 90-100% range on them. A few of these algorithm designs are not something it would have been trained on. That is enough for me to think it is capable of some rudimentary reasoning in a way previous LLMs (which would get 50-60% on these exams when I fed them the same questions) could not. We're practically at Level 2 (Reasoners) in OpenAI's roadmap.
Also, there is still no fully realized multi-modal model. The current "multi-modal" models are really a stitching together of various specialized models, so advancement can be made in that direction as well. And of course LLMs aren't the be-all and end-all of AI research; LeCun, for example, is working on his JEPA architecture.
So there is still a lot that can be done to advance things, even without going into topics like neuro-symbolic hybrids (which AlphaGo, AlphaGeometry, and AlphaProteo technically are) and neuromorphic computing in general.
I don't think we'll have another AI winter. The difference between now and previous eras is that past AI winters were mostly caused by funding waves drying up in academia, combined with the fact that we were hardware-limited. AI research is now an industry in itself (which it really wasn't in the past), and that is unlikely to end. There may be bear markets for AI development, but they're going to follow the regular business cycle, and advancements will still happen during them.
Thanks for the Manna link, I'll definitely check it out.
Mnementh said:
Yeah, the Culture cycle is a great set of novels that show how AI and humankind (and alienkind) can coexist and thrive. Key points here are treating AIs as equals to humans, holding every life at great value, and providing the basic necessities outside of a capitalist society. Your risk-weighing also seems productive. Yes, we should consider that AI can help us, for instance, with climate change. But we also should not rely on it, and should try other avenues as well. |
Yes, our future AI-driven economy definitely needs to be post-capitalist (and probably absent of states as well).
Last edited by sc94597 - 6 hours ago