sc94597 said:
Yes, scaling by adding more data likely won't work anymore, BUT we are starting to see scaling via search and refinement in test-time compute. That is what OpenAI is doing, quite roughly in this first iteration, with their o1 model series. Also, there is still no fully realized multi-modal model. The current "multi-modal" models are really a stitching together of various specialized models, so advancement can be made in that direction as well.

And of course, LLMs aren't the be-all and end-all of AI research. LeCun, for example, is working on his JEPA architecture. So there is still a lot that can be done to advance things, even without going into topics like neuro-symbolic hybrids (which AlphaGo, AlphaGeometry, and AlphaProteo technically are) or neuromorphic computing in general.

I don't think we'll have another AI winter. The difference between now and previous eras is that previous AI winters were mostly caused by funding waves drying up in academia, along with the fact that we were hardware-limited. AI research is now an industry in itself (which it really wasn't in the past), and that is unlikely to end. There might be bear markets for AI development, but they're going to follow the regular business cycle, and advancements will still happen during them. Thanks for the Manna link, I'll definitely check it out.
Yes, our future AI-driven economy definitely needs to be post-capitalist (and probably stateless as well).
Hehe, I don't know whether we've hit another ceiling or not. And even if we have, with the energy currently being poured into AI development, we could break through ceilings with ideas like the ones you are describing. I'm just saying I wouldn't be too sure about it either way.
And yes, something post-capitalist is happening whether we want it or not, but we have to decide whether we head into techno-feudalism, with the Sam Altmans and Elon Musks as our new kings, or toward a Culture-like future. Obviously I would prefer the latter.
But for that, I think it is also essential to treat AIs as equals. There is much talk about AGI with regard to threats to humanity or technological feasibility. But we have to consider that a true AI is also, basically, a person. We need to talk about the moral implications, and that is a conversation I don't see happening yet.