Mnementh said: Hehe, I don't know whether we've hit another ceiling or not. And even if we have, with the energy currently being poured into AI development we could break through ceilings with ideas like the ones you are illustrating. I am just saying I wouldn't be too sure about it either way. And yes, something post-capitalist is happening whether we want it or not, but we have to decide whether to go into tech-feudalism with the Sam Altmans and Elon Musks as our new kings, or toward a Culture-like future. Obviously I would prefer the latter. But for that I think it is also essential to treat AIs as equals. There is much talk about AGI with regard to threats to humanity or technological possibility. But we have to consider that a true AI is also basically a person. We need to talk about the moral implications, and that is something I don't see yet.
I think the assessment that AI research is a series of s-curves is probably correct. We scale exponentially until we hit a ceiling, then we move in a new direction where we scale exponentially again, until we hit the next ceiling. That model has described the post-2008 situation pretty well.
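For what it's worth, the "series of s-curves" picture is easy to sketch numerically: each paradigm is a logistic curve (exponential early, saturating at its own ceiling), and total progress is their sum. The specific ceilings, rates, and midpoints below are made-up parameters purely for illustration, not a model of actual AI progress:

```python
import math

def logistic(t, ceiling, rate, midpoint):
    """One s-curve: grows roughly exponentially early on,
    then flattens out as it approaches its ceiling."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

def stacked_progress(t):
    """Hypothetical 'series of s-curves': each new paradigm kicks in
    around the time the previous one saturates.
    (ceiling, rate, midpoint) triples are invented for illustration."""
    waves = [(1.0, 1.0, 5), (2.0, 1.0, 15), (4.0, 1.0, 25)]
    return sum(logistic(t, c, r, m) for c, r, m in waves)

# Around t=10 progress stalls near the first ceiling (~1.0),
# then the next wave drives another burst of rapid growth.
```

Plotting `stacked_progress` over `t` shows the characteristic pattern the post describes: plateaus at each ceiling, punctuated by renewed rapid growth when a new direction opens up.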
It is a good thing a lot of the talent in the industry is moving to companies that are more alignment-motivated, like Anthropic and now, hopefully, Safe Superintelligence Inc. The Musks and Altmans are more of the same parasitic class imo, and they're nothing without the talent they parasitize. The alignment problem is, at the end of the day, a technical problem, which means the actual technical workers designing the AI (I'm including philosophers specialized in meta-ethics in this category) are the ones who are going to imbue these systems with their personhood and meta-ethics. This group (unlike the tech leadership) tends toward the libertarian-left politically, with most seeming to be radical small "d" democrats (focused on deliberative democracy, liquid democracy, etc.). That is what gives me a lot of optimism here.
Last edited by sc94597 - 4 hours ago