Soundwave said: Not today there isn't, but in the long run, why would you think corporations won't try to merge these intelligences into one to see if they can become even more intelligent? The problem is not even that so much as that an AI, if it gets to the point where it becomes self-learning and self-iterating, will just take over the process without human input needed. You don't ask for permission to learn things, so why would an AI as intelligent as or more intelligent than you ask for permission? It wouldn't work. So yes, you can call it a "community co-op," but really that would eventually be a "government" when you're talking about it having to function for millions of people in a country, and that means your entire existence is now tied to being obedient to said state. Don't think you'd be able to criticize it for too long before all that is shut down.
The reason I don't think this is likely is that I build machine learning models for a living. Specialized models almost always outperform generalized models at specific tasks. Sometimes generalization helps improve performance, but then you fine-tune that generalized model again, and some of the generalization is lost in the process. So even when we have AGIs (which is a poorly defined concept), most automated specialized tasks will likely be performed by narrow intelligences, or by specialized general intelligences that have been fine-tuned for those tasks and are then no longer directly part of a singleton.
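To make the fine-tuning point concrete, here's a toy NumPy sketch (my own illustration, with made-up tasks, not anyone's production setup): a tiny logistic regression is "pretrained" on task A, then fine-tuned on a conflicting task B, and its task-A performance degrades in the process. That's the loss-of-generalization effect I mean, in miniature.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_loss(w, X, y):
    p = sigmoid(X @ w)
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def train(w, X, y, lr=0.5, steps=200):
    # plain batch gradient descent on the logistic loss
    for _ in range(steps):
        p = sigmoid(X @ w)
        w = w - lr * X.T @ (p - y) / len(y)
    return w

# Task A: label depends on feature 0; task B: label depends on feature 1.
X = rng.normal(size=(500, 2))
y_a = (X[:, 0] > 0).astype(float)
y_b = (X[:, 1] > 0).astype(float)

w = np.zeros(2)
w = train(w, X, y_a)             # "pretrain" on task A
loss_a_before = log_loss(w, X, y_a)

w = train(w, X, y_b)             # fine-tune the same weights on task B
loss_a_after = log_loss(w, X, y_a)

# task-A loss rises after fine-tuning on task B
print(loss_a_before, loss_a_after)
```

Real models are vastly bigger and the forgetting is subtler, but the mechanism is the same: gradient updates toward the new task pull the weights away from whatever made the model good at the old one.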
Furthermore, even if every corporation wanted to build a singleton, there are millions of corporations in the world. So how does the singleton form from millions of different corporate singletons all with different interests?
The whole concept of the singleton depends on there being an AGI that gains almost unlimited power within the span of months (before another AGI can be developed). But being more intelligent than humans doesn't mean such an entity is unconstrained by physical reality.