sc94597 said:
No, it doesn't follow that "eventually there likely will be one that outstrips the others." Being intelligent at protein-folding is very different from being intelligent at astrophysics. Again, a general rule we know in AI research is that specialized and fine-tuned intelligences outperform jack-of-all-trades intelligences at their specialized task. So even with AGIs you'll have some AGIs that are better at protein-folding than others, but not necessarily better at astrophysics. Even super-intelligences have constraints.
If I have a great model intelligence for driving (which means it's good at "seeing" and reacting to the world), why wouldn't I then try to train the AI to do other things? An AI model that can "see" very well while driving could adapt that capability to a model that does ... surgery, for example. No? So now you start training it to do surgery as well. After all, you have to keep that market share growing; there's only so much money to be made from self-driving cars. That's just a small example.
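To make the "adapt the seeing part" point concrete, here is a minimal transfer-learning sketch in PyTorch: reuse a pretrained vision backbone as the perception layer and train only a new head for a different task. The ImageNet weights stand in for the "driving vision" model, and the 12-class surgical-instrument task is a purely hypothetical example, not anyone's actual pipeline.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a backbone pretrained on a perception task (ImageNet weights here
# stand in for a model that already "sees" well, e.g. from driving).
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

# Freeze the visual feature extractor so its learned "seeing" is kept as-is.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final classification layer with a head for the new task,
# e.g. a hypothetical 12-class surgical-instrument recognition problem.
num_features = backbone.fc.in_features
backbone.fc = nn.Linear(num_features, 12)

# Only the new head's parameters are trained for the new domain.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```

The point of the sketch is just that the expensive visual competence transfers; only the relatively small task-specific head has to be retrained.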
See, this is part and parcel of the problem: human beings are of limited intelligence themselves. They can't see the future very well, and they aren't terribly good at seeing the consequences of their actions outside of a very narrow view. A self-learning AI especially could eat their lunch very quickly.
