Soundwave said:

If I have a great model intelligence for driving (which means it's good at "seeing" and reacting to the world), why wouldn't I then try and train the AI to do other things? An AI model that can "see" visually very well while driving could adapt that aspect for an AI model that can do ... surgery, for example. No? That's just a small example. 

See, this is part and parcel of the problem: human beings are of limited intelligence themselves. They can't see, or aren't terribly good at seeing, the consequences of actions they take outside of a very narrow view. A self-learning AI especially could eat their lunch very quickly.

A hawk is good at seeing and reacting to the world. That doesn't mean it is a good surgeon. Its brain is highly specialized for hunting.

Likewise, machine learning models have architectures. An architecture designed for learning how to drive isn't necessarily the same architecture that is good at surgery. And sure, you might be able to design an AI that does both, but it will almost certainly be worse at either task than an AI trained on driving or surgery alone (given the same architecture).
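To make that capacity trade-off concrete, here is a minimal PyTorch sketch (the class name MultiTaskNet, the layer sizes, and the 10-dim task outputs are all made up for illustration): a single shared backbone feeding two task heads has to split its fixed capacity between tasks, whereas a single-task network of the same size devotes everything to one job.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """One shared backbone, two task-specific heads.

    With fixed capacity, the backbone's features must serve both tasks
    at once, so each task gets less dedicated capacity than it would in
    a single-task network of the same size.
    """

    def __init__(self, in_dim=128, hidden=256, drive_out=10, surgery_out=10):
        super().__init__()
        # Shared feature extractor ("seeing" the world)
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Task-specific heads compete for the same shared features
        self.drive_head = nn.Linear(hidden, drive_out)
        self.surgery_head = nn.Linear(hidden, surgery_out)

    def forward(self, x, task):
        features = self.backbone(x)
        return self.drive_head(features) if task == "drive" else self.surgery_head(features)

# Same input, two different task outputs from one set of shared weights
model = MultiTaskNet()
x = torch.randn(4, 128)
print(model(x, "drive").shape)    # torch.Size([4, 10])
print(model(x, "surgery").shape)  # torch.Size([4, 10])
```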

Heck, even within the same architecture we see the advantages of specialization vs. generalization. LLMs used in industry and fine-tuned on domain-specific data surpass GPT-4 at tasks related to that specific data. You can even train LLMs that surpass GPT-4 at specific tasks (say, coding) without surpassing it generally.
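For what it's worth, that "fine-tune a smaller model on domain data" pattern is pretty mundane in practice. A rough sketch with the Hugging Face Transformers library, assuming a placeholder base model ("gpt2") and a hypothetical domain_corpus.txt of in-domain text, would look something like this:

```python
# Rough sketch of domain-specific fine-tuning with Hugging Face Transformers.
# The base model and dataset file are placeholders; any small causal LM and a
# corpus of domain text (in-house code, medical notes, etc.) would do.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

model_name = "gpt2"  # placeholder base model; swap in any open-weights LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Hypothetical domain corpus: a plain-text file of domain-specific examples
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-tuned",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # every gradient update goes toward the one narrow domain
```

The point of the sketch is the last line: all of the model's additional training budget is spent on one domain, which is exactly why the specialist can beat a far larger generalist there while falling behind it everywhere else.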

And of course this all makes sense. Learning one thing has an opportunity cost: while you spend time learning it, you aren't learning everything else, but you do learn that one thing very well. And then there are the issues of information asymmetry and the knowledge problems that arise in centralized systems.

Being super-human =/= being unconstrained or having no opportunity costs.

Last edited by sc94597 - on 04 March 2024