The_Yoda said:

<SNIP>

That’s it. It seems there is a lot more direction and hinting from humans than was detailed in the original system card or in subsequent media reports. There is also a decided lack of detail (we don’t know what the human prompts were), so it’s hard to evaluate whether GPT-4 “decided” on its own to “lie” to the TaskRabbit worker.

In talking about AI, multiple people have brought this example up to me as evidence of how AI might get “out of control”. Loss of control might indeed be an issue in the future, but I’m not sure this example is a great argument for it.

These "A.I." leverage statistical probability to reach a conclusion rather than actually thinking on a "sentient" level. That means they need to absorb enormous amounts of training data so they can use the law of averages in their favour.
And that means we can influence A.I. to exhibit certain behaviours based on the data being fed in.
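To make the "law of averages" point concrete, here's a minimal sketch (purely illustrative, nothing like how GPT-4 is actually built): a toy bigram model that predicts the next word by nothing more than counted frequencies. The corpus and all names here are made up for the example.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; a real model sees billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most common continuation of `word` --
    pure frequency counting, no understanding involved."""
    counts = follow_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("the"))  # "cat" -- it follows "the" most often
```

The model "knows" that "cat" tends to follow "the" only because it counted it, which is also why odd inputs produce odd outputs: the statistics simply don't cover them.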

And that means there are going to be points where interactions don't work out as they should, e.g. the "lie" above, or simple spelling errors in generated images.
Generative A.I. has no sense of self-worth or self-preservation, and is unable to display empathy or other more "human" feelings/emotions.

Basically, generative A.I. is very good at specific tasks, e.g. generating images, thanks to the fact it's built on probability/statistics.
We will need a ton of these "narrow" machine-learning algorithms feeding into a larger whole for A.I. to become fully realised; there's a long way to go yet.

There is a ton of hyperbole around A.I. right now. We need to remember the technology is still in its infancy; give it a few more years and things will get extremely interesting. But just the word "A.I." is going to sell stuff because it's the current "cool" thing.



--::{PC Gaming Master Race}::--