
Forums - Sony Discussion - (Rumor) Playstation 6 to be enhanced by generative AI. To feature Ray Reconstruction and Path Tracing

Soundwave said:

The "moat" will be mass protests in the streets, riots, and overthrows of governments and corporations that stop this stuff from ever reaching that point. 

When it's 100,000 people losing their jobs, fine, OK. 1 million, alright. 10 million ... now it's getting dicey. With millions of people in every country, and young people having no chance of ever getting a job, you're going to have mass hysteria and people burning and looting stuff all over the place. 

Oh, I think there will be riots and overthrows of governments, but not to ban AI; rather, to abolish intellectual property and ownership of natural resources, and to use AI and robotics to address shared needs. 

Besides, banning open-source models will be as effective as banning piracy. As in not very. 



The_Yoda said:

<SNIP>

That’s it.  It seems that there was a lot more direction and hinting from humans than was detailed in the original system card or in subsequent media reports.  There is also a decided lack of detail (we don’t know what the human prompts were), so it’s hard to evaluate whether GPT-4 “decided” on its own to “lie” to the TaskRabbit worker. 

In talking about AI, multiple people have brought up this example to me as evidence for how AI might get “out of control”.  Loss of control might indeed be an issue in the future, but I’m not sure this example is a great argument for it. 

These "A.I." systems leverage statistical probability to reach a given conclusion rather than actually thinking on a "sentient" level. That means they need to absorb enormous amounts of training data so they can use the law of averages in their favour.
And that means we can influence an A.I. to exhibit particular behaviours based on the data being fed in.

And that means there are going to be points where interactions don't work out as they should, e.g. the "lie", or simple spelling errors in generated images.
Generative A.I. has no sense of self-worth or self-preservation, and is unable to display empathy or other more "human" feelings/emotions.
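A minimal sketch of the "statistical probability" point above: a generative model assigns probabilities to candidate next tokens and samples from them, so likely continuations dominate, but unlikely ones (the occasional "lie" or misspelling) still surface. The token distribution below is invented purely for illustration; it does not come from any real model.

```python
import random

# Toy illustration (not a real model): a generative language model assigns
# probabilities to candidate next tokens and samples one of them.
# The distribution below is invented for demonstration purposes.
next_token_probs = {
    "dog": 0.55,      # common continuation in the (hypothetical) training data
    "cat": 0.30,
    "axolotl": 0.15,  # rare continuation
}

def sample_next_token(probs, rng):
    """Pick one token with probability proportional to its weight."""
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed so the sketch is reproducible
samples = [sample_next_token(next_token_probs, rng) for _ in range(1000)]

# The likeliest token dominates, but rare tokens still appear --
# one reason generated output sometimes "doesn't work out as it should".
assert samples.count("dog") > samples.count("axolotl") > 0
```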

Basically, generative A.I. is very good at specific tasks, e.g. generating images, thanks to the fact that it's built on probability/statistics.
We will need a ton of these "narrow" machine-learning algorithms feeding into a larger whole for A.I. to become fully realised; there's a long way to go yet.
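One hedged way to picture "a ton of narrow algorithms feeding into a larger whole" is a dispatcher that routes each request to a specialist model. Everything here (function names, task types) is hypothetical stub code, not any real system.

```python
# Hypothetical sketch: several narrow "models" (stubs here) combined
# behind a simple router. All names and tasks are illustrative.
def caption_image(data):
    """Stand-in for a narrow image-captioning model."""
    return f"caption for {data}"

def transcribe_audio(data):
    """Stand-in for a narrow speech-to-text model."""
    return f"transcript of {data}"

SPECIALISTS = {
    "image": caption_image,
    "audio": transcribe_audio,
}

def route(task_type, data):
    """Dispatch a request to the specialist trained for that task."""
    if task_type not in SPECIALISTS:
        raise ValueError(f"no specialist for task: {task_type}")
    return SPECIALISTS[task_type](data)

assert route("image", "photo.png") == "caption for photo.png"
assert route("audio", "clip.wav") == "transcript of clip.wav"
```

The "larger whole" is just the router; each capability lives in its own narrow component, matching the point that general ability can be assembled from specialists.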

There is a ton of hyperbole around A.I. right now; we need to remember the technology is still in its infancy. Give it a few more years and things will get extremely interesting... but just the word "A.I." is going to sell stuff because it's the current "cool" thing. 



--::{PC Gaming Master Race}::--

sc94597 said:
Soundwave said:

The "moat" will be mass protests in the streets, riots, and overthrows of governments and corporations that stop this stuff from ever reaching that point. 

When it's 100,000 people losing their jobs, fine, OK. 1 million, alright. 10 million ... now it's getting dicey. With millions of people in every country, and young people having no chance of ever getting a job, you're going to have mass hysteria and people burning and looting stuff all over the place. 

Oh, I think there will be riots and overthrows of governments, but not to ban AI; rather, to abolish intellectual property and ownership of natural resources, and to use AI and robotics to address shared needs. 

Besides, banning open-source models will be as effective as banning piracy. As in not very. 

If people are angry, I wouldn't be so sure it's just government buildings that get targeted. Burning down buildings works just as well at OpenAI's and Microsoft's offices. They will get the point pretty fast. 

My personal feeling is this won't ever be allowed to develop to the point where it can crash the economy; there's no incentive in that for anyone, not even the mega-rich. 

There will eventually be laws that forbid corporations from firing people and using AI instead, but even more than that, I think the societal backlash will be so intense that it won't even need to be legislated. Even corporations will go along with it: yes, you save on labour costs, but at the end of the day, what is the point of any product (cheaply made or not) when there is no one to buy it because no one has a job? 

Last edited by Soundwave - on 04 March 2024

Soundwave said:

Not today there isn't, but in the long run, why would you think corporations won't try to merge these intelligences into one to see if they can become even more intelligent? The bigger problem is that an AI, if it gets to the point where it is self-learning and self-iterating, will just take over the process without any human input needed. You don't ask for permission to learn things; why would an AI as intelligent as you, or more so, ask for permission? 

It wouldn't work. So yes, you can call it a "community co-op", but really that would eventually be a "government" when you're talking about something that has to function for millions of people in a country, and that means your entire existence is now tied to being obedient to said state. Don't think you'd be able to criticize it for long before all that is shut down. 

The reason I don't think this is likely is that I build machine learning models for a living. Specialized models almost always outperform generalized models at specific tasks. Sometimes generalization helps improve performance, but then you fine-tune that generalized model again, and some of the generalization is lost in the process. So even when we have AGIs (itself a poorly defined concept), most automated specialized tasks will likely be performed by narrow intelligences, or by specialized general intelligences that have been fine-tuned to perform them and are then no longer directly part of a singleton. 
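The specialize-then-lose-generality trade-off described above can be caricatured with a one-parameter "model" and plain gradient descent. This is a toy illustration of the idea (related to what the literature calls catastrophic forgetting), not a real training run; all targets and learning rates are made up.

```python
# Toy "model": a single parameter w, trained by gradient descent on a
# squared-error loss. Targets and learning rate are illustrative only.
def loss(w, target):
    return (w - target) ** 2

w = 0.0
general_target, narrow_target = 1.0, 3.0  # stand-ins for two tasks

# "Pre-train" toward the general objective.
for _ in range(200):
    w -= 0.1 * 2 * (w - general_target)
general_loss_before = loss(w, general_target)  # ~0 after pre-training

# "Fine-tune" toward the narrow task.
for _ in range(200):
    w -= 0.1 * 2 * (w - narrow_target)

# The fine-tuned model beats the purely general one at the narrow task...
assert loss(w, narrow_target) < loss(general_target, narrow_target)
# ...but it is now worse at the general objective than before fine-tuning.
assert loss(w, general_target) > general_loss_before
```

The caricature only shows the direction of the effect: pulling the parameters toward a narrow objective moves them away from the general one.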

Furthermore, even if every corporation wanted to build a singleton, there are millions of corporations in the world. So how does the singleton form from millions of different corporate singletons all with different interests? 

The whole concept of the singleton depends on there being an AGI that gains almost unlimited power within the span of months (before another AGI can be developed). But being more intelligent than humans doesn't mean such an entity is unconstrained by physical reality. 



Soundwave said:

My personal feeling is this won't ever be allowed to develop to the point where it can crash the economy; there's no incentive in that for anyone, not even the mega-rich. 

There will eventually be laws that forbid corporations from firing people and using AI instead, but even more than that, I think the societal backlash will be so intense that it won't even need to be legislated. Even corporations will go along with it: yes, you save on labour costs, but at the end of the day, what is the point of any product (cheaply made or not) when there is no one to buy it because no one has a job? 

The problem with the idea that it won't be developed is that countries compete just as much as corporations do. We've already seen countries with rapidly declining birthrates but very little immigration pass bills protecting A.I. development (e.g. Japan), and that is going to be the main reason it is developed. When you don't have enough working-age people, you're going to have to sustain productivity somehow. 

If any single country allows it, then other countries are going to have to if they want to stay competitive. 



sc94597 said:
Soundwave said:

Not today there isn't, but in the long run, why would you think corporations won't try to merge these intelligences into one to see if they can become even more intelligent? The bigger problem is that an AI, if it gets to the point where it is self-learning and self-iterating, will just take over the process without any human input needed. You don't ask for permission to learn things; why would an AI as intelligent as you, or more so, ask for permission? 

It wouldn't work. So yes, you can call it a "community co-op", but really that would eventually be a "government" when you're talking about something that has to function for millions of people in a country, and that means your entire existence is now tied to being obedient to said state. Don't think you'd be able to criticize it for long before all that is shut down. 

The reason I don't think this is likely is that I build machine learning models for a living. Specialized models almost always outperform generalized models at specific tasks. Sometimes generalization helps improve performance, but then you fine-tune that generalized model again, and some of the generalization is lost in the process. So even when we have AGIs (itself a poorly defined concept), most automated specialized tasks will likely be performed by narrow intelligences, or by specialized general intelligences that have been fine-tuned to perform them and are then no longer directly part of a singleton. 

Furthermore, even if every corporation wanted to build a singleton, there are millions of corporations in the world. So how does the singleton form from millions of different corporate singletons all with different interests? 

The whole concept of the singleton depends on there being an AGI that gains almost unlimited power within the span of months (before another AGI can be developed). But being more intelligent than humans doesn't mean such an entity is unconstrained by physical reality. 

How much more advanced is the internet today than it was 30 years ago? Completely night and day. 

And the internet can't really "think" or improve itself; all internet advancement has come from humans. 

An AI can't really think and improve itself to any great degree today, perhaps ... but can you guarantee that for the next 15 years? 20 years? 30 years? 40 years? 50 years? Once it can do that, how much more rapidly could it develop from that point? We all know how laughably basic computers looked 50 years ago. 



Soundwave said:

How much more advanced is the internet today than it was 30 years ago? Completely night and day. 

And the internet can't really "think" or improve itself; all internet advancement has come from humans. 

An AI can't really think and improve itself to any great degree today, perhaps ... but can you guarantee that for the next 15 years? 20 years? 30 years? 40 years? 50 years? Once it can do that, how much more rapidly could it develop from that point? We all know how laughably basic computers looked 50 years ago. 

I am not arguing against there eventually being AIs that can think and self-improve. I am arguing against the idea that there will be a single AI (a "singleton") that does this. I am also arguing that what is currently called "AI" and iterated upon doesn't do this, and nobody knows how to make one that can. We'll have many advancements and much automation before we know how to create one. And if we create one, we will likely create many, all with different interests, identities, and capacities. There is no reason to suspect they'll be aligned with each other and coordinate. 



sc94597 said:
Soundwave said:

How much more advanced is the internet today than it was 30 years ago? Completely night and day. 

And the internet can't really "think" or improve itself; all internet advancement has come from humans. 

An AI can't really think and improve itself to any great degree today, perhaps ... but can you guarantee that for the next 15 years? 20 years? 30 years? 40 years? 50 years? Once it can do that, how much more rapidly could it develop from that point? We all know how laughably basic computers looked 50 years ago. 

I am not arguing against there eventually being AIs that can think and self-improve. I am arguing against the idea that there will be a single AI (a "singleton") that does this. I am also arguing that what is currently called "AI" and iterated upon doesn't do this, and nobody knows how to make one that can. We'll have many advancements and much automation before we know how to create one. And if we create one, we will likely create many, all with different interests, identities, and capacities. 

Ultimately, I don't think it even matters whether it's one AI or different strands of it; eventually there will likely be "one" that outstrips the others and can either do what the other AI models can do or be quickly taught everything they know. 

I mean, if I were an AI developer and I developed the best AI ever for, say ... car driving, once it's mastered that, what do you think the next thought in my mind is going to be? "Hmmm ... maybe now it can do this." And then I click on an article and see that a competitor's AI can not only drive cars but also do 20 other things. 

It's not terribly hard to see how this could converge in a hurry. 



Soundwave said:

Ultimately, I don't think it even matters whether it's one AI or different strands of it; eventually there will likely be "one" that outstrips the others and can either do what the other AI models can do or be quickly taught everything they know. 

I mean, if I were an AI developer and I developed the best AI ever for, say ... car driving, once it's mastered that, what do you think the next thought in my mind is going to be? "Hmmm ... maybe now it can do this." And then I click on an article and see that a competitor's AI can not only drive cars but also do 20 other things. 

It's not terribly hard to see how this could converge in a hurry. 

No, it doesn't follow that "eventually there likely will be one that outstrips the others." Being intelligent at protein folding is very different from being intelligent at astrophysics.

Again, a general rule in AI research is that specialized and fine-tuned intelligences outperform jack-of-all-trades intelligences at their specialized tasks. 

So even with AGIs, you'll have some that are better at protein folding than others, but not necessarily better at astrophysics. Even super-intelligences have constraints. 



sc94597 said:
Soundwave said:

Ultimately, I don't think it even matters whether it's one AI or different strands of it; eventually there will likely be "one" that outstrips the others and can either do what the other AI models can do or be quickly taught everything they know. 

I mean, if I were an AI developer and I developed the best AI ever for, say ... car driving, once it's mastered that, what do you think the next thought in my mind is going to be? "Hmmm ... maybe now it can do this." And then I click on an article and see that a competitor's AI can not only drive cars but also do 20 other things. 

It's not terribly hard to see how this could converge in a hurry. 

No, it doesn't follow that "eventually there likely will be one that outstrips the others." Being intelligent at protein folding is very different from being intelligent at astrophysics.

Again, a general rule in AI research is that specialized and fine-tuned intelligences outperform jack-of-all-trades intelligences at their specialized tasks. 

So even with AGIs, you'll have some that are better at protein folding than others, but not necessarily better at astrophysics. Even super-intelligences have constraints. 

If I have a great intelligence model for driving (which means it's good at "seeing" and reacting to the world), why wouldn't I then try to train the AI to do other things? An AI model that can "see" very well while driving could adapt that capability to, say ... surgery. No? Now you start training it to also do surgery. After all, you have to keep that market share growing; there's only so much money to be made from self-driving cars. That's just a small example. 

See, this is part and parcel of the problem: human beings are of limited intelligence themselves. They can't see the future very well and aren't terribly good at seeing the consequences of their actions outside of a very narrow view; a self-learning AI especially could eat their lunch very quickly.