
(Rumor) PlayStation 6 to be enhanced by generative AI. To feature Ray Reconstruction and Path Tracing

Soundwave said:

If I have a great AI model for driving (which means it's good at "seeing" and reacting to the world), why wouldn't I then try to train the AI to do other things? An AI model that can "see" visually very well while driving could adapt that aspect for an AI model that can do ... surgery, for example. No? That's just a small example.

See, this is part and parcel of the problem: human beings are of limited intelligence themselves; they can't see, or aren't terribly good at seeing, the consequences of actions they take outside of a very narrow view. A self-learning AI especially could eat their lunch very quickly.

A hawk is good at seeing and reacting to the world. That doesn't mean it is a good surgeon. Its brain is highly specialized for hunting.

Likewise, machine learning models have architectures. An architecture designed for learning how to drive isn't necessarily the same architecture that is good at surgery. And sure, you might be able to design an AI that does both, but it will almost certainly be worse at either than an AI that is trained on driving or surgery alone (given the same architecture).

Heck, even within the same architecture we see the advantages of specialization vs. generalization. The LLMs that are used in industry and fine-tuned on domain-specific data surpass GPT-4 at tasks related to that specific data. Or you can even train LLMs that surpass GPT-4 at specific tasks (say, coding) without surpassing it generally.

And of course this all makes sense. Learning one thing has an opportunity cost in that you aren't learning everything else while you spend time learning that one thing, but you do learn that one thing very well. And then there is the issue of information asymmetry and the knowledge problems that arise in centralized systems.

Being super-human ≠ being unconstrained or having no opportunity costs.
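
To make the fine-tuning point above concrete, here is a minimal sketch of specializing a small open model on a domain corpus. The base model name, data file, and hyperparameters are placeholders for illustration, not a claim about what any particular company actually runs:

```python
# Hypothetical sketch: fine-tune a small causal LM on domain-specific text
# using Hugging Face transformers. "gpt2" and "domain_corpus.txt" are
# placeholders for whatever base model and in-house data you actually have.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token          # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# One plain-text file of domain documents (contracts, code, case notes, ...).
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_set = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-llm",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=train_set,
    # Causal LM objective (mlm=False): learn to predict the next token.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the checkpoint in ./domain-llm is now specialized for that corpus
```

The result is a model that can outperform a far larger generalist on that corpus's tasks while remaining worse at everything else, which is exactly the opportunity-cost trade-off described above.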

Last edited by sc94597 - on 04 March 2024

sc94597 said:
Soundwave said:

If I have a great AI model for driving (which means it's good at "seeing" and reacting to the world), why wouldn't I then try to train the AI to do other things? An AI model that can "see" visually very well while driving could adapt that aspect for an AI model that can do ... surgery, for example. No? That's just a small example.

See, this is part and parcel of the problem: human beings are of limited intelligence themselves; they can't see, or aren't terribly good at seeing, the consequences of actions they take outside of a very narrow view. A self-learning AI especially could eat their lunch very quickly.

A hawk is good at seeing and reacting to the world. That doesn't mean it is a good surgeon. Its brain is highly specialized for hunting.

Likewise, machine learning models have architectures. An architecture designed for learning how to drive isn't necessarily the same architecture that is good at surgery. And sure, you might be able to design an AI that does both, but it will almost certainly be worse at either than an AI that is trained on driving or surgery alone (given the same architecture).

Heck, even within the same architecture we see the advantages of specialization vs. generalization. The LLMs that are used in industry and fine-tuned on domain-specific data surpass GPT-4 at tasks related to that specific data. Or you can even train LLMs that surpass GPT-4 at specific tasks (say, coding) without surpassing it generally.

And of course this all makes sense. Learning one thing has an opportunity cost in that you aren't learning everything else while you spend time learning that one thing, but you do learn that one thing very well. And then there is the issue of information asymmetry and the knowledge problems that arise in centralized systems.

Being super-human ≠ being unconstrained or having no opportunity costs.

A hawk has no incentive for wanting to be a surgeon though, lol. The owner of an AI model would be highly incentivized to try to expand it to as many things as possible; we already see this even today. Was text prompting and static image creation enough for OpenAI and ChatGPT? No. They're not content with just that. Now, a year later, they are trying to move into movies with Sora, not content to just stick to creating images from prompts. This time next year they'll probably be touting something else it can do.

And even within the Sora presentation, they showed off procedurally generated video games like a Minecraft demo. So obviously they didn't get the memo that their AI algorithms are only supposed to specialize in one thing and one thing only. 

What do you think is going to be more popular? An AI that can only spit out images, or one that does several different things very well? It's not terribly difficult to predict which one will rapidly get more attention and more market share (and thus funding, creating a snowball effect). Google isn't the No. 1 search engine because it searches for one type of thing better than everyone else; it's the No. 1 search engine because it searches for everything, on average, better than any competing search engine, and it has thus gained the leading "market share" in internet search and maintained it for years now.

Last edited by Soundwave - on 04 March 2024

Soundwave said:
sc94597 said:

There are many alternatives to state-socialism. 

My more optimistic bet, after a period of instability, is that we'll see the abolition of most intellectual property and of private ownership of natural resources, and most production will be automated peer production for direct use. Prices for things will approach their marginal cost, which itself will approach 0. There might still exist markets, but they won't be for commodities; rather, they'll be for specialized goods.

The printing press wasn't a purely additive piece of technology. It fundamentally changed the social and religious systems of Europe. Its invention, for example, allowed Protestantism to rise in Europe and was a precursor to the transition into capitalism and wage labor. Almost every major European war in the 16th and 17th centuries could be attributed to its invention.

Who will make the "things" that have "prices" on them, and where will the people "buying" those "things" get that "currency" from? It will likely have to be a centralized organization ... or "government". Or will some "god-like" AI also run that?

And what happens, perchance, when such an AI decides it would be better off without human beings, or at least without so many of them?

^This!! We can't trust AI with creative fields, let alone our lives. How many times do I have to say The Terminator was a warning? ChatGPT is a stepping stone towards Skynet.



Soundwave said:
sc94597 said:

A hawk is good at seeing and reacting to the world. That doesn't mean it is a good surgeon. Its brain is highly specialized for hunting.

Likewise, machine learning models have architectures. An architecture designed for learning how to drive isn't necessarily the same architecture that is good at surgery. And sure, you might be able to design an AI that does both, but it will almost certainly be worse at either than an AI that is trained on driving or surgery alone (given the same architecture).

Heck, even within the same architecture we see the advantages of specialization vs. generalization. The LLMs that are used in industry and fine-tuned on domain-specific data surpass GPT-4 at tasks related to that specific data. Or you can even train LLMs that surpass GPT-4 at specific tasks (say, coding) without surpassing it generally.

And of course this all makes sense. Learning one thing has an opportunity cost in that you aren't learning everything else while you spend time learning that one thing, but you do learn that one thing very well. And then there is the issue of information asymmetry and the knowledge problems that arise in centralized systems.

Being super-human ≠ being unconstrained or having no opportunity costs.

A hawk has no incentive for wanting to be a surgeon though, lol. The owner of an AI model would be highly incentivized to try to expand it to as many things as possible; we already see this even today. Was text prompting and static image creation enough for OpenAI and ChatGPT? No. They're not content with just that. Now, a year later, they are trying to move into movies with Sora, not content to just stick to creating images from prompts. This time next year they'll probably be touting something else it can do.

And even within the Sora presentation, they showed off procedurally generated video games like a Minecraft demo. So obviously they didn't get the memo that their AI algorithms are only supposed to specialize in one thing and one thing only. 

What do you think is going to be more popular? An AI that can only spit out images, or one that does several different things very well? It's not terribly difficult to predict which one will rapidly get more attention and more market share (and thus funding, creating a snowball effect). Google isn't the No. 1 search engine because it searches for one type of thing better than everyone else; it's the No. 1 search engine because it searches for everything, on average, better than any competing search engine, and it has thus gained the leading "market share" in internet search and maintained it for years now.

The thing limiting the hawk from becoming a surgeon isn't its lack of incentive, lol. 

The owner of the AI model would be even more incentivized to create a sub-model that better performs the task for less cost (training time and resources).

Sora isn't doing anything special when it generates video games. A video game is essentially a video with controllable inputs, after all.

You wouldn't use Sora to generate text, for example. It is an entirely different category of model from an LLM (a diffusion model that uses CNNs to noise/denoise vs. a transformer-based text-to-text generator trained using RLHF).

That is what I mean when I suggest architecture matters. For example, when you ask ChatGPT to produce images, what it really does is call a diffusion model (DALL-E) to do it. You don't train GPT to produce images; ChatGPT passes a text prompt to DALL-E, DALL-E produces the image, and ChatGPT returns the image to you. Why? Because rather than training one model to do everything, we get better results by training multiple different models that are good at different things and then querying them.
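
A rough sketch of that dispatch pattern, with invented function names standing in for the real models (this is not OpenAI's actual API or internal wiring, just an illustration of routing requests to specialists):

```python
# Illustrative only: the function names and routing rule are made-up stand-ins,
# not how ChatGPT and DALL-E are actually connected internally.
def generate_text(prompt: str) -> str:
    """Stand-in for a transformer-based text model (the LLM)."""
    return f"[text answer to: {prompt}]"

def generate_image(prompt: str) -> bytes:
    """Stand-in for a separate diffusion-based image model."""
    return f"[image bytes for: {prompt}]".encode()

def assistant(prompt: str):
    # The chat front-end doesn't paint pixels itself; it decides which
    # specialist fits the request, queries it, and returns the result.
    wants_image = prompt.lower().startswith(("draw", "paint")) or "image of" in prompt.lower()
    return generate_image(prompt) if wants_image else generate_text(prompt)

print(assistant("Explain opportunity cost in one sentence."))
print(assistant("Draw a hawk performing surgery."))
```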

These are basic things that anyone talking about the topic should know.

Last edited by sc94597 - on 04 March 2024

CaptainExplosion said:
Soundwave said:

Who will make the "things" that have "prices" on them, and where will the people "buying" those "things" get that "currency" from? It will likely have to be a centralized organization ... or "government". Or will some "god-like" AI also run that?

And what happens, perchance, when such an AI decides it would be better off without human beings, or at least without so many of them?

^This!! We can't trust AI with creative fields, let alone our lives. How many times do I have to say The Terminator was a warning? ChatGPT is a stepping stone towards Skynet.

You might as well quit video games then. DLSS is generative AI. 



Zkuq said:
CaptainExplosion said:

And incorporating more AI into game development is a step towards AI killing us all; I wish I were making it up. You let AI advance too far and it will not only take away jobs, but it will get people killed. It can't be trusted in art, filmmaking, or game development, so it damn well can't be trusted in government or military activities.

Companies need to abandon AI before we're all in even more danger.

AI will absolutely not be abandoned, period. It's too lucrative in many ways, and you can bet on everyone else pursuing AI, so you don't really want to get left behind either. It might be possible to get individual nations not to utilize AI, but you can't get the whole world to do that, and if your fear is AI killing us all, that's not even nearly good enough. The best you can probably realistically hope for is guiding AI development in a better direction, whatever exactly that might be.

Also, I fully expect AI to kill people sooner rather than later, if it hasn't already. I'm sure AI development for military purposes is in full speed and has been for a good while.

Well over 30,000 people have been killed so far in Gaza, and another 8,000 are missing under the rubble.



https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/

Autonomous killer drones also already exist, ready to be unleashed

https://www.wsj.com/video/valkyrie-this-autonomous-ai-drone-could-be-the-militarys-next-weapon/69F7725D-45F1-4464-8271-7A07DB74227A

https://moderndiplomacy.eu/2023/11/26/a-i-controlled-killer-drones-become-reality/

AI is very efficient at wiping entire families off the map. Israel uses it to simultaneously bomb different houses of the same families to wipe them off the Earth. Genocide by AI.



CaptainExplosion said:

^This!! We can't trust AI with creative fields, let alone our lives. How many times do I have to say The Terminator was a warning? ChatGPT is a stepping stone towards Skynet.

Slippery slope arguments are a logical fallacy.

A movie does not real life make.



--::{PC Gaming Master Race}::--

Pemalite said:
CaptainExplosion said:

^This!! We can't trust AI with creative fields, let alone our lives. How many times do I have to say The Terminator was a warning? ChatGPT is a stepping stone towards Skynet.

Slippery slope arguments are a logical fallacy.

A movie does not real life make.

The movies are actually, in a way, somewhat part of the problem ... what's the first thing children are taught when they're frightened by a movie? "That can't happen in real life, it's just a movie." So people chuckle and make jokes, and it becomes this thing that can only happen in movies. Even James Cameron (director/writer of Terminator 1/2) has stated this.

To quote another famous James Cameron film (this quote is actually from real life, I believe):

"This ship is unsinkable. God himself could not sink this ship ..." - human fallacy

Last edited by Soundwave - on 05 March 2024

sc94597 said:
CaptainExplosion said:

^This!! We can't trust AI with creative fields, let alone our lives. How many times do I have to say The Terminator was a warning? ChatGPT is a stepping stone towards Skynet.

You might as well quit video games then. DLSS is generative AI. 

Lol, exactly.  People have already been supporting AI and more is coming.  AI is the future.  

There is irony in the S2 fans being super excited about DLSS 3.5 while saying how much they hate AI.



i7-13700k

Vengeance 32 gb

RTX 4090 Ventus 3x E OC

Switch OLED

Chrkeller said:
sc94597 said:

You might as well quit video games then. DLSS is generative AI. 

Lol, exactly.  People have already been supporting AI and more is coming.  AI is the future.  

I doubt it will be the glorious future its advocates think it will be. There is a lot more bad coming than good.