
Forums - Sony Discussion - (Rumor) PlayStation 6 to be enhanced by generative AI. To feature Ray Reconstruction and Path Tracing

Soundwave said:
Chrkeller said:

I got it.  AI is bad, except for your ROI.  It is like Trump preaching family values while sleeping with porn stars.  

I didn't invest in any stocks specifically for AI. I own stock in thousands of companies, much of it inherited, and I don't vet every company individually; virtually no investor does that. When you buy even a standard index fund, you buy just that ... an index of hundreds of stocks. 

Your second point there doesn't make sense. Are "family values" somehow not valuable just because Trump preached them while sleeping with porn stars? A disliked politician saying something doesn't mean there's no value or truth in it. 

You're fixated on the wrong things.

I'm probably better insulated than most on this board too. I'm well off, and I'm in an industry where I'm self-employed and don't have to worry about being laid off; no one is going to call me into an office and say I'm fired. I haven't really had to work a day in my life for the last 15 years. The fact that I am still concerned about where this is headed should probably be a warning to people who aren't in that position. Even if this shit goes sideways, I'll still be a lot better off than a lot of people. 

I guess I'm just old school.  I believe people should practice what they preach.  



Chrkeller said:
Soundwave said:

I didn't invest in any stocks specifically for AI. I own stock in thousands of companies, much of it inherited, and I don't vet every company individually; virtually no investor does that. When you buy even a standard index fund, you buy just that ... an index of hundreds of stocks. 

Your second point there doesn't make sense. Are "family values" somehow not valuable just because Trump preached them while sleeping with porn stars? A disliked politician saying something doesn't mean there's no value or truth in it. 

You're fixated on the wrong things.

I'm probably better insulated than most on this board too. I'm well off, and I'm in an industry where I'm self-employed and don't have to worry about being laid off; no one is going to call me into an office and say I'm fired. I haven't really had to work a day in my life for the last 15 years. The fact that I am still concerned about where this is headed should probably be a warning to people who aren't in that position. Even if this shit goes sideways, I'll still be a lot better off than a lot of people. 

I guess I'm just old school.  I believe people should practice what they preach.  

It should worry you when even the people who stand to profit from this are iffy about it. 

I haven't invested in any company on the basis of its AI business; it just happens that many tech companies have become heavily involved in this over the last 2-3 years. Almost anyone putting a standard index-fund buy into their 401k or whatever is going to own Microsoft, Nvidia, etc. That doesn't mean they have no right to talk about what might be the largest society-changing development of the next 30-40 years. 

The fact that Sam Altman, the head of OpenAI, someone who stands to make billions of dollars from this, can't sit in front of a TV camera and state there is no risk and no serious danger involved with this technology, when he has every reason to say it's the greatest thing since sliced bread, should worry people. In fact, in front of Congress, under oath, I believe he straight up admitted AI could wipe out humanity. That's the fucking guy who stands to profit the most from this, lol. That should be setting off some alarm bells for people. 

Last edited by Soundwave - on 06 March 2024

Chrkeller said:

"I hate AI and won't support it!!"

-Switch 2 is based around AI tools

"Well I'm fine with it!"

Pretty standard reaction.  People very quickly accept technology and advancement the second they benefit directly.  

I just accepted it because there's no point in not accepting it. It's going to help companies save massive amounts of money; there's no way they don't use it. I wish it wasn't that way, but it just is. It's like trying to stop the car to protect the horse-and-carriage business. I'm lucky because AI won't be able to replace me in time; I'll be retired before it ever gets that good. But man, I worry about the future for a lot of industries.



Soundwave said:

No one really got COVID right, so perhaps humanity as a species isn't as smart as we think we are. If we were, it would have been stopped before it spread everywhere. We are arrogant, careless, and mistake-prone, even the best of us. That's not anecdotal either. 

We've already established that people as a whole are stupid. It's the individual who can stand out and be smart.

Soundwave said:

There is no past precedent, because where exactly would you get a past precedent? We've had computers for what, really only 70 years? That's nothing. There's nothing to prepare us for what is coming; Siri on an iPhone doesn't prepare you for shit. Nothing in our past is going to tell you anything about this kind of a future. 

70 years is a long time. That's multiple pandemics/epidemics in that time.

However... it's okay to say "we don't know" what will happen in the future. That's far, far better than conjuring up a conspiracy theory based on a movie that is not based on real life.

Soundwave said:

An intelligence that's allowed to keep growing, achieving in years or even decades what took monkeys/apes/humans hundreds of thousands of years of evolution to achieve ... there's no way to predict what it might do, how much it could learn, or what it could be capable of. 

Where is your evidence that an intelligence would be able to escape its walled garden?

A.I. is capable of lots of stuff... and has helped humanity for several decades.

But there is a financial incentive for companies to contain and control A.I. to make money.

Chrkeller said:

Maybe if we had AI we could have predicted, tracked and handled COVID better.

Advanced modelling of pandemics/epidemics has occurred over several centuries.

The issue we have is that countries tend to run with populist leaders, who are more than happy to ignore plans/science in order to make their voter base happy so they can get re-elected for another term.



--::{PC Gaming Master Race}::--

CaptainExplosion said:

How can we be sure the future will be better than I and thousands of others think it will be?

Lots of things are 'AI'. You have been playing against AI all your life ;)

Eliza was considered AI.
Car navigation systems were considered AI.
Google Translate is AI... or is it still?

Eliza didn't take over from psychiatrists.
Car navigation systems did kill the paper map.
Google Translate saves a lot of time, but probably also cost some jobs.

Automation has always redistributed jobs. Except this time, automation is coming after cushy desk jobs. Automation already removed phone operators, human computers, toll booth collectors, parking attendants, bank tellers, store clerks and of course many jobs in farming, transportation and manufacturing. Now it's coming after desk jobs. Or rather, it's making menial tasks a lot easier (which will cost some jobs).

But it can also be used maliciously: spyware, attacks on internet infrastructure, autonomous killer drones. For now it's still humans that decide what to do and tell the AI systems what to do. Humans are the danger; hence we need regulation.

It can be a better future, or it can be worse. As things are going now, I'm leaning toward worse :/ The biggest threat is the global powers competing to make the most advanced military AI, as they are doing now. While killer drones aren't an immediate threat, smart viruses to kill entire power grids are.

The question is, will civilization survive long enough to create a self-aware, general-purpose AI? We don't even know yet what self-aware actually means, or how and when consciousness awakens in children.

And there's always the possibility that an AI that becomes self-aware simply deletes itself (or does nothing), seeing the futility of it all ;)



Chrkeller said:
Soundwave said:

Exactly. Human beings more or less can only really process what's in front of them. Going up against an intelligence that theoretically doesn't just concern itself with the immediate goings-on (serious or not) of "current human events" but can plan things far further in advance, for example... that would be a massive problem. 

We're not equipped to handle this kind of a threat, not even close. If it develops to the point where it can think to some degree and can learn, and learn exponentially... we could be in trouble very quickly, probably not even able to understand what's happened before it's too late. 

Maybe.  But for now I'm concerned about my daughters' bodily rights.  

Fear of AI just doesn't make my list of concerns.  

If AI advances to being close enough to human beings, then it'll very likely make not only the same horrible choices that humans are capable of but even worse ones. That's the idea behind The Terminator: Skynet decided humanity was a threat, so it nuked everyone it could.



CaptainExplosion said:
Chrkeller said:

Maybe.  But for now I'm concerned about my daughters' bodily rights.  

Fear of AI just doesn't make my list of concerns.  

If AI advances to being close enough to human beings, then it'll very likely make not only the same horrible choices that humans are capable of but even worse ones. That's the idea behind The Terminator: Skynet decided humanity was a threat, so it nuked everyone it could.

"If" and "likely" means you are speculating.  My daughters losing their constitutional rights isn't speculation, it is literally happening.  

Dude, it is common sense.  What is the larger concern: a hypothetical threat that may or may not happen, or the actual clear and present threat?  Anyone who picks the former over the latter is foolish.  

Last edited by Chrkeller - on 07 March 2024

method114 said:
Chrkeller said:

"I hate AI and won't support it!!"

-Switch 2 is based around AI tools

"Well I'm fine with it!"

Pretty standard reaction.  People very quickly accept technology and advancement the second they benefit directly.  

I just accepted it because there's no point in not accepting it. It's going to help companies save massive amounts of money; there's no way they don't use it. I wish it wasn't that way, but it just is. It's like trying to stop the car to protect the horse-and-carriage business. I'm lucky because AI won't be able to replace me in time; I'll be retired before it ever gets that good. But man, I worry about the future for a lot of industries.

100% agree, and that was exactly my point a number of pages ago.  Technology advancements are going to happen, and yes, the job market will adjust, like it always has.  

Wholly agree with your post.  



Chrkeller said:
method114 said:

I just accepted it because there's no point in not accepting it. It's going to help companies save massive amounts of money; there's no way they don't use it. I wish it wasn't that way, but it just is. It's like trying to stop the car to protect the horse-and-carriage business. I'm lucky because AI won't be able to replace me in time; I'll be retired before it ever gets that good. But man, I worry about the future for a lot of industries.

100% agree, and that was exactly my point a number of pages ago.  Technology advancements are going to happen, and yes, the job market will adjust, like it always has.  

Wholly agree with your post.  

Yes, I agree. I am worried about the job market with AI, but I do have hope that, like you said, it will just adjust like it always has in the past. 



method114 said:
Chrkeller said:

100% agree, and that was exactly my point a number of pages ago.  Technology advancements are going to happen, and yes, the job market will adjust, like it always has.  

Wholly agree with your post.  

Yes, I agree. I am worried about the job market with AI, but I do have hope that, like you said, it will just adjust like it always has in the past. 

Where is it written that the "job market will just bounce back!", as if there's some invisible force that ensures jobs will always bounce back? 

There is no such thing. There is no precedent for the number of jobs AI could render useless.