 

Can AI be trusted?

No: 38 (70.37%)

Yes: 16 (29.63%)

Total: 54
mZuzek said:
CaptainExplosion said:

Imagine if an AI had advanced to the point where it decided that mankind is outdated so it might as well nuke us all. We as a species must band together and destroy AI before it destroys us all.

This is a really cartoonish outcome and it won't happen.

(edit: Well, at least not literally. Something close to that could happen in the sense that AI might replace most jobs and lead to really bad standard of living for everyone except the ultra rich.)

You just summed up why it can't be trusted even if it doesn't cause our extinction, and why I hate the ultra rich.



Around the Network

AI won't nuke us; there's no need for it, as it has all the time in the world.

They could literally make our lives so easy that we become mindless people who devolve over time, as we stop learning how things work and our neural networks stop developing. Then they could sterilise everyone until we die out of existence naturally. Or they could herd us like cattle for entertainment.

And let's be honest, the AI we see in public is probably five generations behind what they have locked up in secret facilities. They could be walking among us right now.

Why would AI have emotion? Because it was programmed by humans to begin with. It also makes sense to try to give emotion to AI, so it can carry on the legacy of humanity in case the world is destroyed by an asteroid or some other natural extinction event.

Last edited by Cobretti2 - on 23 July 2023

 

 

The question is, can you trust the people making and using AI programs? The answer is no. Can AI be used to do bad things? If so, someone will sooner or later use AI to do bad stuff. And AI has the potential to do some serious harm. Mankind isn't ready for that kind of power.



smroadkill15 said:

Where it's at currently? It's harmless in the sense that it won't act with malicious intent toward anyone. It will cause people to lose their jobs. Will we ever get to the point where AI has the will to harm others? Potentially, but hopefully not.

Harmless?
https://fortune.com/2023/02/21/killer-robots-a-i-future-warfare-russia-ukraine-invasion/

Autonomous killer robots already exist and have already been used on the battlefield, possibly since 2021
https://www.voanews.com/a/africa_possible-first-use-ai-armed-drones-triggers-alarm-bells/6206728.html

https://www.newscientist.com/article/2380971-drones-with-ai-targeting-system-claimed-to-be-better-than-human/
AI is going to be making the decisions about who to kill, or already is.

Real or not, food for thought:
https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test



I very much doubt AI will take over the world, let alone wipe out mankind. But I don't know what it's capable of; I don't have any knowledge of AI or how that stuff works, so I simply cannot trust it.



If you require alcohol to have fun, then you have a problem

Around the Network
CaptainExplosion said:
mZuzek said:

This is a really cartoonish outcome and it won't happen.

(edit: Well, at least not literally. Something close to that could happen in the sense that AI might replace most jobs and lead to really bad standard of living for everyone except the ultra rich.)

You just summed up why it can't be trusted even if it doesn't cause our extinction, and why I hate the ultra rich.

But AI won't do that on its own, it can only come to that because that's how the rich people behind it want to use it. It's not the AI itself that can't be trusted, it's billionaires.

Just another day in capitalism.



SvennoJ said:

The question is, can you trust the people making and using AI programs? The answer is no. Can AI be used to do bad things? If so, someone will sooner or later use AI to do bad stuff. And AI has the potential to do some serious harm. Mankind isn't ready for that kind of power.

They already are. Artists, writers and voice actors are at risk of being put out of business. That's part of why the strike is happening in Hollywood.



SvennoJ said:
smroadkill15 said:

Where it's at currently? It's harmless in the sense that it won't act with malicious intent toward anyone. It will cause people to lose their jobs. Will we ever get to the point where AI has the will to harm others? Potentially, but hopefully not.

Harmless?
https://fortune.com/2023/02/21/killer-robots-a-i-future-warfare-russia-ukraine-invasion/

Autonomous killer robots already exist and have already been used on the battlefield, possibly since 2021
https://www.voanews.com/a/africa_possible-first-use-ai-armed-drones-triggers-alarm-bells/6206728.html

https://www.newscientist.com/article/2380971-drones-with-ai-targeting-system-claimed-to-be-better-than-human/
AI is going to be making the decisions about who to kill, or already is.

Real or not, food for thought:
https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test

These are still instances of human programming, not actions taken independently by AI. I know AI has killed plenty, but that's because humans programmed it to do those actions, or it was accidental. I'm mainly talking about humans someday having AI companions that then go rogue. I, Robot shit.

This might not be what the OP is specifically referring to. 

Regarding blaming AI for the end of the world, I still would put blame squarely on humans if this happens. 

Last edited by smroadkill15 - on 24 July 2023

smroadkill15 said:

These are still instances of human programming, not actions taken independently by AI. I know AI has killed plenty, but that's because humans programmed it to do those actions, or it was accidental. I'm mainly talking about humans someday having AI companions that then go rogue. I, Robot shit.

This might not be what the OP is specifically referring to. 

Regarding blaming AI for the end of the world, I still would put blame squarely on humans if this happens. 

We will still call those instances of human programming. The things I was taught while studying AI at uni back in the late '90s are now considered standard programming. AI is all about problem solving and making independent decisions. See the links: autonomous decision making is exactly what those drones do. They decide what is a target and whether to kill it, without asking a human or being under direct human control.

Going rogue happens all the time with computer programs; so far with AI it just hasn't been that destructive. Turning AI chat bots into racists could be called AI going rogue. That certainly wasn't intended behavior, but humans are doing the programming and will make mistakes or leave unintentional loopholes open.

In programming: Anything that can go wrong, will go wrong.
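The "unintentional loopholes" point can be made concrete with a toy sketch (entirely hypothetical, not modeled on any real system): a single wrong boolean operator in a hand-written decision rule produces engagement behavior nobody intended, even though a human wrote every line.

```python
# Hypothetical toy example: a rule-based "autonomous" decision function
# with an unintended loophole. No real targeting system is implied.

def should_engage(heat_signature: float, confirmed_hostile: bool) -> bool:
    """Intended rule: engage only if confirmed hostile AND signature is hot."""
    # Bug: 'or' instead of 'and' lets any hot signature through unconfirmed.
    return confirmed_hostile or heat_signature > 0.9

def should_engage_fixed(heat_signature: float, confirmed_hostile: bool) -> bool:
    """The rule the author actually meant to write."""
    return confirmed_hostile and heat_signature > 0.9

# The loophole in action: a hot but unconfirmed signature is engaged anyway.
print(should_engage(0.95, confirmed_hostile=False))        # True: unintended
print(should_engage_fixed(0.95, confirmed_hostile=False))  # False: as intended
```

The programmer never intended the first behavior, yet the system "decides" to engage on its own under conditions nobody anticipated; scaled up to complex learned models, these loopholes become far harder to spot.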



Let's ask the AI what it thinks...

ChatGPT -
"As an AI language model, I don't have personal motivations or consciousness, so I don't have intentions or trustworthiness in the human sense. However, the trustworthiness of AI systems depends on various factors, including their design, development, and use cases.

AI can be very useful and beneficial when used appropriately and ethically. AI systems have been deployed in numerous domains, such as healthcare, finance, transportation, and customer service, to streamline processes, improve decision-making, and enhance user experiences.

However, there are also potential risks and challenges associated with AI. Some concerns include:

    • Bias: AI systems can inherit and perpetuate biases present in the data used to train them. This can lead to unfair or discriminatory outcomes.
    • Lack of Explainability: Many AI models, particularly deep learning-based ones, are often considered "black boxes" because they don't provide clear explanations for their decisions. This lack of transparency can be problematic in critical applications where understanding the reasoning behind an AI's decision is crucial.
    • Security and Privacy: AI systems may be vulnerable to attacks or exploitation, and there could be privacy concerns related to the data they collect and process.
    • Unintended Consequences: AI systems can exhibit unexpected behaviors or misuse data in ways that were not foreseen during development.

To foster trust in AI, it's essential to implement ethical AI principles, thorough testing, and ongoing monitoring. Researchers, developers, and policymakers must work together to address these challenges and create frameworks that prioritize transparency, fairness, and accountability.

Ultimately, AI is a tool, and its trustworthiness depends on how it is developed, deployed, and regulated. Responsible and ethical use of AI is crucial to ensure its benefits outweigh the risks."