
Can AI be trusted?

No: 34 (68.00%)
Yes: 16 (32.00%)
Total: 50
Mnementh said:
Jumpin said:

The AI we have today is an algorithmic tool used by humans. The term “AI” is a marketing term - it’s a different sort of technology than Skynet, Star Trek, or Asimov’s positronic brain. And there is no technology that resembles Ultron.

As I’ll note, Asimov’s AI was incredibly helpful to humanity. Asimov also considered the matter much more deeply than James Cameron and others did - Cameron was looking for a villain for his assassin story. Cameron has also described the danger of AI as lying in how humans use it, much like how they use other technologies.

Anyway, in the robot series, I, Robot begins near the dawn of the positronic brain. It goes on to cover the products, mainly robots, that use the positronic brain to better the lives of humanity - ending loneliness, making exploration and industry function unlike anything before - all told through stories linked to the robopsychologist Susan Calvin. The book follows the decades of her life during the 21st century, ending with the AI developing the FTL drive that allows humanity to travel to other planets.

Fast forward 3,000 years to the Spacer Trilogy/Robot Trilogy, and we have 50 utopian planets where people are virtually immortal thanks to the technology developed. There’s a planet called Solaria where every human owns their own barony with tens of thousands of robots employed. Crime is virtually non-existent in the Spacer worlds… the stories that follow involve extremely rare crimes.

Ten thousand years later, humanity rules the galaxy.

So, not all of science fiction agrees on AI = evil. Including the guy who virtually put AI on the map.

I also recommend the Culture novels by Iain M. Banks. They describe the Culture - a galaxy-spanning civilization in which living intelligent beings and AIs have the same civil rights and live together. Very cool novels.

https://en.wikipedia.org/wiki/The_Culture

Late reply, but thanks for the recommendation! I’ll be looking into it.



I describe myself as a little dose of toxic masculinity.


The immediate danger is that we're training AI based on our (historic) actions. That has already proven problematic with systemic bias creeping into neural network algorithms. Right at a time when the West-East divide is only growing bigger, are we going to have opposing AIs as well? Ending up with a biased AI by training it on biased examples is pretty much guaranteed.
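
To make that concrete, here is a toy sketch (hypothetical data and a plain scikit-learn model; nothing here comes from any real system) of how a model trained on biased historical decisions simply learns to reproduce the bias:

```python
# Toy sketch: a model trained on biased historical decisions reproduces the bias.
# The data, features and penalty size are all made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000
skill = rng.normal(0, 1, n)      # the thing we would like decisions to depend on
group = rng.integers(0, 2, n)    # a protected attribute (0 or 1)

# "Historical" decisions: skill matters, but group 1 was systematically penalized.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two equally skilled candidates, different group -> different predicted outcomes.
print(model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1])  # group 1 scores lower
```

Nothing malicious is going on in there; the model just faithfully learns the historical pattern back, bias included.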

That's all we do with AI so far: feed it millions of examples and scenarios and train it to derive the 'proper' response from all that data. Nothing like the Hollywood versions or Isaac Asimov's laws of robotics. AI algorithms are already used to kill anyway, from autonomous combat drones to mass-scale target selection.

So will we end up with benevolent AI along the lines of Gandhi, Nelson Mandela, Martin Luther King, César Chávez and Volodymyr Zelenskyy, or with AI thinking like Trump, Putin, Netanyahu, Pol Pot, Hitler...

God created man in his image, is what the Bible says. Maybe the biggest warning against creating AI...

As for a more pragmatic answer: AI can't be trusted because we don't trust things when we don't understand how they work, and AI is basically a black box with 'unpredictable' outcomes. The question should also be: can we trust the people creating and feeding these AI systems? It would be easy to say, set the goal for AI to optimize life for all humans. Yet we don't even trust the 97% of scientists on climate change, and we certainly don't want to listen to them for solutions. Why would humans listen to an AI?

A self-thinking AI will not be trusted, but AI can be useful as a tool for many things: optimizing trade flows and traffic flows, directing air traffic... until something goes wrong, and then who is going to find out what caused the glitch? So far glitches have been pretty harmless, yet the more responsibility we give to AI algorithms, the more harmful any potential glitch becomes.

It's also a question of systems. If human drivers crash, we carry on: human error. If self-driving cars crash, do we ground all cars until the glitch has been found? Or do we accept that AI will be fallible as well, hoping it will do better than human judgement? The advantage of AI is that it doesn't die; you can keep teaching an AI, unlike humans. But that can also be a negative, making it harder to 'fix' AI problems.



I believe AI will lead us to the stars. But we have to be careful along the way.



BiON!@ 



I describe myself as a little dose of toxic masculinity.

Can the people who make AI be trusted?
No.

So, no.



 

My youtube gaming page.

http://www.youtube.com/user/klaudkil

SvennoJ said:

The immediate danger is that we're training AI based on our (historic) actions. That has already proven problematic with systemic bias creeping into neural network algorithms. Right at a time when the West-East divide is only growing bigger, are we going to have opposing AIs as well? Ending up with a biased AI by training it on biased examples is pretty much guaranteed.

That's all we do with AI so far: feed it millions of examples and scenarios and train it to derive the 'proper' response from all that data. Nothing like the Hollywood versions or Isaac Asimov's laws of robotics. AI algorithms are already used to kill anyway, from autonomous combat drones to mass-scale target selection.

So will we end up with benevolent AI along the lines of Gandhi, Nelson Mandela, Martin Luther King, César Chávez and Volodymyr Zelenskyy, or with AI thinking like Trump, Putin, Netanyahu, Pol Pot, Hitler...

God created man in his image, is what the Bible says. Maybe the biggest warning against creating AI...

As for a more pragmatic answer: AI can't be trusted because we don't trust things when we don't understand how they work, and AI is basically a black box with 'unpredictable' outcomes. The question should also be: can we trust the people creating and feeding these AI systems? It would be easy to say, set the goal for AI to optimize life for all humans. Yet we don't even trust the 97% of scientists on climate change, and we certainly don't want to listen to them for solutions. Why would humans listen to an AI?

A self-thinking AI will not be trusted, but AI can be useful as a tool for many things: optimizing trade flows and traffic flows, directing air traffic... until something goes wrong, and then who is going to find out what caused the glitch? So far glitches have been pretty harmless, yet the more responsibility we give to AI algorithms, the more harmful any potential glitch becomes.

It's also a question of systems. If human drivers crash, we carry on: human error. If self-driving cars crash, do we ground all cars until the glitch has been found? Or do we accept that AI will be fallible as well, hoping it will do better than human judgement? The advantage of AI is that it doesn't die; you can keep teaching an AI, unlike humans. But that can also be a negative, making it harder to 'fix' AI problems.

The thing is that, as another user said before, the current "AI" available to the public is little more than a name for marketing and publicity. What we currently have are algorithms based on mathematical theories applied to selection and generation, so we can treat them as probabilistic and deterministic phenomena; they are not self-thinking machines improving themselves beyond the scope of what humans have done. In fact, that problem was discussed by Ada Lovelace and others more than 100 years ago. Turing said, decades later, something akin to: with enough power and storage we could reach a point where calculating machines could do things indistinguishable from human thinking. Lovelace and earlier computing pioneers thought that for a machine to "think", we would first have to tackle the problem of what thinking is - what it means to be conscious and cognizant - and how we would imitate that. The latter is what we, thanks also to science fiction, have always understood as true artificial intelligence, and that we haven't really created yet. The first approach, a machine able to more or less brute-force a process thanks to sheer capacity and give the impression of "thinking", is closer to what we have today. But it is not that we can currently talk about sapient, benevolent or malevolent machines.

For example, everyone went wild over ChatGPT and how it seems to "know everything" and, by extension, to be able to "solve everything". But even as advanced as it is compared to previous attempts at natural language and other language models, people have already started to notice not only that it can be very off the mark on some topics, but also that it tends to repeat the same mistakes no matter how supposedly "intelligent" it is. I have relatives working in schools at different levels, and most of them can already tell when a student has used it: they know how to formulate homework and projects in a way ChatGPT can't answer properly or simply doesn't know the answer to, so it's very evident when the thing starts to wander off the subject and just summarizes things in the same repetitive structure it always does.

I have also been using Stable Diffusion and several implementations of those "AI image generators", and from what I have observed, these things need to be manually fine-tuned most of the time if you want a more specific result and quality. If the model doesn't contain anything about a specific character, object, place or other concept, even a kilometric prompt won't get you what you want, because the thing doesn't know what you are referring to and can't just invent it by itself. Beyond the well-known problem of ugly, deformed, badly drawn body parts (especially hands), it also doesn't know how to draw certain poses, angles, actions, etc. The generation itself is only pseudo-random, driven by the algorithm's parameters: if you have the prompt and the random seed that were used to generate an image, you can replicate the result exactly. On top of that, some researchers found that the AI sometimes spits out an almost exact copy of an image it was trained on, which backs up the argument of artists, photographers and others that the thing is just a giant collage maker, a copy-paste plagiarizing tool that creates derivatives from work the people behind these tools never even asked permission to use (Microsoft did the same with code from people on GitHub). I even came across an image in the gallery of one of those sites that was basically reproducing the watermarks of photographs that newspapers pay to use and that aren't supposed to be shared without the proper license - the same goes for the drawings taken from image aggregator sites like Danbooru, without which the supposed AI wouldn't even be able to generate a stick figure.
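
The prompt-plus-seed point is easy to demonstrate. A minimal sketch, assuming the Hugging Face diffusers library and the publicly available runwayml/stable-diffusion-v1-5 checkpoint (the model choice, prompt and settings are just examples):

```python
# Minimal sketch: reproducing a Stable Diffusion image exactly from prompt + seed.
# Assumes the Hugging Face `diffusers` library and a CUDA GPU; settings are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a lighthouse on a cliff at sunset, oil painting"
seed = 1234  # the seed fixes the initial latent noise

# Same prompt + same seed + same sampler settings -> the same image.
image_a = pipe(prompt, num_inference_steps=30,
               generator=torch.Generator("cuda").manual_seed(seed)).images[0]
image_b = pipe(prompt, num_inference_steps=30,
               generator=torch.Generator("cuda").manual_seed(seed)).images[0]

image_a.save("run_a.png")
image_b.save("run_b.png")  # pixel-identical to run_a.png on the same hardware/software stack
```

Change the seed and you get a different image; keep everything the same and the output is deterministic, which is exactly why results can be replicated.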

So as far as I can tell, AI is nowhere near a self-thinking AI, much less something we can leave in charge of tasks that require a lot of input and fine-tuning by people - even less so in countries that don't even have proper modernization and computerization of the most basic tasks that could be somewhat automated with computers.

Last edited by foxmccloud64 - on 03 February 2024

Depends on whether the person feeding it data and using it can be trusted.



For what? For some purposes, probably yes; for some purposes, definitely not. In the future, it will become trustworthy for more purposes. Of course, all this also depends on what you consider AI. I believe some people have said that AI is a term we use for things we don't really understand, and I feel that's a pretty good description. Would a game developer consider NPC behaviour AI? Possibly only barely, because under the hood it's usually a fairly simple algorithm. Neural networks? Now that's something we don't understand nearly as well.
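
For a sense of how simple that kind of "AI" usually is, here is a toy sketch of NPC behaviour as a hand-written state machine - the states, distances and thresholds are all made up, and no real engine is involved:

```python
# Toy sketch: "NPC AI" as a plain finite state machine - hand-written rules,
# no learning and no neural network. All names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class GuardNPC:
    state: str = "patrol"

    def update(self, player_distance: float, health: float) -> str:
        if self.state == "patrol" and player_distance < 10.0:
            self.state = "chase"
        elif self.state == "chase":
            if health < 0.25:
                self.state = "flee"          # run away when badly hurt
            elif player_distance < 2.0:
                self.state = "attack"
        elif self.state == "attack" and player_distance >= 2.0:
            self.state = "chase"             # player backed off, resume chasing
        return self.state

guard = GuardNPC()
print(guard.update(player_distance=8.0, health=1.0))   # -> "chase"
print(guard.update(player_distance=1.5, health=0.9))   # -> "attack"
```

A handful of if-statements is often all the "intelligence" an NPC needs, which is a long way from a neural network whose internals nobody can fully inspect.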



foxmccloud64 said:

The thing is that, as another user said before, the current "AI" available to the public is little more than a name for marketing and publicity. What we currently have are algorithms based on mathematical theories applied to selection and generation, so we can treat them as probabilistic and deterministic phenomena; they are not self-thinking machines improving themselves beyond the scope of what humans have done. In fact, that problem was discussed by Ada Lovelace and others more than 100 years ago. Turing said, decades later, something akin to: with enough power and storage we could reach a point where calculating machines could do things indistinguishable from human thinking. Lovelace and earlier computing pioneers thought that for a machine to "think", we would first have to tackle the problem of what thinking is - what it means to be conscious and cognizant - and how we would imitate that. The latter is what we, thanks also to science fiction, have always understood as true artificial intelligence, and that we haven't really created yet. The first approach, a machine able to more or less brute-force a process thanks to sheer capacity and give the impression of "thinking", is closer to what we have today. But it is not that we can currently talk about sapient, benevolent or malevolent machines.

For example, everyone went wild over ChatGPT and how it seems to "know everything" and, by extension, to be able to "solve everything". But even as advanced as it is compared to previous attempts at natural language and other language models, people have already started to notice not only that it can be very off the mark on some topics, but also that it tends to repeat the same mistakes no matter how supposedly "intelligent" it is. I have relatives working in schools at different levels, and most of them can already tell when a student has used it: they know how to formulate homework and projects in a way ChatGPT can't answer properly or simply doesn't know the answer to, so it's very evident when the thing starts to wander off the subject and just summarizes things in the same repetitive structure it always does.

I have also been using Stable Diffusion and several implementations of those "AI image generators", and from what I have observed, these things need to be manually fine-tuned most of the time if you want a more specific result and quality. If the model doesn't contain anything about a specific character, object, place or other concept, even a kilometric prompt won't get you what you want, because the thing doesn't know what you are referring to and can't just invent it by itself. Beyond the well-known problem of ugly, deformed, badly drawn body parts (especially hands), it also doesn't know how to draw certain poses, angles, actions, etc. The generation itself is only pseudo-random, driven by the algorithm's parameters: if you have the prompt and the random seed that were used to generate an image, you can replicate the result exactly. On top of that, some researchers found that the AI sometimes spits out an almost exact copy of an image it was trained on, which backs up the argument of artists, photographers and others that the thing is just a giant collage maker, a copy-paste plagiarizing tool that creates derivatives from work the people behind these tools never even asked permission to use (Microsoft did the same with code from people on GitHub). I even came across an image in the gallery of one of those sites that was basically reproducing the watermarks of photographs that newspapers pay to use and that aren't supposed to be shared without the proper license - the same goes for the drawings taken from image aggregator sites like Danbooru, without which the supposed AI wouldn't even be able to generate a stick figure.

So as far as I can tell, AI is nowhere near a self-thinking AI, much less something we can leave in charge of tasks that require a lot of input and fine-tuning by people - even less so in countries that don't even have proper modernization and computerization of the most basic tasks that could be somewhat automated with computers.

Humans can be very dogmatic as well, generate the same response to the same stimuli, come up with the same things over and over. But the difference in human thinking is that we can re-evaluate and learn new ways to respond.

Human change in thinking works by 'editing' prior experience. Human memory is not static; it always changes, either by degradation over time or simply by accessing those memories again. Every time you remember something, you slightly alter the memory. That's the fundamental difference between human thinking and AI at the moment: AI pulls from a vast static pool of examples that remain the same. When we make a new picture, all the examples and ideas we access for it in our mind also change with the creation of that new picture. New associative links get established, details slightly change, other links and details might be forgotten.

And that goes on all the time; your mind doesn't stop between queries. Day and night, memories get reorganized, links get strengthened or weakened, and your belief system based on those memories constantly gets refined. A big part of human thinking is that we're all part of a consensus network: humans share ideas and information all the time, and every interaction slightly changes each participant. That's why echo chambers are so dangerous - they so easily create self-reinforcing loops in thinking that lead to radicalization - and why 1984 so pointedly states that to control people is to control information. That's true for AI as well.

Humans go one step further than where AI is right now. Classification and association all work fine with AI, yet goal-oriented problem solving is a big hurdle. (Not just for AI lol.) It takes humans many, many years to learn that process. But is that really different from trying to find a solution based on prior examples? Association: the apple falls from the tree, you ask why, and the theory of gravity is born.

New algorithm successfully re-discovered fundamental equations in physics, chemistry
https://www.cbc.ca/radio/quirks/artificial-intelligence-ai-scientist-1.6811085
"What our work is really thinking about is, could we use AI to discover new theories, not just applying our old theories in new contexts?"

That's the real question: how do humans come up with new theories? Can an AI program help find the Grand Unified Theory?
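
The system in that article does full symbolic regression, searching over whole families of candidate equations. As a much smaller illustration of the idea of pulling a law out of raw data, here is a toy fit on made-up, synthetic measurements that "rediscovers" an inverse-square law without being told the theory:

```python
# Toy illustration: "rediscovering" an inverse-square law from data alone.
# The measurements are synthetic and the constants are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
r = rng.uniform(1.0, 10.0, 200)                  # distances (arbitrary units)
F = 5.0 / r**2 * rng.normal(1.0, 0.02, 200)      # noisy F = k / r^2 measurements

# Fit log F = log k + n * log r; the slope n is the "discovered" exponent.
slope, intercept = np.polyfit(np.log(r), np.log(F), 1)
print(f"recovered exponent: {slope:.2f}")              # ~ -2.00
print(f"recovered constant: {np.exp(intercept):.2f}")  # ~ 5.00
```

Real systems search a far larger space of equation shapes, but the principle is the same: the structure comes out of the data, not out of a theory that was typed in beforehand.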

What is a new idea anyway?

Humans are easily manipulated, so they can't be trusted.
AI is even more easily manipulated, so it can be trusted even less than humans.

Yet as another cog in the consensus network, AI can be very useful, as stated in that article.


I always find the Mayday episodes where a plane crash happens because humans and machine end up working against each other the most interesting. Sometimes it's the pilots misunderstanding how the automation works; sometimes it's a glitch or fault in the system. Both are fallible: one is better at problem solving, the other doesn't suffer from fatigue or a limited attention span. The challenge is getting both to understand each other - the automation realizing what the pilots are thinking and trying to do, and the pilots understanding what the automation is trying to do and why.

Maybe that's a good way to look at it. Can self-flying planes be trusted, or do we keep pilots on board as a second opinion?



I fed my novel into it, and it actually had some thoughtful commentary.