SvennoJ said:

The immediate danger is that we're training AI based on our (historic) actions. That has already proven problematic with systemic bias creeping into neural network algorithms. Right at a time when the West-East divide is only growing bigger, are we going to have opposing AIs as well? Ending up with a biased AI by training it on biased examples is pretty much guaranteed.

That's all we do so far with AI: feed it millions of examples and scenarios, training it to derive the 'proper' response from all that data. Nothing like the Hollywood versions or Isaac Asimov's laws of robotics. AI algorithms are already used to kill anyway, from autonomous combat drones to mass-scale target selection.

So will we end up with benevolent AI in the mold of Gandhi, Nelson Mandela, Martin Luther King, César Chávez, Volodymyr Zelenskyy, or with AI thinking like Trump, Putin, Netanyahu, Pol Pot, Hitler...

"God created man in his image" is what the Bible says. Maybe that's the biggest warning against creating AI...

As for a more pragmatic answer: AI can't be trusted, since we don't trust what we don't understand, and AI is basically a black box with 'unpredictable' outcomes. The question should also be: can we trust the people creating and feeding these AI systems? It would be easy to say, set the goal for AI to optimize life for all humans. Yet we don't even trust the 97% of scientists who agree on climate change, and we certainly don't want to listen to them for solutions. Why would humans listen to an AI?

A self-thinking AI will not be trusted, but AI can be useful as a tool for many things: optimizing trade flows and traffic flows, directing air traffic. Until something goes wrong, that is, and then who is going to find out what caused the glitch? So far glitches have been pretty harmless, yet the more responsibility we give to AI algorithms, the more harmful any potential glitch becomes.

It's also a question of systems. If human drivers crash, we carry on: human error. If self-driving cars crash, do we ground all cars until the glitch has been found? Or do we accept that AI will be fallible as well, hoping it will simply do better than human judgement? The advantage of AI is that it doesn't die; unlike humans, you can keep teaching it. But that can also be a negative, making it harder to 'fix' AI problems.

The thing is, as another user said before, the current "AI" available to the public is little more than a marketing name. What we actually have are algorithms grounded in mathematical theories of selection and generation, so we can treat them as both probabilistic and deterministic phenomena. They are not self-thinking machines improving themselves beyond the scope of what humans have done. In fact, that problem was discussed by Ada Lovelace and others more than 100 years ago. Decades later, Turing said something to the effect that, with enough power and storage, we could reach a point where calculating machines do things indistinguishable from human thinking. Lovelace and earlier computing pioneers, on the other hand, thought that before a machine could "think" we would first have to tackle what thinking even is: what it means to be conscious and cognizant, and how we would imitate that. The latter is closer to what we (thanks partly to science fiction) have always understood as true artificial intelligence, and that we really haven't created. The former, a machine that more or less brute-forces a process through sheer capacity and gives the impression of "thinking", is more what we have today. Either way, we can't currently talk about sapient, benevolent, or malevolent machines.

For example, everyone went wild over ChatGPT and how it seems to "know everything" and, by extension, to be able to "solve everything". But even as advanced as it is compared with previous natural-language models, people have already started to notice not only that it can be very off the mark on some topics, but also that it tends to repeat the same mistakes no matter how supposedly "intelligent" it is. I have relatives working in schools at different levels, and most of them can already tell when a student has used that thing: they know how to formulate homework and projects in a way that ChatGPT can't answer properly, or simply doesn't know the answer to, so it's very evident when it starts to wander off the subject and just summarizes things in the same repetitive structure it always uses.

I have also been using Stable Diffusion and several implementations of those "AI image generators", and from what I have observed, these things need to be manually fine-tuned most of the time if you want a more specific result and quality. For example, if the model doesn't contain anything about a specific character, object, place, or other term, then even a kilometre-long prompt won't get you what you want, because the thing doesn't know what you are referring to and can't just invent it by itself. Beyond the well-known problem of ugly, deformed, badly drawn body parts (especially hands), it also doesn't know how to draw certain poses, angles, actions, etc.

The image generation itself is only semi-random, driven by the tuning of the algorithm's parameters: if you have the prompt and the random seed that were used to generate an image, you can replicate the result EXACTLY. Not only that, but some researchers found that the AI sometimes spits out almost the exact image that was used to "train" it, which backs up the argument of artists, photographers, and others that the thing is just a giant collage maker, a copy-paste plagiarizing tool that creates derivatives from work the people behind these tools never even asked permission to use (Microsoft did the same with people's code on GitHub). I even came across an image in the gallery of one of those sites that was basically replicating the watermarks of photographs that newspapers PAY to use, and that aren't supposed to be shared without the proper license. The same goes for the drawings they took from image aggregator sites like Danbooru, without which the supposed AI wouldn't even be able to generate a stick figure.
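The seed point above can be sketched in a few lines. This is not Stable Diffusion itself (a real pipeline such as Hugging Face diffusers does the same thing by accepting a seeded generator for the initial noise, and exact reproduction there also assumes the same model weights, scheduler, and hardware stack); `toy_generate` and `generate_latents` are invented stand-ins, just to show why fixing the seed makes the whole "semi-random" process repeatable:

```python
import random
import zlib

def generate_latents(seed: int, size: int = 8) -> list[float]:
    # Toy stand-in for the initial noise a diffusion model denoises.
    # A seeded PRNG makes this "random" starting point identical on
    # every run that uses the same seed.
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(size)]

def toy_generate(prompt: str, seed: int) -> list[float]:
    # Hypothetical sketch of (prompt, seed) -> "image": everything
    # after the seeded noise is a deterministic function of the
    # prompt, so identical inputs give identical outputs.
    latents = generate_latents(seed)
    shift = (zlib.crc32(prompt.encode("utf-8")) % 1000) / 1000.0
    return [round(x + shift, 6) for x in latents]

# Same prompt + same seed -> exactly the same result;
# change the seed and the output differs.
a = toy_generate("a red fox, watercolor", seed=1234)
b = toy_generate("a red fox, watercolor", seed=1234)
c = toy_generate("a red fox, watercolor", seed=9999)
assert a == b
assert a != c
```

The only source of randomness is the seeded PRNG, which is why sites that share generated images alongside their prompt and seed let anyone with the same model reproduce them.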

So as far as I can tell, AI is nowhere near self-thinking, let alone ready to take over tasks that require a lot of human input and fine-tuning, and even less so in countries that don't even have proper modernization and computerization of the most basic tasks that could already be somewhat automated with ordinary computers.

Last edited by foxmccloud64 - on 03 February 2024