foxmccloud64 said:

The thing is, as another user said before, the current "AI" available to the public is little more than a marketing name. What we actually have are algorithms based on mathematical theories of selection and generation, so we can treat them as both probabilistic and deterministic phenomena. They are not self-thinking machines improving themselves beyond the scope of what humans have done. In fact, that problem was discussed by Ada Lovelace and others more than 100 years ago. Turing said something decades later akin to: with enough power and storage, we could reach a point where calculating machines could do things indistinguishable from human thinking. Lovelace and earlier computing pioneers thought that for a machine to "think", we would first have to tackle the problem of what thinking is, what being conscious and cognizant means, and how we could imitate that. The latter is what we, thanks in part to science fiction, have always understood as true artificial intelligence, and that is something we really haven't created. The former approach, a machine more or less brute-forcing a process through sheer capacity and giving the impression of "thinking", is closer to what we have today. It is not as if we can currently talk about sapient, benevolent or malevolent machines.

For example, everyone went wild over ChatGPT and how it seems to "know everything" and, by extension, be able to "solve everything". But even as advanced as it is compared to previous attempts at natural language models, people have already started to notice not only that it can be way off the mark on some topics, but also that it tends to repeat the same mistakes no matter how supposedly "intelligent" it is. I have relatives working in schools at different levels, and most of them can already tell when a student has used that thing: they know how to formulate homework and projects in a way ChatGPT can't answer properly or simply doesn't know, so it's very evident when it starts to wander off the subject and just summarizes things in the same repetitive structure it always does.

I have also been using Stable Diffusion and several other "AI image generator" implementations, and from what I have observed, these things need to be manually fine-tuned most of the time if you want a more specific result and quality. If the model doesn't contain anything about a particular character, object, place or whatever else, then even a kilometric prompt won't get you what you want, because the thing doesn't know what you are referring to and can't just invent it by itself. Beyond the well-known problem of ugly, deformed and badly drawn body parts (especially hands), it often doesn't know how to draw certain poses, angles, actions, etc.

The thing also generates images in a semi-random way based on the tuning of the algorithm's parameters: if you have the prompt and the random seed that were used to generate an image, you can replicate the result EXACTLY. On top of that, some researchers found that the AI sometimes spits out almost the exact image that was used to "train" it, which backs up the argument of artists, photographers and others that the thing is just a giant collage maker, a copy-paste plagiarizing tool that creates derivatives from work the people behind these tools never even asked permission to use (Microsoft did the same thing with people's code on GitHub). I even came across an image in the gallery of one of those sites where the thing was basically replicating the watermarks of photographs that newspapers PAY to use and that aren't even supposed to be shared without the proper license. Same for the drawings, which they took from image aggregator sites like Danbooru, and without which the supposed AI wouldn't even be able to generate a stick figure.

So as far as I can tell, AI is nowhere near self-thinking, let alone ready to take over tasks that require a lot of human input and fine-tuning, and even less so in countries where we don't even have proper modernization and computerization of the most basic tasks that could be automated with computers.
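The seed point in the quoted post is real: diffusion tools start from pseudo-random noise, so fixing the seed (along with the model, prompt and sampler settings) makes the output reproducible. Here is a minimal Python sketch of that idea, using a plain PRNG as a stand-in for an actual sampler; all names are illustrative, not Stable Diffusion's API:

```python
import random

def toy_sample(prompt, seed, size=8):
    """Stand-in for a diffusion sampler: the output is drawn from a
    PRNG seeded with the prompt and seed. Real tools behave the same
    way at this level: same model + prompt + seed + sampler settings
    give the same image."""
    rng = random.Random(f"{seed}:{prompt}")   # deterministic, not truly random
    return [rng.random() for _ in range(size)]

a = toy_sample("a castle at sunset", seed=1234)
b = toy_sample("a castle at sunset", seed=1234)
assert a == b            # identical inputs reproduce the result exactly

c = toy_sample("a castle at sunset", seed=9999)
assert a != c            # changing only the seed changes the result
```

"Random" here just means the seed is usually picked for you; nothing about the process is unpredictable once the inputs are fixed.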

Humans can be very dogmatic as well, generate the same response to the same stimuli, come up with the same things over and over. But the difference in human thinking is that we can re-evaluate and learn new ways to respond.

Human change in thinking works by 'editing' prior experience. Human memory is not static; it changes constantly, either by degradation over time or simply by being accessed again. Every time you remember something, you slightly alter the memory. That's the fundamental difference between human thinking and AI at the moment: AI draws from a vast static pool of examples that remains the same. When we make a new picture, all the examples and ideas we access for it in our mind also change with the creation of that picture. New associative links get established, details slightly change, other links and details might be forgotten.
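The "remembering rewrites the memory" idea (reconsolidation) can be caricatured in a few lines. This is a toy illustration of the contrast with a static corpus, not a model of any real system, and every name in it is made up:

```python
import random

class ReconsolidatingMemory:
    """Toy caricature of reconsolidation: every recall returns the
    current trace, then nudges it with a little noise, so the memory
    drifts with use. A static training corpus, by contrast, returns
    the same bytes on every access."""

    def __init__(self, value, drift=0.05, seed=0):
        self.value = value          # the stored "memory trace"
        self.drift = drift          # how much one recall can alter it
        self._rng = random.Random(seed)

    def recall(self):
        recalled = self.value
        # The act of remembering rewrites the trace slightly.
        self.value += self._rng.uniform(-self.drift, self.drift)
        return recalled

mem = ReconsolidatingMemory(10.0)
first = mem.recall()                # returns 10.0, then perturbs the trace
for _ in range(100):
    mem.recall()
# After repeated recalls the trace no longer matches the original.
assert mem.recall() != first
```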

And that goes on all the time; your mind doesn't stop between queries. Day and night, memories get reorganized and links get strengthened or weakened. Your belief system, built on those memories, constantly gets refined. A big part of human thinking is that we're all part of a consensus network: humans share ideas and information all the time, and every interaction slightly changes each participant. That's why echo chambers are so dangerous: they easily create self-reinforcing loops in thinking that lead to radicalization. And why 1984 so pointedly states that to control people is to control information. That's true for AI as well.
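The echo-chamber point maps neatly onto a classic toy model of consensus networks: a DeGroot-style update, where each agent's new opinion is a weighted average of the agents it listens to. A hypothetical three-agent sketch shows the self-reinforcing loop, with two agents who only weight each other:

```python
def consensus_step(opinions, weights):
    """One DeGroot-style round: each agent's new opinion is a weighted
    average of the opinions of everyone it listens to."""
    n = len(opinions)
    return [sum(weights[i][j] * opinions[j] for j in range(n))
            for i in range(n)]

# Rows are listeners and each row sums to 1. Agents 0 and 1 form an
# echo chamber: they weight only each other. Agent 2 still hears them.
weights = [
    [0.5, 0.5, 0.0],   # agent 0 ignores agent 2 entirely
    [0.5, 0.5, 0.0],   # agent 1 ignores agent 2 entirely
    [0.2, 0.2, 0.6],   # agent 2 listens to the chamber a little
]
opinions = [1.0, 0.8, -1.0]   # agent 2 starts far from the pair

for _ in range(50):
    opinions = consensus_step(opinions, weights)

# The insulated pair locks in at 0.9 after one round and never moves;
# agent 2 gets dragged to ~0.9 as well. No information flows back in.
```

The loop only runs one way: whoever cuts off incoming weights ends up setting the consensus, which is the "control information, control people" point in miniature.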

Humans go one step further than where AI is right now. Classification and association work fine with AI, yet goal-oriented problem solving is a big hurdle. (Not just for AI, lol.) It takes humans many, many years to learn that process. But is it really different from trying to find a solution based on prior examples? Association: the apple falls from the tree, we ask why, and the theory of gravity is born.

New algorithm successfully re-discovered fundamental equations in physics, chemistry
https://www.cbc.ca/radio/quirks/artificial-intelligence-ai-scientist-1.6811085
"What our work is really thinking about is, could we use AI to discover new theories, not just applying our old theories in new contexts?"

That's the real question: how do humans come up with new theories? Can an AI program help find the Grand Unified Theory?

What is a new idea, anyway?

Humans are easily manipulated and cannot be trusted.
AI is even more easily manipulated and can be trusted even less.

Yet as another cog in the consensus network, AI can be very useful as stated in that article.


I always find the Mayday episodes where a plane crash happens because humans and machine end up working against each other the most interesting. Sometimes it's the pilots misunderstanding how the automation works; sometimes it's a glitch or fault in the system. Both are fallible: one is better at problem solving, the other doesn't suffer from fatigue or a limited attention span. The challenge is getting both to understand each other: the automation needs to realize what the pilots are thinking and trying to do, and the pilots need to understand what the automation is trying to do and why.

Maybe that's a good way to look at it. Can self-flying planes be trusted, or do we keep pilots on board as a second opinion?