
Forums - General Discussion - The Artificial Intelligence thread.

 

AGI/ASI will be...

A great advancement: 5 (45.45%)
A poor advancement: 3 (27.27%)
Like summoning a demon: 1 (9.09%)
Like summoning God: 0 (0%)
No opinion on what AGI/ASI will be like: 2 (18.18%)

Total: 11

I, for one, welcome our new AI overlords.



LegitHyperbole said:

I find it odd that no one wants to talk about general AI or superintelligence. It's the biggest tech advancement in the history of humanity, one that will change things more than any tech revolution that has already happened. It's on par with written language for how transformative it could be, and exceeds the nuclear bomb in how destructive it could be.

I don't know. I appreciate how you identified written language as the biggest tech advancement (many say the wheel, electricity or the Internet), but currently I don't see how AI will rival written language.

Anyways, when people talk about AI, there are a lot of mix-ups.

AI is simply every technology that is not extremely straightforward in its algorithms. The term has existed since the 1950s and describes a whole bunch of technologies. The one currently catching hype is part of machine learning (meaning the abilities aren't defined in the programming; there is a learning step, and its results are hard to predict), and the technology is based on neural networks that form a transformer (reacting to input with outputs). The transformers we see in the hype are the image-generation models and the large language models (LLMs).
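To make the "reacting to input with outputs" part concrete, here is a toy sketch of scaled dot-product attention, the core operation of a transformer. The sizes and random weights are purely illustrative, not any real model's:

    # A toy sketch of the attention step inside a transformer. Real models
    # stack many such layers with learned (not random) weights.
    import numpy as np

    def attention(Q, K, V):
        # softmax(Q K^T / sqrt(d)) V: each output is a relevance-weighted
        # mix of the value vectors
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V

    x = np.random.randn(4, 8)  # 4 input tokens, 8-dimensional embeddings
    Wq, Wk, Wv = (np.random.randn(8, 8) for _ in range(3))  # stand-in "learned" projections
    out = attention(x @ Wq, x @ Wk, x @ Wv)
    print(out.shape)  # (4, 8): one output vector per input token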

AGI (Artificial General Intelligence) is a term for a theoretical AI that can act and think on the level of humans. The current technology isn't AGI, and it is unclear whether the current tech can reach AGI without further technological breakthroughs. It is doubtful that there will be major improvements over the current (and quite impressive) models. The reason is not model size but, more simply, training data. Current models have pretty much soaked up the whole internet, and there is little human-created data left. Research shows that synthetic data, i.e. the output of other AIs, may not be very helpful for progress. So don't expect AGI in the next few years; at least don't bet your farm on it.

By ASI you probably mean artificial superintelligence? There is a longstanding theory that a human-level AI could improve itself to reach far beyond human levels. I very much doubt that AI will get a lot (maybe a little) beyond human capabilities. The reasoning is pretty simple, actually. To a medieval person, today's humans might seem like superhumans. But we are not so different on a physiological level; we just got way, way more education. This is important: our brain is capable of much, but it depends on what it learns. There is intelligence beyond mere learning, but the amount of education still limits a person's abilities. I see the same for AI. As I said, current AI is probably already held back by the lack of training data. Future AI will not be able to expand *much* on what humans create in the short term. Long term, AIs (plural) may develop their own systems of knowledge and art and develop them further, but that will take some time. So ASI is not happening short- or mid-term.

Maybe it is also good if we don't reach AGI short term. Currently I see something sorely lacking: if AGI is human-level intelligence, do we have a right to restrict it, to make it do our bidding, to control it? If we want to reach AGI, we have to see such AIs as equals and develop a proper ethical framework for that. I don't see us at that point. Just as a reminder: every suppressed group of people won their freedom over time mostly through violence. We probably want to avoid violence, so granting proper rights seems like a good move.

Anyways: current technology is pretty impressive. I know people love to point out the shortcomings, but that is like saying cars aren't worth anything because they can't climb stairs, so legs are better. I am a programmer, and I can make use of it. Not in the sense that I let it program for me (like Copilot); I don't want that, because it makes me worse at programming (depending too much on the tools). Instead I chat with it like I would with a coworker when I run into problems. I use ChatGPT; GPT is like a junior with little experience who is prone to error, but who has read *all* the books and *all* the online documentation. That can be pretty helpful! I just check the provided code rather than use it unchanged. But it helps me learn new stuff pretty fast.
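For illustration, that "coworker chat" workflow might look like the following minimal sketch, assuming the official openai Python client; the model name and the question are illustrative choices, not anything prescribed:

    # A minimal sketch of asking an LLM a programming question, assuming the
    # openai Python client. Model name and prompt are illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    reply = client.chat.completions.create(
        model="gpt-4o",  # hypothetical pick; any chat model would do
        messages=[
            {"role": "system", "content": "You are an experienced programming colleague."},
            {"role": "user", "content": "My Python generator is exhausted after one loop. Why?"},
        ],
    )
    print(reply.choices[0].message.content)  # read the advice; verify any code before using it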



3DS-FC: 4511-1768-7903 (Mii-Name: Mnementh), Nintendo-Network-ID: Mnementh, Switch: SW-7706-3819-9381 (Mnementh)

my greatest games: 2017, 2018, 2019, 2020, 2021, 2022, 2023

10 years greatest game event!

bets: [peak year] [+], [1], [2], [3], [4]

I currently ignore it whenever I can and refuse to use it. Sadly, places like Facebook and Google throw it in your face whether you want it or not. Just like with them listening to your conversations, you have to go through all the hassle of blocking it on your phone or on the sites you use just to stop it.



    The NINTENDO PACT 2015/2016 | Vgchartz Wii U Achievement League! - Sign up now! | My T.E.C.H'aracter

Mnementh said:
*snip*

No one thought LLMs would be able to do what they do, even reason through problems. I don't know what kind of AI we have on our hands, but it's certainly already somewhat intelligent. Maybe intelligence isn't all that complicated after all, and all you need is a large neural network. You mistake what I said above for sentience. Neither AGI nor ASI has to be sentient to do anything I described; in fact, a non-sentient ASI is more dangerous than a sentient AI. There's an allegory people use: if an ASI were in charge of a paperclip factory and given badly worded instructions, it could end up turning all the matter in the world into paperclips through any means necessary. Personally, I'm starting to think sentience isn't all that special either, and we'll see sentience emerge from these models in some fucked up way.
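The paperclip thought experiment can be caricatured in a few lines; every number and resource name below is made up, and the point is only that an objective which counts nothing but paperclips happily consumes everything in reach:

    # A caricature of the paperclip maximizer. The objective counts only
    # paperclips, so the "optimizer" converts every resource it can reach,
    # whatever that resource was actually for.
    resources = {"steel": 100, "food_farms": 50, "hospitals": 20}

    def paperclips_from(units):
        return units * 10  # invented conversion rate

    total_clips = 0
    for name in list(resources):
        total_clips += paperclips_from(resources.pop(name))  # nothing says "stop"

    print(total_clips, resources)  # 1700 {} -- maximal clips, nothing else left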

Look, if you had shown someone the Wolfenstein game running through a diffusion model 12 months ago, they'd have thought it was fake; that's how fast it's moving. A lot is still coming out of these LLMs, and like I said before, no one knows where or when the singularity is. It could be possible with GPT-4o with larger memory and more compute for all we know. It could be a split second away, should one of the models suddenly gain sentience from the primordial soup of information. We just don't know, but it looks like it's never been closer. 10 years, 20... 5. Who knows; certainly time frames that are too tight for the societal shifts that need to happen.



One year, people. One little old year.

https://youtube.com/shorts/u2YALVBJL5s?feature=shared

That's how rapidly progress is advancing.




I approach AI with caution.

I do not like the training method that has been employed. It's practically all stolen data. They took people's art, talent, ideas, etc., without permission or compensation. The resulting product isn't artificial; it's an amalgamation.

Another issue is the feedback loop. AI is already starting to be fed AI output. This feedback loop will result in homogenized, repetitive, and just plain shittier results. Why create new human art if AI can do it? But with no new human art, AI art never changes.
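That feedback loop is easy to simulate under toy assumptions: fit a simple model to some data, generate new data from it, refit, and repeat. In this Gaussian sketch the spread of the data tends to narrow over generations, a bare-bones version of what researchers call model collapse:

    # A toy simulation of AI trained on AI output. Fit a Gaussian "model" to
    # data, train the next generation on that model's samples, repeat.
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(loc=0.0, scale=1.0, size=200)  # generation 0: "human" data

    for generation in range(10):
        mu, sigma = data.mean(), data.std()       # fit the "model"
        data = rng.normal(mu, sigma, size=200)    # next generation trains on model output
        print(f"gen {generation}: sigma = {sigma:.3f}")  # the spread tends to shrink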

Biases. Humans are bad enough, but programmed biases, even unintentional ones, are worse because they'll be ubiquitous. Garbage in, garbage out (GIGO). That's a maxim software developers live by. This is related to the issue above, too.

Privacy, security, ethics, transparency, legal. The issues are numerous and largely ignored for the sake of being first or most marketable. Greed and power with AI tools. Fantastic.



To the privileged, equality feels like oppression. 

Renamed said:

*snip*

I agree with everything you said. But progress, man, it's all in the name of progress; we'll solve actual serious problems with this. I couldn't care less if some artists get screwed along the way (yes, it's fucked up). This could lead to true solutions to the world's problems, even true answers to the existential questions we've asked for millennia. It sucks that the creatives are getting the shaft first, but it had to be someone. I bet you wouldn't be as concerned if it were the garbage man or the postman put out by robotics; you'd pass it off and say the same as me now: ah, it's progress. That's what people have done for hundreds of years, and it got us to this point. I'm all for it. It's starting with the creative work because that's where it excels, so let it be; let it pilfer what it needs, because we're running at a cliff anyway. Let's design a parachute before we hit the cliff's edge, and AI will do that, no doubt.



AI is probably going to be revolutionary, but the question is what kind of a revolution it's going to be. It all depends on how it's used, and that's something that's hard to foresee yet.

I'm not particularly convinced by LLMs, because they're actually really dumb. They can produce some impressive results, but in the end they're unreliable, which severely limits their usefulness. They're neat and useful, but they have their limits. Perhaps they can be augmented to eliminate those weaknesses, but if I had to guess, I'd say that's a huge undertaking.

In the end, my current bet is that AI will not revolutionize anything any time soon, despite some impressive advances in recent years. I think it's going to be gradual advancement across different fields instead of a sudden leap revolutionizing everything at once, so there might not even be a single point of revolution. Instead, at some point in the future you might look 30 years back and see how vastly everything has changed, but it will have happened across those 30 years rather than all at once (not that 30 years isn't quick, relatively speaking, but you know what I mean; and no, 30 years isn't a proper prediction, more like a first guess).



It's a poor advancement. It will lead to less creativity, further advancement of the police state and even faster killing rates.

Currently Israel uses AI for targeting 'terrorists', or rather family members of suspected Hamas members, resulting in the deadliest, most destructive war in modern times while bringing the Middle East to the brink of all-out war. Soon combat drones will be fully automated, further 'absolving' the operator of responsibility.

AI modeled on human behavior / capitalism will never lead to anything good.

One thing AI can help with is a universal translator. Real-time translation will be helpful, as long as Meta, Google, etc. don't meddle with it.
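As a concrete example, open models already do decent machine translation today; a minimal sketch assuming the Hugging Face transformers library, with Helsinki-NLP/opus-mt-en-de (a public English-to-German model) picked purely for illustration:

    # A minimal sketch of machine translation with an open model, assuming
    # the Hugging Face transformers library is installed.
    from transformers import pipeline

    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
    result = translator("Real-time translation could be genuinely useful.")
    print(result[0]["translation_text"])  # the German rendering of the sentence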

AI censorship will be the worst thing to happen. You already can't trust the news, and AI has already been used for targeted campaigns, profiling all users of Facebook and targeting those most likely to be swayed.

We are all input for AI models. Everything you browse, watch (and for how long), or type online is input. AI is trained to manipulate human behavior and practically already has most of the human population addicted to their social media feeds. What kind of superintelligence will arise from that...



Like everything else... AI has its good and its bad

Just like the Internet does

Just like cell phones

Just like television/streaming

Just like driving a car

Just like getting married

Just like having kids

Just like going to college and taking on a student loan with no guarantee you will get a job that enables you to pay it back in a timely fashion.

In summary: just like everything else in life, it has its good and its bad.