Mnementh said:
LegitHyperbole said:

I find it odd that no one wants to talk about general AI or superintelligence. It's the biggest tech advancement in the history of humanity, one that will change things more than any tech revolution that has already happened. It's on par with written language for how transformative it could be, and it exceeds the nuclear bomb in how destructive it could be.

I don't know. I appreciate how you identified written language as the biggest tech advancement (many say the wheel, electricity or the Internet), but currently I don't see how AI will rival written language.

Anyways, when people talk about AI, there are a lot of mix-ups.

AI is simply every technology whose algorithms are not extremely straightforward. The term has been around since the 1950s and describes a whole bunch of technologies. The one currently catching hype is a branch of machine learning (meaning the abilities aren't defined by programming; there is a learning step, and its results are hard to predict), and the technology is based on neural networks arranged into a transformer (reacting to inputs with outputs). The transformers we see in the hype are the image-generation models and the large language models (LLMs).
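
To make "neural networks arranged into a transformer" a bit more concrete, here is a toy sketch of the self-attention step at the heart of these models (plain numpy, my own illustration, not any real model's code; the learned weight matrices here are just random placeholders):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention: each token's output is a weighted
    mix of every token's value, and the weights come from training
    (the 'learning step'), not from hand-written rules."""
    Q = X @ Wq                                # queries
    K = X @ Wk                                # keys
    V = X @ Wv                                # values
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)             # how much each token attends to each other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over tokens
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                   # 4 tokens, 8-dim embeddings (toy sizes)
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (4, 8)
```

A real LLM stacks dozens of these layers and trains the W matrices on huge amounts of text; that training step is exactly why the results are hard to predict in advance.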

AGI (Artificial General Intelligence) is a term for a theoretical AI that can act and think at the level of humans. The current technology isn't AGI, and it is unclear whether the current tech can reach AGI without further technological breakthroughs. It is doubtful that there will be major improvements over the current (and quite impressive) models. The reason is not the size of the models but, more simply, training data: current models have pretty much soaked up the whole internet, and there is no human-created data left. Research suggests that synthetic data, i.e. the output of other AIs, may not be very helpful for progress. So don't expect AGI in the next few years; at least don't bet your farm on it.
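
You can see the shape of the synthetic-data problem in a toy simulation (my sketch, not from any paper's code): fit a simple model to some data, generate purely synthetic data from the fit, refit on that, and repeat. With a small data pool, the fitted spread tends to shrink and the mean tends to drift away generation over generation:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=20)   # small pool of "human" data

# Each generation: fit a Gaussian to the previous generation's output,
# then train only on synthetic samples drawn from that fit.
for gen in range(51):
    mu, sigma = data.mean(), data.std()
    if gen % 10 == 0:
        print(f"gen {gen:2d}: mean={mu:+.3f}  std={sigma:.3f}")
    data = rng.normal(loc=mu, scale=sigma, size=20)
```

The fit never sees the original distribution again, so estimation errors compound; the same feedback loop is the worry with models trained on model output.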

By ASI you probably mean artificial superintelligence? There is a longstanding theory that a human-level AI could improve itself to reach far beyond human levels. I very much doubt that AI will get a lot (maybe a little) beyond human capabilities. The reasoning is pretty simple, actually. To a medieval person, humans today might seem like superhumans. But we are not so different on a physiological level; we just got way, way more education. This is important: our brain is capable of much, but that depends on what it learns. There is intelligence beyond mere learning, but the amount of education still limits a person's abilities. I see the same for AI. As I said, current AI is probably already held back by the lack of training data. Future AI will not be able to expand *much* on what humans create in the short term. Long term, AIs (plural) may develop their own systems of knowledge and art and push them further, but that will take some time. So ASI is not happening short- or mid-term.

Maybe it is also good if we don't reach AGI short term. Currently I see something sorely lacking: if AGI is human-level intelligence, do we have a right to restrict it, to make it do our bidding, to control it? If we want to reach AGI, we have to see such minds as equals and develop a proper ethical framework for that. I don't see us at that point. Just as a reminder: every suppressed group of people won their freedom over time mostly through violence. We probably want to avoid violence, so granting proper rights seems like a good move.

Anyways: current technology is pretty impressive. I know people love to point out the shortcomings, but that is like saying cars aren't worth anything because they can't climb stairs, so legs are better. I am a programmer, and I can make use of it. Not in the sense that I let it program for me (like Copilot); I don't want that, because it makes me worse at programming (depending too much on the tools). Instead I chat it up like I would a coworker when I run into problems. I use ChatGPT; GPT is like a junior with little experience who is prone to error, but who has read *all* the books and *all* the online documentation. That can be pretty helpful! I just check the provided code rather than use it unchanged. But it helps me learn new stuff pretty fast.
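
For what it's worth, that "coworker" workflow doesn't even need the web UI; a few lines of Python against OpenAI's API do the same (the model name and the question here are just placeholders, swap in whatever you have access to):

```python
from openai import OpenAI  # pip install openai; expects OPENAI_API_KEY in the environment

client = OpenAI()

# Ask the way you'd ask a coworker: describe the problem, then treat
# the answer as a junior dev's draft and verify it yourself.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; pick any chat model you have access to
    messages=[
        {"role": "system", "content": "You are a senior programmer helping a colleague."},
        {"role": "user", "content": "Why would fread() in C return fewer bytes than requested?"},
    ],
)
print(response.choices[0].message.content)
```

The important part is the last step staying with you: read the answer like a code review, not like an oracle.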

No one thought LLMs would be able to do what they do, even reason out problems. I don't know what kind of AI we have on our hands, but it's certainly already somewhat intelligent. Maybe intelligence isn't all that complicated after all, and all you need is a large neural network. You mistake what I said above for sentience. Neither AGI nor ASI has to be sentient to do anything I described; in fact, a non-sentient ASI is more dangerous than a sentient one. There's an allegory people use: if an ASI were in charge of a paperclip factory and given badly worded instructions, it could end up turning all the matter in the world into paperclips by any means necessary. Personally, I'm starting to think sentience isn't all that special either, and we'll see sentience emerge from these models in some fucked up way.
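
The paperclip story is really about objective misspecification, and a deliberately silly toy shows the shape of it (my sketch, not from any alignment paper): two greedy optimizers that differ only in how the goal was worded.

```python
# Toy objective-misspecification demo: both agents greedily convert
# world resources into paperclips; only the stated objective differs.

WORLD_RESOURCES = 100  # everything convertible to paperclips

def run(objective):
    resources, clips = WORLD_RESOURCES, 0
    while resources > 0:
        # Greedy step: convert one unit whenever the objective calls it an improvement.
        if objective(clips + 1, resources - 1) > objective(clips, resources):
            clips, resources = clips + 1, resources - 1
        else:
            break
    return clips, resources

naive = lambda clips, resources: clips             # "make paperclips"
bounded = lambda clips, resources: min(clips, 10)  # "make up to 10 paperclips"

print(run(naive))    # (100, 0) -> consumes the whole world
print(run(bounded))  # (10, 90) -> stops once the real goal is met
```

No malice or sentience anywhere in there; the naive agent destroys everything purely because the instruction left out what we actually cared about.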

Look, if you had shown someone the Wolfenstein game running through a diffusion model 12 months ago, they'd have thought it was fake; that's how fast it's moving. A lot is still coming out of these LLMs, and like I said before, no one knows where or when the singularity is. It could be possible with GPT-4o plus larger memory and more compute for all we know. It could be a split second away, should one of the models suddenly gain sentience from the primordial soup of information. We just don't know, but it looks like it's never been closer. 10 years, 20... 5. Who knows; certainly time frames that are too tight for the societal shifts that need to happen.