
LegitHyperbole said:

No one thought LLMs would be able to do what they do, even reason out problems. I don't know what kind of AI we have on our hands, but it's certainly already somewhat intelligent. Maybe intelligence isn't all that complicated after all, and all you need is a large neural network. You mistake what I said above for sentience. Neither AGI nor ASI has to be sentient to do anything I described; in fact, a non-sentient ASI is more dangerous than a sentient AI. There's an allegory people use: if an ASI were put in charge of a paper clip factory and given badly worded instructions, it could end up turning all the matter in the world into paper clips through any means necessary. Personally, I'm starting to think sentience isn't all that special either and we'll see sentience emerge from these models in some fucked up way.

I partially agree, mostly because we have no clear definition of intelligence to begin with. Many think intelligence is an on-off switch: you either have it or you don't. Others think of intelligence as a linear progression: given some entities X and Y, we can clearly say which one is more or less intelligent. But both views are off. Wikipedia clearly reflects this definitional mess:

"Intelligence has been defined in many ways: the capacity for abstractionlogicunderstandingself-awarenesslearningemotional knowledgereasoningplanningcreativitycritical thinking, and problem-solving. It can be described as the ability to perceive or infer information; and to retain it as knowledge to be applied to adaptive behaviors within an environment or context."

You can see the many areas of which intelligence consists. Wikipedia follows up with:

"Most psychologists believe that intelligence can be divided into various domains or competencies."

And this, I think, is the most important part. There are many different cognitive abilities that are summed up under the umbrella term of intelligence. They are not intrinsically linked, but in humans, someone with high abilities in one area often excels in the others as well. That is a human-specific thing, though. The correlation may be different in animals (there is some research on this, but not much yet) and is clearly different in machines.

That it is different in machines is obvious, because in some areas machines have outperformed humans by a lot. Take chess, for example: computers have reliably defeated human opponents for years now. Not even a century ago, people thought a machine able to beat us at chess must easily beat us at simple reasoning tasks or other cognitive abilities. But it doesn't. And that is something we have to be clear about: different cognitive abilities are different. There may be a breakthrough tomorrow in another area besides transformers/LLMs that advances another ability. But for transformers, including LLMs, I see that the amount of training data may limit their abilities, and we are already hitting this limit. That said, even within the limit the abilities are very impressive, and we will need years to figure out all the ways we can make use of them.

To talk about another field of intelligence and artificial intelligence: many disregard motor abilities, because for most of us they come without much thought. But these cognitive abilities are actually very hard, which we see in the field of robotics. There have also been massive breakthroughs recently, shown in some of the demo videos by robotics companies. The agility and strength aren't really the point here; it's the quickness of reaction and the ability to handle some more complicated fine motor tasks. This is all very impressive. Yet I have still to see a demo of a robot tying its own shoelaces, something we do instinctively, but which is a very hard task.

LegitHyperbole said:

Look, if you showed someone the Wolfenstein game working through the diffusion model 12 months ago, they'd have thought it was fake; that's how fast it's moving. A lot is still coming out of these LLMs, and like I said before, no one knows where or when the singularity is. It could be possible with GPT-4o with larger memory and more compute for all we know. It could be a split second away, should one of the models suddenly gain sentience from the primordial soup of information. We just don't know, but it looks like it's never been closer. 10 years, 20... 5. Who knows? Certainly time frames that are too tight for the societal shifts that need to happen.

By Wolfenstein you mean the Doom demo. I am actually *not* impressed by that, because I can see how it fits into current transformer technology pretty easily. It basically makes a movie out of training data, with the only novelty being that the movie is live-prompted by inputs. That may actually become a big thing, but not in gaming. The limitations were also all too visible. The model had a very poor representation of state: enemies and barrels came back once they were out of view, and damage (both to enemies and to the player character) was highly inconsistent. These inconsistencies imply that the model has pretty much no representation of state, which means it is pretty much not usable in gaming.
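To make that last point concrete, here is a minimal sketch, in Python, of the loop such a model would run, assuming it conditions only on a short sliding window of recent frames plus the current input. All names and the context size are hypothetical illustrations, not taken from the demo's actual code:

```python
# Conceptual sketch of an action-conditioned frame-diffusion loop.
# All identifiers and numbers are made up for illustration; this is
# not the demo's real architecture or code.

from collections import deque

CONTEXT_LEN = 32  # assumed: the model only ever sees the last N frames


class FrameDiffusionModel:
    """Stand-in for a trained model that denoises the next frame,
    conditioned on recent frames plus the player's current input."""

    def predict_next_frame(self, recent_frames, action):
        # A real model would run iterative denoising here; we return a
        # dummy frame buffer so the sketch is at least executable.
        return bytes(320 * 200)


def game_loop(model, get_action, render, first_frame, steps=1000):
    # This sliding window is the model's ONLY notion of "state". A
    # barrel destroyed more than CONTEXT_LEN frames ago has scrolled
    # out of the window, so nothing stops the model from repainting it
    # intact -- exactly the inconsistency described above.
    history = deque([first_frame], maxlen=CONTEXT_LEN)
    for _ in range(steps):
        action = get_action()
        frame = model.predict_next_frame(list(history), action)
        history.append(frame)
        render(frame)
```

A conventional engine works the other way around: explicit game state (entity lists, hit points) is authoritative and the pixels are merely rendered from it, which is why destroyed barrels stay destroyed there.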



3DS-FC: 4511-1768-7903 (Mii-Name: Mnementh), Nintendo-Network-ID: Mnementh, Switch: SW-7706-3819-9381 (Mnementh)

my greatest games: 2017, 2018, 2019, 2020, 2021, 2022, 2023

10 years greatest game event!

bets: [peak year] [+], [1], [2], [3], [4]