
Forums - General Discussion - The Artificial Intelligence thread.

 

AGI/ASI will be..

A great advancement: 6 (50.00%)
A poor advancement: 3 (25.00%)
Like summoning a demon: 1 (8.33%)
Like summoning God: 0 (0%)
No opinion on what AGI/ASI will be like: 2 (16.67%)
Total: 12
LegitHyperbole said:
SvennoJ said:

No, an fps replicator ;)

AI can be useful to optimize games: a brute-force guided search for what to nip and tuck to maintain stable frame rates. The more games it optimizes, the better it could get at it. I guess AI upscaling is already a form of that.
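To give a feel for that, here's a toy sketch of such a guided search. Everything in it is made up for illustration: the settings, the cost numbers, and measure_frame_time() are hypothetical stand-ins for profiling a real scene.

```python
# Toy sketch: greedy search over quality settings to hit a frame-time budget.
# All names and numbers here are invented for illustration.

TARGET_MS = 16.6  # 60 fps budget

# Each setting lists its levels from best-looking to cheapest.
SETTINGS = {
    "shadows":     ["ultra", "high", "medium", "low"],
    "draw_dist":   ["far", "mid", "near"],
    "reflections": ["rt", "ssr", "cube", "off"],
}

def measure_frame_time(config: dict) -> float:
    """Hypothetical stand-in for rendering the scene and profiling it."""
    base = 22.0
    saving = {"shadows": 2.0, "draw_dist": 1.5, "reflections": 2.5}
    for name, level in config.items():
        base -= saving[name] * SETTINGS[name].index(level)
    return base

def tune(config: dict) -> dict:
    """Greedily lower whichever setting buys the most time until the budget holds."""
    while measure_frame_time(config) > TARGET_MS:
        trials = []
        for name, levels in SETTINGS.items():
            i = levels.index(config[name])
            if i + 1 < len(levels):  # can still nip and tuck this setting
                trials.append(dict(config, **{name: levels[i + 1]}))
        if not trials:
            break  # everything already at minimum quality
        config = min(trials, key=measure_frame_time)  # biggest frame-time win
    return config

print(tune({name: levels[0] for name, levels in SETTINGS.items()}))
```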

If Ubisoft trained an AI on all of their IP to spit out some form of a game on the cheap, would it be any different from a team making a new game?

Yes, it wouldn't have anything new, instead of 5% new things :p

What AI could do is localize games, in the sense of: put this game in my city. GTA 9, pick a city on Google Maps, play it there. Yet thinking up original characters, storylines and missions is better left to humans. Integrating those into a city of choice, AI could (eventually) do that.

So if you're fine with the same characters and gameplay, yes AI could generate more Assassin's Creed games. But someone would need to enter all the historical city data ... There's no Google Earth of ancient times!




I would like to see AI paired up with quantum computing. It sounds scary at first thought, but I think/hope the human race will survive in the end. Time will tell.



LegitHyperbole said:

No one thought LLMs would be able to do what they do, even reason out problems. I don't know what kind of AI we have on our hands, but it's certainly already somewhat intelligent. Maybe intelligence isn't all that complicated after all, and all you need is a large neural network. You mistake what I said above for sentience. Neither AGI nor ASI has to be sentient to do anything I described; in fact a non-sentient ASI is more dangerous than a sentient AI. There's an allegory people use: if an ASI in charge of a paper clip factory were given badly worded instructions, it could end up turning all the matter in the world into paper clips through any means necessary. Personally, I'm starting to think sentience isn't all that special either and we'll see sentience emerge from these models in some fucked up way.

I partially agree, mostly because we have no clear definition of intelligence to begin with. Many think intelligence is an on-off switch: you either have it or not. Others think of intelligence as a linear progression: given some entities X and Y, we can clearly say which one is more or less intelligent. But both views are off. Wikipedia clearly reflects this definitional mess:

"Intelligence has been defined in many ways: the capacity for abstractionlogicunderstandingself-awarenesslearningemotional knowledgereasoningplanningcreativitycritical thinking, and problem-solving. It can be described as the ability to perceive or infer information; and to retain it as knowledge to be applied to adaptive behaviors within an environment or context."

You see the many areas of which intelligence consists. Wikipedia follows up with:

"Most psychologists believe that intelligence can be divided into various domains or competencies."

And this is, I think, the most important part. There are many different cognitive abilities that are summed up under the umbrella term of intelligence. They are not intrinsically linked, but in humans, someone with high abilities in one area often excels in the others as well. This is a human-specific thing, though. The correlation may be different in animals (there is some research on it, but not too much yet) and clearly is different in machines.

That it is different in machines is obvious, because in some areas machines have outperformed humans by a lot. Take chess for example: computers have reliably defeated human opponents for years now. Not even a century ago, people thought a machine that is able to beat us in chess must easily beat us in simple reasoning tasks or other cognitive abilities. But it doesn't. And that is something we have to be clear about: different cognitive abilities are different. There may be a breakthrough tomorrow in another area besides transformers/LLMs that advances another ability. But for transformers, including LLMs, I see that the amount of training data may limit their abilities, and we are already hitting this limit. That said, even within the limit the abilities are very impressive, and we will need years to figure out all the ways we can make use of them.

To talk about another field of intelligence and artificial intelligence: many disregard motor abilities, because for most of us they come without much thought. But these cognitive abilities are actually very hard, which we see in the field of robotics. There too were recently massive breakthroughs, which are shown in some of these demo videos by robotics companies. The agility and strength aren't really the point here; the quickness of reaction and the ability to handle some more complicated fine motor tasks are. This is all very impressive. Yet I have still to see a demo of a robot tying its own shoelaces, something which we do instinctively, but which is a very hard task.

LegitHyperbole said:

Look, if you showed someone the Wolfenstein game working through the diffusion model 12 months ago, they'd have thought it was fake; that's how fast it's moving. A lot is still coming out of these LLMs, and like I said before, no one knows where or when the singularity is. It could be possible with GPT-4o with larger memory and more compute for all we know. It could be a split second away, should one of the models suddenly gain sentience from the primordial soup of information. We just don't know, but it looks like it's never been closer. 10 years, 20... 5. Who knows; certainly time frames that are too tight for the societal shifts that need to happen.

By Wolfenstein you mean the Doom demo. I am actually *not* impressed by that, because I can see how it fits into current transformer technology pretty easily. It basically makes a movie out of training data, with the only novelty being that the movie is live-prompted by inputs. That may actually become a big thing, but not in gaming. The limitations were also all too visible. The model had a very poor representation of state: enemies and barrels came back once they were out of view, and damage (both to enemies and the character) was highly inconsistent. These inconsistencies imply that the model has pretty much no representation of state, which means it is pretty much not usable in gaming.



3DS-FC: 4511-1768-7903 (Mii-Name: Mnementh), Nintendo-Network-ID: Mnementh, Switch: SW-7706-3819-9381 (Mnementh)

my greatest games: 2017, 2018, 2019, 2020, 2021, 2022, 2023

10 years greatest game event!

bets: [peak year] [+], [1], [2], [3], [4]

SvennoJ said:
shavenferret said:

Has anybody worked with AI in some capacity?

Not in the last decade. Before that I worked on GPS navigation, which had what was then called AI: route finding, dead reckoning, voice recognition, address matching, basically a lot of fuzzy matching. As well as tracking cell phone tower data to predict/determine the locations of traffic jams and feed back the actual traffic speed on major roads.
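For a taste of what the fuzzy matching part boils down to: address matching is often edit distance under the hood. A minimal sketch (not the actual navigation code; the street names here are invented):

```python
# Minimal sketch of fuzzy address matching via Levenshtein edit distance.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def best_match(query: str, streets: list[str]) -> str:
    """Return the known street name closest to what the user typed or said."""
    return min(streets, key=lambda s: levenshtein(query.lower(), s.lower()))

streets = ["Hauptstrasse", "Hofstrasse", "Marktplatz", "Bahnhofstrasse"]
print(best_match("Haubtstrase", streets))  # -> "Hauptstrasse"
```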

Of course, of all those, the rule-based 'AI' was the most reliable and the neural net parts (voice recognition) the least, or rather the hardest to fix/improve. With limited processing power it all had to be either rule based or simple neural nets. Server based is still not great though; Siri and my TV often still don't understand my Dutch accent. :/

Anyway, none of those were in any way threatening, but we did have some moral concerns about tracking user data and the wholesale tracking of all cell phone location data.

Never used AI for code generation, but if AI could help find bugs, that could be useful. Not just make the program crash, but actually find why it crashes and under what circumstances. That's the hardest part of the job, those infernal 'can't replicate easily' crashes and unintended behavior, which still happen when everything is rule based. Murphy's law in software: "Everything that can go wrong, eventually will go wrong." We proved that rule all the time lol.

One of the worst was tracking down an unexpected slowdown and increase in memory use in the routing engine. Eventually it turned out to be caused by a unique situation in the road network data. A logging area in Germany somehow had an exact geometric pattern of equal-length roads, a grid pattern. A condition in the code kept both options open when search paths reached the next crossing with the exact same value. Since it was a large grid pattern like a checkerboard, it basically replicated the checkerboard problem, doubling the open options (paths to investigate) at every intersection. Yet only when this area fell within search range would it start exploring there, slowing down the useful search paths the longer the search had to go on from there: either completing slowly or eventually running out of memory.
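To make that checkerboard blow-up concrete, here's a toy reproduction on a uniform grid (a hypothetical sketch, not the real routing engine). A search that keeps every equal-cost path open does combinatorially more work than one that settles each crossing once:

```python
# Toy reproduction of the "checkerboard" routing bug on an N x N unit grid.
import heapq

N = 10  # grid size; the buggy expansion count explodes long before this

def neighbors(x, y):
    # Move right/up only, to keep the toy finite; all edges have length 1.
    for nx, ny in ((x + 1, y), (x, y + 1)):
        if nx < N and ny < N:
            yield nx, ny

def buggy_expansions():
    """Keep every tied path open (no settled set): work blows up."""
    frontier, count = [(0, (0, 0))], 0
    while frontier:
        cost, (x, y) = heapq.heappop(frontier)
        count += 1
        for nxt in neighbors(x, y):
            heapq.heappush(frontier, (cost + 1, nxt))
    return count

def settled_expansions():
    """Settle each crossing once (plain Dijkstra): work stays linear."""
    frontier, settled, count = [(0, (0, 0))], set(), 0
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node in settled:
            continue
        settled.add(node)
        count += 1
        for nxt in neighbors(*node):
            heapq.heappush(frontier, (cost + 1, nxt))
    return count

print("keep ties open:", buggy_expansions())    # 184755 expansions for N=10
print("settle once:   ", settled_expansions())  # 100 expansions (N*N)
```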

Another one was just as hard, if not harder, to find: a condition where left-turn prohibitions from all sides of a crossing could create another looping condition. Both bugs, and others, were eventually found with visualization of the search tree, then spotting by eye where suspicious behavior occurs: activity that goes on too long in an area, areas that aren't reached, unexpected jumps, etc. AI could be useful to spot things like that.
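For illustration: those turn prohibitions are exactly why the search state has to be the edge you arrived on, not just the crossing itself, since the same crossing is allowed or forbidden depending on how you approach it. A little hypothetical sketch (the edge names and road layout are invented):

```python
# Sketch of turn restrictions with edge-based search state.
from collections import deque

# Directed edges: id -> (from_node, to_node)
EDGES = {
    "a": ("start", "x"), "b": ("x", "goal"),
    "c": ("x", "y"),     "d": ("y", "x"),
}
# Prohibited turns at a crossing: (arriving edge, departing edge) pairs.
BANNED = {("a", "b")}  # e.g. no left turn from a onto b

def routes(start_node, goal_node):
    """BFS over paths of edges, so turn bans can be checked per arriving edge."""
    out_edges = {}
    for eid, (u, v) in EDGES.items():
        out_edges.setdefault(u, []).append(eid)
    queue = deque((e,) for e in out_edges.get(start_node, []))
    while queue:
        path = queue.popleft()
        last = path[-1]
        if EDGES[last][1] == goal_node:
            yield path
            continue
        for nxt in out_edges.get(EDGES[last][1], []):
            if (last, nxt) in BANNED or nxt in path:  # skip banned turns and loops
                continue
            queue.append(path + (nxt,))

print(list(routes("start", "goal")))  # only ('a', 'c', 'd', 'b') survives the ban
```

Arriving at x on edge d instead of a is what makes the turn legal: the classic drive-around-the-block.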

Visualization of code running has been very useful for optimizing disk access as well: finding patterns in data access to organize data more efficiently, reducing the number of reads, choosing block sizes, optimizing what should stay in memory and for how long. Using the human mind for pattern recognition. AI used to improve compression and data organization would be useful. Of course, at one point Huffman coding was considered AI.
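And Huffman coding really is a nice reminder of how the 'AI' label drifts: by today's standards it's a short greedy algorithm. A quick textbook sketch, nothing product-specific:

```python
# Quick sketch of Huffman coding: frequent symbols get shorter bit strings.
import heapq
from collections import Counter

def huffman_codes(text: str) -> dict[str, str]:
    """Build a prefix-free code by repeatedly merging the two rarest subtrees."""
    # Heap entries: (frequency, tiebreak id, {symbol: code-so-far})
    heap = [(f, i, {ch: ""}) for i, (ch, f) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, lo = heapq.heappop(heap)  # two least frequent subtrees
        f2, _, hi = heapq.heappop(heap)
        merged = {ch: "0" + code for ch, code in lo.items()}
        merged.update({ch: "1" + code for ch, code in hi.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

codes = huffman_codes("abracadabra")
print(codes)  # exact codes depend on tie-breaking; the lengths don't
print(sum(len(codes[ch]) for ch in "abracadabra"), "bits vs", 8 * 11, "for plain bytes")
```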


Anyway, we were all about reducing costs and optimization, so this just doesn't compute to me:

https://www.eurogamer.net/google-reaches-grim-ai-milestone-replicates-playable-version-of-doom-entirely-by-algorithm

How to make things less efficient...


Sorry that we Germans make crazy streets. :)

Routing is a *very* hard problem; I am impressed you worked on that. And you are right, before the current hype of LLMs all these different technologies were called AI. Currently there is a focus on neural networks, and even more on the transformer architecture, which may be unhealthy for the field. I have a strong feeling the biggest progress will not come from advances in one area of AI, but from combining different AI technologies in a useful way.




Mnementh said:

Sorry that we Germans make crazy streets. :)

Routing is a *very* hard problem; I am impressed you worked on that. And you are right, before the current hype of LLMs all these different technologies were called AI. Currently there is a focus on neural networks, and even more on the transformer architecture, which may be unhealthy for the field. I have a strong feeling the biggest progress will not come from advances in one area of AI, but from combining different AI technologies in a useful way.

Oh, Belgians are worse :p I came to an intersection there where all 4 directions had the same destination town name, and Meisse was unreachable in the data. Due to turn restrictions you could only leave the town, not enter.

The human brain likely has different processes for different types of 'intelligence' as well. Neural networks are great for image processing, or sensory data processing in general. The transformer architecture is great for rote learning: learning a routine, reflex memory, which is basically what that Doom demo is.

Yet how higher reasoning, deduction, etc. work is still a problem.

And then there's how to tie it all together in an active loop, which could be what we call consciousness: that voice in your head reasoning things out, combining inputs from sensory data and long and short term memory to reason out a problem and come up with a plan/decision.

And we know all too well from humans that their decision making is based/biased on experience and beliefs. Hence the worry about what data to use to train AI. Human history is fraught with bad reasoning and disastrous outcomes :/

Humans are also not a singular intelligence. Collectively we try to balance out our differences to arrive at a social contract, and we use each other's input to solve problems. Strength in diversity, same as how evolution works. The danger with AI is that it's easy to clone, or rather that there's one version. It's basically creating a dictator. Some dictators can be good; most aren't, and they certainly can't adjust well to changes.

Humans still haven't found a working system to live together in peace. How are we going to create a general intelligence that will...

Anyway LLMs can help with many things, good and bad. So far just another (very powerful) tool.




It was exposed before, but a new video just popped up.

This is the older article:

‘Lavender’: The AI machine directing Israel’s bombing spree in Gaza

https://www.972mag.com/lavender-ai-israeli-army-gaza/

The Israeli army has marked tens of thousands of Gazans as suspects for assassination, using an AI targeting system with little human oversight and a permissive policy for casualties, +972 and Local Call reveal.


No doubt Israel has expanded on the system by now.