
Elon Musk to start an AI Game Studio

Well, AI content is already being sold;

Skins like this one, for example, have been sold, and probably more companies are doing the same...

I am not a video game expert, but I can tell you that some are dreaming of an in-game AI skin generator:

You type in terms for the kind of skin you want, the AI generator creates it, and you can then use it for a small fee...

It is a lot of work though, because you also need to make deals with big companies like Marvel/Coca-Cola/Disney etc., since every time someone creates a skin with the term Marvel in it, a piece of the fee would go to Disney/Marvel... and some big companies like EA will probably try to get the exclusive AI-generator rights for their video games... etc.
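
To make that concrete, here is a rough Python sketch of how such a fee split could work. Everything here (the fee, the royalty table, detecting IP by keyword) is made up for illustration, not how any real platform does it:

```python
# Hypothetical sketch of an in-game AI skin generator's licensed-IP fee split.
# All names and numbers are invented; no real service works exactly like this.

BASE_FEE = 1.99  # what the player pays per generated skin

# Hypothetical royalty table negotiated with rights holders.
IP_ROYALTY_TABLE = {
    "marvel": ("Disney/Marvel", 0.30),   # 30% of the fee
    "coca-cola": ("Coca-Cola", 0.25),
}

def split_fee(prompt: str) -> dict:
    """Detect licensed terms in the prompt and split the fee accordingly."""
    shares = {}
    remaining = BASE_FEE
    for term, (holder, rate) in IP_ROYALTY_TABLE.items():
        if term in prompt.lower():
            cut = round(BASE_FEE * rate, 2)
            shares[holder] = shares.get(holder, 0.0) + cut
            remaining -= cut
    shares["platform"] = round(remaining, 2)
    return shares

# Example: a player asks for a Marvel-themed skin.
print(split_fee("Iron Man style Marvel armor skin"))
# {'Disney/Marvel': 0.6, 'platform': 1.39}
```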






Mnementh said:
sc94597 said:

We are probably 2-10 years away from AI with real reasoning and creativity abilities in the sense that humans are capable of (and beyond). That's about when we'll have Level 3/Level 4 agents that will be quite capable of out-of-distribution generalization.

In that scenario, humans (until robotics catches up with ML, which might not be long) will be assisting AI agents, rather than vice versa. AI agents will probably take the role of art directors.

Quite honestly, we should start worrying about how people are going to survive when the mass of human labor is outmoded. In a system designed around the bulk of the human population laboring to survive, that system needs to be outmoded as well.

Hmm, there is strong indication that current AI has run out of training data, because it has already absorbed *everything* humankind has produced over the millennia. That's why OpenAI trained a speech-to-text AI, so that they can access the content of videos. That doesn't mean there aren't some improvements possible, but we might well have hit another ceiling for the moment and need a new breakthrough, just as this paper was the breakthrough that led to the current AI development.

But yes, we need to rethink how to organize labor. There are two scenarios: one where everyone is poor except a few tech billionaires, and one where AI is put to work producing wealth for everyone. And surprisingly, these scenarios were already thought through about a decade ago in the story Manna by Marshall Brain. I highly recommend that read if you think about the AI future.

Yes, scaling by adding more data likely won't work anymore, BUT we are starting to see scaling via search and refinement at test time (test-time compute). That is what OpenAI is doing, quite roughly in this first iteration, with their o1 model series.
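
To make "test-time compute" concrete, here is a minimal sketch of one such scheme, best-of-N sampling with a verifier. The `sample_answer` and `verifier_score` functions are hypothetical stand-ins for a model call and a learned or rule-based scorer; this illustrates the idea, not OpenAI's actual method:

```python
import random

# Minimal sketch of test-time compute scaling via best-of-N search.
# `sample_answer` and `verifier_score` are hypothetical stand-ins for an
# LLM sampling call and a learned (or rule-based) verifier.

def sample_answer(prompt: str) -> str:
    # Placeholder: in practice this would be a model API call.
    return f"candidate-{random.randint(0, 9999)}"

def verifier_score(prompt: str, answer: str) -> float:
    # Placeholder: a reward model, unit tests, a proof checker, etc.
    return random.random()

def best_of_n(prompt: str, n: int) -> str:
    """Spend more inference compute (larger n) to get better answers."""
    candidates = [sample_answer(prompt) for _ in range(n)]
    return max(candidates, key=lambda a: verifier_score(prompt, a))

# Doubling n doubles inference cost but tends to raise answer quality --
# that trade-off is the new scaling axis.
print(best_of_n("Design an O(n log n) algorithm for ...", n=16))
```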

I've fed o1-preview some graduate-level algorithm-design exams, and it consistently scores in the 90-100% range on them. A few of these algorithm designs are not something it would have been trained on. That is enough for me to think it is capable of some rudimentary reasoning in a way previous LLMs (which would get 50-60% on these exams when I fed the exams to them) could not. We're practically at Level 2 Reasoners in OpenAI's roadmap.

Also, there is still not a fully realized multi-modal model. The current "multi-modal" models are really a stitching together of various specialized models. So advancement can be made in that direction as well. And of course, LLMs aren't the be-all and end-all of AI research. LeCun, for example, is working on his JEPA architecture.
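
The "stitching" point can be made concrete with a sketch: many current pipelines bolt a separate vision model onto a text-only LLM, so the LLM never sees pixels, only a lossy text description. Both functions below are hypothetical placeholders, not any vendor's API:

```python
# Sketch of the "stitched" multi-modal pattern: a separate vision model
# produces text, which a text-only LLM then consumes.

def caption_image(image_bytes: bytes) -> str:
    # A dedicated vision model (e.g., a captioner) runs first...
    return "a red dragon perched on a castle tower"

def llm_complete(prompt: str) -> str:
    # ...and a text-only LLM never sees the pixels, only the caption.
    return f"Answering based on: {prompt}"

def stitched_vqa(image_bytes: bytes, question: str) -> str:
    caption = caption_image(image_bytes)
    return llm_complete(f"Image description: {caption}\nQuestion: {question}")

# A truly multi-modal model would instead map pixels and tokens into one
# shared representation, with no lossy text bottleneck in between.
print(stitched_vqa(b"...", "What is on the tower?"))
```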

So there is still a lot that can be done to advance things, even without going into topics like neuro-symbolic hybrids (which AlphaGo, AlphaGeometry, and AlphaProteo technically are) and neuromorphic computing in general.

I don't think we'll have another AI winter. The difference between now and previous eras is that previous AI winters were mostly caused by funding waves drying up in academia and the fact that we were hardware-limited. AI research is now an industry in itself (which it really wasn't in the past) and that is likely not going to end. There might be bear markets for AI development, but they're going to follow the regular business cycle, and advancements will still happen during them. 

Thanks for the Manna link, I'll definitely check it out. 

Mnementh said:

sc94597 said:

I am a bit more optimistic. I think there is a good possibility that we can live in a society akin to The Culture, if we make the correct early decisions when designing the initial systems. I do think there is significant risk too, albeit it probably won't look like Terminator. Fighting a conventional war wouldn't make sense when you could just engineer something else that would meet far less resistance (i.e., a virus that causes a further decline in human birth rates).

My personal opinion is that any cost-benefit-risk assessment needs to consider all x-risks. If we are almost certainly going to be massively devastated by climate change (which seems to be the case) or nuclear war, building advanced intelligences might be the better option, if the x-risk of doing so is less than the x-risk of those prior events.

Yeah, the Culture cycle is a great set of novels that shows how AI and humankind (and alienkind) can coexist and thrive. Key points here are treating AIs as equals to humans, holding every life at great value, and providing the basic necessities outside a capitalist society.

Your risk-weighing also seems productive. Yes, we should consider that AI can help us, for instance, on climate change. But we should not rely on it alone and should try other avenues as well.

Yes, our future AI-driven economy definitely needs to be post-capitalist (and probably absent of states as well).


sc94597 said:

[...]

Hehe, I don't know whether we've hit another ceiling or not. And even if we have, with the energy currently being poured into AI development we could break through ceilings with ideas like the ones you are illustrating. I am just saying I wouldn't be too sure about it either way.

And yes, something post-capitalist is happening whether we want it or not, but we have to make decisions to go either into techno-feudalism with the Sam Altmans and Elon Musks as our new kings, or a Culture-like future. Obviously I would prefer the latter.

But for that I think it is also essential to treat AIs as equals. There is much talk about AGI in regard to threats to humanity or technological possibility. But we have to consider that a true AI is also basically a person. We need to talk about the moral implications, and that is something I don't see yet.




Mnementh said:

[...]

I think the assessment that AI research is a series of s-curves is probably the correct one. We scale exponentially until we hit a ceiling, and then we move in a new direction where we scale exponentially again, until we hit the next ceiling. That describes the post-2008 situation pretty well.
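
As an illustration (purely a toy model, with made-up numbers), the s-curve picture corresponds to summing staggered logistic curves, each new paradigm taking off as the previous one saturates:

```python
import math

# Toy model of "AI progress as stacked s-curves": each paradigm contributes
# a logistic curve with its own onset time and capability ceiling.

def logistic(t: float, onset: float, ceiling: float, rate: float = 1.5) -> float:
    return ceiling / (1 + math.exp(-rate * (t - onset)))

# Hypothetical paradigms: (onset year offset, capability ceiling)
paradigms = [(2.0, 1.0),   # e.g., deep learning scaling
             (8.0, 1.0),   # e.g., data scaling
             (14.0, 1.0)]  # e.g., test-time compute

for t in range(0, 21, 2):
    total = sum(logistic(t, onset, cap) for onset, cap in paradigms)
    print(f"t={t:2d}  capability={total:4.2f}  " + "#" * int(total * 10))
```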

It is a good thing a lot of the talent in the industry is moving to companies that are more alignment-motivated, like Anthropic and now, hopefully, Safe Superintelligence Inc. The Musks and Altmans are more of the same parasitic class imo, and they're nothing without the talent they parasitize. The alignment problem is a technical problem at the end of the day, which means the actual technical workers designing the AI (I'm including philosophers specialized in meta-ethics in this category) are the ones who are going to imbue the personhood and meta-ethics of these systems. This group (unlike the tech leadership) tends to lean libertarian-left politically, with most seeming to be radical small-"d" democrats (focused on deliberative democracy, liquid democracy, etc.). That is what gives me a lot of optimism on this.


sc94597 said:

[...]

So I started to skim the links you provided, thank you for that. The neuro-symbolic machines especially align with my thinking. At university (or college, as it is called in the US, I guess) I learned Prolog in my computer science studies. It was the classical approach of symbolic reasoning, an earlier product of AI research. But it was tedious, as you had to manually model every piece of knowledge, and you could run into edge cases pretty fast.

But with the recent success of LLMs and seeing their particular limitations, my thought was: what if an LLM during training is instructed to translate its summation of the training data into a symbolic model akin to Prolog (sorry, Prolog is the one I know, there are probably better symbolic languages out there)? This has many advantages: a symbolic world model works better for holding context than textual reference, so it could help an LLM stay focused on the task instead of "forgetting" earlier parts of the conversation. A world model also helps maintain a general understanding of things. And while a big LLM can build a world model inside the neural net, that comes at high computational cost and requires massive training data. A symbolic representation of a world model can be cheaper and computationally faster.
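
A minimal sketch of what that could look like, assuming SWI-Prolog with the pyswip Python bindings; the `llm_extract_facts` function is a hypothetical stand-in for the LLM extraction step:

```python
from pyswip import Prolog  # Python bindings for SWI-Prolog

# Hypothetical stand-in: an LLM pass that turns training text into clauses.
def llm_extract_facts(text: str) -> list[str]:
    return ["bird(tweety)", "bird(polly)", "penguin(polly)"]

prolog = Prolog()

# Background rule: a compact, queryable world model with explicit exceptions.
prolog.assertz("flies(X) :- bird(X), \\+ penguin(X)")

# Load the LLM-extracted facts into the symbolic store.
for fact in llm_extract_facts("...training corpus..."):
    prolog.assertz(fact)

# Cheap, deterministic queries against the world model -- no context window,
# no forgetting, no re-running the neural net.
print(list(prolog.query("flies(X)")))  # [{'X': 'tweety'}]
```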

LLMs are powerful because they operate on text. We have had text/writing for a few thousand years and have developed it into a powerful tool used in many contexts. For instance, modern math is interfaced through specialized text (mathematical symbols), which LLMs obviously can work with. We have textual notations for many games like chess or go. And most programming languages operate on text. All of this LLMs can interface with and operate on. And yes, this includes symbolic reasoning, as Prolog shows. So yes, combining an LLM with a symbolic reasoning system behind a textual interface, which the LLM can query while processing requests, could be an improvement I think.
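
And the runtime side of the idea, the LLM consulting the symbolic store through a textual interface mid-request, could look something like this sketch (the `llm` function and the "QUERY:" convention are invented for illustration):

```python
from pyswip import Prolog

# Sketch of an LLM calling a symbolic reasoner mid-conversation.
# The LLM side is faked; the "QUERY: <goal>" convention is invented here.

prolog = Prolog()
prolog.assertz("parent(anna, bob)")
prolog.assertz("parent(bob, carla)")
prolog.assertz("grandparent(X, Z) :- parent(X, Y), parent(Y, Z)")

def llm(prompt: str) -> str:
    # Placeholder for a real model call. Here it decides to consult Prolog.
    if "RESULT:" not in prompt:
        return "QUERY: grandparent(anna, Z)"
    return "Anna's grandchild is Carla."

def answer(user_question: str) -> str:
    reply = llm(user_question)
    if reply.startswith("QUERY: "):
        goal = reply[len("QUERY: "):]
        result = list(prolog.query(goal))
        # Feed the symbolic result back to the LLM as plain text.
        reply = llm(f"{user_question}\nRESULT: {result}")
    return reply

print(answer("Who is Anna's grandchild?"))
```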




Mnementh said:

[...]

Prolog is still the common general-purpose logic programming language (although it has many dialects now). I took a Knowledge Representation and Reasoning course earlier this year, and we used Prolog. We also used Clingo for answer-set programming.
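
For anyone curious what answer-set programming looks like, here is a tiny example using clingo's Python API (assuming the clingo package is installed); the program is the classic birds-fly-by-default toy:

```python
import clingo  # Python API shipped with the clingo ASP system

# A tiny answer-set program: birds fly by default, penguins are the exception.
ASP_PROGRAM = """
bird(tweety). bird(polly).
penguin(polly).
flies(X) :- bird(X), not penguin(X).
"""

ctl = clingo.Control()
ctl.add("base", [], ASP_PROGRAM)
ctl.ground([("base", [])])

# Enumerate stable models (answer sets); here there is exactly one.
ctl.solve(on_model=lambda m: print("Answer set:", m))
# Answer set: bird(tweety) bird(polly) penguin(polly) flies(tweety)
```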

To an extent, LLMs (especially now that they're incorporating search in the form of "reasoning tokens") are already neuro-symbolic hybrid systems. We've also seen the success of neuro-symbolic systems in the Alpha series of models, where rule-sets are made explicit and reinforcement learning is used to optimally traverse those rule-sets. These models are super-human in their narrow fields. So it is definitely a powerful combo.
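
In miniature, that combination looks like this: an explicit, hand-written rule-set defines what moves are legal, and a learned evaluator (faked here with random numbers) decides which branch to take. A toy sketch:

```python
import random

# Miniature of the Alpha-style neuro-symbolic split: symbolic rules define
# what is legal; a learned evaluator (faked here) guides the search.

def legal_moves(board: tuple) -> list[int]:
    # Explicit rule-set: any empty cell of a 3x3 board is playable.
    return [i for i, cell in enumerate(board) if cell == " "]

def value_net(board: tuple) -> float:
    # Placeholder for a trained neural evaluator of the position.
    return random.random()

def choose_move(board: tuple, player: str) -> int:
    # Greedy one-ply search: symbolic rules enumerate, the net evaluates.
    def after(move: int) -> tuple:
        b = list(board)
        b[move] = player
        return tuple(b)
    return max(legal_moves(board), key=lambda m: value_net(after(m)))

empty = (" ",) * 9
print("Chosen move:", choose_move(empty, "X"))
```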


Leynos said:
Ryuu96 said:

4Chan.

8chan

You play as a hero, and every request you refuse from an NPC results in them labelling you a paedophile, because all the NPCs are modelled after Musk's fragile ego.

In all seriousness, the wealthiest man in the world, who owns three big corporations, complaining about big corporations owning too many studios is hilarious. I love how Musk always thinks he's one of the people and not just yet another asshole billionaire who received hand-me-downs from daddy and bought his way into most things. His adventure into gaming will go as well as his adventures into most things outside his areas of expertise (SpaceX/Tesla), and even then it's debatable how much he actually does at those companies anymore.

So as long as he doesn't acquire any studios, he can waste his money on an AI studio; it'll be a disaster. Of course the man with multiple workplace allegations against his companies wants to cut out as many humans as possible, though. An AI studio sounds horrible: devoid of creativity, a soulless piece of "art" which will result in a lot more of those blatantly obvious rip-off games, like all those mobile titles now coming to console, haha. Anyone who gives a shit about the medium of video games, film, or TV should want AI as regulated as possible.

AI is largely just an excuse for billionaire corporations to fire more people and save costs.

We'll end up with an industry filled with shit like this turned up to 11.


vidyaguy said:

What is he going to train it on?

rule34



Ryuu96 said:

[...]

LOL. An upside to this is that AAA industry slop can no longer be called "at least a 7" just because it is mostly free of bugs and its models and systems are kinda polished. I always hated this way of thinking, because that still means a lack of creativity. Basically Concord: there is nothing wrong with it if you just check technological or graphical checkboxes, yet it failed to resonate with gamers. AI-supported studios can do similar things pretty quickly, I think: polished stuff that checks the basic boxes but lacks creativity.

Still, I think small indies can profit from AI as well. The humans inject the creativity, while AI is used to scale it bigger than a small team could on their own. This might be a way of the future.




I'm sure it will succeed

Gamers have turned gaming into something very political; since they agree with Elon's political views, they will support Elon's game.