
Elon Musk to start an AI Game Studio

Mnementh said:
sc94597 said:

Yes, our future AI-driven economy definitely needs to be post-capitalist (and probably absent of states as well).

But for that I think it is also essential to treat AIs as equals. There is much talk about AGI in regard to threats to humanity or technological feasibility. But we have to consider that a true AI is also basically a person. We need to talk about the moral implications; that is a conversation I don't see happening yet.

I'm fairly certain investors will get bored and move on to the next thing before we come close to "AGI", if it's even possible. It happens all the time: companies hype up the "new amazing thing", investors pump in money, it doesn't change the world, and they move on to hyping up the next "new amazing thing". I've said before that I think AI will provide some good and some very bad things, and thus needs heavy regulation, but it won't change the world the way corporations are hyping it, and there's a decent chance it will eventually "crash" and hurt a number of companies; Microsoft, Meta, Google, Nvidia, etc.

These corps are hyping up AI like a space idiot saying we'll send a man to Pluto in the next 20 years, only to merely reach the Moon. That's still an achievement, but it's a far cry from the promise made. These corporations are hyping AI to ridiculous levels, pumping in billions upon billions (and destroying all their climate pledges in the process, but that's another issue) and telling investors, "Don't worry, just another $30bn, AI will make us filthy rich eventually." At some point investors are going to get bored, as they always do.

Microsoft, Meta to Feel AI Scrutiny as Investors Wait for Payoff

Of course there'll be some good uses and some bad uses; some promises will be kept, a lot will be broken, and a lot of companies will be taken down in the process, as with every advancement. And of course AI is a lot more than just AGI/LLMs and has many beneficial uses outside of those things. I'm pretty much in the middle: it won't change the world like the corps are hyping, and it won't doom humanity like some are saying. It'll be a fairly middle-of-the-road change. A fairly boring stance, but it feels like everyone goes to one extreme or the other when it comes to AI.

Last edited by Ryuu96 - 3 hours ago

Mnementh said:
Ryuu96 said:

You play as a hero and every request you refuse from an NPC results in them labelling you as a paedophile because all the NPCs are modelled after Musk's fragile ego.

In all seriousness, the wealthiest man in the world, who owns three big corporations, complaining about big corporations owning too many studios is hilarious. I love how Musk always thinks he's one of the people and not just yet another asshole billionaire who received hand-me-downs from daddy and bought his way into most things. His adventure into gaming will go as well as his adventures into most things outside his area of expertise (SpaceX/Tesla), and even then it's debatable how much he actually does in those companies anymore.

So as long as he doesn't acquire any studios, he can waste his money on an AI studio. It'll be a disaster. Of course the man with multiple workplace allegations against his companies wants to cut out as many humans as possible, though. An AI studio sounds horrible: devoid of creativity, a soulless piece of "art" which will result in a lot more of those blatantly obvious rip-off games, like all those mobile titles now coming to console, haha. Anyone who gives a shit about the medium of videogames, film or TV should want AI as regulated as possible.

AI is largely just an excuse for billionaire corporations to fire more people and save costs.

We'll end up with an industry filled with shit like this turned up to 11.

LOL. An upside to this is that AAA industry slop can no longer be called "at least a 7" just because it is mostly free of bugs and its models and systems are kinda polished. I've always hated that way of thinking, because it still means a lack of creativity. Basically Concord: there is nothing wrong with it if you just check technological or graphical checkboxes, yet it failed to resonate with gamers. AI-supported studios can do similar things pretty quickly, I think: polished stuff that checks the basic boxes but has no creativity.

Still, I think small indies can profit from AI as well. The humans inject the creativity, while AI is used to scale it bigger than a small team could on their own. This might be one way of the future.

Of course there are decent uses for AI in videogame development; AI isn't anything new at the end of the day, it has been used for decades already, and it can be beneficial to small indie teams in other ways. It entirely depends on where, what, and how it is used, but I seriously doubt all these "worker first" and more altruistic uses of AI are what billionaire twats like Musk have in mind, hence the need for regulation, unions, etc. Billionaires like Musk, Microsoft, and others just want to use AI to fire as many workers as possible and pump out soulless, factory-like games. But governments have been absolutely fucking useless at regulating AI so far.

Last edited by Ryuu96 - 3 hours ago

I think Ben Goertzel describes the abilities of LLMs (and this applies to diffusion models too) pretty well at 9:05 - 19:56 in this video.


There is emergent "reasoning" and "creativity" but it isn't of the same kind or extent as a human, yet.

For example, he says at 12:36:

"I've often given in the domain of music modeling is if you train an LLM on music up to the year 1900 only, it's not going to invent grindcore, neoclassical metal or progressive jazz. I mean, it may do cool, creative things. It'll make up new songs that you never heard before if you ask it to put West African drumming together with Western classical music. It may even manage to, like, set Mozart to a polyrhythm or something, right? But to get to jazz and metal and world music and all, this is just a number of creative leaps that I really don't think anything like a current LLM is going to be able to take.

And certainly with experimentation with Suno or any existing music model, you can see the practical limitations, right? Initially it's really cool: you can pick an artist and make an arbitrary number of songs in the style of that artist. It gets their voice right. You can make a competent metal guitar solo. On the other hand, it's all banal music in the end. You're not getting anything really awesome out of it, even within the defined genres that it knows, let alone inventing some new genre of music or some profoundly new style, right?

So there clearly are limitations which are more severe than the limitations we have, but it's not quite clear how to formalize or quantify those limitations right now.

Yeah, that's a great example. I saw a vision generator model where they prompted it to generate images for 1956, 1957, 1958, and you could just see the styles morph across the different years. And of course, when it went past 2024 it just ran out of distribution, because there was no training data, so it kind of mode-collapsed. But interestingly, if you went forward enough, to about 2070, you started seeing Star Trek uniforms and so on. But you know, my intuition, though, is: if these models are learning very abstract representations of the world, then why wouldn't they extrapolate? I don't think they are.

Oh. That's interesting. Why not?

I don't think they're learning very abstract representations of the world, just from looking inside of what they're doing. I don't see how they could be. And when you try to use them to derive mathematical proofs, which I've played with quite a lot because my original background is a PhD in math, they mix things up in very silly, basic ways that lead you to think that if they're building an abstract representation, it's not the right one. It doesn't represent the actual mathematical structures. And in the case of math there sort of is a correct abstract representation, and they're not getting it. In many cases you can give a proof sketch and it will fill in the details of your sketch, which is interesting. You can even give a verbal outline of a theorem, and it will turn that into a formal logic theorem statement. So it can do quite a lot of things, but then it will mix things up in very, very silly ways which no graduate student would ever do. And so it seems, from that example, that the abstractions it's learning are really not the ones a human mathematician would use.

And that's probably connected with the fact that in the automated theorem proving world, which we had represented here by Josef Urban from the Czech Technical University, using LLMs to do the theorem proving is not what they're doing, right? That's not what Google did with AlphaGeometry either. They used the LLM to translate Math Olympiad problems and such into formal logic, and then used a different sort of AI system to do the actual math, right?

So I think music is a domain where it's clear that creativity is limited, and it feels like the internal representation is not quite the right one to be profoundly creative; but math is a bit more rigorous, so when you see the sorts of errors it makes, it's really quite clear that it's not getting the abstraction right, even when it can spit out the definition. And this is the frustrating thing we've all seen: it will spit out the precise definition of, say, a non-well-founded set based on Aczel's anti-foundation axiom, but then when you ask it to derive a consequence of that, it'll be good 70% of the time and, 30% of the time, come up with gibberish that, if you understood the definition you just cited, you could never come up with."



Too many studios are owned by big corporations, so the man who owns some of the biggest corporations ever is going to own them instead.

... He's an idiot, isn't he?



Hmm, pie.

The Fury said:

Too many studios are owned by big corporations, so the man who owns some of the biggest corporations ever is going to own them instead.

... He's an idiot, isn't he?

You're lucky Musk isn't here! He'll be calling you a pedo too!



Ryuu96 said:
Mnementh said:

But for that I think it is also essential to treat AIs as equals. There is much talk about AGI in regard to threats to humanity or technological feasibility. But we have to consider that a true AI is also basically a person. We need to talk about the moral implications; that is a conversation I don't see happening yet.

I'm fairly certain investors will get bored and move on to the next thing before we come close to "AGI", if it's even possible. It happens all the time: companies hype up the "new amazing thing", investors pump in money, it doesn't change the world, and they move on to hyping up the next "new amazing thing". I've said before that I think AI will provide some good and some very bad things, and thus needs heavy regulation, but it won't change the world the way corporations are hyping it, and there's a decent chance it will eventually "crash" and hurt a number of companies; Microsoft, Meta, Google, Nvidia, etc.

These corps are hyping up AI like a space idiot saying we'll send a man to Pluto in the next 20 years, only to merely reach the Moon. That's still an achievement, but it's a far cry from the promise made. These corporations are hyping AI to ridiculous levels, pumping in billions upon billions (and destroying all their climate pledges in the process, but that's another issue) and telling investors, "Don't worry, just another $30bn, AI will make us filthy rich eventually." At some point investors are going to get bored, as they always do.

So we're at the point now where LLMs (e.g. Claude 3.5 Sonnet) are able to control desktops in virtual environments and perform basic office tasks.

For example, I work as an MLE/Data Scientist in the healthcare industry. The company I work for has already saved costs by incorporating LLMs into its IVR systems and is now able to expand its contracts because of this. We (I work on an analytics/reporting team) use LLMs to help parse unstructured data and to help with provider enrollment form processing. LLMs (combined with strict rules engines that reduce hallucinations to <1%) are also helping expedite prior authorizations.
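
The pattern is simple enough to sketch. Below is a minimal illustration of the "LLM extraction + rules engine" idea, not our actual pipeline; the prompt, field names, and validation rules are made up for illustration, and it assumes the Anthropic Python SDK:

```python
# Minimal sketch of the "LLM extraction + rules engine" pattern: the model
# pulls fields out of free text, then a deterministic validator rejects
# anything that fails hard checks. Prompt, fields, and rules are illustrative.
import json
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def extract_enrollment_fields(raw_text: str) -> dict:
    """Ask the model for structured fields from an unstructured form."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": "Return ONLY a JSON object with keys 'npi', "
                       "'provider_name' and 'specialty', extracted from:\n"
                       + raw_text,
        }],
    )
    return json.loads(response.content[0].text)

def passes_rules(fields: dict) -> bool:
    """Deterministic checks that catch hallucinated output before it's used."""
    npi = str(fields.get("npi", ""))
    return (
        len(npi) == 10 and npi.isdigit()  # NPIs are always 10-digit numbers
        and bool(fields.get("provider_name"))
        and str(fields.get("specialty", "")).lower()
            in {"cardiology", "oncology", "primary care"}
    )

fields = extract_enrollment_fields("Dr. Jane Doe, NPI 1234567893, cardiology ...")
if not passes_rules(fields):
    raise ValueError("LLM output failed validation; route to a human reviewer")
```

The point is that the model never gets the final say: anything that fails the deterministic checks gets routed to a human instead of flowing downstream.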

A peer team of ours that builds RPA (Robotic Process Automation) processes is already experimenting with Claude's desktop control for light automations. Basically, you can ask Claude to do things in a virtual desktop environment, and even though it is slower than a human, it -- unlike a human -- doesn't sleep or need breaks, so many tasks can be automated through the virtual desktop. This is likely something that will be scaled up in a year or so.
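
Roughly, that desktop-control loop looks like the sketch below. It assumes Anthropic's computer-use beta (tool type and beta flag as of late 2024), and execute_in_vm is a hypothetical stub standing in for whatever actually drives the virtual desktop (clicks, keystrokes, screenshots):

```python
# Rough shape of the desktop-control loop: the model proposes GUI actions,
# your code executes them in the VM and feeds screenshots back until the
# model stops requesting actions. Identifiers per the late-2024 beta.
import anthropic

def execute_in_vm(action: dict) -> str:
    """Perform the requested GUI action; return a base64 PNG screenshot."""
    raise NotImplementedError("wire this to your VNC/virtual-desktop driver")

client = anthropic.Anthropic()
tools = [{
    "type": "computer_20241022",
    "name": "computer",
    "display_width_px": 1024,
    "display_height_px": 768,
}]
messages = [{"role": "user", "content": "Open the spreadsheet and export it as CSV."}]

while True:
    response = client.beta.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        tools=tools,
        messages=messages,
        betas=["computer-use-2024-10-22"],
    )
    tool_uses = [b for b in response.content if b.type == "tool_use"]
    if not tool_uses:
        break  # no more actions requested; the task is done (or abandoned)
    messages.append({"role": "assistant", "content": response.content})
    results = [{
        "type": "tool_result",
        "tool_use_id": block.id,
        "content": [{
            "type": "image",
            "source": {"type": "base64", "media_type": "image/png",
                       "data": execute_in_vm(block.input)},
        }],
    } for block in tool_uses]
    messages.append({"role": "user", "content": results})
```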

AWS packages a set of LLM (and classical ML) tools in a service called "Amazon Transcribe", which our company uses to automate insights and analytics pipelines over unstructured text data. We use that data for some of our reports. It's also being piloted for transcription services internal to the company in general.
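
For the transcription side, submitting a job through boto3 looks roughly like this; the job name and S3 URI are placeholders, not anything from our actual setup:

```python
# Minimal Amazon Transcribe example: submit an audio file from S3,
# poll until the job finishes, then print the transcript's location.
import time
import boto3

transcribe = boto3.client("transcribe")

transcribe.start_transcription_job(
    TranscriptionJobName="example-call-001",                      # placeholder
    Media={"MediaFileUri": "s3://example-bucket/calls/call.wav"}, # placeholder
    MediaFormat="wav",
    LanguageCode="en-US",
)

while True:
    job = transcribe.get_transcription_job(TranscriptionJobName="example-call-001")
    status = job["TranscriptionJob"]["TranscriptionJobStatus"]
    if status in ("COMPLETED", "FAILED"):
        break
    time.sleep(10)  # jobs are asynchronous; poll until done

if status == "COMPLETED":
    # URL of a JSON file with the transcript text and word-level timings
    print(job["TranscriptionJob"]["Transcript"]["TranscriptFileUri"])
```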

Reporting productivity has increased by about 20% with the use of coding assistants; the same goes for classical ML model production for prediction and classification tasks.

This is with the current "dumb" models in a famously slow, highly regulated industry. Everything is just going to get smarter from here on out. 

The hope that AI research isn't going to have a big effect on industry, and that it is the next cryptocurrency or metaverse, is understandable given what the alternative means, but let's not be blind here. We're looking at something that will probably be at least as impactful as the internet and big-data paradigm shifts were. Both had their bear markets and slowdowns, but they still fundamentally changed the global economy and society, sometimes for the better and sometimes for the worse. Even if AI has its dot-com-bubble moment (which it probably will), it is going to keep progressing at a rapid rate, just like the internet did after 2000.



sc94597 said:
*snip*

In fairness, I said I was middle of the road on it, not that it would be the next cryptocurrency or metaverse, two utterly irrelevant things that benefit next to nobody, lmao. AI has at least got off the ground; I can't really say the same for either of those two, which had maybe a year of hype before crashing down. Of course, now that Musk is in government, there'll probably be another push to make cryptocurrency relevant.

I'm not convinced it will change the world like the tech companies are hyping, and I'm not convinced we're anywhere close to things like AGI. I just think it will bring some good and some bad changes. The medical field is one area where I think AI will have a lot of beneficial uses; it's largely the tech industry, like OpenAI, Microsoft, Meta, etc., whom I'm accusing of massively overhyping how AI is going to change the world and everyone's lives. It's just the opposite extreme from those who say AI is going to kill us all and have watched too much Terminator, lol.

I'm also unconvinced that investors will stick with it long enough; they're already growing a tad frustrated at the amount of money being pumped into AI initiatives without much return. When it comes to AGI... I'm really unconvinced. And then there are people like Sam Altman, who I get a really bad vibe from; there's something about him that screams "scam artist" to me, lol. I'm not sure why Microsoft tried so hard to save him.

Last edited by Ryuu96 - 2 hours ago

IcaroRibeiro said:

I'm sure it will succeed

Gamers turned gaming into something very political; since they agree with Elon's political views, they will support Elon's game

Developers turned gaming political; gamers are providing the equal and opposite reaction.

Games are expensive and require active participation. Political agreement isn't enough to make people buy and play a bad game. We literally just watched Concord crash and burn: the target audience might have "supported" it in articles and social media posts, but they also kept their money in their pockets.



Ryuu96 said:

*snip*

Sorry, I didn't mean to put words in your mouth. I was talking more generally, as I've seen a lot of people who are skeptical of the impact of this research compare it to cryptocurrencies and the metaverse.

I think that, at the very minimum, regardless of whether human-level autonomous agents are created in the next half-decade or so (although I think they will be), even the technologies that already exist are going to cause a lot of economic dislocation. That alone is a "world-changing" impact. We are very close to a huge number of office and call-center jobs being fully automated. These are median-income jobs that many economies (and, more importantly, families) depend on.

Even without AGI, these technologies will probably be world-changing in that they are already affecting headcounts and potentially causing more un(der)employment.



Leynos said:
LegitHyperbole said:

He wants to make games great again... with AI.

My God, it's difficult to simp for this man sometimes.

Anyone simping over that idiot is also an idiot. Never simp for a Nazi.

He's a Nazi...lmao? When did this happen?