Ryuu96 said:
I'm fairly certain investors will get bored and move on to the next thing before we come close to "AGI", if it's even possible. It happens all the time: companies hype up some "new amazing thing", investors pump in money, it doesn't change the world, and they move on to hyping the next "new amazing thing". I've said before that I think AI will provide some good and some very bad things and thus needs heavy regulation, but it won't change the world like corporations are hyping it up to, and there's a decent chance it will eventually "crash", which is going to hurt a number of companies: Microsoft, Meta, Google, Nvidia, etc.

These corps are hyping up AI like a space idiot saying we'll send a man to Pluto in the next 20 years, only for the space idiot to reach the Moon. That's still an achievement, but it's a far cry from the promise made. They're pumping in billions upon billions (and destroying all their climate pledges in the process, but that's another issue) and telling investors "Don't worry, just another $30bn, AI will make us filthy rich eventually", and at some point investors are going to get bored, as they always do.

Microsoft, Meta to Feel AI Scrutiny as Investors Wait for Payoff

Of course there'll be some good uses and some bad uses; some promises will be kept, a lot will be broken, and a lot of companies will be taken down in the process, as with every advancement. And of course AI is a lot more than just AGI/LLMs and has many beneficial uses outside of those things. I'm pretty much in the middle: it won't change the world like the corps are hyping, and it won't doom humanity like some are saying. It'll be a fairly middle-of-the-road change. A fairly boring stance, but it feels like everyone goes to one extreme or the other when it comes to AI.
At this point "free" models (the "free" is relative, but that's for another discussion and unimportant to this one) are judged to be only a few months behind commercial ones. Initially this had a lot to do with the leak of the Llama weights, but researchers independent of the AI companies jumped on it, since it gave them a way to do research at a similar level, and they have kept improving at a rapid rate. Private enthusiasts have helped improve the models as well. We are at a point where we may not be able to run a big multilingual model like GPT-4 on a normal PC, but a single-language (e.g. English) model works well, with only small gaps compared to the big ones. This lets independent researchers do their own work on small budgets. And enthusiasts keep finding ways to reduce the computational needs of the models without compromising much of their abilities. Also, all the free efforts build on each other, while the AI companies keep their secrets from one another, impeding their own progress.
Therefore I think the VC money was needed to kick off the LLM breakthrough, but now it could dry up and research would still move forward. And if big money leaves the AI field, we may see much less hype, but all the researchers now working at the AI companies would flow back into open research and mingle with other researchers, combining their know-how. So yeah, with or without VC money, I see the field going forward.
This means that the timeframe for reaching human-level AGI doesn't depend so much on VC money as on the question of whether hard technological ceilings exist that need new ideas to break through, or whether all remaining barriers are soft ones that can be solved by combining known ideas, as sc9 and I discussed.
Ryuu96 said:
Of course there are decent uses for AI in video game development. AI isn't anything new at the end of the day; it has been used for decades already, and it can be beneficial to small indie teams in other ways. It entirely depends on where/what/how it is used, but I seriously doubt all these "worker first" and more altruistic uses of AI are what billionaire twats like Musk have in mind, hence the need for regulation, unions, etc. Billionaires like Musk, Microsoft, and others just want to use AI to fire as many workers as possible and pump out soulless factory-like games. But governments have been absolutely fucking useless at regulating AI so far.
I agree: big companies will always use everything, including AI, to reduce their dependence on human workers and improve their profits. But they already do that without AI.
I can't stop thinking about the big discussion that you shouldn't compare other games to Baldur's Gate 3. While I agree that small indies play in another league, that year also saw the release of Starfield, Diablo 4, Hogwarts Legacy, and Final Fantasy XVI. Clearly Bethesda, Blizzard, Warner, and Square have resources to match Larian's, and these IPs (with the exception of the new Starfield) are big, well-known IPs as well, bigger, I would argue, than Baldur's Gate or even D&D. The reason Larian could keep up with, or, as most GOTY awards concluded, exceed these other products comes down not to resources but to creative freedom. And these differences existed without AI. This is a big-company problem, not so much an AI problem. AI is just another tool which turns to shit if touched by big corpos.
And the European Union isn't doing that badly on moving towards regulation; most of the discussions seem to be going in a good direction. It's just slow.