
Yes, I support (some) AI use in gaming

I have always liked the idea of procedurally generated infinite worlds. That's not far off from AI.
Though I'd rather have a game where generation follows a fixed algorithm, so that planet #286379 has that valuable material for every player.
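That determinism is just seeded generation: derive each planet from a hash of a global seed plus the planet's index, and every player's planet #286379 comes out identical. A minimal Python sketch of the idea (the seed and property names are made up for illustration):

```python
import hashlib
import random

WORLD_SEED = "galaxy-alpha"  # hypothetical fixed seed shared by all players

def generate_planet(planet_id: int) -> dict:
    """Derive a planet's properties deterministically from (seed, id)."""
    # Hash the seed and planet id so every client gets the same RNG stream.
    digest = hashlib.sha256(f"{WORLD_SEED}:{planet_id}".encode()).digest()
    rng = random.Random(int.from_bytes(digest[:8], "big"))
    return {
        "id": planet_id,
        "radius_km": rng.randint(2_000, 12_000),
        "has_valuable_material": rng.random() < 0.05,
    }

# Planet #286379 is identical for every player, with no server sync needed.
print(generate_planet(286379))
```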
And I do prefer all dialogue to be human-written, even if NPCs can say only one line. On the other hand, if you have AI giving every minor NPC a different voice, that's cool.

But I'd like what has traditionally been called "enemy AI" to develop. Like, the player often waits for an enemy to step around a corner to shoot them? The AI learns this and gets careful, starts taking a quick peek first. And so on.
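The simplest version of that is just the enemy keeping stats on how it dies and shifting its behaviour weights accordingly. A toy sketch in Python (all names invented, nothing from any real engine):

```python
import random

class Enemy:
    """Toy adaptive enemy that learns to peek corners it keeps dying at."""

    def __init__(self):
        self.corner_deaths = 0  # times killed while rounding a corner
        self.corner_total = 0   # times it rounded a corner at all

    def record_corner(self, died: bool):
        self.corner_total += 1
        self.corner_deaths += died  # bool counts as 0 or 1

    def approach_corner(self) -> str:
        # The more lethal corners have been, the more likely it peeks first.
        death_rate = self.corner_deaths / max(self.corner_total, 1)
        return "quick_peek" if random.random() < death_rate else "walk_around"

enemy = Enemy()
for _ in range(5):
    enemy.record_corner(died=True)  # the player keeps corner-camping
print(enemy.approach_corner())      # now almost certainly "quick_peek"
```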



IcaroRibeiro said:
ConciousMan said:

Ok, I don't know what kind of experience you have, but since when is software development just coding? The far more important thing for me is making things fixable. Does Claude Code make you understand the code more, or less? For me, coding is becoming less important; making things scalable and secure is what brings in the money, IMO.

Claude writes significantly more readable code than humans do, and it even comes with pre-generated docstrings and comments, which developers HATE to write.

The rest of your post is attacking a strawman. I talked specifically about coding, and then you come at me with architecture, security and system design.

AI can be used to help with those as well, but even if you put an agent in meetings to analyze technical discussions, it can't plan, model and design software by itself. That's why agents still need specifications, documents and human supervision.

For coding itself though? I'm sorry but Gen AI is already ahead of humans

I am not attacking you; I'm pointing out that your claim that AI is better at coding than most software devs isn't factual. That's your opinion, not a fact. AI can create a huge codebase that's very difficult to refactor, and the technical debt will be huge if the majority of code is generated instead of written. Imagine understanding only 80% of your backend or 3D engine code. Good luck in your software role if you keep spouting non-factual statements like this.

Also, good luck trusting Claude when part of their source code has leaked 🤣. Nvidia's recent run of buggy drivers must also be partly down to using AI to write them. While I agree that AI can automate a lot of software development work, it's nowhere near ready to create novel solutions such as Web3 protocols.



ConciousMan said:

I am not attacking you; I'm pointing out that your claim that AI is better at coding than most software devs isn't factual. That's your opinion, not a fact. AI can create a huge codebase that's very difficult to refactor, and the technical debt will be huge if the majority of code is generated instead of written. Imagine understanding only 80% of your backend or 3D engine code. Good luck in your software role if you keep spouting non-factual statements like this.

Also, good luck trusting Claude when part of their source code has leaked 🤣. Nvidia's recent run of buggy drivers must also be partly down to using AI to write them. While I agree that AI can automate a lot of software development work, it's nowhere near ready to create novel solutions such as Web3 protocols.

... this will only happen if you let the AI go full vibe-code on your project instead of having it work from specifications and documents. I'm not advocating for removing engineers from their development roles.

I assure you, if my professional role is ever at risk, it will definitely not be for recognizing that AI writes code better than I do, but for being stubborn and refusing to use it to improve my work.

Of course AI can't create novel solutions; has anybody in this thread stated otherwise?

Thing is, most software development work is simply boilerplate and glue code, and AI will do those quickly and better.

You don't need to trust me; you can research it yourself. This is a well-known subject among software engineering academics; look up "Software Engineering Economics" by Barry Boehm, where he describes this. He estimates the average programmer spends 30% to 60% of their time reading code rather than writing it. So developers were already spending up to 60% of their time reading human code; after AI, they'll spend it reading AI code. That's it.

I work mostly with ML/data, and there things are even more extreme than in traditional software engineering. Actual modeling and data analysis was at best 20% of my job; writing boring pipelines and making graphs to help me understand the data was where 80% of my time went. Now AI can quickly create those data pipelines for me, and I focus on what AI can't do by itself.
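For a sense of scale, this is the kind of routine pipeline being talked about, the sort of thing an assistant can now draft in seconds (file and column names here are invented for the example):

```python
import pandas as pd

# Hypothetical input: raw sales events, one row per transaction.
df = pd.read_csv("sales_events.csv", parse_dates=["timestamp"])

# Routine cleaning: drop incomplete rows, normalize a category column.
df = df.dropna(subset=["amount", "region"])
df["region"] = df["region"].str.strip().str.lower()

# Aggregate to a monthly total per region.
df["month"] = df["timestamp"].dt.to_period("M").astype(str)
summary = df.groupby(["region", "month"])["amount"].sum().reset_index()

summary.to_csv("monthly_sales_by_region.csv", index=False)
```

None of this is hard; it's just time, which is exactly why it's the part worth automating.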

I work for a research institute, and they have run experiments on how AI helps (or hinders) development. Results are still inconclusive in some areas (design), in others AI seems to be neutral to negative (e.g. front-end development for iOS), while in others it tends to be very positive (data engineering and automation).

Be mindful that those results were based on in-house teams and therefore can't be extrapolated industry-wide. Other researchers will come to their own conclusions in the future.


I hate to say it, but if video games are going to be increasingly made by AI ... my feeling is that I'd be ok in that case with piracy. If you don't want to pay human staff and/or you lay people off, and you want to use generative art trained on other people's work without crediting them, then I think piracy is fair game (and easy to do on the PC platform).

Nintendo might be the rare exception where I'd pay for their games, but that would probably be about it. Other companies ... I don't really care. Again, though, that's under the above-stated case.


Oh, do you? Well, think about all the water wasted and heat released by data centres next time you say you support SOME AI use in game development.



Soundwave said:

I hate to say it, but if video games are going to be increasingly made by AI ... my feeling is that I'd be ok in that case with piracy. 

I'm personally fine with piracy and against intellectual property regardless.

The owners of game IPs and the people laboring to produce content for them are different entities >99% of the time. There are exceptions (Valve is employee-owned, outside of Gabe's majority share, for example), but they are rare.



Koragg said:

AI should only be used to do mundane and repetitive tasks

How about not at all? We've got data centres making heat islands. You really wanna risk heat islands just to eliminate mundane and repetitive tasks?



CaptainExplosion said:

How about not at all? We've got data centres making heat islands. You really wanna risk heat islands just to eliminate mundane and repetitive tasks?

What about tasks that don't need to be farmed out to huge data centres? Apple's CPUs have been including AI processing units since the start of the decade, and several of the more recent models from Intel and AMD include them as well. I think it's clear that the tide is turning against generative AI in a big way, but does that also make it wrong for, say, someone to use on-device AI to save 10-15 minutes in the creation of a Word document or spreadsheet?



OlfinBedwere said:

What about tasks that don't need to be farmed out to huge data centres? Apple's CPUs have been including AI processing units since the start of the decade, and several of the more recent models from Intel and AMD include them as well. I think it's clear that the tide is turning against generative AI in a big way, but does that also make it wrong for, say, someone to use on-device AI to save 10-15 minutes in the creation of a Word document or spreadsheet?

Since when can there be an AI without a data centre?

The only AI I see these days is the kind that wastes water, steals from human artists, makes child porn, blows up Iranian civilians, and makes heat islands. All this is because small-minded billionaires wanted to save even more money, not caring that their golden goose is destroying the world one way or another.



CaptainExplosion said:

Since when can there be an AI without a data centre?

The only AI I see these days is the kind that wastes water, steals from human artists, makes child porn, blows up Iranian civilians, and makes heat islands. All this is because small-minded billionaires wanted to save even more money, not caring that their golden goose is destroying the world one way or another.

Well, from a consumer standpoint, since around 2018 or so, which was when Nvidia first included the technology in their GPUs.

The likes of Google and OpenAI might offer AI products run from the massive data centres you're talking about, but the truth of the matter is that anyone with a PC with a reasonably modern GPU (or a CPU with AI features) and a decent amount of memory can run an AI model on their own computer. It just takes a fair amount of technical know-how to get it running properly; it'll run far slower than the commercially available options, and you're limited to open-source models.
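To make that concrete, "running a model locally" can be as little as this sketch using Hugging Face's open-source transformers library (the model and prompt are just examples; anything bigger than a small model really wants a GPU):

```python
# pip install transformers torch
from transformers import pipeline

# Downloads the weights once, then generation runs entirely on your
# own machine - no data centre involved after that first fetch.
generator = pipeline(
    "text-generation",
    model="gpt2",  # small open model; swap in any local model you like
)

result = generator(
    "Procedural game worlds are interesting because",
    max_new_tokens=40,
)
print(result[0]["generated_text"])
```

This runs on the CPU by default (pass device=0 to use a GPU), which is exactly the "slower but private" trade-off described above.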

Besides, AI is a much broader field than I think you realize. Not only does it include the likes of DLSS and FSR upscaling, but applications like Photoshop and video editing suites use the technology to greatly accelerate tasks that would take far longer on the CPU alone - and, ironically, that actually saves energy, since the CPU spends a lot less time running flat-out than it otherwise would.