I don't even necessarily hate AI; in fact, I think it could be great, but I'm not convinced this is it just yet. The issue is that most AI (or at least the most prominent AI) at the moment is large language models, which aren't really capable of thinking. They yield remarkably good results, but in the end they just spew out whatever is statistically most likely to be the desired outcome, based on the training data. And as for that training data, while I admit there's room to argue semantics, it was dubiously acquired at best.
Based on my limited experience, AI is also pushed way too hard considering its limited capabilities. Where I work as a software developer, it's promoted fairly strongly, but at the same time we have to review everything it generates, because the work we do is important enough that there's no room for the stupid mistakes AI tends to make. And let me tell you, the code it generates heavily disregards our existing coding conventions and seems to just go by its training data instead. Even if you manage to bend it to those conventions, it makes mistakes that look legitimate but that most humans would never make. Even for generating automated tests, it's much worse than you'd expect. Basically, it seems usable only for things you don't care much about and that are fine being merely OK.
The current AI is pretty great at what it's actually good at, though, and in my experience that's learning new things quickly. I wouldn't necessarily trust it with larger tasks, but for individual questions it can be great, especially ones that are hard to search for (syntactical features of programming languages come to mind as prime examples, but they're not the only thing). As long as you're ready to fact-check or otherwise treat AI answers as not entirely trustworthy, and that overhead stays cost-efficient, it can be genuinely useful. I just worry that we won't be able to remedy these flaws, because they're inherent to how current AI works, and that this will lead to the AI bubble bursting.
As for AI gaming applications, I don't have great expectations. I suspect there isn't enough training data for reliable AI assistance in the situations where you'd actually need it, and when I look for help online, I'm not interested in dubious, untrustworthy answers. Gaming Copilot, for example, seems mildly helpful at best, and probably more harmful than the incorrect information you'd find online otherwise, because AI always sounds so confident that you can't properly assess its trustworthiness.