
Forums - General - Majority of CEOs report AI brings no financial benefits


Otter said:

Interesting to hear but we all know the end result is that it will increase productivity. I don't think anyone that's used it for research, maths, coding etc doubts this. It is inevitably the future but it's not perfect.

As a software developer, I don't really think it increases my productivity, and I'm not sure the current AI models we have can be fixed. The issue is that inline suggestions are trash too often, so you need to review every single one of them, which slows you down and disrupts any flow you might have going. For smaller suggestions it's probably fine, but the efficiency benefit is also extremely marginal, because it's effectively acting as a glorified typing assistant (as someone online aptly remarked). For actual prompts, the results could be better, and reliability is still trash. AI can't even get tests right, and if you prompt it to fix them, you might still be off after several rounds of corrections... Of course, none of this is helped by AI's habit of disregarding the existing practices in the codebase.
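As a hypothetical illustration (function and values invented, not from the post) of the kind of subtly wrong test that forces manual review: an AI-suggested assertion that compares the function's output against the function itself, so it can never fail, even though the implementation is buggy.

```typescript
// Hypothetical example: a discount function with a deliberate off-by-one bug.
function discountedPrice(price: number, percent: number): number {
  return price - price * (percent / 100) - 1; // bug: stray "- 1"
}

// A plausible AI-suggested test: it asserts the output equals itself,
// which is tautologically true and hides the bug.
const result = discountedPrice(100, 10);
console.assert(result === discountedPrice(100, 10), "matches itself"); // always passes

// The test a reviewer actually wants: assert the intended value.
// This one fails (logs an assertion error), exposing the stray "- 1".
console.assert(discountedPrice(100, 10) === 90, "10% off 100 should be 90");
```

The tautological test is the kind of thing that looks fine at a glance, which is exactly why each suggestion needs a careful read.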

I guess the bottom line is that AI is unreliable, so its output needs to be carefully reviewed all the time, and it needs manual intervention quite often even when it produces mostly useful results. I think it's just a pain to work with in general. It could be great with more reliable models, but we're not there, and I'm not sure LLMs can ever really get there. I'm sure AI will end up being helpful, but the real question is whether an entirely new kind of model is needed for that to happen, and if so, when it will happen.

I guess my experience is also affected by our product being something that must be as reliable as possible. If I were doing something less critical, where stupid mistakes can be accepted from time to time or even often, AI could be more useful, but that's not really the case.

On the other hand, AI can be incredibly helpful for learning new things. Unreliability is not an issue, because you're learning and kinda need to verify the things you've learned somehow anyway. Sometimes it's not more useful than an online search, but especially with search results being increasingly plagued by low-quality, search-engine-optimized content that never gets straight to the point, AI does seem like an attractive option.



Zkuq said:

On the other hand, AI can be incredibly helpful for learning new things. Unreliability is not an issue, because you're learning and kinda need to verify the things you've learned somehow anyway. Sometimes it's not more useful than an online search, but especially with search results being increasingly plagued by low-quality, search-engine-optimized content that never gets straight to the point, AI does seem like an attractive option.

You can easily 'learn' the wrong things with AI :/
And AI-aggregated search results still need to be checked.

I ran into this yesterday trying to find out the brightness of the PSVR1.



That's the measurement for the PSVR2, not the 1, based on a single Reddit post:
https://www.reddit.com/r/PSVR/comments/zqdd66/psvr_peak_brightness_nits_or_cdm2/

AI states it as fact, while the article referenced in the post clearly says PSVR2.


Maybe Google AI is just shit but I often find it blatantly stating falsehoods. 

Learning from a 'textbook' full of errors doesn't seem like a good way to learn new things. At least for now the sources are still shown.

Google AI doesn't learn either. I clicked on dive deeper into AI mode, told it it was wrong, that it was for the PSVR2; it agreed and then said it's likely similar to the 100 nits of the Quest 2, which is correct. Yet when I go back to the normal search, it still states it as fact for the PSVR1.

It's just a harmless example, but I'm not impressed. Can't trust AI.



SvennoJ said:
Zkuq said:

On the other hand, AI can be incredibly helpful for learning new things. Unreliability is not an issue, because you're learning and kinda need to verify the things you've learned somehow anyway. Sometimes it's not more useful than an online search, but especially with search results being increasingly plagued by low-quality, search-engine-optimized content that never gets straight to the point, AI does seem like an attractive option.

You can easily 'learn' the wrong things with AI :/
And AI-aggregated search results still need to be checked.

I ran into this yesterday trying to find out the brightness of the PSVR1.



That's the measurement for the PSVR2, not the 1, based on a single Reddit post:
https://www.reddit.com/r/PSVR/comments/zqdd66/psvr_peak_brightness_nits_or_cdm2/

AI states it as fact, while the article referenced in the post clearly says PSVR2.


Maybe Google AI is just shit but I often find it blatantly stating falsehoods. 

Learning from a 'textbook' full of errors doesn't seem like a good way to learn new things. At least for now the sources are still shown.

Google AI doesn't learn either. I clicked on dive deeper into AI mode, told it it was wrong, that it was for the PSVR2; it agreed and then said it's likely similar to the 100 nits of the Quest 2, which is correct. Yet when I go back to the normal search, it still states it as fact for the PSVR1.

It's just a harmless example, but I'm not impressed. Can't trust AI.

Ah. I absolutely don't trust AI results when I'm searching for stuff, precisely because they're so unreliable, and I don't think anyone should, at least if it's anything even remotely important. I think AI responses among search results are very, very irresponsible with the current state of AI.

Like the rest of my post, I meant that in the software development context, where you might ask some fairly specific questions to figure out how to do something you're supposed to do. In those cases, you often have the skills to immediately evaluate how likely the answer is to work, and if you make a mistake, you'll figure it out soon enough. Also, I personally often verify the more detailed claims AI makes, even in programming-related answers, because those are probably the most likely to cause subtle issues.

In regard to software development, there are also some things that are quite tricky to search for, especially if you don't yet know the name of the thing, but AI is often able to give really good answers. For example, if you see something like "x = y ? 40 : z", those symbols used to make it hard to search for. I don't think they do anymore, but regardless, AI is really good at answering questions that are hard to put into a search query in the first place.
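For anyone who hasn't seen that syntax before, here's a minimal sketch of what the expression does (the values of y and z are invented for illustration): "? :" is the conditional (ternary) operator, and its punctuation is exactly the kind of thing search engines historically ignored.

```typescript
// "? :" is the conditional (ternary) operator: it picks one of two
// values depending on a condition.
const y = true; // invented condition
const z = 7;    // invented fallback value

// Equivalent to: let x; if (y) { x = 40; } else { x = z; }
const x = y ? 40 : z;

console.log(x); // 40, because y is true
```

Knowing it's called the "conditional operator" or "ternary operator" makes it trivially searchable, but that name is precisely what a newcomer doesn't have yet, which is where asking an AI shines.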