SvennoJ said:
Zkuq said:

On the other hand, AI can be incredibly helpful for learning new things. Unreliability is not an issue, because you're learning and kinda need to verify the things you've learned somehow anyway. Sometimes it's no more useful than an online search, but especially with search results being increasingly plagued by low-quality, search-engine-optimized content that never gets straight to the point, AI does seem like an attractive option.

You can easily 'learn' the wrong things with AI :/
And AI aggregated search results still need to be checked.

I ran into this yesterday trying to find out the brightness of PSVR1
That's the measurement for PSVR2, not 1, based on a single Reddit post:
https://www.reddit.com/r/PSVR/comments/zqdd66/psvr_peak_brightness_nits_or_cdm2/

AI states it as fact, while the article referenced in the post clearly says PSVR2.


Maybe Google AI is just shit, but I often find it blatantly stating falsehoods.

Learning from a 'textbook' full of errors doesn't seem like a good way to learn new things. At least for now the sources are still shown.

Google AI doesn't learn either. I clicked on "dive deeper into AI mode", told it it was wrong, that it was for PSVR2, and it agreed, then said it's likely similar to the 100 nits of the Quest 2, which is correct. Yet when I go back to the normal search, it still states it as fact for PSVR1.

It's just a harmless example, but I'm not impressed. Can't trust AI.

Ah. I absolutely don't trust AI results when I'm searching for stuff, precisely because they're so unreliable, and I don't think anyone should, at least if it's anything even remotely important. I think AI responses among search results are very, very irresponsible with the current state of AI.

Like the rest of my post, I meant that in the software development context, where you might ask fairly specific questions to figure out how to do something you're supposed to do. In those cases, you often have the skills to immediately evaluate how likely the answer is to work, and if you make a mistake, you'll figure it out soon enough. Also, personally, I often verify some of the more detailed claims AI makes, even in programming-related answers, because those are probably the most likely to cause subtle issues.

In regard to software development, there are also some things that are quite tricky to search for, especially if you don't yet know the name of the thing, but AI is often able to give really good answers. For example, if you see something like "x = y ? 40 : z", those symbols used to make it hard to search for. I don't think they do anymore, but regardless, AI is really good with things that are hard to search for by name.
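
For anyone who hasn't run into it, that snippet is the conditional ("ternary") operator from C-family languages. A minimal sketch, in TypeScript, with the variable values made up just for illustration:

```typescript
// The conditional ("ternary") operator: condition ? valueIfTrue : valueIfFalse
// It's an expression, so its result can be assigned directly to a variable.
const y: boolean = true;
const z: number = 7;

// Because y is true, x gets 40; if y were false, x would get z (7).
const x: number = y ? 40 : z;

console.log(x); // prints 40
```

Knowing the name "conditional operator" (or "ternary operator") makes it much easier to find the official docs than pasting the symbols into a search box, which is exactly the kind of gap AI tends to bridge well.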