https://www.reddit.com/r/science/s/xzPCZvpM5c
Top comment is absolutely right:

Summary: A new study reveals that when interacting with AI tools like ChatGPT, everyone—regardless of skill level—overestimates their performance. Researchers found that the usual Dunning-Kruger Effect disappears, and instead, AI-literate users show even greater overconfidence in their abilities.
The study suggests that reliance on AI encourages “cognitive offloading,” where users trust the system’s output without reflection or double-checking. Experts say AI literacy alone isn’t enough; people need platforms that foster metacognition and critical thinking to recognize when they might be wrong.
VA seems to have been duped by the AIs' willingness to agree with him and compliment him. He took those comments to heart and made himself look a bit silly in the process.

Might this have something to do with LLMs being sycophantic (the classic "You are absolutely right!" glazing), or perhaps with LLMs just being LLMs and not magic (i.e. prone to "hallucinations" and other issues which will be "fixed soon")?
I do use LLMs occasionally, but only for things where I can easily verify that the output is correct.