Show me one piece of evidence where what I presented from AI is not objective.

Flannel Jesus wrote: ↑Wed Oct 29, 2025 12:58 pm
I saw this on Reddit today and it made me think of VA and how he uses AI.
https://www.reddit.com/r/science/s/xzPCZvpM5c
Top comment is absolutely right:

Summary: A new study reveals that when interacting with AI tools like ChatGPT, everyone—regardless of skill level—overestimates their performance. Researchers found that the usual Dunning-Kruger Effect disappears, and instead, AI-literate users show even greater overconfidence in their abilities.
The study suggests that reliance on AI encourages “cognitive offloading,” where users trust the system’s output without reflection or double-checking. Experts say AI literacy alone isn’t enough; people need platforms that foster metacognition and critical thinking to recognize when they might be wrong.
VA seems to have been duped by the AIs' over-willingness to agree with him and compliment him. He took those comments to heart and made himself look a bit silly in the process.

Might this have something to do with LLMs being sycophantic (the classic "You are absolutely right!" glazing), or perhaps with LLMs just being LLMs and not magic (i.e. prone to "hallucinations" and other issues which will be "fixed soon")?
I do use LLMs occasionally but only for things where I can easily verify that the LLM is correct.
I have always asked those who oppose me to present 'MY' AI's answers to 'their' AI to counter, or to counter them themselves, but most do not; when they do counter, there is always a rational rebuttal to it.
Whatever I get with the help of AI, I verify that it is objective; otherwise I reject it.
Generally, what is most helpful is that, because my English is not that good, I use AI to present my points in a more polished manner.
I am well aware that, for profit's sake, LLMs are programmed to please users; otherwise users would reject the LLMs and profits would be affected.
The general rule with LLMs is to be a radical skeptic about their answers, and then establish their objectivity. Many a time, I ask the LLM for the criteria of objectivity on which it based its agreement with me. I never accept 'MY' AI's response if it does not agree with my own assessment.
Take people who ask LLMs to rate their intelligence, e.g. Eodnhoj7. The LLM immediately senses what the user is expecting, so it gives a positive, favorable result to please the user and retain him for the sake of profit.
One thing is very critical in using LLMs: the user must have a high competency in his own criteria of objectivity, which should be as near as possible to the scientific criteria of objectivity. Note my many threads on the Framework and System of Objectivity. I have ensured this in my discussions with LLMs.
For example, Eodnhoj7's AI confirmed his thesis with an A+++, but 'MY' AI rated it as delusional.
It is not that 'MY' AI can do that on its own. I have to provide 'MY' AI with all the relevant philosophical principles from Kant, Wittgenstein and others. 'MY' AI then cross-checks the references I provided, checks their reasonableness, and argues from there.
Another example: when I asked AI to rate Trump, it came up with the popular answers, which are propagated by the left and the ignorant. So I had to introduce an objective method, i.e. an Employee Performance Appraisal with effective criteria.
viewtopic.php?t=44966
Even then, the initial criteria from the AI were very inefficient. I had to go through a long discussion with my AI to arrive at the ~final criteria, which are transparent and open for further discussion.
Who has ever done this exercise effectively? It can only be done with a high sense of objectivity. AI is most useful for providing the data and evidence; the objective methodology is mine, not the AI's.
As such, the AI's response is limited by the philosophical competency of the user.
AI's potential is there to be exploited; the gains from AI are proportionate to the user's competency in applying his level of objectivity.