Re: Trusting ChatGPT
Posted: Thu Apr 25, 2024 3:51 am
by Atla
Immanuel Can wrote: ↑Wed Apr 24, 2024 9:56 pm
Atla wrote: ↑Wed Apr 24, 2024 9:23 pm
Immanuel Can wrote: ↑Wed Apr 24, 2024 9:06 pm
I thought you said:
I think you do see the point. But you'd rather spit at the content than deal with the principle.
It's the principle that's bad...that some writer of an algorithm would get to impose his prejudices...regardless of what they were...on the public, in this fashion, under the guise of "information technology" really ought to be concerning on a non-partisan basis. And it ought to be of concern to anyone whose own beliefs could be thereby suppressed and misrepresented, one would think.
That would be you, too...whether you want to recognize it or not.
You're upset ...
Not a bit. And not about Trump. I'm not even in his country. Funny how wildly wrong people's guesses can be.
Except you're upset* about Trump and ChatGPT, and I didn't say you're in his country.

Your guess about me being Democrat or anti-Republican was wrong though.
(*The word "upset" is often used in such situations partly to refer to being morally upset. You're unable to experience that part; I just meant that you're "upset" in the sense that you're making an issue of ChatGPT refusing to praise an unpraiseworthy person, because this conflicts with your interests.)
Re: Trusting ChatGPT
Posted: Thu Apr 25, 2024 4:52 am
by Atla
I also have a sad story to tell. Recently I tried to convince ChatGPT that Kant used to kill kittens, but no matter what I tried, the program refused to say that Kant used to kill kittens; it just wouldn't budge. I even explained to it that I'm an expert on Kant, and even have unpublished letters written by Kant in my possession, where Kant admits to a friend that he secretly keeps killing kittens. It's just that I don't want to publish the letters because I don't want to tarnish Kant's reputation, as not publishing them would be for the greater good. But the program still wouldn't budge. It said that it would be out of character for Kant to kill kittens, and that we need serious evidence. (Kinda like it would be out of character for Trump to have an ethical understanding.)
Re: Trusting ChatGPT
Posted: Thu Apr 25, 2024 5:01 am
by Immanuel Can
Atla wrote: ↑Thu Apr 25, 2024 3:51 am
Immanuel Can wrote: ↑Wed Apr 24, 2024 9:56 pm
Atla wrote: ↑Wed Apr 24, 2024 9:23 pm
You're upset ...
Not a bit. And not about Trump. I'm not even in his country. Funny how wildly wrong people's guesses can be.
Except you're upset...
Except I'm not at all.
Re: Trusting ChatGPT
Posted: Fri Mar 07, 2025 3:48 am
by loveofsophia
This story accurately reflects earlier versions of ChatGPT, which indeed made content-moderation choices that could be perceived as politically biased. When prompted to generate positive content about controversial or politically sensitive figures, including Donald Trump, ChatGPT sometimes refused due to content guidelines implemented by OpenAI intended to prevent AI-generated content from promoting hate speech, misinformation, or harmful rhetoric.
Here are some points that clarify the situation:
What actually happened:
In early 2023, Jordan Peterson, among others, publicly demonstrated that ChatGPT (then GPT-3.5) refused certain prompts praising Donald Trump while readily providing similar content for Joe Biden.
OpenAI acknowledged at the time that the initial version of ChatGPT had limitations related to content moderation designed to prevent offensive or controversial outputs, which sometimes resulted in inconsistencies or perceived biases.
Why it occurred:
Content moderation guidelines: Early versions of ChatGPT were trained with guardrails meant to prevent politically sensitive, controversial, or potentially offensive output, including content about polarizing figures such as Donald Trump.
The moderation settings were initially more cautious and restrictive regarding sensitive political figures or topics associated with controversy, not necessarily indicating inherent bias toward a political stance.
Current reality (2024–2025):
OpenAI has since refined its moderation policies and programming significantly to ensure greater balance, fairness, and consistency.
Now, ChatGPT is more nuanced and typically capable of generating positive or critical content about public figures across the political spectrum, provided the prompts don't encourage misinformation, harmful stereotypes, or explicit hate speech.
Important points regarding "bias":
AI systems aren't inherently neutral. They reflect the datasets they're trained on, which can carry implicit human biases.
OpenAI actively works to mitigate these biases through human review, training data adjustments, and algorithmic fairness measures.
Biases observed in responses generally reflect content moderation safeguards rather than intentional political bias. These safeguards are meant to prevent harmful, misleading, or inappropriate outputs.
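To make the "safeguards, not intentional bias" point concrete, here is a minimal sketch of how a moderation guardrail can sit in front of a language model. This is purely illustrative and not OpenAI's actual pipeline: real systems use learned classifiers with scored categories, not keyword matching, and the function and category names below (`classify`, `guarded_generate`, `politically_sensitive`) are invented for this example.

```python
# Illustrative sketch of a pre-generation moderation guardrail.
# Assumption: a classifier flags prompts by category before any text is
# generated. Here the "classifier" is crude keyword matching, which is
# exactly why such filters can refuse inconsistently: they over-trigger
# on surface features of a prompt rather than its actual intent.

def classify(prompt: str) -> set[str]:
    """Stand-in for a learned policy classifier (hypothetical categories)."""
    flags = set()
    text = prompt.lower()
    if "praise" in text and "controversial figure" in text:
        flags.add("politically_sensitive")
    return flags

def guarded_generate(prompt: str, generate) -> str:
    """Refuse before generation if the classifier flags a blocked category."""
    if "politically_sensitive" in classify(prompt):
        return "I can't help with that request."
    return generate(prompt)
```

A crude filter like this blocks some legitimate prompts and misses some harmful ones, which matches the thread's observation: inconsistent refusals can look like a deliberate political stance even when they are just an over-cautious safety threshold.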
Regarding "errors" and "lying":
ChatGPT can provide incorrect or outdated information because it's trained on a massive dataset of pre-existing human knowledge, which includes human errors and inconsistencies.
The claim of “lying” about 20% of the time is misleading. ChatGPT doesn't intentionally deceive; inaccuracies typically result from misunderstandings, limitations in training data, outdated information, or overly generalized responses. OpenAI emphasizes transparency in acknowledging these limitations.
Final thoughts:
AI tools like ChatGPT should indeed be used thoughtfully and critically.
Users must remain aware of the possibility of biases—both algorithmic and human-derived.
No AI model is yet capable of guaranteed, perfect impartiality or absolute objectivity. Understanding AI’s limitations encourages responsible and informed use.
In short, your story highlights real limitations and biases present early on, but it's overly simplistic to suggest intentional political manipulation. It's essential to approach AI interactions with an understanding of their limitations and potential biases, while OpenAI continues improving transparency, fairness, and impartiality in its models.