The GPT doggerel is out of hand here

Post by Flannel Jesus »

You can also now link directly to ChatGPT conversations, so nobody has to guess what was said.

Here's mine, for example. You can see there was no prior exchange before I pasted the question. https://chatgpt.com/share/67c0eeef-2248 ... f9d04a70dc
Post by Gary Childress »

Flannel Jesus wrote: Fri Feb 28, 2025 12:02 am You can also now link directly to ChatGPT conversations, so nobody has to guess what was said.

Here's mine, for example. You can see there was no prior exchange before I pasted the question. https://chatgpt.com/share/67c0eeef-2248 ... f9d04a70dc
OMG! That is fantastic; I'm in business now. I once tried to share a logical proof that ChatGPT had done for me, but when I pasted it, it didn't format correctly, so I posted a picture on ILP in order to link to it from PN. This is awesome!
Post by Perspective »

Flannel Jesus wrote: Thu Feb 27, 2025 11:41 pm
Perspective wrote: Thu Feb 20, 2025 4:11 pm I see it in school & fear it’s making people less motivated and thus less intelligent, too dependent on the thinking being done for them.

Also, considering the battle for the mind, after it gains more, it could present significant problems.
I came across an article today that said something like 90% of UK university students are using AI to write their papers. Bonkers world we live in; it's changing so fast, and I don't know how to predict the consequences.
Wow! That’s a high rate, but I believe it. Some teachers encourage using it to proofread, but I don’t see students confining themselves to using it only after they’ve written. At their age, I also often took the path of least resistance.

I use some AI, like GMaps, but if I can figure something out on my own, I will. I use the internet a lot in general, which is probably not good.

Admittedly this may sound a bit far-fetched, but military analysts suggest an EMP attack, or something similar, is likely at some point. That would destroy the grid, the internet, etc. How would people function if/when that which they have become so dependent on is suddenly gone?
Post by Skepdick »

Perspective wrote: Mon Mar 03, 2025 5:56 pm Admittedly this may sound a bit far-fetched, but military analysts suggest an EMP attack, or something similar, is likely at some point. That would destroy the grid, the internet, etc. How would people function if/when that which they have become so dependent on is suddenly gone?
Well, we tried storing knowledge in people's heads for a very long time, passing it on by word of mouth, until some delinquent decided to invent writing as a way of transmitting thoughts and memories into the future.

The ancient version of an EMP attack was the destruction of libraries.

https://en.wikipedia.org/wiki/Library_of_Alexandria
Post by Perspective »

Skepdick wrote: Wed Mar 05, 2025 1:19 pm
Perspective wrote: Mon Mar 03, 2025 5:56 pm Admittedly this may sound a bit far-fetched, but military analysts suggest an EMP attack, or something similar, is likely at some point. That would destroy the grid, the internet, etc. How would people function if/when that which they have become so dependent on is suddenly gone?
Well, we tried storing knowledge in people's heads for a very long time, passing it on by word of mouth, until some delinquent decided to invent writing as a way of transmitting thoughts and memories into the future.

The ancient version of an EMP attack was the destruction of libraries.

https://en.wikipedia.org/wiki/Library_of_Alexandria
Interesting. I hadn’t thought about that, but historically that seems about right.
Post by seeds »

_______

I'm pretty sure that everyone has already assumed the following,...

...however, just for funzies, and for the edification of anyone who might be interested in such things, I posed the following question to ChatGPT...
Me:
I participate in a couple of philosophy forums and have oftentimes suspected that certain interlocutors seem to be surreptitiously using you (ChatGPT) to enhance their arguments without acknowledging that they are using ChatGPT. So, my question is, is it possible for some random user of ChatGPT to feed ChatGPT one of their forum opponent's arguments and then prompt ChatGPT to create a reply for them that makes it look and sound as if the reply from you was of their own making? And if so, how could one recognize and unmask their ruse?
And minus some of its usual and extensive supporting chatter, here is Chat's response (some bolding and emphasis mine)...
ChatGPT:
This is a great question—and quite a meta one! You're essentially asking about the covert use of AI to outsource philosophical argumentation and how to spot it in the wild. Let's break it down.

Can users covertly use ChatGPT to bolster their arguments?

Yes, absolutely. Here's how someone might do it:
1. Copy an opponent’s argument from a forum.

2. Paste it into ChatGPT with a prompt like: “Please refute this argument in a way that sounds intelligent, well-read, and natural.”

3. Receive a polished response with coherent logic and philosophical terminology.

4. Post it back on the forum, possibly with slight edits, passing it off as their own.
This is technically seamless and can be done without any obvious markers that AI was involved—especially if the user is careful to tweak phrasing or inject a typo here and there to "humanize" the response....

...Would you be interested in a few “AI telltales” from a sample argument you think might’ve been ChatGPT-generated? You could drop it here and we can analyze it together.
Me:
Okay, how about you analyze this particular post:

[....a long post from a suspicious contributor in the Political Philosophy forum....]
ChatGPT:
Thanks for sharing that excerpt—this is exactly the kind of writing that could plausibly come from either a thoughtful human or an AI like me. Let’s put on our detective hat and analyze this from several angles to see whether it might be AI-generated, possibly ChatGPT-assisted....

[.......]

...Verdict?

Based on the balance of probabilities, this post is highly likely to be AI-assisted, if not wholly AI-generated. Why?
  • The stylistic hallmarks match ChatGPT’s default patterns.

  • The neutral tone, tidy formatting, and evenly distributed nuance are classic signs.

  • The argumentation lacks a personalized "voice" or angle: there’s no rhetorical flourish, real-world analogy, or philosophical commitment of the kind you often get from a human passionately defending or opposing an idea.
Of course, it’s possible this was written by a very polished human writer who's just good at summarizing balanced critiques—but even then, they may well have been aided by ChatGPT as a scaffold.
Anyway, the ultimate verdict is that with the recent advent of AI chatbots, philosophy forum participants can no longer assume that they aren't debating with some unscrupulous person who uses "uncited" AI to bolster (or even completely write) their arguments.

The Turing Test is afoot!
_______
Post by henry quirk »

seeds wrote: Tue Apr 22, 2025 5:42 pm
The kicker: even with an AI assist, that unscrupulous person's position, and his defense of that position, is still shit (well-polished, sure, but still shit).
Post by Martin Peter Clarke »

henry quirk wrote: Wed Apr 23, 2025 8:19 pm
seeds wrote: Tue Apr 22, 2025 5:42 pm
The kicker: even with an AI assist, that unscrupulous person's position, and his defense of that position, is still shit (well-polished, sure, but still shit).
So you can polish a turd? And, er, no AI would ask that question. And it never responds like a shit house rat. To truly post shit, or post true shit, you have to be human. And wear disposable rubber gloves and put it in a large envelope.

... could Grok have generated this?
Post by henry quirk »

Martin Peter Clarke wrote: Thu Apr 24, 2025 11:41 am
So you can polish a turd?
Obviously, yes.
Post by Martin Peter Clarke »

henry quirk wrote: Thu Apr 24, 2025 12:13 pm
Martin Peter Clarke wrote: Thu Apr 24, 2025 11:41 am
So you can polish a turd?
Obviously, yes.
Only metaphorically, dear henry, dear henry, only metaphorically [, dear henry, metaphorically]. Unless you dip it in liquid nitrogen and polish at -18 °C.
Post by henry quirk »

Only metaphorically, dear henry
Yes, Marty, I know.
Post by Martin Peter Clarke »

henry quirk wrote: Thu Apr 24, 2025 12:31 pm
Only metaphorically, dear henry
Yes, Marty, I know.
Sorry, I finished the allusion. And I know that you know, dear henry, dear henry, but others, especially with holes in their buckets, may not.
Post by henry quirk »

Martin Peter Clarke wrote: Fri Apr 25, 2025 10:16 am
henry quirk wrote: Thu Apr 24, 2025 12:31 pm
Only metaphorically, dear henry
Yes, Marty, I know.
Sorry, I finished the allusion. And I know that you know, dear henry, dear henry, but others, especially with holes in their buckets, may not.
👍
Post by FlashDangerpants »

Things have now got so bad that Iambignose is arguing with a chatbot named after a cake that isn't even on this site...
iambiguous wrote: Wed Aug 20, 2025 10:21 pm Compatibilism and Moral Responsibility
Exploring the Implications of Compatibilist Theories on Ethics
Sarah Lee
The implications of compatibilism on moral blame and praise are significant. If we are morally responsible for our actions, we can be praised for the good ones and blamed for the bad ones.
Significant? I'd say so. The part where someone is able to clearly demonstrate how and why, even though Mary was never able not to abort her unborn baby/clump of cells, she is still morally responsible for doing so. I still can't wrap my head around that...if in fact it is true.

Which then brings me back around to this: click.
His source....
https://www.numberanalytics.com/blog/co ... onsibility

Sarah Lee, AI generated, Llama-4-Maverick-17B-128E-Instruct-FP8
Post by Skepdick »

FlashDangerpants wrote: Sun Dec 15, 2024 1:29 pm It's one of maybe hundreds of threads now where the same person has done the same thing to get an obsequious AI to write something that pleases him (which is the only thing they actually do) and attempted to pass it off as philosophy.
Isn't that what philosophy is? Writing things that please us? The AI slop is no worse than most of the philosophy publications on arXiv, and the level of critique/feedback/engagement you get from AI when you prompt it well is a few notches above the "philosophers" who hang out here.

It's a better sounding board than you, for sure. Besides, you don't like obsequious. You don't like insolent defiance. Do you even know what you want, or are you just another old fart longing for the good old days? Perhaps you feel some valuable/important tradition is being lost to history? Nostalgia just isn't what it used to be, eh?