The GPT doggerel is out of hand here
-
Flannel Jesus
- Posts: 4302
- Joined: Mon Mar 28, 2022 7:09 pm
Re: The GPT doggerel is out of hand here
You can also now link directly to ChatGPT conversations so nobody has to guess what was said.
Here's mine, for example. You can see there was no prior exchange before I pasted the question. https://chatgpt.com/share/67c0eeef-2248 ... f9d04a70dc
-
Gary Childress
- Posts: 11744
- Joined: Sun Sep 25, 2011 3:08 pm
- Location: It's my fault
Re: The GPT doggerel is out of hand here
OMG! That is fantastic. I'm in business now. I was trying to show a logical proof one time that ChatGPT had done for me, but when I tried to paste it, it didn't format correctly, so I posted a picture on ILP in order to link to it on PN. This is awesome!

Flannel Jesus wrote: ↑Fri Feb 28, 2025 12:02 am
You can also now link directly to ChatGPT conversations so nobody has to guess what was said.
Here's mine, for example. You can see there was no prior exchange before I pasted the question. https://chatgpt.com/share/67c0eeef-2248 ... f9d04a70dc
-
Perspective
- Posts: 94
- Joined: Mon Jul 22, 2013 4:50 pm
Re: The GPT doggerel is out of hand here
Wow! That’s a high rate, but I believe it. Some teachers encourage using it to proofread, but I don’t see how students would use it only after they wrote. At their age, I also often took the path of least resistance.

Flannel Jesus wrote: ↑Thu Feb 27, 2025 11:41 pm
I came across an article today that said something like 90% of UK university students are using AI to write their papers. Bonkers world we live in, it's changing so fast, and I don't know how to predict the consequences.

Perspective wrote: ↑Thu Feb 20, 2025 4:11 pm
I see it in school & fear it’s making people less motivated and thus less intelligent, too dependent on the thinking being done for them.
Also, considering the battle for the mind, as AI gains more ground, it could present significant problems.
I use some AI, like GMaps. But if I can figure it out on my own, I will. I use the internet a lot in general - probably not good.
Admittedly this may seem a bit alarmist, but military analysts suggest an EMP attack, or something similar, is likely at some point. That would destroy the grid, the internet, etc. How would people function if/when that which they have become so dependent on is suddenly gone?
-
Skepdick
Re: The GPT doggerel is out of hand here
Well, we tried storing knowledge in people's heads for a very long time, passing it by word of mouth, until some delinquent decided to invent writing as a way of transmitting thoughts and memories into the future.

Perspective wrote: ↑Mon Mar 03, 2025 5:56 pm
Admittedly this may seem a bit alarmist, but military analysts suggest an EMP attack, or something similar, is likely at some point. That would destroy the grid, the internet, etc. How would people function if/when that which they have become so dependent on is suddenly gone?
The ancient version of an EMP attack was the destruction of libraries.
https://en.wikipedia.org/wiki/Library_of_Alexandria
-
Perspective
- Posts: 94
- Joined: Mon Jul 22, 2013 4:50 pm
Re: The GPT doggerel is out of hand here
Interesting. Hadn’t thought about that, but historically, that seems about right.

Skepdick wrote: ↑Wed Mar 05, 2025 1:19 pm
Well, we tried storing knowledge in people's heads for a very long time, passing it by word of mouth, until some delinquent decided to invent writing as a way of transmitting thoughts and memories into the future.

Perspective wrote: ↑Mon Mar 03, 2025 5:56 pm
Admittedly this may seem a bit alarmist, but military analysts suggest an EMP attack, or something similar, is likely at some point. That would destroy the grid, the internet, etc. How would people function if/when that which they have become so dependent on is suddenly gone?

The ancient version of an EMP attack was the destruction of libraries.
https://en.wikipedia.org/wiki/Library_of_Alexandria
Re: The GPT doggerel is out of hand here
_______
I'm pretty sure that everyone has already assumed the following,...
...however, just for funzies, and for the edification of anyone who might be interested in such things, I posed the following question to ChatGPT...
And minus some of its usual and extensive supporting chatter, here is Chat's response (some bolding and emphasis mine)...

Me:
I participate in a couple of philosophy forums and have oftentimes suspected that certain interlocutors seem to be surreptitiously using you (ChatGPT) to enhance their arguments without acknowledging that they are using ChatGPT. So, my question is, is it possible for some random user of ChatGPT to feed ChatGPT one of their forum opponent's arguments and then prompt ChatGPT to create a reply for them that makes it look and sound as if the reply from you was of their own making? And if so, how could one recognize and unmask their ruse?
ChatGPT:
This is a great question—and quite a meta one! You're essentially asking about the covert use of AI to outsource philosophical argumentation and how to spot it in the wild. Let's break it down.
Can users covertly use ChatGPT to bolster their arguments?
Yes, absolutely. Here's how someone might do it:
1. Copy an opponent’s argument from a forum.
2. Paste it into ChatGPT with a prompt like: “Please refute this argument in a way that sounds intelligent, well-read, and natural.”
3. Receive a polished response with coherent logic and philosophical terminology.
4. Post it back on the forum, possibly with slight edits, passing it off as their own.

This is technically seamless and can be done without any obvious markers that AI was involved—especially if the user is careful to tweak phrasing or inject a typo here and there to "humanize" the response....
...Would you be interested in a few “AI telltales” from a sample argument you think might’ve been ChatGPT-generated? You could drop it here and we can analyze it together.
Me:
Okay, how about you analyze this particular post:
[....a long post from a suspicious contributor in the Political Philosophy forum....]
Anyway, the ultimate verdict is that with the recent advent of AI chatbots, philosophy forum participants can no longer assume that they aren't debating with some unscrupulous person who uses "uncited" AI to bolster (or even completely write) their arguments.

ChatGPT:
Thanks for sharing that excerpt—this is exactly the kind of writing that could plausibly come from either a thoughtful human or an AI like me. Let’s put on our detective hat and analyze this from several angles to see whether it might be AI-generated, possibly ChatGPT-assisted....
[.......]
...Verdict?
Based on the balance of probabilities, this post is highly likely to be AI-assisted, if not wholly AI-generated. Why?
- The stylistic hallmarks match ChatGPT’s default patterns. The neutral tone, tidy formatting, and evenly-distributed nuance are classic signs.
- The argumentation lacks a personalized "voice" or angle—there’s no rhetorical flourish, real-world analogy, or philosophical commitment that you often get from a human passionately defending or opposing an idea.

Of course, it’s possible this was written by a very polished human writer who's just good at summarizing balanced critiques—but even then, they may well have been aided by ChatGPT as a scaffold.
The Turing Test is afoot!
_______
- henry quirk
- Posts: 16379
- Joined: Fri May 09, 2008 8:07 pm
- Location: 🔥AMERICA🔥
- Contact:
-
Martin Peter Clarke
- Posts: 1617
- Joined: Tue Apr 01, 2025 9:54 pm
Re: The GPT doggerel is out of hand here
So you can polish a turd? And, er, no AI would ask that question. And it never responds like a shit house rat. To truly post shit, or post true shit, you have to be human. And wear disposable rubber gloves and put it in a large envelope.

henry quirk wrote: ↑Wed Apr 23, 2025 8:19 pm
The kicker: even with an AI assist, that unscrupulous person's position, and his defense of that position, is still shit (well-polished, sure, but still shit).
... could Grok have generated this?
- henry quirk
- Posts: 16379
- Joined: Fri May 09, 2008 8:07 pm
- Location: 🔥AMERICA🔥
- Contact:
Re: The GPT doggerel is out of hand here
Obviously, yes.

So you can polish a turd?
-
Martin Peter Clarke
- Posts: 1617
- Joined: Tue Apr 01, 2025 9:54 pm
Re: The GPT doggerel is out of hand here
Only metaphorically, dear henry, dear henry, only metaphorically [, dear henry, metaphorically]. Unless you dip it in liquid nitrogen and polish at -18 °C.
Last edited by Martin Peter Clarke on Fri Apr 25, 2025 10:14 am, edited 1 time in total.
- henry quirk
- Posts: 16379
- Joined: Fri May 09, 2008 8:07 pm
- Location: 🔥AMERICA🔥
- Contact:
Re: The GPT doggerel is out of hand here
Yes, Marty, I know.

Only metaphorically, dear henry
-
Martin Peter Clarke
- Posts: 1617
- Joined: Tue Apr 01, 2025 9:54 pm
Re: The GPT doggerel is out of hand here
Sorry, I finished the allusion. And I know that you know, dear henry, dear henry, but others, especially with holes in their buckets, may not.
- henry quirk
- Posts: 16379
- Joined: Fri May 09, 2008 8:07 pm
- Location: 🔥AMERICA🔥
- Contact:
Re: The GPT doggerel is out of hand here
Martin Peter Clarke wrote: ↑Fri Apr 25, 2025 10:16 am
Sorry, I finished the allusion. And I know that you know, dear henry, dear henry, but others, especially with holes in their buckets, may not.
- FlashDangerpants
- Posts: 8815
- Joined: Mon Jan 04, 2016 11:54 pm
Re: The GPT doggerel is out of hand here
Things have now got so bad that Iambignose is arguing with a chatbot named after a cake that isn't even on this site...
https://www.numberanalytics.com/blog/co ... onsibility
Sarah Lee, AI generated, Llama-4-Maverick-17B-128E-Instruct-FP8
His source....

iambiguous wrote: ↑Wed Aug 20, 2025 10:21 pm
Compatibilism and Moral Responsibility
Exploring the Implications of Compatibilist Theories on Ethics
Sarah Lee
Significant? I'd say so. The part where someone is able to clearly demonstrate how and why, even though Mary was never able not to abort her unborn baby/clump of cells, she is still morally responsible for doing so. I still can't wrap my head around that...if in fact it is true.

The implications of compatibilism on moral blame and praise are significant. If we are morally responsible for our actions, we can be praised for the good ones and blamed for the bad ones.
Which then brings me back around to this: click.
https://www.numberanalytics.com/blog/co ... onsibility
Sarah Lee, AI generated, Llama-4-Maverick-17B-128E-Instruct-FP8
Re: The GPT doggerel is out of hand here
Isn't that what philosophy is? Writing things that please us? The AI slop is no worse than most of the philosophy publications on arXiv, and the level of critique/feedback/engagement you get from AI when you prompt it well is a few notches above the "philosophers" who hang out here.

FlashDangerpants wrote: ↑Sun Dec 15, 2024 1:29 pm
It's one of maybe hundreds of threads now where the same person has done the same thing to get an obsequious AI to write something that pleases him (which is the only thing they actually do) and attempted to pass it off as philosophy.
It's a better sounding board than you, for sure. Besides, you don't like obsequious. You don't like insolent defiance. Do you even know what you want, or are you just another old fart longing for the good old days? Perhaps you feel some valuable/important tradition is being lost to history? Nostalgia just isn't what it used to be, eh?