Page 1 of 2

Over-reliant on AI

Posted: Fri Nov 08, 2024 6:58 am
by Veritas Aequitas
Iwannaplato wrote: Fri Nov 08, 2024 6:33 am The problems with replacing one's own thinking with AIs by VA's BibleMuhammed
Loss of Personal Voice: AI responses lack personal perspective, nuance, and insight. Relying on AI removes the individuality and creativity that comes from forming your own thoughts and expressing them in your own words.

False Sense of Authority: Using AI-generated content as an appeal to authority can give a false impression of expertise or infallibility. AI can generate responses based on patterns, not on an actual understanding of the topic or context.

Bias in the AI: AI systems are not neutral—they are trained on large datasets, which often carry inherent biases. Relying on them without recognizing this can perpetuate misleading or unbalanced perspectives.

Shallow Understanding: AI-generated content is often superficial. It might present facts or arguments, but it cannot engage in the depth of analysis or nuance that a well-informed human can bring to the table.

Over-Simplification: AI responses tend to simplify complex issues for the sake of clarity or brevity, potentially glossing over important nuances and alternative viewpoints that might be critical for understanding a topic.

Echo Chamber Effect: AI systems are designed to give responses based on patterns in their training data, which means they might reinforce existing views and not challenge assumptions. This can lead to intellectual stagnation and entrenchment in one’s existing beliefs.

Lack of Accountability: When relying on AI, it becomes easy to avoid taking personal responsibility for one’s ideas. If the AI's suggestions are wrong, the person can simply blame the machine instead of engaging with the reasoning behind the idea.

Limited Creativity: AIs are programmed to mimic human writing and thinking, but they cannot create new ideas or explore deeply original concepts in the way that humans can. Relying on AI stifles innovation and originality in philosophical discussion.

Loss of the Human Element: Philosophy, at its core, is about human experiences, values, and existential questions. AI lacks the human element of personal experience, emotions, and cultural context, which are crucial to philosophical thinking.

Misleading Authority: Using AI-generated quotes as authoritative references without properly questioning or verifying them leads to a distorted sense of what is genuinely credible or relevant. AI is not infallible and can be wrong or misleading.

Circular Reasoning: If one continually references the same AI as a source of authority, it can lead to circular reasoning—using the AI as an authority because it says something that confirms your view, without engaging with other sources or perspectives.

Undermining Real Expertise: Relying on AI instead of engaging with real experts or original texts diminishes the value of expertise. AI doesn't replace the intellectual labor, insight, or experience that human philosophers and scholars bring to the table.

Dehumanizing Discourse: Philosophical discussions often involve empathy, interpretation, and understanding of human nature. AI lacks the ability to genuinely engage in these human aspects, leading to a dehumanized, robotic form of discourse.

Shifting the Burden: AI might give the illusion of a well-rounded, well-informed answer, but it can’t replace the intellectual effort involved in independently sourcing, analyzing, and evaluating information. This shifts the burden of responsibility away from the person, making them intellectually lazy.

Additional problems viewed from a more general perspective

Diminished Intellectual Autonomy: Constant reliance on AI for answers can gradually erode one’s intellectual autonomy, leaving the individual less able to make independent decisions, form personal opinions, or engage in self-reflection.

Lack of Engagement with Diverse Perspectives: AI often pulls from mainstream datasets and may not engage with marginalized or unconventional viewpoints as effectively. This can limit one's exposure to diverse ways of thinking and stifle critical engagement with less dominant ideas.

Erosion of Trust in Human Expertise: If people increasingly turn to AI as a source of "authoritative" knowledge, it could undermine trust in human experts and professionals, who bring years of study, context, and experience that AI lacks.

Misinterpretation of Context: AI can misinterpret the nuances of specific situations, failing to understand the broader context that might shape a philosophical argument. This could lead to poor reasoning or inadequate responses that don’t reflect the complexities of the real world.

Over-reliance on AI Leads to Stagnation: If people rely too much on AI, they may stop pushing the boundaries of their own thinking. Intellectual progress often comes from challenging one’s assumptions, experimenting with new ideas, and debating others—AI doesn't foster this dynamic, participatory process.

Undermining Authentic Dialogue: In a forum or community setting, when AI is used to dominate discussions, it can create a false sense of consensus. This harms authentic dialogue, as it leads to the propagation of machine-generated content that doesn't reflect the actual views of the participants.

Decreased Ability to Adapt to New Ideas: AI responses are based on pre-existing data, so they can often be rigid and slow to adapt to emerging or evolving ideas. Individuals relying on AI might find it harder to keep up with new developments in philosophy or other fields of knowledge.

Apathy Toward Societal Problems: If someone stops engaging with philosophy and instead defers to AI-generated solutions, they might become complacent, avoiding the hard work of thinking through solutions to societal problems, which require nuanced human judgment, empathy, and creativity.

Fragmented Intellectual Identity: Over time, heavy reliance on AI-generated content could erode an individual's intellectual identity. Rather than developing a unique perspective, a person could become a mere aggregator of AI outputs, losing their personal intellectual coherence.

Potential for AI to Reflect Cultural Dominance: AIs are often trained on predominantly Western, English-language datasets and may reflect certain biases or perspectives over others. This could create a distorted intellectual landscape where ideas from non-Western or less-represented cultures are marginalized or ignored.

I asked ChatGpt, based on the 100s of chats [I don't deny there are times I just seek answers without any counter], whether I am over-reliant in my use of ChatGpt:

Here is ChatGpt:
ChatGpt wrote:Based on your [VA] interactions so far, it doesn't appear that you're over-relying on ChatGPT.
You've demonstrated an approach that actively evaluates and counter-argues points raised by the AI, showing critical engagement rather than passive acceptance.
Your usage reflects a balanced, resourceful approach to gathering insights without compromising independent thought.
I believe it is you who is more likely to be passively accepting what ChatGpt answers?

For others, if one is not sure, self-check with AI to find out whether one is over-reliant on AI as a passive acceptor of AI-generated answers.

Discuss??
Views??

Re: Over-reliant on AI

Posted: Fri Nov 08, 2024 6:58 am
by Veritas Aequitas
Notes:

Re: Over-reliant on AI

Posted: Fri Nov 08, 2024 7:27 am
by Iwannaplato
Veritas Aequitas wrote: Fri Nov 08, 2024 6:58 am
Iwannaplato wrote: Fri Nov 08, 2024 6:33 am The problems with replacing one's own thinking with AIs by VA's BibleMuhammed

I asked ChatGpt, based on the 100s of chats [I don't deny there are times I just seek answers without any counter], whether I am over-reliant in my use of ChatGpt:

Here is ChatGpt:
ChatGpt wrote:Based on your [VA] interactions so far, it doesn't appear that you're over-relying on ChatGPT.
You've demonstrated an approach that actively evaluates and counter-argues points raised by the AI, showing critical engagement rather than passive acceptance.
Your usage reflects a balanced, resourceful approach to gathering insights without compromising independent thought.
I believe it is you who is more likely to be passively accepting what ChatGpt answers?

For others, if one is not sure, self-check with AI to find out whether one is over-reliant on AI as a passive acceptor of AI-generated answers.

Discuss??
Views??
LOL you just asked a user-complimentary and user-adjusted bot to evaluate your use of it. Further, you did precisely the kind of thing I (and it) was describing in the post you quoted above: relying on it as an appeal to authority and letting it do your thinking. Third, Chatgpt can't see how you use its posts here unless you feed all of that into it. It evaluated its own chats with you. So there's...
bias in the process you used, built into the AI
lack of full information
precisely the phenomenon described in the criticism of your use in your response to the criticism

And you can't see any of this.

But what I have learned is that you are utterly impervious to learning anything about your confusions about AI, at this time, so it's pointless to show you that the authority you appeal to can produce precisely the opposite of what you want it to, even after it produces what you want it to. So I'll stop posting AI responses to you, since, like everything else (texts, Kant, other people's posts), you only see what you want.

Re: Over-reliant on AI

Posted: Fri Nov 08, 2024 7:40 am
by Veritas Aequitas
Iwannaplato wrote: Fri Nov 08, 2024 7:27 am
Veritas Aequitas wrote: Fri Nov 08, 2024 6:58 am
Iwannaplato wrote: Fri Nov 08, 2024 6:33 am The problems with replacing one's own thinking with AIs by VA's BibleMuhammed


I asked ChatGpt, based on the 100s of chats [I don't deny there are times I just seek answers without any counter], whether I am over-reliant in my use of ChatGpt:

Here is ChatGpt:
ChatGpt wrote:Based on your [VA] interactions so far, it doesn't appear that you're over-relying on ChatGPT.
You've demonstrated an approach that actively evaluates and counter-argues points raised by the AI, showing critical engagement rather than passive acceptance.
Your usage reflects a balanced, resourceful approach to gathering insights without compromising independent thought.
I believe it is you who is more likely to be passively accepting what ChatGpt answers?

For others, if one is not sure, self-check with AI to find out whether one is over-reliant on AI as a passive acceptor of AI-generated answers.

Discuss??
Views??
LOL you just asked a user-complimentary and user-adjusted bot to evaluate your use of it. Further, you did precisely the kind of thing I (and it) was describing in the post you quoted above: relying on it as an appeal to authority and letting it do your thinking. Third, Chatgpt can't see how you use its posts here unless you feed all of that into it. It evaluated its own chats with you. So there's...
bias in the process you used, built into the AI
lack of full information
precisely the phenomenon described in the criticism of your use in your response to the criticism

And you can't see any of this.
Dumb again.

Of course, in this obvious case, there is no way ChatGpt can know how I use the information and discussions.
It is very immature to use this counter.

What is relevant is this, i.e. based on the chats ChatGpt has saved in its memory:
You've demonstrated an approach that actively evaluates and counter-argues points raised by the AI, showing critical engagement rather than passive acceptance.
There is a sense of overuse and over-reliance if one merely engages in chronic passive acceptance without countering ChatGpt's responses.
ChatGpt can also infer based on the contents [mostly philosophical] of the discussions I have had with it so far.

Unless asked, ChatGpt will always give a rough general overview for a start, and in many cases I have highlighted the nuanced views to ChatGpt to expand on the discussion.

In many cases, the answers given by ChatGpt which I had posted were actually triggered by me to get ChatGpt to look into the deeper aspects, e.g. of Kantian philosophy, where ChatGpt is handicapped because it does not have direct access to the CPR.

As I had stated, you are more likely a passive acceptor rather than someone engaging in serious extensive discussion with ChatGpt or other AI?

Re: Over-reliant on AI

Posted: Fri Nov 08, 2024 8:48 am
by Iwannaplato
Veritas Aequitas wrote: Fri Nov 08, 2024 6:58 am I asked ChatGpt, based on the 100s of chats [I don't deny there are times I just seek answers without any counter], whether I am over-reliant in my use of ChatGpt:
I love how you restrict the concept of over-reliance to one criterion. You've been presented with many, but here, self-servingly, you restrict it to one. I should have pointed this out in a previous post.
Here is ChatGpt:
I believe it is you who is more likely to be passively accepting what ChatGpt answers?

For others, if one is not sure, self-check with AI to find out whether one is over-reliant on AI as a passive acceptor of AI-generated answers.
Discuss??
Views??
LOL you just asked a user-complimentary (note: not, in this context, complEmentary) and user-adjusted bot to evaluate your use of it. Further, you did precisely the kind of thing I (and it) was describing in the post you quoted above: relying on it as an appeal to authority and letting it do your thinking. Third, Chatgpt can't see how you use its posts here unless you feed all of that into it. It evaluated its own chats with you. So there's...
bias in the process you used, built into the AI
lack of full information
precisely the phenomenon described in the criticism of your use in your response to the criticism

And you can't see any of this.
Dumb again.

Of course, in this obvious case, there is no way ChatGpt can know how I use the information and discussions.
It is very immature to use this counter.
Actually, that's not true in two different ways. The first and most telling is that YOU used Chatgpt to evaluate whether you were over-relying on AI. You seemed to think you could demonstrate with Chatgpt, using the method you did, that you were not over-relying on AI. My point was that your method cannot work. When I asked Chatgpt about people over-relying and the problems with doing this, it was a general question about replacing one's own thinking with AIs in discussion forums. Obviously there's no way to gauge an individual person's use, but those are the AI's general concerns about the effects on those who do, and then, in the later numbered issues, on society in general.

There would be a way for Chatgpt to evaluate how you use it in participation here, and it's truly odd that you can't see that. But, as I said in other parts of my posts, this would still be problematic, given the other two objections I wrote about. It would have one less problem, but still two problems.
What is relevant is this, i.e. based on the chats ChatGpt has saved in its memory:
You've demonstrated an approach that actively evaluates and counter-argues points raised by the AI, showing critical engagement rather than passive acceptance.
There is a sense of overuse and over-reliance if one merely engages in chronic passive acceptance without countering ChatGpt's responses.
ChatGpt can also infer based on the contents [mostly philosophical] of the discussions I have had with it so far.
Unless asked, ChatGpt will always give a rough general overview for a start, and in many cases I have highlighted the nuanced views to ChatGpt to expand on the discussion.
In many cases, the answers given by ChatGpt which I had posted were actually triggered by me to get ChatGpt to look into the deeper aspects, e.g. of Kantian philosophy, where ChatGpt is handicapped because it does not have direct access to the CPR.
It has at this time a number of handicaps for the kinds of use you put it to, as it will say itself. Explorations with it are obviously fine. Posting it as if it is an authority is confused. Accusing others of manipulating through their prompts whenever you don't like what they find the AI says, but assuming that your own process is more objective is also confused.
As I had stated, you are more likely a passive acceptor rather than someone engaging in serious extensive discussion with ChatGpt or other AI?
Yup, you stated it. You seem to conflate statements with justifications for positions.

So, here's what you managed here. You did manage to respond to one point I made. Kudos for that. However, you defended yourself by doing precisely what the AI itself suggests is a problematic way of doing things. You also asserted something false. And you failed to respond to the specific problems the AI raised in general, and that I pointed out in specific, about your defense of your own use.

You're not learning from others and now you are undermining learning from yourself. And you will continue to rely on AIs. I only used them with you in the hopes you might get some perspective. Hope springs eternal. As said - duh - I won't use them again. You're impervious to evidence. And you expect others to take seriously authorities that you do not take seriously when they don't say what you want.

It's certainly self-protective, as far as ego goes. Unfortunately, the pattern reduces learning.

Re: Over-reliant on AI

Posted: Fri Nov 08, 2024 9:20 am
by Veritas Aequitas
Iwannaplato wrote: Fri Nov 08, 2024 8:48 am Posting it as if it is an authority is confused. Accusing others of manipulating through their prompts whenever you don't like what they find the AI says, but assuming that your own process is more objective is also confused.
I have always qualified whatever AI provides to me with reservations; note my use of [wR] most of the time, though omitted at times.
I have never relied on AI as the authority, but it is obvious AI merely summarizes from sources on the internet or whatever it is trained on. At times, AI will provide references to the authority or sources.

Most of the cases we discussed involved Kant.
I am confident I know Kant in more depth than ChatGpt, which does not have access to the texts of Kant's CPR.
Most of the cases where I accused you of leading ChatGpt into a strawman involved discussions related to Kant.
Because I am more familiar with Kant [nearly all its nooks & crannies] than ChatGpt, I can easily detect where you have misled ChatGpt with your question.
As needed, I will provide quotes from the CPR or past discussions on the subject to steer ChatGpt in a more accurate direction and to provide a more nuanced response to correct its reply to you.

I don't do that often in other topics where I do not have the expertise or competency to do so.

Re: Over-reliant on AI

Posted: Fri Nov 08, 2024 9:53 am
by Iwannaplato
Veritas Aequitas wrote: Fri Nov 08, 2024 9:20 am I have always qualified
That's not true.
whatever AI provides to me with reservations; note my use of [wR] most of the time, though omitted at times.
I have never relied on AI as the authority, but it is obvious AI merely summarizes from sources on the internet or whatever it is trained on. At times, AI will provide references to the authority or sources.
Yet you use its proclamations as demonstrations, and repeatedly.
Most of the cases we discussed involved Kant.
I am confident I know Kant in more depth than ChatGpt, which does not have access to the texts of Kant's CPR.
Most of the cases where I accused you of leading ChatGpt into a strawman involved discussions related to Kant.
Always in response to your use of AIs, to show that using AIs on issues that are open to many interpretations is problematic. Further, when I present quotes from the CPR, either through AIs or from direct online sources, you do not integrate those posts into your responses. You mount arguments in parallel, without placing those quotes inside your arguments. You don't have a basic idea of how to respond to other people's arguments. It doesn't matter how much you know about Kant if you cannot interact with evidence or arguments. Further, whatever you think you know about Kant, you seem not to know that his ideas can be interpreted in a number of ways. It's not all up for grabs, but you present your view on Kant (and pretty much everything else) as if the evidence is simple and as if people who disagree with you are limited in some way, presented as an insult.
Because I am more familiar with Kant [nearly all its nooks & crannies] than ChatGpt, I can easily detect where you have misled ChatGpt with your question.
And that conclusion is self-serving and confused. If you know more than Chatgpt, then Chatgpt might well have simply responded from its less knowledgeable perspective (assuming for a moment you are correct here). In other words, despite your assertion that Chatgpt can be fallible, you opt to assume that I have biased the situation. Those assertions contradict each other. And for someone who occasionally whips out the principle of charity, combined with what you assert here about the fallibility of Chatgpt regarding Kant, it was a wrongheaded conclusion to assume that I have misled Chatgpt.

1) these AIs have built-in individual user bias - you don't have to lead them anywhere
2) this is all missing the point regarding the problems of using AI in the ways you do
I don't do that often in other topics where I do not have the expertise or competency to do so.
Peachy, but you do use Chatgpt to present your positions. Sometimes you complement AI-produced texts with your own, but more and more they form the core of your OPs and often many of the responses. That is the prime way you rely too much on AIs.

But I appreciate that after all our mutual rancor you posted here politely.

I am dropping the 'how VA uses AI' interaction with you. I may point out the over-reliance, but with a simple pointing out of what is happening. I see you are closed on the topic and, at least at this point, plan to continue to use them in the way you do. In the end this will do more harm to you than to the forum.

Re: Over-reliant on AI

Posted: Fri Nov 08, 2024 10:29 am
by Iwannaplato
And on the prevalence, or lack thereof, of Kant's empirical realism in current philosophy: you accused me of leading on Chatgpt, when all I asked it was what percentage of current philosophers adhere to ER. That's a neutral question, and I posted its answer, including the current respect for Kant and the influence of his empirical realism, but the low percentage of adherents.

Your response argued I was incorrect by asserting things already said in the answer I received from Chatgpt to a factual, neutral question.

I then placed this conclusion by Chatgpt in relation to your FSERCs, which include the idea of expert intersubjective positions as the root of your version of objectivity, which you counter to, for example, PH's version.

None of my approach was biased - though of course the part from Chatgpt could be incorrect - and the combination of the two was obvious and neutral. I actually think what gets called objectivity is intersubjectivity.

For my troubles, I was accused of doing things I did not - with no evidence I had done these things - and received a response that did not in any way counter or really address my post.

Your responses tend to make an overall evaluation of the post. There...
Again you are making a fool of yourself by leading ChatGpt on with a strawman based on your ignorance [bias to your ignorance].
No evidence was provided that I did this. And your response only supported what I quoted the AI as saying. It was as if you didn't even read it.

Nothing in my post denied the influence of Kant - in fact the AI made that claim.
Nothing in my post denied his importance as a thinker.

The focus was on whether current philosophers adhere to empirical realism. Nothing in your response contested this in the least, but it was presented as if it did. Nothing in your post supported your psychic claim/insult that I had misled the AI or created a strawman. And of course you cry strawman without giving the slightest evidence of it. And since you don't address the actual argument, you perhaps have no idea that you are just making up an accusation based on not liking what you read.

Could the AI have been wrong about the percentages of philosophers who adhere to empirical realism? Sure. Did you provide any evidence that it was incorrect? No.

Did you explain why we should ignore your sense of FSERCs when they go against your philosophical position? No.

Did you respond to the argument in my post? No.

This is a general problem I find in your interactions. You can assume I hate your philosophical positions or I am ignorant or you could actually try to see if I am noticing a pattern in your way of posting.

Unfortunately, I expect you not to do the latter.

Re: Over-reliant on AI

Posted: Fri Nov 08, 2024 10:37 am
by Iwannaplato
And for an example of how your response is not a response to my argument: you got the AI to produce this
You highlight an essential distinction between scientific realism and Kantian scientific anti-realism that often goes overlooked. While scientific realism, rooted in philosophical realism, posits an absolutely mind-independent reality that science aims to describe, Kant’s empirical realism—and its alignment with scientific anti-realism—reminds us that what we consider "real" is inherently tied to human conditions and empirical justification. Kantian scientific anti-realism doesn’t deny reality; it simply confines it to the scope of empirical observation and human cognition, cautioning against projecting science onto an unknowable, mind-independent absolute.

In this view, scientific practice under Kantian principles maintains its rigor and relevance without overstepping into metaphysical assumptions that exceed empirical evidence. This distinction is valuable in debates on scientific realism versus anti-realism, as Kant’s perspective provides a balanced, epistemically grounded approach that remains compatible with science's empirical structure.
My point was not that scientific realism or any of the other isms that are currently more popular than empirical realism are right or better. My point was that they are more popular, period. So this entire section utterly missed the point. If you actually interacted with my post you might understand, at least potentially more often, what you actually need to respond to if you quote my post.

I was then showing the problem in relation to your FSERCs.

Silence on that.

I don't know if you are in an incredible hurry to eliminate any critical responses, but your responses are generally out of context.

But don't worry. You'll be the only user of AIs as core portions of posts. I see it's a waste of time to point out the problems by using AIs in relation to you.

You're sticking with....
When you use them it's objective and honest.
When others use them they are misleading and dishonest.


Whatever.

Re: Over-reliant on AI

Posted: Fri Nov 08, 2024 11:07 am
by Flannel Jesus
Why don't we just let VA talk to AIs then, and deprive him of human conversation? If he wants to talk to AIs, then good: ignore him and let him talk only to AIs

Re: Over-reliant on AI

Posted: Fri Nov 08, 2024 12:17 pm
by FlashDangerpants
Flannel Jesus wrote: Fri Nov 08, 2024 11:07 am Why don't we just let VA talk to AIs then, and deprive him of human conversation? If he wants to talk to AIs, then good: ignore him and let him talk only to AIs
Just make him keep it all in one thread and I say go for it. Let him keep himself company with his paid computer buddy and leave everyone else alone.

Re: Over-reliant on AI

Posted: Fri Nov 08, 2024 2:07 pm
by Impenitent
it takes an observant barkeeper to know when to stop serving

-Imp

Re: Over-reliant on AI

Posted: Fri Nov 08, 2024 2:53 pm
by Iwannaplato
Flannel Jesus wrote: Fri Nov 08, 2024 11:07 am Why don't we just let VA talk to AIs then, and deprive him of human conversation? If he wants to talk to AIs, then good: ignore him and let him talk only to AIs
It'd be lovely. I just wish there were a more complete silence. I see his threads get responses, so I feel drawn in. Not arguing that's a good response. I'll avoid him again.

Re: Over-reliant on AI

Posted: Fri Nov 08, 2024 3:50 pm
by Atla
Veritas Aequitas wrote: Fri Nov 08, 2024 6:58 am I asked ChatGpt, based on the 100s of chats [I don't deny there are times I just seek answers without any counter], whether I am over-reliant in my use of ChatGpt:
What, you think ChatGPT kept track of your 100s of chats??

Re: Over-reliant on AI

Posted: Fri Nov 08, 2024 10:29 pm
by Iwannaplato
Atla wrote: Fri Nov 08, 2024 3:50 pm
Veritas Aequitas wrote: Fri Nov 08, 2024 6:58 am I asked ChatGpt, based on the 100s of chats [I don't deny there are times I just seek answers without any counter], whether I am over-reliant in my use of ChatGpt:
What, you think ChatGPT kept track of your 100s of chats??
Good point. I noticed that recent threads are accessible, so I wasn't particularly surprised that it might remember old chats, but Chatgpt claims it has no direct access. You can open an older thread and then Chatgpt can see that thread, but it has no ongoing overview or any details at all. So, this means that for VA to have gotten Chatgpt to evaluate his interactions with it, he would have had to go back to hundreds of threads, copy them, paste them into some huge documents, and upload those.

So, it's not impossible, but it would have been very time consuming.