Atla wrote: ↑Fri Nov 08, 2024 11:31 pm
Pretty sure ChatGPT, I mean God, can't evaluate that, even if you copy all the hundreds of conversations into one chat.
The way to do it would be to copy-paste those hundreds of threads into a Word file, for example, and then attach the file.
I asked ChatGPT about this, and it did not think it could make a determination of objectivity and overreliance. It could read all the threads and estimate, but it was skeptical of its own ability to make any such assessment.
But truly determining objectivity or overreliance involves insights into factors beyond the text itself—such as your own reflection, how you feel about the support received, or whether you use alternative methods for problem-solving and validation outside AI.
And of course it would need to understand how he uses AI in relation to other people and how much he is replacing his own analysis with the AI's. All beyond its ability to scan.
So, it seems to me there are two likely possibilities: 1) he did something completely inadequate without realizing it was completely inadequate, or 2) he lied. He knows he could not have given the AI much information to work with.
In my experience it already tends to lose the plot after three or four inputs on the same topic and forgets what was said before; it can't handle a mishmash of hundreds of inputs on all kinds of different topics.
Actually I think the problem is not the AI. It's how it's used.
And knowing VA, he didn't copy the 100s of conversations into one chat, he will think that God remembered all the different conversations and gathered them in one place in its mind like a person would and thought about them while driving home from work. Lol.
Yes, I am very skeptical he did any of what, by the AI's own assertions, would be necessary to even start analyzing with any hope of drawing a valid conclusion. And even then it would not be able to see the most important things needed to evaluate how objective he is and whether he overrelies on AIs.
Thanks for pointing that out. Often my strategy is to accept the absurdities people spout, to see whether what they are doing holds even then. But this has gotten too habitual. Other people tend to challenge silly assumptions, like the assumption that he even did the minimum necessary. So I've gone the other route: fine, I'll assume that for the sake of argument, and see, it doesn't work anyway. Here, I didn't even notice I was doing that. Good catch.
I noticed his two silly posts above. For example, saying his posts are useless to us, as if that were the complaint. Finding contradictions and faulty reasoning, and showing how he is actually not responding to what we write while pretending to, and so on, are all useful skills. And then there's the vague non-answer that neither denies nor concedes anything.
I complained to ChatGPT some time ago that its responses are not consistent; since then I see 'memory updated' activity in my chats. I'm not sure if it is a common feature for everyone.
I asked the AI whether paid memberships or any other factor could make a difference in its ability to analyze for him, and it said no.
Not that we need to trust its answers, but even if it overestimates its abilities, it does not agree with his sense of its effectiveness in this case.