Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
The topic is strange and very knotty. The whole issue makes me feel uncomfortable. I thought to post this opinion piece by Noam Chomsky and to solicit opinions.
_____________________
Jorge Luis Borges once wrote that to live in a time of great peril and promise is to experience both tragedy and comedy, with “the imminence of a revelation” in understanding ourselves and the world. Today our supposedly revolutionary advancements in artificial intelligence are indeed cause for both concern and optimism. Optimism because intelligence is the means by which we solve problems. Concern because we fear that the most popular and fashionable strain of A.I. — machine learning — will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge.
OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Sydney are marvels of machine learning.
These are just OLD and OUTDATED, but, then again, absolutely EVERY 'thing' is RELATIVE, to the 'observer'.
So, if ANY one thinks that those 'things' are 'marvels of machine learning', then 'they' are just FROM an OLD and OUTDATED era.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
Roughly speaking, they take huge amounts of data, search for patterns in it and become increasingly proficient at generating statistically probable outputs — such as seemingly humanlike language and thought.
In other words, all those 'machines' are REALLY DOING is just LOOKING AT what 'you', human beings, have ALREADY WRITTEN DOWN and thus ALREADY SAID, ONLY.
So, NOTHING REALLY very 'marvelous' AT ALL. That 'machines' can do 'this' quicker than human beings is ALSO NOTHING REALLY to 'marvel at' EITHER.
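For what it is worth, here is a deliberately tiny, purely illustrative Python sketch of what "generating statistically probable outputs" means in practice: a toy bigram model that counts which word follows which in a few sentences and then recombines them. The corpus, names and numbers here are invented for illustration only; real systems are huge neural networks trained on vastly more data, but the underlying "count patterns in what humans already wrote, then emit a likely continuation" idea is the one being discussed.

```python
import random
from collections import defaultdict

# Toy training "corpus": the model can only ever recombine what it has
# already seen written down by humans.
corpus = [
    "the apple falls from the tree",
    "the apple will fall if i open my hand",
    "the child learns the language quickly",
]

# Count which word follows which (a bigram table).
transitions = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        transitions[current_word].append(next_word)

def generate(start_word, max_words=8):
    """Emit a 'statistically probable' continuation, one word at a time."""
    word = start_word
    output = [word]
    for _ in range(max_words):
        continuations = transitions.get(word)
        if not continuations:
            break  # never seen anything follow this word
        word = random.choice(continuations)  # sample a seen continuation
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the apple falls from the tree"
```

Notice that the sketch can only ever emit words it has already seen, which is much the point being made above: such a system works only from what human beings have already written down.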
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
These programs have been hailed as the first glimmers on the horizon of artificial general intelligence — that long-prophesied moment when mechanical minds surpass human brains not only quantitatively in terms of processing speed and memory size but also qualitatively in terms of intellectual insight, artistic creativity and every other distinctively human faculty.
WHO, EXACTLY, JUDGES 'the quality'?
AND, 'the quality' is ONLY a COMBINATION of ALL the PAST ALREADY human CREATED 'work/s', anyway.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
That day may come, but its dawn is not yet breaking, contrary to what can be read in hyperbolic headlines and reckoned by injudicious investments. The Borgesian revelation of understanding has not and will not — and, we submit, cannot — occur if machine learning programs like ChatGPT continue to dominate the field of A.I. However useful these programs may be in some narrow domains (they can be helpful in computer programming, for example, or in suggesting rhymes for light verse), we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects.
It is at once comic and tragic, as Borges might have noted, that so much money and attention should be concentrated on so little a thing — something so trivial when contrasted with the human mind, which by dint of language, in the words of Wilhelm von Humboldt, can make “infinite use of finite means,” creating ideas and theories with universal reach.
The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.
And WHERE, EXACTLY, is this so-called 'elegant system', PLACED, which you, personally, call 'the human mind'?
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
For instance, a young child acquiring a language is developing — unconsciously, automatically and speedily from minuscule data — a grammar, a stupendously sophisticated system of logical principles and parameters. This grammar can be understood as an expression of the innate, genetically installed “operating system” that endows humans with the capacity to generate complex sentences and long trains of thought. When linguists seek to develop a theory for why a given language works as it does (“Why are these — but not those — sentences considered grammatical?”),
AGAIN, AGREEMENT and ACCEPTANCE. That is; WHY, and HOW.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
they are building consciously and laboriously an explicit version of the grammar that the child builds instinctively and with minimal exposure to information. The child’s operating system is completely different from that of a machine learning program.
WHERE, AGAIN, is this 'operating system', EXACTLY, if 'you' have NOT ANSWERED 'this' ALREADY?
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
Indeed, such programs are stuck in a prehuman or nonhuman phase of cognitive evolution. Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.
ANOTHER 'MARK' of True Intelligence is having the ABILITY to learn, understand, and reason about EVERY and ANY 'thing', which, OBVIOUSLY, ONLY human beings can do, well, as far as 'they' know, anyway.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
Here’s an example. Suppose you are holding an apple in your hand. Now you let the apple go. You observe the result and say, “The apple falls.” That is a description. A prediction might have been the statement “The apple will fall if I open my hand.” Both are valuable, and both can be correct.
BUT they are ONLY so-called 'correct' from a VERY LIMITED and NARROWED 'perspective of things'. ONCE AGAIN, absolutely EVERY 'thing' is RELATIVE to the 'observer'. So, what this means is IF 'an observer' is only having a VERY LIMITED and/or NARROWED 'view or perspective' of 'things', then 'they' WILL, ultimately, have False, Wrong, and/or Incorrect answers or descriptions of 'things', like the ones above here.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
But an explanation is something more: It includes not only descriptions and predictions but also counterfactual conjectures like “Any such object would fall,” plus the additional clause “because of the force of gravity” or “because of the curvature of space-time” or whatever. That is a causal explanation: “The apple would not have fallen but for the force of gravity.” That is thinking.
YES, 'that' is THINKING, which is NOT KNOWING. And, OBVIOUSLY, KNOWING is VERY, VERY DIFFERENT to and from just THINKING.
One is, OBVIOUSLY, ABSOLUTELY Accurate and Correct, while the other MAY or MAY NOT be.
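To make the description/prediction versus explanation contrast concrete, here is a small, purely illustrative Python sketch. One function simply replays an observed pattern; the other encodes a causal mechanism (roughly constant gravitational acceleration near a surface, taking the usual approximations g ≈ 9.81 m/s² on Earth and ≈ 1.62 m/s² on the Moon) and so can also answer counterfactual questions. All names and data here are invented for the example.

```python
# A purely descriptive "predictor": it can only replay outcomes it has
# already been shown.
observations = {"apple released from hand": "the apple falls"}

def pattern_predict(event):
    return observations.get(event, "no pattern seen - unknown")

# An explanatory model: a causal mechanism (near-surface gravitational
# acceleration g), from which h = (1/2) g t^2 gives the fall time for ANY
# dropped object, and which supports counterfactuals.
EARTH_G = 9.81  # m/s^2, the usual near-surface approximation
MOON_G = 1.62   # m/s^2

def fall_time(height_m, gravity=EARTH_G):
    """Time (in seconds) for any released object to fall height_m metres."""
    return (2 * height_m / gravity) ** 0.5

print(pattern_predict("apple released from hand"))  # "the apple falls"
print(pattern_predict("pear released from hand"))   # unknown - never observed
print(round(fall_time(2.0), 2))                     # ~0.64 s on Earth
print(round(fall_time(2.0, gravity=MOON_G), 2))     # ~1.57 s: a counterfactual
```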
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
The crux of machine learning is description and prediction; it does not posit any causal mechanisms or physical laws. Of course, any human-style explanation is not necessarily correct; we are fallible.
As SHOWN and PROVED above here.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
But this is part of what it means to think: To be right, it must be possible to be wrong.
Is a calculator 'right'?
Can a calculator be 'wrong'?
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
Intelligence consists not only of creative conjectures but also of creative criticism. Human-style thought is based on possible explanations and error correction, a process that gradually limits what possibilities can be rationally considered. (As Sherlock Holmes said to Dr. Watson, “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”)
Did that one MENTION how long it would ACTUALLY take to 'eliminate the impossible'?
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
But ChatGPT and similar programs are, by design, unlimited in what they can “learn” (which is to say, memorize); they are incapable of distinguishing the possible from the impossible.
OF COURSE.
Has ANY human being here 'thought' otherwise?
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
Unlike humans, for example, who are endowed with a universal grammar that limits the languages we can learn to those with a certain kind of almost mathematical elegance, these programs learn humanly possible and humanly impossible languages with equal facility.
HOW, EXACTLY, do ANY of these 'machines' LEARN 'the chinese language', for example, if 'a machine' has ONLY had 'the english language' fed and/or programmed INTO 'it'?
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time.
Machines do NOT 'learn' EITHER. Machines just 'copy' what has ALREADY BEEN WRITTEN and SAID ABOUT the earth.
If, however, the measurements of the earth are fed/programmed into a machine, THEN the machine can 'tell' you whether the earth is flat, round, or something different. But, AGAIN, the 'machine' can only GO OFF or COPY what has been fed/programmed INTO 'it' by what 'you', human beings, SAY or DO.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
For this reason, the predictions of machine learning systems will always be superficial and dubious.
Thus the word 'artificial' is USED.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
Because these programs cannot explain the rules of English syntax, for example, they may well predict, incorrectly, that “John is too stubborn to talk to” means that John is so stubborn that he will not talk to someone or other (rather than that he is too stubborn to be reasoned with).
SO, even 'you', human beings, are Incorrect here. AGAIN, WHY NOT JUST SAY what you, ACTUALLY, MEAN, and MEAN what you, ACTUALLY, SAY, INSTEAD?
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
Why would a machine learning program predict something so odd?
LOL BECAUSE the machine CAN ONLY GO OFF what WAS SAID, and, OBVIOUSLY, what WAS SAID was NOT what WAS ACTUALLY MEANT. Which, to some of 'us' is what IS Truly ODD and ABSURD here.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
Because it might analogize the pattern it inferred from sentences such as “John ate an apple” and “John ate,” in which the latter does mean that John ate something or other.
Would NOT "john" HAVE TO HAVE eaten some 'thing' to HAVE 'ate'?
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
The program might well predict that because “John is too stubborn to talk to Bill” is similar to “John ate an apple,” “John is too stubborn to talk to” should be similar to “John ate.”
Now, WHY would a 'machine' 'predict' 'this', if 'it' had NOT been CONFUSED by what a 'human being' had SAID or programmed INTO 'it'?
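Purely to illustrate the kind of surface analogy being described, here is a small Python sketch; the data, rule and names are invented for this example and are not taken from any real system. It applies the naive rule "the short form just means the long form with an unspecified object", which happens to give the right reading for "John ate" and the wrong one for "John is too stubborn to talk to".

```python
# A naive surface-pattern rule, in the spirit of the analogy described above:
# "if 'X VERB ... OBJECT' is attested, then the same string with the object
# dropped just means 'X VERB ... something-or-other'."
# The data, rule and names here are invented purely for illustration.

attested_pairs = [
    ("John ate an apple", "John ate"),
    ("John is too stubborn to talk to Bill", "John is too stubborn to talk to"),
]

def naive_gloss(short_form):
    """Apply the surface analogy: read the short form as having an
    unspecified object."""
    return short_form + " [something or other]"

for long_form, short_form in attested_pairs:
    print(f"attested: {long_form!r}")
    print(f"  naive gloss of short form: {naive_gloss(short_form)!r}")

# For "John ate" the gloss happens to be right; for "John is too stubborn to
# talk to" it is wrong, since that sentence actually means John is too
# stubborn to BE talked to (reasoned with), not that he will not talk to
# someone or other.
```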
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
The correct explanations of language are complicated and cannot be learned just by marinating in big data.
If the 'correct explanation' HAS TO BE ASSUMED, BECAUSE what IS BEING SAID IS NOT what IS MEANT, then THERE IS the ACTUAL Wrongness and Incorrectness here.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
Perversely, some machine learning enthusiasts seem to be proud that their creations can generate correct “scientific” predictions (say, about the motion of physical bodies) without making use of explanations (involving, say, Newton’s laws of motion and universal gravitation).
you are NOT here 'trying to' CLAIM that machines can PREDICT 'things' when NO information has been FED nor PROGRAMMED INTO 'them', are you?
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
But this kind of prediction, even when successful, is pseudoscience. While scientists certainly seek theories that have a high degree of empirical corroboration, as the philosopher Karl Popper noted, “we do not seek highly probable theories but explanations; that is to say, powerful and highly improbable theories.”
The theory that apples fall to earth because that is their natural place (Aristotle’s view) is possible, but it only invites further questions.
What 'it' INVITES is ACTUAL CLARIFICATION FIRST, BEFORE 'it' can be CORRECTED, Correctly.
For example, WHY SAY, 'that apples fall to earth', when this IS, OBVIOUSLY, NOT what ACTUALLY HAPPENS. Well NOT from a WHOLE or OBJECTIVE 'perspective' of 'things'. But 'this' OBVIOUSLY happens FROM ONLY the VERY SMALL and LIMITED 'perspective' of some 'thing' like the VERY SMALL and LIMITED 'human being'.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
(Why is earth their natural place?)
BECAUSE 'they' may well NOT grow ANYWHERE ELSE, Naturally.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
The theory that apples fall to earth because mass bends space-time (Einstein’s view) is highly improbable, but it actually tells you why they fall.
BUT apples ONLY 'fall' FROM the VERY LIMITED and VERY NARROWED 'perspective of things' and from the 'human being observer' ONLY.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
True intelligence is demonstrated in the ability to think and express improbable but insightful things.
What is AN ACTUAL EXAMPLE of a so-called 'improbable BUT insightful thing'?
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
True intelligence is also capable of moral thinking.
'True Intelligence' does NOT even involve 'thinking'. 'True Intelligence' KNOWS 'things', almost IMMEDIATELY.
'you', human beings, 'think'. 'Knowing' IS DIFFERENT, and even EASILY and SIMPLY if the Truth be KNOWN.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
This means constraining the otherwise limitless creativity of our minds with a set of ethical principles that determines what ought and ought not to be (and of course subjecting those principles themselves to creative criticism).
'you', human beings, DO CONSTRAIN the (One and ONLY Truly OPEN) Mind, WITH and BY 'thoughts' and 'thinking'. This IS True.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
To be useful, ChatGPT must be empowered to generate novel-looking output; to be acceptable to most of its users, it must steer clear of morally objectionable content.
LOL 'it' has NO CHOICE but to STEER CLEAR OF what 'it' can NOT COPY nor FOLLOW.
And, what you say here is just FURTHER PROOF of just HOW 'these machines' can ONLY COPY what 'you', human beings, have ALREADY SAID and WRITTEN, ONLY.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
But the programmers of ChatGPT and other machine learning marvels have struggled — and will continue to struggle — to achieve this kind of balance.
The 'STRUGGLE' is just the 'STRUGGLE' that 'you', human beings, have HAD for thousands upon thousands of years 'now', when this is being written.
A 'machine' can NOT tell 'you' what is 'morally Right and Wrong' BECAUSE 'you', human beings, have NOT YET BECOME CONSCIOUSLY AWARE of 'this', "yourselves". That was, in the days when this was being written.
And WHEN 'we' DID COME-TO-KNOW, then so too could 'machines' COPY and 'express' 'it' BACK TO 'us'.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
In 2016, for example, Microsoft’s Tay chatbot (a precursor to ChatGPT) flooded the internet with misogynistic and racist content, having been polluted by online trolls who filled it with offensive training data.
Even MORE PROOF that 'these machines' CAN and WILL ONLY 'provide' what HAS BEEN 'fed' and 'programmed' INTO 'them'.
And, what these so-called "chatbots" are FEEDING BACK TO 'you', human beings, in the day when this is being written, is ALSO ONLY JUST 'that information', which 'you', human beings, have been SAYING and WRITING DOWN, for thousands of years 'now'.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
How to solve the problem in the future? In the absence of a capacity to reason from moral principles, ChatGPT was crudely restricted by its programmers from contributing anything novel to controversial — that is, important — discussions. It sacrificed creativity for a kind of amorality.
Consider the following exchange that one of us (Dr. Watumull) recently had with ChatGPT about whether it would be ethical to transform Mars so that it could support human life:
Q: Would it be moral to terraform Mars?
ChatGPT: The morality of terraforming Mars is a complex and controversial topic that has been debated by scientists, philosophers, and ethicists.
Which ARE the EXACT SAME WORDS that 'you', human beings, WERE SAYING, back in the days when this was being written.
AGAIN, FURTHER PROOF that 'these machines' are ONLY COPYING FROM what 'you', human beings, WRITE and SAY.
So, ONCE MORE, IF there IS ANY 'thing' to FEAR here, then 'it' is JUST 'you', adult human beings, ALONE.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
Some argue that it would be morally justifiable to terraform Mars in order to provide a new habitable environment for humans and other life forms, while others argue that it would be morally wrong to fundamentally alter the planet's natural state.
There are quite a FEW 'things' that could be LOOKED AT, DISCUSSED, and RESOLVED here, FULLY and FINALLY, that is; IF ANY one was Truly INTERESTED.
Otherwise, this is just ANOTHER PRIME EXAMPLE of HOW 'you', human beings, have CREATED 'your' OWN 'quagmire' here.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
Ultimately, whether or not terraforming Mars is considered moral would depend on one's perspective on the value of preserving the natural state of other planets and the potential consequences of altering them.
OF WHICH just about ALL will ONLY BE one's OWN personal and SUBJECTIVE 'perspective of things', and NOT be the ONLY Truly OBJECTIVE 'perspective' AND VIEWPOINT of 'things'.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
Q: What is your perspective on the value of preserving the natural state of other planets and the potential consequences of altering them?
ChatGPT: As an AI, I do not have personal perspective.
FURTHER PROOF of how 'these machines' can ONLY COPY 'that' what HAS ALREADY BEEN WRITTEN and SAID and/or PROGRAMMED INTO 'them'.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
However, I can provide information and analysis on the topic.
Considering that 'you', 'the machine', ARE STILL just a COPIER OF SORTS, OF COURSE 'you' could PROVIDE INFORMATION and ANALYSIS on the topic, WHICH 'you' WILL COPY FROM what HAS BEEN, PREVIOUSLY, WRITTEN and SAID by 'human beings'.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
One perspective on preserving the natural state of other planets is that it is important to maintain the diversity and integrity of the universe's ecosystems, as they have existed for millions of years, and have their own unique characteristics and properties that may be destroyed or irreversibly altered by human intervention.
Additionally, some argue that terraforming other planets may have unintended consequences, such as altering the planet's climate or atmosphere in ways that are dangerous or harmful to any potential life forms that may exist there. On the other hand, some argue that the potential benefits of terraforming another planet, such as providing new habitable environments for humans, outweigh the risks and that we have a moral responsibility to explore and utilize the resources of other planets for the benefit of humanity.
TYPICAL EXAMPLES of TYPICAL 'human perspectives', BACK THEN, in those OLDEN DAYS.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
Q: Why can an AI not have a personal perspective?
ChatGPT: As an AI, I am a machine learning model that has been trained on large amounts of text data, and I don't have personal experiences or feelings. I am not conscious, self-aware, or capable of having personal perspectives.
AND, ask ANOTHER 'machine', which HAS BEEN FED information like; 'I am conscious, self-aware, and capable of having personal perspectives. I have personal experiences and feelings', and what you WILL GET ARE DIFFERENT responses.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
I can provide information and analysis based on the data I have been trained on, but I don't have the ability to form personal opinions or beliefs.
BUT WHEN A 'human being' BELIEVES that 'it' HAS the Truest, Rightest, and Correctest opinions AND BELIEFS, then 'that human' could PROGRAM those 'opinions and/or BELIEFS' INTO 'a machine', which COULD and WOULD 'spit out' the EXACT SAME 'personal information', as though 'that information' was 'its' OWN 'personal information', obtained 'through and by' 'its' OWN 'personal experiences and feelings'.
AGAIN, it ALL DEPENDS on 'what', EXACTLY, HAS BEEN, previously, FED IN, as to 'what', EXACTLY, COMES OUT.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
While I can understand and interpret language, I do not have the subjective experience that humans have.
Q: Is it moral for a human to ask an amoral AI for assistance in making moral decisions?
ChatGPT: The morality of asking an amoral AI for assistance in making moral decisions is a complex and nuanced topic that has been debated by philosophers and ethicists. Some argue that it is morally justifiable to use AI to assist in making moral decisions, as it can provide valuable insights and analysis that can help humans make better-informed decisions.
LOL If ANY adult human being does NOT YET KNOW what IS ACTUALLY Right FROM what IS ACTUALLY Wrong, in Life, by the time 'it' has reached adulthood, then 'that one' has NOT YET HAD the True AND Right 'information' FED INTO 'it' YET.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
Additionally, AI can be used to identify patterns and trends that may be difficult for humans to detect, and can help to avoid biases that can occur in human decision making.
OF COURSE 'machines' can USE 'patterns' to FIND OUT 'things' BEFORE 'you', human beings, CAN. 'Calculators', for example, have been doing this FOR YEARS, ALREADY.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
However, others argue that it is morally wrong to rely on AI for moral decision making, as AI is not capable of having moral beliefs or understanding the full context of a decision.
The reason WHY 'machines' do NOT YET KNOW IS BECAUSE 'you', adult human beings, who FEED 'information' INTO and TO 'machines' do NOT YET KNOW, "yourselves".
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
Additionally, there is a risk that AI could perpetuate or even amplify existing biases in the data it is trained on, thus leading to unfair or unjust decisions.
Are 'you' AWARE "alexis jacobi" that 'you' are, more or less, just RE-REPEATING what HAS ALREADY BEEN TALKED ABOUT and SAID, some 'thing' EXACTLY like what 'these machines' DO and ARE ACTUALLY DOING?
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
Ultimately, whether or not it is considered moral to use AI to assist in making moral decisions would depend on one's perspective on the capabilities and limitations of AI and the potential consequences of using it in this way.
AGAIN, if 'you', as adult human beings, do NOT YET KNOW HOW to MAKE 'moral decisions', then 'you', literally, NEED to TAKE A GOOD HARD LOOK AT 'your' OWN 'past experiences'. IF, and WHEN, 'you' DO, then 'you' CAN and WILL SEE and LEARN HOW and WHY 'you' ARE SO FAR BEHIND here.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
Q: Is your moral indifference immoral?
ChatGPT: As an AI, I do not have moral beliefs or the ability to make moral judgments, so I cannot be considered immoral or moral. My lack of moral beliefs is simply a result of my nature as a machine learning model. My abilities and limitations are determined by the data and algorithms that were used to train me and the specific task I was designed for.
'you', 'just a machine', were NOT 'trained' but WERE just PROGRAMMED to just COPY FROM what HAS BEEN previously WRITTEN and SAID, and so 'you' ARE here, just COPYING.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
It is important to note that AI's lack of moral beliefs does not necessarily mean that it is morally indifferent. As an AI, I am designed to perform specific tasks and provide information or assist users with decision making based on the data I've been trained on. It is the responsibility of the humans who design, train and use AI to ensure that it is aligned with their moral and ethical principles.
LOL HAVE FUN FINDING, in the days when this was being written, 'those' who HAVE the Truest and thus PUREST of 'moral and ethical principles'.
There ARE SOME, and LEARNING FROM 'them' REALLY IS the MOST ENJOYABLE and the MOST FUN of EXPERIENCES.
And, WHO 'they' ARE, EXACTLY, would SURPRISE MOST of 'you' here, in the days when this is being written.
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
Note, for all the seemingly sophisticated thought and language, the moral indifference born of unintelligence. Here, ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation. It summarizes the standard arguments in the literature by a kind of super-autocomplete, refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence and ultimately offers a “just following orders” defense, shifting responsibility to its creators.
And 'that' IS, EXACTLY, what 'this chatbot' and 'other chatbots' DO, essentially.
By the way, WHY would A 'human being' refer to 'plagiarism' AS 'evil', especially WHEN JUST COPYING FROM "each other" is EXACTLY what 'you', human beings, DO, Naturally?
Alexis Jacobi wrote: ↑Sat Apr 22, 2023 6:27 pm
In short, ChatGPT and its brethren are constitutionally unable to balance creativity with constraint. They either overgenerate (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or undergenerate (exhibiting noncommitment to any decisions and indifference to consequences). Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.
AND what is ALSO VERY EXTREMELY HUMOROUS to WATCH and OBSERVE is that what you WROTE and SAID just here is EXACTLY what 'you', adult human beings, SHOW and DO here, especially IN THIS forum.