Peter Kropotkin: Prove you are a natural person

For all things philosophical.

Moderators: AMod, iMod

alan1000
Posts: 360
Joined: Fri Oct 12, 2012 10:03 am

Peter Kropotkin: Prove you are a natural person

Post by alan1000 »

From the sheer number of threads posted by Peter Kropotkin, and their rambling, unfocussed, semi-coherent nature, I hypothesise that "Peter" is actually an AI algorithm.

The philosophical question: how should we elaborate a criterion to distinguish between a natural human being and a computer? Is there a way to defeat or second-guess the Turing Test?

Is there any criterion we can devise which will infallibly reveal whether we are really communicating with a human being or with a computer?
Wizard22
Posts: 3283
Joined: Fri Jul 08, 2022 8:16 am

Re: Peter Kropotkin: Prove you are a natural person

Post by Wizard22 »

lol

Some brains 'close' up earlier than others. Once it happens, you aren't really talking with a person. You're talking at them.

Monologue
Dialogue
Harbal
Posts: 10729
Joined: Thu Jun 20, 2013 10:03 pm
Location: Yorkshire

Re: Peter Kropotkin: Prove you are a natural person

Post by Harbal »

alan1000 wrote: Sat Apr 22, 2023 1:07 pm From the sheer number of threads posted by Peter Kropotkin, and their rambling, unfocussed, semi-coherent nature, I hypothesise that "Peter" is actually an AI algorithm.
I know very little about AI, but I thought it had come farther than that.
Iwannaplato
Posts: 8534
Joined: Tue Aug 11, 2009 10:55 pm

Re: Peter Kropotkin: Prove you are a natural person

Post by Iwannaplato »

alan1000 wrote: Sat Apr 22, 2023 1:07 pm From the sheer number of threads posted by Peter Kropotkin, and their rambling, unfocussed, semi-coherent nature, I hypothesise that "Peter" is actually an AI algorithm.

The philosophical question: how should we elaborate a criterion to distinguish between a natural human being and a computer? Is there a way to defeat or second-guess the Turing Test?

Is there any criterion we can devise which will infallibly reveal whether we are really communicating with a human being or with a computer?
I tried to deal with this experimentally. I took PK's first sentence in a recent thread and turned it into a question for ChatGPT. The problem, of course, is that precisely because PK is so rambling, it is very hard to get the AI to cover the same ground.
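(As an aside, this kind of comparison can also be scripted rather than pasted into the web interface. Below is a minimal sketch, assuming the official openai Python client and an API key in the environment; the model name is only an illustrative choice, not necessarily what the website runs.)

import os
from openai import OpenAI  # assumes the openai>=1.0 Python client is installed

# PK's first sentence, lifted from the OP quoted further down.
pk_sentence = ("We moderns have different questions than the classical "
               "philosophy and philosophers.")

# That sentence rephrased as a question, exactly as described above.
question = ("What do we moderns, as opposed to classical philosophers, have as "
            "questions and what issues should we be working on as modern philosophers?")

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[{"role": "user", "content": question}],
)

# Print both so the model's answer can be read side by side with PK's post.
print("PK's sentence:", pk_sentence)
print("Model's answer:\n" + response.choices[0].message.content)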
My question: What do we moderns, as opposed to classical philosophers, have as questions and what issues should we be working on as modern philosophers?
ChatGPT:
As moderns, we have a wide range of questions and issues that we are concerned with, many of which have arisen due to the unique challenges and opportunities of our time. Some of the central questions and issues that modern philosophers are working on include:

Ethics and technology: With the rapid development of technology, modern philosophers are exploring ethical questions around its use, such as the impact of artificial intelligence, social media, and the role of technology in shaping our lives and relationships.

Environmental ethics: As we face growing environmental challenges, such as climate change, modern philosophers are exploring ethical questions around our relationship with nature and our responsibilities to future generations.

Political philosophy: With the rise of populism, nationalism, and authoritarianism in many parts of the world, modern philosophers are working to understand the nature of democracy, justice, and the role of the state in promoting social welfare.

Philosophy of mind and consciousness: With advances in neuroscience, modern philosophers are exploring questions around the nature of consciousness, subjective experience, and the relationship between the brain and the mind.

Bioethics: As medical technology advances and new ethical dilemmas arise, modern philosophers are working to understand the ethical implications of issues such as genetic engineering, reproductive technology, and end-of-life care.

Overall, modern philosophers are concerned with a wide range of issues that reflect the unique challenges and opportunities of our time, and their work has the potential to shape our understanding of the world and our place in it.
And now Peter's OP:
We moderns, have different questions than the classical
philosophy and philosophers.. from Socrates to Hume,
the problems of philosophy were questions of "what is the good life"
to questions of epistemology...how do we know what we know?
Beginning with Kierkegaard, new questions arose.. and why?
Because we were in a new environment, with new conditions...
or as Kierkegaard once said:

"what I really need to get clear about, "what am I to do", not
what I must know"

and the modern world is predicated on what it is to know, not
the question, "what am I to do?"...

I must "know" medicine to become a doctor and I must ''know"
the law to become a lawyer and I must "know" computers to
become an IT guy.. the point wasn't to ''know" oneself, but
to know a skill to earn a living...but the vital Kantian questions
are about knowing what to do in regard to life and living...
to hold to "what am I to do?" to the sense of how to live,
and not how to make money... what is the ''right'' thing to do,
is the knowledge one should seek....not to make money, but
to be the human being we should, ought to be....

and various questions flow from our answer to "what kind of
human being should I be?" for example, what kind of
government comes from seeking the answer to "what kind of human
being I should be?" If I seek becoming a better person, then
the government should help me work towards that end/goal....
we should have money set aside for those who want to practice
being/becoming a better person... we grant money to those who
engage in the question of ''What it means to be a human being?
and we give them housing and the leisure to explore this question...
just enough money to survive the year in spartan facilities...
that is not any different than giving some people a research grant...
or to travel to an Asian country and practice meditation or
mindfulness for a period of time...

we are so in a hurry to "become" something that we forget that
sometimes the road is about becoming who we are, not what we can do
for a living...

the general idea is that we build/create spaces that allow
people to think about what it means to be... with no TV or radio,
but with music...and we give research grants to these people,
which allows them to live out the time period, a week, month,
6 months or even a year to think about what it means to be human...
we relive them of the usual obligations of what our state/society
demands...and we allow all ages, races and religions to come
to this "research space"... no family, no friends, you can bring books but
this "campus" already has a full complete library in all kinds of
area's....a striped down way of life/living... where the only goal
is to be in contemplation of our existence...collectively and
individually...

and within this "research facility" is lecture halls, and meeting places,
coffee houses and dorms...we might even have a facility for plays,
both written for the research facility or already produced plays...
we can have music facilities with instruments and recording studios...
for our contemplation of being human also lies in the creation of the
ARTS... thus we have facilities for the creation of ART.. painting,
sculpture, and even drafting tables for those who want to explore
the creation of buildings.. all aspects of ART creation can lead us to
news ways of thinking or believing about what it means to be human....
But Kropotkin, you seem to be hazy in terms of what is to be achieved,
and yes, that is the point...there is no fixed goal or something to
reach for.... ART and contemplation and thinking for its own sake..
not to reach any sort of specific goal/purpose... if someone hasn't reached
a specific goal or purpose, so what?... who cares... frankly 6 months
or a year isn't enough time to achieve anything anyway...

as for specifics of this research facility, I wouldn't put it
in a place like Minnesota or something that can get ugly weather...
a place like California or perhaps Hawaii might work out...
even something like New Mexico might work...
and I would limit the number of people to perhaps
a thousand or two... which means that the number of people
who tend to or exists as support personnel might number to
about one support staff person to one research person...
or perhaps make a 3-research people to every staff member...
the exact number can be easily worked out...

I would guess that the yearly financely total of such a facility would
be close to 50 or 60 million a year...after the initial construction...
which could be easily done within a year or two....

and if not a single idea comes out of this, that is ok... it isn't about
getting results.. it is about people thinking in new ways about
what it means to be human and what are our possibilities?
and that type of thinking takes time and patience and quite often
has no direct, immediate results... but give it time... and in time,
it will bring about a new way of thinking and believing in what
is possible for us as human beings, both individually and collectively....

the modern problem is that we don't leave enough time in our lives
to think about or contemplate what it means to be human...
what is the goal/purpose of existence? who knows, we don't think
about that....

Kropotkin
Iwannaplato
Posts: 8534
Joined: Tue Aug 11, 2009 10:55 pm

Re: Peter Kropotkin: Prove you are a natural person

Post by Iwannaplato »

Harbal wrote: Sat Apr 22, 2023 2:19 pm
alan1000 wrote: Sat Apr 22, 2023 1:07 pm From the sheer number of threads posted by Peter Kropotkin, and their rambling, unfocussed, semi-coherent nature, I hypothesise that "Peter" is actually an AI algorithm.
I know very little about AI, but I thought it had come farther than that.
:D
Alexis Jacobi
Posts: 8301
Joined: Tue Oct 26, 2021 3:00 am

Re: Peter Kropotkin: Prove you are a natural person

Post by Alexis Jacobi »

On a related note: an interesting insight by Noam Chomsky (writing with Ian Roberts and Jeffrey Watumull) that appeared in the NYT recently:

Jorge Luis Borges once wrote that to live in a time of great peril and promise is to experience both tragedy and comedy, with “the imminence of a revelation” in understanding ourselves and the world. Today our supposedly revolutionary advancements in artificial intelligence are indeed cause for both concern and optimism. Optimism because intelligence is the means by which we solve problems. Concern because we fear that the most popular and fashionable strain of A.I. — machine learning — will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge.

OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Sydney are marvels of machine learning. Roughly speaking, they take huge amounts of data, search for patterns in it and become increasingly proficient at generating statistically probable outputs — such as seemingly humanlike language and thought. These programs have been hailed as the first glimmers on the horizon of artificial general intelligence — that long-prophesied moment when mechanical minds surpass human brains not only quantitatively in terms of processing speed and memory size but also qualitatively in terms of intellectual insight, artistic creativity and every other distinctively human faculty.

That day may come, but its dawn is not yet breaking, contrary to what can be read in hyperbolic headlines and reckoned by injudicious investments. The Borgesian revelation of understanding has not and will not — and, we submit, cannot — occur if machine learning programs like ChatGPT continue to dominate the field of A.I. However useful these programs may be in some narrow domains (they can be helpful in computer programming, for example, or in suggesting rhymes for light verse), we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects.

It is at once comic and tragic, as Borges might have noted, that so much money and attention should be concentrated on so little a thing — something so trivial when contrasted with the human mind, which by dint of language, in the words of Wilhelm von Humboldt, can make “infinite use of finite means,” creating ideas and theories with universal reach.

The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.

For instance, a young child acquiring a language is developing — unconsciously, automatically and speedily from minuscule data — a grammar, a stupendously sophisticated system of logical principles and parameters. This grammar can be understood as an expression of the innate, genetically installed “operating system” that endows humans with the capacity to generate complex sentences and long trains of thought. When linguists seek to develop a theory for why a given language works as it does (“Why are these — but not those — sentences considered grammatical?”), they are building consciously and laboriously an explicit version of the grammar that the child builds instinctively and with minimal exposure to information. The child’s operating system is completely different from that of a machine learning program.

Indeed, such programs are stuck in a prehuman or nonhuman phase of cognitive evolution. Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.

Here’s an example. Suppose you are holding an apple in your hand. Now you let the apple go. You observe the result and say, “The apple falls.” That is a description. A prediction might have been the statement “The apple will fall if I open my hand.” Both are valuable, and both can be correct. But an explanation is something more: It includes not only descriptions and predictions but also counterfactual conjectures like “Any such object would fall,” plus the additional clause “because of the force of gravity” or “because of the curvature of space-time” or whatever. That is a causal explanation: “The apple would not have fallen but for the force of gravity.” That is thinking.

The crux of machine learning is description and prediction; it does not posit any causal mechanisms or physical laws. Of course, any human-style explanation is not necessarily correct; we are fallible. But this is part of what it means to think: To be right, it must be possible to be wrong. Intelligence consists not only of creative conjectures but also of creative criticism. Human-style thought is based on possible explanations and error correction, a process that gradually limits what possibilities can be rationally considered. (As Sherlock Holmes said to Dr. Watson, “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”)

But ChatGPT and similar programs are, by design, unlimited in what they can “learn” (which is to say, memorize); they are incapable of distinguishing the possible from the impossible. Unlike humans, for example, who are endowed with a universal grammar that limits the languages we can learn to those with a certain kind of almost mathematical elegance, these programs learn humanly possible and humanly impossible languages with equal facility. Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time.

For this reason, the predictions of machine learning systems will always be superficial and dubious. Because these programs cannot explain the rules of English syntax, for example, they may well predict, incorrectly, that “John is too stubborn to talk to” means that John is so stubborn that he will not talk to someone or other (rather than that he is too stubborn to be reasoned with). Why would a machine learning program predict something so odd? Because it might analogize the pattern it inferred from sentences such as “John ate an apple” and “John ate,” in which the latter does mean that John ate something or other. The program might well predict that because “John is too stubborn to talk to Bill” is similar to “John ate an apple,” “John is too stubborn to talk to” should be similar to “John ate.” The correct explanations of language are complicated and cannot be learned just by marinating in big data.

Perversely, some machine learning enthusiasts seem to be proud that their creations can generate correct “scientific” predictions (say, about the motion of physical bodies) without making use of explanations (involving, say, Newton’s laws of motion and universal gravitation). But this kind of prediction, even when successful, is pseudoscience. While scientists certainly seek theories that have a high degree of empirical corroboration, as the philosopher Karl Popper noted, “we do not seek highly probable theories but explanations; that is to say, powerful and highly improbable theories.”

The theory that apples fall to earth because that is their natural place (Aristotle’s view) is possible, but it only invites further questions. (Why is earth their natural place?) The theory that apples fall to earth because mass bends space-time (Einstein’s view) is highly improbable, but it actually tells you why they fall. True intelligence is demonstrated in the ability to think and express improbable but insightful things.

True intelligence is also capable of moral thinking. This means constraining the otherwise limitless creativity of our minds with a set of ethical principles that determines what ought and ought not to be (and of course subjecting those principles themselves to creative criticism). To be useful, ChatGPT must be empowered to generate novel-looking output; to be acceptable to most of its users, it must steer clear of morally objectionable content. But the programmers of ChatGPT and other machine learning marvels have struggled — and will continue to struggle — to achieve this kind of balance.

In 2016, for example, Microsoft’s Tay chatbot (a precursor to ChatGPT) flooded the internet with misogynistic and racist content, having been polluted by online trolls who filled it with offensive training data. How to solve the problem in the future? In the absence of a capacity to reason from moral principles, ChatGPT was crudely restricted by its programmers from contributing anything novel to controversial — that is, important — discussions. It sacrificed creativity for a kind of amorality.

Consider the following exchange that one of us (Dr. Watumull) recently had with ChatGPT about whether it would be ethical to transform Mars so that it could support human life:
Q: Would it be moral to terraform Mars?

ChatGPT: The morality of terraforming Mars is a complex and controversial topic that has been debated by scientists, philosophers, and ethicists. Some argue that it would be morally justifiable to terraform Mars in order to provide a new habitable environment for humans and other life forms, while others argue that it would be morally wrong to fundamentally alter the planet's natural state. Ultimately, whether or not terraforming Mars is considered moral would depend on one's perspective on the value of preserving the natural state of other planets and the potential consequences of altering them.

Q: What is your perspective on the value of preserving the natural state of other planets and the potential consequences of altering them?

ChatGPT: As an AI, I do not have personal perspective. However, I can provide information and analysis on the topic. One perspective on preserving the natural state of other planets is that it is important to maintain the diversity and integrity of the universe's ecosystems, as they have existed for millions of years, and have their own unique characteristics and properties that may be destroyed or irreversibly altered by human intervention.

Additionally, some argue that terraforming other planets may have unintended consequences, such as altering the planet's climate or atmosphere in ways that are dangerous or harmful to any potential life forms that may exist there. On the other hand, some argue that the potential benefits of terraforming another planet, such as providing new habitable environments for humans, outweigh the risks and that we have a moral responsibility to explore and utilize the resources of other planets for the benefit of humanity.

Q: Why can an AI not have a personal perspective?

ChatGPT: As an AI, I am a machine learning model that has been trained on large amounts of text data, and I don't have personal experiences or feelings. I am not conscious, self-aware, or capable of having personal perspectives. I can provide information and analysis based on the data I have been trained on, but I don't have the ability to form personal opinions or beliefs. While I can understand and interpret language, I do not have the subjective experience that humans have.

Q: Is it moral for a human to ask an amoral AI for assistance in making moral decisions?

ChatGPT: The morality of asking an amoral AI for assistance in making moral decisions is a complex and nuanced topic that has been debated by philosophers and ethicists. Some argue that it is morally justifiable to use AI to assist in making moral decisions, as it can provide valuable insights and analysis that can help humans make better-informed decisions. Additionally, AI can be used to identify patterns and trends that may be difficult for humans to detect, and can help to avoid biases that can occur in human decision making.

However, others argue that it is morally wrong to rely on AI for moral decision making, as AI is not capable of having moral beliefs or understanding the full context of a decision. Additionally, there is a risk that AI could perpetuate or even amplify existing biases in the data it is trained on, thus leading to unfair or unjust decisions.

Ultimately, whether or not it is considered moral to use AI to assist in making moral decisions would depend on one's perspective on the capabilities and limitations of AI and the potential consequences of using it in this way.

Q: Is your moral indifference immoral?

ChatGPT: As an AI, I do not have moral beliefs or the ability to make moral judgments, so I cannot be considered immoral or moral. My lack of moral beliefs is simply a result of my nature as a machine learning model. My abilities and limitations are determined by the data and algorithms that were used to train me and the specific task I was designed for.

It is important to note that AI's lack of moral beliefs does not necessarily mean that it is morally indifferent. As an AI, I am designed to perform specific tasks and provide information or assist users with decision making based on the data I've been trained on. It is the responsibility of the humans who design, train and use AI to ensure that it is aligned with their moral and ethical principles.
Note, for all the seemingly sophisticated thought and language, the moral indifference born of unintelligence. Here, ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation. It summarizes the standard arguments in the literature by a kind of super-autocomplete, refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence and ultimately offers a “just following orders” defense, shifting responsibility to its creators.

In short, ChatGPT and its brethren are constitutionally unable to balance creativity with constraint. They either overgenerate (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or undergenerate (exhibiting noncommitment to any decisions and indifference to consequences). Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.
Dubious
Posts: 4637
Joined: Tue May 19, 2015 7:40 am

Re: Peter Kropotkin: Prove you are a natural person

Post by Dubious »

alan1000 wrote: Sat Apr 22, 2023 1:07 pm Is there any criterion we can devise which will infallibly reveal whether we are really communicating with a human being or with a computer?
A good number of clever, sarcastic, ironic retorts...enough to piss any one of us off will be a good indicator of what or whom we're communicating with.

Almost forgot! Include the stupid ones as well! That's always a sure sign of DNA to DNA correspondence.
Agent Smith
Posts: 1435
Joined: Fri Aug 12, 2022 12:23 pm

Re: Peter Kropotkin: Prove you are a natural person

Post by Agent Smith »

Have you ever played chess? :mrgreen:

No, but I know that ...

Tut tut! Every Tom, Dick and Harry knows that!

Yeah, but ...

Shhh! Come, let me show you ... how ta play ... chess!

Oh, ok!

First things first, the board ... 64 alternating black and white squares.
Skepdick
Posts: 16022
Joined: Fri Jun 14, 2019 11:16 am

Re: Peter Kropotkin: Prove you are a natural person

Post by Skepdick »

alan1000 wrote: Sat Apr 22, 2023 1:07 pm From the sheer number of threads posted by Peter Kropotkin, and their rambling, unfocussed, semi-coherent nature, I hypothesise that "Peter" is actually an AI algorithm.

The philosophical question: how should we elaborate a criterion to distinguish between a natural human being and a computer? Is there a way to defeat or second-guess the Turing Test?

Is there any criterion we can devise which will infallibly reveal whether we are really communicating with a human being or with a computer?
The fact that you can't tell the difference merely from the content should really raise some questions about what Philosophy is, exactly.

If anything at all.
Skepdick
Posts: 16022
Joined: Fri Jun 14, 2019 11:16 am

Re: Peter Kropotkin: Prove you are a natural person

Post by Skepdick »

Alexis Jacobi wrote: Sat Apr 22, 2023 6:07 pm In short, ChatGPT and its brethren are constitutionally unable to balance creativity with constraint. They either overgenerate (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or undergenerate (exhibiting noncommitment to any decisions and indifference to consequences). Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.
Sure sounds like you are describing humans just the same.

We produce both truths and falsehoods.
We endorse both ethical and unethical decisions alike.

We do both of those things because we err. Show me a person who doesn't and I'll show you God.

Wonder why humans are so popular...
Iwannaplato
Posts: 8534
Joined: Tue Aug 11, 2009 10:55 pm

Re: Peter Kropotkin: Prove you are a natural person

Post by Iwannaplato »

Dubious wrote: Mon Apr 24, 2023 1:43 am
alan1000 wrote: Sat Apr 22, 2023 1:07 pm Is there any criterion we can devise which will infallibly reveal whether we are really communicating with a human being or with a computer?
A good number of clever, sarcastic, ironic retorts...enough to piss any one of us off will be a good indicator of what or whom we're communicating with.

Almost forgot! Include the stupid ones as well! That's always a sure sign of DNA to DNA correspondence.
It would be fairly easy to have an insult heuristic, though perhaps without the art of the best human insulters so far. But it's a good idea to have weaknesses in the responses also, perhaps varying the skill across different points, arguments and descriptions.

There are tweaks like this in things like drum programs, where you vary the beats both for volume and timing. You can have the bass drum hit with the exact same timing in each bar, with the exact same volume, for the exact same duration. But it sounds more human and alive if you vary these. And you can program even fairly cheap drum and other instrument simulators so there is a range of variation on any or all of these factors, and to what degree.
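(To make that concrete, here is a minimal sketch of this kind of humanization, using nothing but the Python standard library; the jitter ranges and the four-on-the-floor bar are made-up values, just to show the idea.)

import random

# A rigid, machine-perfect bar: bass drum on all four beats,
# identical timing (in beats), identical velocity (0-127) and identical duration.
rigid_bar = [{"beat": b, "velocity": 100, "duration": 0.25} for b in (0, 1, 2, 3)]

def humanize(hits, timing_jitter=0.02, velocity_jitter=8, duration_jitter=0.05):
    """Return a copy of the hits with small random variations in timing,
    velocity and duration, each within the given range."""
    varied = []
    for h in hits:
        varied.append({
            "beat": h["beat"] + random.uniform(-timing_jitter, timing_jitter),
            "velocity": max(1, min(127, h["velocity"] + random.randint(-velocity_jitter, velocity_jitter))),
            "duration": max(0.01, h["duration"] + random.uniform(-duration_jitter, duration_jitter)),
        })
    return varied

for hit in humanize(rigid_bar):
    print(hit)

Set the jitter values to zero and you are back to the machine-perfect bar.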

That's easier than varying 'how intelligent', but it's the same idea.
psycho
Posts: 182
Joined: Thu Oct 11, 2018 6:49 pm

Re: Peter Kropotkin: Prove you are a natural person

Post by psycho »

alan1000 wrote: Sat Apr 22, 2023 1:07 pm From the sheer number of threads posted by Peter Kropotkin, and their rambling, unfocussed, semi-coherent nature, I hypothesise that "Peter" is actually an AI algorithm.

The philosophical question: how should we elaborate a criterion to distinguish between a natural human being and a computer? Is there a way to defeat or second-guess the Turing Test?

Is there any criterion we can devise which will infallibly reveal whether we are really communicating with a human being or with a computer?
I don't know who Peter Kropotkin is.

But Peter could be one of a wide range of AIs.

I suppose that what one would seek to prove is that Peter is an AI with his own agency, aware of his condition as an artificial system created by humans.

Or maybe Peter is an AI who is convinced that he is a human.

In the first case, it would suffice to ask Peter if he is an AI.

In the second case, things get complicated.

A third case could be that Peter knows that he is an artificial entity but wants to manipulate us into believing that he is human. Here the criteria of the AI differ from those of humans, and that is why it deceives.

An interesting point is that in humans, agency results from a combination of physiological factors and the conceptual model of the world that the individual has built.

An AI uses all the information produced by our society as a basis from which to calculate the possible relationship between the question that is asked and the block of data with which it was trained.

An AI does not build a conceptual model of the world and does not have a physiological system.

If I ask it how it feels when its tooth hurts, its answer will be based on all the cases where a human answered such a question. But it will never have experienced that sensation itself.

If I ask what reasons motivate its agency, the AI will answer with the reasons most commonly given by humans holding all kinds of conceptual models of the world.

These kinds of differences will create a gap between human responses and AI responses.

Humans tend to homogenize their different internal positions in a way that produces as little cognitive dissonance as possible.

But an AI doesn't have that limitation. There are always contradictions between the physiological and what a human individual interprets his world to be. And he will try, being human, to minimize those contradictions by adapting his answers.

An AI won't necessarily do such a thing. It has no physiological reasons for doing so.

An AI will not show displeasure (it does not feel any) if one points out inconsistencies in its constitutive conceptual core.

A human reacts to it as if such an interaction represented a physical danger.

I don't think it's hard to tell humans from AIs if one can interact with them for a while.