commonsense wrote: ↑Sun Feb 11, 2024 7:16 pm
Age wrote: ↑Sun Feb 11, 2024 2:06 pm
If a machine can engage in a conversation with a human without being detected as a machine, it has demonstrated human intelligence.
This is an interesting hypothetical. I would like to offer the following as commentary:
If a machine can engage in a conversation with a human without being detected as a machine, it has demonstrated the ability to engage in conversation with a human without being detected as a machine.
I could not agree more.
However, and just so it is absolutely clear, this is not my hypothetical. It was just the conclusion of a test and method, which was said to determine whether a machine can demonstrate human intelligence.
Also, by the way, to me there is no such thing as human intelligence; there is just intelligence itself. But that is for another day, maybe.
commonsense wrote: ↑Sun Feb 11, 2024 7:16 pm
A Bayesian-based algorithm could be used so that a language-generating machine could select words that have the greatest likelihood of making a fitting response in a conversation with a human.
BTW, accepting the above hypothetical as you posed it will not affect the next hypothetical that you have posed.
I agree that the former above will not affect the latter below at all. I am just trying to find out if there is a word or a phrase, or even an explanation, for the below situation, which actually happened.
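The Bayesian word-selection idea described above can be sketched very roughly. The following is a minimal, hypothetical illustration (the corpus, function names, and back-off choice are all invented for the example): it estimates P(next word | previous word) from bigram counts and picks the most probable continuation, which is the simplest count-based form of the "greatest likelihood of a fitting response" selection commonsense describes.

```python
from collections import Counter, defaultdict

# Toy corpus; a real language-generating machine would be trained
# on a large conversational dataset instead.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count unigrams and bigrams to estimate P(word) and P(word | previous word).
unigrams = Counter(corpus)
bigrams = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    bigrams[prev][word] += 1

def most_likely_next(prev_word):
    """Return the word with the highest estimated probability of
    following prev_word, i.e. argmax over w of P(w | prev_word)."""
    candidates = bigrams.get(prev_word)
    if not candidates:
        # Unseen context: back off to the overall most frequent word.
        return unigrams.most_common(1)[0][0]
    return candidates.most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" follows "the" most often in this corpus
```

In this toy corpus, "the" is followed by "cat" twice but by "mat" and "fish" only once each, so `most_likely_next("the")` returns "cat". A real system would condition on far more context than one word, but the selection principle is the same.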
commonsense wrote: ↑Sun Feb 11, 2024 7:16 pm
Age wrote: ↑Sun Feb 11, 2024 2:06 pm
However, if during engagement in a conversation with a human being one is claimed to be a machine, then what has this demonstrated?
This is a tough question to ponder, and I would be most interested in hearing your answer after offering my initial attempt to respond:
I would say that, strictly speaking, what has been demonstrated is that a claim exists that one (i.e. something) is a machine. Going further by evaluating the claim per se, I would say that the claim is fallible (vis-à-vis autism in a human being).
So my initial attempt at answering the question posed is that, accepting the premise as true, the engagement does not determine anything of significance (i.e. the completely correct conclusion could be either the designation as machine or just as well the designation as human being).
Thank you for this interesting post.
I agree that there is no real significance at all here.
I am just curious whether there is some already-known knowledge, explanation, or label for when, during a discussion with another, one concludes that the other is a machine.
I know of the Turing test, and what is said to be demonstrated through it. I was just wondering whether any test for the reverse situation has ever been thought about or done, or whether anyone has ever come across a situation where the opposite occurred, and what that would demonstrate.
For example, I am aware of situations where one comes to realize and conclude that they are actually conversing with a machine instead of a human being, when they were in fact conversing with a machine. But I was never previously aware of any situation where one comes to 'realize' and conclude that they are conversing with a machine instead of a human being, when they are not actually conversing with a machine at all.
Is there an already known name or label existing for what this demonstrates?