PTH wrote: ↑Sun Sep 01, 2019 3:17 pm
Basically this
PTH wrote: ↑Fri Aug 30, 2019 12:49 pmThat said, mental activity must change physical outcomes. Unless our contention is that no-one, ever, was influenced enough to change their behaviour because of something they read. So if I read instructions on how to operate a piece of machinery, do we think that simple conscious understanding of that instructive text had no impact on what I did next?
It's irrelevant.
1. The meaning lies in your decoding (interpretation) of the symbols, not in the symbols themselves. This is trivially testable: if the manual were in Chinese, it would have no effect on you, because you don't understand the symbols.
2. The person reading the symbols decides whether the information they extracted from those symbols is relevant to the task at hand.
Demonstration: A bottle of clear liquid has a label: 1001101 1100001 1100100 1100101 100000 1101001 1101110 100000 1001010 1100001 1110000 1100001 1101110
Do you drink it?
What if the label said 1010000 1101111 1101001 1110011 1101111 1101110 100001 ?
After you figure out that the first label says "Made in Japan", and the second label says "Poison!" maybe you will notice that the answer to the question "Do you drink the poison Made in Japan?" doesn't really depend on the label's meaning.
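For the curious, the labels above are just ASCII character codes written out in binary. A few lines of Python recover them (the helper name `decode_label` is mine, for illustration):

```python
def decode_label(bits: str) -> str:
    """Decode a space-separated string of binary ASCII codes into text."""
    return "".join(chr(int(b, 2)) for b in bits.split())

# The two labels from the bottle:
print(decode_label("1001101 1100001 1100100 1100101 100000 "
                   "1101001 1101110 100000 1001010 1100001 "
                   "1110000 1100001 1101110"))  # Made in Japan
print(decode_label("1010000 1101111 1101001 1110011 "
                   "1101111 1101110 100001"))   # Poison!
```

Note that the decoder carries all the work: the bit strings mean nothing to a reader who lacks the encoding, which is exactly point 1 above.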
What alters your behaviour is your objective. If you are trying to quench your thirst - you don't drink it.
If you are trying to commit suicide - you drink it.
You are attributing causality to meaning where it had none.
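The point can be sketched as a toy decision function (my framing, purely illustrative): the action is a function of the agent's objective; the label's decoded meaning is only an input, never the cause.

```python
def act(objective: str, label_meaning: str) -> str:
    """Toy model: the objective, not the label, determines the action."""
    is_poison = (label_meaning == "Poison!")
    if objective == "quench thirst":
        return "leave it" if is_poison else "drink"
    if objective == "commit suicide":
        return "drink" if is_poison else "leave it"
    return "ignore"

# The same label produces opposite behaviour under different objectives:
print(act("quench thirst", "Poison!"))    # leave it
print(act("commit suicide", "Poison!"))   # drink
```

Identical input, opposite outputs: the causal work is done by the objective.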
Stanley Cavell argued that if you can't extract an intention from a piece of text, then it is meaningless. I agree with him.
If a cloud appeared in the sky spelling out the word "LOVE" it means nothing. You may recognize it as an English word, but nobody intended to use it.
It's subject to the infinite monkey theorem.
Il n'y a pas de hors-texte ("there is no outside-text") --Jacques Derrida
PTH wrote: ↑Sun Sep 01, 2019 3:17 pm
Bearing in mind, in contrast, that understanding forms no part of how a computer "follows" instructions, any more than a ratchet screwdriver understands that I want to take out a screw.
Why do you always pick the trivial examples? Why didn't you add self-driving cars to this list?
Want to compare how algorithmic drivers make choices compared to human drivers?
Want to discuss how autonomous vehicles parse road signs and sometimes obey them, sometimes ignore them? Or is that detrimental to your argument?
PTH wrote: ↑Sun Sep 01, 2019 3:17 pm
It's usually a problem if you find yourself responding to the points you wish people would make, and not the points they actually make.
Kettle, meet pot. It was you who proposed a two-valued logic (things you know, things you don't know), but then you went on to propose that consciousness is magic fairy dust, and there is no way you can know that.
I am merely holding you accountable to some epistemic consistency.
Gravitons are hypothetical. Exactly like consciousness.
You can no more tell me what gravity IS than you can tell me what consciousness IS.
Because when you talk about what things ARE, you are doing metaphysics.