Materialism is logically impossible

So what's really going on?


Post by Noax »

Thanks for the replies, Ginkgo.
Ginkgo wrote:Experience is the extra part. If you kick a p-zombie in the shins he will say "ouch" and get very angry with you. But he is not really angry and he feels no pain, it is just a programmed response on his part.
The difference between the two seems to be just the choice of language used to describe one and the other. The p-zombie says ouch because he is thus programmed, perhaps, but I also might be thus programmed. The zombie would not have said ouch without experiencing it. How else would it know when to say "ouch"? This is why I find no hard problem. I just don't see the difference between the two cases.
Your laptop computer probably has a health button that gives you information about operating temperature, battery charge and how efficiently your computer is processing information. One could imagine a program whereby your computer tells you how it is feeling today rather than just presenting technical information for you to read. So if the battery is low and the operating temperature is high, it could tell you it is feeling run-down and very sluggish today. But of course this is just a programmed response; it can't really experience being run-down and sluggish.
It can if I choose to word it differently. My hunger and its low battery seem pretty similar to me. My looking to food in response to that state is a very instinctual programmed response.
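A program of the sort Ginkgo describes, and the symmetry Noax points to, can be made concrete in a few lines. This is only an illustrative sketch; the sensor names, thresholds and wording are invented for the example:

```python
# Hypothetical laptop "health" report: raw sensor readings are mapped
# to feeling-words by fixed thresholds. The "feeling" is nothing more
# than a lookup on the same data the technical readout would show.
def health_report(battery_pct: float, temp_c: float) -> str:
    feelings = []
    if battery_pct < 20:   # low battery -> "run-down"
        feelings.append("run-down")
    if temp_c > 80:        # high operating temperature -> "sluggish"
        feelings.append("sluggish")
    if not feelings:
        return "I'm feeling fine today."
    return "I'm feeling " + " and ".join(feelings) + " today."

print(health_report(battery_pct=12, temp_c=85))
```

Whether such a threshold lookup differs in kind from hunger, or only in degree, is exactly the point under dispute.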

Do you understand my issue? All the explanations seem question-begging: the experience is arbitrarily assigned to whatever you want to have it (what has it is a story that varies from one opinion to the next). So apart from somebody's opinion that X says "ouch" because it feels pain, and Y says "ouch" because it is programmed to do so when it detects a potentially damaging blow, what does X have that Y doesn't? What are we unable to explain under naturalism? Why should I not consider myself a programmed machine?
The hard problem is definitely not science.
Agree.

Small nit on the prior post: Photons have relativistic mass, just not rest mass, which is not an issue since they cannot be at rest. They have inertia and gravity and the usual properties of things with mass. But velocity is an example of something with no mass, yet it is the stuff of physics. Consciousness is a process in the materialist view, and a process, like velocity, is a relation.
Post by Greta »

Noax wrote:
The hard problem is definitely not science.
Agree.
Are you saying that because science requires a relatively objective third perspective?

I see no reason why the link between neuronal states and qualia can't be teased out with more sophisticated tools. After all, illness was caused by evil spirits before bacteria were discovered. I wouldn't be surprised if what we think of as qualia pertains to quantum or Planck scale states.

Emotions and morality appear to be key aspects of qualia. Take them away and you have sensory and abstract processing that could theoretically be readily replicable by advanced general AI. Take away emotions and morality and the idea of the remaining "consciousness" being fundamental becomes less controversial. Consciousness is simply having some idea what's going on, and given that every particle interacts with others in specific ways they certainly do "know" what's going on in their environment, even if they have no opinion. Opinions aren't fundamental, but the information, the relationships, on which opinions are built appears to be.

What are emotions but an evolutionary hack to get organisms behaving in complex ways as though they were processing much more information than they otherwise would? So a child might be kind to a stray kitten through emotion. An AI might be kind because it is programmed to promote growth rather than entropy, so it may perform all manner of computations of the numerous ways one may respond to a stray kitten, and determine that stroking in certain ways, adjusting to the animal's responses in real time, will promote maximal growth. Or, in a more sinister scenario, it may kill the kitten after determining that the kitten is destined to grow up to be a feral pest, calculate the complexity of the potential adult cat and deem it less than the complexity lost in ecosystems by a feral cat, thus ultimately promoting growth. Naturally, we would need special human provisos to avoid our own strategic extermination.

Nonetheless, the emotional reality hack is not needed if enough information is available. So emotions as instinctive motivational responses to informational cues, while currently being greatly valued by much of humanity, could be superseded by intelligent processing and deeper understanding of scenarios, calculating maximum "good". So while future AI wouldn't be motivated by love to "do good", it could still do more or less the same things as a person of goodwill, driven by a programmed impulse to achieve ends with minimum possible entropic effects on other entities. Further (if the kitten scenario was sorted out), it could potentially achieve much kinder and more sustainable results than humans through fewer unintended consequences, like a chess player thinking multiple moves ahead.

Instead of anger, which acts as a display behaviour to deter those who would gain by inflicting entropy on us (I'm trying to think like a machine here), an intelligent AI would use strategic means to nullify needlessly entropic threats. Again, if programmed well it would achieve its ends with the "lightest possible touch", and actively work towards win-win solutions.

Still, without emotion, there are only algorithms - primary directives - to "motivate" AI to do anything. In a sense, when emotional human life is no longer viable on the Earth the only point to the persistence of nonhuman creations would be a sense of duty to perpetuate the Earth's story, to propagate new biospheres, to continue a new evolutionary line from self-improving intelligent machines, or we may simply send informational packages into space to maybe be picked up by future advanced life, keeping some of the Earth's story alive even after it's been engulfed by the Sun in five billion years' time.
Post by Ginkgo »

Noax wrote:The difference between the two seems to be just the choice of language used to describe one and the other. The p-zombie says ouch because he is thus programmed, perhaps, but I also might be thus programmed. The zombie would not have said ouch without experiencing it. How else would it know when to say "ouch"?
A pinball machine is designed to detect movement, so when it is shaken, big letters appear saying "TILT" and the machine shuts down. One could just as easily programme the machine to say "ouch" whenever it is shaken. Obviously, the machine doesn't really feel any pain, it just gives the impression of feeling pain. The machine doesn't know "what it is like" to feel pain. To a certain extent we are programmed to realize certain responses, much like a p-zombie or a computer, and this is pretty much what Chalmers calls "the easy problem of consciousness". I don't think the harder problem can be understood without considering the easy problem.
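The pinball response can likewise be sketched in a couple of lines, which is part of what makes it a useful intuition pump; the threshold value here is invented for illustration:

```python
# Toy version of the pinball machine's programmed response: a shake
# above a fixed threshold triggers an "ouch" and a shutdown, and
# nothing else happens anywhere in the system.
def react_to_shake(acceleration_g: float, threshold_g: float = 1.5) -> str:
    if acceleration_g > threshold_g:
        return "ouch (TILT - shutting down)"
    return "ok"
```

Nothing in the program corresponds to "what it is like" to be tilted; it is a single branch on a number.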
Noax wrote: It can if I choose to word it differently. My hunger and its low battery seem pretty similar to me. My looking to food in response to that state is a very instinctual programmed response.
True, but the difference is that you know what it is like to feel hungry. That's the extra bit, or the hard problem.
Noax wrote: Do you understand my issue? All the explanations seem question-begging: the experience is arbitrarily assigned to whatever you want to have it (what has it is a story that varies from one opinion to the next). So apart from somebody's opinion that X says "ouch" because it feels pain, and Y says "ouch" because it is programmed to do so when it detects a potentially damaging blow, what does X have that Y doesn't? What are we unable to explain under naturalism? Why should I not consider myself a programmed machine?

To some extent we are but we have something extra that machines and p-zombies don't.
Noax wrote: Small nit on the prior post: Photons have relativistic mass, just not rest mass, which is not an issue since they cannot be at rest. They have inertia and gravity and the usual properties of things with mass. But velocity is an example of something with no mass, yet it is the stuff of physics. Consciousness is a process in the materialist view, and a process, like velocity, is a relation.
Could you expand on the word "relation" please.
Last edited by Ginkgo on Sun Sep 11, 2016 10:21 am, edited 3 times in total.
Post by Ginkgo »

Hi Greta
Greta wrote: Are you saying that because science requires a relatively objective third perspective?


Yes, this is definitely part of the problem.
Greta wrote: I see no reason why the link between neuronal states and qualia can't be teased out with more sophisticated tools. After all, illness was caused by evil spirits before bacteria were discovered. I wouldn't be surprised if what we think of as qualia pertains to quantum or Planck scale states.
We are still a long way from a quantum explanation for consciousness, but there is some progress in this area, mainly through the efforts of Hameroff and Penrose. One must acknowledge the efforts of David Bohm as well. One day a quantum explanation for consciousness will be the standard.
Greta wrote:
Emotions and morality appear to be key aspects of qualia. Take them away and you have sensory and abstract processing that could theoretically be readily replicable by advanced general AI. Take away emotions and morality and the idea of the remaining "consciousness" being fundamental becomes less controversial. Consciousness is simply having some idea what's going on, and given that every particle interacts with others in specific ways they certainly do "know" what's going on in their environment, even if they have no opinion. Opinions aren't fundamental, but the information, the relationships, on which opinions are built appears to be.
You have a lot of information in one paragraph. Emotions and morality are, to some extent, part of our experiences. In many cases they are part of our human make-up.
Greta wrote:

What are emotions but an evolutionary hack to get organisms behaving in complex ways as though they were processing much more information than they otherwise would? So a child might be kind to a stray kitten through emotion. An AI might be kind because it is programmed to promote growth rather than entropy, so it may perform all manner of computations of the numerous ways one may respond to a stray kitten, and determine that stroking in certain ways, adjusting to the animal's responses in real time, will promote maximal growth. Or, in a more sinister scenario, it may kill the kitten after determining that the kitten is destined to grow up to be a feral pest, calculate the complexity of the potential adult cat and deem it less than the complexity lost in ecosystems by a feral cat, thus ultimately promoting growth. Naturally, we would need special human provisos to avoid our own strategic extermination.

Nonetheless, the emotional reality hack is not needed if enough information is available. So emotions as instinctive motivational responses to informational cues, while currently being greatly valued by much of humanity, could be superseded by intelligent processing and deeper understanding of scenarios, calculating maximum "good". So while future AI wouldn't be motivated by love to "do good", it could still do more or less the same things as a person of goodwill, driven by a programmed impulse to achieve ends with minimum possible entropic effects on other entities. Further (if the kitten scenario was sorted out), it could potentially achieve much kinder and more sustainable results than humans through fewer unintended consequences, like a chess player thinking multiple moves ahead.

Instead of anger, which acts as a display behaviour to deter those who would gain by inflicting entropy on us (I'm trying to think like a machine here), an intelligent AI would use strategic means to nullify needlessly entropic threats. Again, if programmed well it would achieve its ends with the "lightest possible touch", and actively work towards win-win solutions.

Still, without emotion, there are only algorithms - primary directives - to "motivate" AI to do anything. In a sense, when emotional human life is no longer viable on the Earth the only point to the persistence of nonhuman creations would be a sense of duty to perpetuate the Earth's story, to propagate new biospheres, to continue a new evolutionary line from self-improving intelligent machines, or we may simply send informational packages into space to maybe be picked up by future advanced life, keeping some of the Earth's story alive even after it's been engulfed by the Sun in five billion years' time.

Again, you have a lot of information here and we would need to work through it.
Post by Hobbes' Choice »

Materialism is logically necessary.

Existence has to have substance.
Matter is the only substance.
Matter makes existence substantial.
Post by Greta »

Ginkgo wrote:Hi Greta
Greta wrote: Are you saying that because science requires a relatively objective third perspective?


Yes, this is definitely part of the problem.
Maybe this is one area where a return to 19th century style Gonzo science and self experimentation could be helpful?
Ginkgo wrote:We are still a long way from a quantum explanation for consciousness, but there is some progress in this area, mainly through the efforts of Hameroff and Penrose. One must acknowledge the efforts of David Bohm as well. One day a quantum explanation for consciousness will be the standard.
If I were a gambler (which I'm not), I'd put my money on the answer to consciousness being at the Planck scale or even smaller. If there is one domain where perhaps information cannot be lost, it might be at that very baseline of physical reality, too small to meaningfully be thought subject to space and time.

Ginkgo wrote:
Greta wrote: Emotions and morality appear to be key aspects of qualia. Take them away and you have sensory and abstract processing that could theoretically be readily replicable by advanced general AI. Take away emotions and morality and the idea of the remaining "consciousness" being fundamental becomes less controversial. Consciousness is simply having some idea what's going on, and given that every particle interacts with others in specific ways they certainly do "know" what's going on in their environment, even if they have no opinion. Opinions aren't fundamental, but the information, the relationships, on which opinions are built appears to be.
You have a lot of information in one paragraph. Emotions and morality are, to some extent, part of our experiences. In many cases they are part of our human make-up.
Heh, yes, I have more claims than certainty.

Still, it seems to me that the entire notion of mysticism - spirits and souls - is based on emotion. Trouble is, you need working glands to emote. Whatever aspects of consciousness can be posited to be fundamental, emotion would seem not to be one of them (although it is certainly an important and powerful emergence).

What is qualia without emotion? The closest I can think of would be a highly focused goal-oriented state. Rather than being motivated by emotion, the motivation would come from prime directives/original orientation. Instead of morality there would be evolving algorithms based on the instructions provided by moral beings (humans). As speculated earlier, decisions would be based on assessment of relative growth and entropy, which is what we do instinctively anyway. Rather than fight-or-flight, an emotionless consciousness might have fast enough processing speed to act both immediately and prudently in a way that we'd consider superhuman.

Might we be prepared to do away with suffering - grief, loss, pain and dread of torture and death - with the only price being to do away with life's joys? Maybe life on an ever more resource-poor Earth in the future is no picnic? High level cognition of everything, but no care whatsoever, rather total immersion in the moment.

Is consciousness without emotion synonymous with consciousness without qualia?
Post by Terrapin Station »

Immanuel Can wrote:But ideologies are what they are trying to believe -- consistently or inconsistently.
But people aren't "trying to believe" something that somehow obtains and that isn't what they already believe. You can feel that what they believe is inconsistent, but even when that's the case, it's a straw man when you present their views as something other than what they actually (inconsistently, in your opinion) believe.
Post by Immanuel Can »

Terrapin Station wrote:
Immanuel Can wrote:But ideologies are what they are trying to believe -- consistently or inconsistently.
But people aren't "trying to believe" something that somehow obtains and that isn't what they already believe. You can feel that what they believe is inconsistent, but even when that's the case, it's a straw man when you present their views as something other than what they actually (inconsistently, in your opinion) believe.
No, no...you've got me terribly wrong. :lol: I don't "feel" they're inconsistent: because whether or not they are is not at all a matter of "feeling." It's a matter of logic. Their inconsistency is rationally verifiable or disprovable: so if you spell out a particular case, it would be quite possible to prove it one way or another. But "feelings," well, they're just irrelevant. If one believes the basic premises of Materialism, then by definition one is a Materialist. If one does not, then one IS not, whether one thinks one is or not.

It's that simple. 8)

Now, the subject here is Materialism. (See the OP). The terms on which we are discussing them is "logic" (Again, see the OP: just look above). We are not asking if people are sociologically inclined to be hypocrites, or foolish enough to misunderstand their own professed belief system: we are asking if they're logically entitled to Materialism. We are therefore discussing only those people who do actually believe in that ideology's basic premises. If they do not, then they are not being indicted -- or even mentioned -- here. :shock:

Ironically, there's no "straw man" except the one you're so desperately trying to fashion. I can only recommend that you start a strand called "People Who Think They're Materialists But Don't Actually Understand What Materialism Is," then you can talk about that all you want, and defend their right to be misunderstood on their own terms, or to misunderstand their own ideology all they want: and everybody else who thinks it's a good topic can join you.

Me, well, all I can tell you is that the empirical instrument is yet to be invented that can measure my indifference to that topic. :wink:
Post by Noax »

Sorry for slow response on this earlier note.
Greta wrote:It seems to me that qualia pertains to the emotional experience of being. Take away the emotion and you have something akin to the protagonists of Chalmers's philosophical zombie thought experiment. It seems to me, though, that emotion is efficacious but perhaps its efficacy is reducing as tasks that once required instinct can be more reliably performed via analytics. Maybe much of what we value about consciousness is that which hinders us and renders us chronically unethical? Our best conscious intentions are regularly thwarted by vestigial unconscious impulses like the fight-or-flight or PTSD response.
I hadn't seen it described as emotion before. Emotion seems to be not just different synaptic states but also a chemical change. In that sense, there is little outside of biology to which emotional attributes might be attributed.
Your comments speak quite a bit of subconscious functions and their interactions with the conscious as defined as "not subconscious". There are several other definitions, and the consciousness of the hard problem is not always defined as "not subconscious". My subconscious seems to have qualia for instance. There is also 'awake' as opposed to the lowered-consciousness 'asleep' or absent consciousness of 'unconscious' of being knocked out.
Perhaps the human kind of consciousness will produce emergent phenomena, just as cephalised, conscious animals emerged from purely reflexive life forms. All contemplative traditions refer to selfless Zen-style flow states and meditative states which are keenly focused in the present moment, unhindered by mental and emotional static. This seems to me to not be miles from how advanced future humans or general AI may operate - with far more efficient, unhindered focus.
Why might said unhindered focus be a better thing? My son has unusually unhindered focus. He does amazing things, but he's all the less fit for survival because of it. Evolution would have him in an hour were it not for my efforts.
Greta wrote:Are you saying that because science requires a relatively objective third perspective?
One can always experiment in first person. Clues are gathered from the first-person experience of everyday life, and from first-person accounts of abnormalities (surgery, diseases, etc.). In the end, the data gathered in this manner is evidence and thus science. The interpretation of it is not. I find the examples I just stated to be strong empirical evidence, yet IC considers there to be absolutely zero evidence. What science discovers is therefore not the hard part of the problem. That's about the best I can word it. If the hard problem is examined only by logic, one still must take empirical knowledge as a foundation of the logic.
I see no reason why the link between neuronal states and qualia can't be teased out with more sophisticated tools. After all, illness was caused by evil spirits before bacteria were discovered. I wouldn't be surprised if what we think of as qualia pertains to quantum or Planck scale states.
The links have been observed, and they operate on macroscopic scale. There seem to be no quantum level constructs in biology, which is too bad since such constructs would have evolved if there was useful information to be gathered there.
Emotions and morality appear to be key aspects of qualia. Take them away and you have sensory and abstract processing that could theoretically be readily replicable by advanced general AI.
This seems to confine possible consciousness to humans only, at least if we're the only ones that are moral. Or could a different thing be conscious if it had a moral code of its own, albeit a completely different one than ours? Morality seems a social construct to me. I am conscious of the way interactions between others should be done.
What are emotions but an evolutionary hack to get organisms behaving in complex ways as though they were processing much more information than they otherwise would? So a child might be kind to a stray kitten through emotion. An AI might be kind because it is programmed to promote growth rather than entropy, so it may perform all manner of computations of the numerous ways one may respond to a stray kitten, and determine that stroking in certain ways, adjusting to the animal's responses in real time, will promote maximal growth. Or, in a more sinister scenario, it may kill the kitten after determining that the kitten is destined to grow up to be a feral pest, calculate the complexity of the potential adult cat and deem it less than the complexity lost in ecosystems by a feral cat, thus ultimately promoting growth. Naturally, we would need special human provisos to avoid our own strategic extermination.
Kittens are cute and evoke a different response than does a spider, despite them both being predators of things I don't want around. Not sure what bearing all this has on the hard problem of consciousness.
Instead of anger, which acts as a display behaviour to deter those who would gain by inflicting entropy on us (I'm trying to think like a machine here), an intelligent AI would use strategic means to nullify needlessly entropic threats. Again, if programmed well it would achieve its ends with the "lightest possible touch", and actively work towards win-win solutions.
Touching lightly might not be a moral incentive. In the long run, I could find no definition of 'good' that would dictate a rational choice of action over human gut instincts. I've pondered how I would set things straight if I were boss of the world. All I concluded is that I could not identify a course of action that was 'good', and I could not personally bear the weight of doing so if one was identified.
Still, without emotion, there are only algorithms - primary directives - to "motivate" AI to do anything. In a sense, when emotional human life is no longer viable on the Earth the only point to the persistence of nonhuman creations would be a sense of duty to perpetuate the Earth's story, to propagate new biospheres, to continue a new evolutionary line from self-improving intelligent machines, or we may simply send informational packages into space to maybe be picked up by future advanced life, keeping some of the Earth's story alive even after it's been engulfed by the Sun in five billion years' time.
That was sort of the long-term goal for which I was searching. I like that the goals of humanity were not even mentioned in that description. What is good on Earth that the rest of the universe might wish to have perpetuated?
Post by Terrapin Station »

Immanuel Can wrote:No, no...you've got me terribly wrong. :lol: I don't "feel" they're inconsistent: because whether or not they are is not at all a matter of "feeling." It's a matter of logic.
We wouldn't have the same view on the ontology of logic.
Post by Immanuel Can »

Terrapin Station wrote:We wouldn't have the same view on the ontology of logic.
Are you on the right strand, then? This one is about what is "logically" possible or impossible. And likewise, as I have pointed out before, it's about Materialism not Materialists. As you probably realize, philosophy focuses on the coherence of concepts, rather than the statistical examination of which people contingently happen to believe what. So maybe what you're looking for is a sociology forum, not a philosophy one...

Is that possible? :?

Nothing's wrong with that. But even so, it would no doubt be most useful to know whether those who are self-describing as "Materialists" are self-describing as something potentially logical and rational, or merely as something incoherent and idiosyncratic. That would determine whether your sociological study were of rational or irrational cases, at the very least.

So doing the philosophy first would be a good idea either way, don't you think?
Post by Terrapin Station »

Immanuel Can wrote:
Terrapin Station wrote:We wouldn't have the same view on the ontology of logic.
Are you on the right strand, then? This one is about what is "logically" possible or impossible. And likewise, as I have pointed out before, it's about Materialism not Materialists. As you probably realize, philosophy focuses on the coherence of concepts, rather than the statistical examination of which people contingently happen to believe what.
Re my ontology of logic, you can't somehow get "beyond" "which people contingently happen to believe what."
Post by Immanuel Can »

Terrapin Station wrote:Re my ontology of logic, you can't somehow get "beyond" "which people contingently happen to believe what."
a) I don't know what your "ontology of logic" is. You haven't said. The entire concept sounds dubious. Still, feel free to expound your view.

b) This is a philosophy forum. People who do not understand philosophy, or who are incapable of holding a logically consistent belief in their heads are not the subject. They're a great subject for psychology or sociology, probably; but they're bad representatives of their own ideology, and it is uncharitable to use them to represent their philosophy as if they were exemplars of it when they're clearly not.

You should want that. That is, if you have any interest in a rational defence of Materialism. You should want Materialism represented in the form that is most logical, rational and consistent. Absent that, it can be faulted twice: once for being irrational on its own terms, and secondly because the exemplars of the belief can't even get it right. If you hang that baggage on Materialism, it's not going to win any arguments.
Post by Noax »

Ginkgo wrote:A pinball machine is designed to detect movement, so when it is shaken, big letters appear saying "TILT" and the machine shuts down. One could just as easily programme the machine to say "ouch" whenever it is shaken. Obviously, the machine doesn't really feel any pain, it just gives the impression of feeling pain.
Define pain then. It notices the undesired action and reacts appropriately to it. It is not a human feeling or reaction, but the hard problem is not about being human. The pinball machine feels pain.

Not trying to be difficult. I suspect the problem would be solvable and not 'hard' if simple definitions existed for what exactly is hard to explain. But my view has been that if nobody can identify exactly what is missing from the pinball machine, then perhaps no such distinction exists. All I see is designations of "I have it, but example X doesn't". Nobody even defines the line where things stop having it. Just: this thing (the pinball machine) is clearly on the other side of this arbitrary line I refuse to define.
The machine doesn't know "what it is like" to feel pain.
It doesn't know what it is like to feel human pain or hunger, and you don't know what it is like for it to feel a TILT or for a laptop to feel low battery. But the bottom line is the pinball machine very much does know, else it would not react to the tilt. So we seem to be in pursuit of the wrong thing in asserting the pinball machine doesn't know, or we fail to have an acceptable definition of 'knowing'. Give me one that distinguishes between conscious and not, because I seem to be able to do this analysis with any example.
To a certain extent we are programmed to realize certain responses, much like a p-zombie or a computer, and this is pretty much what Chalmers calls "the easy problem of consciousness". I don't think the harder problem can be understood without considering the easy problem.
I think we've been discussing the easy problem, yes.
True, but the difference is that you know what it is like to feel hungry. That's the extra bit, or the hard problem.
Since I don't know what it is like for the laptop to feel low on battery, I see the two situations as exactly symmetrical.
To some extent we are but we have something extra that machines and p-zombies don't.
I think not. I find it impossible to grasp that a p-zombie would function undetectably the same without what I personally would consider to be human consciousness. Hence, if there really is something extra, my inability to see it would mean I am such a p-zombie. That would actually make sense to me. The Chalmers picture is one where the zombies are quite capable of functioning without the consciousness. I am missing several body parts, including some I'd rather not have lost. But if the consciousness were taken, I don't think what was left would function at all.
Could you expand on the word "relation" please.
A relationship between two or more physical states. Velocity is a relationship between an object and a reference object (often the ground), or an abstract reference frame. Process seems to be a dynamic set of relationships between states. Combustion, for instance, is a chemical process relating oxygen and some sort of fuel with which the oxygen reacts to form different chemicals.
So it is unclear if process can be grouped in with regular static relationships like velocity or weight, but it is clear that process is not a physical object itself, with properties like mass.
Post by Noax »

Hobbes' Choice wrote:Materialism is logically necessary.

Existence has to have substance.
Matter is the only substance.
Matter makes existence substantial.
While I'm hardly opposed to the conclusions, how do you justify this post?

Immaterial mind doesn't count as a different sort of substance, or a different set of properties of substance, or a substance but not an existing one?

I would say dualism is logically unnecessary, but you go further with the positive assertion.