Consciousness has always been a serious embarrassment for those who believe that everything is physical and that physics is the most authoritative account of the material world. There is, it seems, nothing in matter or energy as seen through the eyes of physics that explains how a part of the material world might become aware of itself and the world surrounding it, as is the case with conscious subjects, such as readers of Philosophy Now. Physicalism cannot account for the emergence of minds from a purely physical reality.
What does it mean to be "physical"? Sure, dualism is not a valid position, but neither is a "physical" monist position, for the reasons described in this article: one still has to account for the mind, illusion or not.
Science describes objects as the interaction, or relationship, of smaller objects (i.e., cells are made of molecules, molecules of atoms, atoms of protons, neutrons, and electrons, etc.), all the way down. So if each object is just a relation of smaller objects all the way down, where, or what, is the "physical" nature of reality? What does it mean to be "physical"?
Some physicalists propose that reality is simply a relation of mathematics, or that mathematics is fundamental. But mathematics is an abstraction.
In my opinion, reality is not physical or mental. It is informational. Information is fundamental.
Chalmers’ list of ‘easy’ problems in the philosophy of mind includes: how the brain can discriminate, categorize, and react to environmental stimuli; our capacity to describe our mental states; our ability to focus our attention, or deliberately to control our behaviour; how our cognitive systems acquire and integrate information; and the difference between wakefulness and sleep. Yet for me, understanding all these aspects of consciousness is as hard as explaining experience. Attention, deliberation about one’s behaviour, and wakefulness, are also things about which we can ask the question, “What is it like?” Likewise dream-filled sleep. Indeed, if these features did not feel like anything – if there was nothing it was like to experience them – they would not be what they’re supposed to be. What’s more, difficult questions would remain about why they at least seem to feel like something. ‘Seeming to feel like’ is no more amenable to a physicalist explanation than ‘feeling like’, as seeming also presupposes experience.
Exactly. Even the study of neurology requires conscious experience. All descriptions of the brain, its neurons, and their activities are visual descriptions. Observations are made, hypotheses proposed, and then tested with more observations. Science itself is the act of categorizing one's observations (conscious experiences) in order to make predictions of future observations (conscious experiences).
If consciousness is an illusion then so are all the observations made by science, including neurology.
There is an obvious objection to illusionism. How can I be wrong about the existence of something for which I have inescapable evidence? Of course, I can be mistaken as to the existence of something that I believe is out there because of my experiences: I may incorrectly think I have seen a red apple. Or I may incorrectly classify it as a plum. I cannot, however, be mistaken that I have had an experience of a colour (even though I might misname it because I have a poor grasp of colour terminology).
Agreed. Even though I know that a mirage is not a real puddle of water on the ground, I still experience the "illusion". The cause of the "illusion" is explained as a product of the behavior of light and its interaction with our eye-brain system. The causes of a mirage aren't waved away without some explanation as to how or why it appears in the first place. So why would describing the mind as an illusion be any different?
It is, however, far from clear why brain mechanisms should enter awareness at all. What purpose does consciousness serve? Most organic and all artefactual mechanisms proceed without being made aware of themselves. Granted that the clever things that go on in computers are, for us who use them, handily reduced to icons, just as Dennett notes – but that reduction is not for the benefit of the computer, but for the (conscious) user for whom the computer is simply a tool to support a certain range of activities. This is hardly analogous to our minds’ relationship to our brains.
To me, the purpose of consciousness is learning. Consciousness is basically a sensory feedback loop in which we make an observation, adapt our behavior, observe the effects of our behavior, and then repeat until we become proficient. This is how we learn to ride a bike, drive a car, use a computer, etc. Once we become proficient we no longer need to devote conscious effort to the task. We can automate the tasks of riding a bike and driving a car and focus on other matters, like what we will have for dinner tonight or what we will talk about with our friends later in the day.
This is what separates us from the lower animals and AI. They operate on instincts. Instincts are those behaviors that have been hard-coded over millions of years and therefore do not require consciousness to function. However, if an AI were designed to make observations in real time, adjust its behaviors based on those observations, and then update its programming based on the effects its behavior had, then we could say that the AI has conscious experiences. The information in its working memory would qualify as "what it is like" for the AI, just as it is for humans. Consciousness is a type of working memory.
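The observe–adapt–repeat loop described above can be sketched as a toy program. Everything here is a hypothetical illustration, not a claim about how brains or any real AI system work: the "behaviour" is a single number, the "observation" is a noisy reading of it, and the update rule simply nudges behaviour toward a goal until the agent becomes proficient.

```python
import random

def feedback_loop(target=10.0, steps=200, learning_rate=0.1):
    """Toy observe-adapt-repeat loop (hypothetical illustration).

    The agent repeatedly observes the effect of its own behaviour,
    compares it with a goal, and adapts until it is proficient.
    """
    behaviour = 0.0  # the agent's current "skill" parameter
    for _ in range(steps):
        # Observe: a noisy sense of the behaviour's effect
        observation = behaviour + random.gauss(0, 0.1)
        # Compare: how far the observed effect is from the goal
        error = target - observation
        # Adapt: nudge behaviour in the direction that reduces the error
        behaviour += learning_rate * error
    return behaviour

skill = feedback_loop()
print(f"learned behaviour: {skill:.2f}")  # converges near the target
```

Once the error stays near zero, the loop could in principle run without further "attention", echoing the point above that proficient tasks no longer demand conscious effort.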