Sorry for the slow response to this earlier note.
Greta wrote:It seems to me that qualia pertains to the emotional experience of being. Take away the emotion and you have something akin to the protagonists of Chalmers's philosophical zombie thought experiment. It seems to me, though, that emotion is efficacious but perhaps its efficacy is diminishing as tasks that once required instinct can be more reliably performed via analytics. Maybe much of what we value about consciousness is that which hinders us and renders us chronically unethical? Our best conscious intentions are regularly thwarted by vestigial unconscious impulses like the fight-or-flight or PTSD response.
I hadn't seen qualia described as emotion before. Emotion seems to be not just a matter of different synaptic states but also of chemical changes, and in that sense there is little outside of biology to which emotional attributes might be ascribed.
Your comments speak quite a bit of subconscious functions and their interactions with the conscious, where "conscious" is defined as "not subconscious". There are several other definitions, and the consciousness of the hard problem is not always defined as "not subconscious". My subconscious seems to have qualia, for instance. There is also 'awake' as opposed to the lowered consciousness of being 'asleep', or the absent consciousness of being knocked 'unconscious'.
Perhaps the human kind of consciousness will produce emergent phenomena, just as cephalised, conscious animals emerged from purely reflexive life forms. All contemplative traditions refer to selfless Zen-style flow states and meditative states which are keenly focused in the present moment, unhindered by mental and emotional static. This seems to me not miles from how advanced future humans or general AI may operate: with far more efficient, unhindered focus.
Why might said unhindered focus be a better thing? My son has unusually unhindered focus. He does amazing things, but he's all the less fit for survival because of it. Evolution would have him in an hour were it not for my efforts.
Greta wrote:Are you saying that because science requires a relatively objective third perspective?
One can always experiment in first person. Clues are gathered from the first-person experience of everyday life, and from first-person accounts of abnormalities (surgery, diseases, etc.). In the end, the data gathered in this manner is evidence and thus science. The interpretation of it is not. I find the examples I just stated to be strong empirical evidence, yet IC considers there to be absolutely zero evidence. What science discovers is therefore not the hard part of the problem. That's about the best I can word it. Even if the hard problem is examined only by logic, one must still take empirical knowledge as the foundation of that logic.
I see no reason why the link between neuronal states and qualia can't be teased out with more sophisticated tools. After all, illness was attributed to evil spirits before bacteria were discovered. I wouldn't be surprised if what we think of as qualia pertains to quantum or Planck-scale states.
The links have been observed, and they operate on a macroscopic scale. There seem to be no quantum-level constructs in biology, which is too bad, since such constructs would presumably have evolved if there were useful information to be gathered at that scale.
Emotions and morality appear to be key aspects of qualia. Take them away and you have sensory and abstract processing that could in theory be readily replicated by advanced general AI.
This seems to confine possible consciousness to humans only, at least if we're the only ones that are moral. Or could a different thing be conscious if it had a moral code of its own, albeit one completely different from ours? Morality seems a social construct to me: I am conscious of the way interactions between others should be done.
What are emotions but an evolutionary hack to get organisms behaving in complex ways as though they were processing much more information than they otherwise would? So a child might be kind to a stray kitten through emotion. An AI might be kind because it is programmed to promote growth rather than entropy, so it may perform all manner of computations of the numerous ways one may respond to a stray kitten, and determine that stroking in certain ways, adjusting to the animal's responses in real time, will promote maximal growth. Or, in a more sinister scenario, it may kill the kitten after determining that the kitten is destined to grow up to be a feral pest, calculating that the complexity of the potential adult cat is less than the complexity lost in ecosystems to a feral cat, thus ultimately promoting growth. Naturally, we would need special human provisos to avoid our own strategic extermination.
Kittens are cute and evoke a different response than a spider does, despite both being predators of things I don't want around. I'm not sure what bearing all this has on the hard problem of consciousness.
Instead of anger, which acts as a display behaviour to deter those who would gain by inflicting entropy on us (I'm trying to think like a machine here), an intelligent AI would use strategic means to nullify needlessly entropic threats. Again, if programmed well it would achieve its ends with the "lightest possible touch", and actively work towards win-win solutions.
Touching lightly might not be a moral incentive. In the long run, I could find no definition of 'good' that would dictate a rational choice of action over human gut instincts. I've pondered how I would set things straight if I were boss of the world. All I concluded is that I could not identify a course of action that was 'good', and that I could not personally bear the weight of carrying one out if it were identified.
Still, without emotion, there are only algorithms - primary directives - to "motivate" AI to do anything. In a sense, when emotional human life is no longer viable on the Earth, the only point to the persistence of nonhuman creations would be a sense of duty to perpetuate the Earth's story: to propagate new biospheres, to continue a new evolutionary line of self-improving intelligent machines, or simply to send informational packages into space, perhaps to be picked up by future advanced life, keeping some of the Earth's story alive even after it's been engulfed by the Sun in five billion years' time.
Those are sort of the long-term goals for which I was searching. I like that the goals of humanity were not even mentioned in that description. What is good on Earth that the rest of the universe might wish to have perpetuated?