Noax wrote:I hadn't seen it described as emotion before, which doesn't just seem to be different synaptic states, but also a chemical change. In that sense, there is little outside of biology to which emotional attributes might apply.
Your comments speak quite a bit of subconscious functions and their interactions with the conscious, defined as "not subconscious". There are several other definitions, and the consciousness of the hard problem is not always defined as "not subconscious". My subconscious seems to have qualia, for instance. There is also 'awake' as opposed to the lowered consciousness of 'asleep' or the absent consciousness of being knocked out cold.
It only occurred to me recently that qualia and emotion seem to be inseparable. I see little difference, if any, between Chalmers's philosophical zombies and the idea of an entirely emotionless cognition, aside from, say, programmed emotional play-acting by AI for the sake of human comfort. I note that there is a general tendency to think of anything that lacks emotionalism as being machine-like, e.g. microbes are often referred to as "biological robots".
Maybe the unconscious qualia you referred to are micro-emotions, sensory impressions too mild to trigger emotional responses? Those little niggles that are not consciously noticed.
In lieu of any kind of emotional response, intelligent AI could be programmed to detect thresholds of damage to that for which it deems itself responsible and to respond, not with pain but simply with a signal. The signal would trigger a chain reaction. However, rather than emotion driving and motivating subsequent actions, the "middleman" of emotion would be bypassed and, on detecting potential damage or loss, the machine would shift immediately to diagnostics and action.
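The damage-signal idea above can be sketched in a few lines of code. This is only a toy illustration of the architecture being described: the names (DamageMonitor-style thresholds, run_diagnostics, check_and_respond) and the threshold values are all hypothetical, invented for the example, not any real AI system.

```python
# A minimal sketch of the "no emotional middleman" design: sensor
# readings are compared against damage thresholds, and a breach emits
# a plain signal that jumps straight to diagnostics and action.
# All names and values here are hypothetical illustrations.

DAMAGE_THRESHOLDS = {"hull_temp": 90.0, "power_draw": 450.0}

def run_diagnostics(breaches):
    """Stand-in for the machine's immediate diagnostic response."""
    return {name: f"inspect {name} subsystem" for name in breaches}

def check_and_respond(readings):
    # Detect which monitored values exceed their damage thresholds.
    breaches = [name for name, limit in DAMAGE_THRESHOLDS.items()
                if readings.get(name, 0.0) > limit]
    if not breaches:
        return None  # no signal, so nothing is triggered
    # No pain, no motivating emotion: the signal itself starts the
    # chain reaction of diagnostics and corrective action.
    return run_diagnostics(breaches)

actions = check_and_respond({"hull_temp": 97.2, "power_draw": 300.0})
print(actions)  # {'hull_temp': 'inspect hull_temp subsystem'}
```

The point of the sketch is that nothing analogous to feeling sits between detection and response; the threshold comparison is the entire "motivation".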
Noax wrote:Perhaps the human kind of consciousness will produce emergent phenomena, just as cephalised, conscious animals emerged from purely reflexive life forms. All contemplative traditions refer to selfless Zen-style flow states and meditative states which are keenly focused in the present moment, unhindered by mental and emotional static. This seems to me not to be miles from how advanced future humans or general AI may operate - with far more efficient, unhindered focus.
Why might said unhindered focus be a better thing? My son has unusually unhindered focus. He does amazing things, but he's all the less fit for survival because of it. Evolution would have him in an hour were it not for my efforts.
Yet his strong focus would be helpful if he had control over it and was capable of harnessing it more flexibly.
Noax wrote:Greta wrote:Are you saying that because science requires a relatively objective third perspective?
One can always experiment in first person. Clues are gathered from the first person experience of everyday life, and from first person accounts of abnormalities (surgery, diseases, etc.). In the end, the data gathered in this manner is evidence and thus science. The interpretation of it is not. I find the examples I just stated to be strong empirical evidence, yet IC considers there to be absolutely zero evidence. What science discovers is therefore not the hard part of the problem. That's about the best I can word it. If the hard problem is examined only by logic, one still must take empirical knowledge as a foundation of the logic.
I think this issue with IC comes down to the Mary's Room thought experiment, where the scientist Mary has monochrome vision but learns everything possible about colour. Then she is granted colour vision (surgery? whatever). Will she have learned something new? Generally the answer comes down to this: no matter how much you learn, it's always only a sketchy approximation of experience, of knowing how something feels. Of course, experience itself is only a sketchy approximation of the actual reality, Kant's noumena.
So we all sketch away in whatever manner suits us at the time :)
As mentioned earlier, rigorous study of subjective perspectives has been done by Buddhists, although obviously there are some metaphysical distractions to work past. The time is ripe for a similar subjective approach with western rigour - for people to simply record their experiences without metaphysical extrapolations. While neural mapping is potent, there are limitations involved in mapping the mental dynamics of subjects in a laboratory setting. That is, the experimenter only ever measures the kinds of self-conscious mental states possible for a subject who is stuck in a sterile laboratory, acutely aware that her every impulse is being observed, all the while her head is infested by annoying gizmos with wires attached to a machine that goes beep every eight seconds. (A bit of poetic licence but you know what I mean :)
Noax wrote:I see no reason why the link between neuronal states and qualia can't be teased out with more sophisticated tools. After all, illness was caused by evil spirits before bacteria were discovered. I wouldn't be surprised if what we think of as qualia pertains to quantum or Planck scale states.
The links have been observed, and they operate on a macroscopic scale. There seem to be no quantum level constructs in biology, which is too bad, since such constructs would have evolved if there were useful information to be gathered there.
There is no known causative link between the dynamics of non-conscious cells and the experience of consciousness. When the claim is made that researchers already understand how consciousness works thanks to neural mapping, the usual reply is that this refers only to correlation, not causation.
It's not that quantum level constructs apply on a macro scale - obviously - but that multiple quantum effects in the brain are involved with the brain's function and, according to Penrose and Hameroff, neuronal microtubules may play a pivotal role:
https://www.sciencedaily.com/releases/2 ... 085105.htm
Noax wrote:Emotions and morality appear to be key aspects of qualia. Take them away and you have sensory and abstract processing that could theoretically be readily replicable by advanced general AI.
This seems to confine possible consciousness to humans only, at least if we're the only ones that are moral. Or could a different thing be conscious if it had a moral code of its own, albeit a completely different one from ours? Morality seems a social construct to me. I am conscious of the way interactions between others should be done.
Cows have morality - cooperators and troublemakers - and ditto numerous other species. There's a famous capuchin monkey clip showing their clear grasp of fairness.
Noax wrote:Still, without emotion, there are only algorithms - primary directives - to "motivate" AI to do anything. In a sense, when emotional human life is no longer viable on the Earth, the only point to the persistence of nonhuman creations would be a sense of duty to perpetuate the Earth's story: to propagate new biospheres, to continue a new evolutionary line from self-improving intelligent machines, or simply to send informational packages into space to maybe be picked up by future advanced life, keeping some of the Earth's story alive even after it's been engulfed by the Sun in five billion years' time.
That was sort of the long term goals for which I was searching. I like that the goals of humanity were not even mentioned in that description. What is good on Earth that the rest of the universe might wish to have perpetuated?
Yes. Humanity, like all species, is either transitional or a dead end, never a destination. If transitional, then offworld AI may continue the line. What might evolve from self-improving AI would be beyond our ken, which is an exciting thought.