There seems to be an emerging consensus around an answer to this question in the “Representational Qualia Theory” camp. To date, no other camp has achieved anything close to this level of consensus. It defines consciousness as:
Canonizer wrote: Consciousness is: "Computationally bound elemental intrinsic qualities like redness and greenness".
It points out that all of today's AIs are "abstract". Any knowledge of red things an AI has is represented with abstract words like the word 'red', while we represent knowledge of red things with something in our brain that has an intrinsic redness quality. With abstract knowledge you need a dictionary with a definition of a word like 'red' to know what it means, whereas the redness quality of our knowledge is just a fact. Running directly on intrinsic properties is more efficient, since no dictionary is required to achieve substrate independence.

So it's not a matter of probability; it's just a matter of experimentalists reporting their observations of the brain in a non-"qualia blind" way (using more than one abstract word for all things red), so we can discover which of all our abstract descriptions of stuff in the brain is a description of redness. Once we do that, we'll be able to build more efficient phenomenal computational systems that run directly on physical qualities, rather than systems whose information is abstracted away from the physical properties representing it, requiring lots of interpreting dictionaries.
It's not a "hard mind-body problem", it's just an intrinsic color problem. We simply don't yet know the intrinsic color of anything in the world, because we are still qualia blind, using one word for all things red.
And if we see that same redness stuff in a bat, we will know what it is like to be that bat.