Can Science Explain Consciousness?
Posted: Fri Jul 28, 2017 1:20 pm
So if the brain is a receiver rather than a creator of this core of essential ideas, it also explains Einstein’s respect for the source of our intuition. This source exists regardless of us.

“My brain is only a receiver, in the Universe there is a core from which we obtain knowledge, strength and inspiration. I have not penetrated into the secrets of this core, but I know that it exists.” (attributed to Nikola Tesla)
Can we really be expected to have a greater understanding of consciousness without the union of science and ontology? I don't see how. The essential projection of objective meaning is its relationship to the Source. To deny objective meaning in favor of the pragmatic subjective reactions some call meaning leaves nothing to build on. It makes no sense.

Weil admonishes against an increasingly dangerous reliance on mathematical expression as the most accurate expression of reality, one that flattens and makes artificially linear the dimensional and messy relationships of which reality itself is woven:
Weil argues that this creates an incomplete and, in its incompleteness, illusory representation of reality — even when it bisects the planes of mathematical data and common sense, such science leaves out the unquantifiable layer of meaning:
If the algebra of physicists gives the impression of profundity it is because it is entirely flat; the third dimension of thought is missing.
That third dimension is that of meaning — one concerned with notions like “the human soul, freedom, consciousness, the reality of the external world.”
The problem with consciousness is that one cannot tell whether somebody is conscious or not. Therefore, even if scientists could create a machine with human-level artificial intelligence, they still couldn't tell whether it is conscious or just a zombie (i.e. passing the Turing test is no guarantee of being conscious).

jayjacobus wrote: ↑Mon Jul 31, 2017 11:26 pm
Scientists may discover the circuits that surround consciousness and declare EUREKA! but until they can create artificial consciousness (not just a mimic) they won't understand consciousness. Consciousness is an issue without a likely solution.
Interestingly, that's exactly what we humans do. We take in light waves of a certain type through our eyes, which transform them into bioelectrical signals and send them to our brain, which processes that data and interprets our own personal version of the world. Perhaps we shouldn't assume that an artificial intelligence has to be exactly like us.

jayjacobus wrote: ↑Mon Jul 31, 2017 11:26 pm
Already scientists claim that a computer can see. If computers are not aware, they can't see. They process and interpret data from light waves. This is not seeing.
Consciousness interprets a scene. The brain creates a scene. The scene is not data but a compilation of signals. If you don't see a scene and are not aware of the scene, you don't see.
Alright, seeing implies experiencing, and without consciousness there is no experiencing and therefore no seeing. However, how can we prove that there is no consciousness?

jayjacobus wrote: ↑Sat Aug 12, 2017 10:02 pm
Consciousness interprets a scene. The brain creates a scene. The scene is not data but a compilation of signals. If you don't see a scene and are not aware of the scene, you don't see.
Suppose I show two images to a 5-year-old kid, one of a cat and one of a dog, and I ask her to point to the image of the cat, and she does so correctly every time. Most of us would say she sees the images and chooses accordingly. But how do we know she is actually experiencing seeing?

jayjacobus wrote: ↑Sat Aug 12, 2017 10:02 pm
Already scientists claim that a computer can see. If computers are not aware, they can't see. They process and interpret data from light waves. This is not seeing.
I think the 5-year-old sees, but don't ask me why. This is the same common sense that allows us to discuss consciousness.

edalorzo wrote: ↑Mon Aug 14, 2017 10:17 pm
Suppose I show two images to a 5-year-old kid, one of a cat and one of a dog, and I ask her to point to the image of the cat, and she does so correctly every time. Most of us would say she sees the images and chooses accordingly. But how do we know she is actually experiencing seeing?
Let's say we show the same images to a computer with an optical device, and the computer chooses the cat correctly every time. How could you tell that the computer is not experiencing seeing? It takes light waves, processes them, interprets them and reacts accordingly, pretty much like our brains do. How can we tell whether the computer is just a zombie or actually conscious?
And how do we know we're not just a complex biological algorithm? For all we know, consciousness may just be a by-product of running such a biological algorithm.
That's an empirical inference. You say so because the kid behaves like a conscious being. I thought this would be a good point to quote Searle:
In the same line of thought, we could easily conceive of a computer program that behaves like a philosophical zombie.

John Searle wrote:
According to the argument from analogy, I infer the existence of mental states in other people, by analogy with myself. Just as I observe a correlation of my own behavior with my mental states, so I can infer the presence of appropriate mental states in others when I observe their behavior. I have already pointed out the limitations of this form of argument. The problem is that in general with inferential knowledge there must be some independent check on the inference if the inference is to be valid. Thus, for example, I might discover that a container is empty by banging on the container and inferring from the hollow sound that there is nothing in it, but this inferential form of knowledge only makes sense given the assumption that I could open up the container and look inside and thus noninferentially perceive that the container is empty. But in the case of knowledge of other minds there is no noninferential check on my inference from behavior to mental states, no way that I can look inside the container to see if there is something there.
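The idea of a program that behaves like a philosophical zombie can be made concrete with a deliberately trivial sketch (purely illustrative; the names `zombie_answer` and `CANNED_ANSWERS` are invented for this example, not taken from any real system): a program whose outward answers could pass a naive behavioral check while containing nothing anyone would call experience.

```python
# A deliberately trivial "philosophical zombie": a lookup table of canned
# answers. Outwardly it responds like a conscious agent; inwardly it is
# nothing but a dictionary lookup.

CANNED_ANSWERS = {
    "Which image shows the cat?": "the left one",
    "Do you see the images?": "yes, clearly",
    "Are you conscious?": "of course I am",
}

def zombie_answer(question: str) -> str:
    """Return a behaviorally convincing answer with nothing behind it."""
    return CANNED_ANSWERS.get(question, "I'm not sure")

# The behavioral test: the program answers correctly every time, yet there
# is plainly no experience here -- only a table lookup.
print(zombie_answer("Are you conscious?"))  # prints: of course I am
```

The point is Searle's: observing the right answers (the banging on the container) gives us no noninferential way to look inside and check for experience.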
That perspective is a bit neuron-chauvinistic. You seem to assume that a computer needs to be a human, and probably to have a human brain, to be conscious.

jayjacobus wrote: ↑Tue Aug 15, 2017 1:43 pm
A computer is neither conscious nor a zombie. We know that the results do not define the operator but do define the operation. A horse is not a person even though both can travel. And a computer is not a human even though the computer can mimic many of the results of human consciousness [...]
But you know that's not true. You know it's not true because the computer follows a chain of switches without subjectivity, and you know that computer experts don't understand consciousness and can't program it. Let's say that they are pretending to do what they can't.

At the same time, as a computer programmer, and as a scientist, how could I be certain that my android is not conscious at all? Perhaps he is truly conscious and experiences the world just as I do.