Immanuel Can wrote: ↑Sat Nov 21, 2020 8:24 pm
Scott Mayers wrote: ↑Sat Nov 21, 2020 7:06 pm
I think that the reason for consciousness at all is DUE TO the fact of contradiction of our 'expectations'.
That's interesting. So we'd have to think that if we only got what we expect, we might not be genuinely "conscious"?
If that's so, then we might have a partial answer to the question: namely, we have "barriers" so that we can become conscious. Would you say that?
Maybe. My statement is more like, "IF/SINCE (barriers exist), THEN (consciousness evolves for animals)," because the living beings that happen to succeed with greater likelihood would be those that can induce patterns useful for overcoming barriers. The consciousness that we feel is just a subset of all of our cells that acts like a sort of External Affairs department of a government. It is an 'interface' to an outside world that lacks CONSISTENCY. (Thus it is relatively 'contradictory' to our individual collection of cells, because those animals that lack it would be eaten up by those that have it and can overcome such conflicts in a relatively unpredictable set of different environments.)
But to expect is itself not technically meaningful unless we HAVE something that initially goes against it as something unexpected; so I get and share your concern.
Yes, that's it. How do we get this "expectation" of something we have never had? That's a puzzling one, at least initially...
In artificial intelligence, the logic required to get computers or robots to learn requires a predefined PROGRAM, something like:
(0) Experience = 0
(1) Take a sample of the environment as an experience, X = (whatever it finds); make Experience(new) = Experience(old) + 1
(2) Assign whatever it senses first as what it 'wants'. (Seek what is in X)
(3) Try to find what it wants. (Compare an arbitrary selection to see if it matches what you 'want' = contents of X)
(4) If it finds the same kind of object as its newly 'programmed' assigned target, it runs a reward program by going to (6); if not, goto (5)
(5) Penalty Program: Subtract one from your Experience. If Experience = 0, goto (1); else goto (3)
(6) Reward Program: Add one to your Experience. (Experience = Experience(old) + 1) Goto (3)
[I tried to present only a simplified program here in a way I hope you might follow. If you cannot, don't be too concerned. I was just giving a possible example of how such a non-thinking mechanism can lead to one that 'evolves towards something that could'.]
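The numbered steps above can be sketched as a runnable toy. This is only a minimal illustration under my own assumptions: the environment is a list of tokens, "sampling" is a random draw, and the function name `learn` and its parameters are invented for this sketch, not any standard algorithm.

```python
import random

def learn(environment, steps=20, seed=0):
    """Toy version of the seek/reward/penalty loop in steps (0)-(6).

    The agent fixes its first sample as its 'want' (steps 1-2), then
    repeatedly draws further samples (step 3). A match runs the reward
    routine, +1 experience (steps 4 and 6); a miss runs the penalty
    routine, -1 experience (step 5), and if experience falls back to 0
    the agent starts over and fixes a new 'want' (back to step 1).
    """
    rng = random.Random(seed)
    experience = 0                         # (0) Experience = 0
    want = None
    history = []
    for _ in range(steps):
        if want is None:
            want = rng.choice(environment)  # (1)+(2) sample and fix target
            experience += 1                 # Experience(new) = Experience(old) + 1
        sample = rng.choice(environment)    # (3) arbitrary selection
        if sample == want:
            experience += 1                 # (6) reward: add one
        else:
            experience -= 1                 # (5) penalty: subtract one
            if experience == 0:
                want = None                 # back to (1): pick a new target
        history.append((sample, experience))
    return want, experience, history
```

For example, `learn(["food", "shade", "water"], steps=30)` returns the final target, the accumulated experience, and the sample-by-sample history; the point is only that a mechanism with no understanding still ends up 'pursuing' whatever the environment happened to hand it first.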
I think that another way of thinking of what conscious beings do, as it relates to this, is summarized as "wanting" itself. "Why do we 'want'?", then, would be a kind of reduced form of the same question about conscious beings here.
Yes, or even more, "Why do we want what we have never had?" or even, "How do we manage to conceive of ideal states like freedom, so that we can want them, even though we don't ever have them?"
I'm presently watching the series "Brave New World", based upon Huxley's original. It is a Utopian model that works well but is challenged by one from outside, who asked of his new admirers inside this utopia, "Do you want 'happiness' or do you want 'freedom'?" Personally, the contrived utopia actually seems favorable to me in the context of the series. But the members inside this utopia are relatively naive; it has a caste system, and each member takes pills for each emotional risk. This reminded me of that question.
Unfortunately, I just learned by looking it up now that the series will be cancelled. Perhaps its present plot made some think as I did: why NOT stay in bliss, if 'happiness' is still an end in itself? But the intent of the lesson of the story was that some actual degree of 'happiness' is lost should we all be on the same level. I think that if this kind of utopian world could exist, we'd in actual fact be reduced to more literal vegetables. Then we'd become 'cells' of another larger entity that repeats the process, with the 'whole' society as an organism in its own right.
Basically, consciousness has no purpose if we get what we want directly. We'd lack the need to search the environment if we got everything automatically, and so we'd become as plants. But, of course, as plants are restricted to succeeding in complete dependence upon the environment, we would risk potential annihilation by some environmental event, as happened to the dinosaurs. Consciousness is unique to animals because we roam to different environments, whereas plants have to accept the environment as it is.
My guess relates to biology and evolution. We have something that initially has no 'value' as favorable or unfavorable to our personal behavior. Then, from early development, we take something from the environment and assign it as "that which we will seek", regardless of whether what we seek is constructive or destructive to our biology. And then, IF it has any tendency to PERMIT us to continue existing without being destroyed, only those assigned behaviors that get passed on by the survivors BECOME what we 'default to' as a normal "expectation" ...and something that we internally assign as "good" to us.
Okay. But if that were the case, would there not have to be a neat fit between what is "good" and what is "adaptive"? But some things that we take as good are surely not very "adaptive." For example, an "adaptive" strategy would be maximizing our own survival by acquiring as many advantages as we can...but we have laws that say, "Thou shalt not steal." Or an "adaptive" strategy might be to maximize promiscuity...but we have rules that say "Don't sleep with everybody."
If we can find even one such rule, that requires of us something "good" but which is not evolutionarily advantageous to us personally, then why do we feel obligated to follow it?
Nietzsche thought it was that the weak were imposing their "Judeo-Christian" morality on us, as he put it. And he thought that being really adaptive, that is, becoming an "Overman" or "Superman" would entail seeing those moral precepts as the fakes they were, and getting past them all.
If "good" = "adaptive," then can we say he was wrong?
This is the point at which political movements looked at Darwin's theory and thought to question what 'fitness' meant. On the surface, what is 'adaptive' can be "good", but that still depends upon an environment that simply doesn't utterly destroy it through some other competing body. Darwin's 'fit' meant "that which matched the present environment's coinciding favor," not the particular individual's prosperity or happiness. By contrast, if by whatever means you happen to BE 'happy' regardless, then happiness assures your 'fitness'.
So the logic is a condition:
If I am 'happy' (a 'good' thing), then I am 'fit'
NOT, if I am 'fit', then I must be 'happy'.
That is, you can be 'adaptive' but not in a 'good' way (because you can possibly be sad or simply lack emotions, like a plant).
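The asymmetry in that conditional can be checked with a tiny truth table. This is just an illustrative sketch; the `implies` helper is my own shorthand for the standard material conditional, (not A) or B.

```python
def implies(a, b):
    """Material conditional A -> B, i.e. (not A) or B."""
    return (not a) or b

# Compare "happy -> fit" against its converse "fit -> happy".
for happy in (True, False):
    for fit in (True, False):
        print(happy, fit, implies(happy, fit), implies(fit, happy))
```

The row with happy=False, fit=True is the telling one: it satisfies "happy -> fit" while falsifying "fit -> happy", i.e. one can be 'fit' without being 'happy', exactly the plant-like case above.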
If I want "mommy", for instance, it is only when "mommy" is predefined as a norm that we discover her absence as discomforting.
Right. I get that. But a baby has already experienced "mommy" -- not as a concept, surely, but as a reality...as a warm presence, the absence of which is unpleasant. So that can perhaps be explained.
But what about some value like "total freedom"? That's an ideal we've NEVER experienced, not even once. So "How do we learn to desire it?" is the next question. We shouldn't even be able to conceptualize it...much less recognize the absence of it as somehow "wrong." We should, one would think, experience our restrictions and "barriers" as just...well...normal.
See above. I cannot interpret 'freedom' without a prior evaluation. You might think of the mechanism that seeks a value in the environment, in the example program above, as 'neutrally free'. Freedom is not necessary for 'happiness', which is why I question it in essence. We only come to disapprove of its absence once we experience it and find it meaningful. If we were born without knowledge of its meaning, then, like the relative 'automatons' in Huxley's Brave New World, we'd be satisfied.

In the series, the outsider's intent to entertain the value of freedom derived from his own selfish expressions that went against the norm. The others found it entertaining for being something highly unusual but were not so able to handle the 'freedom' once released. It is something valued as a complex of both 'good' and 'bad'. Like being bipolar, the high of freedom is due to the way one can increase the illusion of pleasure by experiencing the extremes of pain (or discomfort). As such, freedom can widen the distinction between happiness and sadness, just as we have degrees from rich to poor. But it isn't in itself a PRIOR necessity of experience, just as one learns of an addiction to some drug only AFTER experiencing the 'freedom' to take it. Though, I know that after experiencing that degree, it is hard even for me to suggest that going back would be good for myself. You can't undo the experience of the roller coaster's power once experienced, short of brain damage or death, something that drug users are sure to prove is common.