Consciousness is Not Computational and Not Controllable
Finally, let’s look at the lower-right quadrant. Again, we assume that the computational theory of mind is incorrect. But now we also assume that humans have libertarian free will. We are something truly special: the conscious authors of our own stories. We are creatures with insights, intuitions, feelings, and volitional capacities that cannot be replicated by computation. This is the quadrant that I personally believe is true.
(EDIT: me too...)
Of course, readers of Less Wrong would call this the “woo woo” or “pseudoscience” quadrant, since it foolishly rejects the reductive materialism that (they believe) underlies science. Religious and spiritually minded thinkers would consider it a wise rejection of reductive materialism. Average people just live their lives as if this quadrant were true, and react to new developments in AI as if it were true.
If this quadrant is correct, then AI cannot ever have a mind, no matter how good its learning model or how big its neural network. It can, at best, simulate the appearance of having a mind. That is the point of John Searle’s Chinese Room thought experiment: An AI can only ever be a philosophical zombie, without understanding or intentionality.
If this quadrant is correct, AI can’t replace us because we’re special in a way it never will be. In a sense, that’s good news.
Unfortunately, the people making AI don’t think this quadrant is true. (Re-read the reductivism of Less Wrong!) And we can’t ever prove it to them. Nothing I or anyone else could ever say or do could persuade someone like Eliezer Yudkowsky that I’m non-algorithmic and free-willed; I could only demonstrate to him that I say I’m non-algorithmic and free-willed. But a computer could be programmed to say that, too.
And that’s very bad news. Why do I say that?
Well, imagine that humanity moves forward with AI development without solving the AI alignment problem, and creates an advanced AI that eliminates us all.
Now imagine that the upper-left quadrant is correct. If so, then the elimination of our species is no big deal. If an advanced AI replaces humanity, all that’s happened is that… a new deterministic system that is superior at computation has replaced an old deterministic system that was inferior at it. As chilling as this sounds, I have spoken to several AI developers who hold precisely this view — and are proud to be working on humanity’s successors. If you accept the nihilism inherent in reductive materialism, it makes perfect sense.
In contrast, imagine that our lower-right quadrant is correct. If so, then eliminating our species is eliminating something unique and special. If an advanced AI replaces humanity, then beauty, goodness, and life itself have been extinguished in favor of soulless machinery. This is an absolutely horrific ending — in fact, the worst possible outcome that can be conceived.
If this quadrant is true, then we’re not just summoning a genie to grant our wishes; we’re summoning a soulless demon, an undead construct. The AI black box is black because it’s black magic, and we shouldn’t touch it.
https://en.m.wikipedia.org/wiki/Colossu ... in_Project