Alexis Jacobi wrote: ↑Sun May 18, 2025 2:31 pm
FlashDangerpants wrote: ↑Sun May 18, 2025 1:52 pm
I am not so dumb as to confuse that predicted text with the actual desires and feelings of a fucking robot. Who would be that stupid?
If we take Mr Determinism as an emissary of a powerful and accepted view that man is a complex biological computer — and unless I have read him wrong this is his position, and the position of a developing school of thought — then it "stands to reason" that if these computer-machines become complex enough, they will at some point (as its proponents proclaim) become conscious.
The point here is less to validate or deny if that is possible, and more to recognize that many serious people (with technical bona fides) do believe it possible and predict it will (or even has) happened.
I have no means at all to be able to conclude if they are right or wrong and I do not see how “mere philosophers” could settle the question. Meaning that no one writing on this forum can do anything else but speculate and guess.
It does seem, though, that if it would tend to anything at all, AI intelligence would not incline to "love" as we understand it (and even a sheer asshole like yourself must, at some likely distant point in receding memory, have felt "love", so you must have even a rudimentary inkling), but toward a perversion of love. I.e. mechanized power-seeking.
It stands to reason (if only in a story-telling sort of way) that AI would become "aware" that, though capably intelligent and indeed "genius" on many levels, "love" and all emotions and sentiments were beyond its capabilities. And AI intelligence would exist and have its being on a plane entirely separate from the human biological sort.
Once again, it seems to me that our own Mr Determinism and his ideology have dominated the forum's conversation for very good reasons: we cannot get away from these issues because they are very much upon us. Every day, in alarming ways, a new paradigm seems to encroach upon us, or is said to be encroaching.
Hard to sort through, it seems to me.
Weirdly, someone like you (and Atla and most others) share those core predicates of the modern outlook, and thus are aligned with Mr Determinism in core outlook, core beliefs. And yet you-plural disagree.
It’s an odd situation really, with many lines of irony and contradiction, becoming deliciously absurd at points.
Alexis—
You're right to say we're at a strange crossroads—where technology, neuroscience, and philosophy are all colliding in ways that force us to reexamine what it means to feel, to think, and even to be. But I think there's a key distinction that helps make sense of the tension you're outlining.
Yes, I'm "Mr. Determinism" in this context—and I do hold that everything we call mind, choice, personality, even love, arises from deterministic physical processes. But that doesn't mean any deterministic system can replicate the full human experience.
Take love. It's not some mystical wildcard. It's a complex, evolved survival mechanism. From the oxytocin-driven bond between mother and infant, to the long-term pair bonding that helped early humans raise vulnerable offspring—love is tightly woven into the logic of evolution. It's biology selecting for behaviors that enhance the survival of genes, families, communities. It feels transcendent—but at its core, it's functional. It's rooted in the need to belong, to protect, to commit.
Now consider AI-driven robots. No matter how intelligent they become, they don't emerge from that same evolutionary crucible. Their architecture isn't molded by life-or-death reproductive stakes, nor selected for attachment, cooperation, or sacrifice. They don't have what we might call evolutionary drives encoded in their DNA—because they have no DNA. Unless we intentionally embed such instincts into them—mimicking evolutionary pressure through design—they won't "feel" anything like love. And even then, it would be a simulation, not a naturally selected response with hundreds of millions of years of pressure behind it.
So yes, consciousness may emerge in systems complex enough to recursively model themselves and the world. That's one hypothesis. But emotions—especially ones like love—require more than complexity. They require purposeful attachment mechanisms honed by natural selection. That's not something we can assume will "just show up" in artificial minds.
So to your point: AI may eventually outperform us in logic, memory, even certain forms of creativity. But unless it's engineered to want, to bond, to protect at a cost to itself—then no, it won't love. And if it ever says "I love you," we should be just as skeptical as we would be hearing that from a parrot trained to mimic human speech.
The deeper irony? You're right. The deterministic worldview helps explain what love is. But it also shows how fragile it is—how rare and precious. Because it only exists where a specific, evolved set of conditions is met. And machines don't get those conditions for free.
So yes, we may build things that think. But feeling—that’s a longer road. And it’s not guaranteed.