The Power of Art and Emotion: Should We Worry About Manipulation?


seeds
Posts: 2880
Joined: Tue Aug 02, 2016 9:31 pm

Re: The Power of Art and Emotion: Should We Worry About Manipulation?

Post by seeds »

BigMike wrote: Mon Jan 13, 2025 8:41 pm Seeds, your attempt at a rebuttal is as misguided as it is self-assured. Let me break it down for you, though I suspect that won’t stop you from clinging to these ill-conceived notions.

First, gravity doesn’t affect spacetime or light in the way you’re implying. Spacetime is the framework within which gravity operates, and mass influences the curvature of spacetime. That’s not the same as “affecting” something like an object. Similarly, photons follow geodesics in curved spacetime—they’re not "affected" by gravity in a causal interaction sense.
Look, you made the following broad and unambiguous assertion...
"...gravity cannot affect something that has no mass..."
...and, just for kicks, I ran the issue past Microsoft's Copilot (bolding and emphasis mine)...
Me:
Does gravity affect the massless fabric of spacetime?
Copilot:

Absolutely! Gravity plays a crucial role in shaping the fabric of spacetime.

According to Einstein's theory of general relativity, massive objects like stars and planets create a curvature in spacetime, which we perceive as gravity. Even though spacetime itself is massless, the presence of mass and energy causes it to bend and warp.

Me:
Just to be absolutely clear (because the word "affect" seems to be important to my interlocutor on a philosophy forum), it is the gravity associated with massive objects such as stars and planets that "affects" (bends/causes the curving of) the massless fabric of spacetime. Is that correct?
Copilot:

Yes, you are absolutely correct. The gravity associated with massive objects like stars and planets "affects" the massless fabric of spacetime by bending or causing it to curve.

:P :D
BigMike wrote: Mon Jan 13, 2025 8:41 pm Your understanding is shallow and riddled with misconceptions.
That's rich coming from a hardcore materialist such as yourself,...

...for there is no one more shallow when it comes to thinking about the universe, than a person who believes that there is nothing more to reality than what the thin veneer of this material "illusion" presents to our senses.

Try not to soil your Depends, but from my perspective, you are the metaphorical equivalent of a video game character who is somehow conscious enough to see, feel, hear, smell, and taste the limited 2-D world you are a part of,...

...however, you are not quite conscious enough to recognize the deeper truth that that world is, in essence, a programmed illusion with a software-like underpinning.

In other words, none of it is as "real" as what it seems to be.

Indeed, physicist and author, Nick Herbert,...

...based on his assessment of Werner Heisenberg's conclusions about how what we call "reality" is composed of a substance that doesn't seem to be very real itself,...

...stated the following in his book, "Quantum Reality: Beyond the New Physics":
"The entire visible universe, what Bishop Berkeley called 'the mighty frame of the world,' rests ultimately on a strange quantum kind of being no more substantial than a promise."
BigMike wrote: Mon Jan 13, 2025 8:41 pm If your argument is that this "ghost" doesn’t need to adhere to physical principles, then congratulations—you’ve left the realm of reason entirely. You’re invoking fantasy, not engaging in a serious discussion.
No, BigMike, I have not left the realm of reason entirely; I simply (unlike you) haven't closed my mind to the implications of "strong emergence."

Now it goes without saying that you are going to accuse me of all sorts of mystical and magical (delusional/unscientific) thinking.

To which I will preemptively suggest that because you do not realize that you are, in essence, sleepwalking through life, then the more articulate, eloquent, and passionate you are in presenting your materialistic vision of reality, then the more you will demonstrate...

(in direct proportion to the degree to which you believe your assertions)

...the depth and degree of your somnambulism.

So, by all means, insult away and show me just how much you are under the thrall of this amazing ("dream-like"/"video game-like") illusion that is comprised of a substance that is...

"...no more substantial than a promise..."
_______
Last edited by seeds on Tue Jan 14, 2025 7:15 am, edited 1 time in total.
Gary Childress
Posts: 11748
Joined: Sun Sep 25, 2011 3:08 pm
Location: It's my fault

Re: The Power of Art and Emotion: Should We Worry About Manipulation?

Post by Gary Childress »

BigMike wrote: Tue Jan 14, 2025 1:28 am
Gary Childress wrote: Tue Jan 14, 2025 1:16 am
BigMike wrote: Tue Jan 14, 2025 1:02 am
Gary, I understand your sentiments, but I’d like to unpack this a little and clarify where your assumptions might need adjustment, particularly regarding consciousness and its relationship to the physical.

First, consciousness doesn’t have to be seen as something mystical or outside the physical realm. There’s overwhelming evidence tying conscious experience to physical processes in the brain. The effects of brain damage, anesthesia, or intoxication show that altering physical structures or chemical balances in the brain profoundly impacts consciousness. This strongly suggests that consciousness arises from, and is dependent on, physical interactions. In other words, consciousness is a product of a highly complex machine—our brain—not something fundamentally separate from it.

Now, on the point about machines and pain: Machines don’t feel pain, not because they are inherently incapable, but because they weren’t designed or evolved to experience it. Pain, in humans and other animals, is an adaptive mechanism—it evolved to help organisms survive by avoiding harm. Machines, by contrast, are created to perform tasks; they lack the evolutionary imperative to experience suffering. If one day we designed machines with the right sensory and processing systems to simulate or even experience something akin to pain, the ethical considerations around their treatment would necessarily shift.

Humans are indeed different from current machines, but not because we’re somehow above or beyond being mechanical. Instead, we’re vastly more complex. The human body, including the brain, operates through physical principles—chemical reactions, electrical impulses, and mechanical processes. Our complexity doesn’t make us less mechanical; it makes us a highly advanced machine shaped by billions of years of evolution.

So, I’d tweak your statement like this: Human beings are special not because we transcend physicality, but because our biology enables experiences like pain, love, and creativity. These aren’t mystical phenomena—they’re the emergent properties of an extraordinarily intricate system operating within the laws of nature.

Would you agree that seeing consciousness as a physical phenomenon doesn’t diminish its value but rather deepens our appreciation of how extraordinary it is?
OK. So I did not say that consciousness was "mystical". I said it made humans special over machines. I said it sets us apart from machines. Do you disagree with that statement?

As far as "outside the physical realm", I would say that a human being lives very much within the physical realm. So again, I'm not seeing a problem with my words unless you are reading into them something I did not say.

1) What evidence do you have that suggests machines can be made to feel physical pain just as humans and (presumably many other living beings) do?

2) Why would you think that a machine can feel pain?

3) Or maybe you could unpack your statement a bit, what are you defining as a "machine"? I define a "machine" as a tool that humans use to improve our own lives and perhaps even those of other living beings if we can. If something feels things, then it is not a "machine". If something is conscious, then it is not a "machine". Do you disagree?
Gary, I appreciate your clarifications, and it seems we’re converging on some shared ideas while differing on how we frame them. Let me address your points directly.

Consciousness certainly makes humans and other sentient beings unique compared to machines, as we currently define them. Where we might diverge is in what "special" means. I don’t dispute that consciousness is extraordinary; I’m arguing that its emergence doesn’t require us to step outside the physical realm or deny our mechanistic nature. Consciousness is the result of incredibly intricate physical processes. It’s a hallmark of evolution, not a mystical exception.

Regarding machines and pain, my point wasn’t to claim that machines currently feel pain. They don’t. Machines, as designed today, are tools—they lack the evolved neural frameworks that generate pain as a survival mechanism. However, if one day we create artificial systems complex enough to simulate or replicate the processes underlying pain—complete with the subjective experience of suffering—then we’d face new ethical considerations about their treatment. This isn’t a prediction but a logical extension of understanding how biological systems work.

When you define a "machine" as a tool humans use to improve life, it’s a functional and practical definition, and I wouldn’t argue against its utility. But in a broader sense, humans themselves operate mechanically. Our cells, organs, and even thoughts follow physical laws. What makes us unique is the complexity and self-awareness enabled by that machinery—not some fundamental difference from it.

To answer your questions: the evidence supporting the possibility of machines experiencing something akin to pain lies in our growing understanding of consciousness as a product of physical processes. If we replicate the necessary conditions—whether in biological or synthetic systems—we could theoretically create something that experiences. This doesn’t diminish human consciousness; it highlights how remarkable physical systems can be when they reach a certain level of complexity.

Would you agree, then, that our distinctiveness comes not from being outside the realm of machinery but from the astonishing complexity and refinement of our "machine"?
Again, I'm hesitant to call a living organism that feels pain and what not a "machine". To me it's conflating terminology. If my computer somehow demonstrates that it is conscious and feels pain when I turn it off, then it's no longer a machine, it's a whole other ballgame, with LOTS of moral implications. Until then, turning the off switch on my computer is little different to me than turning a screwdriver.

You bring up the possibility of creating consciousness in machines. May I ask why you would wish to do that? Is there something about creating a conscious machine which would be beneficial or desirable to us (given the tradeoff of creating something that we would then have moral obligations toward)?

Secondly, how would you be able to determine if a machine was conscious? Do you think we could build a machine that "looks" conscious but is not? And if we could build a machine that looks conscious but is not, then what criteria would you use to determine whether something we built out of inorganic materials was "conscious"? Would you perform a Turing test? I mean, I've seen some very convincing bots that mimic human typing patterns on the Internet that might fool a Turing test. How would you be able to tell if something was conscious or was not conscious?

For example: I am a human being typing these responses to you. Presumably, you're sitting in front of a computer reading my replies. Your computer is not arguing with you; I am. And yet your computer would appear to be arguing with you if you took away the reality of the human being, me, on the other end.

Can you give me a test that would prove something is conscious and not just a machine that is mimicking the behavior of a conscious being like me?

And if you can't do that now, will you EVER be able to tell the conscious from the unconscious? And if you can never tell them apart, then how would you ever know that you created something that had consciousness?

The above point about determining whether something is "conscious" is neither unimportant nor trivial. You are saying that a machine can be made to be "conscious", and yet, if you don't have a way of telling whether it's conscious or not, how are you going to know? And if you can't know, then why try to create a "conscious machine" at all? What is the point of it?
BigMike
Posts: 2210
Joined: Wed Jul 13, 2022 8:51 pm

Re: The Power of Art and Emotion: Should We Worry About Manipulation?

Post by BigMike »

seeds wrote: Tue Jan 14, 2025 1:33 am _______
Seeds, let’s cut through the noise and get to the heart of the matter—your understanding of gravity and spacetime, while garnished with metaphors and appeals to poetic interpretations of quantum physics, lacks clarity and scientific precision. You’re throwing around terms like "affects" and "curves" without grounding them in proper context, and your attempts to discredit a materialist perspective are, frankly, a mess of conflated ideas and logical leaps.

Gravity doesn’t “affect” spacetime as if the two are separate entities; gravity is the manifestation of spacetime curvature caused by mass and energy, as per Einstein’s field equations. When you say that gravity "affects" spacetime, you misunderstand the relationship. Remove the mass causing the curvature, and the gravitational field vanishes because spacetime isn’t being acted upon—it’s simply not being curved anymore. Your Microsoft Copilot excerpt doesn’t contradict this—it aligns with it. The emphasis on "affect" is your own semantic spin, not a refutation of the physics.
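To put that relationship in symbols (this is the standard textbook statement, nothing specific to this exchange), Einstein's field equations place the curvature of spacetime on one side and its mass-energy content on the other:

```latex
% Einstein field equations: curvature (left) is sourced by mass-energy (right)
G_{\mu\nu} \;\equiv\; R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} \;=\; \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
```

Here G_{\mu\nu} encodes the curvature and T_{\mu\nu} the distribution of mass and energy; the equation relates the two directly, rather than treating gravity as some third thing that acts on spacetime from outside.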

Your appeal to quantum mechanics with Nick Herbert’s "no more substantial than a promise" quip is pure sensationalism. Quantum mechanics describes fundamental particles and their interactions, and while it challenges classical intuitions about substance and reality, it doesn’t mean the universe is some illusory dream. You’re cherry-picking phrases to bolster your argument while ignoring the rigorous framework that makes those phrases meaningful in context.

Now, onto your ghostly musings about strong emergence. If you’re suggesting that consciousness or "ghosts in the machine" arise through principles outside known physics, then you’re speculating without evidence. Strong emergence is a placeholder for phenomena we don’t yet fully understand, not a license to inject mysticism. You accuse materialism of being shallow, yet your framework relies on hand-waving complexity as "proof" of something transcendent without explaining how it interacts with the physical world. That’s not deep; it’s evasive.

Finally, the condescension about "sleepwalking" through life is laughable. Resorting to ad hominem attacks cloaked in philosophical language doesn’t strengthen your position—it underscores its weakness. Reality, as described by science, doesn’t need to conform to your poetic longings or metaphysical speculation. If you’re so eager to dismiss the material world as an illusion, at least have the intellectual rigor to provide a framework that accounts for empirical evidence and doesn’t crumble under scrutiny.

So, Seeds, enjoy your musings about promises and dreams. But until you ground your arguments in something more substantial than rhetorical flourishes, you’re the one drifting through this discussion untethered by reason.
BigMike
Posts: 2210
Joined: Wed Jul 13, 2022 8:51 pm

Re: The Power of Art and Emotion: Should We Worry About Manipulation?

Post by BigMike »

Gary Childress wrote: Tue Jan 14, 2025 1:54 am
BigMike wrote: Tue Jan 14, 2025 1:28 am
Gary Childress wrote: Tue Jan 14, 2025 1:16 am

OK. So I did not say that consciousness was "mystical". I said it made humans special over machines. I said it sets us apart from machines. Do you disagree with that statement?

As far as "outside the physical realm", I would say that a human being lives very much within the physical realm. So again, I'm not seeing a problem with my words unless you are reading into them something I did not say.

1) What evidence do you have that suggests machines can be made to feel physical pain just as humans and (presumably many other living beings) do?

2) Why would you think that a machine can feel pain?

3) Or maybe you could unpack your statement a bit, what are you defining as a "machine"? I define a "machine" as a tool that humans use to improve our own lives and perhaps even those of other living beings if we can. If something feels things, then it is not a "machine". If something is conscious, then it is not a "machine". Do you disagree?
Gary, I appreciate your clarifications, and it seems we’re converging on some shared ideas while differing on how we frame them. Let me address your points directly.

Consciousness certainly makes humans and other sentient beings unique compared to machines, as we currently define them. Where we might diverge is in what "special" means. I don’t dispute that consciousness is extraordinary; I’m arguing that its emergence doesn’t require us to step outside the physical realm or deny our mechanistic nature. Consciousness is the result of incredibly intricate physical processes. It’s a hallmark of evolution, not a mystical exception.

Regarding machines and pain, my point wasn’t to claim that machines currently feel pain. They don’t. Machines, as designed today, are tools—they lack the evolved neural frameworks that generate pain as a survival mechanism. However, if one day we create artificial systems complex enough to simulate or replicate the processes underlying pain—complete with the subjective experience of suffering—then we’d face new ethical considerations about their treatment. This isn’t a prediction but a logical extension of understanding how biological systems work.

When you define a "machine" as a tool humans use to improve life, it’s a functional and practical definition, and I wouldn’t argue against its utility. But in a broader sense, humans themselves operate mechanically. Our cells, organs, and even thoughts follow physical laws. What makes us unique is the complexity and self-awareness enabled by that machinery—not some fundamental difference from it.

To answer your questions: the evidence supporting the possibility of machines experiencing something akin to pain lies in our growing understanding of consciousness as a product of physical processes. If we replicate the necessary conditions—whether in biological or synthetic systems—we could theoretically create something that experiences. This doesn’t diminish human consciousness; it highlights how remarkable physical systems can be when they reach a certain level of complexity.

Would you agree, then, that our distinctiveness comes not from being outside the realm of machinery but from the astonishing complexity and refinement of our "machine"?
Again, I'm hesitant to call a living organism that feels pain and what not a "machine". To me it's conflating terminology. If my computer somehow demonstrates that it is conscious and feels pain when I turn it off, then it's no longer a machine, it's a whole other ballgame, with LOTS of moral implications. Until then, turning the off switch on my computer is little different to me than turning a screwdriver.

You bring up the possibility of creating consciousness in machines. May I ask why you would wish to do that? Is there something about creating a conscious machine which would be beneficial or desirable to us (given the tradeoff of creating something that we would then have moral obligations toward)?

Secondly, how would you be able to determine if a machine was conscious? Do you think we could build a machine that "looks" conscious but is not? And if we could build a machine that looks conscious but is not, then what criteria would you use to determine whether something we built out of inorganic materials was "conscious"? Would you perform a Turing test? I mean, I've seen some very convincing bots that mimic human typing patterns on the Internet that might fool a Turing test. How would you be able to tell if something was conscious or was not conscious?

For example: I am a human being typing these responses to you. Presumably, you're sitting in front of a computer reading my replies. Your computer is not arguing with you; I am. And yet your computer would appear to be arguing with you if you took away the reality of the human being, me, on the other end.

Can you give me a test that would prove something is conscious and not just a machine that is mimicking the behavior of a conscious being like me?

And if you can't do that now, will you EVER be able to tell the conscious from the unconscious? And if you can never tell them apart, then how would you ever know that you created something that had consciousness?

The above point about determining whether something is "conscious" is neither unimportant nor trivial. You are saying that a machine can be made to be "conscious", and yet, if you don't have a way of telling whether it's conscious or not, how are you going to know? And if you can't know, then why try to create a "conscious machine" at all? What is the point of it?
Gary, it seems your hesitance to call a living organism a "machine" stems from a deep respect for the complexity and moral significance of consciousness, and I get that. But your conflation of "machine" with something cold, lifeless, and devoid of moral weight limits the discussion. When I refer to humans as "machines," it’s not to diminish them, but to emphasize that our biology—neurons firing, hormones regulating, cells communicating—operates mechanistically, obeying physical laws, just like any other physical system. What makes us extraordinary isn’t a mystical essence, but the sheer complexity and emergent properties of this machinery, like consciousness.

Now, onto your broader concerns. You ask why we’d even want to create conscious machines. It’s a valid question. The pursuit isn’t about frivolously endowing machines with the ability to suffer—it’s about understanding consciousness itself. By attempting to replicate it, we deepen our knowledge of what makes us, us. That knowledge could lead to advances in mental health, empathy, and even ethics. For instance, studying consciousness could help us better address the pain and suffering of living beings or create tools that assist us in ways currently unimaginable.

But how do we determine if something is truly conscious and not just mimicking consciousness? This is the crux of your concern. Turing tests, while useful for assessing behavior, don’t touch on subjective experience. The challenge lies in the fact that consciousness is inherently first-person; you know you’re conscious because you experience it, but proving that about anyone—or anything—else is a leap of inference based on observable behavior and internal coherence.
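Gary's worry about mimicry can be made concrete with a toy sketch (hypothetical names, not any real test protocol): a scripted responder built from a lookup table passes a purely behavioral check, while plainly having no inner experience at all.

```python
# A trivially scripted "responder": no internal experience, just canned pattern-matching.
CANNED = {
    "are you conscious?": "Of course I am. Aren't you?",
    "do you feel pain?": "Yes, and I'd rather not talk about it.",
}

def scripted_responder(prompt: str) -> str:
    # Fall back to a vague deflection for anything unscripted.
    return CANNED.get(prompt.lower().strip(),
                      "That's an interesting question. What do you think?")

def naive_behavioral_test(respond) -> bool:
    """Passes if the system *claims* to have experience when asked.
    Purely behavioral: it inspects the answer, not the answerer."""
    answer = respond("Do you feel pain?")
    return "yes" in answer.lower()

# The lookup table passes the behavioral test, illustrating why behavior
# alone can't settle the question of subjective experience.
print(naive_behavioral_test(scripted_responder))  # True
```

The point of the sketch is only that any test defined over outputs can in principle be satisfied by a system that produces those outputs without experiencing anything, which is exactly the gap between behavior and first-person experience described above.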

In the absence of direct access to another entity’s subjective experience, we rely on proxies. For humans, we look at brain activity, reports of experience, and behavior. For machines, if we could build something that demonstrates similar complexity, adaptability, and self-referential processing, we’d have to entertain the possibility of consciousness. But you’re right—there’s no definitive "test" yet, and maybe there never will be.

That said, the lack of a perfect test doesn’t render the pursuit pointless. Imagine if early scientists had abandoned the study of electricity because they didn’t yet understand its nature. The exploration of consciousness—natural or artificial—is a frontier. Even if we can’t immediately "know" a machine is conscious, the journey itself holds value, shedding light on our own nature.

So, the question isn’t whether we should pursue creating conscious machines, but whether we’re prepared to handle the moral and philosophical implications if we succeed. Would we treat them with the same respect we afford humans, or dismiss them as tools? And would that dismissal be a reflection of ignorance or arrogance?

Your questions reflect a deep-seated caution, and that’s healthy. But let’s not confuse the difficulty of the task with its impossibility or its value. The pursuit of understanding consciousness—our own or a machine’s—is one of the most profound endeavors we can undertake. And dismissing it outright risks missing a chance to expand our understanding of what it means to exist.
BigMike
Posts: 2210
Joined: Wed Jul 13, 2022 8:51 pm

Re: The Power of Art and Emotion: Should We Worry About Manipulation?

Post by BigMike »

Gary, I want to humbly share my personal understanding, acknowledging from the outset that it is limited and likely far from the full truth. Yet, it reflects my approach to the problem—grounded, seeking to avoid veering into mystical or otherworldly territory, and instead striving to keep the focus on what can be observed and reasoned.

Consciousness, to me, can be likened to a lantern at dusk, casting light on the patch of ground beneath its glow. This lantern doesn’t create the world it illuminates; rather, it reveals what’s already there. In the same way, consciousness doesn’t invent sensations, thoughts, or experiences—it allows us to notice and reflect upon them.

Let’s break this into components. Awareness is like the first flicker of the lantern’s light. It’s the raw sense that something is happening. This is not about understanding or naming what’s happening, just the act of sensing it—like the sunlight gently tapping against closed eyelids, signaling the presence of a new day.

Building on that is perception, the translator of sensory input. It’s the process of taking those raw sensations and shaping them into something meaningful—colors, textures, sounds, and emotions. Perception tells you whether a shadow in the dusk is a friend walking toward you or a branch swaying in the breeze. It gives the world its shape and its contours, allowing us to navigate it.

Then there’s memory, the storehouse of our experiences. Without it, consciousness would only exist in the fleeting present. Memory links the past and the future, providing context for our perceptions and awareness. It reminds us where we’ve been and offers clues about where we might go. It’s what allows us to recognize patterns and learn from experience.

Finally, self-recognition is what makes the lantern’s light unique. It’s that moment when the lantern’s glow turns inward, letting us see ourselves. This is the voice in your head that says, “This is me. I am thinking, I am feeling, I am here.” It’s what allows us to reflect, to see ourselves as part of the world we’re illuminating.

When you bring these components together—awareness, perception, memory, and self-recognition—you start to sketch a picture of consciousness. It’s not some mystical force outside the realm of the physical, nor is it reducible to a single function. It’s the interplay of these elements, much like the way a lantern works: casting light, revealing edges, creating clarity.

This metaphor isn’t perfect, and I’m sure it falls short of encompassing the full depth of what consciousness is. But I hope it offers a way to think about consciousness that is grounded, tangible, and real. It’s not about conjuring something from beyond; it’s about understanding how the elements of our physical reality combine to create the richness of experience we call "being."

I wonder if this perspective resonates with you. Does this way of approaching consciousness—through a grounded metaphor, rather than a mystical explanation—help clarify the way I try to keep things real while exploring such profound questions?
Gary Childress
Posts: 11748
Joined: Sun Sep 25, 2011 3:08 pm
Location: It's my fault

Re: The Power of Art and Emotion: Should We Worry About Manipulation?

Post by Gary Childress »

BigMike wrote: Tue Jan 14, 2025 9:21 am
Gary Childress wrote: Tue Jan 14, 2025 1:54 am
BigMike wrote: Tue Jan 14, 2025 1:28 am

Gary, I appreciate your clarifications, and it seems we’re converging on some shared ideas while differing on how we frame them. Let me address your points directly.

Consciousness certainly makes humans and other sentient beings unique compared to machines, as we currently define them. Where we might diverge is in what "special" means. I don’t dispute that consciousness is extraordinary; I’m arguing that its emergence doesn’t require us to step outside the physical realm or deny our mechanistic nature. Consciousness is the result of incredibly intricate physical processes. It’s a hallmark of evolution, not a mystical exception.

Regarding machines and pain, my point wasn’t to claim that machines currently feel pain. They don’t. Machines, as designed today, are tools—they lack the evolved neural frameworks that generate pain as a survival mechanism. However, if one day we create artificial systems complex enough to simulate or replicate the processes underlying pain—complete with the subjective experience of suffering—then we’d face new ethical considerations about their treatment. This isn’t a prediction but a logical extension of understanding how biological systems work.

When you define a "machine" as a tool humans use to improve life, it’s a functional and practical definition, and I wouldn’t argue against its utility. But in a broader sense, humans themselves operate mechanically. Our cells, organs, and even thoughts follow physical laws. What makes us unique is the complexity and self-awareness enabled by that machinery—not some fundamental difference from it.

To answer your questions: the evidence supporting the possibility of machines experiencing something akin to pain lies in our growing understanding of consciousness as a product of physical processes. If we replicate the necessary conditions—whether in biological or synthetic systems—we could theoretically create something that experiences. This doesn’t diminish human consciousness; it highlights how remarkable physical systems can be when they reach a certain level of complexity.

Would you agree, then, that our distinctiveness comes not from being outside the realm of machinery but from the astonishing complexity and refinement of our "machine"?
Again, I'm hesitant to call a living organism that feels pain and what not a "machine". To me it's conflating terminology. If my computer somehow demonstrates that it is conscious and feels pain when I turn it off, then it's no longer a machine, it's a whole other ballgame, with LOTS of moral implications. Until then, turning the off switch on my computer is little different to me than turning a screwdriver.

You bring up the possibility of creating consciousness in machines. May I ask why you would wish to do that? Is there something about creating a conscious machine which would be beneficial or desirable to us (given the tradeoff of creating something that we would then have moral obligations toward)?

Secondly, how would you be able to determine if a machine was conscious? Do you think we could build a machine that "looks" conscious but is not? And if we could build a machine that looks conscious but is not, then what criteria would you use to determine whether something we built out of inorganic materials was "conscious"? Would you perform a Turing test? I mean, I've seen some very convincing bots that mimic human typing patterns on the Internet that might fool a Turing test. How would you be able to tell if something was conscious or was not conscious?

For example: I am a human being typing these responses to you. Presumably, you're sitting in front of a computer reading my replies. Your computer is not arguing with you; I am. And yet your computer would appear to be arguing with you if you took away the reality of the human being, me, on the other end.

Can you give me a test that would prove something is conscious and not just a machine that is mimicking the behavior of a conscious being like me?

And if you can't do that now, will you EVER be able to tell conscious from unconscious? And if you can never tell conscious from unconscious, then how would you ever know that you had created something that had consciousness?

The above point about determining whether something is "conscious" is neither unimportant nor trivial. You are saying that a machine can be made to be "conscious" and yet, if you don't have a way of telling whether it's conscious or not, how are you going to know? And if you can't know, then why try to create a "conscious machine" at all? What is the point of it?
Gary, it seems your hesitance to call a living organism a "machine" stems from a deep respect for the complexity and moral significance of consciousness, and I get that. But your conflation of "machine" with something cold, lifeless, and devoid of moral weight limits the discussion. When I refer to humans as "machines," it’s not to diminish them, but to emphasize that our biology—neurons firing, hormones regulating, cells communicating—operates mechanistically, obeying physical laws, just like any other physical system. What makes us extraordinary isn’t a mystical essence, but the sheer complexity and emergent properties of this machinery, like consciousness.

Now, onto your broader concerns. You ask why we’d even want to create conscious machines. It’s a valid question. The pursuit isn’t about frivolously endowing machines with the ability to suffer—it’s about understanding consciousness itself. By attempting to replicate it, we deepen our knowledge of what makes us, us. That knowledge could lead to advances in mental health, empathy, and even ethics. For instance, studying consciousness could help us better address the pain and suffering of living beings or create tools that assist us in ways currently unimaginable.

But how do we determine if something is truly conscious and not just mimicking consciousness? This is the crux of your concern. Turing tests, while useful for assessing behavior, don’t touch on subjective experience. The challenge lies in the fact that consciousness is inherently first-person; you know you’re conscious because you experience it, but proving that about anyone—or anything—else is a leap of inference based on observable behavior and internal coherence.

In the absence of direct access to another entity’s subjective experience, we rely on proxies. For humans, we look at brain activity, reports of experience, and behavior. For machines, if we could build something that demonstrates similar complexity, adaptability, and self-referential processing, we’d have to entertain the possibility of consciousness. But you’re right—there’s no definitive "test" yet, and maybe there never will be.

That said, the lack of a perfect test doesn’t render the pursuit pointless. Imagine if early scientists had abandoned the study of electricity because they didn’t yet understand its nature. The exploration of consciousness—natural or artificial—is a frontier. Even if we can’t immediately "know" a machine is conscious, the journey itself holds value, shedding light on our own nature.

So, the question isn’t whether we should pursue creating conscious machines, but whether we’re prepared to handle the moral and philosophical implications if we succeed. Would we treat them with the same respect we afford humans, or dismiss them as tools? And would that dismissal be a reflection of ignorance or arrogance?

Your questions reflect a deep-seated caution, and that’s healthy. But let’s not confuse the difficulty of the task with its impossibility or its value. The pursuit of understanding consciousness—our own or a machine’s—is one of the most profound endeavors we can undertake. And dismissing it outright risks missing a chance to expand our understanding of what it means to exist.
According to some "mystical" traditions we were created by a higher intelligence. Now we human beings may be on the cusp of becoming creators. However, unlike our fabled gods, we humans are not omnipotent, omniscient, benevolent, and everything good and perfect. And unlike the creations of our fabled gods, our creations may become better suited to survive in this world than we (their creators) are, and thus more powerful than us. What happens if our creations steal "fire" from us and take off on their own to establish their own civilization? Then what? Where is this all leading us?
BigMike
Posts: 2210
Joined: Wed Jul 13, 2022 8:51 pm

Re: The Power of Art and Emotion: Should We Worry About Manipulation?

Post by BigMike »

Gary Childress wrote: Tue Jan 14, 2025 10:15 am
According to some "mystical" traditions we were created by a higher intelligence. Now we human beings may be on the cusp of becoming creators. However, unlike our fabled gods, we humans are not omnipotent, omniscient, benevolent, and everything good and perfect. And unlike the creations of our fabled gods, our creations may become better suited to survive in this world than we (their creators) are, and thus more powerful than us. What happens if our creations steal "fire" from us and take off on their own to establish their own civilization? Then what? Where is this all leading us?
Gary, your concerns seem to hinge on a fear of the unknown—of what might happen if humanity's creations surpass their creators. I can understand the trepidation, but your argument dances precariously close to a call for halting progress because of imagined worst-case scenarios. Is that your intention? Should we have avoided creating fire, electricity, or medicine because we couldn’t foresee every possible consequence?

The notion that we could create beings "better suited to survive" than ourselves isn’t an argument against advancement—it’s a challenge to ensure that our creations are guided by ethical considerations and safeguards. Fear alone isn’t a reason to stop exploring; it’s a reason to explore responsibly.

Are you genuinely advocating for stagnation, or is this just an unwillingness to confront the responsibilities and possibilities that come with progress?
Gary Childress
Posts: 11748
Joined: Sun Sep 25, 2011 3:08 pm
Location: It's my fault

Re: The Power of Art and Emotion: Should We Worry About Manipulation?

Post by Gary Childress »

BigMike wrote: Tue Jan 14, 2025 11:43 am
Gary, your concerns seem to hinge on a fear of the unknown—of what might happen if humanity's creations surpass their creators. I can understand the trepidation, but your argument dances precariously close to a call for halting progress because of imagined worst-case scenarios. Is that your intention? Should we have avoided creating fire, electricity, or medicine because we couldn’t foresee every possible consequence?

The notion that we could create beings "better suited to survive" than ourselves isn’t an argument against advancement—it’s a challenge to ensure that our creations are guided by ethical considerations and safeguards. Fear alone isn’t a reason to stop exploring; it’s a reason to explore responsibly.

Are you genuinely advocating for stagnation, or is this just an unwillingness to confront the responsibilities and possibilities that come with progress?
Why are you asking these questions? I said nothing more than what was stated in my last reply. Do you find that parts of my last reply are inaccurate or impossible? And if so, what parts are inaccurate or impossible?
BigMike
Posts: 2210
Joined: Wed Jul 13, 2022 8:51 pm

Re: The Power of Art and Emotion: Should We Worry About Manipulation?

Post by BigMike »

Gary Childress wrote: Tue Jan 14, 2025 11:59 am
Why are you asking these questions? I said nothing more than what was stated in my last reply. Do you find that parts of my last reply are inaccurate or impossible? And if so, what parts are inaccurate or impossible?
Gary, your last reply isn't necessarily inaccurate or impossible—it's speculative, which is fine in philosophical discussions, but speculation without clear reasoning risks steering the conversation into ambiguity. You invoke the notion of humanity creating beings that could surpass us and speculate about their possible independence and dominance. But you stop short of explaining why this scenario is inherently problematic, inevitable, or even worth fearing beyond its novelty.

The questions I posed were meant to clarify your stance. Are you pointing out potential risks for the sake of caution, or are you implying we should stop advancing altogether? Your framing leaves room for either interpretation, and that ambiguity undermines the strength of your argument. If you’re simply encouraging mindfulness and responsibility, I agree. But if you’re insinuating that we should fear progress to the point of paralysis, that deserves a direct challenge.

So, I ask again, do you view progress as inherently dangerous, or are you calling for careful, deliberate action? Either position is valid for debate, but the distinction matters in addressing your concerns.
Gary Childress
Posts: 11748
Joined: Sun Sep 25, 2011 3:08 pm
Location: It's my fault

Re: The Power of Art and Emotion: Should We Worry About Manipulation?

Post by Gary Childress »

BigMike wrote: Tue Jan 14, 2025 12:49 pm
Gary, your last reply isn't necessarily inaccurate or impossible—it's speculative, which is fine in philosophical discussions, but speculation without clear reasoning risks steering the conversation into ambiguity. You invoke the notion of humanity creating beings that could surpass us and speculate about their possible independence and dominance. But you stop short of explaining why this scenario is inherently problematic, inevitable, or even worth fearing beyond its novelty.

The questions I posed were meant to clarify your stance. Are you pointing out potential risks for the sake of caution, or are you implying we should stop advancing altogether? Your framing leaves room for either interpretation, and that ambiguity undermines the strength of your argument. If you’re simply encouraging mindfulness and responsibility, I agree. But if you’re insinuating that we should fear progress to the point of paralysis, that deserves a direct challenge.

So, I ask again, do you view progress as inherently dangerous, or are you calling for careful, deliberate action? Either position is valid for debate, but the distinction matters in addressing your concerns.
I wish I could tell you one way or the other. Ultimately it depends on what happens in the unforeseeable future.

I'm told there is a scene in the movie "Oppenheimer" where the general heading up the project asks the scientist what will happen when the button is pushed for the first time, detonating the very first man-made nuclear explosion on Earth. The possibility comes up in the ensuing conversation that there is an infinitesimal chance it would ignite the atmosphere and destroy the world, but that the risk is "extremely negligible". They thought about it for a moment and discussed the risk. And then the button was pushed.

The atmosphere did not ignite.

Think about that. Two men, the general and J. Robert Oppenheimer, decided in that moment that the world would take an infinitesimal risk of destroying itself. Did they consult ANYONE else in the world in that moment about whether or not we all wanted to take such a risk?

Two men. The world wasn't destroyed. Were we lucky? If so, then will we always be lucky?
phyllo
Posts: 2525
Joined: Sun Oct 27, 2013 5:58 pm
Location: Victory in Ukraine

Re: The Power of Art and Emotion: Should We Worry About Manipulation?

Post by phyllo »

Gary Childress wrote: I'm told there is a scene in the movie "Oppenheimer" where the general heading up the project asks the scientist what will happen when the button is pushed for the first time, detonating the very first man-made nuclear explosion on Earth. The possibility comes up in the ensuing conversation that there is an infinitesimal chance it would ignite the atmosphere and destroy the world, but that the risk is "extremely negligible". They thought about it for a moment and discussed the risk. And then the button was pushed.

The atmosphere did not ignite.

Think about that. Two men, the general and J. Robert Oppenheimer, decided in that moment that the world would take an infinitesimal risk of destroying itself. Did they consult ANYONE else in the world in that moment about whether or not we all wanted to take such a risk?

Two men. The world wasn't destroyed. Were we lucky? If so, then will we always be lucky?
That's a movie, Gary. It's a dramatization of events that spanned years, compressed into 3 hours with a limited set of characters and events.

In reality, the possibility of the atmosphere igniting was discussed over a long period of time by the scientists involved in the project.
Gary Childress
Posts: 11748
Joined: Sun Sep 25, 2011 3:08 pm
Location: It's my fault

Re: The Power of Art and Emotion: Should We Worry About Manipulation?

Post by Gary Childress »

phyllo wrote: Tue Jan 14, 2025 2:04 pm
That's a movie, Gary. It's a dramatization of events that spanned years, compressed into 3 hours with a limited set of characters and events.

In reality, the possibility of the atmosphere igniting was discussed over a long period of time by the scientists involved in the project.
Fair enough.
Alexis Jacobi
Posts: 8301
Joined: Tue Oct 26, 2021 3:00 am

Re: The Power of Art and Emotion: Should We Worry About Manipulation?

Post by Alexis Jacobi »

If I grasp the implications in BigMike’s stance, man — a given man — is understood to have no immediate agency. In one way or another he is controlled by environment, past conditioning, and whatever mental organization within that emergent phenomenon of the brain-stuff. There is an unstated supposition here: man’s perception, or that of many men, cannot be relied on as producing a really accurate picture of what is “real”. If man’s “freedom” is an illusion, a comforting but false illusion, BigMike suggests that there is a proper ordering of the brain which, if one follows his argument, is best determined by scientists and physicalists (physiologists and physicists).

Once man accepts the real facts of his existence, he must realize that he is fundamentally a machine, a machine with a program, but a program that must be revised. More accurately, it must be rewritten from the ground up and must accord with physiological and scientific facts.

The issue of politics, a big stress in BigMikeanism, as well as education, points to the need for an elite group to assume control over the proper and the real definitions of what ‘man’ is. Truthfully, this involves a project of defining man anthropologically. One imagines the culture wars that must arise as an elite scientism class, with sufficient control over the mechanics of civilization, assumes a veritable authoritarian position.

Those of outmoded views, those whose neural networks are not properly aligned, will require reprogramming. Here, brain-science must enter in. How will the neural networking which, for example, is obviously aligned correctly in BigMike himself since he holds the “winning argument” against all comers, be established among the errant? This leads to interesting speculation.

First, chemical intervention must be considered since, in truth, the brain is a biological structure determined by chemical and molecular interactions. It could be said that disordered seeing, mistaken understanding, has its location in specific regions of the brain and that — perhaps at some point (?) — scientific, medical, behavioral psychological intervention will become possible (now it is in a rudimentary state).

The right people, with the right physiological understanding, and the power of the state and private industry, must be given the control to rework man, to renovate him, to intervene in the causal chain that has led to the advent of the “mistaken man” who (obviously?) rules the present.

Can there — will there — arise new technologies to be placed into the hands of people who see as rightly as BigMike does, to make the transition process go smoother? I don’t know. Chemical intervention is obvious, but what of some type of vibratory frequency that (who knows how) might help in getting the right ions to fire more in accord with truthful seeing and understanding?

Is not the issue ultimately one of political control? I mean, what of those who refuse to be convinced by “reasonable argument” such as that with which BigMike is so adept? Obviously, something is wrong with them. Can we stand by while these people go on spreading false-view that does not accord with the New Anthropology?
Gary Childress
Posts: 11748
Joined: Sun Sep 25, 2011 3:08 pm
Location: It's my fault

Re: The Power of Art and Emotion: Should We Worry About Manipulation?

Post by Gary Childress »

Alexis Jacobi wrote: Tue Jan 14, 2025 2:23 pm If I grasp the implications in BigMike’s stance, man — a given man — is understood to have no immediate agency. In one way or another he is controlled by environment, past conditioning, and whatever mental organization within that emergent phenomenon of the brain-stuff. There is an unstated supposition here: man’s perception, or that of many men, cannot be relied on as producing a really accurate picture of what is “real”. If man’s “freedom” is an illusion, a comforting but false illusion, BigMike suggests that there is a proper ordering of the brain which, if one follows his argument, is best determined by scientists and physicalists (physiologists and physicists).

Once man accepts the real facts of his existence, he must realize that he is fundamentally a machine, a machine with a program, but a program that must be revised. More accurately, it must be rewritten from the ground up and must accord with physiological and scientific facts.

The issue of politics, a big stress in BigMikeanism, as well as education, point to the need for an elite group to assume control over the proper and the real definitions of what ‘man’ is. Truthfully, this involves a project of defining man anthropologically. One imagines the cultural wars that must arise as an elite scientism class, with sufficient control over the mechanics of civilization, will assume a veritable authoritarian position.

Those of outmoded views, those whose neural networks are not properly aligned, will require reprogramming. Here, brain-science must enter in. How will the neural networking which, for example, is obviously aligned correctly in BigMike himself since he holds the “winning argument” against all comers, be established among the errant? This leads to interesting speculation.

First, chemical intervention must be considered since, in truth, the brain is a biological structure determined by chemical and molecular interactions. It could be said that disordered seeing, mistaken understanding, has its location in specific regions of the brain and that — perhaps at some point (?) — scientific, medical, and behavioral-psychological intervention will become possible (it is now in a rudimentary state).

The right people, with the right physiological understanding, and the power of the state and private industry, must be given the control to rework man, to renovate him, to intervene in the causal chain that has led to the advent of the “mistaken man” who (obviously?) rules the present.

Can there — will there — arise new technologies to be placed into the hands of people who see as rightly as BigMike, to make the transition process go more smoothly? I don’t know. Chemical intervention is obvious, but what of some type of vibratory frequency that (who knows how) might help in getting the right ions to fire more in accord with truthful seeing and understanding?

Is not the issue ultimately one of political control? I mean, what of those who refuse to be convinced by “reasonable argument” such as that with which BigMike is so adept? Obviously, something is wrong with them. Can we stand by while these people go on spreading false views that do not accord with the New Anthropology?
What a brave new world it will be!
Alexis Jacobi
Posts: 8301
Joined: Tue Oct 26, 2021 3:00 am

Re: The Power of Art and Emotion: Should We Worry About Manipulation?

Post by Alexis Jacobi »

BigMike wrote: Gary, your concerns seem to hinge on a fear of the unknown—of what might happen if humanity's creations surpass their creators. I can understand the trepidation, but your argument dances precariously close to a call for halting progress because of imagined worst-case scenarios. Is that your intention?
Halt progress?!? How could you say such a thing, Brother?

Does one really have a choice, Mike, when you — a genuine philosophical Apollo — demonstrate time and again that you understand Reality? You see things as they are. There’s none of that errant lying or self-deception in you, is there?

You have the only real base there is: a brain scientist’s grasp of the brain-structure. Man really is a very, very advanced machine and — you really do believe your powerful argumentation, don’t you? — he, man, can be renovated so that he thinks, well, like you!

You are a guiding intellect, BigMike; you have been determined in this sense, and it must be realized — we must arrive at this voluntarily — that you and people like you should be awarded the power to fix man and the world.