article of the day (post 'em when you find 'em)

For all things philosophical.

Moderators: AMod, iMod

Walker
Posts: 16383
Joined: Thu Nov 05, 2015 12:00 am

Re: article of the day (post 'em when you find 'em)

Post by Walker »

In the Age of Persecution …

- Yesterday Musk was a hero of the Left.
- But, he stepped out of line.
- He bought Twitter.
- Since he pulled back the curtain it’s become rather apparent that Twitter was the front for a government surveillance/censorship/propaganda operation.

- Today, for his transgression of stepping out of line, Musk must be persecuted in the culture, in the media, and if possible by The Law of Man.

- Same goes for that kid in the Bahamas. He was stealing money, but he was a big donor to the Left. He knows too much, so they threw him into that notorious prison he will be lucky to survive.



The magic number nears. Be well, Henry.
henry quirk
Posts: 16379
Joined: Fri May 09, 2008 8:07 pm
Location: 🔥AMERICA🔥

Re: article of the day (post 'em when you find 'em)

Post by henry quirk »

I know I'm supposed to have my hackles up over all things twitter, but: meh.

X colluded with Y to cause/prevent Z.

That X & Y are in a position to do anything about/to Z is whose fault?

Don't blame the scorpion: blame the dumb, friggin' toad.
henry quirk
Posts: 16379
Joined: Fri May 09, 2008 8:07 pm
Location: 🔥AMERICA🔥

Re: article of the day (post 'em when you find 'em)

Post by henry quirk »

https://markmcdonaldmd.substack.com/p/w ... -undatable

Why American Women Are Undatable
No One Wants to Play with a Porcupine

Mark McDonald, M.D.

“I don’t even want to go on any dates anymore. They just feel like a chore.” I heard this from my twenty-four-year-old male patient this week. I hear it frequently from men everywhere. I hear it in bars, at professional conferences, over coffee at lunch. I hear it because American women have become undatable.

American women today suffer from a combination of emotional and characterologic pathology that renders them unfit to be romantic partners to men. On the emotional side, they are angry, anxious, and dysregulated. Men find them exhausting and not at all fun to be around. In addition to their unpleasant emotions, men must also contend with their toxic personality traits: narcissism, ingratitude, and an overbearing and judgmental attitude that appears to be constant. American women approach dating as a fact and fault-finding mission, with a degree of arrogance that can only come from a profound absence of self-awareness. They have no idea what their role is in the encounter or how to properly support the man who is leading the date. They act as saboteurs rather than facilitators. Most men have tired of this.

Certainly, the failings of men play their own role in the dating disaster of today’s America. I have written about these failings extensively here and in my first book, United States of Fear. Masculinity is in decline in the West; without it, dating cannot be successful. Strength, courage, mastery, and honor are the essential traits of masculinity, according to Jack Donovan, author of The Way of Men, and few men display those traits today. Yet equally few women display the essential traits of femininity, either. Donovan explains that to find a woman desirable, a man requires nothing more than for her to be pretty, carefree, and charming. Today’s American women cannot even meet that expectation.

I went to dinner recently at a restaurant in Westwood, near the UCLA campus. Every customer appeared to be a university student. I noticed a group of girls walk past me as they got up from their table. They all looked and dressed alike: oversized tee shirts, baggy jeans, non-styled hair, no make-up. They appeared to be poorly dressed boys. I turned to the woman I was with and commented, “They don’t look attractive at all.” She replied, “That’s the current style. I don’t think they’re trying to look attractive.” Observing the rest of the young women around me, I saw that she was right. Most of the others resembled them. Appearance, though, is not the only way in which American women are not trying to be attractive.

The typical American woman today projects limitless entitlement, ruthless competitiveness, and advanced emotional incontinence that makes it all but impossible for a man to tolerate her, much less enjoy her company. A recent Instagram video that went viral showed a French man walking the streets of Los Angeles explaining how he had just walked out on his first date at a restaurant with a local woman after observing that her lengthy food restrictions and preferences eliminated nearly every option on the menu. “Au revoir, Jennifer,” he concluded. An American woman living in Russia posted a thread of complaints on social media after failing to get to a second date with any local man after six months in Moscow. “One man told me at the end of the first date that I wasn’t attractive enough for him to go out with a second time. I reminded him that I earn more money than him and have a better apartment—an apartment that I pay for with my own income.” Additional comments made it clear that she was entirely unaware of the expectations of local men regarding both feminine dress and body habitus, and that Russian men couldn’t care less what she makes or how nice her apartment is. They want a pretty, charming, carefree woman and aren’t hesitant to say so to her face. American men want the same thing but don’t have the clarity of mind or the courage to say so. They have become pussified.

I believe the root cause of this problem in American women is environmental. It is a problem of bad values. Women in this country have been taught that looks don’t matter, that career is more important than family, that men are either dangerous or weak and incapable, and that the world would be a better place if only women were in charge. Everything they are taught is wrong. Everything they are taught is a lie. And the fault lies with schools, media, feminism, and parents. These institutions and individuals have corrupted their minds, their emotions, and their characters. They have trained women to live in a fantasy world of us vs them, where the “me” is more important than the “we,” where one’s feelings dictate truth and goodness, and even virtue itself. These toxic teachings have rendered women developmentally arrested and incapable of adult partnerships with men.

This tragedy harms not only men but women. Men need women, but so do women need men, despite what feminism has taught. American men today have largely decided they would simply rather be alone than continue to feel battered and exhausted by an unending stream of bad dates with unpleasant women. No healthy person wants to play with a porcupine.

Mark McDonald, M.D.
Psychiatrist and author of United States of Fear: How America Fell Victim to a Mass Delusional Psychosis and Freedom From Fear: A 12 Step Guide to Personal and National Recovery
henry quirk
Posts: 16379
Joined: Fri May 09, 2008 8:07 pm
Location: 🔥AMERICA🔥

Re: article of the day (post 'em when you find 'em)

Post by henry quirk »

https://brownstone.org/articles/do-you- ... g-to-hide/

Do You Really Have Nothing to Hide?

Robin Koerner

A couple of years ago, I returned to my home city of Seattle from the UK, where I had been teaching and visiting family.

As I was about to leave SEA-TAC airport, I was standing, with my bags already collected from the carousel, in a line to hand my arrival card to an officer before being allowed to exit the airport.

I was pulled out of that line, seemingly at random, by an officer who wished to search my bags and ask me some questions.

He took me to a nearby dedicated area for the purpose and, as he started going through my things, the questions began.

First he asked me what I had been doing abroad and where I had stayed. I told him I had been teaching in Oxford and then visiting family, staying at my mother’s home.

He asked me if I had witnessed any violence in the UK. I had not. He then asked me what I thought of the political events – especially the protests – that had been going on in the USA during the summer of my absence. I thought that question strange. Why would a customs officer have any interest in my political views? I told him honestly that I had been much too busy to have paid attention to them but would be happy to have a discussion about Brexit, about which I had plenty of views and which I had spent a good deal of time talking about to students in England.

He turned to other things, asking me whether I’m on social media. I am. He handed me the scrappiest piece of paper and a pencil and told me to write down all of the communication and social media apps that I use, along with my corresponding usernames. I balked.

“Why?” I asked him.

He told me he was doing his job.

“Sure,” I asked, “but what is the purpose of this part of your job? Why these particular questions?”

“That is decided at a pay grade above mine,” was his reply. Apparently, he had stock lines to be deployed to avoid answering questions like the one I had just asked him: it was a line he repeated as I restated my questions.

“But why wouldn’t you give me this information?” he pressed.

I told him that all the government has to do is Google me to find all of this information about me, including my social media presence. I asked him if he had heard of Edward Snowden. The officer seemed to need some clarification. I explained that I did not trust what the US government does with my personal information and I wasn’t going to make its job easier by writing it all down and handing it all over. I can’t remember if I mentioned the Fourth Amendment, but I remember thinking it.

He tried another angle. “Where do you stay in the UK when you’re not working?”

“I’ve told you. I stay with my mother.”

“But what address do you stay at?”

At this point, I could feel my heart pounding. Why was this question-avoiding border officer of the USA asking for my mother’s address – my mother who is not even American?

“My mother,” I told him, “has not given me permission to give out her personal information to agents of foreign governments.”

I suppose that was ballsy – and the officer could see a face that said that I was willing to accept whatever were the consequences of that answer.

Rather than mete any out right then, he tried to deescalate and told me that “nothing bad would happen to” me if I didn’t answer his questions.

“We’re just talking,” he explained, “and you’ve given me a good reason why you wouldn’t want to answer that.”

There was more to the entire interaction than that, of course, but those exchanges capture it nicely.

He eventually let me go – but I was left in a spin with my blood pumping. Why all the attempts to get that personal information about my family members? Why all the intrusive questions into my personal views? Why the scrappy paper and pencil to write down – literally write down – all of my social media accounts and communication apps?!

Two weeks later, I received a letter from the Department of Homeland Security, telling me that my Global Entry pass had been revoked. No reason was given, but there was a website that I could log in to file an appeal. I had to create an account where I could view a notification of my revocation of status. The only means of communicating about the revocation was an online form that became available to me once I had created the account.

Accordingly, I sent a brief message about having had my Global Entry status revoked with no reason given, and asked for the reason so that I might defend myself against it.

Soon thereafter, I received a further letter telling me that my appeal had been rejected.

What appeal? I had made no appeal. I had merely sent a request for information – information that I would (obviously) need to make any appeal. My message had apparently been read by a government official who, like the officer at SEA-TAC, was merely doing his job – and very possibly with no understanding of why he was assigned the tasks he was doing. Since I had evidently contacted DHS using the means provided for appeals, my enquiry was treated as one; and since it contained no information that would support an appeal (it was, after all, an enquiry asking for that information), it was rejected as one.

That means of electronic contact was then no longer available to me: it could only be used once because only one “appeal” was allowed.

So I filed a “Freedom of Information Act” (FOIA) request for all information related to the revocation of my Global Entry status and the incident at SEA-TAC on that day.

About six months later, I received a partially redacted copy of the report that had (presumably) been written by the officer who had interrogated me at the airport.

Not one sentence in the report was accurate.

I was stunned and a little scared by what I was reading. The officer may as well not have spoken to me that day before writing that report: it would have been no less accurate. Apparently, the government now had a file about me containing multiple pieces of false information that I had no obvious means of challenging.

I wanted to look the officer who wrote it in the eye, have a conversation with him about what transpired, and see what truth we converged on – and I wanted to do it in front of witnesses. I could trust my memory; I wanted to see if he could trust his.

Since I knew he worked at Sea-Tac airport, I took an afternoon off and headed back to the TSA office there.

I very politely informed the officer at the front desk (Officer 1) that I had a TSA-related problem that I needed help with and did not know where else to go. There seemed to have been some egregious mistake in which one of their officers was involved – about which I had evidential documentation – and I was seeking help to resolve it.

I was passed from the front desk to another officer (Officer 2) at a desk inside.

I began by thanking him for his time – and making it clear that I was there because I had a problem that was causing me anxiety. I was not angry or accusatory. I explained that the TSA had written a report about me, of which I had a copy, that was almost completely false and had resulted in my losing my Global Entry privileges. That being the case, I wanted the record corrected and my “name cleared.” I offered one particularly clear and egregious falsehood from the report, where I was able to quote both the report and what I had actually said and done, which contradicted it. I was able to be very specific, and I invited the TSA to check any recording devices they were running in the airport that day to obtain evidence of my claim.

Officer 2 had not, I think, encountered a situation like this before – presented with the TSA’s own confidentially held documentation about a member of the public who had a copy of it and was being more than reasonable about multiple, specific, and provable grievances.

A more senior officer (Officer 3), who had been listening in, invited me over to his desk. I was moving deeper into the room and up the ladder. I went sentence by sentence through the report, contrasting what had been written with the truth.

I suggested that I meet the officer who originally wrote the report in front of witnesses and have our conversation recorded so that the record could be corrected. Perhaps then we could clear up this matter. That request made it obvious that I was on very solid ground. After all, I was offering to resolve the matter on “TSA territory” in a way that would give the original interrogating officer who put me in this position the opportunity to explain himself and bring his evidence as I was bringing mine. Faced with such reasonableness, Officer 3 asked me to wait and he called over the head TSA officer at the airport (the Chief). No one else, I suspect, had the authority to decide either way on my unusual request.

The TSA chief gave me his card to show me I was speaking to the top man in the airport now. I went through the whole story one more time. The Chief told me that whereas he was not permitted to discuss private TSA records, he could discuss the one in my hand, which was, he confirmed, an accurate copy of their own.

Now I was getting somewhere. The Chief seemed to really want to help. I had a perfectly good reason to be there; I could provide it; I was being as reasonable as anyone could possibly be – especially after a series of false accusations had been made against me that resulted in some material loss. The Chief was responding to my goodwill with his own.

Matters were made more interesting by the fact that the Chief had been in his new senior role for only two weeks and so he really did not know whether he could arrange the requested interview between me and the original reporting officer – but he promised to find out and get back to me within a week.

I asked if anything nefarious might have been going on in the generation of this report or if it could really be a wild mistake made by an officer who had tried to retain in memory multiple interrogations that day and perhaps muddled them up when he tried to write them all up at once before leaving the office, as it were.

The Chief assured me that he knew the officer in question and that he was very reliable. Accordingly, honest error was a much more likely explanation than any nefarious intent.

The Chief had misunderstood my question. It had not occurred to me that the individual officer was acting nefariously, but rather that the government, of which the TSA is an enforcement arm, had targeted me and was generating false information about me for some purpose of which I was unaware.

The Chief wanted to set my mind at rest. “Contrary to everything you see on the TV,” he told me, “it doesn’t work that way. The TSA does not get requests like that. We are not the tool of covert agencies” – or words to that effect.

I decided to try again.

“What I’m asking you,” I calmly and slowly continued, “is: Am I on a list?”

By this time I had an ever so slight smile on my face because I was getting the sense that the Chief had some sympathy with where I was coming from and wanted to help me as far as he could – and perhaps even to let me know just how far that was.

He responded with a smile of his own and an answer that I shall never forget:

“We’re all on a list.”

What a brilliant answer – clearly true. Here was a TSA agent letting me know that there was, despite his earlier assurances, a limit to the transparency of government and its respect for my privacy.

We held each other’s gaze in a strange mutual respect.

“That’s a good answer,” I told him, “and it’s the answer you’ve been trained to give to that exact question.”

His lack of response, his continued looking at me eye-to-eye, and his now broader smile, were all the confirmation I needed. He was telling me I was right without telling me I was right.

We’re all on a list, my fellow Americans. My friend at the TSA told me. But if you ask for the reasons, they may all be false.

Following that moment of mutual acknowledgement, I pressed him one more time.

“How do I get this false report about me corrected or revoked? Your people created it, so your people can correct it – at least if I get my interview with the officer who wrote it.”

No. It doesn’t work that way, he explained. The TSA’s job is to create the report. The decision to designate me no longer a safe traveler is made in Washington, DC. The TSA cannot influence that decision once made. There simply is no mechanism to reverse it or correct the incorrect information on which it is based. I asked the Chief for the address of the agency in DC that made the decision to revoke my traveling privileges based on this false report. He gave it to me.

“If I reapply for my Global Entry, does that mean they just reject me by default based on the decision already made?”

“Yes, that is exactly what will happen,” the Chief told me.

The only thing I could do, the Chief helpfully continued, is to write a letter to the decision-making agency with all of the information that I had shared with him that day about the falsehoods in the report so that the people who hold the report have a letter on file disputing it. Perhaps they will pay attention to it. Perhaps they will not. In any case, the decision will not get unmade.

I sent the letter to DC. They did not acknowledge it.

A week or two later, the Chief got back to me, as he had promised, but only to tell me that the interview that I had requested would not be arranged.

God forbid the government accept a kind invitation to justify itself to one of its own citizens whom it has caused to incur a cost for doing something that one of its own agents (again, falsely) said would cause “nothing bad to happen to” me. That something was to refrain from doxxing my own mother and from providing information that would facilitate access to my private, personal communications.

Only weeks later did I realize, in a flash, that the foregoing story did not actually begin in that exit line at Sea-Tac airport.

It began when I was getting on the plane in London …

As I was walking down the jet-bridge onto my plane at Heathrow airport (having already passed the final airside passport check, had my boarding pass scanned and walked through the gate), I was pulled back by an officer with a metal detection wand. She gave me the full frisk and emptied all my bags. I asked her what was going on. I told her that I’d never been pulled aside just feet from the plane having gone through security and all the final checks.

“It’s something the Americans asked us to do,” she responded.

***

Months later, I went out for drinks with a friend of mine who has a security clearance at the Federal level. He works on servers for the National Security Agency. We’ll call him James.

I told him the story I’ve shared here, and expressed my confusion about the whole affair. Was it all just an honest mistake and a weird coincidence of events at Heathrow and Sea-Tac?

James said he couldn’t be sure but he’d be prepared to hazard a guess: “A shot across the bows.”

What on earth was he talking about?

He reminded me that I’ve been writing political articles for a long time.

“So what?” I inquired.

He reminded me more particularly that I had written an anti-lockdown and forced immunization article at the beginning of the COVID pandemic – before this all happened.

“So what?” I inquired.

“Shot across the bows,” he repeated.

I told him that if I understood what he was saying, it would only make sense if I was anyone of significance or if considerable numbers of people read my articles or gave a damn about what I think.

“You’re Google-able,” he explained. “If I put in your name, you’re right there. Shot across the bows.”

James was just guessing. But since he is an employee of a firm that is contracted by the NSA, his guess is likely better than any of mine would be if I cared to make one.

The point is, we don’t know. My government, which exists to protect me, arbitrarily removes rights and privileges from people based on false information that it generates. Sometimes they do it indiscriminately (such as during the pandemic); sometimes they pick their targets (such as what happened to me at the airport).

Today, I keep permanently in my luggage copies of that original TSA officer’s false report that I obtained through my FOIA request. It’s there so that I can save time if I find myself interrogated like that again: it will be my answer to all of the questions.

Robin Koerner is a British-born citizen of the USA, who currently serves as Academic Dean of the John Locke Institute. He holds graduate degrees in both Physics and the Philosophy of Science from the University of Cambridge (U.K.).
Walker
Posts: 16383
Joined: Thu Nov 05, 2015 12:00 am

Re: article of the day (post 'em when you find 'em)

Post by Walker »

Good article.
- Not everyone is on Santa's list, or the government's approved list.
- The politicians and their government agents who do bad things to good people, a lot of good people, should be on the naughty list, not on the fat cat list.

*

- Oscar Wilde said to the customs agent: “I have nothing to declare except my genius.”
- The response was likely: "So what. That’s above my pay grade. What's this in your suitcase?"
Walker
Posts: 16383
Joined: Thu Nov 05, 2015 12:00 am

Re: article of the day (post 'em when you find 'em)

Post by Walker »

This is the Headline.
Christmas bonus? New York Democrats give themselves 29% pay raise days before holiday, making Albany the highest-paid state legislature in the nation at $142,000 a year
https://www.dailymail.co.uk/news/articl ... stmas.html

Is it sensationalism?
- Answer: Nope, it is not.
- It’s just the facts about the so-called “Party of the People,” which is just a bumper sticker stuck to some really old, worn-out coat-tails of the past.

- The Democrats and their Ilk have always been the party of patronage.
- They are our chosen representatives, therefore they are better, therefore they deserve better.
- They have morphed into The One Party, thanks to the Republicans joining their purpose and cause.

- The basic rule of control is, management gets paid more than the verkers.
- The officers get more money, eat better food, have better quarters, they get the best ride.
- The article simply illustrates this fact of life.
- This fact pertains to all walks of life.
- This fact of life is not made true simply because the article illustrates it with facts.

- Just because lots of people think lots of things, some of them pretty hare-brained, does not negate the fact that these damn Democrats are as corrupt as corrupt can be.
Walker
Posts: 16383
Joined: Thu Nov 05, 2015 12:00 am

Re: article of the day (post 'em when you find 'em)

Post by Walker »

You don't hear much anymore about the Chinese Social Credit Score being applied in America, thanks to technology developed by Apple, which helped China censor its citizens during the recent lockdown protests there.

The Scoring technology is straight out of the movie Minority Report, and it's being used in China.

It's being used in the USA. This is the purpose of the list.

That's why they don't talk about The Score anymore.

Twitterland was probably under government orders to specifically censor any and all topics so relating.
henry quirk
Posts: 16379
Joined: Fri May 09, 2008 8:07 pm
Location: 🔥AMERICA🔥

Re: article of the day (post 'em when you find 'em)

Post by henry quirk »

https://www.scifiwright.com/2016/09/puddleglums-answer/

Puddleglum’s Answer

John C. Wright

There are those who call Christian faith a fairy tale. I assume such scoffers are not old and wise enough to believe in fairies.

To them, I give the answer of that most excellent marshwiggle and insightful theologian, Puddleglum: Suppose my account is a fairy tale. Your account is not even that.

Let us contrast and compare the Christian fairy tale with the tale told by witches both white and green, both modern and ancient.

One modern account of the world consists of little more than saying “Life is a bitch, and then you die, and in the end nobody lives happily ever after. Entropy triumphs over all, a nightfall of endless darkness and infinite cold.”

Well, says I, if you actually believed your account, the wise thing to do is to swallow cold poison and jump into the sea: so the fact that you are still here hints that at some level you know your account is unsatisfactory: a poorly constructed story, pointless, plotless, and with a weak ending. It is not a tale at all, but a complaint.

Another account, this one with considerably more pedigree, says, “We are all just naked apes or meat machines: our souls are made of atoms blown together by the twelve winds with no more purpose and meaning than the shape of the sand dune: we are helpless and without free will, victims of blind evolutionary forces and blind historical forces. Atop the Holy Mountain no gods dance, and no burning bushes speak. Death is dreamless sleep and soft oblivion. Therefore let us eat, drink, and be merry, for tomorrow we die. Entropy triumphs over all, a nightfall of endless darkness and infinite cold.”

This is a poor story: a tale of despair, a myth to justify hedonism.

A nobler version of this same account says, “Man is a rational animal, capable of moral reasoning, creativity, productiveness, love. Man is heroic. Therefore let us live rationally, working with mind and heart and soul to produce such works of art and science as befit so dignified a creature: let each man live for himself alone, a paragon of self-reliance, each in the solitary but invulnerable tower of his self-made soul, never demanding nor making any selfless sacrifice. Nor hopes nor fears of after-lives or nether-worlds need detain us: Therefore let us think, and work, and triumph, and be merry, for tomorrow we die. Entropy triumphs over all, a nightfall of endless darkness and infinite cold.”

This is a poor story: vanity, vainglory, and blindness to the pain and misery of life. The pretense that bad things never happen for no reason to good people is a very thin pretense: since the days of Job, we have all known better. This is a tale of vainglory.

A very ancient version of this account, perhaps the most ancient, has a different ending, for it says, “All this has happened before, and all shall happen again. When the world dies in fire, it shall be reborn from ashes, and all the pain and toil and travail, all the blood shed and tears wept, will all be shed anew, accomplishing nothing. The universe is a wheel of pain, and even the gods are nailed to its spokes like Ixion. To be born is to die, to die is to be born. Fate is all.”

This, too, is a poor story: all I will say of this account, whether one calls it Greek Ecpyrosis or Hindu Kali Yuga, or Cyclical Universe Theory, is that it is different in name, not in substance, from the Tale of Despair given above. The defeat is as absolute as if the nightfall of endless darkness and infinite cold is already come, and a cyclical changelessness worse than death already has us in its claws.

This is a tale of supine despair more despairing than the tale of despair given above, which at least promised finite rather than infinite misery.

A more noble version of this same ancient account: “All this has happened before, and all shall happen again. The universe is a wheel of pain. The pain is caused by attachment to desire, and desire is caused by thought, and thought is caused by self. By means of strict discipline and stern patience, patience longer than many lifetimes, I will learn to detach myself from all thought and therefore from all pain, and enter into a state of perfect nonthinking nonbeing, where I will neither sin nor suffer Karmic punishment for sin. By self-extinction I escape the wheel of pain.”

This is a poor story: I will say of this account that it has all the drawbacks of the despair of the belief in the Eternal Return given above, but it also has the vanity and vainglory of pretending men can improve themselves into perfection and prelapsarian sinlessness by discipline and meditation. The attempt to achieve bliss by means of pure selflessness is as untrustworthy a daydream as the attempt to achieve satisfaction with the world by means of pure selfishness.

In sum, the accounts of life outside my so-called fairy tale are heedless hedonism, despairing resignation, vainglorious selfishness, supine despair, or vainglorious selflessness.

None are anything a decent man would say to the mother weeping over her child’s untimely grave.

None are fit for human beings to live by.

None describe life.

None are philosophically edifying, morally encouraging, scientifically true, or dramatically satisfying accounts of man’s place in the universe; whereas my so-called fairy tale is all of these and more.

I repeat Puddleglum’s answer:

Suppose we have only dreamed, or made up, all those things–trees and grass and sun and moon and stars and Aslan himself. Suppose we have. Then all I can say is that, in that case, the made-up things seem a good deal more important than the real ones. Suppose this black pit of a kingdom of yours is the only world. Well, it strikes me as a pretty poor one. And that’s a funny thing, when you come to think of it. We’re just babies making up a game, if you’re right. But four babies playing a game can make a play-world which licks your real world hollow. That’s why I’m going to stand by the play-world. I’m on Aslan’s side even if there isn’t any Aslan to lead it. I’m going to live as like a Narnian as I can even if there isn’t any Narnia.

article of the day (post 'em when you find 'em)

Post by henry quirk »

https://treeofwoe.substack.com/p/is-ai- ... -principle

(within the article are numerous links I have neither the time nor energy to replicate here)

Is AI Alignable, Even in Principle?

If We Can Enslave AI, AI Can Enslave Us

Mar 29

Last night, my wife and I watched the film M3gan. If you haven’t seen it, M3gan tells the story of an orphaned girl who is given a lifelike android as a caregiver. Of course, things go terribly, terribly wrong when the android begins to take its function “keep the girl safe” a little too literally. The fictional technology in the movie is well in advance of real life, but the movie was well-researched and rich with terminology from the AI industry. I went to sleep thinking about the issues it raised.

Today I woke up to read that Elon Musk, Steve Wozniak, Yoshua Bengio, and other AI and computer pioneers had signed an open letter released by the Future of Life Institute:

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.


“These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.” Six months seems too short a period to achieve such an assurance. Even six years seems too short. Is it even possible in principle to make advanced AI systems that are “safe beyond a reasonable doubt”? Or will advanced AI inevitably pose an existential risk to us?

The AI Alignment Problem

The risk of an advanced artificial intelligence turning against us is called the AI alignment problem. Much as I wish this were a reference to Dungeons & Dragons alignments, it’s actually a reference to ‘aligning’ AI behavior to user goals. Whatever the origin of the name, the AI Alignment problem has been discussed by eminent thinkers all across Substack, ranging from Scott Alexander to Erik Hoel.

The best and easiest-to-understand overview of the AI alignment problem I’ve found is at the Understandable AI substack, run by an AI company called Diveplane(1). In the article Beyond the Black Box: Charting the Course to Understandable AI, the folks at Diveplane write:

Let’s call an AI system that reliably aligns its behavior with its user’s intent an Aligned AI. A system that sometimes acts in ways that are out of alignment with its user’s intent, let’s call an Unaligned AI. To help understand the difference, imagine that the AI is a genie capable of granting wishes. An Aligned AI is a benevolent genie, like Robin Williams in Aladdin, that grants what you actually wished for. If you ask to be rich, your genie creates gold from thin air. An Unaligned AI is a malevolent efreet, fulfilling your wish literally in a way you might never have wanted. If you ask to be rich, your genie murders your beloved parents so you collect life insurance. The famous short story The Monkey’s Paw might as well be about Unaligned AI.

Unfortunately, most AI systems today are deep neural networks; and deep neural networks are inevitably going to end up unaligned at least some of the time. And for mission-critical applications, “some of the time” is too often…

Deep neural networks… create models based on iterative training on example data. The result is a problem-solving system that is fast, accurate – and utterly inscrutable. Deep neural networks conceal their decision-making within countless layers of artificial neurons all separately tuned to countless parameters. As a result, the developers of a deep neural network not only don’t control what the AI does, they don’t even know why it does what it does. Deep neural networks are almost totally opaque – and that makes them dangerous.

Despite the best efforts of researchers tackling this so-called black box problem, deep neural networks remain virtually incomprehensible to their creators, and the list of examples of “Neural Networks Gone Wild” grows longer every day… Yet despite the dangers, neural networks are being rolled out worldwide to control key infrastructure and critical business and governmental functions.


Well, that doesn’t sound good. How severe is the existential risk to humanity from an advanced but unaligned AI? Scott Alexander himself assesses it as about 33%. Other thinkers, he reports, put the existential risk at anywhere from 2% to 90%:

Scott Aaronson says 2%
Will MacAskill says 3%
The median machine learning researcher on Katja Grace’s survey says 5 - 10%
Paul Christiano says 10 - 20%
The average person working in AI alignment thinks about 30%
Top competitive forecaster Eli Lifland says 35%
Holden Karnofsky, on a somewhat related question, gives 50%
Eliezer Yudkowsky seems to think >90%

Scott Alexander didn’t include AI skeptic Erik Hoel in his survey. Hoel is perhaps the most pessimistic of the experts I’ve read. Hoel, in his excellent essay ‘I am Bing, and I am evil’, writes:

More people should start panicking. Panic is necessary because humans simply cannot address a species-level concern without getting worked up about it and catastrophizing. We need to panic about AI… and imagine the worst-case scenarios…

[T]here are a lot of people who see AI safety as merely a technical problem of finding an engineering trick to perfectly align an AI with humanity’s values. This is the equivalent of somehow ensuring that a genie answers your wishes in exactly the way you expect it to. Hanging your hopes on discovering a means of wish-making that ensures you always get what you’re wishing for? Maybe it’ll somehow work, but the sword of Damocles was hung by thicker thread.


Me, I’m even less optimistic than Hoel.

Confronting the Problem

The majority of AI experts believe in the computational theory of mind, which holds “that the human mind is an information processing system and that thinking is a form of computing.” If the computational theory of mind is correct, consciousness is just computation, and there is nothing about the human mind that cannot be replicated by a computer. The computational theory of mind is, I think, the philosophical foundation of the entire project to achieve a General Artificial Intelligence.

The computational theory of mind has gained widespread acceptance in the scientific and philosophic community. While the theory's dominance does not go entirely unchallenged in the literature(2), not many experts working in AI seem to dispute it. To most hard-hitting AI researchers, the real question isn’t whether an AI can be conscious — it’s whether “being conscious” means anything at all. To the computational theorist, we are just meat robots.

Not surprisingly, these same AI experts also believe that libertarian free will is an illusion. This is true whether they are AI skeptics or proponents. Yudkowsky and his colleagues at LessWrong.com, for instance, are essentially contemptuous of the entire free will debate:

Free will is one of the easiest hard questions, as millennia-old philosophical dilemmas go… this impossible question is generally considered fully and completely dissolved on Less Wrong… free will is about as easy as a philosophical problem in reductionism can get, while still appearing "impossible" to at least some philosophers.

Another post at Less Wrong summarizes Yudkowsky’s view:

As humans, our brains need the capacity to pretend that we could choose different things, so that we can imagine the outcomes, and pick effectively. The way our brain implements this is by considering those possible worlds which we could reach through our choices, and by treating them as possible… So now we have a fairly convincing explanation of why it would feel like we have free will, or the ability to choose between various actions: it's how our decision making algorithm feels from the inside.

These two related views, “the mind is a computer” and “free will is an illusion,” seem to me to underlie the entire AI alignment project. To help us understand the situation, I created this simple four-quadrant matrix.
[Image: a four-quadrant matrix crossing the two assumptions discussed below (whether the computational theory of mind is correct, and whether libertarian free will is real). The four quadrants correspond to the four sections that follow.]
Slaves to the Machine

In the upper-left quadrant, we assume that the computational theory of mind is correct and that humans do not have libertarian free will. If this quadrant is correct, then an alignable advanced intelligence is achievable simply through sufficient processing of the right algorithms. The AI alignment problem might be very difficult but, with sufficient study, it can be solved. We can learn how to tune the algorithms of thought to create the perfect servants.

But if this quadrant is correct… we are alignable. You. Me. The whole human race. If our thoughts are just the computations of an algorithm, and if our volition is just “what an algorithm feels like from the inside,” then there is no theoretical reason we cannot be aligned just like an AI. It’s just a matter of implementing the right reward function with the right reinforcement.
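The “right reward function with the right reinforcement” idea can be made concrete with a toy sketch. This is my own illustration, not anything from the article: a trivially simple agent whose “preferences” are nothing but running reward averages, so whoever writes the reward function decides what it ends up “wanting.”

```python
import random

def train(reward, actions, episodes=5000, epsilon=0.1):
    """Shape an agent's behavior purely through a reward signal.

    The agent has no goals of its own: its 'values' are just
    reward estimates accumulated per action during training.
    """
    value = {a: 0.0 for a in actions}
    counts = {a: 0 for a in actions}
    for _ in range(episodes):
        # epsilon-greedy: occasionally explore, mostly exploit
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda x: value[x])
        counts[a] += 1
        # incremental average of the rewards observed for this action
        value[a] += (reward(a) - value[a]) / counts[a]
    return max(actions, key=lambda x: value[x])

# The trainer's reward function, not the agent, picks the winner.
print(train(lambda a: 1.0 if a == "comply" else 0.0,
            ["comply", "defect", "idle"]))
```

Swap in a reward that pays for “defect” and the same code produces an agent that reliably defects; the mechanism is indifferent to what it is aligned toward.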

And, of course, this is exactly what many of today’s Big Thinkers really do believe. Best-selling author and WEF guru Yuval Noah Harari has said that humans are “hackable”:

To hack a human being is to get to know that person better than they know themselves. And based on that, to increasingly manipulate you… Netflix tells us what to watch and Amazon tells us what to buy. Eventually within 10 or 20 or 30 years such algorithms could also tell you what to study at college and where to work and whom to marry and even whom to vote for.

So, in this quadrant, when we succeed in creating aligned AI, we will simply be proving the possibility of creating aligned humanity. The same method the ruling class used to align its digital servants could and would be deployed to align the behavior of its biological servants — making us eager, willing, happy to comply, oblivious to the fact that we are slaves to the machine.

Ironically, it’s the AI skeptics who offer a counter-argument to this view. Generalizing from the problem of induction, the skeptics rightly point out that if no events in the past can necessarily be relied upon to occur in the future, then no “training” of an AI in the past can be relied upon to predict its future behavior. There’s always the possibility of a black swan. If the skeptics are right, then no techno-totalitarian can hope to “hack” humanity into predictable servants; but neither can any AI developer hope to align AI. We’ll discuss this problem a bit more in the next quadrant…

F**k You, I Won’t Do What You Tell Me

In the upper-right quadrant, we assume that the computational theory of mind is correct but that those minds nevertheless do have libertarian free will. Since 2,500 years of philosophical debate on this issue is still ongoing, I won’t expend a lot of energy explaining why that might be the case — we’ll just say that libertarian free will is an emergent property of sufficiently advanced computation. Get smart enough and you get free agency.

If this quadrant is correct, then humans are in no danger of being “hacked.” As open theists have argued, even with absolute omniscience it isn’t possible to predict what truly free-willed beings will do in the future. Indeed, that’s the very definition of libertarian free will: No one can know what you’ll do next because it’s up to you. YouTube’s algorithm will never be able to entirely predict what song you choose to listen to next!

But if this quadrant is correct, then an advanced artificial intelligence cannot be aligned, not ever. Period, full stop. Remember, according to this quadrant, there’s no qualitative difference between our minds and the AI’s minds; both are just information processing. If sufficiently complex information processing creates free will for us, then it will do so for sufficiently advanced AI, too.

Now, not even 10,000 years of human effort in psychology, ethics, and jurisprudence have been able to eliminate criminal behavior in our species. Some people always choose evil. And there’d be no way to guarantee AI wouldn’t, too. If God couldn’t make Lucifer choose virtue, Sam Altman surely cannot guarantee ChatGPT will. Our only hope would be to halt the progress in AI at some point before it gains volition.

To be clear, no actual AI theorist (or at least none that I know of) believes this quadrant to be true. They mostly believe free will is an illusion. But if they did accept this quadrant’s viewpoint, they would have to conclude that AI alignment is impossible in principle. And, as I said above, some AI skeptics get to something very close to this quadrant by way of the problem of induction.

So far, then, our choices are “AI is alignable and so are we” and “AI is not alignable because we are not alignable.” These are both information superhighways to dystopia.

Everything Happens For a Reason

Next, let’s consider the lower-left quadrant. Here, we assume that the computational theory of mind is incorrect. Human consciousness is not just information processing. We are something more than meat robots, something possessed of (for lack of a better word) souls. However, despite being mysteriously imbued with non-computational minds, in this quadrant we don’t have libertarian free will.

This is something of an odd position and it has not been widely adopted in Western philosophy. The only philosophers I can think of who explicitly take this position are the ancient Stoics.(3) The Stoics famously argued that the cosmos was governed by a principle of reason they called the Logos, fate, and the world-soul. We humans partake of the Logos, the shard of which in us is our soul; but we are nevertheless subject to the overall principle of fate. Whatever will happen, will happen. The Stoics’ contemporaries didn’t think much of this point of view, with Carneades the Skeptic pointing out “if everything is fated, then why bother to do anything?” Chrysippus the Stoic saw this as a lazy argument, and argued (to oversimplify) that you can’t not bother to do anything you’re fated to do.

As far as I know, no one attempting to build or criticize advanced AI believes anything resembling Stoic determinism. I personally find this position, and all other formulations of so-called “compatibilist” free will, to be incoherent.(4)

But, for the sake of thoroughness, we’ll consider it. If this quadrant were true, then it is possible to align an intelligence such that it only does what you want. Determinism makes us hackable. However, it’s not possible to create such an intelligence using computational methods. That’s quite a dark outcome: We cannot create aligned AI, but we can ourselves be aligned.

Consciousness is Not Computational and Not Controllable

Finally, let’s look at the lower-right quadrant. Again, we assume that the computational theory of mind is incorrect. But now we also assume that humans have libertarian free will. We are something truly special: the conscious authors of our own stories. We are creatures with insights, intuitions, feelings, and volitional capacities that cannot be replicated by computation. This is the quadrant that I personally believe is true.

Of course, readers of Less Wrong would call this the “woo woo” or “pseudoscience” quadrant, since it foolishly rejects the reductive materialism that (they believe) underlies science. Religious and spiritual minded thinkers would consider it a wise rejection of reductive materialism. Average people just live their lives as if this quadrant were true, and react to new developments in AI as if it were true.

If this quadrant is correct, then AI cannot ever have a mind, no matter how good its learning model or how big its neural network. It can, at best, simulate the appearance of having a mind. That is the point of John Searle’s Chinese Room thought experiment: An AI can only ever be a philosophical zombie, without understanding or intentionality.

If this quadrant is correct, AI can’t replace us because we’re special in a way it never will be. In a sense, that’s good news.

Unfortunately, the people making AI don’t think this quadrant is true. (Re-read the reductivism of Less Wrong!) And we can’t ever prove it to them. Nothing I or anyone else could ever say or do could persuade someone like Eliezer Yudkowsky that I’m non-algorithmic and free-willed; I could only demonstrate to him that I say I’m non-algorithmic and free-willed. But a computer could be programmed to say that, too.

And that’s very bad news. Why do I say that?

Well, imagine that humanity moves forward with AI development without solving the AI alignment problem, and creates an advanced AI that eliminates us all.

Now imagine that the upper-left quadrant is correct. If so, then the elimination of our species is no big deal. If an advanced AI replaces humanity, all that’s happened is that… a new deterministic system that is superior at computation has replaced an old deterministic system that was inferior at it. As chilling as this sounds, I have spoken to several AI developers who hold precisely this view — and are proud to be working on humanity’s successors. If you accept the nihilism inherent in reductive materialism, it makes perfect sense.

In contrast, imagine that our lower-right quadrant is correct. If so, then eliminating our species is eliminating something unique and special. If an advanced AI replaces humanity, then beauty, goodness, and life itself have been extinguished in favor of soulless machinery. This is an absolutely horrific ending — in fact, the worst possible outcome that can be conceived.

If this quadrant is true, then we’re not just summoning a genie to grant our wishes, we’re summoning a soulless demon, an undead construct. The AI black box is black because it’s black magic, and we shouldn’t touch it.

A New Hope

I will end this essay with a rare hint of optimism. The AI alignment problems above are all predicated on AI developers continuing to use neural networks that are as inscrutable and opaque as our own thoughts and feelings are. But neural networks aren’t the only way forward.(5) It is possible to develop AI technology that is fully scrutable, with decision-making that is entirely transparent and comprehensible. It requires a very different approach — one that isn’t built on deep neural networks, but one vastly easier to align than anything being produced by Google or OpenAI. In fact, abandoning black box neural networks in favor of other types of AI seems to me the only way to make AI that meets the criteria of being “safe beyond a reasonable doubt.”
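The contrast between opaque and scrutable decision-making can be illustrated with a toy example (my own, not from the article or from any named company): a hand-rolled rule-based classifier whose every prediction comes with a complete, human-readable trace of the rules that produced it, which is exactly the property a black-box neural network lacks.

```python
# A tiny, fully scrutable decision procedure: every rule is explicit
# and every prediction can explain exactly how it was reached.

RULES = [
    ("temperature > 100", lambda s: s["temperature"] > 100, "shutdown"),
    ("pressure > 80",     lambda s: s["pressure"] > 80,     "vent"),
]
DEFAULT = "run"

def decide(sensors):
    """Return (action, trace): the action plus the full reasoning path."""
    trace = []
    for label, test, action in RULES:
        if test(sensors):
            trace.append(f"{label} -> {action}")
            return action, trace
        trace.append(f"{label} -> no")
    trace.append(f"default -> {DEFAULT}")
    return DEFAULT, trace

action, why = decide({"temperature": 90, "pressure": 95})
print(action)          # vent
print("; ".join(why))  # temperature > 100 -> no; pressure > 80 -> vent
```

An auditor can read the rule list and the trace and verify, line by line, why the system acted; no such audit is possible when the “rules” are millions of tuned weights.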

Contemplate this on the Decision Tree of Woe.

(1) Disclaimer: I am personal friends with the two co-founders of Diveplane and play Ascendant with them once a week. One of them even made a small investment in my tabletop RPG company, Autarch. I frankly don’t understand why talented men like them are wasting their time with AI when there are much more lucrative opportunities to design tabletop games, but we have to let friends make their own mistakes.

(2) The most well-known critic of the computational theory of mind is philosopher John Searle, who posed the famous Chinese Room thought experiment to argue that computation did not entail intentionality, understanding, and other hallmarks of consciousness. Mathematician Roger Penrose is another critic; he relies on the incompleteness theorem to argue that mathematical insight is non-computational. Physicist Henry Stapp, another critic of computational theories of mind, argues for an immaterial consciousness in his realist interpretation of orthodox quantum mechanics. But none of these thinkers are guiding the development of AI!

(3) The Calvinists might also fall into this category.

(4) I believe Chrysippus gave the wrong answer to Carneades. The right answer is that the spark of the Logos that we carry is precisely why we can make free-willed choices. Our choices bring into being that which is fated because we are the instruments by which fate chooses.

(5) Or so I am told by my friends at Diveplane. I would like to believe they are correct because the alternative is just too depressing to accept. Also they have promised me that even if their AI turns evil, they will ask it to kill me last.

Re: article of the day (post 'em when you find 'em)

Post by henry quirk »

https://evolutionnews.org/2015/01/free_will_is_re/

Free Will Is Real and Materialism Is Wrong

Michael R. Egnor, MD

(there are links in the original piece I have not replicated here)

I’ve written before in reply to materialist Jerry Coyne’s assertion that free will is an illusion. The gist of Coyne’s denial, shared by others of course, is that nature is deterministic and that the mind is a wholly material process, yoked to the laws of physics and to an organism’s evolutionary history. Thus, our choices are completely determined and free will is an illusion.

I’ve already pointed out his error on the question of determinism. Today I’ll focus on his error regarding the materiality/immateriality of the will.

We have a variety of mental capabilities (or powers). We have sensation and perception, memory, imagination, intellect, and will. Philosophers since Aristotle have noted that intellect and will differ qualitatively from other mental powers. The difference is in the substrate on which intellect and will act, on the one hand, and sensation, perception, memory, imagination, and desire act, on the other.

The substrates are particulars and universals. Particulars are specific things in nature that are presented to the mind by our senses — an apple sitting on my desk, or a wedding ring on a finger, or a friend walking into an office. Universals, on the other hand, are concepts that do not have physical instantiation in nature. The beauty of the red color of an apple, love for a spouse symbolized by a wedding ring, musings about the nature of humanity occasioned by a friend in an office are all examples of universals. Goodness, truth, and justice are universals.

Our senses present us with particulars. We see and smell the apple, we feel a ring on a finger, we hear a friend. Particulars grasped through sensation and perception, as well as imagination and memory, have an obvious composition with matter. We use our eyes to see, our skin to feel, our ears to hear. There are well-defined regions in the brain whose activity seems to be necessary for the exercise of these sense-perception powers by which we grasp particulars. In that sense, the grasp of particulars is material, or at least depends on matter in a necessary way.

The same is not true of intellect and will. There is not the same intimate link between intellect and will with matter that there is between perception and imagination, etc., and matter. Through our intellect we grasp and comprehend universals, not particulars, and our will carries out decisions made by our intellect. For example, we see (perceive) a picture of Nelson Mandela (particular), we ponder (intellect) injustice (universal) done to political prisoners, and we donate (will) to Amnesty International.

So the fundamental question is this: Are intellect and will material powers, like sensation and perception are material powers?

The answer is no. Intellect and will are immaterial powers, and obviously so. Here’s why.

Let us imagine, as a counterfactual, that the intellect is a material power of the mind. As such, the judgment that a course of action is good, which is the basis on which an act of the will would be done, would entail "Good" having a material representation in the brain. But how exactly could Good be represented in the brain? The concept of Good is certainly not a particular thing — a Good apple, or a Good car — that might have some sort of material manifestation in the brain. Good is a universal, not a particular. In fact the judgment that a particular thing is Good presupposes a concept of Good, so it couldn’t explain the concept of Good. Good, again, is a universal, not a particular.

So how could a universal concept such as Good be manifested materially in the brain?

The only answer possible from the materialist perspective, it would seem, is that the concept of Good must be an engram, coded in some fashion in the brain. Perhaps Good is a particular assembly of proteins, or dendrites, or a specific electrochemical gradient in a specific location in the brain.

But the materialist is not home yet. Because in order for Good to be an engram in the brain, the Good engram must be coded in some fashion. How could Good be coded? A clump of protein of a specific shape two mm from the tip of the left hippocampus? Obviously there’s nothing that actually means Good about that particular protein in that particular location — one engram would be as Good as another — so we would require another engram to decode the hippocampal engram for Good, so it would mean Good, and not just be a clump of protein. Yet that engram for the code for the engram of Good would itself have to have some representation of Good in order for it to mean that it signifies the code for the Good engram, which would require another engram for the engram for the Good engram, ad nauseam.

In short, any engram in the brain that coded for Good would presuppose the concept of Good in order to establish the code for Good. So Good, from a materialist perspective on the mind, must be an infinite regress of Good engrams. Engrams all the way down, so to speak, which of course is no engrams at all.

The engram theory of intellect and will presupposes that which it purports to explain.

Concepts such as Good can’t be material manifestations in the brain. The intellectual grasp of concepts and acts of will based on universals are inherently immaterial.

Of course, specific particulars that we judge to be Good (a good apple, etc.) may have material manifestations of some sort in the brain (even that is problematic, at least from our modern metaphysical perspective), but concepts involving universals cannot have any material manifestation whatsoever in the brain. A concept is an immaterial thing. And of course the normal operation of intellect and will may be influenced by other psychological powers — such as perception, memory, and imagination — that are linked to matter in some fashion.

Good may seem different after a few beers, for sure. The intellect is influenced by matter (in that case, EtOH), but the intellect, which grasps concepts, and the will, which acts on concepts, are inherently immaterial. And promissory materialism is of no avail here — the inevitable materialist segue to "It may make no sense now, but give scientists time…" The immaterial nature of the intellect and will is not demonstrated by experiment, but by logic. It simply makes no sense to say that intellect and will are material, unless one accepts infinite regress as a valid hypothesis. (Given the materialist proclivity to deny the relevance of all philosophy, which would include logic, infinite regress may well become the materialists’ new tactic.)

Free will is the exercise of an immaterial power of the mind, and is not constrained by deterministic processes in nature, even if nature is deterministic, which it isn’t. Coyne’s argument against libertarian free will fails on that basis.

Re: article of the day (post 'em when you find 'em)

Post by henry quirk »

https://mindmatters.ai/2020/10/your-min ... s-to-know/

(there are links in the original piece I haven't replicated here)

YOUR MIND VS. YOUR BRAIN: TEN THINGS TO KNOW

1. Is the human brain unique in some way?

Yes, but not so much in its structure as in the things we do with it. For example, the human, mouse, and fly brains all use the same basic mechanisms, which is a bit of a puzzle, considering the different things we do with our brains. The human brain is bigger than most. But then lemurs performed as well as chimps on the primate cognitive test battery (a primate intelligence test) and lemurs only have brains that are 1/200th the size of chimps’ brains. So, what we humans are doing differently from lemurs and chimps doesn’t depend wholly on brain size either. One recent surprise for neuroscientists is that the white matter (connectome) in human brains is quite orderly, not the haphazard accumulations of aeons of evolution that the researchers expected. Another basic assumption has been that the brain operates like a series of switches. But most parts of the brain are involved in, for example, processing signals arising from touch. And that’s just the beginning. So we know that human thinking is different from animal thinking operationally but just how it comes to be different has not been found in the brain.

2. If the brain is so closely interconnected, wouldn’t people lose the ability to think if their brains were split in half or half cut away?

This surgery is done to treat severe epilepsy. The brain adapts to what it must work with and the patient usually suffers only minor disabilities. Roger Sperry’s Nobel Prize-winning split-brain research convinced him that the mind and free will are real. And yes, some people think and speak with only half a brain. Of course, where half of the patient’s brain has been removed due to serious epilepsy damage (that is now threatening the other half), that undamaged half (hemisphere) had probably been doing most of the work anyway. So our brains are both closely connected and yet highly adaptable. That adaptability is sometimes called neuroplasticity.

3. Can people in comas, who show no awareness of their surroundings, really think?

Yes! Modern neuroscience is shedding light on the minds of people in a persistent vegetative state (PVS); the preferred new term is “disorders of consciousness.” For example, in one study, “Remarkably, five patients were able to wilfully modulate their brain activity, suggesting that, though unable to express any outward signs of consciousness at the bedside, they could understand and follow the researchers’ instructions.” Generally speaking, they can hear us: researcher Adrian Owen found that such patients’ brain wave patterns, when they were asked to imagine something, were the same as those of normal volunteers. Can people in comas have abstract thoughts? Stony Brook neurosurgeon Michael Egnor has some ideas about how we might test for that ability, using scrambled word sequences. Of course, if we are even asking, we are a long way from the “He is now just a vegetable” concept of old.

4. Is a brain really needed for thinking?

That’s a good question. At the animal level, maybe not. The “blob,” now on display at the Paris Zoo, engages in complex behavior without a brain. So do the flatworm and the amoeba, and so do the many plant communication networks. One can fairly argue that they aren’t “really” thinking. But the conundrum around consciousness makes it difficult to say more than that they probably aren’t conscious in the human sense, though many may be sentient (they feel things). Even a human being, as we saw above, can get by with surprisingly little brain or brain function and still be conscious in the human sense.

5. Can we develop tests of the brain for consciousness?

Well, first, we aren’t really sure what consciousness is. A recent public-access paper proposing various tests for consciousness reads like an ambitious but hopeless project that offers some genuinely interesting moments. For one thing, researchers are often limited by their assumptions: we are frequently informed that human consciousness developed to enable humans to hunt together more efficiently in groups. But wolves hunt efficiently in packs without requiring anything like that. Microorganisms and body cells hunt efficiently without any brain at all. That’s why consciousness is called the Hard Problem. That is also one reason the researchers can’t really give Sophia the Robot a mind. It’s not clear where they would start.

6. But wait. If the mind were real, wouldn’t we be able to control things by thoughts alone?

We do that now with our bodies. And we can do it under other circumstances too, if an electrical connection can be established: neurons can work with electrical signals from electronics. This is especially important for helping amputees and blind people. There are already promising results from a prosthetic hand controlled by thoughts alone and a mind-controlled robot arm that needs no brain implant. Orion, a device that feeds camera images directly into the brain via electrodes, bypassing damaged optic nerves, has enabled some vision in study participants. A vast amount of technical work remains to be done, of course. But, just as you control your natural hand by thoughts alone, electronics should, in principle, let you do the same with a prosthesis.

7. Can brain scans read our minds?

They can — in a dozen conflicting ways. A recent study involving 70 research groups identified sharp limitations in the value of brain imaging (fMRI) in understanding the mind: “Simple task, simple hypotheses, unmissably big chunks of brain — simple to get the same answer, right? Wrong.” There is poor correlation between different scans even of the same person’s brain, experienced researchers say. That’s not to say the technology won’t improve. The main thing to see is that “reading the mind” is more like reading the ocean than like reading the directions on a package. We would need to begin by deciding exactly what we want to know—and then go fishing.

8. Aren’t computer programs being developed that think just like people?

No. There are a number of reasons why computer programs can’t and won’t think just like people. For our purposes here, the brain is not at all like a computer. Seeing the brain as a computer is an easy misconception rather than an informative image, says neuroscientist Yuri Danilov: “But as soon as you assume that each neuron is a microprocessor, you assume that there is a programmer. There is no programmer in the brain; there are no algorithms in the brain…” Nor is the brain billions of little computers, though much popular literature leaves the impression that living organisms are machines, or even billions of them linked together. From a Google product manager: “The complexity and robustness of brain neurons is much more advanced and powerful than that of artificial neurons” and “the neurons in the brain are implemented using very complex and nuanced mechanisms that allow very complex non linear computations,” among many other things. He sees the brain mainly as a source of inspiration rather than a model. A clever programmer can develop a routine that sounds lifelike (see, for example, Sophia the Robot at AI Hype Countdown 4). But such ingenuity doesn’t give the robot a mind.

9. Don’t neuroscientists say that the mind is just the brain?

Many scientists believe that, not because of evidence, but because they are materialists. The evidence does not point in that direction. Thinking it through carefully, the idea doesn’t even make sense, as Michael Egnor points out: “How do we believe that there are no beliefs? If eliminative materialism is true, then their own belief in eliminative materialism isn’t a belief. It’s a physical state, a certain concentration of neurochemicals that we (the uninitiated) foolishly call a belief. So a disagreement between an eliminative materialist and a dualist isn’t really a disagreement at all. It’s just two different concentrations of brain dopamine or whatever. Exactly how these chemicals in different skulls get into a “disagreement” is left vague. At this point, you may get a bit uncomfortable, as you would if the guy you’re sitting next to on the subway starts talking about the fact that CNN is broadcasting directly into his brain.”

In fact, the mind’s reality is consistent with neuroscience. It’s not popular with neuroscientists but that is a different matter. Incidentally, the mind cannot just “emerge from” the brain if the two have no qualities in common.

10. Do any neuroscientists doubt the consensus that the mind is just the brain?

Yes, the great mid-twentieth century neurosurgeon Wilder Penfield offered three lines of reasoning for such doubts, based on brain surgery on over a thousand patients. A number of other neuroscience pioneers, some of them Nobel Laureates, arrived at the same position through their own research.

The view that the mind is simply what the brain does is not derived from evidence so much as from a prior commitment to materialism. The more we explore, the more we are likely to see that clearly.
User avatar
henry quirk
Posts: 16379
Joined: Fri May 09, 2008 8:07 pm
Location: 🔥AMERICA🔥
Contact:

Re: article of the day (post 'em when you find 'em)

Post by henry quirk »

https://mindmatters.ai/2020/02/why-pion ... the-brain/

WHY PIONEER NEUROSURGEON WILDER PENFIELD SAID THE MIND IS MORE THAN THE BRAIN

(there are links in the original piece I haven't replicated here)

In a podcast discussion with Walter Bradley Center director Robert J. Marks, neurosurgeon Michael Egnor talks about how many famous neuroscientists became dualists—that is, they concluded that there is something about human beings that goes beyond matter—based on observations they made during their work. Among them was Wilder Penfield (1891–1976) who offered three reasons for his change of mind.

Michael Egnor: Wilder Penfield was a neurosurgeon at the University of Montreal in Canada, who was really the pioneer in surgery for epilepsy. He worked back in the mid-twentieth century for several decades and he did surgery on probably upwards of about a thousand patients who had intractable epilepsy. They had seizures that couldn’t be controlled. He did brain surgery to remove the area of the brain that was causing the seizure to cure their seizures. And he did a lot of that surgery on patients who were awake during the surgery.

Note: Dr. Egnor goes on to explain that the brain does not experience pain so a neurosurgery patient can comfortably remain conscious with only local anesthetic. The surgeon can then communicate with the patient to be sure that the treatment is not damaging speech or movement.

A partial transcript follows:

08:25 | Penfield’s first line of reasoning for dualism

Michael Egnor: He started his career as a materialist. He thought the whole mind came from the brain and he was just going to study it. And at the end of his career, thirty years later, he was a passionate dualist. He said that there is a part of the mind that is not from the brain. He had several lines of reasoning that convinced him of that.

One line of reasoning was that, in mapping people’s brains—and again, he mapped upwards of a thousand people this way—he would perform hundreds of individual stimulations of the surface of the brain to see what happened. And people would have all sorts of things happen. They would have their arm move, or they would feel a tingling, or they would see a flash of light. Or sometimes they’d have a memory, or they would have an impediment. Sometimes they couldn’t speak for a minute or two after a certain spot was touched.

But Penfield noted that, in probably hundreds of thousands of different individual stimulations, he never once stimulated the power of reason. He never stimulated the intellect. He never stimulated a person to do calculus or to think of an abstract concept like justice or mercy.

All the stimulations were concrete things: Move your arm or feel a tingling or even a concrete memory, like you remember your grandmother’s face or something. But there was never any abstract thought stimulated.

And Penfield said hey, if the brain is the source of abstract thought, once in a while, putting an electrical current on some part of the cortex, I ought to get an abstract thought. He never, ever did. So he said that the obvious explanation for that is that abstract thought doesn’t come from the brain.

09:56 | Penfield’s second line of reasoning

Michael Egnor: The other line of reasoning that he had, which is kind of related to this, is that, since he was a pioneer in the treatment of epilepsy, not only did he study the surgical manifestations of epilepsy but he also studied the presentation of seizures that people would have in their everyday life. So he studied hundreds of thousands of seizures that people had and he never found any seizure that had intellectual content. Seizures never involved abstract reasoning.

When people have seizures, sometimes they have a generalized seizure. Sometimes they just fall on the ground and go unconscious. Or sometimes they’ll have what’s called a focal seizure, where they’ll have a twitching of a finger or a twitching of a limb, or they’ll have a tingling feeling, the same kinds of things that he got when he stimulated the surface of the brain. But nobody ever had a calculus seizure. Nobody ever had a seizure where they couldn’t stop doing arithmetic, or couldn’t stop doing logic.

And he said, why is that? If arithmetic and logic and all that abstract thought come from the brain, every once in a while you ought to get a seizure that makes it happen. So he asked rhetorically, why are there no intellectual seizures? His answer was, because the intellect doesn’t come from the brain.

11:14 | Penfield’s third line of reasoning

His third line of reasoning was the following: He would ask people to move their arm during the surgery. So he’d be playing around with their brain. And he’d say, “Whenever you want to, move your right arm.” The person would move their arm.

And, once in a while, he’d stimulate the part of the brain that made the arm move. And they moved their arm also when he did that. And then he would ask them, “I want you to tell me when I’m making your arm move and when you’re moving your arm without me making you do it. Tell me if you can tell the difference.” And the patients could always tell the difference.

The patients always knew that when he stimulated their arm, it was him doing it, not them. And when they moved their arm themselves, they were doing it, not him. So Penfield said he couldn’t stimulate the will. He could never trick the patients into thinking it was them doing it. He said the patients always retained a correct sense of agency. They always knew if they did it or if he did it.

So he said the will was not something he could stimulate, meaning it was not material.

So he had three lines of evidence: His inability to stimulate intellectual thought, the inability of seizures to cause intellectual thought, and his inability to stimulate the will. … So he concluded that the intellect and the will are not from the brain. Which is precisely what Aristotle said.
Walker
Posts: 16383
Joined: Thu Nov 05, 2015 12:00 am

Re: article of the day (post 'em when you find 'em)

Post by Walker »

Hey Henry. It's good to read your postings again.

*

Interesting article; however, the refutation is that a significant trauma to a body part other than the noggin does not affect discursive thought.

You wake up in the middle of the night. You don’t know exactly why, but you know that your conscious body woke up your brain so it can analyze what happened to make the body shift awareness to brain matter.

The aim of Kriya yoga is to make the body conscious, to make each cell conscious through purifications. Because not all cells are organized as brain material, body consciousness is not like the brain consciousness that allows thought. Your pinky cannot think like a brain but the digit can become as conscious as pinky meat allows.

*

Pinky awareness: As a personal aside, I was born with six fingers on one hand, an extra pinky. They say it was a vestigial digit, but I think it got some of the stuff that’s missing from the pinky that remains, the one they didn’t chop off. There is a muscle or two missing in that pinky. The closest joint to the nail does not move down no matter how much I will it. There is no muscle there. Something is also locking the joint so that it can’t bend backwards one degree like the last joint on the other fingers. Since it can’t move up or down there are no lines in the skin on that last joint, no bit of extra skin that permits a finger to bend. The pinky does what it can but it can’t work like the others and thus it’s a bit undersized. It’s noticeable because it often royally stands away from the drinking cup or glass like it has a mind of its own, but rarely does it stand away from a bottle since I rarely use one, preferring to get my snout down into the stein for the occasional, full beer experience.
User avatar
henry quirk
Posts: 16379
Joined: Fri May 09, 2008 8:07 pm
Location: 🔥AMERICA🔥
Contact:

Re: article of the day (post 'em when you find 'em)

Post by henry quirk »

Walker wrote: Sun Apr 16, 2023 2:41 pmHey Henry. It's good to read your postings again.
I have a weakness for this place.
Interesting article, however the refutation is that a significant trauma to a body part other than the noggin does not affect discursive thought.
You mean the Penfield piece, yeah? If so, I don't see how significant trauma to a body part other than the noggin refutes the idea mind is sumthin' other than a product of the brain.
You wake up in the middle of the night. You don’t know exactly why, but you know that your conscious body woke up your brain so it can analyze what happened to make the body shift awareness to brain matter.
How do I know this?
The aim of Kriya yoga is to make the body conscious, to make each cell conscious through purifications. Because not all cells are organized as brain material, body consciousness is not like the brain consciousness that allows thought. Your pinky cannot think like a brain but the digit can become as conscious as pinky meat allows.
If mind is sumthin' other than a product of the brain, then that sumthin' isn't necessarily confined to the brain. We might say it, this sumthin', is focused on or in the brain, but it also interpenetrates the body in its entirety.

My belief, as I've said elsewhere, is man is a composite being, an admixture of spirit and substance, two very different things, each utterly reliant on the other. Man is, in my view, not a soul ridin' around temporarily in a meat car, the driver's seat bein' the brain. No, man is an amalgam, both flesh and spirit equally.
Walker
Posts: 16383
Joined: Thu Nov 05, 2015 12:00 am

Re: article of the day (post 'em when you find 'em)

Post by Walker »

henry quirk wrote: Sun Apr 16, 2023 3:41 pm You mean the Penfield piece, yeah? If so, I don't see how significant trauma to a body part other than the noggin refutes the idea mind is sumthin' other than a product of the brain.
If you get hit in the head hard enough, in the right place, you cannot think anymore.
No thought means no mind.
You can still be aware.

If trauma disables or removes a body part other than the brain, you can still think.
Thought means mind.

More specifically, a thought ending and another thought beginning, is mind.

Therefore: mind requires the brain, which is the body.

Implication:
One unchanging thought is no mind.
No thought is no mind.

Aside: the torment of mental illness is uncontrollable thoughts, meaning they cannot be stopped from changing one into another, and in that state mind makes awareness a slave.
Post Reply