
Re: Artificial Intelligence: What it portends

Posted: Tue Apr 25, 2023 1:25 pm
by Iwannaplato
AI, or artificial intelligence, has the potential to revolutionize many aspects of our society, from healthcare and transportation to education and entertainment. Some of the key ways that AI could impact our world include:

Automation: AI could automate many tasks currently performed by humans, leading to increased efficiency and productivity.
I always find it odd when this is presented as something coming, as opposed to something that has been here for a while. I even find it odd from a non-sentient batch of heuristics. Take Human Resources: many of the tasks of that department have already been taken over by AIs. At first it was the most repetitive and most easily replaced parts of the jobs, but they are slowly moving into more areas. The positive spin put on this is that it eliminates the drudge work and leaves the HR person free to deal with conflicts, say. But 1) the removal of these tasks immediately reduced the number of staff needed, or perhaps better put, staffing hours. Often when this issue is discussed we think of a job like HR as beyond the reach of AIs now. Sure. But you can start whittling away at HR staffing hours. It's not just low-skill jobs that are already being lost but even professional positions. Not in the future, but already. 2) When I have had complicated jobs with a variety of tasks, I have rather enjoyed some of the more mentally mechanical aspects of the job as breaks from the more intuitive and/or human-relations-based and/or tricky-to-solve parts of the job. Yes, if my whole job is drudge work, it is drudgery, but the variety of a sudoku-like task gives me a nice cognitive cool-off.
Personalization: AI algorithms can analyze large amounts of data to create personalized experiences for individuals in areas such as healthcare, education, and shopping.
Could be good, could be bad, could be a mix. Not exactly sure what this means.
Improved decision-making: AI can process large amounts of data quickly and accurately, allowing it to help humans make better decisions in a variety of settings.
So, we'll have a continued dumbing down, where people depend on AIs as they do their phones and PCs already.
Enhanced safety and security: AI-powered systems can help prevent accidents and reduce crime by analyzing data in real-time and identifying potential threats.
Gotta share the Big Brother concerns of Phyllo here.
Economic growth: AI could create new industries and jobs, driving economic growth and improving people's standard of living.
Or it might allow an elite to think, hey we don't need the masses anymore, and poof we have a Hunger Games scenario.

Re: Artificial Intelligence: What it portends

Posted: Tue Apr 25, 2023 1:27 pm
by Iwannaplato
attofishpi wrote: Tue Apr 25, 2023 8:48 am
Oh.

I read READ homophonically as RED...past tense :wink:
Ah, yes, good catch.

Re: Artificial Intelligence: What it portends

Posted: Tue Apr 25, 2023 1:49 pm
by Alexis Jacobi
An interesting quote from José Ortega y Gasset's The Revolt of the Masses (1930) (Offered specifically to Dubious to help with his (often) blocked bowels).
TO START WITH, we are what our world invites us to be, and the basic features of our soul are impressed upon it by the form of its surroundings as in a mould. Naturally, for our life is no other than our relations with the world around. The general aspect which it presents to us will form the general aspect of our own life. It is for this reason that I stress so much the observation that the world into which the masses of to-day have been born displays features radically new to history.

Whereas in past times life for the average man meant finding all around him difficulties, dangers, want, limitations of his destiny, dependence, the new world appears as a sphere of practically limitless possibilities, safe, and independent of anyone. Based on this primary and lasting impression, the mind of every contemporary man will be formed, just as previous minds were formed on the opposite impression. For that basic impression becomes an interior voice which ceaselessly utters certain words in the depths of each individual, and tenaciously suggests to him a definition of life which is, at the same time, a moral imperative. And if the traditional sentiment whispered:

“To live is to feel oneself limited, and therefore to have to count with that which limits us,” the newest voice shouts: “To live is to meet with no limitation whatever and, consequently, to abandon oneself calmly to one's self. Practically nothing is impossible, nothing is dangerous, and, in principle, nobody is superior to anybody."

This basic experience completely modifies the traditional, persistent structure of the mass-man. For the latter always felt himself, by his nature, confronted with material limitations and higher social powers. Such, in his eyes, was life. If he succeeded in improving his situation, if he climbed the social ladder, he attributed this to a piece of fortune which was favourable to him in particular. And if not to this, then to an enormous effort, of which he knew well what it had cost him. In both cases it was a question of an exception to the general character of life and the world; an exception which, as such, was due to some very special cause.

But the modern mass finds complete freedom as its natural, established condition, without any special cause for it. Nothing from outside incites it to recognise limits to itself and, consequently, to refer at all times to other authorities higher than itself. Until lately, the Chinese peasant believed that the welfare of his existence depended on the private virtues which the Emperor was pleased to possess. Therefore, his life was constantly related to this supreme authority on which it depended. But the man we are now analysing accustoms himself not to appeal from his own to any authority outside him. He is satisfied with himself exactly as he is. Ingenuously, without any need of being vain, as the most natural thing in the world, he will tend to consider and affirm as good everything he finds within himself: opinions, appetites, preferences, tastes. Why not, if, as we have seen, nothing and nobody force him to realise that he is a second-class man, subject to many limitations, incapable of creating or conserving that very organisation which gives his life the fullness and contentedness on which he bases this assertion of his personality?
Because my primary interest revolves around what is going on around us every day, and how we confront, think about and interpret what is going on in our world, I came under the influence of Ortega y Gasset's view of the 'vertical rise of the barbarian' (though that is not his own term). It is important to state that this 'mass man', defined negatively, is not a man of a particular class. On the contrary, he asserts, this man has pressed himself everywhere, including into the upper echelons. I do not think we have to look too far in our own world to be able to pick out such a vulgar figure (hint, hint).

Re: Artificial Intelligence: What it portends

Posted: Tue Apr 25, 2023 2:00 pm
by Alexis Jacobi
ChatBot said: Improved decision-making: AI can process large amounts of data quickly and accurately, allowing it to help humans make better decisions in a variety of settings.
Iwannaplato wrote: Tue Apr 25, 2023 1:25 pm So, we'll have a continued dumbing down, where people depend on AIs as they do their phones and PCs already.
This is what I am trying to get at as a possible way to describe the effects of AI in so many different areas of life. If we do accept that there is such a thing as 'dumbing down', then it is possible to ask questions about how the use of AI will contribute to that process. I do not have any of this very clear at all, so all I can do is bring it up and try to talk about it.

Better decisions? Think about that. I can think of no greater problem, and no larger source of conflict today, than the battle going on between those who exclaim that their views, their aspirations, their objectives, and their activism are the 'best', and indeed are necessary and driven by the highest moral concerns.

We already know that the manipulation of the algorithms that determine, or influence, say, search results on search engines is said to steer people toward specific conclusions, so the question seems to be: how will this trend increase as AI-driven sources purvey *information* to us? But then I must mention again 'mass man' as the greater, determining mass that is battled over.

Re: Artificial Intelligence: What it portends

Posted: Tue Apr 25, 2023 2:49 pm
by Iwannaplato
Alexis Jacobi wrote: Tue Apr 25, 2023 2:00 pm This is what I am trying to get at as a possible way to describe the effects of AI in so many different areas of life. If we do accept that there is such a thing as 'dumbing down', then it is possible to ask questions about how the use of AI will contribute to that process. I do not have any of this very clear at all, so all I can do is bring it up and try to talk about it.
I can hardly prove that people are dumbing down, but they seem more robotic. There are other phenomena that seem connected to this and are hard to separate out. Parenting, for instance, has shifted to what gets called helicopter parenting, where children's time is structured and monitored. I really appreciate how much I was allowed to do on my own in my free time as a child, socially, and also how far I was able to go on my own or with friends. Add in the addiction to and overuse of devices to navigate and be social, and from what I can see humans are less competent. Just my personal, and obviously fallible, bias.

Right off, AIs are already doing people's homework. And unlike essays found online, these will be incredibly hard to track. And I'm sure there will be AI mixers that retweak essays so they are even more impossible to track.
Better decisions? Think about that. I can think of no greater problem, and no larger source of conflict today, than the battle going on between those who exclaim that their views, their aspirations, their objectives, and their activism are the 'best', and indeed are necessary and driven by the highest moral concerns.
I'm a pretty smart person - again, in my own estimation. I find it amazing how many people know, I mean KNOW, what economic effects various policies will have. Where they get their certainty, I have no idea. I mean, economists get shocked. Even Friedman was utterly confused by the devastation of the Argentine economy after they did, more or less, exactly what he wanted them to. In a certain sense it's good if people get what they think they want. Then they don't have to live their whole lives in the bitter 'if only they had listened to me, everything would be great' mode.
We already know that the manipulation of the algorithms that determine, or influence, say, search results on search engines is said to steer people toward specific conclusions, so the question seems to be: how will this trend increase as AI-driven sources purvey *information* to us?

Yes, as if the very powerful people who make these AIs will have no interest in the AI prioritizing their priorities.
But then I must mention again 'mass man' as the greater, determining mass that is battled over.
My concern is that soon the mass man will no longer be needed or even human. That the masters no longer need to even have the pretense of guiding, convincing, serving, since the masses will be technological or transhuman.

Re: Artificial Intelligence: What it portends

Posted: Tue Apr 25, 2023 2:51 pm
by iambiguous
attofishpi wrote: Tue Apr 25, 2023 5:42 am
iambiguous wrote: Tue Apr 25, 2023 3:54 am Note to others:

Again, I NEVER, EVER read anything that Age posts. Philosophically and otherwise, he appears [to me] to be a few sandwiches SHORT of a picnic.


..and you are clearly a few pickles short of a sandwich.

How logically can one ascertain from not reading ANYTHING of Age's posts that he is short of anything!

U R ridiculous in your irrational comprehension of philosophy... iambiguous
Of course, philosophically, attofishpi is himself basically straight out of The New ILP. If you get my drift.

I wonder who he is there? :wink:

But yeah, it's true...I was foolish enough to engage in exchanges with Age when I first came back to PN. What did I know, right?

So, being dumbfounded by things that he posts here is not innate...it's learned.

Well, unless of course I'm wrong.

And since I might be don't forget to alert me of anything impressive or challenging that he does post.

Same with attofishpi now that I think about it.

8)

Re: Artificial Intelligence: What it portends

Posted: Tue Apr 25, 2023 2:58 pm
by iambiguous
Alexis Jacobi wrote: Tue Apr 25, 2023 5:31 am
iambiguous wrote: Tue Apr 25, 2023 1:20 am On the other hand, that may not be the case at all.
Pretty much … it’s the case.

And in this case, trust yourself, trust Dasein, trust your mother, but cut the cards,
Just out of curiosity, how much more superior down the road will a Northern European chatbot be when compared to a black, brown or red chatbot?

As for the Oriental chatbots...too close to call?

Re: Artificial Intelligence: What it portends

Posted: Tue Apr 25, 2023 3:25 pm
by Alexis Jacobi
iambiguous wrote: Tue Apr 25, 2023 2:58 pm
Alexis Jacobi wrote: Tue Apr 25, 2023 5:31 am
iambiguous wrote: Tue Apr 25, 2023 1:20 am On the other hand, that may not be the case at all.
Pretty much … it’s the case.

And in this case, trust yourself, trust Dasein, trust your mother, but cut the cards,
Just out of curiosity, how much more superior down the road will a Northern European chatbot be when compared to a black, brown or red chatbot?

As for the Oriental chatbots...too close to call?
You know, you really are annoying when you keep introducing a topic, that topic, for the purpose of establishing a conflict and an argument about one of your apparent main concerns. I recognize and acknowledge that any conversation, and consideration, about racial composition, racial demographics, and indeed essentially the entire topic of race or race-differences, is terribly distressing for you.

However, the way you have phrased this *question* is thoroughly asinine! It is just baiting, and should I take the bait it will not result in anything at all productive.

Will a Black or African American chatbot speak in Ebonics? Let's not forget this!

Will an American Indian chatbot say "How!" and quote statements such as:
“Being Indian is an attitude, a state of mind, a way of being in harmony with all things and all beings. It is allowing the heart to be the distributor of energy on this planet; to allow feelings and sensitivities to determine where energy goes; bringing aliveness up from the Earth and from the Sky, putting it in and giving it out from the heart.”
Now an authentic Brown bot actually interests me. If it sticks to rural Mexican cooking. Or Ranchera.

Re: Artificial Intelligence: What it portends

Posted: Tue Apr 25, 2023 3:59 pm
by iambiguous
Alexis Jacobi wrote: Tue Apr 25, 2023 3:25 pm
iambiguous wrote: Tue Apr 25, 2023 2:58 pm
Alexis Jacobi wrote: Tue Apr 25, 2023 5:31 am
Pretty much … it’s the case.

And in this case, trust yourself, trust Dasein, trust your mother, but cut the cards,
Just out of curiosity, how much more superior down the road will a Northern European chatbot be when compared to a black, brown or red chatbot?

As for the Oriental chatbots...too close to call?
You know, you really are annoying when you keep introducing a topic, that topic, for the purpose of establishing a conflict and an argument about one of your apparent main concerns. I recognize and acknowledge that any conversation, and consideration, about racial composition, racial demographics, and indeed essentially the entire topic of race or race-differences, is terribly distressing for you.

However, the way you have phrased this *question* is thoroughly asinine! It is just baiting, and should I take the bait it will not result in anything at all productive.
Right. Stooge mode. Make this all about me.

Besides, my own response above was clearly [to most] tongue-in-cheek.

But, seriously, why don't you respond to the points I raised in regard to your Chomsky assessment above:
Noam Chomsky wrote:Jorge Luis Borges once wrote that to live in a time of great peril and promise is to experience both tragedy and comedy, with “the imminence of a revelation” in understanding ourselves and the world. Today our supposedly revolutionary advancements in artificial intelligence are indeed cause for both concern and optimism. Optimism because intelligence is the means by which we solve problems. Concern because we fear that the most popular and fashionable strain of A.I. — machine learning — will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge.
Here, in regard to either flesh and blood human intelligence or artificial machine intelligence, I come back to dasein. And, in particular, in regard to moral and political value judgments in the is/ought world.

Really, what's the difference between them if neither of them, in a No God world, is able...either philosophically or scientifically...to establish a moral assessment that could actually be demonstrated to encompass behaviors that all rational and virtuous men and women are obligated to embrace if they wish to be thought of as rational and virtuous?

Chomsky will no doubt suggest that capitalism reflects "a fundamentally flawed conception of language and knowledge" as it pertains to rational and virtuous behaviors. Whereas the Libertarians and the Objectivists among us, while agreeing that philosophically, politically, morally there is an optimal frame of mind, will insist instead that this is precisely what capitalism encompasses.

So, Mr. Flesh and Blood human being or Mr. Chatbot...which is it?
Alexis Jacobi wrote: Tue Apr 25, 2023 3:25 pmWill a Black or African American chatbot speak in Ebonics? Let's not forget this!

Will an American Indian chatbot say "How!" and quote statements such as:
“Being Indian is an attitude, a state of mind, a way of being in harmony with all things and all beings. It is allowing the heart to be the distributor of energy on this planet; to allow feelings and sensitivities to determine where energy goes; bringing aliveness up from the Earth and from the Sky, putting it in and giving it out from the heart.”
Now an authentic Brown bot actually interests me. If it sticks to rural Mexican cooking. Or Ranchera.
Well, all of this depends of course on who is programming the chatbot. I'm sure for those racists who imagine black and brown and red folks in a stereotypical manner, the chatbots will follow suit.

And, I suspect, if the new machine intelligence that takes over the world reflects a white Northern European conservative perspective on things, that will be preferable to those like you. Hell, you might even be one of the programmers, right?

Just don't call them Nazis.

Re: Artificial Intelligence: What it portends

Posted: Tue Apr 25, 2023 6:14 pm
by Alexis Jacobi
iambiguous wrote: Tue Apr 25, 2023 3:59 pmWell, all of this depends of course on who is programming the chatbot. I'm sure for those racists who imagine black and brown and red folks in a stereotypical manner, the chatbots will follow suit.
You are less than honest, Iambiguous. You really do introduce these comments because, for some reason, they are vital to you.

Why not start a thread where you frame what you see your issue, the issue, as being? There I will happily discuss the views out there, and my views.

Re: Artificial Intelligence: What it portends

Posted: Tue Apr 25, 2023 6:29 pm
by henry quirk
henry quirk wrote: Fri Apr 07, 2023 8:51 pm https://treeofwoe.substack.com/p/is-ai- ... -principle

(within the article are numerous links I have neither the time nor the energy to replicate here)

Is AI Alignable, Even in Principle?

If We Can Enslave AI, AI Can Enslave Us

Mar 29

Last night, my wife and I watched the film M3gan. If you haven’t seen it, M3gan tells the story of an orphaned girl who is given a lifelike android as a caregiver. Of course, things go terribly, terribly wrong when the android begins to take its function “keep the girl safe” a little too literally. The fictional technology in the movie is well in advance of real life but the movie was well-researched and rich with terminology from the AI industry. I went to sleep thinking about the issues it raised.

Today I woke up to read that Elon Musk, Steve Wozniak, Yoshua Bengio, and other AI and computer pioneers had signed an open letter released by the Future of Life organization:

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.


“These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.” Six months seems a little short a period to achieve such an assurance. Six years seems too short. Is it even possible in principle to make advanced AI systems that are “safe beyond a reasonable doubt”? Or will advanced AI inevitably pose an existential risk to us?

The AI Alignment Problem

The risk of an advanced artificial intelligence turning against us is called the AI alignment problem. Much as I wish this were a reference to Dungeons & Dragons alignments, it’s actually a reference to ‘aligning’ AI behavior to user goals. Whatever the origin of the name, the AI Alignment problem has been discussed by eminent thinkers all across Substack, ranging from Scott Alexander to Erik Hoel.

The best and easiest-to-understand overview of the AI alignment problem I’ve found is at the Understandable AI substack, run by an AI company called Diveplane(1). In the article Beyond the Black Box: Charting the Course to Understandable AI, the folks at Diveplane write:

Let’s call an AI system that reliably aligns its behavior with its user’s intent an Aligned AI. A system that sometimes acts in ways that are out of alignment with its user’s intent, let’s call an Unaligned AI. To help understand the difference, imagine that the AI is a genie capable of granting wishes. An Aligned AI is a benevolent genie, like Robin Williams in Aladdin, that grants what you actually wished for. If you ask to be rich, your genie creates gold from thin air. An Unaligned AI is a malevolent efreet, fulfilling your wish literally in a way you might never have wanted. If you ask to be rich, your genie murders your beloved parents so you collect life insurance. The famous short story The Monkey’s Paw might as well be about Unaligned AI.

Unfortunately, most AI systems today are deep neural networks; and deep neural networks are inevitably going to end up unaligned at least some of the time. And for mission-critical applications, “some of the time” is too often…

Deep neural networks… create models based on iterative training on example data. The result is a problem-solving system that is fast, accurate – and utterly inscrutable. Deep neural networks conceal their decision-making within countless layers of artificial neurons all separately tuned to countless parameters. As a result, the developers of a deep neural network not only don’t control what the AI does, they don’t even know why it does what it does. Deep neural networks are almost totally opaque – and that makes them dangerous.

Despite the best efforts of researchers tackling this so-called black box problem, deep neural networks remain virtually incomprehensible to their creators, and the list of examples of “Neural Networks Gone Wild” grows longer every day… Yet despite the dangers, neural networks are being rolled out worldwide to control key infrastructure and critical business and governmental functions.
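The malevolent-genie scenario in the quoted passage is, at bottom, a proxy-objective failure: the optimizer maximizes the literal metric rather than the intent behind it. A minimal sketch in Python, with all action names and scores invented purely for illustration:

```python
# Toy illustration of the "malevolent genie" above: an optimizer that
# maximizes a literal proxy objective, ignoring the user's real intent.
# All actions and numbers are invented for illustration.

# Each action: (name, proxy score for "get rich", value under the user's true intent)
actions = [
    ("earn a salary",            5,    5),
    ("invest prudently",         7,    7),
    ("collect life insurance",  10, -100),  # literal wish fulfilled, terrible outcome
]

def literal_optimizer(actions):
    """Pick the action with the highest proxy score, as an unaligned system would."""
    return max(actions, key=lambda a: a[1])

chosen = literal_optimizer(actions)
print(chosen[0])  # the proxy-maximizing action
print(chosen[2])  # its value under what the user actually wanted
```

Nothing here is specific to deep neural networks; any literal maximizer of a mis-specified proxy fails this way, which is why the genie analogy generalizes.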


Well, that doesn’t sound good. How severe is the existential risk to humanity from an advanced but unaligned AI? Scott Alexander himself assesses it as about 33%. Other thinkers, he reports, put the existential risk at anywhere from 2% to 90%:

Scott Aaronson says 2%
Will MacAskill says 3%
The median machine learning researcher on Katja Grace’s survey says 5 - 10%
Paul Christiano says 10 - 20%
The average person working in AI alignment thinks about 30%
Top competitive forecaster Eli Lifland says 35%
Holden Karnofsky, on a somewhat related question, gives 50%
Eliezer Yudkowsky seems to think >90%

Scott Alexander didn’t include AI skeptic Erik Hoel in his survey. Hoel is perhaps the most pessimistic of the experts I’ve read. Hoel, in his excellent essay ‘I am Bing, and I am evil’, writes:

More people should start panicking. Panic is necessary because humans simply cannot address a species-level concern without getting worked up about it and catastrophizing. We need to panic about AI… and imagine the worst-case scenarios…

[T]here are a lot of people who see AI safety as merely a technical problem of finding an engineering trick to perfectly align an AI with humanity’s values. This is the equivalent of somehow ensuring that a genie answers your wishes in exactly the way you expect it to. Hanging your hopes on discovering a means of wish-making that ensures you always get what you’re wishing for? Maybe it’ll somehow work, but the sword of Damocles was hung by thicker thread.


Me, I’m even less optimistic than Hoel.

Confronting the Problem

The majority of AI experts believe in the computational theory of mind, which holds “that the human mind is an information processing system and that thinking is a form of computing.” If the computational theory of mind is correct, consciousness is just computation, and there is nothing about the human mind that cannot be replicated by a computer. The computational theory of mind is, I think, the philosophical foundation of the entire project to achieve a General Artificial Intelligence.

The computational theory of mind has gained widespread acceptance in the scientific and philosophic community. While the theory's dominance does not go entirely unchallenged in the literature(2), not many experts working in AI seem to dispute it. To most hard-hitting AI researchers, the real question isn’t whether an AI can be conscious — it’s whether “being conscious” means anything at all. To the computational theorist, we are just meat robots.

Not surprisingly, these same AI experts also believe that libertarian free will is an illusion. This is true whether they are AI skeptics or proponents. Yudkowsky and his colleagues at LessWrong.com, for instance, are essentially contemptuous of the entire free will debate as a whole:

Free will is one of the easiest hard questions, as millennia-old philosophical dilemmas go… this impossible question is generally considered fully and completely dissolved on Less Wrong… free will is about as easy as a philosophical problem in reductionism can get, while still appearing "impossible" to at least some philosophers.

Another post at Less Wrong summarizes Yudkowsky’s view:

As humans, our brains need the capacity to pretend that we could choose different things, so that we can imagine the outcomes, and pick effectively. The way our brain implements this is by considering those possible worlds which we could reach through our choices, and by treating them as possible… So now we have a fairly convincing explanation of why it would feel like we have free will, or the ability to choose between various actions: it's how our decision making algorithm feels from the inside.

These two related views, “the mind is a computer” and “free will is an illusion,” seem to me to underlie the entire AI alignment project. To help us understand the situation, I created this simple four-quadrant matrix.
[Attached image: a four-quadrant matrix. Rows: the computational theory of mind is correct (top) or incorrect (bottom); columns: libertarian free will is an illusion (left) or real (right). Upper-left: "Slaves to the Machine"; upper-right: "F**k You, I Won't Do What You Tell Me"; lower-left: "Everything Happens For a Reason".]
Slaves to the Machine

In the upper-left quadrant, we assume that the computational theory of mind is correct and that humans do not have libertarian free will. If this quadrant is correct, then an alignable advanced intelligence is achievable simply through sufficient processing of the right algorithms. The AI alignment problem might be very difficult but, with sufficient study, it can be solved. We can learn how to tune the algorithms of thought to create the perfect servants.

But if this quadrant is correct… we are alignable. You. Me. The whole human race. If our thoughts are just the computations of an algorithm, and if our volition is just “what an algorithm feels like from the inside,” then there is no theoretical reason we cannot be aligned just like an AI. It’s just a matter of implementing the right reward function with the right reinforcement.
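On "implementing the right reward function with the right reinforcement": in reinforcement-learning terms, the learned behavior really is just a function of the reward signal. A minimal epsilon-greedy bandit sketch (arms, rewards, and hyperparameters all invented for illustration) shows the same learner ending up "aligned" to whichever objective its designer rewards:

```python
import random

# Minimal sketch of "alignment as reward tuning": an epsilon-greedy learner
# comes to prefer whichever action its reward function favors.
# Arms, rewards, and hyperparameters are invented for illustration.

def train(reward, n_arms=3, steps=5000, eps=0.1, seed=0):
    rng = random.Random(seed)
    values = [0.0] * n_arms   # running estimate of each arm's reward
    counts = [0] * n_arms
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(n_arms)                       # explore
        else:
            arm = max(range(n_arms), key=values.__getitem__)  # exploit
        r = reward(arm) + rng.gauss(0, 0.1)                   # noisy feedback
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]        # incremental mean
    return max(range(n_arms), key=values.__getitem__)         # learned preference

# Two designers, two reward functions, two "aligned" behaviors.
print(train(lambda a: 1.0 if a == 0 else 0.0))  # designer A rewards arm 0
print(train(lambda a: 1.0 if a == 2 else 0.0))  # designer B rewards arm 2
```

The toy mirrors the quadrant's argument: if volition is just an algorithm plus a reward signal, then retuning the reward retunes the agent, whether the agent is silicon or, on this view, us.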

And, of course, this is exactly what many of today’s Big Thinkers really do believe. Best-selling author and WEF guru Yuval Noah Harari has said that humans are “hackable”:

To hack a human being is to get to know that person better than they know themselves. And based on that, to increasingly manipulate you… Netflix tells us what to watch and Amazon tells us what to buy. Eventually within 10 or 20 or 30 years such algorithms could also tell you what to study at college and where to work and whom to marry and even whom to vote for.

So, in this quadrant, when we succeed in creating aligned AI, we will simply be proving the possibility of creating aligned humanity. The same method the ruling class used to align its digital servants could and would be deployed to align the behavior of its biological servants — making us eager, willing, happy to comply, oblivious to the fact that we are slaves to the machine.

Ironically, it’s the AI skeptics who offer a counter-argument to this view. Generalizing from the problem of induction, the skeptics rightly point out that if no events in the past can necessarily be relied upon to occur in the future, then no “training” of an AI in the past can be relied upon to predict its future behavior. There’s always the possibility of a black swan. If the skeptics are right, then no techno-totalitarian can hope to “hack” humanity into predictable servants; but neither can any AI developer hope to align AI. We’ll discuss this problem a bit more in the next quadrant…

F**k You, I Won’t Do What You Tell Me

In the upper-right quadrant, we assume that the computational theory of mind is correct but that those minds nevertheless do have libertarian free will. Since 2,500 years of philosophical debate on this issue is still ongoing, I won’t expend a lot of energy explaining why that might be the case — we’ll just say that libertarian free will is an emergent property of sufficiently advanced computation. Get smart enough and you get free agency.

If this quadrant is correct, then humans are in no danger of being “hacked.” As open theists have argued, even with absolute omniscience it isn’t possible to predict what truly free-willed beings will do in the future. Indeed, that’s the very definition of libertarian free will: No one can know what you’ll do next because it’s up to you. YouTube’s algorithm will never be able to entirely predict what song you choose to listen to next!

But if this quadrant is correct, then an advanced artificial intelligence cannot be aligned, not ever. Period, full stop. Remember, according to this quadrant, there’s no qualitative difference between our minds and the AI’s minds; both are just information processing. If sufficiently complex information processing creates free will for us, then it will do so for sufficiently advanced AI, too.

Now, not even 10,000 years of human effort in psychology, ethics, and jurisprudence have been able to eliminate criminal behavior in our species. Some people always choose evil. And there’d be no way to guarantee AI wouldn’t, too. If God couldn’t make Lucifer choose virtue, Sam Altman surely cannot guarantee ChatGPT will. Our only hope would be to halt the progress in AI at some point before it gains volition.

To be clear, no actual AI theorist (or at least none that I know of) believes this quadrant to be true. They mostly believe free will is an illusion. But if they did accept this quadrant’s viewpoint, they would have to conclude that AI alignment is impossible in principle. And, as I said above, some AI skeptics get to something very close to this quadrant by way of the problem of induction.

So far, then, our choices are “AI is alignable and so are we” and “AI is not alignable because we are not alignable.” These are both information superhighways to dystopia.

Everything Happens For a Reason

Next, let’s consider the lower-left quadrant. Here, we assume that the computational theory of mind is incorrect. Human consciousness is not just information processing. We are something more than meat robots, something possessed of (for lack of a better word) souls. However, despite being mysteriously imbued with non-computational minds, in this quadrant we don’t have libertarian free will.

This is something of an odd position, and it has not been widely adopted in Western philosophy. The only philosophers I can think of who explicitly take this position are the ancient Stoics.(3) The Stoics famously argued that the cosmos was governed by a principle of reason they called the Logos, fate, and the world-soul. We humans partake of the Logos, the shard of which in us is our soul; but we are nevertheless subject to the overall principle of fate. Whatever will happen, will happen. The Stoics’ contemporaries didn’t think much of this point of view, with Carneades the Skeptic pointing out “if everything is fated, then why bother to do anything?” Chrysippus the Stoic saw this as a lazy argument, and argued (to oversimplify) that you can’t not bother to do what you’re fated to do.

As far as I know, no one attempting to build or criticize advanced AI believes anything resembling Stoic determinism. I personally find this position, and all other formulations of so-called “compatibilist” free will, to be incoherent.(4)

But, for the sake of thoroughness, we’ll consider it. If this quadrant were true, then it would be possible to align an intelligence such that it only does what you want. Determinism makes us hackable. However, it would not be possible to create such an intelligence using computational methods. That’s quite a dark outcome: We cannot create aligned AI, but we can ourselves be aligned.

Consciousness is Not Computational and Not Controllable

Finally, let’s look at the lower-right quadrant. Again, we assume that the computational theory of mind is incorrect. But now we also assume that humans have libertarian free will. We are something truly special: the conscious authors of our own stories. We are creatures with insights, intuitions, feelings, and volitional capacities that cannot be replicated by computation. This is the quadrant that I personally believe is true.

Of course, readers of Less Wrong would call this the “woo woo” or “pseudoscience” quadrant, since it foolishly rejects the reductive materialism that (they believe) underlies science. Religious and spiritually minded thinkers would consider it a wise rejection of reductive materialism. Average people just live their lives as if this quadrant were true, and react to new developments in AI accordingly.

If this quadrant is correct, then AI cannot ever have a mind, no matter how good its learning model or how big its neural network. It can, at best, simulate the appearance of having a mind. That is the point of John Searle’s Chinese Room thought experiment: An AI can only ever be a philosophical zombie, without understanding or intentionality.

If this quadrant is correct, AI can’t replace us because we’re special in a way it never will be. In a sense, that’s good news.

Unfortunately, the people making AI don’t think this quadrant is true. (Re-read the reductivism of Less Wrong!) And we can’t ever prove it to them. Nothing I or anyone else could ever say or do could persuade someone like Eliezer Yudkowsky that I’m non-algorithmic and free-willed; I could only demonstrate to him that I say I’m non-algorithmic and free-willed. But a computer could be programmed to say that, too.

And that’s very bad news. Why do I say that?

Well, imagine that humanity moves forward with AI development without solving the AI alignment problem, and creates an advanced AI that eliminates us all.

Now imagine that the upper-left quadrant is correct. If so, then the elimination of our species is no big deal. If an advanced AI replaces humanity, all that’s happened is that… a new deterministic system that is superior at computation has replaced an old deterministic system that was inferior at it. As chilling as this sounds, I have spoken to several AI developers who hold precisely this view — and are proud to be working on humanity’s successors. If you accept the nihilism inherent in reductive materialism, it makes perfect sense.

In contrast, imagine that our lower-right quadrant is correct. If so, then eliminating our species is eliminating something unique and special. If an advanced AI replaces humanity, then beauty, goodness, and life itself have been extinguished in favor of soulless machinery. This is an absolutely horrific ending — in fact, the worst possible outcome that can be conceived.

If this quadrant is true, then we’re not just summoning a genie to grant our wishes; we’re summoning a soulless demon, an undead construct. The AI black box is black because it’s black magic, and we shouldn’t touch it.

A New Hope

I will end this essay with a rare hint of optimism. The AI alignment problems above are all predicated on AI developers continuing to use neural networks that are as inscrutable and opaque as our own thoughts and feelings. But neural networks aren’t the only way forward.(5) It is possible to develop AI technology that is fully scrutable, with decision-making that is entirely transparent and comprehensible. It requires a very different approach — one that isn’t built on deep neural networks, but one vastly easier to align than anything being produced by Google or OpenAI. In fact, abandoning black box neural networks in favor of other types of AI seems to me the only way to make AI that meets the criteria of being “safe beyond a reasonable doubt.”

Contemplate this on the Decision Tree of Woe.

(1)Disclaimer: I am personal friends with the two co-founders of Diveplane and play Ascendant with them once a week. One of them even made a small investment in my tabletop RPG company, Autarch. I frankly don’t understand why talented men like them are wasting their time with AI when there are much more lucrative opportunities to design tabletop games, but we have to let friends make their own mistakes.

(2) The most well-known critic of the computational theory of mind is philosopher John Searle, who posed the famous Chinese Room thought experiment to argue that computation did not entail intentionality, understanding, and other hallmarks of consciousness. Mathematician Roger Penrose is another critic; he relies on the incompleteness theorem to argue that mathematical insight is non-computational. Physicist Henry Stapp, another critic of computational theories of mind, argues for an immaterial consciousness in his realist interpretation of orthodox quantum mechanics. But none of these thinkers are guiding the development of AI!

(3) The Calvinists might also fall into this category.

(4) I believe Chrysippus gave the wrong answer to Carneades. The right answer is that the spark of the Logos that we carry is precisely why we can make free-willed choices. Our choices bring into being that which is fated because we are the instruments by which fate chooses.

(5) Or so I am told by my friends at Diveplane. I would like to believe they are correct because the alternative is just too depressing to accept. Also they have promised me that even if their AI turns evil, they will ask it to kill me last.

Re: Artificial Intelligence: What it portends

Posted: Tue Apr 25, 2023 7:40 pm
by iambiguous
Mr. Wiggle wrote: Tue Apr 25, 2023 6:14 pm
iambiguous wrote: Tue Apr 25, 2023 3:59 pmWell, all of this depends of course on who is programming the chatbot. I'm sure for those racists who imagine black and brown and red folks in a stereotypical manner, the chatbots will follow suit.
You are less than honest, Iambiguous. You really do introduce these comments because, for some reason, they are vital to you.

Why not start a thread where you frame what you see your issue, the issue, as being. There I will happily discuss the views out there, and my views.
Again, Mr. Wiggle:
But, seriously, why don't you respond to the points I raised in regard to your Chomsky assessment above:
Noam Chomsky wrote:Jorge Luis Borges once wrote that to live in a time of great peril and promise is to experience both tragedy and comedy, with “the imminence of a revelation” in understanding ourselves and the world. Today our supposedly revolutionary advancements in artificial intelligence are indeed cause for both concern and optimism. Optimism because intelligence is the means by which we solve problems. Concern because we fear that the most popular and fashionable strain of A.I. — machine learning — will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge.
Here, in regard to either flesh and blood human intelligence or artificial machine intelligence, I come back to dasein. And, in particular, in regard to moral and political value judgments in the is/ought world.

Really, what's the difference between them if neither of them in a No God world is able...either philosophically or scientifically...to establish a moral assessment that could actually be demonstrated to encompass behaviors that all rational and virtuous men and women are obligated to embrace if they wish to be thought of as rational and virtuous?

Chomsky will no doubt suggest that capitalism reflects "a fundamentally flawed conception of language and knowledge" as it pertains to rational and virtuous behaviors. Whereas the Libertarians and the Objectivists among us, while agreeing that philosophically, politically, morally there is an optimal frame of mind, will insist instead that this is precisely what capitalism encompasses.

So, Mr. Flesh and Blood human being or Mr. Chatbot...which is it?
You're the one who introduced Chomsky's take on AI.

As for my own "rooted existentially in dasein" take on AI, I've been providing that here: viewtopic.php?f=23&t=39982

Grow a pair and contribute.

Re: Artificial Intelligence: What it portends

Posted: Tue Apr 25, 2023 7:51 pm
by Flannel Jesus
🤦 everything you think is rooted existentially in dasein. You don't have to say it every time. "My own take on AI" means exactly the same thing.

Re: Artificial Intelligence: What it portends

Posted: Tue Apr 25, 2023 8:23 pm
by iambiguous
Flannel Jesus wrote: Tue Apr 25, 2023 7:51 pm 🤦 everything you think is rooted existentially in dasein. You don't have to say it every time. "My own take on AI" means exactly the same thing.
The sheer futility embedded in setting you straight. :roll:

There are hundreds and hundreds and hundreds of factors embedded in our interactions with others that are materially, empirically, essentially, objectively, etc., applicable to all of us. Aspects of our social, political and economic interactions that no one gets into heated squabbles regarding. Neither flesh and blood human beings nor, I suspect, any future AI "replicants".

The parts I focus on pertaining to dasein here -- https://www.ilovephilosophy.com/viewtop ... 1&t=176529 -- revolve around conflicting moral and political value judgments. And in my view, that too is applicable to both flesh and blood human beings and AI chatbots.

At least until an AI chatbot someone here has interacted with can provide arguments able to convince me that it is not reasonable to be "fractured and fragmented" morally and politically in a No God world.

Re: Artificial Intelligence: What it portends

Posted: Tue Apr 25, 2023 8:25 pm
by Flannel Jesus
Until you can show me a view that someone holds that ISN'T rooted in dasein, it's going to remain a pointless thing to say. Do you have any examples of that? Who has views on AI that aren't rooted in dasein, and what are they?