moral relativism

Should you think about your duty, or about the consequences of your actions? Or should you concentrate on becoming a good person?


iambiguous
Posts: 11317
Joined: Mon Nov 22, 2010 10:23 pm

Re: moral relativism

Post by iambiguous »

Ethics for the Age of AI
Mahmoud Khatami asks, can machines make good moral decisions?
Ethical decisions are rarely black and white. They often involve balancing competing principles, such as fairness, justice, and the greater good.

Or, as William Barrett once put it in Irrational Man:

"For the choice in...human [moral conflicts] is almost never between a good and an evil, where both are plainly marked as such and the choice therefore made in all the certitude of reason; rather it is between rival goods, where one is bound to do some evil either way, and where the ultimate outcome and even---or most of all---our own motives are unclear to us. The terror of confronting oneself in such a situation is so great that most people panic and try to take cover under any universal rules that will apply, if only to save them from the task of choosing themselves."

On the other hand, tell that to the moral objectivists among us. Any number of them will insist that fairness, justice and the good itself revolve entirely around their own dogmatic ethics.

Then there are the endless hypothetical examples in which objectivists all up and down the moral and political spectrum explain why you should behave as they do. For some, you must behave exactly as they do. Or else.

Thus...

For example, should a doctor prioritize saving the life of a young patient over an elderly one? Should a judge impose a harsher sentence to deter future crimes, even if it seems unfair to the individual?
Okay, if you are a moral objectivist let us know, given a particular set of circumstances, what the optimal assessment might be. Or, perhaps, the only rational assessment?

These questions highlight the complexity of morality, which is shaped by cultural norms, personal beliefs, and situational factors. Translating this complexity into algorithms is no small feat, especially as machines lack the ability to feel empathy or to understand the nuances of human experience.
Exactly! What can a machine really know about complex medical and legal interactions among flesh and blood mere mortals? Instead, it will tell you what the flesh and blood human beings who programmed it think and feel about them.

Still, I'm the first to admit I don't grasp AI technology in a sophisticated manner. So, what seems crucial here is the extent to which AI machines are able to reconfigure that which they are programmed to think into actual original thinking.
MikeNovack
Posts: 502
Joined: Fri Jul 11, 2025 1:17 pm

Re: moral relativism

Post by MikeNovack »

iambiguous wrote: Sun Aug 17, 2025 10:45 pm
Exactly! What can a machine really know about complex medical and legal interactions among flesh and blood mere mortals? Instead, it will tell you what the flesh and blood human beings who programmed it think and feel about them.

Still, I'm the first to admit I don't grasp AI technology in a sophisticated manner. So, what seems crucial here is the extent to which AI machines are able to reconfigure that which they are programmed to think into actual original thinking.
Explanation (this is, or at least was, my line of country)

The modern AIs are neural nets. For the moment, we are in the realm of thought. Imagine a collection of nodes. Each node has a receptor at one end, where lines able to carry signals come in from some subset of nodes, and at the other end lines go out to a subset of nodes. Each of these lines has a signal strength modifier, and the node itself a threshold value. SOME nodes also have signal lines coming in from outside the neural net, and some have lines going out of the neural net. What each node does is add up all the signals coming in to its receptor, and if the total is more than the threshold, send a signal down all of its outgoing lines. The threshold and signal modifiers are just numbers.
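The node just described can be sketched in a few lines of Python (an invented illustration, not Mike's actual code; the function name is mine):

```python
# One node of a neural net: sum the weighted incoming signals and fire
# only if the total exceeds the node's threshold. The weights are the
# "signal strength modifiers"; all of them are just numbers.

def node_fires(inputs, weights, threshold):
    """Return True if the weighted sum of incoming signals exceeds the threshold."""
    total = sum(signal * weight for signal, weight in zip(inputs, weights))
    return total > threshold

# Two incoming lines with modifiers 0.5 and 1.0, threshold 0.8:
print(node_fires([1, 1], [0.5, 1.0], 0.8))  # fires: 1.5 > 0.8
print(node_fires([1, 0], [0.5, 1.0], 0.8))  # silent: 0.5 <= 0.8
```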

STOP! At this point you might have recognized this abstract structure as your brain: neurons being the nodes, the connecting lines axons, the numbers stored as concentrations of some chemical. Biology is EMULATING the abstract structure "neural net".

BUT -- as an experienced software designer/writer I could also write a computer program to emulate that neural net. So when you say "what they are programmed to think": NOTHING YET. My program is only implementing the neural net, not determining what the neural net will do. THAT comes by training the neural net, in formal terms, to evaluate some function. The learning process involves changing those stored numbers until it "works better", and that can be a semi-random process.

OK, let's say I'm going to teach it to make only legal moves in chess. I am NOT programming in the rules. I'm giving it a board with some possible arrangement of pieces and telling it "make a move". If the move is illegal (remember, it does not start out knowing how the pieces move) I order it to readjust its values (slightly) until it makes a legal move. Then I go back and redo all the previous tries. All still legal moves? Fine, we have made progress. After a lot of this (training is expensive in terms of computer power) the neural net will only make legal moves. It has been taught.
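That adjust-and-recheck loop can be shown on a toy problem (an invented sketch, not real chess-training code: a single threshold unit stands in for the net, and learning logical AND stands in for learning legal moves; the trainer never states the rule, it only scores answers):

```python
# Train by semi-random nudging: perturb the stored numbers, keep the
# perturbation only if the net gets no more answers wrong than before.

import random

random.seed(0)

# The "rule" the trainer knows but never tells the net: output 1 only for (1, 1).
CASES = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def output(weights, threshold, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) > threshold else 0

def all_correct(weights, threshold):
    return all(output(weights, threshold, x) == y for x, y in CASES)

weights, threshold = [0.0, 0.0], 0.0
for step in range(200_000):
    if all_correct(weights, threshold):
        break
    # nudge the numbers slightly at random
    trial_w = [w + random.uniform(-0.5, 0.5) for w in weights]
    trial_t = threshold + random.uniform(-0.5, 0.5)
    wrong_before = sum(output(weights, threshold, x) != y for x, y in CASES)
    wrong_after = sum(output(trial_w, trial_t, x) != y for x, y in CASES)
    if wrong_after <= wrong_before:  # keep the nudge only if it doesn't hurt
        weights, threshold = trial_w, trial_t

print(all_correct(weights, threshold))  # the unit has "learned" the rule
```

Note that no line of this program encodes the rule itself; the rule lives only in how the trainer scores the answers.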

You can't get back out of it an explanation of why a move is legal (it has just "learned" a set of values that make it work). But if I wanted to have a second neural net (with the same structure) also make only legal chess moves, I wouldn't have to teach it from scratch; I could just COPY all those values to the corresponding places. Pity we can't clone our brains that way.
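The copy-the-values point in miniature (again an invented sketch with a single threshold unit; the "trained" numbers are made up for illustration):

```python
# "Teaching" a second net with the same structure is just copying the
# stored numbers across; no retraining is needed.

def output(weights, threshold, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) > threshold else 0

trained_w, trained_t = [0.6, 0.6], 1.0          # numbers a training run might settle on
clone_w, clone_t = list(trained_w), trained_t   # the "clone" receives a plain copy

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert output(trained_w, trained_t, x) == output(clone_w, clone_t, x)
print("clone agrees on every input")
```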

Now I am NOT at all sure what the training process would be for "make the moral choice", but I think you can see that it's not coming from the programmers (we just make sure the neural net is emulated correctly). It's the TRAINERS that matter: how they decide that in training run 4556778 the answer was wrong (or right). Unlike "was that a legal move in chess", that is not going to be as easy.
Belinda
Posts: 10548
Joined: Fri Aug 26, 2016 10:13 am

Re: moral relativism

Post by Belinda »

MikeNovack wrote: Sat Aug 16, 2025 8:06 pm
Belinda wrote: Sat Aug 16, 2025 10:16 am
Indeed, and to deprive anyone of hope is the worst that anyone can do to them. Attempts to deprive people of hope are to be seen all around the world today.

All leaders of men always have been concerned with seeding and fostering hope. Honest leaders don't knowingly seed and foster false hopes at any level, economic or spiritual.
Except that last would imply honest leaders do not remain leaders, because people will not follow those whose message is "there is no way forward except through what is truly dreadful". Of course honest leaders may UNKNOWINGLY seed and foster false hope: they themselves believe in these false hopes.

The sad reality is that there are NO sustainable solutions for ten billion humans on this planet. Not unless we succeed in harnessing fusion, and success with that has stubbornly remained "just twenty years in the future" for MANY decades now (my friends who remained in Physics are not optimistic; they tell me it will stay "just twenty years in the future" for the foreseeable future).

Of course leaders COULD take the position that the crash is inevitable, and that it is enough for the people around then, who will have to try to live through it, to deal with it. Why make people around now unhappy if there is nothing constructive to be done to avert the crash? Simply say all is well and DON'T LOOK AHEAD.

Might I suggest the Japanese animated movie Pom Poko. It's only on the surface about tanukis (and kitsunes) responding to ecological threat. It represents the range of ways we humans respond when faced with existential threat.
Look how influencers raise false hopes. It used to be called bread and circuses. Now, besides escapist fantasies, bread and circuses includes lying politicians and an assortment of self-seekers.
I believe that unless a grown adult is ill or dying, she is owed knowledge of the truth, at least the best truth available. News reporters bear a heavy moral responsibility.

I will look out for Pom Poko.
iambiguous
Posts: 11317
Joined: Mon Nov 22, 2010 10:23 pm

Re: moral relativism

Post by iambiguous »

Ethics for the Age of AI
Mahmoud Khatami asks, can machines make good moral decisions?
To better understand the ethical challenges of AI, we can turn to the insights of Immanuel Kant and John Stuart Mill.
Of course, the truly hardcore determinists among us are compelled to insist that both Kant and Mill were themselves just two more of nature's very own automatons.
Kant’s deontological ethics emphasizes the following of moral rules or duties, regardless of the consequences. From a deontological perspective, an AI system might be programmed to always prioritize human life, even if it leads to what in some senses are less efficient outcomes.
Right, like in regard to any number of moral and political conflagrations, AI can finally pin down that which mere mortals have so utterly failed to do after thousands of years. Then the part where AI today basically revolves around the fact that they are programmed...programmed by flesh and blood human beings.

So, if the issue being debated pertains to human sexuality, what on Earth can a machine grasp regarding that?
In contrast, Mill’s utilitarianism focuses on maximizing overall happiness and minimizing harm. A utilitarian approach might program an AI system to make decisions that result in the greatest good for the greatest number, even if it means sacrificing individual rights.
Same thing? After all, we still live in a world where over and over and over again, what makes some happy makes others downright miserable. How would AI ethics go about for all practical purposes changing that? From what set of assumptions, assessments and conclusions into what other set?

Anyone here care to explore this given a specific set of circumstances involving conflicting goods?
These contrasting philosophical frameworks highlight the complexity of ethical decision-making, and indicate the difficulty of translating human morality into algorithms. While AI can assist in making decisions, then, it cannot replace the human capacity for moral reasoning.
More to the point, in my view, is the fact there are any number of moral philosophies that mere mortals have already "thought up": https://en.wikipedia.org/wiki/Category:Ethical_theories

So, will this be no less the case in regard to machine morality?
MikeNovack
Posts: 502
Joined: Fri Jul 11, 2025 1:17 pm

Re: moral relativism

Post by MikeNovack »

I approach the human vs machine question differently.

I would ask if it is possible for me to describe/model some abstract device that would be an exact model of a human brain. Since our brains contain a very large number of neurons, with very complex connections between them, and with no way to extract the controlling values (neuron firing threshold, axon signal modifier), this is for now THEORETICAL. Nevertheless, it is justification for the statement "there exists a neural net equivalent to our brain".

We could say it is not our brain thinking but this neural net.

Now, imagine that I take some computer and write a program that can emulate this neural net. The SAME NET would then also be able to handle the same thoughts. Note that this is not "the machine thinking" but the (abstract) neural net thinking.
Belinda
Posts: 10548
Joined: Fri Aug 26, 2016 10:13 am

Re: moral relativism

Post by Belinda »

MikeNovack wrote: Sun Aug 24, 2025 2:23 pm I approach the human vs machine question differently.

I would ask if it is possible for me to describe/model some abstract device that would be an exact model of a human brain. Since our brains contain a very large number of neurons, with very complex connections between them, and with no way to extract the controlling values (neuron firing threshold, axon signal modifier), this is for now THEORETICAL. Nevertheless, it is justification for the statement "there exists a neural net equivalent to our brain".

We could say it is not our brain thinking but this neural net.

Now, imagine that I take some computer and write a program that can emulate this neural net. The SAME NET would then also be able to handle the same thoughts. Note that this is not "the machine thinking" but the (abstract) neural net thinking.
The machine is well able to evaluate quantity.

In order to evaluate quality the machine would have to embody, besides cognition, the quality of affect: of feeling pain, fear, and joy, and of attaching those to its very existence.

While modern AI machines have ethical criteria, those are not created by the machine but are trained by humans into the machines' repertoires.
Belinda
Posts: 10548
Joined: Fri Aug 26, 2016 10:13 am

Re: moral relativism

Post by Belinda »

iambiguous wrote: Sun Aug 24, 2025 12:43 am Ethics for the Age of AI
Mahmoud Khatami asks, can machines make good moral decisions?
To better understand the ethical challenges of AI, we can turn to the insights of Immanuel Kant and John Stuart Mill.
Of course, the truly hardcore determinists among us are compelled to insist that both Kant and Mill were themselves just two more of nature's very own automatons.
Kant’s deontological ethics emphasizes the following of moral rules or duties, regardless of the consequences. From a deontological perspective, an AI system might be programmed to always prioritize human life, even if it leads to what in some senses are less efficient outcomes.
Right, like in regard to any number of moral and political conflagrations, AI can finally pin down that which mere mortals have so utterly failed to do after thousands of years. Then the part where AI today basically revolves around the fact that they are programmed...programmed by flesh and blood human beings.

So, if the issue being debated pertains to human sexuality, what on Earth can a machine grasp regarding that?
In contrast, Mill’s utilitarianism focuses on maximizing overall happiness and minimizing harm. A utilitarian approach might program an AI system to make decisions that result in the greatest good for the greatest number, even if it means sacrificing individual rights.
Same thing? After all, we still live in a world where over and over and over again, what makes some happy makes others downright miserable. How would AI ethics go about for all practical purposes changing that? From what set of assumptions, assessments and conclusions into what other set?

Anyone here care to explore this given a specific set of circumstances involving conflicting goods?
These contrasting philosophical frameworks highlight the complexity of ethical decision-making, and indicate the difficulty of translating human morality into algorithms. While AI can assist in making decisions, then, it cannot replace the human capacity for moral reasoning.
More to the point, in my view, is the fact there are any number of moral philosophies that mere mortals have already "thought up": https://en.wikipedia.org/wiki/Category:Ethical_theories

So, will this be no less the case in regard to machine morality?

Within one absolutely necessary future, humans don't have to be pushmepullyou automata. We send young humans to school and university so they can be free to wittingly harmonise with necessary truth.

Humans are free to the extent we are guided by reason when we choose our futures.
MikeNovack
Posts: 502
Joined: Fri Jul 11, 2025 1:17 pm

Re: moral relativism

Post by MikeNovack »

Belinda wrote: Sun Aug 24, 2025 7:19 pm
In contrast, Mill’s utilitarianism focuses on maximizing overall happiness and minimizing harm. A utilitarian approach might program an AI system to make decisions that result in the greatest good for the greatest number, even if it means sacrificing individual rights.
Same thing? After all, we still live in a world where over and over and over again, what makes some happy makes others downright miserable. How would AI ethics go about for all practical purposes changing that? From what set of assumptions, assessments and conclusions into what other set?

Anyone here care to explore this given a specific set of circumstances involving conflicting goods?
Well, staying closer to the Epicurean original is of some use. Utilitarianism is often stated with "greatest good" as opposed to "greatest happiness".

Happiness is an emotion. We probably agree that while humans differ greatly in what good makes them happy, at least the resulting happiness is the same for all: a possible metric. But when we go back to the level of the good (that causes the happiness) we have no way to measure (there is no one "good" we could use).

The problem with any strict consequentialist basis for ethics is that it can only tell us retrospectively that we chose right/wrong. It cannot guide our choice of action (since that choice must be made before the eventual outcome is known). And there is also a matter of cause and effect (we chose X, the eventual outcome Y was wrong, but was there a causal relationship?).

Morality based on the probability of outcomes could be used as a guide. Just because the eventual outcome was one of low probability does not mean the probability assessment was wrong. If I roll one die, the probability of not showing "one" is 5/6. If I do roll a "one", the probability was still 5/6.
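The die arithmetic, checked in code (a trivial illustration of the point, nothing more):

```python
# Probability of a fair die NOT showing "one": count favourable faces
# over total faces. The assessment is a property of the die, not of
# whatever outcome we happen to observe on one roll.

from fractions import Fraction

faces = range(1, 7)
p_not_one = Fraction(sum(1 for face in faces if face != 1), len(faces))
print(p_not_one)  # prints 5/6

# Observing a "one" on some roll does not revise this: the probability
# for the next roll is still 5/6.
```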

ONE advantage of deontology is that, since it is independent of future events, it can always serve as a guide for our choices.
popeye1945
Posts: 3058
Joined: Sun Sep 12, 2021 2:12 am

Re: moral relativism

Post by popeye1945 »

Morality is biology-based, though denied by the absurd interjection of the supernatural. If you take this as the foundation of any moral system, then moral relativism is dependent upon geographical isolation and the morphing of the experiences of place into a culture that loses itself in fashion and mythology, with mutations compounded through the culture's history. This geographical isolation is the means by which species of anything mutate into something new and populate the planet. This makes it more difficult to recognize our species as one humanity, much more similar than not. Culture, in a sense, has aborted human adaptive mutation into something new due to adapting to the synthetic nature of culture. Culture is more similar than not; thus, the adaptation to nature is almost flatlined.

We must, if we are to survive as a species, find a way to guide cultures into adaptive mutations to nature; otherwise, we continue on a journey of culture-bound monstrosities that destroy the foundations of life itself. Cultures, as well as nature itself, must become sacred to the life it produces. Cultures must adapt to human self-control in harmonizing with nature. Recognition that biology is the foundation of morality, indeed the source of all meaning in the world, is an aim in this direction. The beginning of such a self-controlled process could be a more realistic understanding of the human relationship to the development of the world and its duties to the world's diversity of life as its most conscious creature to date. A start in this direction would be to acknowledge that life is the source of all measures and meanings, and human morality is based on the survival and well-being of the human species.
MikeNovack
Posts: 502
Joined: Fri Jul 11, 2025 1:17 pm

Re: moral relativism

Post by MikeNovack »

popeye1945 wrote: Sun Aug 24, 2025 10:55 pm Morality is biology-based, though denied by the absurd interjection of the supernatural. If you take this as the foundation of any moral system, then moral relativism is dependent upon geographical isolation and the morphing of the experiences of place into a culture that loses itself in fashion and mythology, with mutations compounded through the culture's history.

Except --- something could be moral in the abstract, and the particular cultural solution picks up a sense of being moral, but that a culture has A particular solution might be serving a different "cultural evolution" purpose.

Example: "It is right to cover your nakedness" is learned in every culture. << Sorry folks, but I am going to attach learning about morality to EARLY training, like "potty training". >> BUT, what we do not see are any cultures doing this randomly, simply covering any which way. Instead the culture will have some approved method, AND a moral feeling gets attached to it. However, I think that is because cultures that had defined "how you cover" had an evolutionary advantage over "any which way" << you could tell, at distances far beyond those over which you could recognize people, whether a person was or was not from your culture >>.

This makes it more difficult to recognize our species as one humanity, much more similar than not. Culture, in a sense, has aborted human adaptive mutation into something new due to adapting to the synthetic nature of culture.

We are a species that has always had cultures. Cultures can evolve at a rate orders of magnitude faster than our biological evolution. However, our species never has had trouble recognizing "one humanity". People taken from different cultures mate freely if mates from their own are unavailable, and often even if available. I've heard it said that the fastest way to learn another language is in bed.

We must, if we are to survive as a species, find a way to guide cultures into adaptive mutations to nature; otherwise, we continue on a journey of culture-bound monstrosities that destroy the foundations of life itself.

Uh, yes ---- but that's the sort of thing we should be discussing in the "environmental ethics" section that this forum still lacks. By all indications, it's already too late to prevent a crash. So really you might want to think about cultures that COULD live in balance with the ecosystem, as well as cultures for right behavior during the crash.
Belinda
Posts: 10548
Joined: Fri Aug 26, 2016 10:13 am

Re: moral relativism

Post by Belinda »

popeye1945 wrote: Sun Aug 24, 2025 10:55 pm Morality is biology-based, though denied by the absurd interjection of the supernatural. If you take this as the foundation of any moral system, then moral relativism is dependent upon geographical isolation and the morphing of the experiences of place into a culture that loses itself in fashion and mythology, with mutations compounded through the culture's history. This geographical isolation is the means by which species of anything mutate into something new and populate the planet. This makes it more difficult to recognize our species as one humanity, much more similar than not. Culture, in a sense, has aborted human adaptive mutation into something new due to adapting to the synthetic nature of culture. Culture is more similar than not; thus, the adaptation to nature is almost flatlined.

We must, if we are to survive as a species, find a way to guide cultures into adaptive mutations to nature; otherwise, we continue on a journey of culture-bound monstrosities that destroy the foundations of life itself. Cultures, as well as nature itself, must become sacred to the life it produces. Cultures must adapt to human self-control in harmonizing with nature. Recognition that biology is the foundation of morality, indeed the source of all meaning in the world, is an aim in this direction. The beginning of such a self-controlled process could be a more realistic understanding of the human relationship to the development of the world and its duties to the world's diversity of life as its most conscious creature to date. A start in this direction would be to acknowledge that life is the source of all measures and meanings, and human morality is based on the survival and well-being of the human species.
Have you read about panentheism, especially Spinoza's panentheism where God is another name for Nature?
MikeNovack
Posts: 502
Joined: Fri Jul 11, 2025 1:17 pm

Re: moral relativism

Post by MikeNovack »

Belinda wrote: Mon Aug 25, 2025 9:46 am
Have you read about panentheism, especially Spinoza's panentheism where God is another name for Nature?
He needs to consider both. Spinoza's was pantheism ("there is only one substance").
popeye1945
Posts: 3058
Joined: Sun Sep 12, 2021 2:12 am

Re: moral relativism

Post by popeye1945 »

MikeNovack wrote: Mon Aug 25, 2025 12:08 am
popeye1945 wrote: Sun Aug 24, 2025 10:55 pm Morality is biology-based, though denied by the absurd interjection of the supernatural. If you take this as the foundation of any moral system, then moral relativism is dependent upon geographical isolation and the morphing of the experiences of place into a culture that loses itself in fashion and mythology, with mutations compounded through the culture's history.

Except --- something could be moral in the abstract, and the particular cultural solution picks up a sense of being moral, but that a culture has A particular solution might be serving a different "cultural evolution" purpose.

Example: "It is right to cover your nakedness" is learned in every culture. << Sorry folks, but I am going to attach learning about morality to EARLY training, like "potty training". >> BUT, what we do not see are any cultures doing this randomly, simply covering any which way. Instead the culture will have some approved method, AND a moral feeling gets attached to it. However, I think that is because cultures that had defined "how you cover" had an evolutionary advantage over "any which way" << you could tell, at distances far beyond those over which you could recognize people, whether a person was or was not from your culture >>.
This makes it more difficult to recognize our species as one humanity, much more similar than not. Culture, in a sense, has aborted human adaptive mutation into something new due to adapting to the synthetic nature of culture.

We are a species that has always had cultures. Cultures can evolve at a rate orders of magnitude faster than our biological evolution. However, our species never has had trouble recognizing "one humanity". People taken from different cultures mate freely if mates from their own are unavailable, and often even if available. I've heard it said that the fastest way to learn another language is in bed.

We must, if we are to survive as a species, find a way to guide cultures into adaptive mutations to nature; otherwise, we continue on a journey of culture-bound monstrosities that destroy the foundations of life itself.

Uh, yes ---- but that's the sort of thing we should be discussing in the "environmental ethics" section that this forum still lacks. By all indications, it's already too late to prevent a crash. So really you might want to think about cultures that COULD live in balance with the ecosystem, as well as cultures for right behavior during the crash.

It is more about enlightening humanity as to its significance as a functional aspect of the world, and about humanity's lack of self-control.
popeye1945
Posts: 3058
Joined: Sun Sep 12, 2021 2:12 am

Re: moral relativism

Post by popeye1945 »

Belinda wrote: Mon Aug 25, 2025 9:46 am
popeye1945 wrote: Sun Aug 24, 2025 10:55 pm Morality is biology-based, though denied by the absurd interjection of the supernatural. If you take this as the foundation of any moral system, then moral relativism is dependent upon geographical isolation and the morphing of the experiences of place into a culture that loses itself in fashion and mythology, with mutations compounded through the culture's history. This geographical isolation is the means by which species of anything mutate into something new and populate the planet. This makes it more difficult to recognize our species as one humanity, much more similar than not. Culture, in a sense, has aborted human adaptive mutation into something new due to adapting to the synthetic nature of culture. Culture is more similar than not; thus, the adaptation to nature is almost flatlined.

We must, if we are to survive as a species, find a way to guide cultures into adaptive mutations to nature; otherwise, we continue on a journey of culture-bound monstrosities that destroy the foundations of life itself. Cultures, as well as nature itself, must become sacred to the life it produces. Cultures must adapt to human self-control in harmonizing with nature. Recognition that biology is the foundation of morality, indeed the source of all meaning in the world, is an aim in this direction. The beginning of such a self-controlled process could be a more realistic understanding of the human relationship to the development of the world and its duties to the world's diversity of life as its most conscious creature to date. A start in this direction would be to acknowledge that life is the source of all measures and meanings, and human morality is based on the survival and well-being of the human species.
Have you read about panentheism, especially Spinoza's panentheism, where God is another name for Nature?
Yes, of course, Spinoza is inspirational when one deals with this topic, and though not stated, his example intimates self-control.
popeye1945
Posts: 3058
Joined: Sun Sep 12, 2021 2:12 am

Re: moral relativism

Post by popeye1945 »

MikeNovack wrote: Mon Aug 25, 2025 11:56 am
Belinda wrote: Mon Aug 25, 2025 9:46 am
Have you read about panentheism, especially Spinoza's panentheism where God is another name for Nature?
He needs to consider both. Spinoza's was pantheism ("there is only one substance").
One substance with many modes. Today's substance of concern is energy, its frequencies, vibrations, and its many forms in apparent reality.