bahman wrote: ↑Tue Jan 30, 2024 2:23 pm
Atla wrote: ↑Tue Jan 30, 2024 1:28 pm
bahman wrote: ↑Tue Jan 30, 2024 1:14 pm
Let's see what his current definition is.
Well morality, first and foremost, deals with what is right and wrong. VA has taken the morality out of morality, VA's "morality" is not concerned with what is right and wrong. His "morality" is just about the objective features of the inherent human moral sense.
Except then he smuggles back right and wrong, by prescribing to us which parts of the human moral sense we should enhance, and which parts of it we should get rid of. But he's pretending that he didn't smuggle back right and wrong.
So in short, VA's morality is not really about morality, and it's also a self-contradiction.
But how could we enhance some moral senses and get rid of others if we have no criteria for doing so?
Well, the interesting thing is what we generally believe is objective, in VA's assessment. Months ago, perhaps more than a year ago, he had specific changes in moral attitude that we should make, that humans in general should make, and this included training brains to be morally better, in his sense of better.
He didn't seem to notice that if the current state of brains, and the attitudes those brains have, is what justifies objective moral attitudes (his position), then we cannot possibly have objective grounds to change or improve brains' moral attitudes, because that would go against objectivity itself, objectivity being determined by looking at what brains tend to believe/have now.
He hasn't talked about these neo-transhumanist changes in the last while, but they were a foundational proposed plan for years.
Oh, and to answer your question about criteria: if VA's intuition says so. He's never said it openly like that, but that seems to be what happens. For a long time, mirror neurons demonstrated objective morals for him, given their involvement in empathy. But there was never any justification for viewing empathy as objective because it is hardwired into brains, while excluding aggressive tendencies, which are also hardwired in. He wanted us to enhance certain neuronal patterns but not others.
But there cannot be objective grounds for this if we determine morality via what's there in brains. What's there is the already-present ratio between aggressive and compassionate neuronal patterns. You can't use what was, by HIS own account, the determining source of objectivity as grounds for changing that very source, yet he repeatedly asserted that this should be done.
Pointing this out did little to elicit some other source of objectivity, one that would allow changing the stated source of objective conclusions: the current states of brains.
I think he just thought it was obvious, and I have sympathy for that position, but what is obviously morally right to one person may well be obviously morally wrong, or a low priority, to someone else.