creating foolproof perfectly "moral" humans?

Veritas Aequitas wrote: ↑Thu May 18, 2023 2:59 am
Yes, goals drive actions, but merely setting expected actions against goals is vague.

Flannel Jesus wrote: ↑Wed May 17, 2023 12:38 pm
I'm going to lay out a bit of the background thoughts I have about all of this; maybe it will be illuminating - at the very least, illuminating my own stupidities, but I hope it's more than that.
The start of it for me is narrowing down what it means to say someone should, or shouldn't, do something. "You should do this." "You shouldn't do that." Morality aside for a moment, what do statements like these mean and how can they be objective?
It's simple: should statements are goal oriented statements. "If your goal is this, you should do that." If you have a goal, there are usually things you can do that will move you towards that goal, and things you can do that will move you away from that goal. Goal oriented should statements are capable of being objective - I'm on board with the objectivity of statements like "if you want to learn guitar, you should watch these lessons" or "if you want to build an extension on your house without getting in trouble with the local government, you should have plans drawn up and you should submit them to your local council for approval first."
You have a goal, and there are things that are objectively good for reaching that goal, and other things that are objectively counter to that goal.
Rather, 'shoulds' and 'oughts' are conditioned upon a Framework and System with its embedded constitution, ultimate goals, and sub-goals.
This is where I introduced the concept of a Framework and System of Reality [FSR] or Knowledge [FSK].
You need to be very precise here.

Flannel Jesus wrote:
Morality can be framed as a subset of these goal oriented statements. Morality most often comes in one of two flavours: virtue based morality, and consequence-based morality. I'm going to just ignore virtue based morality for the time being and focus on morality based on consequences.
Now, I don't know how any of you guys define morality in particular, but in terms of consequence-based morality, what we have is a "moral goal" of, for example, achieving the greatest happiness for the most people (or contentment or satisfaction), or avoiding the most suffering for the most people - something along those lines. Those are the moral "goals" - maybe not exactly those, I'm using very loose wording here, but it's a reasonable enough starting point. So we have some goals, now we can construct some "should" statements.
ALL humans have various mental functions supported by specific neural correlates, e.g. intelligence, memory, dreaming, reasoning, etc.
As such, ALL humans have a specific inherent moral function supported by its specific neural correlates.
So, we need a precise definition of 'what is morality' based on empirical evidence that represents what morality means in terms of the human being and its brain.
I see you are critiquing your own proposals, and I agree that 'happiness' as a goal is not foolproof.

Flannel Jesus wrote:
If your goal is to achieve the greatest happiness for people and avoid suffering, you shouldn't torture people. Obviously the consequence of torturing is usually suffering, so if your goal is to avoid suffering, it stands to reason...
If your goal is to achieve the greatest happiness for people, you should (support some policy that someone wants to argue will result in more happiness).
The objectivity of those two statements above, for me, is perfectly fine. I could totally agree that both of those statements above could be objective. If your goal is this, OBJECTIVELY an effective way of achieving that goal is this. No problem here whatsoever.
But, the reason why I don't go from that to "I believe in objective morality" is, that's a big IF.
If your goal is to achieve the greatest happiness for people and avoid suffering, you shouldn't torture people. Yes, this is objectively true. But what if you come across a creature whose goal isn't that?
What if an alien species shows up to earth, and they're intelligent and rational - more intelligent and rational than us - but, for them, the greatest pleasure is in consuming human babies? What if they experience the euphoria of a thousand human orgasms when they consume a human baby?
Let's imagine this species isn't hostile to humans, apart from their desire to consume babies. Let's even imagine there's only, say, 1000 of these aliens, and they only want to each eat 1 baby a week, so there's no real risk of us going extinct any time soon. And they're more than willing to talk to adult humans.
What could we do to convince them that they shouldn't eat human babies? What could we do to convince them that it's an objective fact that they shouldn't eat babies?
They are rational beings. They are scientific beings - they know how to look at evidence and decide what they think is true based on that evidence and the models that best explain it.
If we're talking to these beings, humanity could convince them of, say, the atomic model of chemistry. Humanity could convince them of relativity, or quantum mechanics. Humanity probably wouldn't need to convince them, because they'd probably have their own models that are more or less isomorphic to these things, if not more refined, but we could.
But could we convince them that they should want to share our moral goals? Could we use observable, objective, scientific facts to convince them that their goal of achieving euphoria through consuming human babies should be replaced by our goals of avoiding human suffering, increasing human happiness, not killing human babies for pleasure?
I don't think we could. I don't think there's any objective scientific fact we could point to to make them give up their goals and replace them with ours. And I don't think the aliens would be in any way irrational or unscientific if they said, no thanks, we'll just keep eating the babies.
That's what I mean when I disagree with objective morality. I fully agree with the objectivity of "if your goal is this, you should do this". I don't agree that all rational, scientific agents in the universe could necessarily be convinced to share our moral goals. Their axioms of value are too far removed from our own, and they're not irrational for that. "I don't believe in objective morality" means I don't think there's an objective reason why some other being with different goals must share our moral goals.
And it doesn't apply only to aliens either. It applies to humans. Psychopaths and sociopaths. You could convince a sociopath that, IF their goal is to achieve the most happiness for the most people, they shouldn't be scamming all these old people. But could you convince that sociopath to care about the most happiness for the most people? Rationally? If they don't care, then they don't care. I don't know what you could say to make them care.
As I have stated, we need to define morality precisely such that it is foolproof.
In addition, 'morality' is restricted to humans and is never applicable to non-humans [aliens].
This is why humans can continue to kill non-humans, consciously or unconsciously, to prioritize their own interests. Otherwise, humans should not kill viruses, bacteria, insects, etc.; nor should they kill any living non-human things, like plants and animals, for food to facilitate their survival.
But the killing of living non-humans has to be optimized, confined, and limited to the extent that it is NOT detrimental to the long-term survival of humans.
As I have stated, ALL humans have an inherent moral potential that is slowly unfolding; at present [2023] it is unfolding at a slow rate and with low activity.

Flannel Jesus wrote:
But could you convince that sociopath to care about the most happiness for the most people? Rationally? If they don't care, then they don't care. I don't know what you could say to make them care.
This is why at present we have psychopaths and all sorts of evil-laden humans with low moral competence.
I have also stated that it is too late for the present because we cannot change the brain states of evil-laden humans overnight to make them highly morally competent.
As such, we have to bear with the current state of morality.
Expediting the activity and unfolding of the inherent moral potential in a human will take a lot of time, since it requires rewiring the moral neural correlates in the brain.
Thus the ONLY possibility of efficient moral progress is to effect such changes in future generations, which may take 50, 75, 100, 150 years or more to achieve reasonable results.
To expedite the above neural changes in the brains of average humans in the future to ensure high moral competence, we must at present be able to identify the objective moral facts and their physical mechanisms so that appropriate changes can be made in the FUTURE.
The above must be carried out in the future in a FOOLPROOF mode within the scientific FSK and the moral FSK.
I want to emphasize FOOLPROOF to avoid any possibility of creating 'Frankensteins'.