Mahmoud Khatami asks, can machines make good moral decisions?
Ethical decisions are rarely black and white. They often involve balancing competing principles, such as fairness, justice, and the greater good.
Or, as William Barrett once put it in Irrational Man:
"For the choice in...human [moral conflicts] is almost never between a good and an evil, where both are plainly marked as such and the choice therefore made in all the certitude of reason; rather it is between rival goods, where one is bound to do some evil either way, and where the the ultimate outcome and even---or most of all---our own motives are unclear to us. The terror of confronting oneself in such a situation is so great that most people panic and try to take cover under any universal rules that will apply, if only to save them from the task of choosing themselves."
On the other hand, tell that to the moral objectivists among us. Any number of them will insist that fairness, justice, and the good itself revolve entirely around their own dogmatic ethics.
Then come the endless hypothetical examples in which objectivists all up and down the moral and political spectrum explain to you why you should behave as they do. For some, you must behave exactly as they do. In other words, or else.
Thus...
Okay, if you are a moral objectivist, let us know, given a particular set of circumstances, what the optimal assessment might be. Or, perhaps, the only rational assessment? For example, should a doctor prioritize saving the life of a young patient over an elderly one? Should a judge impose a harsher sentence to deter future crimes, even if it seems unfair to the individual?
Exactly! What can a machine really know about complex medical and legal interactions among flesh-and-blood mere mortals? Instead, it will tell you what the flesh-and-blood human beings who programmed it think and feel about them. These questions highlight the complexity of morality, which is shaped by cultural norms, personal beliefs, and situational factors. Translating this complexity into algorithms is no small feat, especially as machines lack the ability to feel empathy or to understand the nuances of human experience.
Still, I'm the first to admit I don't grasp AI technology at a sophisticated level. So, what seems crucial here is the extent to which AI machines are able to reconfigure what they are programmed to think into actual original thinking.