MikeNovack wrote: ↑Wed Nov 05, 2025 3:22 am
You I want to address directly:
FlashDangerpants wrote: ↑Tue Nov 04, 2025 8:03 pm
a) OK, but moral oughts are not the same as prudential oughts, or goal-derived oughts in general. So a bait and switch to something like a prudential ought based on the need to live in cooperative societies won't work out.
b) I should point out that sympathy, empathy, and a basic sense of fairness are not uniquely human traits; all sorts of other social animals display an awareness of such things. It should be fairly simple to get any reasonable interlocutor to accept that the basic building blocks of moral sentiment are widely distributed beyond human society. How to construct moral reason from moral sentiment is not an easy question, though.
"a" ---- but for humans, we might later decide that "moral oughts" ARE a subset of necessary prudential oughts. For humans to continue to co-operate we might NEED morality. I just think that's the wrong place to start. If we have understanding about the properties of necessary human oughts, maybe we can make sense of "the problem" << we see something that feels like it SHOULD be a moral question but cannot get there from whatever moral system we choose >>

I don't know how fully formed your argument is at this point, whether you are teasing out a thought as you go along, or attempting to present a completed thesis using an easily digestible step-by-step method.
If it's something like the latter, then I would rather help work through the thing, as I don't think you will ever get the likes of Age and IC to do so.
If it's the former, then I have proceeded along similar lines myself in the past and will likely do so again. I would probably want to front-load the questions I have, so that by the time your argument gets there you are forewarned and therefore forearmed. It suits me for your argument to be as strong as possible in the end, even if I don't agree with it myself.
One point I feel safe raising either way is this: If you construct your assessment of morality out of the available parts given to us by our nature (i.e. excluding celestial forms and divine revelation) you will eventually run up against a limit to the logical necessity of your results. The part where you say 'X is wrong' may not have sufficient power that you can simultaneously assert that in any possible world it is erroneous to say 'X is not wrong'.
If your account is descriptive in form, you will end up saying this is just a limit on what we can morally achieve. If your account is designed to prescriptively overcome all such failings, you might be tempted to beef up an input so that some goal of some sort becomes a necessary ought. Often this would be something like wellness, wellbeing or health - prudential goals that have a surreptitiously inserted normative evaluation built in. Those don't normally work out, for the reason I just gave: they fail to make it over the is-ought gap. There is a lot to be said for the descriptive approach that doesn't try to create an ultimate prescriptive ought out of what's lying around.
MikeNovack wrote: ↑Wed Nov 05, 2025 3:22 am
"b": I agree with many of you that it may not make sense to try to include IC. But ignoring the person, the concept of human exceptionalism is respectable (even if we believe it wrong). I'm simply suggesting that "Man is a moral animal" does not imply "Man is the only moral animal". Leave that for another day.

Human exceptionalism probably cannot be ignored. If morality is entirely unique to humans, that has implications.
On the one hand it can be an entirely fictive and affected cultural construct just like economics, literature, genders and customs - open to complete re-evaluation at the drop of a hat because somebody felt it was time for change.
On the other hand, for the weirdo dualists who think humans have a special soul with a spark of the divine that hails from a non-extended realm of celestial substances, it might seem obvious that this is what grants humans access to the moral properties of the universe - something that animals, with their lack of divine spark, can never be privy to.
Suppose there is a distant island, or another planet or whatever where a new type of creature emerges to fulfil the same evolutionary niche as humanity in that location. A creature with intelligence sufficient to form large cooperative societies with complex rules and technologies and all that stuff. This creature though began not as a monkey but as a spider. They mate by drizzling sperm over a huge pile of eggs, then the female eats the male and the little spiderlings that emerge from those eggs run off to a swamp to eat slime and cannibalise each other until they are 10 years old, at which point the surviving 0.0005% return to society and get taught to read etc.
Their morality would be very different from ours. Each having, as a little wildling, eaten between 5 and 20 of their own siblings for instance. They would not be fans of our decadent mode of child rearing and our unwillingness to eat each other. Would this result of divergent evolution with its competing set of necessary oughts be just as good as our version of morality? Upon meeting them, would we be obliged to convert them to our way of thinking, or to accept that their way is right for them and ours right for us, or would there be some way to build a conversation and try to find persuasive methods over time?
For all his faults, IC can probably answer that question much more efficiently than you. Maybe these creatures are in his view animals with no access to morality, and no souls for Jesus to save. I don't care about the details; I just suspect he wouldn't stumble.