10k Philosophy challenge

Should you think about your duty, or about the consequences of your actions? Or should you concentrate on becoming a good person?

Moderators: AMod, iMod

Daniel McKay
Posts: 96
Joined: Thu Oct 29, 2015 2:48 am

Re: 10k Philosophy challenge

Post by Daniel McKay »

Putting aside possible worlds, I'm not sure where this human-based framework and system of reality comes in. The aim of ethics should be to get at the truth of morality, and there are almost certainly other moral agents somewhere in the world (where "world" is understood as a complete spatio-temporal manifold, rather than a planet).

Human weakness, or any other weakness for that matter, might be a problem with getting from the world we live in to the world we ought to live in, but that is a separate issue from trying to figure out what that world should look like. If we can't do the latter, then it will be very hard to get any real work done on the former, weakness or not.
Immanuel Can
Posts: 27604
Joined: Wed Sep 25, 2013 4:42 pm

Re: 10k Philosophy challenge

Post by Immanuel Can »

Daniel McKay wrote: Mon Jul 22, 2024 7:10 am I would say that I am trying to tackle the overall problem at its roots, but that humanity is far too small a lens. Morality must apply to all moral agents, actual and possible, in all possible worlds.
You've got a real problem, Daniel. I don't think it's solvable.

Consequentialisms (you will know there are various versions of it) are all dogged by several concerns. One is the ultimately-arbitrary nature of the implied ranking of values, as I'm sure you know. Another is the arbitrary nature of the choosing of any particular "consequence" as the priority: that move needs justification, of course; and you're asking that it be universal and applicable to all agents in all possible worlds. That's no small thing. Another such problem is the difficulty of weighing in things like motives or intentions, or habits, character and virtues, for that matter: things by which all moral practice routinely operates assumptively, and which it seems most moral agents do seem to intuit as not irrelevant to any moral calculation, but which consequentialisms all tend to avoid or obscure on the way to...consequences. There are others, but you'll probably already know them. They've been abundantly rehearsed.

But I think your biggest problem is going to be this: that ontological beliefs are logically prior to moral ones, and either make moral judgments rationally possible or impossible to rationalize fully. "Freedom" is indeed felt by many to be a value; but it's very hard to say why it ought to be, especially if some sort of Materialism, Physicalism, or some similar kind of belief that logically inevitably implies something like Determinism, Subjectivism or Amoralism is weighed as the foundational ontological belief in play. "Freedom" then becomes an imaginary, emotive, solipsistic or entirely personal kind of preference, and is not assertable as an "ought" or a universal basis of anything, I think. We may all like it: but we can never say why we are owed it.

But if you can beat this problem, the problem of ontology inevitably already imposing limits on what is thinkable as a universal value, I'd be very interested in knowing how you would hope to pull that off. And I think you'll find that the person who can do it deserves more than 10K. They'd have solved what has been called "the major problem in modern moral philosophy." A Rosetta Stone for universal ethical legitimation is (to mix a metaphor) the Holy Grail of current moral theory...and really, that's what you'd need.

Sorry to sound like a "wet blanket," but I think that's where this goes.
Veritas Aequitas
Posts: 15722
Joined: Wed Jul 11, 2012 4:41 am

Re: 10k Philosophy challenge

Post by Veritas Aequitas »

Daniel McKay wrote: Tue Jul 23, 2024 3:39 am Putting aside possible worlds, I'm not sure where this human-based framework and system of reality comes in. The aim of ethics should be to get at the truth of morality, and there are almost certainly other moral agents somewhere in the world (where "world" is understood as a complete spatio-temporal manifold, rather than a planet).
On the point of a human-based framework and system of reality, we have to drill down into the philosophical realist vs antirealist dichotomy.

The philosophical realist who is a moral realist grounds his moral realism on the basis that there is an absolutely mind-independent external world that exists regardless of whether there are humans or not.
Those who oppose philosophical realism [antirealists], in my case via empirical moral realism, believe that reality is contingent upon a human-based framework and system [FS]; e.g. scientific reality is based on the scientific FS, i.e. the scientific method and all the other conditions of that FS.
This FS covers the emergence and realization of reality [a priori] and the subsequent perception, cognition and description of that realized reality [a posteriori].

If morality is to be extended to the whole of the spatio-temporal manifold, then those moral agents must be fundamentally human-like.
I believe, on the basis of Occam's razor, it is most effective to confine moral agents to humans only.
Human weakness, or any other weakness for that matter, might be a problem with getting from the world we live in to the world we ought to live in, but that is a separate issue from trying to figure out what that world should look like. If we can't do the latter, then it will be very hard to get any real work done on the former, weakness or not.
I agree with the above idea, i.e. 'the world we ought to live in'.
But not based on consequentialism as defined here,
https://en.wikipedia.org/wiki/Consequentialism
where it would appear the world we ought to live in is one where good outweighs evil.

My approach is different,
the world we ought to live in must have ZERO evil, which will spontaneously manifest its related goods.
But this is merely an ideal, not to be enforced as a rule or law on individuals.
From there, we strive to optimally close the moral gap between the world we are living in and that ideal oughtness, using the most effective and optimal approaches [multi-disciplinary], which will include consideration of consequences but not consequentialism per se.
Immanuel Can wrote: You've got a real problem, Daniel. I don't think it's solvable.
IC believes it is not solvable by humans but only by a God, i.e.
Divine Command Theory
https://iep.utm.edu/divine-command-theory/
The omnipotent, omniscient, omnipresent and omni-whatever will resolve all moral issues.
accelafine
Posts: 5042
Joined: Sat Nov 04, 2023 10:16 pm

Re: 10k Philosophy challenge

Post by accelafine »

Seems a bit like offering a 10K prize for the answer to 'what is the meaning of life?' :lol:
Daniel McKay
Posts: 96
Joined: Thu Oct 29, 2015 2:48 am

Re: 10k Philosophy challenge

Post by Daniel McKay »

Immanuel Can wrote: Tue Jul 23, 2024 4:24 am
Daniel McKay wrote: Mon Jul 22, 2024 7:10 am I would say that I am trying to tackle the overall problem at its roots, but that humanity is far too small a lens. Morality must apply to all moral agents, actual and possible, in all possible worlds.
You've got a real problem, Daniel. I don't think it's solvable.

Consequentialisms (you will know there are various versions of it) are all dogged by several concerns. One is the ultimately-arbitrary nature of the implied ranking of values, as I'm sure you know. Another is the arbitrary nature of the choosing of any particular "consequence" as the priority: that move needs justification, of course; and you're asking that it be universal and applicable to all agents in all possible worlds. That's no small thing. Another such problem is the difficulty of weighing in things like motives or intentions, or habits, character and virtues, for that matter: things by which all moral practice routinely operates assumptively, and which it seems most moral agents do seem to intuit as not irrelevant to any moral calculation, but which consequentialisms all tend to avoid or obscure on the way to...consequences. There are others, but you'll probably already know them. They've been abundantly rehearsed.

But I think your biggest problem is going to be this: that ontological beliefs are logically prior to moral ones, and either make moral judgments rationally possible or impossible to rationalize fully. "Freedom" is indeed felt by many to be a value; but it's very hard to say why it ought to be, especially if some sort of Materialism, Physicalism, or some similar kind of belief that logically inevitably implies something like Determinism, Subjectivism or Amoralism is weighed as the foundational ontological belief in play. "Freedom" then becomes an imaginary, emotive, solipsistic or entirely personal kind of preference, and is not assertable as an "ought" or a universal basis of anything, I think. We may all like it: but we can never say why we are owed it.

But if you can beat this problem, the problem of ontology inevitably already imposing limits on what is thinkable as a universal value, I'd be very interested in knowing how you would hope to pull that off. And I think you'll find that the person who can do it deserves more than 10K. They'd have solved what has been called "the major problem in modern moral philosophy." A Rosetta Stone for universal ethical legitimation is (to mix a metaphor) the Holy Grail of current moral theory...and really, that's what you'd need.

Sorry to sound like a "wet blanket," but I think that's where this goes.

You've mentioned a lot, so I'll try to focus on what I think your core points are, and you can tell me if I'm wrong.

First, physicalism doesn't imply determinism. Many people would also say that determinism doesn't conflict with moral realism, or indeed with freedom, though I think they're just wrong so you won't hear any such argument from me. Just because a minimal physical duplicate of our world would be a duplicate simpliciter of our world doesn't mean that world would be deterministic (depending on your definition of a world and whether that duplicate needs to be identical into the future). We can discuss that if you like, but it's rather a separate issue.

Second, on the scope of the project. I don't think the issue is determining a universal moral value. I think I've essentially done that (or at least found the best candidate for universal moral value). That being said, yes, I agree that I am aiming very high here, that I am essentially trying to solve ethics, and that what I am asking for is worth more than $10,000. But that's what I have to offer, so that's what's on the table.

There's no sense apologizing for sharing your views, especially when I asked for them.
Daniel McKay
Posts: 96
Joined: Thu Oct 29, 2015 2:48 am

Re: 10k Philosophy challenge

Post by Daniel McKay »

accelafine wrote: Tue Jul 23, 2024 5:09 am Seems a bit like offering a 10K prize for the answer to 'what is the meaning of life?' :lol:
No, this is much harder. Meaning of life is first year stuff.
accelafine
Posts: 5042
Joined: Sat Nov 04, 2023 10:16 pm

Re: 10k Philosophy challenge

Post by accelafine »

Can't tell if that's a joke or not. Do nyoo zinders have a sense of humour?
Daniel McKay
Posts: 96
Joined: Thu Oct 29, 2015 2:48 am

Re: 10k Philosophy challenge

Post by Daniel McKay »

accelafine wrote: Tue Jul 23, 2024 6:40 am Can't tell if that's a joke or not. Do nyoo zinders have a sense of humour?
It's the best kind of joke, the kind that is also true.
Daniel McKay
Posts: 96
Joined: Thu Oct 29, 2015 2:48 am

Re: 10k Philosophy challenge

Post by Daniel McKay »

I think we might end up talking at cross-purposes if we can't agree on whether reality would exist without us, but we can try.

Why would moral agents need to be human-like for moral truths to be true throughout the entire spatio-temporal manifold (I was actually saying that they must be true within all possible spatio-temporal manifolds, but whatever)? I'm not sure how you've come to that conclusion.

I'm not sure why you think a consequentialist would be more okay with having bad stuff in the world. But, for what it's worth, a "morally perfect" world according to FC is one where everyone is able to understand and freely make all choices that belong to them. And yes, I think we should enforce this to the extent that I don't think we should allow people to take the choices that belong to others away from them (such as through theft, violence, etc.) except when it protects a greater amount of freedom.
accelafine
Posts: 5042
Joined: Sat Nov 04, 2023 10:16 pm

Re: 10k Philosophy challenge

Post by accelafine »

Daniel McKay wrote: Tue Jul 23, 2024 6:44 am
accelafine wrote: Tue Jul 23, 2024 6:40 am Can't tell if that's a joke or not. Do nyoo zinders have a sense of humour?
It's the best kind of joke, the kind that is also true.
Prove it.
Daniel McKay
Posts: 96
Joined: Thu Oct 29, 2015 2:48 am

Re: 10k Philosophy challenge

Post by Daniel McKay »

accelafine wrote: Prove it.

I mean, I cover the meaning of life in first and second year courses when my lectures run short. But, since I imagine you don't want to look at a powerpoint, I might quote from a book if you don't mind. It's written a little bit cheekily, but it saves me having to write out a whole explanation. Spoiler warning for anyone planning on reading The Black Swan Killer (though who am I kidding, nobody was planning on reading that). Also, be warned, I can give you the right answer to the question "What is the meaning of life", but that doesn't mean you'll find it satisfying.

"Before we go any further, I think we need to address the meaning of life. It’s come up a few times so far, and I just used it to save myself and Maria from a bunch of nihilists, so it’s possible that a few of you are wondering what exactly the meaning of life is. If you already know, feel free to skip ahead just this once and rejoin us when we get to the diner for pancakes.
Right, so, the meaning of life. The way I explained it to the nihilists might not be the most helpful to you because they were moronic nihilists and you, I assume, aren’t. So, for your benefit let’s imagine a little scenario.
Let’s imagine the great philosopher Plato is out for a stroll one day, and up comes another ancient Greek; let’s call him “Bob.” Bob, as well as having a rather strange name for an ancient Greek, has a problem. He doesn’t know the meaning of life. So, he asks Plato.
“Plato,” he says. “What is the meaning of life?”
Now, what’s wrong with Bob’s question here? Well, the first is the use of the word “meaning.” It’s ambiguous, and ambiguity is the enemy of good philosophy. Bob might be asking what the word “life” means. So first, let’s help Bob clarify his language here. It seems what he really means is this:
“Plato, what is the purpose of life?”
That’s better, though still not quite right. After all, it would be very strange were all life, from a sea anemone to an elephant to a kudzu to you and me, to be possessed of the same purpose. Plato’s pupil Aristotle would certainly have a lot to say about that. He thought that plants, other animals, and humans all had fundamentally different souls. In any event, Bob is likely not that concerned with the meaning of a sea anemone’s life. In fact, he’s probably not that concerned with the meaning of yours. What Bob really wants to ask, it seems, is this:
“Plato, what is the purpose of my life?”
Now we’re getting somewhere. When people talk about the “meaning of life,” this is what they’re really getting at. However, there’s still something wrong with this question, and to get at what that is, we need to consider what we mean by “purpose.”
So, let’s think about purpose. What is the purpose of a chair? Anyone? It’s not a trick question. The purpose of a chair is to be sat on. But, a more interesting question is why that is the purpose of a chair. The obvious answer is that that is what they are made for, but that isn’t the whole truth of it. When a person buys a chair, they buy it so they have something to sit on, or for others to sit on. If a person were to buy a chair with some other purpose in mind, say they wanted to put their television on it, or they planned to turn it into a piece of modern art, or they wanted to use it to make part of a blanket fort, then is the chair’s purpose still to be sat on? It gets a bit murky. Who decides the purpose of a chair, the one who makes it or the one who puts it to use? Suffice it to say, whoever is making the decision, it is someone with a mind. The chair doesn’t have a purpose beyond that which people decide it has. Purpose is imbued into things by people. Or, more accurately, by persons, but that’s a distinction for another time.
So, purpose is imbued by people, and Bob is already a person. In the case of the chair, there is perhaps some disagreement to be had about who is most suited to decide what purpose a chair has. But Bob is in a unique position to both know himself and set himself toward goals. Unlike the chair, Bob is most definitely the person best suited to decide what purpose Bob has.
All of which means, Bob is doing something wrong when he asks his question.
When Bob asks Plato what the purpose of his own life is, he is asking the wrong person.
Hopefully that answers your questions regarding the meaning of life. If it doesn’t, go back and read it a couple more times."
FlashDangerpants
Posts: 8815
Joined: Mon Jan 04, 2016 11:54 pm

Re: 10k Philosophy challenge

Post by FlashDangerpants »

henry quirk wrote: Fri Jul 19, 2024 3:44 pm
FlashDangerpants wrote: Fri Jul 19, 2024 3:22 pm maybe Henry will be inspired
Oh, 6 grand (that's what it is in US bucks) would be nice (more to hide from the State), but I'm not bright enough to do the job justice. I downloaded the pdf, though, so mebbe I'll have comments to post here.
The usual suspects are attacking our poor visitor with their own obsessions as per expectation. You are actually the closest person to his views in key respects so you alone can work your own hobby horse here without being instantly off topic. I therefore draw your attention to this para from the doc linked in the OP (emphasis added)...
However, the measure of value here is not merely the ability to understand and make choices. It is specifically the ability to understand and make choices that belong to the person in question. This is for a few reasons, such as irresolvable conflict occurring if everyone has an equal claim to all choices. For example, if your choice to keep your car in your driveway were morally equivalent to my choice to steal your car, we would quickly have an irresolvable conflict. This would also heavily conflict with moral intuitions. So, the kind of freedom that freedom consequentialism is concerned with is specifically freedom over those choices that belong to the person in question. The choices that belong to a person, or the choices a person has a “right” to make if you prefer, are the ones over those things that they own, specifically their mind, body, and property. Owning one’s own mind and body is fairly easy to establish because this is essentially just self-ownership, especially in the case of the mind. Owning property is a bit harder to establish, and it is a bit of an odd concept generally, but certainly if we can own property, then it is something we ought to have freedom over, so we will assume that we can own property and include it on the list of things we can have freedom over.
On the one hand, you might have something to offer the guy: the fix for a different problem from the one he's asking about in the OP, but a weakness of theory that he acknowledges, to do with the justification of property, which is something I believe you have fixed?

On the other, you have skin in this game. That exact brand of three-way action, or something an outsider couldn't recognise as different from it, is sort of your deal too, is it not? I feel like you place that specific type of freedom in pole position and derive all other moral goods from it.

You seem world-weary about discussing your moral theory with antirealists who sort of poke fun at you, like me and Harbal, and you've expressed a belief that antirealists can't understand you. This guy is a realist; he seems to have some common ground with you. He's equipped to understand you, right?
FlashDangerpants
Posts: 8815
Joined: Mon Jan 04, 2016 11:54 pm

Re: 10k Philosophy challenge

Post by FlashDangerpants »

Daniel McKay wrote: Tue Jul 23, 2024 9:00 am Spoiler warning for anyone planning on reading The Black Swan Killer (though who am I kidding, nobody was planning on reading that).
Available at all good book stores?
accelafine
Posts: 5042
Joined: Sat Nov 04, 2023 10:16 pm

Re: 10k Philosophy challenge

Post by accelafine »

So he's here to promote his book. How tacky and disingenuous.
accelafine
Posts: 5042
Joined: Sat Nov 04, 2023 10:16 pm

Re: 10k Philosophy challenge

Post by accelafine »

He's exceptionally arrogant and unlikeable anyway. IMO.