10k Philosophy challenge

Should you think about your duty, or about the consequences of your actions? Or should you concentrate on becoming a good person?

User avatar
FlashDangerpants
Posts: 8815
Joined: Mon Jan 04, 2016 11:54 pm

Re: 10k Philosophy challenge

Post by FlashDangerpants »

accelafine wrote: Tue Jul 23, 2024 9:36 am He's exceptionally arrogant and unlikeable anyway. IMO.
He answered your meaning-of-life question well enough, didn't he? Why do you need to inflict your perpetual shadow of joylessness on him?
User avatar
accelafine
Posts: 5042
Joined: Sat Nov 04, 2023 10:16 pm

Re: 10k Philosophy challenge

Post by accelafine »

FlashDangerpants wrote: Tue Jul 23, 2024 9:43 am
accelafine wrote: Tue Jul 23, 2024 9:36 am He's exceptionally arrogant and unlikeable anyway. IMO.
He answered your meaning-of-life question well enough, didn't he? Why do you need to inflict your perpetual shadow of joylessness on him?
Are you saying he has discovered 'the meaning of life'? And yes, I am very good at reading people.
Daniel McKay
Posts: 96
Joined: Thu Oct 29, 2015 2:48 am

Re: 10k Philosophy challenge

Post by Daniel McKay »

FlashDangerpants wrote: Tue Jul 23, 2024 9:10 am
henry quirk wrote: Fri Jul 19, 2024 3:44 pm
FlashDangerpants wrote: Fri Jul 19, 2024 3:22 pm maybe Henry will be inspired
Oh, 6 grand (that's what it is in US bucks) would be nice (more to hide from the State), but I'm not bright enough to do the job justice. I downloaded the pdf, though, so mebbe I'll have comments to post here.
The usual suspects are, as expected, attacking our poor visitor with their own obsessions. You are actually the closest person to his views in key respects, so you alone can work your own hobby horse here without being instantly off topic. I therefore draw your attention to this para from the doc linked in the OP (emphasis added)...
However, the measure of value here is not merely the ability to understand and make choices. It is specifically the ability to understand and make choices that belong to the person in question. This is for a few reasons, such as irresolvable conflict occurring if everyone has an equal claim to all choices. For example, if your choice to keep your car in your driveway were morally equivalent to my choice to steal your car, we would quickly have an irresolvable conflict. This would also heavily conflict with moral intuitions. So, the kind of freedom that freedom consequentialism is concerned with is specifically freedom over those choices that belong to the person in question. The choices that belong to a person, or the choices a person has a “right” to make if you prefer, are the ones over those things that they own, specifically their mind, body, and property. Owning one’s own mind and body is fairly easy to establish because this is essentially just self-ownership, especially in the case of the mind. Owning property is a bit harder to establish, and it is a bit of an odd concept generally, but certainly if we can own property, then it is something we ought to have freedom over, so we will assume that we can own property and include it on the list of things we can have freedom over.
On the one hand, you might have something to offer the guy: a fix not for the problem he's asking about in the OP, but for a weakness of the theory that he acknowledges, to do with the justification of property, which is something I believe you have fixed?

On the other, you have skin in this game. That exact brand of three-way action, or something an outsider couldn't recognise as different from it, is sort of your deal too, is it not? I feel like you place that specific type of freedom in pole position and derive all other moral goods from it.

You seem world-weary about discussing your moral theory with antirealists who sort of poke fun at you, like me and Harbal, and you've expressed a belief that antirealists can't understand you. This guy is a realist, and he seems to have some common ground with you. He's equipped to understand you, right?
I am perhaps a bit weary when it comes to antirealists. I don't think they can't understand me, but I do think the chances of us having enough common ground to have a productive conversation are lower than with moral realists. Also, while I have a fair bit of time for moral error theory, I think other forms of moral antirealism are just sort of misunderstanding the topic.

That being said, if this person has a good justification for coming to own previously unowned property, I'd certainly like to hear it. It's not my primary concern and not one I'd be willing to part with a significant portion of my own money for, but that would be a worthwhile contribution to philosophy generally if it holds up.
Daniel McKay
Posts: 96
Joined: Thu Oct 29, 2015 2:48 am

Re: 10k Philosophy challenge

Post by Daniel McKay »

FlashDangerpants wrote: Tue Jul 23, 2024 9:13 am
Daniel McKay wrote: Tue Jul 23, 2024 9:00 am Spoiler warning for anyone planning on reading The Black Swan Killer (though who am I kidding, nobody was planning on reading that).
Available at all good book stores?
Ha, thanks for the plug.
Daniel McKay
Posts: 96
Joined: Thu Oct 29, 2015 2:48 am

Re: 10k Philosophy challenge

Post by Daniel McKay »

accelafine wrote: Tue Jul 23, 2024 9:34 am So he's here to promote his book. How tacky and disingenuous.
I am here for answers to a specific question. You were the one who got us onto the meaning of life. I'm not above engaging in a bit of self-promotion if it comes up (great book everyone, available on Amazon). But if that was my goal, I would have brought it up earlier. I just have a few of these posts to reply to, and I didn't particularly feel like writing out a whole thing when I had a pre-written one to hand.
Daniel McKay
Posts: 96
Joined: Thu Oct 29, 2015 2:48 am

Re: 10k Philosophy challenge

Post by Daniel McKay »

accelafine wrote: Tue Jul 23, 2024 9:36 am He's exceptionally arrogant and unlikeable anyway. IMO.
I mean, even if that is at least half true, it's not really relevant, is it?
User avatar
accelafine
Posts: 5042
Joined: Sat Nov 04, 2023 10:16 pm

Re: 10k Philosophy challenge

Post by accelafine »

.
Last edited by accelafine on Tue Jul 23, 2024 4:37 pm, edited 1 time in total.
User avatar
henry quirk
Posts: 16379
Joined: Fri May 09, 2008 8:07 pm
Location: 🔥AMERICA🔥
Contact:

Re: 10k Philosophy challenge

Post by henry quirk »

FlashDangerpants wrote: Tue Jul 23, 2024 9:10 am You seem world-weary
Just older, with far fewer days ahead than behind. Not wastin' what's left on dumbasses and evil folks.
User avatar
henry quirk
Posts: 16379
Joined: Fri May 09, 2008 8:07 pm
Location: 🔥AMERICA🔥
Contact:

Re: 10k Philosophy challenge

Post by henry quirk »

Daniel McKay wrote: Tue Jul 23, 2024 11:37 am a good justification for coming to own previously unowned property
Seems to me if I invest myself (my time, energy, money, etc.) in unclaimed property, morally it's mine.
Will Bouwman
Posts: 1334
Joined: Sun Sep 04, 2022 2:17 pm

Re: 10k Philosophy challenge

Post by Will Bouwman »

Not so fast, Henry Quirk! Here ya go; ethics in a nutshell: https://www.youtube.com/watch?v=JmoG4JY_T58
As it happens, I'll be in New Zealand in a couple of weeks. I'll have me ten grand in a brown paper bag. Thanks.
User avatar
henry quirk
Posts: 16379
Joined: Fri May 09, 2008 8:07 pm
Location: 🔥AMERICA🔥
Contact:

Re: 10k Philosophy challenge

Post by henry quirk »

Will Bouwman wrote: Tue Jul 23, 2024 1:49 pm I'll have me ten grand in a brown paper bag. Thanks.
👍
Peter Holmes
Posts: 4134
Joined: Tue Jul 18, 2017 3:53 pm

Re: 10k Philosophy challenge

Post by Peter Holmes »

FlashDangerpants wrote: Fri Jul 19, 2024 3:22 pm I for one have never heard of 'freedom consequentialism' before, so I went to have a look for some.
Currently the only literature on the subject appears to be Daniel's PhD thesis, available here

For a bit of what it's about, here's a para from near the top of that...
My goal in constructing my normative theory is to determine how free, rational agents ought to be or act, where “ought” is understood in an objective and universal sense, assuming that this question has an answer. Because this is my goal, I put free, rational agency, or “personhood” at the heart of my theory. The measure of value I use is the ability of persons to understand and make their own choices, as being able to do these two things in conjunction is the defining characteristic of free, rational agency. In this way, my theory shares the advantage deontology has of closely connecting moral value with moral agency. Because my theory is also consequentialist, it shares the advantages utilitarianism has of not having to draw a strong distinction between action and inaction, and of being able to make clear recommendations in most circumstances by analysing the consequences of the various courses of action available. So, to the extent that one thinks that morality should describe the way all persons ought to be or act, or that one finds both consequentialism and a close connection between moral value and moral agency appealing, one has a reason to be interested in my theory.
Thanks for identifying this explanation, Flash. I haven't kept up with the whole discussion so far, so apologies if what follows has been covered already.

The problem is: 'how to weigh freedom over different things within the normative theory of freedom consequentialism'.

1 Surely, any theory that asserts 'oughts' is normative: 'establishing, relating to, or deriving from a standard or norm, especially of behaviour'. So what work is 'normative' doing in 'the normative theory of freedom consequentialism'? Not sure about this - I just don't know.

2 If we begin with deontology and consequentialism - which is just the deontological can kicked down the road - then surely we're already committed to moral realism or objectivism. Whether the moral rightness or wrongness of either an action or its consequences is inherent or intrinsic is the issue. If it isn't, then that's the end of deontology and consequentialism. And good riddance, as far as I'm concerned.

3 Non-moral premises can't entail moral conclusions. And the falsifiability of the premise is irrelevant. 'The highest good is X (e.g. freedom or free rational agency)', or 'The purpose of life/human life/my life is Y' - and so on - can never entail an ought, such as: 'therefore, the highest good ought to be the free exercise of rational agency'.

And it's because morality can't be objective that normative ethical and moral theories fail. What constitutes 'the good life' and 'good behaviour' can only ever be a matter of opinion.
User avatar
Immanuel Can
Posts: 27604
Joined: Wed Sep 25, 2013 4:42 pm

Re: 10k Philosophy challenge

Post by Immanuel Can »

Daniel McKay wrote: Tue Jul 23, 2024 6:33 am
Immanuel Can wrote: Tue Jul 23, 2024 4:24 am
Daniel McKay wrote: Mon Jul 22, 2024 7:10 am I would say that I am trying to tackle the overall problem at its roots, but that humanity is far too small a lens. Morality must apply to all moral agents, actual and possible, in all possible worlds.
You've got a real problem, Daniel. I don't think it's solvable.

Consequentialisms (you will know there are various versions) are all dogged by several concerns. One is the ultimately arbitrary nature of the implied ranking of values, as I'm sure you know. Another is the arbitrary nature of choosing any particular "consequence" as the priority: that move needs justification, of course; and you're asking that it be universal and applicable to all agents in all possible worlds. That's no small thing. Another such problem is the difficulty of weighing in things like motives or intentions, or habits, character and virtues, for that matter: things by which all moral practice routinely operates assumptively, and which most moral agents seem to intuit as not irrelevant to any moral calculation, but which consequentialisms all tend to avoid or obscure on the way to...consequences. There are others, but you'll probably already know them. They've been abundantly rehearsed.

But I think your biggest problem is going to be this: that ontological beliefs are logically prior to moral ones, and either make moral judgments rationally possible or impossible to rationalize fully. "Freedom" is indeed felt by many to be a value; but it's very hard to say why it ought to be, especially if some sort of Materialism, Physicalism, or some similar kind of belief that logically implies something like Determinism, Subjectivism or Amoralism is weighed as the foundational ontological belief in play. "Freedom" then becomes an imaginary, emotive, solipsistic or entirely personal kind of preference, and is not assertable as an "ought" or a universal basis of anything, I think. We may all like it: but we can never say why we are owed it.

But if you can beat this problem, the problem of ontology inevitably already imposing limits on what is thinkable as a universal value, I'd be very interested in knowing how you would hope to pull that off. And I think you'll find that the person who can do it deserves more than 10K. They'd have solved what has been called "the major problem in modern moral philosophy." A Rosetta Stone for universal ethical legitimation is (to mix a metaphor) the Holy Grail of current moral theory...and really, that's what you'd need.

Sorry to sound like a "wet blanket," but I think that's where this goes.

You've mentioned a lot, so I'll try to focus on what I think your core points are, and you can tell me if I'm wrong.
I'm not trying to be difficult, of course; I'm trying to help you at least map the problem a bit, so you are aware of what your thesis examiners might raise against the project you've chosen. If that helps you prepare, I'm good with that.
First, physicalism doesn't imply determinism. Many people would also say that determinism doesn't conflict with moral realism, or indeed with freedom, though I think they're just wrong, so you won't hear any such argument from me. Just because a minimal physical duplicate of our world would be a duplicate simpliciter of our world doesn't mean that world would be deterministic (depending on your definition of a world and whether that duplicate needs to be identical into the future). We can discuss that if you like, but it's rather a separate issue.
But it's not really separate, if you think carefully about it, and if you realize that ontology determines the range of ethical possibilities that can be sustained. In any deterministic universe, we cannot think that moral concern is anything but an epiphenomenon of some kind, an odd kind of happening or "seeming" that people have, for no readily explicable reason. Our supposition would have to be that it corresponds to a "freedom" that is never genuine, to a situation in which no genuine "choice" ever takes place, and thus no moral judgment can possibly be realistically applied. The buck seems to stop with, "Whatever is, just is."

So even though determinism might look like a separate concern, it would render illusory any moral philosophy at all. And thus the answer to the problem of Freedom Consequentialism you pose would become moot: since no moral judgments have any referent within reality, other than the purely contingent fact that people just happen to hold such delusions, not only all Consequentialisms but every other moral framework has to go out the window in the service of a realistic and honest Determinism.

But I'm sure you can see the logic of that. No choice, no morality. It's that simple.

There's an additional problem you might wish to consider, as well. Is it not likely that "freedom" is a condition of an outcome, not in itself a targetable consequence or outcome? To put this another way, is not "freedom" always "the freedom TO...X or Y?" "Being free," it seems to me, begs the whole question of what one is supposed to be "free" to do. Freedom bears only an instrumental relationship to other potential 'goods,' does it not? So one man may be freed from jail to live a better life, and another be freed to kill again: but in such cases, the mere having of the "freedom" doesn't seem to tell us anything about the moral outcome, does it?
Second, on the scope of the project. I don't think the issue is determining a universal moral value. I think I've essentially done that (or at least found the best candidate for universal moral value).
I'd be really interested in how that case would be made.
That being said, yes, I agree that I am aiming very high here, that I am essentially trying to solve ethics, and that what I am asking for is worth more than $10,000. But that's what I have to offer, so that's what's on the table.
I'm not accusing you of being penny-pinching, Dan. I was merely emphasizing the proper magnitude and scope of the required task. And I'd love any amount of free money as much as anybody else would. But I have to say that I think your money's safe: that is, unless something very revolutionary has suddenly been uncovered in your research or somebody else's. This has proved a very intractable problem for a lot of quite brilliant minds, so far.

That being said, let the inquiry continue.
There's no sense apologizing for sharing your views, especially when I asked for them.
Oh, yes...fair enough...and no point either in resenting any criticisms that might just fortify your project against objections raised at your thesis examination. It all has to be taken as to-the-good, does it not?
User avatar
FlashDangerpants
Posts: 8815
Joined: Mon Jan 04, 2016 11:54 pm

Re: 10k Philosophy challenge

Post by FlashDangerpants »

Peter Holmes wrote: Tue Jul 23, 2024 2:09 pm And it's because morality can't be objective that normative ethical and moral theories fail. What constitutes 'the good life' and 'good behaviour' can only ever be a matter of opinion.
I suggest we think about it this way: you and I are antirealists within the discourse of ethics, but we don't take from that the lesson that we therefore get to opt out of morality as a way of life; we just recognise some sort of limit to moral reason such that there is no absolute truth of the matter. But at the end of the day, we still use argument to recommend our preferred solutions to moral quandaries. The fundamental difference is that we are limited by our ability to explain our own point of view in a way that persuades others, rather than by our ability to perceive or measure grand truths in a way that renders them publicly mistaken.

IMO, put that way, the difference between moral realists and antirealists is not really all that stark, and so eventually the task of carrying out moral discourse according to some sort of rationale must carry on. So then the question of how we would choose to go about describing what rationale we have for holding a moral view of some sort comes into play. So we can probably put aside the traditional bun fight over facts vs values and just look at the ways in which this theory might be a good way of doing that thing.

When it comes to explaining the generic problems with unspecified versions of consequentialism, I suggest we let IC carry that burden, if only for the fun of watching a retired English teacher try to condescend to what I suspect may be an associate lecturer in philosophy (certainly he would have defended that PhD thesis some time around 2017) on the subject of ontology.

The theory in question, although I haven't taken time to read the details yet, appears to be aimed at fixing those generic flaws. If it does so, then it would seem to be useful on a day-to-day basis, even for dirty scoundrels like you and me, and even if we cannot defend it against Mackie or Hume. TBH, it will be at least a few days before I have an informed opinion about anything though. I am a slow reader and my boss thinks I am doing work, so I should probably do some.
Peter Holmes
Posts: 4134
Joined: Tue Jul 18, 2017 3:53 pm

Re: 10k Philosophy challenge

Post by Peter Holmes »

FlashDangerpants wrote: Tue Jul 23, 2024 3:05 pm
Peter Holmes wrote: Tue Jul 23, 2024 2:09 pm And it's because morality can't be objective that normative ethical and moral theories fail. What constitutes 'the good life' and 'good behaviour' can only ever be a matter of opinion.
I suggest we think about it this way: you and I are antirealists within the discourse of ethics, but we don't take from that the lesson that we therefore get to opt out of morality as a way of life; we just recognise some sort of limit to moral reason such that there is no absolute truth of the matter. But at the end of the day, we still use argument to recommend our preferred solutions to moral quandaries. The fundamental difference is that we are limited by our ability to explain our own point of view in a way that persuades others, rather than by our ability to perceive or measure grand truths in a way that renders them publicly mistaken.

IMO, put that way, the difference between moral realists and antirealists is not really all that stark, and so eventually the task of carrying out moral discourse according to some sort of rationale must carry on. So then the question of how we would choose to go about describing what rationale we have for holding a moral view of some sort comes into play. So we can probably put aside the traditional bun fight over facts vs values and just look at the ways in which this theory might be a good way of doing that thing.

When it comes to explaining the generic problems with unspecified versions of consequentialism, I suggest we let IC carry that burden, if only for the fun of watching a retired English teacher try to condescend to what I suspect may be an associate lecturer in philosophy (certainly he would have defended that PhD thesis some time around 2017) on the subject of ontology.

The theory in question, although I haven't taken time to read the details yet, appears to be aimed at fixing those generic flaws. If it does so, then it would seem to be useful on a day-to-day basis, even for dirty scoundrels like you and me, and even if we cannot defend it against Mackie or Hume. TBH, it will be at least a few days before I have an informed opinion about anything though. I am a slow reader and my boss thinks I am doing work, so I should probably do some.
Okay. :) I hear you. I just don't think, to echo McKay elsewhere, s/he's asking the right question - which is why it's unanswerable. And I think the opening of the passage you quoted says it all.

'My goal in constructing my normative theory is to determine how free, rational agents ought to be or act, where “ought” is understood in an objective and universal sense, assuming that this question has an answer. Because this is my goal, I put free, rational agency, or “personhood” at the heart of my theory. The measure of value I use is the ability of persons to understand and make their own choices, as being able to do these two things in conjunction is the defining characteristic of free, rational agency.'

'Assuming that this question has an answer' means we're already lost. 'How ought free, rational agents to act?' Come to that, 'How ought unfree, irrational agents to act?' And would there be a difference between the moral rightness or wrongness of the action, depending on the freedom and rationality of the agent? And would that be on a scale with degrees of freedom and rationality?

I'm sorry, but I think this is so mired in delusions about the nature and function of abstract nouns, such as 'freedom' and 'rationality' - how on earth can we 'weigh freedom over different things'? - that it's down the rabbit hole where PhDs get written - as it has to be.

I think it's better to be bloody, bold and resolute. For example: I think human beings ought to be free and rational. Couldn't agree more. Only I'd add 'good'.

Philosopher: 'Okay. But what are freedom, rationality and goodness? And it's no use showing me how we use those words and their cognates in different contexts. That won't get us to the heart of the matter. What we need is theories of freedom, rationality and goodness. Write me a thesis on each of them.'

PS Does a free, rational agent know how she ought to act with regard to abortion? What use would a normative theory of freedom consequentialism be?