10k Philosophy challenge

Should you think about your duty, or about the consequences of your actions? Or should you concentrate on becoming a good person?


Daniel McKay
Posts: 96
Joined: Thu Oct 29, 2015 2:48 am

10k Philosophy challenge

Post by Daniel McKay »

This challenge is now closed. I have a solution I am satisfied with. While the solution was developed by me, it was partially inspired by someone's suggestion and they received 5% of the money as an inspiration fee.
Last edited by Daniel McKay on Tue Feb 04, 2025 5:10 am, edited 1 time in total.
FlashDangerpants
Posts: 8815
Joined: Mon Jan 04, 2016 11:54 pm

Re: 10k Philosophy challenge

Post by FlashDangerpants »

I for one have never heard of 'freedom consequentialism' before, so I went to have a look for some.
Currently the only literature on the subject appears to be Daniel's PhD thesis, available here

For a bit of what it's about, here's a para from near the top of that...
My goal in constructing my normative theory is to determine how free, rational agents ought to be or act,
where “ought” is understood in an objective and universal sense, assuming that this question has an
answer. Because this is my goal, I put free, rational agency, or “personhood” at the heart of my theory.
The measure of value I use is the ability of persons to understand and make their own choices, as being
able to do these two things in conjunction is the defining characteristic of free, rational agency. In this
way, my theory shares the advantage deontology has of closely connecting moral value with moral agency.
Because my theory is also consequentialist, it shares the advantages utilitarianism has of not having to
draw a strong distinction between action and inaction, and of being able to make clear recommendations
in most circumstances by analysing the consequences of the various courses of action available. So, to the
extent that one thinks that morality should describe the way all persons ought to be or act, or that one
finds both consequentialism and a close connection between moral value and moral agency appealing,
one has a reason to be interested in my theory
I'm very unlikely to challenge for the 10K prize myself, being the incorrigible moral skeptic that I am, but the idea at least seems interesting enough to try and stop VA from drowning the thread before it can get going as he does to everything else in this wretched garden of choke-weeds. So I will at least work through the paper and see if I can find a few snippets here and there to discuss.

Who knows, maybe Henry will be inspired to get him that 10K in cold hard Southern Hemispherical cash? He likes a bit of freedom.
henry quirk
Posts: 16379
Joined: Fri May 09, 2008 8:07 pm
Location: 🔥AMERICA🔥

Re: 10k Philosophy challenge

Post by henry quirk »

FlashDangerpants wrote: Fri Jul 19, 2024 3:22 pm maybe Henry will be inspired
Oh, 6 grand (that's what it is in US bucks) would be nice (more to hide from the State), but I'm not bright enough to do the job justice. I downloaded the pdf, though, so mebbe I'll have comments to post here.

Mebbe Harbal, the author of the stunningly complex I don't know why I care, I just know that I do care theory of morality, might have a go at it.
Harbal
Posts: 10729
Joined: Thu Jun 20, 2013 10:03 pm
Location: Yorkshire

Re: 10k Philosophy challenge

Post by Harbal »

henry quirk wrote: Fri Jul 19, 2024 3:44 pm
FlashDangerpants wrote: Fri Jul 19, 2024 3:22 pm maybe Henry will be inspired
Oh, 6 grand (that's what it is in US bucks) would be nice (more to hide from the State), but I'm not bright enough to do the job justice. I downloaded the pdf, though, so mebbe I'll have comments to post here.

Mebbe Harbal, the author of the stunningly complex I don't know why I care, I just know that I do care theory of morality, might have a go at it.
Mebbe not, henry. 🙂
promethean75
Posts: 7113
Joined: Sun Nov 04, 2018 10:29 pm

Re: 10k Philosophy challenge

Post by promethean75 »

Before I go any further, do u have Zelle?
Daniel McKay

Re: 10k Philosophy challenge

Post by Daniel McKay »

No, I have PayPal. But so long as it is easy to set up and doesn't cost a bunch extra, I don't mind setting up other payment systems.
accelafine
Posts: 5042
Joined: Sat Nov 04, 2023 10:16 pm

Re: 10k Philosophy challenge

Post by accelafine »

Daniel McKay wrote: Mon Jul 22, 2024 1:50 am No, I have PayPal. But so long as it is easy to set up and doesn't cost a bunch extra, I don't mind setting up other payment systems.
But no one knows what you are asking them to find a solution 'to'? Is it a problem that is empirically provable? If not, then how are you going to judge whether or not it is a 'solution'? What purpose does it serve? Is finding a 'solution' going to be useful to anyone? If so, then how?
Daniel McKay

Re: 10k Philosophy challenge

Post by Daniel McKay »

There is a hyperlink to a document explaining the problem in the initial post, on the word "here".

That should explain in detail the problem I'm looking for a solution to, but I'm available if you have further questions.
accelafine

Re: 10k Philosophy challenge

Post by accelafine »

Daniel McKay wrote: Mon Jul 22, 2024 3:42 am There is a hyperlink to a document explaining the problem in the initial post, on the word "here".

That should explain in detail the problem I'm looking for a solution to, but I'm available if you have further questions.
I saw it. I have just asked 'further questions' here. Could you answer them please?
Veritas Aequitas
Posts: 15722
Joined: Wed Jul 11, 2012 4:41 am

Re: 10k Philosophy challenge

Post by Veritas Aequitas »

I find that consequentialism, however it is considered, is incomplete and thus has holes.

Re the Challenge, here's from AI:
The author acknowledges that while the Preferential Order Method (POM) seems like a good approach to weighing freedoms, it struggles with conflicting preferences. Here are some possible solutions to address this:

Refine POM for Group Decisions: Perhaps POM could be extended to handle groups. Instead of individual preferences, the method could consider the average or median preference within a group, or some other way to aggregate preferences. This might not be perfect (ignoring outliers), but it could be a compromise solution.

Thresholds and Trade-offs: One could set a threshold for preference strength. If a certain percentage of individuals have a strong preference for one freedom over another, that could be decisive. This approach allows for some trade-offs between freedoms while respecting strong preferences.

Prioritization based on Vulnerability: Maybe prioritize freedoms based on vulnerability. If a person's life or well-being is significantly more at stake in one choice compared to another, that freedom could be considered more important.

Meta-preferences: The author mentions ideal preferences being irrelevant. However, what about asking individuals about their preferences for how to handle conflicting preferences? This "meta-preference" approach could help guide decisions when individual preferences clash.

Hybrid Approach: Perhaps a combination of POM with another method could work. For example, use POM for individual preferences as a starting point, then consider additional factors like vulnerability or societal impact when conflicts arise.

Fairness Principles: Integrate fairness principles into the framework. This could involve considering factors like equality of opportunity or minimizing harm to the most vulnerable.

Public Discourse and Consensus: Maybe resolving conflicting preferences requires ongoing public discourse and building consensus on how to weigh freedoms. This approach emphasizes the importance of social dialogue in moral decision-making.

It's important to note that each solution has its own strengths and weaknesses. The best approach might depend on the specific situation and the values being considered.

Ultimately, the author is calling for further development of freedom consequentialism to address this challenge of conflicting preferences. There might not be a single perfect solution, but exploring these possibilities can help refine the theory and make it more practical.
If the above provides reasonable leads to your problem, do I get not the full reward but a percentage of it for showing you an effective pathway to the solutions you need?
Last edited by Veritas Aequitas on Mon Jul 22, 2024 4:48 am, edited 1 time in total.
Daniel McKay

Re: 10k Philosophy challenge

Post by Daniel McKay »

Sorry, I thought from your questions that perhaps you hadn't read it yet. Let me answer them one by one:

what you are asking them to find a solution 'to'? - I am asking for a solution to the problem of how to weigh freedom over different things that allows for freedom consequentialism to be action guiding, and is in line with all of the other restrictions detailed in the primer and the post.

Is it a problem that is empirically provable? - No, almost certainly not.

If not, then how are you going to judge whether or not it is a 'solution'? - The same way we judge whether anything is a solution in philosophy, through reasoned analysis.

What purpose does it serve? - The purpose of developing the correct normative theory and solving the project of ethics... or at least getting a step closer.

Is finding a 'solution' going to be useful to anyone? If so, then how? - It will be useful in that it would provide correct answers to moral questions (or at least, closer to correct answers than we have now).
Daniel McKay

Re: 10k Philosophy challenge

Post by Daniel McKay »

Yeah, AI is good at stringing words together, and it looks like you have crafted your prompt pretty well, but I don't think it's going to cut it here. In this case, the problems with those solutions are:

* Preferences themselves aren't important, so the strength of preferences isn't important. (Meta-preferences is kind of a fun idea, but again, preferences aren't important, so people's preferences for how preferences should be taken into account are themselves not important, and they face the same issue of lack of consensus.)
* Vulnerability, wellbeing, and fairness are also all not important.
* Social dialogue would only be helpful if you could convince everyone to value all of their freedoms the same amount, which seems unlikely. Also, that it would make our calculations easier is not really a reason why they should do so.
Veritas Aequitas

Re: 10k Philosophy challenge

Post by Veritas Aequitas »

Daniel McKay wrote: Mon Jul 22, 2024 4:55 am Yeah, AI is good at stringing words together, and it looks like you have crafted your prompt pretty well, but I don't think it's going to cut it here. In this case, the problems with those solutions are:

* Preferences themselves aren't important, so the strength of preferences isn't important. (Meta-preferences is kind of a fun idea, but again, preferences aren't important, so people's preferences for how preferences should be taken into account are themselves not important, and they face the same issue of lack of consensus.)
* Vulnerability, wellbeing, and fairness are also all not important.
* Social dialogue would only be helpful if you could convince everyone to value all of their freedoms the same amount, which seems unlikely. Also, that it would make our calculations easier is not really a reason why they should do so.
As stated by AI;
It's important to note that each solution has its own strengths and weaknesses. The best approach might depend on the specific situation and the values being considered.
There might not be a single perfect solution, but exploring these possibilities can help refine the theory and make it more practical.
Actually, you can grind through the above problems with AI, and AI can give you loads of alternatives and directions if you prompt effectively.

However, as I claimed, relying on consequentialism itself [Freedom-based or otherwise] is a lost cause with reference to moral progress for humanity.
It is ridiculous to rely on individual measurements of preferences in dealing with morality in general or in making moral judgments. Note the ridiculous proposal of the Hedonistic Calculus by Bentham.
It is true that at present people have to make moral judgments, but this is firefighting; to take effective action, one has to tackle the overall moral problem at its roots.
Daniel McKay

Re: 10k Philosophy challenge

Post by Daniel McKay »

I would say that I am trying to tackle the overall problem at its roots, but that humanity is far too small a lens. Morality must apply to all moral agents, actual and possible, in all possible worlds.
Veritas Aequitas

Re: 10k Philosophy challenge

Post by Veritas Aequitas »

Daniel McKay wrote: Mon Jul 22, 2024 7:10 am I would say that I am trying to tackle the overall problem at its roots, but that humanity is far too small a lens. Morality must apply to all moral agents, actual and possible, in all possible worlds.
Possible worlds are only speculation.
There is only reality, i.e. all-there-is, contingent upon a human-based framework and system of reality. There is no possible 'all-there-is'; it just "is".

When you focus on consequences, you are focusing on 'putting out the fire' and not on the root cause of the fire; in the case of consequentialism, that means striving for more good than evil.
As such, consequentialism itself [Freedom-based or otherwise] is at best a firefighting approach with a never-ending problem.

This never-ending problem [weakness] is inherent in your thesis, thus it is unlikely anyone will provide you with solutions that meet your expectations.