Re: 10k Philosophy challenge
Posted: Tue Jul 23, 2024 9:43 am
He answered your meaning of life question well enough, didn't he? Why do you need to inflict your perpetual shadow of joylessness on him?
FlashDangerpants wrote: ↑Tue Jul 23, 2024 9:43 am
He answered your meaning of life question well enough, didn't he? Why do you need to inflict your perpetual shadow of joylessness on him?

Are you saying he has discovered 'the meaning of life'? And yes, I am very good at reading people.
FlashDangerpants wrote: ↑Tue Jul 23, 2024 9:10 am
henry quirk wrote: ↑Fri Jul 19, 2024 3:44 pm
Oh, 6 grand (that's what it is in US bucks) would be nice (more to hide from the State), but I'm not bright enough to do the job justice. I downloaded the pdf, though, so mebbe I'll have comments to post here.

The usual suspects are attacking our poor visitor with their own obsessions, as per expectation. You are actually the closest person to his views in key respects, so you alone can work your own hobby horse here without being instantly off topic. I therefore draw your attention to this para from the doc linked in the OP (emphasis added)...

However, the measure of value here is not merely the ability to understand and make choices. It is specifically the ability to understand and make choices that belong to the person in question. This is for a few reasons, such as irresolvable conflict occurring if everyone has an equal claim to all choices. For example, if your choice to keep your car in your driveway were morally equivalent to my choice to steal your car, we would quickly have an irresolvable conflict. This would also heavily conflict with moral intuitions. So, the kind of freedom that freedom consequentialism is concerned with is specifically freedom over those choices that belong to the person in question. The choices that belong to a person, or the choices a person has a “right” to make if you prefer, are the ones over those things that they own, specifically their mind, body, and property. Owning one’s own mind and body is fairly easy to establish because this is essentially just self-ownership, especially in the case of the mind. Owning property is a bit harder to establish, and it is a bit of an odd concept generally, but certainly if we can own property, then it is something we ought to have freedom over, so we will assume that we can own property and include it on the list of things we can have freedom over.

On the one hand, you might have something to offer the guy: the fix for a different problem to the one he's asking about in the OP, but a weakness of the theory that he acknowledges, to do with the justification of property, which is something I believe you have fixed?

On the other, you have skin in this game. That exact brand of three-way action, or something an outsider couldn't recognise as different from it, is sort of your deal too, is it not? I feel like you place that specific type of freedom in pole position and derive all other moral goods from it.

You seem world-weary about discussing your moral theory with antirealists who sort of poke fun at you, like me and Harbal, and you've expressed a belief that antirealists can't understand you. This guy is a realist, and he seems to have some common ground with you. He's equipped to understand you, right?

I am perhaps a bit weary when it comes to antirealists. I don't think they can't understand me, but I do think the chances of us having enough common ground to have a productive conversation are lower than with moral realists. Also, while I have a fair bit of time for moral error theory, I think other forms of moral antirealism are just sort of misunderstanding the topic.
Daniel McKay wrote: ↑Tue Jul 23, 2024 9:00 am
Spoiler warning for anyone planning on reading The Black Swan Killer (though who am I kidding, nobody was planning on reading that).

FlashDangerpants wrote: ↑Tue Jul 23, 2024 9:13 am
Available at all good book stores?

Ha, thanks for the plug.
accelafine wrote: ↑Tue Jul 23, 2024 9:34 am
So he's here to promote his book. How tacky and disingenuous.

I am here for answers to a specific question. You were the one who got us onto the meaning of life. I'm not above engaging in a bit of self-promotion if it comes up (great book, everyone; available on Amazon), but if that were my goal, I would have brought it up earlier. I just have a few of these posts to reply to, and I didn't particularly feel like writing out a whole thing when I had a pre-written one to hand.
I mean, even if that is at least half true, it's not really relevant, is it?
Just older, with far fewer days ahead than behind. Not wastin' what's left on dumbasses and evil folks.
Daniel McKay wrote: ↑Tue Jul 23, 2024 11:37 am
a good justification for coming to own previously unowned property

Seems to me if I invest myself (my time, energy, money, etc.) in unclaimed property, morally it's mine.
FlashDangerpants wrote: ↑Fri Jul 19, 2024 3:22 pm
I for one have never heard of 'freedom consequentialism' before, so I went to have a look for some.

Currently the only literature on the subject appears to be Daniel's PhD thesis, available here

For a bit of what it's about, here's a para from near the top of that...

My goal in constructing my normative theory is to determine how free, rational agents ought to be or act, where “ought” is understood in an objective and universal sense, assuming that this question has an answer. Because this is my goal, I put free, rational agency, or “personhood”, at the heart of my theory. The measure of value I use is the ability of persons to understand and make their own choices, as being able to do these two things in conjunction is the defining characteristic of free, rational agency. In this way, my theory shares the advantage deontology has of closely connecting moral value with moral agency. Because my theory is also consequentialist, it shares the advantages utilitarianism has of not having to draw a strong distinction between action and inaction, and of being able to make clear recommendations in most circumstances by analysing the consequences of the various courses of action available. So, to the extent that one thinks that morality should describe the way all persons ought to be or act, or that one finds both consequentialism and a close connection between moral value and moral agency appealing, one has a reason to be interested in my theory.

Thanks for identifying this explanation, Flash. I haven't kept up with the whole discussion so far, so apologies if my response has been covered already.
Daniel McKay wrote: ↑Tue Jul 23, 2024 6:33 am
Immanuel Can wrote: ↑Tue Jul 23, 2024 4:24 am
You've got a real problem, Daniel. I don't think it's solvable.
Daniel McKay wrote: ↑Mon Jul 22, 2024 7:10 am
I would say that I am trying to tackle the overall problem at its roots, but that humanity is far too small a lens. Morality must apply to all moral agents, actual and possible, in all possible worlds.

I'm not trying to be difficult, of course; I'm trying to help you at least map the problem a bit, so you are aware of what your thesis examiners might raise against the project you've chosen. If that helps you prepare, I'm good with that.
Consequentialisms (you will know there are various versions) are all dogged by several concerns. One is the ultimately arbitrary nature of the implied ranking of values, as I'm sure you know. Another is the arbitrary nature of choosing any particular "consequence" as the priority: that move needs justification, of course, and you're asking that it be universal and applicable to all agents in all possible worlds. That's no small thing. Another such problem is the difficulty of weighing in things like motives or intentions, or habits, character and virtues, for that matter: things by which all moral practice routinely operates assumptively, and which most moral agents do seem to intuit as not irrelevant to any moral calculation, but which consequentialisms all tend to avoid or obscure on the way to...consequences. There are others, but you'll probably already know them. They've been abundantly rehearsed.
But I think your biggest problem is going to be this: that ontological beliefs are logically prior to moral ones, and either make moral judgments rationally possible or impossible to rationalize fully. "Freedom" is indeed felt by many to be a value; but it's very hard to say why it ought to be, especially if some sort of Materialism, Physicalism, or some similar kind of belief that logically implies something like Determinism, Subjectivism or Amoralism is weighed as the foundational ontological belief in play. "Freedom" then becomes an imaginary, emotive, solipsistic or entirely personal kind of preference, and is not assertable as an "ought" or a universal basis of anything, I think. We may all like it: but we can never say why we are owed it.
But if you can beat this problem, the problem of ontology inevitably already imposing limits on what is thinkable as a universal value, I'd be very interested in knowing how you would hope to pull that off. And I think you'll find that the person who can do it deserves more than 10K. They'd have solved what has been called "the major problem in modern moral philosophy." A Rosetta Stone for universal ethical legitimation is (to mix a metaphor) the Holy Grail of current moral theory...and really, that's what you'd need.
Sorry to sound like a "wet blanket," but I think that's where this goes.
You've mentioned a lot, so I'll try to focus on what I think your core points are, and you can tell me if I'm wrong.
Daniel McKay wrote:
First, physicalism doesn't imply determinism. Many people would also say that determinism doesn't conflict with moral realism, or indeed with freedom, though I think they're just wrong, so you won't hear any such argument from me. Just because a minimal physical duplicate of our world would be a duplicate simpliciter of our world doesn't mean that world would be deterministic (depending on your definition of a world and whether that duplicate needs to be identical into the future). We can discuss that if you like, but it's rather a separate issue.

But it's not really separate, if you think carefully about it, and if you realize that ontology determines the range of ethical possibilities that can be sustained. In any deterministic universe, we cannot think that moral concern is anything but an epiphenomenon of some kind, an odd kind of happening or "seeming" that people have, for no readily explicable reason. Our supposition would have to be that it corresponds to a "freedom" that is never genuine, to a situation in which no genuine "choice" ever takes place, and thus no moral judgment can possibly be realistically applied. The buck seems to stop with, "Whatever is, just is."
Daniel McKay wrote:
Second, on the scope of the project. I don't think the issue is determining a universal moral value. I think I've essentially done that (or at least found the best candidate for universal moral value).

I'd be really interested in how that case would be made.
Daniel McKay wrote:
That being said, yes, I agree that I am aiming very high here, that I am essentially trying to solve ethics, and that what I am asking for is worth more than $10,000. But that's what I have to offer, so that's what's on the table.

I'm not accusing you of being penny-pinching, Dan. I was merely emphasizing the proper magnitude and scope of the required task. And I'd love, as much as anybody else would, any amount of free money. But I have to say that I think your money's safe: that is, unless something very revolutionary has suddenly been uncovered in your research or somebody else's. This has proved a very intractable problem for a lot of quite brilliant minds, so far.
Daniel McKay wrote:
There's no sense apologizing for sharing your views, especially when I asked for them.

Oh, yes...fair enough...and no point either in resenting criticisms that might just fortify your project against objections raised at your thesis examination. It all has to be taken as to-the-good, does it not?
Peter Holmes wrote: ↑Tue Jul 23, 2024 2:09 pm
And it's because morality can't be objective that normative ethical and moral theories fail. What constitutes 'the good life' and 'good behaviour' can only ever be a matter of opinion.

I suggest we think about it this way: you and I are antirealists within the discourse of ethics, but we don't take from that the lesson that we therefore get to opt out of morality as a way of life; we just recognise some sort of limit to moral reason, such that there is no absolute truth of the matter. But at the end of the day, we still use argument to recommend our preferred solutions to moral quandaries. The fundamental difference is that we are limited by our ability to explain our own point of view in a way that persuades others, rather than by our ability to perceive or measure grand truths in a way that renders them publicly mistaken.
FlashDangerpants wrote: ↑Tue Jul 23, 2024 3:05 pm
Peter Holmes wrote: ↑Tue Jul 23, 2024 2:09 pm
And it's because morality can't be objective that normative ethical and moral theories fail. What constitutes 'the good life' and 'good behaviour' can only ever be a matter of opinion.
I suggest we think about it this way: you and I are antirealists within the discourse of ethics, but we don't take from that the lesson that we therefore get to opt out of morality as a way of life; we just recognise some sort of limit to moral reason, such that there is no absolute truth of the matter. But at the end of the day, we still use argument to recommend our preferred solutions to moral quandaries. The fundamental difference is that we are limited by our ability to explain our own point of view in a way that persuades others, rather than by our ability to perceive or measure grand truths in a way that renders them publicly mistaken.

Okay.
IMO, put that way, the difference between moral realist and antirealist is not really all that stark, and so eventually the task of carrying out moral discourse according to some sort of rationale must carry on. Questions of how we would choose to describe what rationale we have for holding a moral view of some sort then come into play. So we can probably put aside the traditional bun fight over facts vs values and just look at the ways in which this theory might be a good way of doing that thing.
When it comes to explaining the generic problems with unspecified versions of consequentialism, I suggest we let IC carry that burden, if only for the fun of watching a retired English teacher try to condescend to what I suspect may be an associate lecturer in philosophy (certainly he would have defended that PhD thesis some time around 2017) on the subject of ontology.
The theory in question, although I haven't taken time to read the details yet, appears to be aimed at fixing those generic flaws. If it does so, then it would seem to be useful on a day-to-day basis, even for dirty scoundrels like you and me, and even if we cannot defend it against Mackie or Hume. TBH, it will be at least a few days before I have an informed opinion about anything, though. I am a slow reader, and my boss thinks I am doing work, so I should probably do some.