[b]Pooperscootian Utilitarianism Part 3: further explanations[/b]
Posted: Thu Jun 08, 2023 10:55 am
Please read the threads on Pooperscootian Utilitarianism parts 1 and 2 before reading this.
What is Pooperscootian Utilitarianism?:
Answer: Pooperscootian Utilitarianism is a universal brand of utilitarianism that strives to maximize the pleasure and minimize the suffering for all feeling life in all universes for all of time.
Why should I adopt Pooperscootian Utilitarianism as my moral code, and/or under what contexts should I do so?:
Answer: So far as I can tell you already want it to be your moral code. You just may not realize it yet. See the thread on Pooperscootian Utilitarianism part 1 for more details.
You claim that I already want Pooperscootian Utilitarianism to be my moral code. What madness is this? That makes absolutely no sense. I don’t even know what it is yet. Things can’t be my moral code if I’ve not heard of them before now.
Answer: I think that belief stems from an inaccurate definition of what it means for us to want things. Think about a dog who habitually chases cars. The dog’s owners choose not to let it run around outside freely precisely because it chases cars. The dog may believe it is protecting its territory and pack from a ferocious monster. It has no way of knowing that it’s, instead, pointlessly irritating its owners while decreasing its own freedom, since its car-chasing is the very thing forcing its owners to keep it from running around outside freely. In this way, I’d argue, it’s the dog’s human owners who understand what the dog’s free will desires better than the dog does. So…what the dog really wants, I’d argue, although it doesn’t realize it, is likely to NOT chase cars…in order to gain more freedom…because with that freedom it could chase lots of other things – rabbits – bugs – moles – frisbees – etc – and get to be outside far more often unsupervised. In this way, what our “free will” wants is quite often determined more by the decisions we’d make if we had more knowledge than by the decisions we think we want to make with our current, limited knowledge.
The above is still not a full explanation of what the bloody hell you’re talking about dumbass.
Answer: So, the idea is that there are different degrees of free will, and the more informed you are about the decision you’re about to make, the more free will you have. It’s also possible for people who are not you, but who have more knowledge than you do about the decision you’re about to make, to understand what your free will desires better than you do yourself.
Still not a full explanation
Answer: Think about what it feels like to be some animal – let’s say a giraffe. You could have been born a giraffe…or a serial-killing human…or Nelson Mandela. Regardless of which of these figures you might have been, were you them, all your actions and thoughts would have been pre-destined by your genetics and environment, and so you would have made the exact same decisions Nelson Mandela, or the serial killer, or the giraffe did. For this reason, nobody and nothing can really be accurately described as “deserving” anything better or worse than anyone or anything else. With that in mind, and given that you could have been born as any organism in the universe, I’d argue the most enlightened way to think about morality is to imagine how you’d behave towards an organism if you would experience everything they experience. Therefore, I’d argue that what we all really want, though we typically don’t realize it, is to behave as if we could feel the ramifications of all our actions and inaction on every life form. With that in mind, I’d say it would make a lot of sense to have a moral code that strives to maximize pleasure and minimize suffering for all life in all universes throughout all of time…because I don’t know how you’d better achieve that goal than through such a system.
Why is this system that strives to assist all life a utilitarian system, necessarily? Why not have some other moral code instead?
Answer: I don’t see how it could be possible to assist a life form in any relevant way that doesn’t involve increasing their pleasure or reducing their suffering…and I definitely don’t see how it would be possible to assist an organism in any way they care about that doesn’t involve increasing their pleasure or reducing their suffering. That’s what utilitarianism strives to accomplish…typically just for a defined group, but there’s nothing about utilitarianism that prevents it from striving to do so universally.
Does Pooperscootian Utilitarianism strive to determine objective morality?
Answer: I don’t know what objective morality actually is, and I’m not sure anyone else does either. It might. It definitely has the goal of creating a moral code that everyone in the universe wants to exist though, based on my previous description of “want” and “free will” in this thread, and thread #1.
How does Pooperscootian Utilitarianism deal with the various obstacles utilitarianism inevitably involves? Is it act-based or rule-based or both? Does it seek to maximize pleasure and minimize suffering through Total Consequentialism, or Average Consequentialism, or neither? How does it deal with future events? Have you considered the problems of concepts like “utility monsters,” such as single beings who might gain infinite joy from other beings’ suffering?
Answer: I am less confident about what the answers to those questions are than the answers to the other questions I’ve described.
I suspect Pooperscootian Utilitarianism will be primarily act-based, but rule-based in circumstances in which following act-based utilitarianism would lead society to more suffering than some variant of rule-based utilitarianism would. The best example I can think of concerns sex crimes: someone following act-based utilitarianism might note that sex crimes against a sleeping person that don’t damage the sleeping person, and that won’t get caught, would lead to no suffering for the victim and only pleasure for the perpetrator, and therefore argue that act-based utilitarianism would encourage such crimes. However, if a moral code encourages engaging in such crimes, I’d suspect that would greatly frighten society, and therefore cause more suffering than a rule forbidding those types of crimes…so, I’d argue that in that type of instance we’d use rule-based utilitarianism rather than act-based utilitarianism, with a rule forbidding those types of crimes. I see rule-based utilitarianism as a last resort, though, and act-based utilitarianism as generally preferable, because act-based utilitarianism focuses on specific actions and will therefore, I’d argue, be much more nuanced and accurate in terms of maximizing pleasure and minimizing suffering, in general.
Regarding utility monsters: A “utility monster” would be a hypothetical creature that gains vastly more pleasure, or suffers vastly more, or both, from actions than most other life forms would. So, an example might be a being who gains 1 billion units of pleasure from eating me, whereas I’d only suffer ten units from being eaten by it – worth, we’ll say, negative ten units of pleasure for me. I don’t perceive “utility monsters” as major challenges for any form of utilitarianism…but rather just concepts to be accepted, though they tend to be at odds with our instincts. The solution to the brand of monster I’ve described, for example, in this isolated circumstance, would seem to be for me simply to be eaten by the utility monster. It’s of course worth keeping in mind that, if I’m eaten, or if one being is given massively more resources than all other beings, that could lead to all sorts of massive reductions in pleasure as side-effects…through jealousy and ensuing revolts against the creature, or fear/hatred, etc. All of that must be taken into account for accuracy too…and so there will often be reasons to treat these “utility monsters” not so differently from anyone else, even with their greater emotional needs. Another solution, if their emotional needs result in massive suffering for them, could simply be destroying them to avoid having black holes for resources, and perhaps for their own benefit as well. Whether that is right of course depends on the context and situation…like everything else.
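The unit arithmetic in the utility-monster example above can be sketched in a few lines of Python. To be clear, the side-effect penalty below is an invented, hypothetical number purely for illustration; only the 1-billion gain and the ten-unit loss come from the example itself:

```python
# Toy utility-monster arithmetic. The monster gains 1 billion units of
# pleasure from eating me; I lose the equivalent of ten units of pleasure.
# The side-effect penalty (jealousy, fear, revolts) is a hypothetical
# number, not something stated in the original example.
monster_gain = 1_000_000_000
my_loss = -10
side_effects = -2_000_000_000  # hypothetical societal fear/revolt cost

naive_net = monster_gain + my_loss   # the isolated circumstance
full_net = naive_net + side_effects  # with side-effects counted

print(naive_net)  # 999999990 -> the isolated case favors feeding the monster
print(full_net)   # -1000000010 -> counting side-effects, it does not
```

The point of the sketch is just that the verdict flips once side-effects are included, which is why there will often be reasons to treat utility monsters much like anyone else.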
-------------------------------------------------------------------------------------------------------
Does Pooperscootian Utilitarianism use Total Consequentialism or Average Consequentialism or something else to contemplate how to maximize pleasure and minimize suffering?
Total Consequentialism = that which is best is determined by which decisions increase the total amount of pleasure (with suffering functioning as negative pleasure) the most.
Average Consequentialism = that which is best is determined by which decisions increase the average pleasure (with suffering functioning as negative pleasure) the most.
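As a rough sketch of how the two definitions above can disagree, here’s a toy Python comparison. The pleasure numbers are invented for illustration; each outcome is just the list of pleasure levels the affected beings would end up with:

```python
# Toy comparison of Total vs Average Consequentialism. Each candidate
# decision maps to the pleasure levels (suffering would be negative)
# of everyone who would exist afterwards. Numbers are invented.
def total(pleasures):
    return sum(pleasures)

def average(pleasures):
    return sum(pleasures) / len(pleasures)

outcomes = {
    "status quo":             [5, 5, 2],
    "create a new life":      [5, 5, 2, 2],  # raises total, lowers average
    "remove the least happy": [5, 5],        # raises average, lowers total
}

best_by_total = max(outcomes, key=lambda k: total(outcomes[k]))
best_by_average = max(outcomes, key=lambda k: average(outcomes[k]))
print(best_by_total)    # -> create a new life
print(best_by_average)  # -> remove the least happy
```

Total Consequentialism favors adding a mildly happy extra being, while Average Consequentialism favors removing the least happy one – which is exactly the pair of failure modes the Ron and Sue scenarios illustrate.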
Answer:
So, I’d argue that Total Consequentialism and Average Consequentialism each have their obstacles to overcome. Some of the obstacles with the basic forms of each can be seen in the following scenarios:
Ron is a Total Consequentialist and a utilitarian. Ron’s goal is to increase the total amount of pleasure produced by the people of Earth. Through Ron’s innovativeness, Ron is able to produce more than enough resources for an infinite number of humans to live happy lives. Ron concludes that the more people there are, the more people exist to produce happiness. Therefore Ron begins encouraging women to have as many babies as possible. There exists a problem for Ron though. Most women don’t want to have more than a few children – usually well under 10. Some women don’t want to have any children at all. So, what’s Ron to do?
And here’s the main problem I see for Ron, one that refutes the idea that Total Consequentialism in its basic form could be rational: The goal of any sensible moral code is to help life forms that do or will exist. There is no point in trying to help organisms that will never exist. Therefore it makes no sense under any circumstances to produce new life forms, unless those life forms assist previously existing life forms, or life forms that will exist in the future, more than they harm them.
So, in other words, we should never be thinking “I should have more babies because that will produce more pleasure for society through the babies’ pleasure.” That’s because the would-be babies benefit in no way from coming into existence. You can simply not have the baby you’re considering having, and then that would-have-been-person automatically never exists, and is not factored into our equations.
The question of whether or not to have children should be totally rooted in whether or not those children would improve society more than harm it. Only after it’s clear that they would is the question of whether or not their lives would be ideal to them factored into the question of whether or not to produce new life.
So, I’d definitely argue that Total Consequentialism in its basic form is unusable...because it argues for endlessly producing new life forms regardless of whether they assist previously existing beings or beings who will exist in the future. What about Average Consequentialism?
----------------------------------------------------------------------------------------------
Sue is an Average Consequentialist and a Utilitarian. Sue’s goal is to engage in actions that increase the average happiness level of the people on Earth. Sue decides to go about doing this by engaging in a series of light-speed mass euthanizations that all occur before anyone notices. First, she zips around the world euthanizing the least happy people. Then she zips around the world euthanizing the next least happy people. Then, a billionth of a second later, she zips around the world to euthanize the next least happy people after that. She engages in this process because each time she euthanizes the least happy group of people to increase the average happiness of the people of Earth, a new least happy group emerges that she can euthanize to increase average happiness further. Now, ordinarily we could argue that such deaths would cause massive terror and suffering and perhaps couldn’t be worth the costs…but in Sue’s case she’s moving so quickly that nobody can react in time to feel any alarm. Nobody has time to realize they’ve lost their relatives, or their own life, or that anything is out of the ordinary.
I’d actually argue that Sue’s method is less clearly flawed than Ron’s. I could see benefit in all life merging into some kind of singular, euphoric organism. The ultimate survivors, or survivor, would likely fear being euthanized themselves, and fear having all their life’s efforts torn out from under them by an unexpected death…but you might be able to deal with those dilemmas by giving them personality changes of some kind.
So, I’d say with Sue’s method the end result would presumably be some kind of singular, perpetually euphoric organism…or the extinction of all life. Death is not inherently negative, and life cannot be better than death, because once a life form dies there is no life form left to benefit from being alive…so death, in all circumstances, is just as ideal as eternal euphoria, provided no pain or suffering is required to experience the death.
Here's the major flaw I see with Sue’s Method though: Whether it’s worth it for me to live or not is in no way dependent on how happy people on the other side of the planet are.
Sue’s method in its basic state would only allow for the destruction of a life if that life leads an existence inferior to something else’s. However, it would seemingly not allow for destroying even the best-off existing lives if all existing life were leading a horrifyingly miserable existence and the most ideal lives were only a little less horrifyingly miserable. Sue’s Method in its basic state would not allow for the death of people living down in hell in equal amounts of misery, if everyone were down in hell, but would encourage the death of cheerful medium-wage workers with loving families, who merely aren’t as happy as most billionaires.
So, here’s my proposed solution, given that both Average and Total Consequentialism appear flawed in their basic forms (keeping in mind that I could be wrong about this being the best system): I think, unless someone thinks up a better idea, we should use a modified version of Total Consequentialism that is modified in the following way:
Your goal, as with the basic form of Total Consequentialism, is to increase the total pleasure produced. Don’t factor in the pleasure that would have been produced by organisms that will never exist – only organisms that do or will exist.
So, how does the above deal with organisms that might exist? The same way it deals with any potential future pleasure increase or decrease. We’d calculate the pleasure or suffering that would be produced by their actions and experiences over their life, and I’d argue we should multiply that by the probability we believe it has of coming to pass. So, if we believe there is a 25% chance of a person being born, we’d multiply the pleasure or suffering they’d add to the world through their experiences and actions by 0.25.
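The probability-weighting described above amounts to a simple expected-value calculation. A minimal Python sketch, with both numbers invented for illustration:

```python
# Expected-value weighting of a possible future person, as described above.
# Both numbers are invented for illustration.
lifetime_utility = 400.0  # net pleasure minus suffering over a whole life
p_exists = 0.25           # estimated chance this person is ever born

expected_utility = lifetime_utility * p_exists
print(expected_utility)  # -> 100.0
```

The same weighting applies to any uncertain future pleasure or suffering, not just possible births.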
------------------------------------------------------------------------------------------
I do think a modified version of Average Consequentialism still has its place as well though…so long as you avoid the flaws of Average Consequentialism in its basic form. Here’s how it would work:
I’d argue, in any circumstance, it is equally valid to perceive life forms as individuals, as it is to perceive all life as merely sensory appendages of the same super organism.
The modified system of Total Consequentialism I’ve described is the system that treats life forms as individuals who simply have an interest in assisting each other.
The modified system of Average Consequentialism I’m about to describe actually treats all life more as if we’re sensory appendages of the same multi-universe-and-all-of-time-spanning super organism.
As I see it we’re both individuals and sensory appendages of a single super organism composed of all life…so the question of which path to choose is largely subjective, and which is better, I’d argue, will probably depend simply on which you believe to be neater or simpler to figure out.
That’s keeping in mind that Average and Total Consequentialism will, in many circumstances, work pretty similarly. The goal either way is more happiness and less suffering. It’s just that Average Consequentialism in its basic form carries more risk of irrationally destroying people, and Total Consequentialism in its basic form carries more risk of irrationally creating people.
So how might we modify Average Consequentialism to be rid of its flaws?
Well Average Consequentialism wants to devour things…so we essentially bop it on the nose with a metaphorical rolled up newspaper and tell it, “No! Bad Average Consequentialism! No devouring that utopian civilization just because they live slightly less euphoric lives than their neighboring utopian civilization!” and just don’t let it do that and instead have the following system for determining whether or not a life form’s existence is good for it:
Under both our Average and Total Consequentialist strategies, we base whether or not it is good for a being to be alive on whether its happiness level is above some kind of consistent threshold…one that would work in ways I’m less than confident about at this time. We will not be destroying civilizations just because their members lead less ideal lives than members of other civilizations.
Our modified Average Consequentialist system can lead to some conclusions our modified Total Consequentialist system will not though, and I find these conclusions especially interesting. I’ll describe how they can work in the following thought experiment:
Thought Experiment: Superhumans – some odd ramifications of our modified Average Consequentialist form of Pooperscootian Utilitarianism
In the following scenario, we will be focused purely on benefitting humanity, which will also be defined as humanity’s descendants, to simplify the thought experiment.
Imagine the people of Earth have an option before them. We have the option of building a species of superhumans. The superhumans would be smarter than we are, stronger, and would have better immune systems and fewer allergies. They wouldn’t physically age past young adulthood. They’d have better impulse control than us. They’d have drastically higher emotional intelligence. They’d be more likely to lead better lives than we do in every conceivable way due to these traits. However, if we create them, a wizard will immediately teleport all of them into another universe, and we’ll never be able to contact them again. This other universe is very similar to our own. The only difference is that Earth there is unpopulated by human beings…so far…but it has similar natural resources and life.
The downside of creating this new group of superhumans is that the cost would put the previously existing humanity in poverty for several generations.
Should we create the superhumans?
Well, if you’ve read some of the reasoning before this thought experiment, you’d have seen my statements about how there is no reason to create new life unless that life would assist previously existing life or life that will exist. With that in mind, it would certainly seem like the Superhumans would have no means of helping their parent civilization more than they harm them. Their creation might be something humanity likes the idea of…but I would suspect whatever meagre increased positive emotion stems from that would not make up for all the years of poverty and resulting suffering.
So, it would seem like the creation of the superhumans would harm existing life forms, while not assisting them or any life forms known that will exist…so according to our modified Total Consequentialist brand of thought that treats people more like individuals, we should not go this route.
Our modified Average Consequentialist option does not treat people like individuals though, but rather as sensory appendages of a single super organism.
So, our modified Average Consequentialist moral code says that all of humanity (and all life, but we’re focusing on humanity now) is one super organism, and the creation of the superhumans will have improved the average quality of life of humanity. It will have, in other words, assisted the super organism humanity comprises, and therefore there’s an argument that perhaps it should be done.
So, which is the better route? Creating the super humans or not?
I’d say that’s up to you. Maybe humanity could vote on it.
Note that, if you’re wondering why human life might be better thought of as sensory appendages of a super organism than as individuals, refer to Pooperscootian Utilitarianism thread part 2, which emphasizes the fuzziness of individuality.
Please read the threads on Pooperscootian Utilitarianism parts 1 and 2 before reading this.
What is Pooperscootian Utilitarianism?:
Answer: Pooperscootian Utilitarianism is a universal brand of utilitarianism that strives to maximize the pleasure and minimize the suffering for all feeling life in all universes for all of time.
Why should I adopt Pooperscootian Utilitarianism as my moral code, and/or under what contexts should I do so?:
Answer: So far as I can tell you already want it to be your moral code. You just may not realize it yet. See the thread on Pooperscootian Utilitarianism part 1 for more details.
You claim that I already want Pooperscootian Utilitarianism to be my moral code. What madness is this? That makes absolutely no sense. I don’t even know what it is yet. Things can’t be my moral code if I’ve not heard of them before now.
Answer: I think that belief stems from an inaccurate definition of what it means for us to want things. Think about a dog who habitually chases cars. In this scenario, the dog’s owners choose not to allow the dog to run around outside freely because it chases cars. The dog may believe that it is protecting its territory and pack from a ferocious monster. It has no way of knowing that it’s, instead, pointlessly irritating its owners, while decreasing its own freedom through forcing its owners to not let it run around outside freely, to prevent the dog from chasing cars. In this way, I’d argue it’s the human owners of the dog who truly understand what the dog’s free will desires more than the dog does. So…what the dog really wants, I’d argue, although it doesn’t realize it, is likely to NOT chase cars…in order to gain more freedom…because it could chase lots of other things through that freedom – rabbits – bugs – moles – frisbees – etc, and get to be outside far more often unsupervised. In this way what our “free will” wants is quite often determined more by what decisions we’d make if we had more knowledge than what decisions we might think we currently want to make, with our limited knowledge.
The above is still not a full explanation of what the bloody hell you’re talking about dumbass.
Answer: So, the idea is that there are different degrees of free will, and the more informed you are about the decision you’re about to make the more free will you have. It’s also possible, even for people who are not you, who have more knowledge than you do about the decision you’re about to make to understand what your free will desires more than you do.
Still not a full explanation
Answer: Think about what it feels like to be some animal – let’s say a giraffe. You could have been born a giraffe…or a serial-killing human…or Nelson Mandela. Regardless of which of these figures we’re considering you might having been, were you them all your actions and thoughts would have been pre-destined by your genetics and environment and so you would have made the exact same decisions Nelson Mandela, or the serial killer, or the giraffe did. For this reason, nobody and nothing can really be accurately described as “deserving” anything better or worse than anyone or anything else. With that in mind, and the fact that you could have been born into any organism in the universe, I’d argue the most enlightened way to think about morality is to imagine how you’d behave towards an organism if you would experience everything they experience. Therefore, I’d argue that what we all really want, but we typically don’t realize it, is to behave as if we could feel the ramifications of all our action and inaction on every life form. With that in mind, I’d say it would make a lot of sense to have a moral code that strives to maximize pleasure and minimize suffering for all life in all universes throughout all of time…because I don’t know how you’d better achieve your goal than through that system.
Why is this system that strives to assist all life a utilitarian system, necessarily? Why not have some other moral code instead?
Answer: I don’t see how it could be possible to assist a life form in any relevant way that doesn’t involve increasing their pleasure or reducing their suffering…and I definitely don’t see how it would be possible to assist an organism in any way they care about that doesn’t involve increasing their pleasure or reducing their suffering. That’s what utilitarianism strives to accomplish…typically just for a defined group, but there’s nothing about utilitarianism that prevents it from striving to do so universally.
Does Pooperscootian Utilitarianism strive to determine objective morality?
Answer: I don’t know what objective morality actually is, and I’m not sure anyone else does either. It might. It definitely has the goal of creating a moral code that everyone in the universe wants to exist though, based on my previous description of “want” and “free will” in this thread, and thread #1.
How does Pooperscootian Utilitarianism deal with various obstacles utilitarianism inevitably involves? Is it act-based or rule-based or both? Does it seek to maximize pleasure and minimize suffering through Total Consequentialism, or Average Consequentialism or neither? How does it deal with future events? Have you considered the problems of concepts like “utility monsters,” such as single beings who might gain infinite joy from other being’s suffering?
Answer: I am less confident about what the answers to those questions are than the answers to the other questions I’ve described.
I suspect Pooperscootian Utilitarianism will be primarily act-based utilitarianism, but rule-based in circumstances in which the consequences of following act-based utilitarianism for society would lead to more suffering than some variant of rule-based utilitarianism…the best example of which I can think of would be regarding sex crimes, in the following way: It would seem to me that someone who follows act-based utilitarianism might note that sex crimes against a sleeping person that don’t damage the sleeping person, and that won’t go caught, would lead to no suffering for the victim, and only pleasure for the perpetrator, and therefore you could argue that act-based utilitarianism would encourage such sex crimes. However, if a moral code encourages engaging in such sex crimes, I’d suspect that would greatly frighten society, and therefore cause more suffering than if we had a rule forbidding those types of sex crimes…so, I’d argue that in that type of instance we’d temporarily use rule-based utilitarianism rather than act-based utilitarianism, with the rule forbidding those types of sex crimes. I see rule-based utilitarianism as a last resort though, and act-based utilitarianism to generally be more preferable, because act-based utilitarianism focuses on specific actions and therefore will, I’d argue, be much more nuanced and accurate in terms of maximizing pleasure and minimizing suffering, in general.
Regarding utility monsters: A “utility monster” would be a hypothetical creature that gains vastly more pleasure or vastly more suffering, or both, from actions than most other life forms would. So, an example might be a being who gains 1 billion units of pleasure from eating me, whereas I’d only lose ten units of suffering from being eaten by it, that we’ll say are worth negative ten units of pleasure for me. I don’t perceive “utility monsters” as major challenges for any form of utilitarianism…but rather, just concepts to be accepted, through they tend to be at odd with our instincts. The solution to this brand of monster I’ve described, for example, in this isolated circumstance, would seem to be for me simply to be eaten by the utility monster. It’s of course worth keeping in mind that, if I’m eaten, or if there is one being being given massive amounts more resources than all other beings, that could lead to all sorts of massive reductions in pleasure as side-effects…through jealousy, and ensuing revolts against the creature, or fear/hatred etc. That all must be taken into account for accuracy too…and so there will often be reasons to treat these “utility monsters” not so differently than anyone else, even with their greater emotional needs. Another solution, if their emotional needs result in massive suffering for them, could simply be destroying them to avoid having black holes for resources, and perhaps for their own benefit as well. Whether that is true or not course depends on the context and situation…like everything else though.
-------------------------------------------------------------------------------------------------------
Does Pooperscootian Utilitarianism use Total Consequentialism or Average Consequentialism or something else to contemplate how to maximize pleasure and minimize suffering?
Total Consequentialism = that which is best is based of what decisions increase the total amount of pleasure (with suffering functioning as negative pleasure) the most.
Average Consequentialism = that which is best is based off what decisions increases the average pleasure (with suffering functioning as negative pleasure) the most.
Answer:
So, I’d argue that Total Consequentialism and Average Consequentialism each have their obstacles to overcome. Some of the obstacles with the basic forms of each can be seen in the following scenarios:
Ron is a Total Consequentialist and a utilitarian. Ron’s goal is to increase the total amount of pleasure produced by the people of Earth. Through Ron’s innovativeness, Ron is able to produce more than enough resources for an infinite number of humans to live happy lives. Ron concludes that the more people there are, the more people exist to produce happiness. Therefore Ron begins encouraging women to have as many babies as possible. There exists a problem for Ron though. Most women don’t want to have more than a few children – usually well under 10. Some women don’t want to have any children at all. So, what’s Ron to do?
And here’s the main problem that I see as existing for Ron that totally refutes the idea that Total Consequentialism in its basic form could be rational: The goal of any sensible moral code is to help life forms that do or will exist. There is no point in trying to help organisms that will never exist. Therefore it makes no sense under any circumstances to produce new life forms, unless those life forms assist previously existing life forms, or life forms that will exist in the future, more than they harm them.
So, in other words, we should never be thinking “I should have more babies because that will produce more pleasure for society through the babies’ pleasure.” That’s because the would-be babies benefit in no way from coming into existence. You can simply not have the baby you’re considering having, and then that would-have-been-person automatically never exists, and is not factored into our equations.
The question of whether or not to have children should be totally rooted in whether or not those children would improve society more than harm it. Only after it’s clear that they would is the question of whether or not their lives would be ideal to them factored into the question of whether or not to produce new life.
So, I’d definitely argue that Total Consequentialism in its basic form is unusable...because it argues for endlessly producing new life forms regardless of whether they assist previously existing beings or beings who will exist in the future. What about Average Consequentialism?
----------------------------------------------------------------------------------------------
Sue is an Average Consequentialist and a Utilitarian. Sue’s goal is to engage in actions that increase the average happiness level of the people on Earth. Sue decides to go about doing this by engaging in a series of light-speed mass euthanizations that all occur before anyone notices. First, she zips around the world euthanizing the least happy people. Then she zips around the world euthanizing the next least happy people. Then, a billionth of a second later, she zips around the world to euthanize the next least happy people. She engages in this process because each time she euthanizes the least happy group of people on Earth, a new least happy group emerges that she can euthanize to further increase average happiness. Now, ordinarily we could argue that such death would cause massive terror and suffering and perhaps couldn’t be worth the costs…but in Sue’s case she’s moving so quickly that nobody can react in enough time to feel any alarm from it. Nobody has time to realize they’ve lost their relatives, or their own life, or that anything is out of the ordinary.
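The arithmetic behind Sue’s procedure can be seen in a toy sketch (my own illustration, not from the post — the happiness scores and function name are made up): removing the least happy member of a population always raises the average happiness, even as the population and total happiness shrink toward a single survivor.

```python
# Toy illustration of Sue's procedure: repeatedly removing the least happy
# member always raises the average happiness score of whoever remains.
def cull_least_happy(happiness):
    """Return the population with its least happy member removed."""
    survivors = sorted(happiness)
    return survivors[1:]  # drop the minimum

population = [2, 5, 7, 9]  # arbitrary happiness scores for four people
while len(population) > 1:
    avg_before = sum(population) / len(population)
    population = cull_least_happy(population)
    avg_after = sum(population) / len(population)
    print(f"average rose from {avg_before:.2f} to {avg_after:.2f}")
```

Each pass through the loop raises the average (5.75 → 7.00 → 8.00 → 9.00 for these scores), which is exactly why Sue’s basic rule never tells her to stop.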
I’d actually argue that Sue’s method is less clearly flawed than Ron’s. I could see benefit to all life merging into some kind of singular, euphoric organism. The ultimate survivor or survivors would likely feel fear of being euthanized themselves, and of having all their life’s efforts torn out from under them when they die unexpectedly…but you might be able to deal with those dilemmas by having them undergo personality changes of some kind.
So, I’d say with Sue’s method the end result would presumably be some kind of singular, perpetually euphoric organism…or the extinction of all life. Death is not inherently negative, and life cannot be better than death, because once a life form dies there is no longer a life form left to benefit from being alive…so death, in all circumstances, is just as ideal as eternal euphoria, so long as no pain or suffering is required to experience the death.
Here's the major flaw I see with Sue’s Method though: Whether it’s worth it for me to live or not is in no way dependent on how happy people on the other side of the planet are.
Sue’s method in its basic state would only allow for the destruction of life if that life leads an inferior existence to something else. However, it would seemingly not allow for the destruction of the best-off existing lives even if all existing life were leading a horrifyingly miserable existence and those best-off lives were only a little less horrifyingly miserable than the rest. Sue’s Method in its basic state would not allow for the death of people living down in hell in equal amounts of misery, if everyone were down in hell, but would encourage the death of cheerful medium-wage workers with loving families, who merely aren’t as happy as most billionaires.
So, here’s my proposed solution, given that both Average and Total Consequentialism appear flawed in their basic forms (keeping in mind that I could be wrong about this being the best system): I think, unless someone thinks up a better idea, we should use a modified version of Total Consequentialism that is modified in the following way:
Your goal, as with the basic form of Total Consequentialism, is to increase the total pleasure produced. Don’t factor in the pleasure that would have been produced by organisms that won’t exist, only organisms that will exist or do.
So, how does the above deal with organisms that might exist? The same way it deals with any potential future pleasure increases or decreases. We’d calculate the pleasure or suffering that would be produced by their actions over their life, and I’d argue that we should multiply that by the percentage chance that we believe it is likely to come to pass. So, if we believe there is a 25% chance of a person being born, we’d multiply the pleasure or suffering they’d add to the world through their experiences and actions by 0.25.
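That probability-weighting rule is simple enough to sketch directly (the function name and numbers here are my own illustration, not from the post): a possible person’s contribution counts as their lifetime pleasure multiplied by the chance they actually come to exist.

```python
# A minimal sketch of the probability-weighting rule for possible persons:
# expected contribution = lifetime pleasure x chance of existing.
def expected_contribution(lifetime_pleasure, chance_of_existing):
    """Weight a possible person's lifetime pleasure (or suffering, as a
    negative number) by the probability (0.0 to 1.0) that they are born."""
    return lifetime_pleasure * chance_of_existing

# A person whose life would add 100 units of pleasure, with a 25% chance
# of being born, counts as 25 units in our calculations.
print(expected_contribution(100, 0.25))  # → 25.0
```

Suffering works the same way, since it functions as negative pleasure: a 25% chance of a life producing −100 units counts as −25.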
------------------------------------------------------------------------------------------
I do think a modified version of Average Consequentialism still has its place as well though…so long as you avoid the flaws of Average Consequentialism in its basic form. Here’s how it would work:
I’d argue, in any circumstance, it is equally valid to perceive life forms as individuals, as it is to perceive all life as merely sensory appendages of the same super organism.
The modified system of Total Consequentialism I’ve described is the system that treats life forms as individuals who simply have an interest in assisting each other.
The modified system of Average Consequentialism I’m about to describe actually treats all life more as if we’re sensory appendages of the same multi-universe-and-all-of-time-spanning super organism.
As I see it we’re both individuals and sensory appendages of a single super organism composed of all life…so the question of which path to choose is largely subjective, and which is better, I’d argue, will probably depend simply on which you believe to be the neatest or simplest to figure out.
That’s keeping in mind that both Average and Total Consequentialism will, in many circumstances, work pretty similarly. The goal either way is more happiness/less suffering. It’s just that Average Consequentialism in its basic form has more risks of irrationally destroying people, and Total Consequentialism in its basic form has more risks of irrationally creating people.
So how might we modify Average Consequentialism to be rid of its flaws?
Well Average Consequentialism wants to devour things…so we essentially bop it on the nose with a metaphorical rolled up newspaper and tell it, “No! Bad Average Consequentialism! No devouring that utopian civilization just because they live slightly less euphoric lives than their neighboring utopian civilization!” and just don’t let it do that and instead have the following system for determining whether or not a life form’s existence is good for it:
Under both our Average and Total Consequentialist strategies, we base whether or not it is good for a being to be alive on whether or not its happiness level is above some kind of consistent threshold…one that would work in ways I’m less than confident about at this time. We will not be destroying civilizations just because they lead less ideal lives than members of other civilizations.
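The threshold idea above can be sketched as follows (a hedged illustration of my own — the post leaves the actual threshold value open, so the zero here is a placeholder, and the function name is made up): a life counts as good for the being living it only if its happiness clears a fixed bar, regardless of how happy anyone else happens to be.

```python
# A sketch of the threshold rule: a life is judged against an absolute
# bar, never by comparison to other, happier lives.
HAPPINESS_THRESHOLD = 0.0  # placeholder value; the post leaves this open

def life_worth_living(happiness, threshold=HAPPINESS_THRESHOLD):
    """Judge a life by an absolute threshold, not relative to others."""
    return happiness > threshold

# A cheerful medium-wage worker (happiness 5) passes the bar even though
# billionaires (happiness 9) are happier; the comparison is irrelevant.
print(life_worth_living(5))   # → True
print(life_worth_living(-3))  # → False
```

This is what blocks Sue’s devouring behavior: the utopian civilization’s lives clear the threshold on their own terms, so their happier neighbors give no grounds for destroying them.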
Our modified Average Consequentialist system can lead to some conclusions our modified Total Consequentialist system will not though, and I find these conclusions especially interesting. I’ll describe how they can work in the following thought experiment:
Thought Experiment: Superhumans – some odd ramifications of our modified Average Consequentialist form of Pooperscootian Utilitarianism
In the following scenario, we will be focused purely on benefitting humanity, which will also be defined as humanity’s descendants, to simplify the thought experiment.
Imagine the people of Earth have an option before them. We have the option of building a species of superhumans. The superhumans would be smarter than we are, stronger, and have better immune systems and fewer allergies. They wouldn’t physically age past young adulthood. They’d have better impulse control than us. They’d have drastically higher emotional intelligence. They’d be more likely to lead better lives than we do in every conceivable way due to these traits. However, if we create them, a wizard will immediately teleport all of them into another universe, and we’ll never be able to contact them again. This other universe is very similar to our own. The only difference is that Earth there is unpopulated by human beings…so far…but it has similar natural resources and life.
The downside of creating this new group of superhumans is that the cost would put the previously existing humanity in poverty for several generations.
Should we create the superhumans?
Well, if you’ve read some of the reasoning before this thought experiment, you’d have seen my statements about how there is no reason to create new life unless that life would assist previously existing life or life that will exist. With that in mind, it would certainly seem like the Superhumans would have no means of helping their parent civilization more than they harm them. Their creation might be something humanity likes the idea of…but I would suspect whatever meagre increased positive emotion stems from that would not make up for all the years of poverty and resulting suffering.
So, it would seem like the creation of the superhumans would harm existing life forms, while not assisting them or any life forms known that will exist…so according to our modified Total Consequentialist brand of thought that treats people more like individuals, we should not go this route.
Our modified Average Consequentialist option does not treat people like individuals though, but rather as sensory appendages of a single super organism.
So, our modified Average Consequentialist moral code says that all of humanity (and all life, but we’re focusing on humanity now) is one super organism, and the creation of the superhumans will have improved the average quality of life of humanity. It will have, in other words, assisted the super organism humanity comprises, and therefore there’s an argument that perhaps it should be done.
So, which is the better route? Creating the super humans or not?
I’d say that’s up to you. Maybe humanity could vote on it.
Note that, if you’re wondering why human life might be better thought of as sensory appendages of a super organism than as individuals, refer to the Pooperscootian Utilitarianism thread part 2, which emphasizes the fuzziness of individuality.