In general though, this consideration is likely to be irrelevant. Most universes will be nowhere near the upper or lower bounds, and the chance of any individual's decision being single-handedly responsible for a universe-scale shift toward a utility bound is so tiny that even estimating orders of magnitude of the unlikelihood is difficult. These are angels-on-the-head-of-a-pin quibbles.
That makes sense. So it sounds like the Egyptology Objection is almost a form of Pascal's Mugging in and of itself. If you are confronted by a Mugger (or some ...
The main "protection" of bounded utility is that at every point on the curve, the marginal utility of money is nonzero, and the threat of disutility is bounded. So there always exists some threshold credibility below which no threat (no matter how bad) makes expected utility positive for paying them.
That makes sense. What I am trying to figure out is, does that threshold credibility change depending on "where you are on the curve." To illustrate this, imagine two altruistic agents, A and B, who have the same bounded utility function...
TLDR: What I really want to know is:
1. Is an agent with a bounded utility function justified (because of their bounded function) in rejecting any "Pascal's Mugging" type scenario with tiny probabilities of vast utilities, regardless of how much utility or disutility they happen to "have" at the moment? Does everything just rescale so that the Mugging is an equally bad deal no matter what the relative scale of future utility is?
2. If you have a bounded utility function, are your choices going to be the same regardless of how much utility various uncha...
Hi, one other problem occurred to me in regards to short term decisions and bounded utility.
Suppose you are in a situation where you have a bounded utility function, plus a truly tremendous amount of utility. Maybe you're an immortal altruist who has helped quadrillions of people, maybe you're an immortal egoist who has lived an immensely long and happy life. You are very certain that all of that was real, and it is in the past and can't be changed.
You then confront a Pascal's Mugger who threatens to inflict a tremendous amount of disutility unless y...
Thanks again for your help :) That makes me feel a lot better. I have the twin difficulties of having severe OCD-related anxiety about weird decision theory problems, and being rather poor at the math required to understand them.
The case of the immortal who becomes uncertain of the reality of their experiences is, I think, what that "Pascal's Mugging for Bounded Utilities" article I linked to in the OP was getting at. But it's a relief to see that it's just a subset of decisions under uncertainty, rather than a special weird problem.
the importance to the immortal of the welfare of one particular region of any randomly selected planet of those 10^30 might be less than that of Ancient Egypt. Even if they're very altruistic.
Ok, thanks, I get that now, I appreciate your help. The thing I am really wondering is, does this make any difference at all to how that immortal would make decisions once Ancient Egypt is in the past and cannot be changed? Assuming that they have one of those bounded utility functions where their utility is asymptotic to the bound, but never actually reaches it...
The phrasing here seems to be a confused form of decision making under uncertainty. Instead of the agent saying "I don't know what the distribution of outcomes will be", it's phrased as "I don't know what my utility function is".
I think part of it is that I am conflating two different parts of the Egyptology problem. One part is uncertainty: it isn't possible to know certain facts about the welfare of Ancient Egyptians that might affect how "close to the bound" you are. The other part is that most people have a strong intuition that those facts aren't rele...
I really still don't know what you mean by "knowing how close to the bound you are".
What I mean is: if I have a bounded utility function over some value, X, where (because the function is bounded) X diminishes in marginal value the more of it there is, what happens if I don't know how much X there already is?
For example, suppose I have a strong altruistic preference that the universe have lots of happy people. This preference is not restricted by time and space, it counts the existence of happy people as a good thing regardless of where or when they exist...
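Here is a minimal numerical sketch of that worry, assuming one particular asymptotic bounded function over the total number of happy people; the shape, scale, and numbers are purely illustrative, not anything from the thread:

```python
import math

BOUND = 1.0
SCALE = 1e12  # hypothetical scale for the diminishing value of happy lives

def u(total_happy_people):
    # Bounded utility: asymptotic to BOUND but never reaching it.
    return BOUND * (1 - math.exp(-total_happy_people / SCALE))

def value_of_helping(unknown_background, people_helped=1e9):
    # Value of the same act (adding people_helped happy people) on top of an
    # unknown background quantity of happy people elsewhere in space and time.
    return u(unknown_background + people_helped) - u(unknown_background)

for background in (0.0, 1e12, 1e14):
    print(f"background {background:.0e}: value of the act = {value_of_helping(background):.3e}")
```

The same act is worth very different amounts depending on the unknown background quantity, which is exactly what makes the "how close to the bound am I?" question feel pressing.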
REA doesn't help at all there, though. You're still computing U(2X days of torture) - U(X days of torture)
I think I see my mistake now: I was treating a bounded utility function under REA as subtracting the "unbounded" utilities of the two choices and then running the post-subtraction result through the bounded utility function. It looks like you are supposed to judge each outcome's utility by the bounded function before subtracting them.
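A toy numerical check of that order of operations, assuming a linear "raw" disutility in days of torture and a hypothetical tanh-style bounding function (illustrative only):

```python
import math

X = 1000.0  # days of torture in the smaller outcome; illustrative

def raw(days):
    # Unbounded "raw" disutility, linear in days of torture.
    return -days

def squash(u, bound=1.0, scale=100.0):
    # Hypothetical bounding function, asymptotic to +/- bound.
    return bound * math.tanh(u / scale)

# The mistaken order: subtract the unbounded utilities first, then push the
# post-subtraction difference through the bounded function.
mistaken = squash(raw(2 * X) - raw(X))

# The order described above: bound each whole outcome's utility first, then
# take the difference of the bounded values.
correct = squash(raw(2 * X)) - squash(raw(X))

print(mistaken, correct)  # roughly -1.0 vs a tiny negative number near the bound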
...Unfortunately REA doesn't change anything at all for bounded utility functions. It only makes any difference for unbounded
Thank you for your reply. That was extremely helpful to have someone crunch the numbers. I am always afraid of transitivity problems when considering ideas like this, and I am glad it might be possible to avoid the Egyptology objection without introducing any.
Thanks a lot for the reply. That makes a lot of sense and puts my mind more at ease.
To me this sounds more like any non-linear utility, not specifically bounded utility.
You're probably right, a lot of my math is shaky. Let me try to explain the genesis of the example I used. I was trying to test REA for transitivity problems because I thought that it might have some further advantages over conventional theories. In particular, it seemed to me that by subtracting before averaging, REA could avoid the two examples those articles I refer...
That aside, relative expected value is purely a patch that works around some specific problems with infinite expected values, and gives exactly the same results in all cases with finite expected values.
That's what I thought as well. But then it occurred to me that REA might not give exactly the same results in all cases with finite expected values if one has a bounded utility function. If I am right, this could result in scenarios where someone could have circular values or end up the victim of a money pump.
For example, imagine there is a lotte...
That can be said about any period in life. It's just a matter of perspective and circumstances. The best years are never the same for different people.
That's true, but I think that for the overwhelming majority of people, their childhoods and young adulthoods were at the very least good years, even if they're not always the best. They are years that contain significantly more good than bad for most people. So if you create a new adult who never had a childhood, and whose lifespan is proportionately shorter, they will have a lower total amount of wellbeing over their lifetime than someone who had a full-length life that included a childhood.
I took a crack at figuring it out here.
I basically take a similar approach to you. I give animals a smaller -u0 penalty if they are less self-aware and less capable of forming the sort of complex eudaimonic preferences that human beings can. I also treat complex eudaimonic preferences as generating greater moral value when satisfied in order to avoid incentivizing creating animals over creating humans.
I think another good way to look at u0 that complements yours is to look at it as the "penalty for dying with many preferences left unsatisfied." Pretty much everyone dies with some things that they wanted to do left undone. I think most people have a strong moral intuition that being unable to fulfill major life desires and projects is tragic, and think a major reason death is bad is that it makes us unable to do even more of what we want to do with our lives. I think we could have u0 represent that intuition.
If we go back to Peter Singer's or...
Point taken, but for the average person, the time period of growing up isn't just a joyless period where they do nothing but train and invest in the future. Most people remember their childhoods as a period of joy and their college years as some of the best of their lives. Growing and learning isn't just preparation for the future, people find large portions of it to be fun. So the "existing" person would be deprived of all that, whereas the new person would not be.
If someone is in a rut and could either commit suicide or take the reprogramming drug (and expects to have to take it four times before randomizing to a personality that is better than rerolling a new one), why is that worse than killing them and allowing a new human to be created?
If such a drug is so powerful that the new personality is essentially a new person, then you have created a new person whose lifespan will be a normal human lifespan minus however long the original person lived before they got in a rut. By contrast, if they commit suicide a...
So if they don't want to be killed, that counts as a negative if we do that, even if we replace them with someone happier.
I have that idea as my "line of retreat." My issue with it is that it is hard to calibrate it so that it leaves as big a birth-death asymmetry as I want without degenerating into full-blown anti-natalism. There needs to be some way to say that the new happy person's happiness can't compensate for the original person's death without saying that the original person's own happiness can't compensate for their own death, which is hard....
You can always zero out those utilities by decree, and only consider utilities that you can change. There are other patches you can apply. By talking this way, I'm revealing the principle I'm most willing to sacrifice: elegance.
It's been a long time since you posted this, but if you see my comment, I'd be curious about what some other patches one could apply are. I have pretty severe scrupulosity issues around population ethics and often have trouble functioning because I can't stop thinking about them. I dislike pure total utilitarianism, but...
You can get mind states that are ambiguous mixes of awake and asleep.
I am having trouble parsing this statement. Does it mean that when simulating a mind you could also simulate ambiguous awake/asleep states in addition to simulating sleep and wakefulness? Or does it mean that a stored, unsimulated mind is ambiguously neither awake nor asleep?
That makes a lot of sense, thank you.
Thanks for the reply. It sounds like maybe my mistake was assuming that unsimulated brain data was functionally and morally equivalent to an unconscious brain. From what you are saying it sounds like the data would need to be simulated even to generate unconsciousness.
And much like Vaniver below (above? earlier!), I am unsure how to translate these sorts of claims into anything testable
One thing I consider very suspicious is that deaf people often don't just deny the terminal value of hearing. They also deny its instrumental value. The instrumental values of hearing are obvious. This indicates to me that they are denying it for self-esteem reasons and group loyalty reasons, the same way I have occasionally heard multiculturalists claim behaviors of obvious instrumental value (like being on time) are merely the sub...
it may be clearer to consider counterfactual mes of every possible sexual orientation, and comparing the justifications they can come up with for why it's egosyntonic to have the orientation that they have.
I think that maybe all of them would be perfectly justified in saying that their sexual orientation is a terminal value and the buck stops there.
On the other hand, I'm nowhere near 100% sure I wouldn't take a pill to make me bisexual.
If you kept all of my values the same and deleted my sexual orientation, what would regrow?
I think a way to hel...
It seems to me that most people lack the ability to be aroused by people--typically, their ability is seriously limited, to half of the population at most.
When I was talking about being queer I wasn't just talking about the experience of being aroused, I was talking about the desire to have that experience, and that experience being egosyntonic. It's fairly easy to rephrase any preference a person has to sound like an ability or lack thereof. For instance, you could say that I lack the ability to enjoy skinning people alive. But that's because I don'...
But since then, you've concluded that being queer isn't actually something (at least some people, like me) differentially approve of.
I'm not sure what I wrote that gave you this idea. I do think that queer people approve of being queer. What I'm talking about when I say "approval" is preferences that are ego-syntonic, that are in line with the kind of person they want to be. Most queer people consider their preference to be ego-syntonic. Being queer is the kind of person they want to be and they would not change it if they could. Those who do...
Rereading your original comment keeping in mind that you're talking mostly about approval rather than desire or preference... so, would you say that Deaf people necessarily disapprove of deafness?
I'd say that a good portion of them do approve of it. There seem to be a lot of disability rights activists who think that being disabled and making more disabled people is okay.
I should also mention, however, that I do think it is possible to mistakenly approve or disapprove of something. For instance I used to disapprove of pornography and voluntary...
It's not clear to me how this difference justifies the distinction in my thinking I was describing.
I believe the difference is that in the case of deaf people, you are improving their lives by giving them more abilities to achieve the values they have (in this case, an extra sense). By contrast, with queerness you are erasing a value a person has and replacing it with a different value that is easier to achieve. I believe that helping a person achieve their existing values is a laudable goal, but that changing a person's values is usually morally prob...
...I acknowledge that life is more difficult in certain readily quantifiable ways for queer people than for straight people, but it doesn't follow that I would use a reliable therapy for making me straight if such a thing existed... and in fact I wouldn't. Nor would I encourage the development of such a therapy, particularly, and indeed the notion of anyone designing such a therapy makes me more than faintly queasy. And if it existed, I'd be reluctant to expose my children to it. And I would be sympathetic to claims that developers and promoters of such a te
You are, in this very post, questioning and saying that your utility function is PROBABLY this and that you don't think there's uncertainty about it... That is, you display uncertainty about your utility function. Checkmate.
Even if I was uncertain about my utility function, you're still wrong. The factor you are forgetting about is uncertainty. With a bounded utility function infinite utility scores the same as a smaller amount of utility. So you should always assume a bounded utility function, because unbounded utility functions don't offer any more utili...
It seems to me that the project of transhumanism in general is actually the project of creating artificial utility monsters. If we consider a utility monster to be a creature that can transmute resources into results more efficiently, that's essentially what a transhuman is.
In a world where all humans have severe cognitive and physical disabilities and die at the age of 30 a baseline human would be a utility monster. They would be able to achieve far more of their life goals and desires than all other humans would. Similarly, a transhuman with superhuman cog...
I suspect that calling your utility function itself into question like that isn't valid in terms of expected utility calculations.
I think what you're suggesting is that on top of our utility function we have some sort of meta-utility function that just says "maximize your utility function, whatever it is." That would fall into your uncertainty trap, but I don't think that is the case: I don't think we have a meta-function like that; I think we just have our utility function.
If you were allowed to cast your entire utility function into doubt yo...
This tends to imply the Sadistic Conclusion: that it is better to create some lives that aren't worth living than it is to create a large number of lives that are barely worth living.
I think that the Sadistic Conclusion is correct. I argue here that it is far more in line with typical human moral intuitions than the repugnant one.
There are several "impossibility" theorems that show it is impossible to come up with a way to order populations that satisfies all of a group of intuitively appealing conditions.
If you take the underlying princi...
It's worth noting that the question of what is a better way of evaluating such prospects is distinct from the question of how I in fact evaluate them.
Good point. What I meant was closer to "which method of evaluation does the best job of capturing how you intuitively assign value" rather than which way is better in some sort of objective sense. For me #1 seems to describe how I assign value and disvalue to repeating copies better than #2 does, but I'm far from certain.
So I think that from my point of view Omega offering to extend the length ...
I think I understand your viewpoint. I do have an additional question though, which is what you think about how to evaluate moments that have a combination of good and bad.
For instance, let's suppose you have the best day ever, except that you had a mild pain in your leg for most of the day. All the awesome stuff you did during the day more than made up for that mild pain though.
Now let's suppose you are offered the prospect of having a copy of you repeat that day exactly. We both agree that doing this would add no additional value, the question i...
For my own part, I share your #1 and #2, don't share your #3 (that is, I'd rather Omega not reproduce the bad stuff, but if they're going to do so, it makes no real difference to me whether they reproduce the good stuff as well)
One thing that makes me inclined towards #3 is the possibility that the multiverse is constantly reproducing my life over and over again, good and bad. I do not think that I would consider it devastatingly bad news if it turns out that the Many-Worlds interpretation is correct.
If I really believed that repeated bad experiences...
I don't see anything inconsistent about believing that a good life loses value with repetition, but a bad life does not lose disvalue. It's consistent with the Value of Boredom, which I thoroughly endorse.
Now, there's a similar question where I think my thoughts on the subject might get a little weird. Imagine you have some period of your life that started out bad, but then turned around and became good later, so that in the end that period of life was positive on net. I have the following preferences in regards to duplicating it:
I would not p
It seems like there's an easy way around this problem. Praise people who are responsible and financially well-off for having more kids. These traits are correlated with good genes and IQ, so it'll have the same effect.
It seems like we already do this to some extent. I hear others condemning people who are irresponsible and low-income for having too many children fairly frequently. It's just that we fail to extend this behavior in the other direction, to praising responsible people for having children.
I'm not sure why this is. It could be for one...
I'm very unfamiliar with it, but intuitively I would have assumed that the preferences in question wouldn't be all the preferences that the agent's value system could logically be thought to imply, but rather something like the consciously held goals at some given moment
I don't think that would be the case. The main intuitive advantage negative preference utilitarianism has over negative hedonic utilitarianism is that it considers death to be a bad thing, because it results in unsatisfied preferences. If it only counted immediate consciously held goal...
I guess I see a set of all possible types of sentient minds with my goal being to make the universe as nice as possible for some weighted average of the set.
I used to think that way, but it resulted in what I considered to be too many counterintuitive conclusions. The biggest one, that I absolutely refuse to accept, being that we ought to kill the entire human race and use the resources doing that would free up to replace them with creatures whose desires are easier to satisfy. Paperclip maximizers or wireheads for instance. Humans have such picky, c...
A bounded utility function does help matters, but then everything depends on how exactly it's bounded, and why one has chosen those particular parameters.
Yes, and that is my precise point. Even if we assume a bounded utility function for human preferences, I think it's reasonable to assume that it's a pretty huge function. Which means that antinatalism/negative preference utilitarianism would be willing to inflict massive suffering on existing people to prevent the birth of one person who would have a better life than anyone on Earth has ever had up to t...
Speaking personally, I don't negatively weigh non-aversive sensory experiences. That is to say, the billions of years of unsatisfied preferences are only important for that small subset of humans for whom knowing about the losses causes suffering.
If I understand you correctly, the problem with doing this with negative utilitarianism is that it suggests we should painlessly kill everyone ASAP. The advantage of negative preference utilitarianism is that it avoids this because people have a preference to keep on living that killing would thwart.
...It's wo
Not relevant because we are considering bringing these people into existence at which point they will be able to experience pain and pleasure.
Yes, but I would argue that the fact that they can't actually do that yet makes a difference.
...Imagine you know that one week from now someone will force you to take heroin and you will become addicted. At this point you will be able to have an OK life if given a regular amount of the drug but will live in permanent torture if you never get any more of the substance. Would you pay $1 today for the ability to consu
For me, however, it doesn't seem all that far from someone saying "I'm a utilitarian but my intuition strongly tells me that people with characteristic X are more important than everyone else so I'm going to amend utilitarianism by giving greater weight to the welfare of X-men."
There is a huge difference between discriminatory favoritism and valuing continued life over adding new people.
In discriminatory favoritism people have a property that makes them morally valuable (i.e., the ability to have preferences, or to feel pleasure and pain). The...
Though now that you point it out, it is a problem that, under this model, creating a person who you don't expect to live forever has a very high (potentially infinite) disutility. Yeah, that breaks this suggestion. Only took a couple of hours, that's ethics for you. :)
Oddly enough, right before I noticed this thread I posted a question about this on the Stupid Questions Thread.
My question, however, was whether this problem applies to all forms of negative preferences utilitarianism. I don't know what the answer is. I wonder if SisterY or one of the o...
What amount of disutility does creating a new person generate in Negative Preference Utilitarian ethics?
I need to elaborate in order to explain exactly what question I am asking: I've been studying various forms of ethics, and when I was studying Negative Preference Utilitarianism (or anti-natalism, as I believe it's often also called) I came across what seems like a huge, titanic flaw that seems to destroy the entire system.
The flaw is this: The goal of negative preference utilitarianism is to prevent the existence of unsatisfied preferences. This means...
It is also worth noting that average utilitarianism also has its share of problems: killing off anyone with below-maximum utility is an improvement.
No it isn't. This can be demonstrated fairly simply. Imagine a population consisting of 100 people. 99 of those people have great lives, 1 of those people has a mediocre one.
At the time you are considering killing the person with the mediocre life, he has accumulated 25 utility. If you let him live he will accumulate 5 more utility. The 99 people with great lives will accumulate 100 utility o...
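A minimal sketch of that arithmetic, assuming the average is taken over the whole lifetimes of everyone who ever exists, and assuming (since the comment is cut off) that each of the 99 ends up with 100 lifetime utility whether or not the killing happens:

```python
# Lifetime utilities of the 99 people with great lives (assumed unaffected by the choice).
great_lives = [100] * 99

# If he is spared, his lifetime total ends at 25 + 5 = 30;
# if he is killed now, it is frozen at 25, but his life still counts.
avg_if_spared = (sum(great_lives) + 25 + 5) / 100
avg_if_killed = (sum(great_lives) + 25) / 100

print(avg_if_spared, avg_if_killed)  # 99.30 vs 99.25: killing him lowers the average
```

On these assumed numbers, killing the below-average person lowers the average rather than raising it, because his already-accumulated utility still counts toward his lifetime total.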
I wonder what a CEV-implementing AI would do with such cases.
Even if it does turn out that my current conception of personal identity isn't the same as my old one, but is rather a similar concept I adopted after realizing my values were incoherent, the AI might still find that the CEVs of my past and present selves concur. This is because, if I truly did adopt a new concept of identity because of its similarity to my old one, this suggests I possess some sort of meta-value that values taking my incoherent values and replacing them with coherent ones ...
Granted, negative utilitarians would prefer to add a small population of beings with terrible lives over a very large population of beings with lives that are almost ideal, but this would not be a proper instance of the Sadistic Conclusion. See the formulation:
When I read the formulation of the Sadistic Conclusion I interpreted "people with positive utility" to mean either a person whose life contained no suffering, or a person whose satisfied preferences/happiness outweighed their suffering. So I would consider adding a small population of terrible lives ...
The main argument I've heard for this kind of simplification is that your altruistic, morality-type preferences ought to be about the state of the external world because their subject is the wellbeing of other people, and the external world is where...