All of Ghatanathoah's Comments + Replies

My main objection to the simplified utility functions is that they are presented as depending only upon the current external state of the world in some vaguely linear and stable way. Every adjective in there corresponds to discarding a lot of useful information about preferences that people actually have.

 

The main argument I've heard for this kind of simplification is that your altruistic, morality-type preferences ought to be about the state of the external world because their subject is the wellbeing of other people, and the external world is where... (read more)

In general though, this consideration is likely to be irrelevant. Most universes will be nowhere near the upper or lower bounds, and the chance of any individual's decision being single-handedly responsible for a universe-scale shift toward a utility bound is so tiny that even estimating orders of magnitude of the unlikelihood is difficult. These are angels-on-head-of-pin quibbles.

 

That makes sense.  So it sounds like the Egyptology Objection is almost a form of Pascal's Mugging in and of itself. If you are confronted by a Mugger (or some ... (read more)

2JBlack
My main objection to the simplified utility functions is that they are presented as depending only upon the current external state of the world in some vaguely linear and stable way. Every adjective in there corresponds to discarding a lot of useful information about preferences that people actually have.

People often have strong preferences about potential pasts, presents, and futures as well as the actual present. This includes not just things like how things are, but also about how things could have gone. I would be very dissatisfied if some judges had flipped coins to render a verdict, even if by chance every verdict was correct and the usual process would have delivered some incorrect verdicts.

People have rather strong preferences about their own internal states, not just about the external universe. For example, intransitive preferences are usually supposed to be pumpable, but this neglects the preference people have for not feeling ripped off and similar internal states. This also ties into the previous example, where I would feel a justified loss of confidence in the judicial system, which is unpleasant in itself, not just in its likelihood of affecting my life or those I care about in the future.

People have path-dependent preferences, not just preferences for some outcome state or other. For example, they may prefer a hypothetical universe in which some people were never born to one in which some people were born, lived, and then were murdered in secret. The final outcomes may be essentially identical, but can be very different in preference orderings.

People often have very strongly nonlinear preferences. Not just smoothly nonlinear, but outright discontinuous. They can also change over time for better or worse reasons, or for none at all.

Decision theories based on eliminating all these real phenomena seem very much less than useful.

The main "protection" of bounded utility is that at every point on the curve, the marginal utility of money is nonzero, and the threat of disutility is bounded. So there always exists some threshold credibility below which no threat (no matter how bad) makes expected utility positive for paying them.

 

That makes sense. What I am trying to figure out is, does that threshold credibility change depending on "where you are on the curve"? To illustrate this, imagine two altruistic agents, A and B, who have the same bounded utility function... (read more)

6JBlack
Yes, I would expect that the thresholds would be different depending upon the base state of the universe.

In general though, this consideration is likely to be irrelevant. Most universes will be nowhere near the upper or lower bounds, and the chance of any individual's decision being single-handedly responsible for a universe-scale shift toward a utility bound is so tiny that even estimating orders of magnitude of the unlikelihood is difficult. These are angels-on-head-of-pin quibbles.

The question of bounded utility can be thought of as "is there any possible scenario so bad (or good) that it cannot be made worse (or better) by any chosen factor no matter how large?" If your utility function is unbounded, then the answer is no. For every bad or good scenario there exists a different scenario that is 10 times, 10^100 times, or 9^^^9 times worse or better. My personal view is yes: there are scenarios so bad that a 99% chance of making it "good" is always worth a 1% chance of somehow making it worse. This is never true of someone with an unbounded utility function.

TLDR: What I really want to know is: 

1. Is an agent with a bounded utility function justified (because of their bounded function) in rejecting any "Pascal's Mugging" type scenario with tiny probabilities of vast utilities, regardless of how much utility or disutility they happen to "have" at the moment? Does everything just rescale so that the Mugging is an equally bad deal no matter what the relative scale of future utility is?

2. If you have a bounded utility function, are your choices going to be the same regardless of how much utility various uncha... (read more)

6JBlack
Yes, I'm sorry about that. I don't really think Pascal's Mugging is a well-founded argument even with unbounded utilities, and that leaked through to ignore the main point of discussion, which was bounded utilities. So back to that.

If your utility was unbounded below, and your assessment of their credibility is basically unchanged merely by the magnitude of their threat (past some point), then they can always find some threat such that you should pay $5 to avoid even that very tiny chance that paying them is the only thing that prevents it from happening. That's the essence of Pascal's Mugging.

The main "protection" of bounded utility is that at every point on the curve, the marginal utility of money is nonzero, and the threat of disutility is bounded. So there always exists some threshold credibility below which no threat (no matter how bad) makes expected utility positive for paying them.

Not necessarily. Any uniform scaling and shifting of a utility function makes no difference whatsoever to decisions. So no matter how close they are to a bound, there exists a scaling and shifting that means they make the same decisions in the future as they would have in the past. One continuous example of this is an exponential discounter, where the decisions are time-invariant but from a global view the space of potential future utility is exponentially shrinking.
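
A minimal sketch of those two claims (the credibility threshold, and invariance under uniform scaling and shifting), using a toy bounded utility function and numbers of my own choosing rather than anything from the discussion:

```python
# Toy illustration of two claims about bounded utility functions:
# 1. With utility bounded below, there is a credibility threshold below which
#    no threat, however large, makes paying the mugger positive expected utility.
# 2. A positive affine transform (scale + shift) of the utility function never
#    changes which option has higher expected utility.

def u(welfare):
    """Bounded utility: approaches 0 from below as welfare grows, floor at -1."""
    return -1.0 / (1.0 + welfare)

def expected_utility(outcomes, utility=u):
    """outcomes: list of (probability, welfare) pairs."""
    return sum(p * utility(w) for p, w in outcomes)

# Claim 1: paying $5 costs a small but fixed amount of welfare (say 0.01 units),
# while the worst possible threat can cost at most the distance to the floor.
current_welfare = 10.0
cost_of_paying = u(current_welfare) - u(current_welfare - 0.01)   # > 0
worst_possible_loss = u(current_welfare) - (-1.0)                 # bounded
threshold = cost_of_paying / worst_possible_loss
print(f"Any threat with credibility below {threshold:.2e} is not worth paying.")

# Claim 2: affine rescaling preserves decisions.
def rescaled(welfare, a=3.7, b=42.0):
    return a * u(welfare) + b        # a > 0

lottery_A = [(0.5, 1.0), (0.5, 9.0)]
lottery_B = [(1.0, 4.0)]
same_choice = (
    (expected_utility(lottery_A) > expected_utility(lottery_B))
    == (expected_utility(lottery_A, rescaled) > expected_utility(lottery_B, rescaled))
)
print("Rescaled utility picks the same lottery:", same_choice)  # True
```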

Hi, one other problem occurred to me in regards to short term decisions and bounded utility.

Suppose you are in a situation where you have a bounded utility function, plus a truly tremendous amount of utility.  Maybe you're an immortal altruist who has helped quadrillions of people, maybe you're an immortal egoist who has lived an immensely long and happy life. You are very certain that all of that was real, and it is in the past and can't be changed.

You then confront a Pascal's Mugger who threatens to inflict a tremendous amount of disutility unless y... (read more)

1JBlack
If a Pascal's Mugger can credibly threaten an entire universe of people with indefinite torture, their promise to never carry out their threat for $5 is more credible than not, and you have good reason to believe that nothing else will work, then seriously we should just pay them. This is true regardless of whether utility is bounded or not. All of these conditions are required, and all of them are stupid, which is why this answer defies intuition.

If there is no evidence that the mugger is more than an ordinarily powerful person, then the prior credence of their threat is incredibly low, because in this scenario the immortal has observed a universe with ~10^100 lives and none of them were able to do this thing before. What are the odds that this person, now, can do the thing they're suggesting? I'd suggest lower than 10^-120. Certainly no more than 10^-100 credence on a randomly selected person in the universe would have this power (probably substantially less), and conditional on someone having such power, it's very unlikely that they could provide no evidence for it.

But even in that tiny conditional, what is the probability that giving them $5 will actually stop them using it? They would have to be not only the universe's most powerful person, but also one of the universe's most incompetent extortionists. What are the odds that the same person has both properties? Even lower still. It seems far more likely that giving them $5 will do nothing positive at all and may encourage them to do more extortion, eventually dooming the universe to hell when someone can't or won't pay. The net marginal utility of paying them may well be negative.

There are other actions that seem more likely to succeed, such as convincing them that with enormous power there are almost certainly things they could do for which people would voluntarily pay a great deal more than $5. But really, the plausibility of this scenario is ridiculously, vastly low to the point where it's not se
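
A rough back-of-the-envelope version of that arithmetic; every probability and utility below is an illustrative guess of mine, not a figure from the comment:

```python
# Back-of-the-envelope version of the argument above. Numbers are expressed in
# bounded-utility terms where 1.0 is the full distance to the lower utility bound.

p_has_power = 1e-100                  # prior that this person can torture the universe
p_five_dollars_is_what_stops_it = 1e-20   # given that power, paying is the one thing preventing it
p_paying_encourages_fatal_extortion = 1e-3  # paying teaches extortion that eventually goes wrong anyway

loss_if_carried_out  = 1.0     # the worst the bounded function allows
cost_of_five_dollars = 1e-15   # tiny but nonzero marginal utility of $5

expected_benefit_of_paying = (p_has_power * p_five_dollars_is_what_stops_it
                              * loss_if_carried_out)
expected_cost_of_paying = (cost_of_five_dollars
                           + p_paying_encourages_fatal_extortion * loss_if_carried_out)

print("Expected benefit of paying:", expected_benefit_of_paying)   # ~1e-120
print("Expected cost of paying:   ", expected_cost_of_paying)      # ~1e-3
print("Net marginal utility of paying is negative:",
      expected_benefit_of_paying < expected_cost_of_paying)        # True
```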

Thanks, again for your help :) That makes me feel a lot better. I have the twin difficulties of having severe OCD-related anxiety about weird decision theory problems, and being rather poor at the math required to understand them.

The case of the immortal who becomes uncertain of the reality of their experiences is I think what that "Pascal's Mugging for Bounded Utilities" article I linked to in the OP was getting at. But it's a relief to see that it's just a subset of decisions under uncertainty, rather than a special weird problem. 

the importance to the immortal of the welfare of one particular region of any randomly selected planet of those 10^30 might be less than that of Ancient Egypt. Even if they're very altruistic.

 

Ok, thanks, I get that now, I appreciate your help. The thing I am really wondering is, does this make any difference at all to how that immortal would make decisions once Ancient Egypt is in the past and cannot be changed? Assuming that they have one of those bounded utility functions where their utility is asymptotic to the bound, but never actually reaches it... (read more)

6JBlack
Yes, the relative scale of future utility makes no difference in short-term decisions, though noting that short-term to an immortal here can still mean "in the next 10^50 years"! It might make a difference in the case where someone who thought that they were immortal becomes uncertain of whether what they already experienced was real. That's the sort of additional problem you get with uncertainty over risk though, not really a problem with bounded utility itself.

The phrasing here seems to be a confused form of decision making under uncertainty. Instead of the agent saying "I don't know what the distribution of outcomes will be", it's phrased as "I don't know what my utility function is".

I think part of it is that I am conflating two different parts of the Egyptology problem. One part is uncertainty: it isn't possible to know certain facts about the welfare of Ancient Egyptians that might affect how "close to the bound" you are. The other part is that most people have a strong intuition that those facts aren't rele... (read more)

2JBlack
Yes, splitting the confounding factors out does help. There still seem to be a few misconceptions and confounding things though. One is that bounded doesn't mean small. On a scale where the welfare of the entire civilization of Ancient Egypt counts for 1 point of utility, the bound might still be more than 10^100. Yes, this does imply that after 10^70 years of civilizations covering 10^30 planet-equivalents, the importance to the immortal of the welfare of one particular region of any randomly selected planet of those 10^30 might be less than that of Ancient Egypt. Even if they're very altruistic.
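
One way to picture "bounded doesn't mean small": a sketch with an assumed asymptotic utility function and an assumed bound of 10^100 Egypt-units, both chosen purely for illustration:

```python
# Illustrative only: an asymptotic bounded utility function whose bound,
# measured in "Ancient Egypt" units of welfare, is 1e100.

import math

BOUND = 1e100

def marginal_value_of_one_more_egypt(accumulated):
    """Derivative of U(x) = BOUND * (1 - exp(-x / BOUND)) at x = accumulated.
    Approximately how much one more Egypt-sized unit of welfare is worth once
    `accumulated` units already exist."""
    return math.exp(-accumulated / BOUND)

for accumulated in (0.0, 1e50, 1e99, 1e100, 1e101):
    print(f"after {accumulated:.0e} Egypt-units, one more is worth about "
          f"{marginal_value_of_one_more_egypt(accumulated):.3g}")
```

Under these made-up parameters the marginal value of one more Egypt-equivalent only drops noticeably once the accumulated welfare is within a few orders of magnitude of the (enormous) bound, which matches the point that the discounting becomes visible only at universe-history scales.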

I really still don't know what you mean by "knowing how close to the bound you are".

 

What I mean is, if I have a bounded utility function where there is some value, X, and (because the function is bounded) X diminishes in value the more of it there is, what if I don't know how much X there is? 

For example, suppose I have a strong altruistic preference that the universe have lots of happy people. This preference is not restricted  by time and space, it counts the existence of happy people as a good thing regardless of where or when they exist... (read more)

2JBlack
This whole idea seems to be utterly divorced from what utility means. Fundamentally, utility is based on an ordering of preferences over outcomes. It makes sense to say that you don't know what the actual outcomes will be; that's part of decision under risk. It even makes sense to say that you don't know much about the distribution of outcomes; that's decision under uncertainty.

The phrasing here seems to be a confused form of decision making under uncertainty. Instead of the agent saying "I don't know what the distribution of outcomes will be", it's phrased as "I don't know what my utility function is". I think things will be much clearer when phrased in terms of decision making under uncertainty: "I know what my utility function is, but I don't know what the probability distribution of outcomes is".

REA doesn't help at all there, though. You're still computing U(2X days of torture) - U(X days of torture)

I think I see my mistake now: I was treating a bounded utility function using REA as subtracting the "unbounded" utilities of the two choices and then comparing the post-subtraction results using the bounded utility function. It looks like you are supposed to judge each one's utility by the bounded function before subtracting them.

Unfortunately REA doesn't change anything at all for bounded utility functions. It only makes any difference for unbounded

... (read more)
2JBlack
I really still don't know what you mean by "knowing how close to the bound you are". Utility functions are just abstractions over preferences that satisfy some particular consistency properties. If the happiness of Ancient Egyptians doesn't affect your future preferences, then they don't have any role in your utility function over future actions regardless of whether it's bounded or not.

Thank you for your reply. That was extremely helpful to have someone crunch the numbers. I am always afraid of transitivity problems when considering ideas like this, and I am glad it might be possible to avoid the Egyptology objection without introducing any.

Thanks a lot for the reply. That makes a lot of sense and puts my mind more at ease. 

To me this sounds more like any non-linear utility, not specifically bounded utility.

You're probably right, a lot of my math is shaky.  Let me try to explain the genesis of the example I used.  I was trying to test REA for transitivity problems because I thought that it might have some further advantages to conventional theories.  In particular, it seemed to me that by subtracting before averaging, REA could avoid the two examples those articles I refer... (read more)

2JBlack
Unfortunately REA doesn't change anything at all for bounded utility functions. It only makes any difference for unbounded ones.

I don't get the "long lived egoist" example at all. It looks like it drags in a whole bunch of other stuff like path-dependence and lived experience versus base reality to confound basic questions about bounded versus unbounded utility. I suspect most of the "scary situations" in these sorts of theories are artefacts of trying to formulate simplified situations to test specific principles while accidentally throwing out all the things that make utility functions a reasonable approximation to preference ordering. The quoted example definitely fits that description.

REA doesn't help at all there, though. You're still computing U(2X days of torture) - U(X days of torture), which can be made as close to zero as you like for large enough X if your utility function is monotonic in X and bounded below.
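
A concrete instance of that last point, using an arbitrary toy utility function that is monotonic in X and bounded below:

```python
# With a utility function that is monotone in days of torture X and bounded
# below (here U(X) = -X / (1 + X), a toy choice with floor -1), the gap
# U(2X) - U(X) shrinks toward zero as X grows.

def u(days_of_torture):
    return -days_of_torture / (1.0 + days_of_torture)   # bounded below by -1

for x in (1, 10, 1_000, 10**6, 10**12):
    gap = u(2 * x) - u(x)
    print(f"X = {x:>13,}: U(2X) - U(X) = {gap:.3e}")
```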

That aside, relative expected value is purely a patch that works around some specific problems with infinite expected values, and gives exactly the same results in all cases with finite expected values.

That's what I thought as well.  But then it occurred to me that REA might not give exactly the same results in all cases with finite expected values if one has a bounded utility function.  If I am right, this could result in scenarios where someone could have circular values or end up the victim of a money pump.

For example, imagine there is a lotte... (read more)

2JBlack
I just thought I'd also comment on this: Under the conditions of this scenario and some simplifying assumptions (such as your marginal utility function depending only on how much money you have in each outcome), they mathematically must become more valuable somewhere between spending $0.01 and spending $1. Without the simplifying assumptions, you can get counterexamples like someone who gets a bit of a thrill from buying lottery tickets, and who legitimately does attain higher utility from buying 100 tickets for $1 than one big ticket.
2JBlack
So you're talking about cases where (for example) the utility of winning is 1000, the marginal utility of winning 1/100th as much is 11, and this makes it more worthwhile to buy a partial ticket for a penny when it's not worthwhile to buy a full ticket for a dollar? To me this sounds more like any non-linear utility, not specifically bounded utility.

No. REA still compares utilities of outcomes, it just does subtraction before averaging over outcomes instead of comparison after. Specifically, the four outcomes being compared are: spend $0.01 then win 0.01x (with probability y), spend $0.01 then lose (probability 1-y), spend $0.02 then win 0.02x (y), spend $0.02 then lose (1-y).

The usual utility calculation is to buy another ticket when

y U(spend $0.02 then win 0.02x) + (1-y) U(spend $0.02 then lose) > y U(spend $0.01 then win 0.01x) + (1-y) U(spend $0.01 then lose).

REA changes this only very slightly. It says to buy another ticket when

y (U(spend $0.02 then win 0.02x) - U(spend $0.01 then win 0.01x)) + (1-y) (U(spend $0.02 then lose) - U(spend $0.01 then lose)) > 0.

In any finite example, it's easy to prove that they're identical. There is a difference only when there are infinitely many outcomes and the sums on the LHS and RHS of the usual computation don't converge. In some cases, the REArranged sum converges.

There is no difference at all for anyone who has a bounded utility function. The averaging over outcomes always produces a finite result in that case, so the two approaches are identical.
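
A small sketch of the finite-case equivalence, with made-up numbers for the utility function, win probability, and prize:

```python
# Sketch of the claim that REA and the usual expected-utility rule agree
# whenever the number of outcomes is finite. Utility function and lottery
# numbers are illustrative only.

def u(money, prize):
    # an arbitrary bounded utility over (money left, prize won)
    return 1 - 1 / (1 + money + 10 * prize)

y = 0.001          # probability of winning
x = 100.0          # prize per dollar staked
m = 50.0           # starting money

# Option A: hold one $0.01 ticket. Option B: buy a second $0.01 ticket.
A = [(y, u(m - 0.01, 0.01 * x)), (1 - y, u(m - 0.01, 0.0))]
B = [(y, u(m - 0.02, 0.02 * x)), (1 - y, u(m - 0.02, 0.0))]

usual_rule = sum(p * v for p, v in B) > sum(p * v for p, v in A)
rea_rule = sum(p * (vb - va) for (p, vb), (_, va) in zip(B, A)) > 0

print("Usual rule says buy another ticket:", usual_rule)
print("REA rule says buy another ticket:  ", rea_rule)
print("Same answer:", usual_rule == rea_rule)  # identical whenever outcomes are finite
```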
6qwertyasdef
I think you're right that your pennies become more valuable the less you have. Suppose you start with m money and your utility function is U: money → utility. Assuming the original lottery was not worth playing, then xy + (U(m−1) − U(m))(1−y) < 0, which rearranges to U(m) − U(m−1) > xy/(1−y). This can be thought of as saying the average slope of the utility function from m−1 to m is greater than some constant xy/(1−y). For the second lottery, each ticket you buy means you have less money. Then the utility cost of the first lottery ticket is U(m−0.01) − U(m), the second U(m−0.02) − U(m−0.01), the third U(m−0.03) − U(m−0.02), and so on. If the first ticket is worth buying, then 0.01xy + (U(m−0.01) − U(m))(1−y) > 0, so (U(m) − U(m−0.01))/0.01 < xy/(1−y). This means the average slope of the utility function from m−0.01 to m is less than the average slope from m−1 to m, so if the utility function is continuous, there must be some other point in the interval [m−1, m] where the slope is greater than average. This corresponds to a ticket that is no longer worth buying because it's an even worse deal than the single ticket from the original lottery. Also note that the value of m is completely arbitrary and irrelevant to the argument, so I think this should still avoid the Egyptology objection.
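
A quick numerical check of this argument; the log utility function and the lottery numbers below are chosen purely to make the effect visible, and follow the notation above (x = utility of the full prize, y = win probability):

```python
# Numerical check of the argument above with an illustrative utility function
# U(w) = ln(w), starting money m = 2, win probability y = 0.01, and a
# full-prize utility of x = 59.4.

import math

U = math.log
m, y, x = 2.0, 0.01, 59.4

def worth_buying(wealth_before, stake, fraction_of_prize):
    """Relative expected utility test for buying `fraction_of_prize` at `stake`."""
    gain_if_win = fraction_of_prize * x
    loss_if_lose = U(wealth_before - stake) - U(wealth_before)
    return y * gain_if_win + (1 - y) * loss_if_lose > 0

print("Full $1 ticket worth buying?  ", worth_buying(m, 1.00, 1.00))   # False
print("1st penny ticket worth buying?", worth_buying(m, 0.01, 0.01))   # True

# March down one penny at a time until a ticket stops being worth it.
wealth, k = m, 0
while worth_buying(wealth, 0.01, 0.01):
    wealth -= 0.01
    k += 1
print(f"Pennies stop being a good deal after ticket #{k}, with ${wealth:.2f} left.")
```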

That can be said about any period in life. It's just a matter of perspective and circumstances. The best years are never the same for different people.

 

That's true, but I think that for the overwhelming majority of people, their childhoods and young adulthoods were at the very least good years, even if they're not always the best.  They are years that contain significantly more good than bad for most people.  So if you create a new adult who never had a childhood, and whose lifespan is proportionately shorter, they will have a lower total amount of wellbeing over their lifetime than someone who had a full-length life that included a childhood.

I took a crack at figuring it out here.

I basically take a similar approach to you. I give animals a smaller -u0 penalty if they are less self-aware and less capable of forming the sort of complex eudaimonic preferences that human beings can. I also treat complex eudaimonic preferences as generating greater moral value when satisfied in order to avoid incentivizing creating animals over creating humans.

I think another good way to look at u0 that complements yours is to look at it as the "penalty for dying with many preferences left unsatisfied."  Pretty much everyone dies with some things that they wanted to do left undone.  I think most people have a strong moral intuition that being unable to fulfill major life desires and projects is tragic, and think a major reason death is bad is that it makes us unable to do even more of what we want to do with our lives. I think we could have u0 represent that intuition.

If we go back to Peter Singer's or... (read more)

Point taken, but for the average person, the time period of growing up isn't just a joyless period where they do nothing but train and invest in the future.  Most people remember their childhoods as a period of joy and their college years as some of the best of their lives.  Growing and learning isn't just preparation for the future, people find large portions of it to be fun. So the "existing" person would be deprived of all that, whereas the new person would not be.

0Kenny
That can be said about any period in life. It's just a matter of perspective and circumstances. The best years are never the same for different people. This seems more anecdotal, and people becoming jaded as they grow older is a similar assertion in nature.

If someone is in a rut and could either commit suicide or take the reprogramming drug (and expects to have to take it four times before randomizing to a personality that is better than rerolling a new one), why is that worse than killing them and allowing a new human to be created?

If such a drug is so powerful that the new personality is essentially a new person, then you have created a new person whose lifespan will be a normal human lifespan minus however long the original person lived before they got in a rut.  By contrast, if they commit suicide a... (read more)

0Kenny
It takes extra resources to grow up and learn all the stuff that you've learned, like K-12 and college education. You can't guarantee that the new person will be more efficient in using resources to grow than the existing person.

So if they don't want to be killed, that counts as a negative if we do that, even if we replace them with someone happier.


I have that idea as my "line of retreat."  My issue with it is that it is hard to calibrate it so that it leaves as big a birth-death asymmetry as I want without degenerating into full-blown anti-natalism. There needs to be some way to say that the new happy person's happiness can't compensate for the original person's death without saying that the original person's own happiness can't compensate for their own death, which is hard.... (read more)

You can always zero out those utilities by decree, and only consider utilities that you can change. There are other patches you can apply. By talking this way, I'm revealing the principle I'm most willing to sacrifice: elegance.

It's been a long time since you posted this, but if you see my comment, I'd be curious about what some of the other patches one could apply are.  I have pretty severe scrupulosity issues around population ethics and often have trouble functioning because I can't stop thinking about them.  I dislike pure total utilitarianism, but... (read more)

7Stuart_Armstrong
Hey there! I haven't been working much on population ethics (I'm more wanting to automate the construction of values from human preferences so that an AI could extract a whole messy theory from it).

My main thought on these issues is to set up a stronger divergence between killing someone and not bringing them into existence. For example, we could restrict preference-satisfaction to existing beings (and future existing beings). So if they don't want to be killed, that counts as a negative if we do that, even if we replace them with someone happier.

This has degenerate solutions too - it incentivises producing beings that are very easy to satisfy and that don't mind being killed. But note that "create beings that score max on this utility scale, even if they aren't conscious or human" is a failure mode for average and total utilitarianism as well, so this isn't a new problem.

You can get mind states that are ambiguous mixes of awake and asleep.

I am having trouble parsing this statement. Does it mean that when simulating a mind you could also simulate ambiguous awake/asleep in addition to simulating sleep and wakefulness? Or does it mean that a stored, unsimulated mind is ambiguously neither awake or asleep?

2Donald Hobson
There are states that existing humans sometimes experience, like sleepwalking, microsleeps, etc., that are ambiguous. Whether or not a digital mind is being simulated is a much crisper definition.

Thanks for the reply. It sounds like maybe my mistake was assuming that unsimulated brain data was functionally and morally equivalent to an unconscious brain. From what you are saying it sounds like the data would need to be simulated even to generate unconsciousness.

3Donald Hobson
Yes, to get a state equivalent to sleeping, you are still simulating the neurons. You can get mind states that are ambiguous mixes of awake and asleep.

And much like Vaniver below (above? earlier!), I am unsure how to translate these sorts of claims into anything testable

One thing I consider very suspicious is that deaf people often don't just deny the terminal value of hearing. They also deny its instrumental value. The instrumental values of hearing are obvious. This indicates to me that they are denying it for self-esteem reasons and group loyalty reasons, the same way I have occasionally heard multiculturalists claim behaviors of obvious instrumental value (like being on time) are merely the sub... (read more)

0Vaniver
I think some parallels still go through, if you consider the difference between "sex is for recreation!" (the queer-friendly view) and "sex is for procreation!" (the queer-unfriendly view). I don't see anyone claiming that heterosexual sex never leads to babies, but I do see a lot of people trivializing the creation of babies.

it may be clearer to consider counterfactual mes of every possible sexual orientation, and comparing the justifications they can come up with for why it's egosyntonic to have the orientation that they have.

I think that maybe all of them would be perfectly justified in saying that their sexual orientation is a terminal value and the buck stops there.

On the other hand, I'm nowhere near 100% sure I wouldn't take a pill to make me bisexual.

If you kept all of my values the same and deleted my sexual orientation, what would regrow?

I think a way to hel... (read more)

It seems to me that most people lack the ability to be aroused by people--typically, their ability is seriously limited, to half of the population at most.

When I was talking about being queer I wasn't just talking about the experience of being aroused, I was talking about the desire to have that experience, and that experience being egosyntonic. It's fairly easy to rephrase any preference a person has to sound like an ability or lack thereof. For instance, you could say that I lack the ability to enjoy skinning people alive. But that's because I don'... (read more)

2Vaniver
Then I see how your claim that most queers are egosyntonic flows through, but it seems like reversing the order of how things go. I visualize the typical experience as something like "id wants X -> ego understands id wants X -> superego approves of id wanting X," with each arrow representing a step that not everyone takes.

I agree, but I observe that there's a difficulty in using egosyntonicity (which I would describe as both wanting X and wanting to want X) without a clear theory of meta-values (i.e. "I want to want X because wanting X is consistent with my other wants" is what it looks like to use consistency as a meta-value).

I was unclear--I meant emptiness with regards to sexual orientation, not values in general. One could imagine, say, someone who wants to become a priest choosing asexuality, and someone who wants to get ahead in fashion design choosing to be gay, someone who wants to have kids naturally choosing to be heterosexual, and so on. If you kept all of my values the same and deleted my sexual orientation, what would regrow? Compare to the "if you deleted all proofs of the Pythagorean Theorem from my mind, would I be able to reinvent it?" thought experiment.

(Since we are talking about values instead of beliefs, and it's not obvious that values would 'regrow' similar to beliefs, it may be clearer to consider counterfactual mes of every possible sexual orientation, and comparing the justifications they can come up with for why it's egosyntonic to have the orientation that they have. It seems some of them will have an easier time of it than others, but that all of them will have an easy enough time that it's not clear I should count my justification as worth much.)

But since then, you've concluded that being queer isn't actually something (at least some people, like me) differentially approve of.

I'm not sure what I wrote that gave you this idea. I do think that queer people approve of being queer. What I'm talking about when I say "approval" is preferences that are ego-syntonic, that are in line with the kind of person they want to be. Most queer people consider their preference to be ego-syntonic. Being queer is the kind of person they want to be and they would not change it if they could. Those who do... (read more)

2TheOtherDave
I'm not sure what I wrote that gave you this idea.

(nods) Months later, neither am I. Perhaps I'd remember if I reread the exchange, but I'm not doing so right now. Regardless, I appreciate the correction.

And much like Vaniver below (above? earlier!), I am unsure how to translate these sorts of claims into anything testable. Also I'm wary of the tendency to reason as follows: "I don't value being deaf. Therefore deafness is not valuable. Therefore when people claim to value being deaf, they are confused and mistaken. Here, let me list various reasons why they might be confused and mistaken."

I mean, don't get me wrong: I share this intuition. I just don't trust it. I can't think of anything a deaf person could possibly say to me that would convince me otherwise, even if I were wrong.

Similarly, if someone were to say "I believe that being queer is not ego-syntonic. I know people say it is, but I believe that's because they're confused and mistaken, for various reasons: x, y, z" I can't think of anything I could possibly say to them to convince them otherwise. (Nor is this a hypothetical case: many people do in fact say this.)
0Vaniver
It seems to me that most people lack the ability to be aroused by people--typically, their ability is seriously limited, to half of the population at most. I suspect that most, if not all, queers have a preference to be queer (if they do) for this reason. But it's not clear to me how to even test this one way or the other--even if one asked the hypothetical question "if there were a pill to make you straight, would you take it?" that puts one into far-mode, not near-mode, and it's very possible that people will pick answers to please the community of potential romantic partners. (If you say "yes, I'd like to be straight," that'll increase your attractiveness to opposite-sex partners, but not actually increase their attractiveness to you!) (I have thought, at times, 'how convenient to be gay, since I probably would get along much better with men than women!', but I can't claim that I would choose to be gay for that reason, starting from emptiness. Why not bisexuality? Why not asexuality?)
1gjm
It looks to me as if you may be mixing up being queer and preferring to be queer. It's true that people tend to find themselves approving the way they actually are, but (as you actually acknowledge) there are queer people who would much prefer not to be queer, perhaps for very bad reasons, and I think there are also not-queer people who would prefer to be queer (I think I've seen, a few years ago on LW, a discussion of the possibility of hacking oneself to be bisexual).

I would say (in terms of the want/like/approve trichotomy already referenced) that what defines a person as queer is that they want and like sexual/romantic relationships that don't fit the traditional heteronormative model. Approving or disapproving of such relationships is a separate matter. If tomorrow someone convinces Dave that fundamentalist Christianity is correct then he may start disapproving of queerness and wishing he weren't queer, but he still will be. It may well be, as you suggest, that approval is actually a more important part of your personality than wants and likes, but that doesn't mean that everything needs to be understood in terms of approval rather than wants and likes.

But does not wanting to change indicate that Dave's queerness is really more about approving than about wanting and liking? I don't think so. If someone pointed a weird science-fiction-looking device at me and announced that it would rewire my brain to make me stop wanting-and-liking chocolate and start wanting-and-liking aubergines, I would want them not to do it -- but I don't (I'm pretty sure) approve of liking chocolate and disliking aubergine any more than I do of the reverse. It's just that I don't want someone rewiring my brain. It seems very plausible to me that Dave's queerness might be like my liking for chocolate in this respect.

I also wonder whether we're at risk of being confused by the variety of meanings of "approve". Perhaps that trichotomy needs to be a tetrachotomy or something. In partic

Rereading your original comment keeping in mind that you're talking mostly about approval rather than desire or preference... so, would you say that Deaf people necessarily disapprove of deafness?

I'd say that a good portion of them do approve of it. There seem to be a lot of disability rights activists who think that being disabled and making more disabled people is okay.

I should also mention, however, that I do think it is possible to mistakenly approve or disapprove of something. For instance I used to disapprove of pornography and voluntary... (read more)

0TheOtherDave
So, with that in mind, I go back to your original comment that there is a fundamental difference between being queer and being deaf. If I understand correctly, the difference you were seeing was that being queer was a "value," which is related to it being something that queer people differentially approve of. Whereas deafness was an ability, which was importantly different.

But since then, you've concluded that being queer isn't actually something (at least some people, like me) differentially approve of. But you also believe that many Deaf people approve of deafness... you just think they're mistaken to do so. Have I got that right?

I have to admit, I have trouble making all of that stuff cohere; it mostly seems to cash out as "Ghatanathoah believes being queer is different from being deaf, because Ghatanathoah disapproves of being deaf but doesn't disapprove of being queer." Which I assume is an unfair characterization. But perhaps you can understand why it seems that way to me, and thereby help me understand what I'm misunderstanding in your position?

It's not clear to me how this difference justifies the distinction in my thinking I was describing.

I believe the difference is that in the case of deaf people, you are improving their lives by giving them more abilities to achieve the values they have (in this case, an extra sense). By contrast, with queerness you are erasing a value a person has and replacing it with a different value that is easier to achieve. I believe that helping a person achieve their existing values is a laudable goal, but that changing a person's values is usually morally prob... (read more)

1TheOtherDave
Interesting. So, speaking personally, I approve of people seeking same-sex mates, I approve of us seeking opposite-sex mates, I approve of us seeking no mates at all, I approve of various other possibilities and none of this seems especially relevant to what I'm talking about when I describe myself as queer. People just as queer as I am could have completely different approval patterns. So, yes, as you say, I'm not envisioning having what I approve of modified when I talk about not being queer, merely what I "want" and "like". Straight-Dave approves of all the same things that queer-Dave does, he just desires/prefers different mates. Rereading your original comment keeping in mind that you're talking mostly about approval rather than desire or preference... so, would you say that Deaf people necessarily disapprove of deafness? It sounds that way from the way you talk about it, but I want to confirm that.

I acknowledge that life is more difficult in certain readily quantifiable ways for queer people than for straight people, but it doesn't follow that I would use a reliable therapy for making me straight if such a thing existed... and in fact I wouldn't. Nor would I encourage the development of such a therapy, particularly, and indeed the notion of anyone designing such a therapy makes me more than faintly queasy. And if it existed, I'd be reluctant to expose my children to it. And I would be sympathetic to claims that developers and promoters of such a te

... (read more)
1TheOtherDave
So, I agree that there's a difference between being queer and being deaf along the lines of what you describe. It's not clear to me how this difference justifies the distinction in my thinking I was describing. How do we tell whether what I value is to find a mate of Type A, or to find a mate I find attractive? I'm pretty sure I disagree with this completely. If I woke up tomorrow morning and I was no longer sexually attracted to men, that would be startling, and it would be decidedly inconvenient in terms of my existing marriage, but I wouldn't be someone else, any more than if I stopped being sexually attracted to anyone, or stopped liking the taste of beef, or lost my arm. Is this simply a semantic disagreement -- that is, do we just have different understandings of what the phrase "who I am" refers to? Or is there something we'd expect to observe differently in the world were you correct and I mistaken about this?

You are, in this very post, questioning and saying that your utility function is PROBABLY this and that you don't think there's uncertainty about it... That is, you display uncertainty about your utility function. Check mate.

Even if I was uncertain about my utility function, you're still wrong. The factor you are forgetting about is uncertainty. With a bounded utility function infinite utility scores the same as a smaller amount of utility. So you should always assume a bounded utility function, because unbounded utility functions don't offer any more utili... (read more)

It seems to me that the project of transhumanism in general is actually the project of creating artificial utility monsters. If we consider a utility monster a creature that can transmute resources into results more efficiently, that's essentially what a transhuman is.

In a world where all humans have severe cognitive and physical disabilities and die at the age of 30 a baseline human would be a utility monster. They would be able to achieve far more of their life goals and desires than all other humans would. Similarly, a transhuman with superhuman cog... (read more)

I suspect that calling your utility function itself into question like that isn't valid in terms of expected utility calculations.

I think what you're suggesting is that on top of our utility function we have some sort of meta-utility function that just says "maximize your utility function, whatever it is." That would fall into your uncertainty trap, but I don't think that is the case; I don't think we have a meta-function like that, I think we just have our utility function.

If you were allowed to cast your entire utility function into doubt yo... (read more)

0Armok_GoB
You are, in this very post, questioning and saying that your utility function is PROBABLY this and that you don't think there's uncertainty about it... That is, you display uncertainty about your utility function. Check mate. Also, "infinity=infinity" is not the case. Infinity is not a number, and the problem goes away if you use limits. Otherwise, yes, I probably even have unbounded but very slow-growing factors for a bunch of things like that.

This tends to imply the Sadistic Conclusion: that it is better to create some lives that aren't worth living than it is to create a large number of lives that are barely worth living.

I think that the Sadistic Conclusion is correct. I argue here that it is far more in line with typical human moral intuitions than the repugnant one.

There are several "impossibility" theorems that show it is impossible to come up with a way to order populations that satisfies all of a group of intuitively appealing conditions.

If you take the underlying princi... (read more)

It's worth noting that the question of what is a better way of evaluating such prospects is distinct from the question of how I in fact evaluate them.

Good point. What I meant was closer to "which method of evaluation does the best job of capturing how you intuitively assign value" rather than which way is better in some sort of objective sense. For me #1 seems to describe how I assign value and disvalue to repeating copies better than #2 does, but I'm far from certain.

So I think that from my point of view Omega offering to extend the length ... (read more)

I think I understand your viewpoint. I do have an additional question though, which is what you think about how to to evaluate moments that have a combination of good and bad.

For instance, let's suppose you have the best day ever, except that you had a mild pain in your leg for most of the day. All the awesome stuff you did during the day more than made up for that mild pain though.

Now let's suppose you are offered the prospect of having a copy of you repeat that day exactly. We both agree that doing this would add no additional value, the question i... (read more)

1TheOtherDave
It's worth noting that the question of what is a better way of evaluating such prospects is distinct from the question of how I in fact evaluate them. I am not claiming that having multiple incomensurable metrics for evaluating the value of lived experience is a good design, merely that it seems to be the way my brain works. Given the way my brain works, I suspect repeating a typical day as you posit would add disvalue, for reasons similar to #2. Would it be better if I instead evaluated it as per #1? Yeah, probably. Still better would be if I had a metric for evaluating events such that #1 and #2 converged on the same answer.

For my own part, I share your #1 and #2, don't share your #3 (that is, I'd rather Omega not reproduce the bad stuff, but if they're going to do so, it makes no real difference to me whether they reproduce the good stuff as well)

One thing that makes me inclined towards #3 is the possibility that the multiverse is constantly reproducing my life over and over again, good and bad. I do not think that I would consider it devastatingly bad news if it turns out that the Many-Worlds interpretation is correct.

If I really believed that repeated bad experiences... (read more)

1TheOtherDave
Yup, that makes sense, but doesn't seem to describe my own experience.

For my own part, I think the parts of my psyche that judge the kinds of negative scenarios we're talking about use a different kind of evaluation than the parts that judge the kinds of positive scenarios we're talking about. I seem to treat the "bad stuff" as bad for its own sake... avoiding torture feels worth doing, period end of sentence. But the "good stuff" feels more contingent, more instrumental, feels more like it's worth doing only because it leads to... something.

This is consistent with my experience of these sorts of thought experiments more generally... it's easier for me to imagine "pure" negative value (e.g., torture, suffering, etc in isolation) than "pure" positive value (e.g., joy, love, happiness, satisfaction in isolation). It's hard for me to imagine some concrete thing that I would actually trade for a year of torture, for example, though in principle it seems like some such thing ought to exist.

And it makes some sense that there would be a connection between how instrumental something feels, and how I think about the prospect of repeating it. If torture feels bad for its own sake, then when I contemplate repetitions of the same torture, it makes sense that I would "add up the badness" in my head... and if good stuff doesn't feel good for its own sake, it makes sense that I wouldn't "add up the goodness" in my head in the same way.

WRT #4, what I'm saying is that copying the good moments feels essentially valueless to me, while copying the bad moments has negative value. So I'm being offered a choice between "bad thing + valueless thing" and "bad thing", and I don't seem to care. (That said, I'd probably choose the former, cuz hey, I might be wrong.)

I don't see anything inconsistent about believing that a good life loses values with repetition, but a bad life does not lose disvalue. It's consistent with the Value of Boredom, which I thoroughly endorse.

Now, there's a similar question where I think my thoughts on the subject might get a little weird. Imagine you have some period of your life that started out bad, but then turned around and then became good later so that in the end that period of life was positive on the net. I have the following preferences in regards to duplicating it:

  1. I would not p

... (read more)
1TheOtherDave
I agree with you that my preferences aren't inconsistent, I just value repetition differently for +v and -v events. For my own part, I share your #1 and #2, don't share your #3 (that is, I'd rather Omega not reproduce the bad stuff, but if they're going to do so, it makes no real difference to me whether they reproduce the good stuff as well), and share your indifference in #4.

It seems like there's an easy way around this problem. Praise people who are responsible and financially well-off for having more kids. These traits are correlated with good genes and IQ, so it'll have the same effect.

It seems like we already do this to some extent. I hear others condemning people who are irresponsible and low-income for having too many children fairly frequently. It's just that we fail to extend this behavior in the other direction, to praising responsible people for having children.

I'm not sure why this is. It could be for one... (read more)

I'm very unfamiliar with it, but intuitively I would have assumed that the preferences in question wouldn't be all the preferences that the agent's value system could logically be thought to imply, but rather something like the consciously held goals at some given moment

I don't think that would be the case. The main intuitive advantage negative preference utilitarianism has over negative hedonic utilitarianism is that it considers death to be a bad thing, because it results in unsatisfied preferences. If it only counted immediate consciously held goal... (read more)

I guess I see a set of all possible types of sentient minds with my goal being to make the universe as nice as possible for some weighted average of the set.

I used to think that way, but it resulted in what I considered to be too many counterintuitive conclusions. The biggest one, that I absolutely refuse to accept, being that we ought to kill the entire human race and use the resources doing that would free up to replace them with creatures whose desires are easier to satisfy. Paperclip maximizers or wireheads for instance. Humans have such picky, c... (read more)

A bounded utility function does help matters, but then everything depends on how exactly it's bounded, and why one has chosen those particular parameters.

Yes, and that is my precise point. Even if we assume a bounded utility function for human preferences, I think it's reasonable to assume that it's a pretty huge function. Which means that antinatalism/negative preference utilitarianism would be willing to inflict massive suffering on existing people to prevent the birth of one person who would have a better life than anyone on Earth has ever had up to t... (read more)

1Kaj_Sotala
Is that really how preference utilitarianism works? I'm very unfamiliar with it, but intuitively I would have assumed that the preferences in question wouldn't be all the preferences that the agent's value system could logically be thought to imply, but rather something like the consciously held goals at some given moment. Otherwise total preference utilitarianism would seem to reduce to negative preference utilitarianism as well, since presumably the unsatisfied preferences would always outnumber the satisfied ones.

I'm confused. How is wanting to live forever in a situation where you don't think that living forever is possible, different from any other unsatisfiable preference?

That doesn't sound right. The disutility is huge, yes, but the probability is so low that focusing your efforts on practically anything with a non-negligible chance of preventing further births would be expected to prevent many times more disutility. Like supporting projects aimed at promoting family planning and contraception in developing countries, pro-choice policies and attitudes in your own country, rape prevention efforts to the extent that you think rape causes unwanted pregnancies that are nonetheless carried to term, anti-natalism in general (if you think you can do it in a way that avoids the PR disaster for NU in general), even general economic growth if you believe that the connection between richer countries and smaller families is a causal and linear one. Worrying about vanishingly low-probability scenarios, when that worry takes up cognitive cycles and thus reduces your chances of doing things that could have an even bigger impact, does not maximize expected utility.

I don't know. At least I personally find it very difficult to compare experiences of such differing magnitudes. Someone could come up with a number, but that feels like trying to play baseball with verbal probabilities - the number that they name might not have anything to do with what they'd actually choose

Speaking personally, I don't negatively weigh non-aversive sensory experiences. That is to say, the billions of years of unsatisfied preferences are only important for that small subset of humans for whom knowing about the losses causes suffering.

If I understand you correctly, the problem with doing this with negative utilitarianism is that it suggests we should painlessly kill everyone ASAP. The advantage of negative preference utilitarianism is that it avoids this because people have a preference to keep on living that killing would thwart.

It's wo

... (read more)

Not relevant because we are considering bringing these people into existence at which point they will be able to experience pain and pleasure.

Yes, but I would argue that the fact that they can't actually do that yet makes a difference.

Imagine you know that one week from now someone will force you to take heroin and you will become addicted. At this point you will be able to have an OK life if given a regular amount of the drug but will live in permanent torture if you never get any more of the substance. Would you pay $1 today for the ability to consu

... (read more)
0James_Miller
Interesting way to view it. I guess I see a set of all possible types of sentient minds with my goal being to make the universe as nice as possible for some weighted average of the set.

For me, however, it doesn't seem all that far from someone saying "I'm a utilitarian but my intuition strongly tells me that people with characteristic X are more important than everyone else so I'm going to amend utilitarianism by giving greater weight to the welfare of X-men."

There is a huge difference between discriminatory favoritism, and valuing continued life over adding new people.

In discriminatory favoritism people have a property that makes them morally valuable (i.e. the ability to have preferences, or to feel pleasure and pain). The... (read more)

0James_Miller
But not in any absolute sense, just because this is consistent with your moral intuition.

Not relevant because we are considering bringing these people into existence at which point they will be able to experience pain and pleasure.

Imagine you know that one week from now someone will force you to take heroin and you will become addicted. At this point you will be able to have an OK life if given a regular amount of the drug but will live in permanent torture if you never get any more of the substance. Would you pay $1 today for the ability to consume heroin in the future?

Though now that you point it out, it is a problem that, under this model, creating a person who you don't expect to live forever has a very high (potentially infinite) disutility. Yeah, that breaks this suggestion. Only took a couple of hours, that's ethics for you. :)

Oddly enough, right before I noticed this thread I posted a question about this on the Stupid Questions Thread.

My question, however, was whether this problem applies to all forms of negative preferences utilitarianism. I don't know what the answer is. I wonder if SisterY or one of the o... (read more)

What amount of disutility does creating a new person generate in Negative Preference Utilitarian ethics?

I need to elaborate in order to explain exactly what question I am asking: I've been studying various forms of ethics, and when I was studying Negative Preference Utilitarianism (or anti-natalism, as I believe it's often also called) I came across what seems like a huge, titanic flaw that seems to destroy the entire system.

The flaw is this: The goal of negative preference utilitarianism is to prevent the existence of unsatisfied preferences. This means... (read more)

2Kaj_Sotala
(To the extent that I'm negative utilitarian, I'm a hedonistic negative utilitarian, so I can't speak for the preference NUs, but...)

Note that every utilitarian system breaks once you introduce even the possibility of infinities. E.g. a hedonistic total utilitarian will similarly run into the problem that, if you assume that a child has the potential to live for an infinite amount of time, then the child can be expected to experience both an infinite amount of pleasure and an infinite amount of suffering. Infinity minus infinity is undefined, so hedonistic total utilitarianism would be incapable of assigning a value to the act of having a child. Now saving lives is in this sense equivalent to having a child, so the value of every action that has even a remote chance of saving someone's life becomes undefined as well... A bounded utility function does help matters, but then everything depends on how exactly it's bounded, and why one has chosen those particular parameters.

I take it you mean to say that they don't spend all of their waking hours convincing other people not to have children, since it doesn't take that much effort to avoid having children yourself. One possible answer is that loudly advocating "you shouldn't have children, it's literally infinitely bad" is a horrible PR strategy that will just get your movement discredited, and e.g. talking about NU in the abstract and letting people piece together the full implications themselves may be more effective. Also, are they all transhumanists? For the typical person (or possibly even typical philosopher), infinite lifespans being a plausible possibility might not even occur as something that needs to be taken into account.

Does any utilitarian system have a good answer to questions like these? If you ask a total utilitarian something like "how much morning rush-hour frustration would you be willing to inflict on people in order to prevent an hour of intense torture, and how exactly did you go about calculating the
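
A toy arithmetic illustration of the "infinity minus infinity is undefined" point: three sequences, each containing infinitely many units of pleasure and infinitely many units of suffering, whose running totals behave completely differently depending only on how the units are interleaved:

```python
# Why "infinite pleasure minus infinite suffering" has no well-defined value:
# the answer depends entirely on how the terms are paired up.

def partial_sum(pattern, n_terms=30_000):
    """Sum the first n_terms of an infinite +1/-1 sequence given by `pattern`,
    a short list that repeats forever (+1 = a unit of pleasure, -1 = a unit
    of suffering)."""
    total = 0
    for i in range(n_terms):
        total += pattern[i % len(pattern)]
    return total

print(partial_sum([+1, -1]))          # hovers around 0
print(partial_sum([+1, +1, -1]))      # grows without bound
print(partial_sum([+1, -1, -1]))      # falls without bound
```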
1RomeoStevens
Speaking personally, I don't negatively weigh non-aversive sensory experiences. That is to say, the billions of years of unsatisfied preferences are only important for that small subset of humans for whom knowing about the losses causes suffering. Death is bad and causes negative experiences. I want to solve death before we have more kids, but I recognize this isn't realistic. It's worth pointing out that negative utilitarianism is incoherent. Prioritarianism makes slightly more sense.

It is also worth noting that average utilitarianism has also its share of problems: killing off anyone with below-maximum utility is an improvement.

No it isn't. This can be demonstrated fairly simply. Imagine a population consisting of 100 people. 99 of those people have great lives, 1 of those people has a mediocre one.

At the time you are considering killing the person with the mediocre life, he has accumulated 25 utility. If you let him live he will accumulate 5 more utility. The 99 people with great lives will accumulate 100 utility o... (read more)

I wonder what a CEV-implementing AI would do with such cases.

Even if it does turn out that my current conception of personal identity isn't the same as my old one, but is rather a similar concept I adopted after realizing my values were incoherent, the AI might still find that the CEVs of my past and present selves concur. This is because, if I truly did adopt a new concept of identity because of its similarity to my old one, this suggests I possess some sort of meta-value that values taking my incoherent values and replacing them with coherent ones ... (read more)

0Lukas_Gloor
Maybe, but I doubt whether "as similar as possible" is (or can be made) uniquely denoting in all specific cases. This might sink it.

Granted, negative utilitarians would prefer to add a small population of beings with terrible lives over a very large population of beings with lives that are almost ideal, but this would not be a proper instance of the Sadistic Conclusion. See the formulation:

When I read the formulation of the Sadistic Conclusion I interpreted "people with positive utility" to mean either a person whose life contained no suffering, or a person whose satisfied preferences/happiness outweighed their suffering. So I would consider adding a small population of terrible lives ... (read more)

1Lukas_Gloor
I agree with your points on the Sadistic Conclusion issue. Arrhenius acknowledges that his analysis depends on the (to him trivial) assumption that there are "positive" welfare levels. I don't think this axiom is trivial because it interestingly implies that non-consciousness somehow becomes "tarnished" and non-optimal. Under a Buddhist view of value, this would be different.

If all one person cared about was to live for at least 1'000 years, and all a second person cared about was to live for at least 1'000'000 years (and after their desired duration they would become completely indifferent), would the death of the first person at age 500 be less tragic than the death of the second person at age 500'000? I don't think so, because assuming that they value partial progress on their ultimate goal the same way, they both ended up reaching "half" of their true and only goal. I don't think the first person would somehow care less in overall terms about achieving her goal than the second person. To what extent would this way of comparing preferences change things?

I think the point you make here is important. It seems like there should be a difference between beings who have only one preference and beings who have an awful lot of preferences. Imagine a chimpanzee with a few preferences and compare him to a sentient AGI, say. Would both count equally? If not, how would we determine how much their total preference (dis)satisfaction is worth? The example I gave above seems intuitive because we were talking about humans who are (as specified by the unwritten rules of thought experiments) equal in all relevant respects. With chimps vs. AI it seems different. I'm actually not sure how I would proceed here, and this is of course a problem.

Since I'd (in my preference-utilitarianism mode) only count the preferences of sentient beings and not e.g. the revealed preferences of a tree, I would maybe weight the overall value by something like "intensity of sentience". However, I suspec