Adriano_Mannino comments on Four Focus Areas of Effective Altruism - Less Wrong

Post author: lukeprog 09 July 2013 12:59AM | 40 points

Comment author: Adriano_Mannino 13 July 2013 03:10:21AM 10 points

Thanks, Luke, great overview! Just one thought:

Many EAs value future people roughly as much as currently-living people, and therefore think that nearly all potential value is found in the well-being of the astronomical numbers of people who could populate the far future

This suggests that alternative views are necessarily based on ethical time preference (and time preference does indeed seem irrational). But that's incorrect. It's possible to care about the well-being of everyone equally (no matter their spatio-temporal coordinates) without wanting to fill the universe with happy people. I think there is something true and important about the slogan "Make people happy, not happy people", although it is non-trivial to spell out what that something is.

Comment author: Nick_Beckstead 20 July 2013 06:04:24PM 2 points

It's not really clear to me that negative utilitarians and people with person-affecting views need to disagree with the quoted passage as stated. These views focus primarily on the suffering aspect of well-being, and nearly all of the possible suffering is found in the astronomical numbers of people who could populate the far future.

To elaborate: in my dissertation I assume, as most people would, that a future where humans have great influence would be a good thing. But I don't argue for that, and some people might disagree. If that's the only thing you disagree with me about, it seems you actually still end up accepting my conclusion that what matters most is making humanity's long-term future development go as well as possible; you just end up focusing on different aspects of doing so.

Comment author: Adriano_Mannino 03 August 2013 04:22:52AM 6 points

Hi Nick, thanks! I do indeed fully agree with your general conclusion that what matters most is making our long-term development go as well as possible. (I had something more specific in mind when speaking of "Bostrom's and Beckstead's conclusions" here, sorry about the confusion.) In fact, I consider your general conclusion very obvious. :) (What's difficult is the empirical question of how best to affect the far future.) The obviousness of your conclusion doesn't imply that your dissertation wasn't super-important, of course: most people seem to disagree with the conclusion. Unfortunately, though, the utility of talking about (affecting) the far future is a tricky issue too, given fundamental disagreements in population ethics.

I don't know that the "like most people would" parenthetical is true. (A "good thing", maybe, but a morally urgent thing to bring about, if the counterfactual isn't existence with less well-being but non-existence?) I'd like to see some solid empirical data here. I think some people are in the process of collecting it.

Do you not argue for that at all? I thought you were going in the direction of establishing an axiological and deontic parallelism between the "wretched child" and the "happy child".

The quoted passage ("all potential value is found in [the existence of] the well-being of the astronomical numbers of people who could populate the far future") strongly suggests a classical total population ethics, which is rejected by negative utilitarianism and person-affecting views. And the "therefore" suggests that the crucial issue here is time preference, which is a popular and incorrect perception.

Comment author: Nick_Beckstead 03 August 2013 08:34:46AM 1 point

Do you not argue for that at all? I thought you were going in the direction of establishing an axiological and deontic parallelism between the "wretched child" and the "happy child".

I do some of that in chapter 4. I don't engage with speculative arguments that the future will be bad (e.g. the dystopian scenarios that negative utilitarians like to discuss) or make my case by appealing to positive trends of the sort discussed by Pinker in Better Angels. Carl Shulman and I are putting together some thoughts on some of these issues at the moment.

The quoted passage ("all potential value is found in [the existence of] the well-being of the astronomical numbers of people who could populate the far future") strongly suggests a classical total population ethics, which is rejected by negative utilitarianism and person-affecting views. And the "therefore" suggests that the crucial issue here is time preference, which is a popular and incorrect perception.

Maybe so. I think the key is how you interpret the word "value." If you interpret it as "only positive value", then negative utilitarians disagree, but only because they think there isn't any possible positive value. If you interpret it as "positive or negative value", I think they should agree for pretty straightforward reasons.

Comment author: Adriano_Mannino 04 August 2013 01:52:55AM 6 points

One important reason they like to discuss them is the fact that many people just assume, without adequate consideration and argument, that the future will be hugely net positive, which comes as no surprise given the existence of relevant biases.

Whether negative utilitarians believe that "there isn't any possible positive value" is a matter of semantics, I think. The framing you suggest is probably a semantic (and thus bad) trigger of the absurdity heuristic. With equal substantive justification and more semantic charity, one could say that negative utilitarians believe that the absence of suffering/unfulfilled preferences, or suffering-free world-states, have positive value (and one may add either that they believe that the existence of suffering/unfulfilled preferences has negative value or that they believe there isn't any possible negative value).

Comment author: lukeprog 13 July 2013 03:17:00AM 1 point

Okay, I removed the "therefore."

Comment author: Adriano_Mannino 15 July 2013 09:16:32PM 7 points

OK, but why would the sentence start with "Many EAs value future people roughly as much as currently-living people" if there wasn't an implied inferential connection to nearly all value being found in astronomical far future populations? The "therefore" is still implicitly present.

It's not entirely without justification, though. It's true that the rejection of a (very heavy) presentist time preference/bias is necessary for Bostrom's and Beckstead's conclusions. So there's weak justification for your "therefore": The rejection of presentist time preference makes the conclusions more likely.

But it's by no means sufficient for them. Bostrom and Beckstead need the further claim that bringing new people into existence is morally important and urgent.

This seems to be the crucial point. So I'd rather go for something like: "Many (Some?) EAs value/think it morally urgent to bring new people (with lives worth living) into existence, and therefore..."

The moral urgency of preventing miserable lives (or life-moments) is less controversial. People like Brian Tomasik place much more (or exclusive) importance on the prevention of lives not worth living, i.e. on ensuring the well-being of everyone that will exist rather than on making as many people exist as possible. The issue is not whether (far) future lives count as much as lives closer to the present. One can agree that future lives count equally, and also agree that far future considerations dominate the moral calculation (empirical claims enter the picture here). But one may disagree on "Omelas and Space Colonization", i.e. on how many lives worth living are needed to "outweigh" or "compensate for" miserable ones (which our future existence will inevitably also produce, probably in astronomical numbers, assuming astronomical population expansion). So it's possible to agree that future lives count equally and that far future considerations dominate but to still disagree on the importance of x-risk reduction or more particular things such as space colonization.

Comment author: lukeprog 17 July 2013 05:51:02AM 3 points

Right, the first clause is there as a necessary but not sufficient part of the standard reason for focusing on the far future, and the sentence works now that I've removed the "therefore."

The reason I'd rather not phrase things as "morally urgent to bring new people into existence" is that such phrasing suggests presentist assumptions. I'd rather use a sentence with non-presentist assumptions, since presentism is probably rejected by a majority of physicists by now, and also rejected by me. (It's also rejected by the majority of EAs with whom I've discussed the issue, but that's not actually noteworthy because it's such a biased sample of EAs.)

Comment author: Adriano_Mannino 19 July 2013 01:08:48AM 9 points

Is it the "bringing into existence" and the "new" that suggests presentism to you? (Which I also reject, btw. But I don't think it's of much relevance to the issue at hand.) Even without the "therefore", it seems to me that the sentence suggests that the rejection of time preference is what does the crucial work on the way to Bostrom's and Beckstead's conclusions, when it's rather the claim that it's "morally urgent/required to cause the existence of people (with lives worth living) that wouldn't otherwise have existed", which is what my alternative sentence was meant to mean.

Comment author: lukeprog 19 July 2013 01:51:40AM 1 point

I confess I'm not that motivated to tweak the sentence even further: it seems like a small semantic point, I don't understand the advantages of your phrasing, and I've provided links to more thorough discussions of these issues, for example Beckstead's dissertation. Maybe it would help if you explained what kind of reasoning you are using to identify which claims are "doing the crucial work"? Or we could just let it be.

Comment author: Adriano_Mannino 19 July 2013 04:32:31AM 7 points

Yeah, I've read Nick's thesis, and I think the moral urgency of filling the universe with people is a more important basis for his conclusion than the rejection of time preference. The sentence suggests that the rejection of time preference is what's most important.

If I get him right, Nick agrees that the time issue is much less important than you suggested in your recent interview.

Sorry to insist! :) But when you disagree with Bostrom's and Beckstead's conclusions, people immediately assume that you must be valuing present people more than future ones. And I'm constantly like: "No! The crucial issue is whether the non-existence of people (where there could be some) poses a moral problem, i.e. whether it's morally urgent to fill the universe with people. I doubt it."

Comment author: lukeprog 19 July 2013 06:18:42AM 1 point

Okay, so we're talking about two points: (1) whether current people have more value than future people, and (2) whether it would be super-good to create gazillions of super-good lives.

My sentence mentions both of those, in sequence: "Many EAs value future people roughly as much as currently-living people [1], and think that nearly all potential value is found in the well-being of the astronomical numbers of people who could populate the far future [2]..."

And you are suggesting... what? That I switch the order in which they appear, so that [2] appears before [1], and is thus emphasized? Or that I use your phrase "morally urgent to" instead of "nearly all potential value is found in..."? Or something else?

Comment author: Adriano_Mannino 03 August 2013 06:23:33AM 5 points

Sorry for the delay!

I forgot to clarify the rough argument for why (1) "value future people equally" is much less important or crucial than (2) "fill the universe with people" here.

If you accept (2), you're almost guaranteed to be on board with where Bostrom and Beckstead are roughly going (even if you value present people more!). It's then hardly possible to block their argument on normative grounds, and criticism would have to be empirical, e.g. based on the claim that dystopian futures may be likelier than commonly assumed, which would decrease the value of x-risk reduction.

By contrast, if you accept (1), it's still very much an open question whether you'll be on board.

Also, intrinsic time preference is really not an issue among EAs. The idea that spatial and temporal distance are irrelevant when it comes to helping others is a pretty core element of the EA concept. What is an issue, though, is the question of what helping others actually means (or should mean). Who are the relevant others? Persons? Person-moments? Preferences? And how are they relevant? Should we ensure the non-existence of suffering? Or promote ecstasy too? Prevent the existence of unfulfilled preferences? Or create fulfilled ones too? Can you help someone by bringing them into existence? Or only by preventing their miserable existence/unfulfilled preferences? These issues are more controversial than the question of time preference. Unfortunately, they're of astronomical significance.

I don't really know if I'm suggesting any further specific change to the wording - sorry about that. It's tricky... If you're speaking to non-EAs, it's important to emphasize the rejection of time preference. But there shouldn't be a "therefore", which (in my perception) is still implicitly there. And if you're speaking to people who already reject time preference, it's even more important to make it clear that this rejection doesn't imply "fill the universe with people". One solution could be to simply drop the reference to the (IMO non-decisive) rejection of time preference and go for something like: "Many EAs consider the creation of (happy) people valuable and morally urgent, and therefore think that nearly all potential value..."

Beckstead might object that the rejection of heavy time preference is important to his general conclusion (the overwhelming importance of shaping the far future). But if we're talking about that level of generality, then the reference to x-risk reduction should probably go or be qualified, for sufficiently negative-leaning EAs (such as Brian Tomasik) believe that x-risk reduction is net negative.

Perhaps the best solution would be to expand the section and start by mentioning how the (EA-uncontroversial) rejection of time preference is relevant to the overwhelming importance of shaping the far future. Once we've established that the far future likely dominates, the question arises of how we should morally affect it. Depending on the answer, very different conclusions can follow, e.g. with regard to the importance and even the sign of x-risk reduction.

Comment author: lukeprog 03 August 2013 08:00:34AM 2 points

I don't want to expand the section, because that makes it stand out more than is compatible with my aims for the post. And since the post is aimed at non-EAs and new EAs, I don't want to drop the point about time preference, as "intrinsic" time-discounting is a common view outside EA, especially for those with a background in economics rather than philosophy. So my preferred solution is to link to a fuller discussion of the issues, which I did (in particular, Beckstead's thesis). Anyway, I appreciate your comments.

Comment author: Wei_Dai 19 July 2013 06:30:58AM 6 points

But one may disagree on "Omelas and Space Colonization", i.e. on how many lives worth living are needed to "outweigh" or "compensate for" miserable ones (which our future existence will inevitably also produce, probably in astronomical numbers, assuming astronomical population expansion).

A superintelligent Singleton (e.g., FAI) can guarantee a minimum standard of living for everyone who will ever be born or created, so I don't understand why you think astronomical population expansion inevitably produces miserable lives.

Also, I note that space colonization can produce an astronomical number of QALYs even assuming no population growth, by letting currently existing people continue to live after all the negentropy in our solar system has been exhausted.

Comment author: Adriano_Mannino 03 August 2013 06:55:49AM 8 points

Yes, it can. But a Singleton is not guaranteed; and conditional on the future existence of a Singleton, friendliness is not guaranteed. What I meant was that astronomical population expansion clearly produces, in expectation, an astronomical number of utterly miserable, tortured lives.

Lots of dystopian future scenarios are possible. Here are some of them.

How many happy people for one miserable existence? - I take the zero option very seriously because I don't think that (anticipated) non-existence poses any moral problem or generates any moral urgency to act, while (anticipated) miserable existence clearly does. I don't think it would have posed any intrinsic problem whatsoever had I never been born; but it clearly would have been a problem had I been born into miserable circumstances.

But even if you do believe that non-existence poses a moral problem and creates an urgency to act, it's not clear yet that the value of the future is net positive. If the number of happy people you require for one miserable existence is sufficiently great and/or if dystopian scenarios are sufficiently likely, the future will be negative in expectation. Beware optimism bias, illusion of control, etc.
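To make this concrete, here is a minimal sketch in Python with purely made-up numbers (none of the figures below are estimates from this thread) showing how both factors (the required happy-to-miserable ratio and the probability of dystopian outcomes) can flip the sign of the expectation:

    # Purely illustrative numbers: the sign of the future's expected value depends
    # both on how many happy lives one requires to outweigh a single miserable life
    # and on the probability assigned to dystopian outcomes.

    def ev_future(p_dystopia, required_ratio):
        # Stylized scenarios (arbitrary units): a good future with 1000 happy lives
        # per miserable one, and a dystopia with 10 happy lives per 100 miserable ones.
        good_future = 1000.0 - required_ratio * 1.0
        dystopia = 10.0 - required_ratio * 100.0
        return (1 - p_dystopia) * good_future + p_dystopia * dystopia

    print(ev_future(p_dystopia=0.1, required_ratio=10.0))    # 792.0: positive
    print(ev_future(p_dystopia=0.1, required_ratio=2000.0))  # -20899.0: a demanding ratio flips the sign
    print(ev_future(p_dystopia=0.6, required_ratio=10.0))    # -198.0: so does a high dystopia probability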

Comment author: CarlShulman 03 August 2013 07:36:31AM 3 points

Lots of dystopian future scenarios are possible. Here are some of them. But even if you do believe that non-existence poses a moral problem and creates an urgency to act, it's not clear yet that the value of the future is net positive.

Even Brian Tomasik, the author of that page, says that if one trades off pain and pleasure at ordinary rates, the expected happiness of the future exceeds the expected suffering by a factor of between 2 and 50.

Comment author: Brian_Tomasik 03 August 2013 10:05:25AM 2 points

The 2-50 bounds seem reasonable for EV(happiness)/EV(suffering) using normal people's pleasure-pain exchange ratios, which I think are insane. :) Something like accepting 1 minute of medieval torture in exchange for a few days or weeks of good life.

Using my own pleasure-pain exchange ratio, the future is almost guaranteed to be negative in expectation, although I maintain a ~30% credence that it would still be better for Earth-based life to colonize space, in order to prevent counterfactual suffering elsewhere.
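To spell out the arithmetic, here is a minimal sketch with purely illustrative numbers (the factor of 10 and the 100x weight are assumptions for illustration, not estimates from this thread) of how the chosen exchange ratio determines the sign of the weighted expectation:

    # Illustrative only: how a pleasure-pain exchange ratio can flip the sign of
    # the future's weighted expected value.

    def weighted_net_value(ev_happiness, ev_suffering, suffering_weight):
        # Net expected value after scaling expected suffering by the exchange ratio.
        return ev_happiness - suffering_weight * ev_suffering

    # Suppose expected happiness exceeds expected suffering by a factor of 10
    # (within the quoted 2-50 range), measured in some common unit.
    print(weighted_net_value(10.0, 1.0, suffering_weight=1.0))    # 9.0: positive at a 1:1 ratio
    print(weighted_net_value(10.0, 1.0, suffering_weight=100.0))  # -90.0: negative if suffering is weighted 100x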

Comment author: Pablo_Stafforini 03 August 2013 11:44:17AM 2 points

It is worth pointing out that by 'insane', Brian just means 'an exchange rate that is very different from the one I happen to endorse.' :-) He admits that there is no reason to favor his own exchange rate over other people's. (By contrast, some of these other people would argue that there are reasons to favor their exchange rates over Brian's.)

Comment author: Adriano_Mannino 04 August 2013 02:00:33AM 1 point

Also, still others (such as David Pearce) would argue that there are reasons to favor Brian's exchange rate. :)

Comment author: Pablo_Stafforini 04 August 2013 02:44:55PM 0 points

That's incorrect. David Pearce claims that pains above a certain intensity can't be outweighed by any amount of pleasure. Both Brian and Dave agree that Dave is a (threshold) negative utilitarian whereas Brian is a negative-leaning utilitarian.

Comment author: Adriano_Mannino 04 August 2013 02:07:43AM 1 point

Regarding "people's ordinary exchange rates", I suspect that in cases people clearly recognize as altruistic, the rates are closer to Brian's than to yours. In cases they (IMO confusedly) think of as "egoistic", the rates may be closer to yours. - This provides an argument that people should end up with Brian upon knocking out confusion.

Comment author: CarlShulman 04 August 2013 03:15:30AM 3 points

I suspect that in cases people clearly recognize as altruistic, the rates are closer to Brian's than to yours.

Which cases did you have in mind?

People generally don't altruistically favor euthanasia for pets with temporarily painful but easily treatable injuries (where recovery could be followed by extended healthy life). People are not eager to campaign to stop the pain of childbirth at the expense of birth. They don't consider a single instance of torture worse than many deaths depriving people of happy lives. They favor bringing into being the lives of children that will contain some pain.

Comment author: Adriano_Mannino 11 August 2013 02:30:36AM 4 points

Sorry for the delay. I should have been more precise; let me do so now by commenting on the cases you mention:

  • The injured pet case probably involves three complications: (1) people's belief that it's an "egoistic" case for the pet (instead of it being an "altruistic" trade-off case between pet-consciousness-moments), (2) people's suffering that would result from the death/future absence of the pet, and (3) people's intuitive leaning towards an (ideal) preference view (the pet "wants" to survive, right? - that would be compatible with a negative(-leaning) view in population ethics according to which what (mainly) matters is the avoidance of unfulfilled preferences).

  • It's clear that evolved beings will have a massive bias against the idea that childbirth is morally problematic (whatever that idea's merit). Also, people would themselves suffer/have thwarted preferences as a result of childlessness.

  • People consider death an "egoistic" case - a harm to the being that died. I think that's confused. Death is the non-birth of other people/people-moments.

  • People usually don't "favor" it in the sense of considering it morally important/urgent/required. They tend to think it's fine if it's an "OK deal" for the being that comes into existence (here again: "ego"-case confounder). By contrast, they think it's morally important/urgent not to bring miserable children into existence. And again, we should abstract from childbirth and genealogical continuation (where massive biases are to be expected). So let's take the more abstract case of Omelas: Would people be willing to create miserable lives in conjunction with a sufficiently great number of intensely pleasurable lives from scratch, e.g. in some remote part of the universe (or in a simulation)? Many would not. With the above confounders ("egoism" and "personal identity", circumstantial suffering, and a preference-based intuitive axiology) removed and the population-ethical question laid bare, many people would not side with you. One might object: Maybe they agree that Omelas is much more intrinsically valuable than non-existence, but they accept deontological side-constraints against actively causing miserable lives, which is why they can't create it. But in that case they shouldn't prevent it if it arose naturally. Here again, though, my experience is that many people would in fact want to prevent abstract Omelas scenarios. Or they're at least uncertain about whether letting even Omelas (!) happen is OK - which implies a negative-leaning population axiology.

Comment author: Wei_Dai 03 August 2013 09:34:24AM 1 point

Thanks for the explanation. I was thrown by your use of the word "inevitable" earlier, but I think I understand your position now. (EDIT: Deleted the rest of this comment, which made a point you were already discussing with Nick Beckstead.)