All of Adriano_Mannino's Comments + Replies

Not so sure. Dave believes that pains have an "ought-not-to-be-in-the-world-ness" property that pleasures lack. And in the discussions I have seen, he was indeed not prepared to accept that small pains can be outweighed by huge quantities of pleasure. Brian was oscillating between negative-leaning utilitarianism (NLU) and negative utilitarianism (NU). He recently told me he found convincing the claim that such states as flow, orgasm, meditative tranquility, perfectly subjectively fine muzak, and the absence of consciousness were all equally good.

0davidpearce
Can preference utilitarians, classical utilitarians and negative utilitarians hammer out some kind of cosmological policy consensus? Not ideal by anyone's lights, but good enough? So long as we don't create more experience below "hedonic zero" in our forward light-cone, NUs are untroubled by wildly differing outcomes. There is clearly a tension between preference utilitarianism and classical utilitarianism; but most(?) preference utilitarians are relaxed about having hedonic ranges shifted upwards - perhaps even radically upwards - if recalibration is done safely, intelligently and conservatively - a big "if", for sure. Surrounding the sphere of sentient agents in our Local Supercluster(?) with a sea of hedonium propagated by von Neumann probes or whatever is a matter of indifference to most preference utilitarians and NUs but mandated(?) by CU. Is this too rosy a scenario?

Regarding "people's ordinary exchange rates", I suspect that in cases people clearly recognize as altruistic, the rates are closer to Brian's than to yours. In cases they (IMO confusedly) think of as "egoistic", the rates may be closer to yours. - This provides an argument that people should end up with Brian upon knocking out confusion.

5CarlShulman
Which cases did you have in mind? People generally don't altruistically favor euthanasia for pets with temporarily painful but easily treatable injuries (where recovery could be followed with extended healthy life). People are not eager to campaign to stop the pain of childbirth at the expense of birth. They don't consider a single instance of torture worse than many deaths depriving people of happy lives. They favor bringing into being the lives of children that will contain some pain.

Also, still others (such as David Pearce) would argue that there are reasons to favor Brian's exchange rate. :)

0Pablo
That's incorrect. David Pearce claims that pains above a certain intensity can't be outweighed by any amount of pleasure. Both Brian and Dave agree that Dave is a (threshold) negative utilitarian whereas Brian is a negative-leaning utilitarian.

Yes, it can. But a Singleton is not guaranteed; and conditional on the future existence of a Singleton, friendliness is not guaranteed. What I meant was that astronomical population expansion clearly produces, in expectation, an astronomical number of utterly miserable, tortured lives.

Lots of dystopian future scenarios are possible. Here are some of them.

How many happy people for one miserable existence? I take the zero option very seriously because I don't think that (anticipated) non-existence poses any moral problem or generates any moral urgency to act, ... (read more)

1Wei Dai
Thanks for the explanation. I was thrown by your usage of the word "inevitable" earlier, but I think I understand your position now. (EDIT: Deleted the rest of this comment, which makes a point that you were already discussing with Nick Beckstead.)
5CarlShulman
Even Brian Tomasik, the author of that page, says that if one trades off pain and pleasure at ordinary rates the expected happiness of the future exceeds the expected suffering, by a factor of between 2 and 50.

Sorry for the delay!

I forgot to clarify the rough argument for why (1) "value future people equally" is much less crucial here than (2) "fill the universe with people".

If you accept (2), you're almost guaranteed to be on board with where Bostrom and Beckstead are roughly going (even if you valued present people more!). It's hardly possible to then block their argument on normative grounds, and criticism would have to be empirical, e.g. based on the claim that dystopian futures may be likelier than commonly assumed, which wo... (read more)

4lukeprog
I don't want to expand the section, because that makes it stand out more than is compatible with my aims for the post. And since the post is aimed at non-EAs and new EAs, I don't want to drop the point about time preference, as "intrinsic" time-discounting is a common view outside EA, especially for those with a background in economics rather than philosophy. So my preferred solution is to link to a fuller discussion of the issues, which I did (in particular, Beckstead's thesis). Anyway, I appreciate your comments.

Hi Nick, thanks! I do indeed fully agree with your general conclusion that what matters most is making our long-term development go as well as possible. (I had something more specific in mind when speaking of "Bostrom's and Beckstead's conclusions" here, sorry about the confusion.) In fact, I consider your general conclusion very obvious. :) (What's difficult is the empirical question of how to best affect the far future.) The obviousness of your conclusion doesn't imply that your dissertation wasn't super-important, of course - most people seem ... (read more)

2Nick_Beckstead
I do some of that in chapter 4. I don't engage with speculative arguments that the future will be bad (e.g. the dystopian scenarios that negative utilitarians like to discuss) or make my case by appealing to positive trends of the sort discussed by Pinker in Better Angels. Carl Shulman and I are putting together some thoughts on some of these issues at the moment.

Maybe so. I think the key is how you interpret the word "value." If you interpret it as "only positive value" then negative utilitarians disagree, but only because they think there isn't any possible positive value. If you interpret it as "positive or negative value" I think they should agree for pretty straightforward reasons.

Yeah, I've read Nick's thesis, and I think the moral urgency of filling the universe with people is a more important basis for his conclusion than the rejection of time preference. The sentence suggests that the rejection of time preference is what's most important.

If I get him right, Nick agrees that the time issue is much less important than you suggested in your recent interview.

Sorry to insist! :) But when you disagree with Bostrom's and Beckstead's conclusions, people immediately assume that you must be valuing present people more than future ones. And I'm... (read more)

1lukeprog
Okay, so we're talking about two points: (1) whether current people have more value than future people, and (2) whether it would be super-good to create gazillions of super-good lives. My sentence mentions both of those, in sequence: "Many EAs value future people roughly as much as currently-living people [1], and think that nearly all potential value is found in the well-being of the astronomical numbers of people who could populate the far future [2]..." And you are suggesting... what? That I switch the order in which they appear, so that [2] appears before [1], and is thus emphasized? Or that I use your phrase "morally urgent to" instead of "nearly all potential value is found in..."? Or something else?

Is it the "bringing into existence" and the "new" that suggests presentism to you? (Which I also reject, btw. But I don't think it's of much relevance to the issue at hand.) Even without the "therefore", it seems to me that the sentence suggests that the rejection of time preference is what does the crucial work on the way to Bostrom's and Beckstead's conclusions, when it's rather the claim that it's "morally urgent/required to cause the existence of people (with lives worth living) that wouldn't otherwise have existed", which is what my alternative sentence was meant to mean.

1lukeprog
I confess I'm not that motivated to tweak the sentence even further, since it seems like a small semantic point, I don't understand the advantages to your phrasing, and I've provided links to more thorough discussions of these issues, for example Beckstead's dissertation. Maybe it would help if you explained what kind of reasoning you are using to identify which claims are "doing the crucial work"? Or we could just let it be.

OK, but why would the sentence start with "Many EAs value future people roughly as much as currently-living people" if there wasn't an implied inferential connection to nearly all value being found in astronomical far future populations? The "therefore" is still implicitly present.

It's not entirely without justification, though. It's true that the rejection of a (very heavy) presentist time preference/bias is necessary for Bostrom's and Beckstead's conclusions. So there's weak justification for your "therefore": The rejection... (read more)

100Wei Dai

But one may disagree on "Omelas and Space Colonization", i.e. on how many lives worth living are needed to "outweigh" or "compensate for" miserable ones (which our future existence will inevitably also produce, probably in astronomical numbers, assuming astronomical population expansion).

A superintelligent Singleton (e.g., FAI) can guarantee a minimum standard of living for everyone who will ever be born or created, so I don't understand why you think astronomical population expansion inevitably produces miserable lives.

Al... (read more)

3lukeprog
Right, the first clause is there as a necessary but not sufficient part of the standard reason for focusing on the far future, and the sentence works now that I've removed the "therefore." The reason I'd rather not phrase things as "morally urgent to bring new people into existence" is that such phrasing suggests presentist assumptions. I'd rather use a sentence with non-presentist assumptions, since presentism is probably rejected by a majority of physicists by now, and also rejected by me. (It's also rejected by the majority of EAs with whom I've discussed the issue, but that's not actually noteworthy because it's such a biased sample of EAs.)

Thanks, Luke, great overview! Just one thought:

Many EAs value future people roughly as much as currently-living people, and therefore think that nearly all potential value is found in the well-being of the astronomical numbers of people who could populate the far future

This suggests that alternative views are necessarily based on ethical time preference (and time preference does indeed seem irrational). But that's incorrect. It's possible to care about the well-being of everyone equally (no matter their spatio-temporal coordinates) without wanting to fill... (read more)

3Nick_Beckstead
It's not really clear to me that negative utilitarians and people with person-affecting views need to disagree with the quoted passage as stated. These views focus primarily on the suffering aspect of well-being, and nearly all of the possible suffering is found in the astronomical numbers of people who could populate the far future. To elaborate, in my dissertation I assume, like most people would, that a future where humans have great influence would be a good thing. But I don't argue for that, and some people might disagree. If that's the only thing you disagree with me about, it seems you actually still end up accepting my conclusion that what matters most is making humanity's long-term future development go as well as possible. It's just that you end up focusing on different aspects of making that long-term development go as well as possible.
1lukeprog
Okay, I removed the "therefore."

Well, if humans can't and won't act that way, too bad for them! We should not model ethics after the inclinations of a particular type of agent; instead, we should try to modify all agents according to ethics.

If we did model ethics after particular types of agent, here's what would result: Suppose it turns out that type A agents are sadistic racists. So what they should do is put sadistic racism into practice. Type B agents, on the other hand, are compassionate anti-racists. So what they should do is diametrically opposed to what type A agents should ... (read more)

0fubarobfusco
Of course we can morally compare types A and B, just as we can morally compare an AI whose goal is to turn the world into paperclips and one whose goal is to make people happy. However, rather than "objectively better", we could be more clear by saying "more in line with our morals" or some such. It's not as if our morals came from nowhere, after all. See also: "The Bedrock of Morality: Arbitrary?"

But why run this risk? The genuine moral motivation of typical humans seems to be weak. That might even be true of the people working for human and non-human altruistic causes and movements. What if what they really want, deep down, is a sense of importance or social interaction or whatnot?

So why not just go for utilitarianism? By definition, that's the safest option for everyone to whom things can matter/be valuable.

I still don't see what could justify coherently extrapolating "our" volition only. The only non-arbitrary "we" is the community of all minds/consciousnesses.

1Ben Pace
This sounds a bit like religious people saying "But what if it turns out that there is no morality? That would be bad!". What part of you thinks that this is bad? Because that is what CEV is extrapolating. CEV takes the deepest and most important values we have and figures out what to do next. You in principle couldn't care about anything else. If human values wanted to self-modify, then CEV would recognise this. CEV wants to do what we want most, and this we call 'right'. This is what you value, what you chose. Don't lose sight of invisible frameworks. If we're including all decision procedures, then why not computers too? This is part of the human intuition of 'fairness' and 'equality' too. Not the hamster's.

Why would the chicken have to learn to follow the ethics in order for its interests to be fully included in the ethics? We don't include cognitively normal human adults because they are able to understand and follow ethical rules (or, at the very least, we don't include them only in virtue of that fact). We include them because, as sentient beings, their subjective well-being matters to them. And thus we also include the many humans who are unable to understand and follow ethical rules. We ourselves, of course, would still want to be included in case we los... (read more)

0see
Constructing an ethics that demands that a chicken act as a moral agent is obviously nonsense; chickens can't and won't act that way. Similarly, constructing an ethics that demands that humans value chickens as much as they value their own children is nonsense; humans can't and won't act that way. If you're constructing an ethics for humans to follow, you have to start by figuring out humans. It's not until after you've figured out how much humans should value the interests of chickens that you can determine how much to weigh the interests of chickens in how humans should act. And how much humans should weigh the value of chickens is by necessity determined by what humans are.
-1Ben Pace
Just to be clear, are you saying that we should treat chickens how humans want to treat them, or how chickens do? Because if the former, then yeah, CEV can easily find out whether we'd want them to have good lives or not (and I think it would see we do). But chickens don't (I think) have much of an ethical system, and if we incorporated their values into what CEV calculates, then we'd be left with some important human values, but also a lot of chicken feed.

It's been asserted here that "the core distinction between avoidance, pain, and awareness of pain works" or that "there is such a thing as bodily pain we're not consciously aware of". This, I think, blurs and confuses the most important distinction there is in the world, namely the one between what is a conscious/mental state and what is not. Talk of "sub-conscious/non-conscious mental states" confuses things too: if it's not conscious, then it's not a mental state. It might cause one or be caused by one, but it isn't a mental... (read more)

Hi all, I'm a lurker of about two years and have been wanting to contribute here and there, so here I am. I specialize in ethics and have further interests in epistemology and the philosophy of mind.

-1wedrifid
I look forward to hearing what you have to say about each of these fields!