Regarding "people's ordinary exchange rates", I suspect that in cases people clearly recognize as altruistic, the rates are closer to Brian's than to yours. In cases they (IMO confusedly) think of as "egoistic", the rates may be closer to yours. - This provides an argument that people should end up with Brian upon knocking out confusion.
Also, still others (such as David Pearce) would argue that there are reasons to favor Brian's exchange rate. :)
Yes, it can. But a Singleton is not guaranteed; and conditional on a Singleton's future existence, friendliness is not guaranteed. What I meant was that, in expectation, astronomical population expansion clearly produces an astronomical number of utterly miserable, tortured lives.
Lots of dystopian future scenarios are possible. Here are some of them.
How many happy people for one miserable existence? - I take the zero option very seriously because I don't think that (anticipated) non-existence poses any moral problem or generates any moral urgency to act, ...
Sorry for the delay!
I forgot to spell out the rough argument for why (1) "value future people equally" is much less crucial here than (2) "fill the universe with people".
If you accept (2), you're almost guaranteed to be on board with where Bostrom and Beckstead are roughly going (even if you value present people more!). It's then hardly possible to block their argument on normative grounds; criticism would have to be empirical, e.g. based on the claim that dystopian futures may be likelier than commonly assumed, which wo...
Hi Nick, thanks! I do indeed fully agree with your general conclusion that what matters most is making our long-term development go as well as possible. (I had something more specific in mind when speaking of "Bostrom's and Beckstead's conclusions" here, sorry about the confusion.) In fact, I consider your general conclusion very obvious. :) (What's difficult is the empirical question of how to best affect the far future.) The obviousness of your conclusion doesn't imply that your dissertation wasn't super-important, of course - most people seem ...
Yeah, I've read Nick's thesis, and I think the moral urgency of filling the universe with people is a more important basis for his conclusion than the rejection of time preference. The sentence suggests that the rejection of time preference is what's most important.
If I get him right, Nick agrees that the time issue is much less important than you suggested in your recent interview.
Sorry to insist! :) But when you disagree with Bostrom's and Beckstead's conclusions, people immediately assume that you must be valuing present people more than future ones. And I'm...
Is it the "bringing into existence" and the "new" that suggests presentism to you? (Which I also reject, btw. But I don't think it's of much relevance to the issue at hand.) Even without the "therefore", it seems to me that the sentence suggests that the rejection of time preference is what does the crucial work on the way to Bostrom's and Beckstead's conclusions, when it's rather the claim that it's "morally urgent/required to cause the existence of people (with lives worth living) that wouldn't otherwise have existed", which is what my alternative sentence was meant to mean.
OK, but why would the sentence start with "Many EAs value future people roughly as much as currently-living people" if there weren't an implied inferential connection to nearly all value being found in astronomical far-future populations? The "therefore" is still implicitly present.
It's not entirely without justification, though. It's true that the rejection of a (very heavy) presentist time preference/bias is necessary for Bostrom's and Beckstead's conclusions. So there's weak justification for your "therefore": The rejection...
But one may disagree on "Omelas and Space Colonization", i.e. on how many lives worth living are needed to "outweigh" or "compensate for" miserable ones (which our future existence will inevitably also produce, probably in astronomical numbers, assuming astronomical population expansion).
A superintelligent Singleton (e.g., FAI) can guarantee a minimum standard of living for everyone who will ever be born or created, so I don't understand why you think astronomical population expansion inevitably produces miserable lives.
Al...
Thanks, Luke, great overview! Just one thought:
Many EAs value future people roughly as much as currently-living people, and therefore think that nearly all potential value is found in the well-being of the astronomical numbers of people who could populate the far future
This suggests that alternative views are necessarily based on ethical time preference (and time preference does indeed seem irrational). But that's incorrect. It's possible to care about the well-being of everyone equally (no matter their spatio-temporal coordinates) without wanting to fill...
Well, if humans can't and won't act that way, too bad for them! We should not model ethics after the inclinations of a particular type of agent; instead, we should try to modify all agents according to ethics.
If we did model ethics after particular types of agent, here's what would result: Suppose it turns out that type A agents are sadistic racists. So what they should do is put sadistic racism into practice. Type B agents, on the other hand, are compassionate anti-racists. So what they should do is diametrically opposed to what type A agents should ...
But why run this risk? The genuine moral motivation of typical humans seems to be weak. That might even be true of the people working for human and non-human altruistic causes and movements. What if what they really want, deep down, is a sense of importance or social interaction or whatnot?
So why not just go for utilitarianism? By definition, that's the safest option for everyone to whom things can matter/be valuable.
I still don't see what could justify coherently extrapolating only "our" volition. The only non-arbitrary "we" is the community of all minds/consciousnesses.
Why would the chicken have to learn to follow the ethics in order for its interests to be fully included in the ethics? We don't include cognitively normal human adults because they are able to understand and follow ethical rules (or, at the very least, we don't include them only in virtue of that fact). We include them because their subjective well-being matters to them as sentient beings. And thus we also include the many humans who are unable to understand and follow ethical rules. We ourselves, of course, would want to remain included in case we los...
It's been asserted here that "the core distinction between avoidance, pain, and awareness of pain works" or that "there is such a thing as bodily pain we're not consciously aware of". This, I think, blurs and confuses the most important distinction there is in the world - namely the one between what is a conscious/mental state and what is not. Talk of "sub-conscious/non-conscious mental states" confuses things too: If it's not conscious, then it's not a mental state. It might cause one or be caused by one, but it isn't a mental...
Hi all, I'm a lurker of about two years and have been wanting to contribute here and there - so here I am. I specialize in ethics and have further interests in epistemology and the philosophy of mind.
Not so sure. Dave believes that pains have an "ought-not-to-be-in-the-world-ness" property that pleasures lack. And in the discussions I have seen, he was indeed not prepared to accept that small pains can be outweighed by huge quantities of pleasure. Brian has been oscillating between NLU (negative-leaning utilitarianism) and NU (negative utilitarianism). He recently told me that he finds the following claim convincing: such states as flow, orgasm, meditative tranquility, perfectly subjectively fine muzak, and the absence of consciousness are all equally good.