Today's post, Scope Insensitivity, was originally published on 14 May 2007. A summary (taken from the LW wiki):
The human brain can't represent large quantities: an environmental measure that will save 200,000 birds doesn't conjure anywhere near a hundred times the emotional impact and willingness-to-pay of a measure that would save 2,000 birds.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Third Alternatives for Afterlife-ism, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
One does not have to be "fanatically committed" to some ad hoc conjunction of abstract moral positions in order to justifiably make the antiprediction that humanity or its descendants might (that is, with a probability high enough, given moral uncertainty, that it still dominates the calculation) have some use for all those shiny lights in the sky (and especially the black holes). It seems to me that your points about mixed motivations etc. count in favor of this antiprediction under many plausible aggregation rules. Sure, most parts of me/humanity may not have any ambitions or drives that require lots of resources to fulfill, but I know for certain that some parts of me/humanity at least nominally do, and at least partially act and think accordingly. If those parts end up being acknowledged in moral calculations, then a probable default outcome is for those parts to take over (at least) the known universe while the other parts stay at home and enjoy themselves. For this not to happen would probably require (again depending on the aggregation rule) that the other parts of humanity actively place significant value on those resources going largely unused. Given moral uncertainty, I am working under the provisional assumption that some non-negligible fraction of the resources in the known universe will matter enough to at least some parts of something that guaranteeing future access to them should be a very non-negligible goal for one of the few godshatter-coalitions able to recognize their potential importance (i.e. a comparative advantage).
(Much of the other reasoning I didn't mention involves a prediction that a singleton won't be eternally confined to an aggregation rule that is blatantly stupid in a few of the many, many ways an aggregation rule can be blatantly stupid as I judge them (or at least won't be so confined in a huge majority of possible futures). E.g., CEV's annoyingly vague requirement of coherence, among other things, could easily turn out to be blatantly stupid upon implementation.)
It's meant as a guess at a potentially oft-forgotten single-step implication of standard Less Wrong (epistemic and moral) beliefs, not as any kind of assertion of supremacy. I have seen enough of the subtleties, complexities, and depth of morality to know that we are much too confused for anyone (or any part of anyone) to be asserting such supremacy.