Today's post, Scope Insensitivity, was originally published on 14 May 2007. A summary (taken from the LW wiki):
The human brain can't represent large quantities: an environmental measure that will save 200,000 birds doesn't conjure anywhere near a hundred times the emotional impact and willingness-to-pay of a measure that would save 2,000 birds.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Third Alternatives for Afterlife-ism, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
This isn't clear to me, especially given that Will only said "roughly infinite".
An aggregation rule that says "follow the prescription of any moral hypothesis to which you assign at least 80% probability" might well make Will's claim go through, and yet it does not "automatically hand the decision to the internal component that names the biggest number" as I understand that phrase; after all, the winning hypothesis won out by being 80% probable, not by naming the biggest number. Some other hypothesis could have won out while naming a much smaller number (than the numbers that turn up in discussions of astronomical waste), if it had seemed sufficiently probable.
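For concreteness, here is a minimal sketch in Python of the contrast I have in mind, with made-up names, probabilities, and values purely for illustration (nothing here is anyone's actual credence): a threshold rule of the sort described above picks whichever hypothesis crosses the probability bar, whereas a rule that weights each hypothesis's number by its probability is the kind of rule that does get dominated by whoever names the biggest number.

```python
# A toy sketch, not anyone's actual credences: two hypothetical moral hypotheses,
# each held with some probability and each assigning a value (in arbitrary units)
# to taking a given action.
hypotheses = [
    # (name, probability, value the hypothesis assigns to the action)
    ("modest common-sense view",      0.85, 1_000),
    ("astronomical-waste-style view", 0.15, 1e30),
]

def threshold_rule(hypotheses, threshold=0.80):
    """Follow the prescription of any hypothesis held with probability >= threshold.
    The winner wins by being probable, not by naming the biggest number."""
    for name, prob, value in hypotheses:
        if prob >= threshold:
            return name, value
    return None  # no hypothesis is held confidently enough to prescribe anything

def probability_weighted_rule(hypotheses):
    """Weight each hypothesis's number by its probability. A hypothesis that names
    a big enough number dominates the sum even at low probability."""
    return sum(prob * value for _name, prob, value in hypotheses)

print(threshold_rule(hypotheses))             # ('modest common-sense view', 1000)
print(probability_weighted_rule(hypotheses))  # ~1.5e29, dominated by the 1e30 term
```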
I don't actually endorse that particular aggregation rule, but convincing me that every plausible candidate rule avoids Will's conclusion that the relevant value here is "roughly infinite" (or the much weaker conclusion that LW is being irrationally scope-insensitive here) would require some further argument.