Today's post, Scope Insensitivity, was originally published on 14 May 2007. A summary (taken from the LW wiki):
The human brain can't represent large quantities: an environmental measure that will save 200,000 birds doesn't conjure anywhere near a hundred times the emotional impact and willingness-to-pay of a measure that would save 2,000 birds.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Third Alternatives for Afterlife-ism, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
I think the most popular form of scope insensitivity 'round these parts might be failing to remember that an existential catastrophe would be roughly infinitely worse than just losing the paltry sum of seven billion people: we'd also lose access to an entire universe's worth of resources.
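A back-of-the-envelope ratio makes the scale gap concrete. This is purely illustrative: the $10^{46}$ figure for potential future lives is a hypothetical placeholder (estimates in the astronomical-waste literature vary by many orders of magnitude), not a number from the comment above.

\[
\frac{\text{lives at stake in an existential catastrophe}}{\text{lives alive today}}
\;\approx\; \frac{10^{46}}{7 \times 10^{9}}
\;\approx\; 10^{36}.
\]

Any astronomically large numerator yields the same qualitative conclusion, which is the commenter's point: the loss is dominated by the forgone future, not by the present deaths alone.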
I want a mantra in the spirit of "Reasoning correctly is the most important thing in the universe," but with more of a poetic feel. The attitude it characterizes is one I really respect and would like to make part of myself. A lot of SingInst-related folk seem to me to have this virtue (e.g. Anna Salamon, Carl Shulman, and Steve Rayhawk, though each in a different way). Vladimir Nesov is a non-SingInst example. Anyone have any ideas about how to mantra-ize it?
That seems like it might be true for someone fanatically committed to an unbounded aggregative social welfare function, combined with a lot of adjustments to deal with infinities and the like. Given any moral uncertainty or mixed motivations (with an aggregation rule that doesn't automatically hand the decision to whichever internal component names the biggest number), the claim doesn't go through. It also strikes most people as an annoying assertion of the supremacy of one's nominal values (as sometimes verbally expressed, not one's revealed preferences).
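One way to make the parenthetical precise is a minimal sketch of credence-weighted aggregation with per-theory range normalization, one of several proposals in the moral-uncertainty literature; the notation here is mine, not the commenter's.

\[
U(a) \;=\; \sum_i p_i \,\hat{u}_i(a),
\qquad
\hat{u}_i(a) \;=\; \frac{u_i(a) - \min_b u_i(b)}{\max_b u_i(b) - \min_b u_i(b)},
\]

where $p_i$ is the agent's credence in moral theory $i$ and $u_i$ is that theory's raw utility over a finite menu of available acts $b$. Because each $\hat{u}_i$ is squashed into $[0,1]$, a theory with unboundedly large raw stakes cannot dominate the decision just by naming a bigger number; its influence is capped by its credence $p_i$. That is the sense in which mixed motivations block the "infinitely worse" claim from automatically carrying the day.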