The torture vs. dust specks quandary is a canonical one on LW. Off the top of my head, I can't remember anyone suggesting the reversal, one where the quantities the hypothetical trades off are positive rather than negative. I'm curious about how it affects people's intuitions. I call it - as the title indicates - "Sublimity vs. Youtube"[1].
Suppose the impending existence of some person who is going to live to be fifty years old whatever you do[2]. She is liable to live a life that zeroes out on a utility scale: mediocre ups and less-than-shattering downs, overall an unremarkable span. But if you choose "Sublimity", she's instead going to live a life that is truly sublime. She will have a warm and happy childhood enriched by loving relationships, full of learning and wonder and growth; she will mature into a merrily successful adult, pursuing meaningful projects and having varied, challenging fun. (For the sake of argument, suppose that the ripple effects of her sublime life on others still leave the math tallying up as +(1 sublime life), rather than +(1 sublime life) + (various lovely consequences).)
Or you can choose "Youtube", and 3^^^3 people who weren't doing much with some one-second period of their lives instead get to spend that second watching a brief, grainy, yet droll recording of a cat jumping into a box, which they find mildly entertaining.
Sublimity or Youtube?
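(For reference, the naive aggregative arithmetic this scenario inherits from the original is lopsided: assuming each viewer gains some fixed ε > 0 utils from the video, and the sublime life is worth some finite U utils on the same scale, the totals compare as

$$ 3\uparrow\uparrow\uparrow 3 \cdot \varepsilon \;\gg\; U \qquad \text{for any fixed } \varepsilon > 0 \text{ and any finite } U, $$

so a straightforward total utilitarian takes Youtube for the same reason they take dust specks; the question is whether your intuitions cooperate when the quantities are positive.)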
[1] The choice in my variant scenario of "watching a Youtube video" rather than some small-but-romanticized pleasure ("having a butterfly land on your finger, then fly away", for instance) is deliberate. Dust specks are really tiny, and there's not much automatic tendency to emotionally inflate them. Hopefully Youtube videos are the reverse of that.
[2] I'm choosing to make it an alteration of a person who will exist either way to avoid questions about the utility of creating people, and for greater isomorphism with the "torture" option in the original.
Yeah, why not? Once, when I asked whether the SIAI would consider the possibility of paying AGI researchers not to do AGI research, or of killing an AGI researcher who is just days away from launching an uFAI, Yudkowsky said something along the lines that it is OK to just downvote me to -10 rather than -10000. Talk about taking ideas seriously?
Never mind the above; I can't tell you why it would be wrong, but I have a feeling that it is. It would lead to all kinds of bad behavior based on probabilities and expected utility calculations. I don't feel like taking that route right now...
Can I conclude that you would give in to a Pascal's Mugging scenario? If not, where do you draw the line, and why? If an important part of your calculation, the part that sets the upper and lower bounds, is necessarily based on 'instinct', then why not disregard those calculations completely, do what you feel is right, and not harm anyone?
To answer your questions: No, I don't think you can fairly conclude that I'm subject to Pascal's Mugging, and I draw the line based on what calculations I can do and what calculations I can't do.
That is, my inability to come up with reliable estimates of the probability that Pascal's Mugger really can (and will) kill 3^^^3 people is not a good reason for me to disregard my ability to come up with reliable estimates of the probability that dropping poison in a well will kill people; I can reasonably refuse to do the latter (regardless of what I feel) on th...