The torture vs. dust specks quandary is a canonical one on LW. Off the top of my head, I can't remember anyone suggesting the reversal: a version where the quantities the hypothetical trades off are positive rather than negative. I'm curious about how it affects people's intuitions. I call it - as the title indicates - "Sublimity vs. Youtube[1]".
Suppose the impending existence of some person who is going to live to be fifty years old whatever you do[2]. She is liable to live a life that zeroes out on a utility scale: mediocre ups and less-than-shattering downs, overall an unremarkable span. But if you choose "sublimity", she's instead going to live a life that is truly sublime. She will have a warm and happy childhood enriched by loving relationships, full of learning and wonder and growth; she will mature into a merrily successful adult, pursuing meaningful projects and having varied, challenging fun. (For the sake of argument, suppose that the ripple effects of her sublime life on others still lead to the math tallying up as +(1 sublime life), rather than +(1 sublime life)+(various lovely consequences).)
Or you can choose "Youtube", and 3^^^3 people who weren't doing much with some one-second period of their lives instead get to spend that second watching a brief, grainy, yet droll recording of a cat jumping into a box, which they find mildly entertaining.
Sublimity or Youtube?
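For concreteness, here is the bare arithmetic a straightforwardly additive total utilitarian would run. This is only a minimal sketch: the symbols \epsilon and U_{sub} are mine, not part of the scenario, and the assumption that the tiny pleasures and the sublime life are commensurable and simply add is exactly the assumption the scenario is probing.

```latex
% A minimal sketch of the naive additive tally, signs flipped from torture vs. dust specks.
% \epsilon:        utility of one mildly entertaining second (tiny but positive, assumed).
% U_{\mathrm{sub}}: utility of one sublime fifty-year life (large but finite, assumed).
\[
  U_{\mathrm{Youtube}} = 3\uparrow\uparrow\uparrow 3 \cdot \epsilon ,
  \qquad
  U_{\mathrm{Sublimity}} = U_{\mathrm{sub}} ,
  \qquad
  3\uparrow\uparrow\uparrow 3 \cdot \epsilon \gg U_{\mathrm{sub}}
  \quad (\epsilon > 0,\ U_{\mathrm{sub}} < \infty).
\]
% On a purely additive tally, anyone who picked "torture" in the original
% seems committed to picking "Youtube" here.
```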
[1] The choice in my variant scenario of "watching a Youtube video" rather than some small-but-romanticized pleasure ("having a butterfly land on your finger, then fly away", for instance) is deliberate. Dust specks are really tiny, and there's not much automatic tendency to emotionally inflate them. Hopefully Youtube videos are the reverse of that.
[2] I'm choosing to make it an alteration of a person who will exist either way to avoid questions about the utility of creating people, and for greater isomorphism with the "torture" option in the original.
If I knew the answer, I would be smarter than Yudkowsky, who writes:
"Something seems to be fundamentally wrong with using Bayes' Theorem, the expected utility formula, and Solomonoff induction to determine how to choose given unbounded utility scenarios." If you just admit that this approach is wrong but less wrong, then I think it is valid to scrutinize your upper and lower bounds. Yudkowsky clearly sets some upper bound, but what is it, and how does he determine it if not by 'gut feeling'? And if it all comes down to 'instinct' about when to disregard an expected utility calculation, then how can one still refer to those heuristics as 'laws'?
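To make the talk of bounds concrete, here is one illustrative construction; it is only illustrative, and nothing in it is a bound Yudkowsky actually specifies. The transform B and the cap M are my own hypothetical labels.

```latex
% An illustrative (not Yudkowsky's) way to make "an upper bound" precise:
% compare options through a bounded rescaling B of the additive total u,
% where M is a hypothetical cap chosen by the agent.
\[
  B(u) = M \left( 1 - e^{-u/M} \right),
  \qquad
  \lim_{u \to \infty} B(u) = M .
\]
% Under B, the aggregate value of the 3^^^3 amusing seconds can never exceed M;
% but the choice of M (and of B itself) is exactly the "gut feeling" being questioned.
```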