JGWeissman comments on Open thread, August 5-11, 2013 - Less Wrong Discussion

3 Post author: David_Gerard 05 August 2013 06:50AM


Comment author: cousin_it 06 August 2013 08:25:27AM 7 points

Just a fun little thing that came to my mind.

  1. If "anthropic probabilities" make sense, then it seems natural to use them as weights when aggregating different people's utilities. For example, if you have a 60% chance of being Alice and a 40% chance of being Bob, your utility function is the corresponding 60/40 weighted sum of Alice's and Bob's.

  2. If the "anthropic probability" of an observer-moment depends on its Kolmogorov complexity (K-complexity), as in Wei Dai's UDASSA, then the simplest possible observer-moments that have wishes will carry disproportionate weight, perhaps more than all mankind combined.

  3. If someday we figure out the correct math of which observer-moments can have wishes, we will probably know how to define the simplest such observer-moment. Following SMBC, let's call it Felix.

  4. All parallel versions of mankind will discover the same Felix, because it's singled out by being the simplest.

  5. Felix will be a utility monster. Average utilitarians who accept the above assumptions should agree to sacrifice mankind if that satisfies the wishes of Felix.

  6. If you agree with that argument, you should start preparing for the arrival of Felix now. There's work to be done.

Where is the error?

That's the sharp version of the argument, but I think it's still interesting even in weakened forms. If there's a mathematical connection between simplicity and utility, and we humans aren't the simplest possible observers, then playing with such math can strongly affect utility.
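[Editor's note: a minimal sketch of the aggregation scheme in points 1, 2, and 5, assuming UDASSA-style weights proportional to 2^-K. The observer names and bit counts are made-up illustrative numbers — real Kolmogorov complexity is uncomputable — but they show how a modest complexity gap lets the simplest wishing observer dominate the aggregate.]

```python
# Hypothetical complexities in bits (illustrative only; K is uncomputable).
observers = {
    "Felix": 20,   # assumed simplest observer-moment that has wishes
    "Alice": 40,
    "Bob":   41,
}

def anthropic_weights(complexities):
    """Weight each observer by 2^-K, then normalize so weights sum to 1."""
    raw = {name: 2.0 ** -k for name, k in complexities.items()}
    total = sum(raw.values())
    return {name: w / total for name, w in raw.items()}

weights = anthropic_weights(observers)

# With a ~20-bit gap, Felix's weight dwarfs everyone else's combined,
# so any weighted-sum aggregate utility is dominated by Felix's wishes.
print(weights["Felix"] > 1000 * (weights["Alice"] + weights["Bob"]))
```

Under these (arbitrary) numbers the aggregate is effectively Felix's utility function alone, which is the utility-monster conclusion of point 5.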

Comment author: JGWeissman 06 August 2013 01:48:49PM 5 points

How would being moved by this argument help me achieve my values? I don't see how it helps me to maximize an aggregate utility function for all possible agents. I don't care intrinsically about Felix, nor is Felix capable of cooperating with me in any meaningful way.

Comment author: ESRogs 07 August 2013 12:06:03PM 1 point

How does your aggregate utility function weigh agents? That seems to be what the argument is about.