
Armok_GoB comments on Open thread, August 5-11, 2013 - Less Wrong Discussion

3 Post author: David_Gerard 05 August 2013 06:50AM




Comment author: cousin_it 06 August 2013 08:25:27AM *  7 points

Just a fun little thing that came to my mind.

  1. If "anthropic probabilities" make sense, then it seems natural to use them as weights for aggregating different people's utilities. For example, if you have a 60% chance of being Alice and a 40% chance of being Bob, your utility function is 0.6 × Alice's utility + 0.4 × Bob's.

  2. If the "anthropic probability" of an observer-moment depends on its K-complexity, as in Wei Dai's UDASSA, then the simplest possible observer-moments that have wishes will have disproportionate weight, maybe more than all mankind combined.

  3. If someday we figure out the correct math of which observer-moments can have wishes, we will probably know how to define the simplest such observer-moment. Following SMBC, let's call it Felix.

  4. All parallel versions of mankind will discover the same Felix, because it's singled out by being the simplest.

  5. Felix will be a utility monster. The average utilitarians who believe the above assumptions should agree to sacrifice mankind if that satisfies the wishes of Felix.

  6. If you agree with that argument, you should start preparing for the arrival of Felix now. There's work to be done.
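Steps 1–2 can be sketched numerically. The snippet below is a toy illustration, assuming (hypothetically) that an observer-moment's anthropic weight is 2^(−K) for a description of K bits, as in a crude reading of UDASSA; the bit counts for "Felix" and for a human observer-moment are made-up placeholders, not claims about actual K-complexities.

```python
def anthropic_weight(k_bits):
    """Toy anthropic weight of an observer-moment with description length k_bits."""
    return 2.0 ** (-k_bits)

# One maximally simple wishing observer-moment ("Felix"), at a made-up 50 bits...
felix_weight = anthropic_weight(50)

# ...versus ten billion human observer-moments at a made-up 1000 bits each.
humanity_weight = 10_000_000_000 * anthropic_weight(1000)

def aggregate_utility(u_felix, u_humanity):
    """Step 1: aggregate utilities using anthropic weights as mixing coefficients."""
    total = felix_weight + humanity_weight
    return (felix_weight * u_felix + humanity_weight * u_humanity) / total

# The exponential penalty swamps the population count: 2**-50 dwarfs 1e10 * 2**-1000,
# so the aggregate utility function is almost exactly Felix's.
print(felix_weight > humanity_weight)  # True
```

The point of the sketch is only that an exponential simplicity penalty overwhelms any merely astronomical head count, which is what makes the simplest wishing observer-moment a utility monster under these assumptions.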

Where is the error?

That's the sharp version of the argument, but I think it's still interesting even in weakened forms. If there's a mathematical connection between simplicity and utility, and we humans aren't the simplest possible observers, then playing with such math can strongly affect utility.

Comment author: Armok_GoB 13 August 2013 10:40:18PM 0 points

A version of this that seems a bit more likely to me: what matters is not the simplicity of the mind itself, but the ease of pointing it out within the rest of the universe. This would mean that a planet-sized Babbage engine running a single human-equivalent mind would get more weight than a planet-sized quantum computer running trillions upon trillions of such minds. It would also mean that all sorts of implementation details of how close the experiencing level is to raw physics would matter a lot, even if the I/O behaviour is identical. This is highly counter-intuitive.