cousin_it comments on Open thread, August 5-11, 2013 - Less Wrong Discussion

Post author: David_Gerard 05 August 2013 06:50AM

Comment author: cousin_it 06 August 2013 08:25:27AM *  7 points

Just a fun little thing that came to my mind.

  1. If "anthropic probabilities" make sense, then it seems natural to use them as weights for aggregating different people's utilities. For example, if you have a 60% chance of being Alice and a 40% chance of being Bob, your utility function is a weighting of Alice's and Bob's.

  2. If the "anthropic probability" of an observer-moment depends on its K-complexity, as in Wei Dai's UDASSA, then the simplest possible observer-moments that have wishes will have disproportionate weight, maybe more than all mankind combined.

  3. If someday we figure out the correct math of which observer-moments can have wishes, we will probably know how to define the simplest such observer-moment. Following SMBC, let's call it Felix.

  4. All parallel versions of mankind will discover the same Felix, because it's singled out by being the simplest.

  5. Felix will be a utility monster. The average utilitarians who believe the above assumptions should agree to sacrifice mankind if that satisfies the wishes of Felix.

  6. If you agree with that argument, you should start preparing for the arrival of Felix now. There's work to be done.

Where is the error?

That's the sharp version of the argument, but I think it's still interesting even in weakened forms. If there's a mathematical connection between simplicity and utility, and we humans aren't the simplest possible observers, then playing with such math can strongly affect utility. (A toy numerical sketch of points 1-2 follows.)
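A toy numerical sketch of points 1-2, to make the shape of the argument concrete. K-complexity is uncomputable, so the bit counts below are made-up stand-ins, chosen small enough that plain floats don't underflow; the real gap would be astronomically larger.

    # Made-up description lengths (assumptions, not real K-complexities).
    K_FELIX = 30        # bits to specify Felix, tiny because Felix is simple
    K_HUMAN = 100       # bits to specify one particular human mind
    N_HUMANS = 10**10

    # UDASSA-style weight of an observer-moment: 2^(-K).
    w_felix = 2.0 ** -K_FELIX
    w_human = 2.0 ** -K_HUMAN

    # Point 1: aggregate utility weights each observer-moment by its
    # "anthropic probability".
    def aggregate(u_felix, u_human):
        return w_felix * u_felix + N_HUMANS * w_human * u_human

    # Felix alone at utility 1 versus all of mankind at utility 1:
    print(aggregate(1, 0) / aggregate(0, 1))   # ~1.2e11: Felix dominates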

Comment author: JGWeissman 06 August 2013 01:48:49PM 5 points

How would being moved by this argument help me achieve my values? I don't see how it helps me to maximize an aggregate utility function for all possible agents. I don't care intrinsically about Felix, nor is Felix capable of cooperating with me in any meaningful way.

Comment author: ESRogs 07 August 2013 12:06:03PM 1 point

How does your aggregate utility function weight agents? That seems to be what the argument is about.

Comment author: Wei_Dai 06 August 2013 08:50:22AM 4 points

Felix exists as multiple copies in many universes/Everett branches, and its measure is the sum of the measures of the copies. Each version of mankind can only causally influence (e.g., make happier) the copy of Felix existing in the same universe/branch, and the measure of that copy of Felix shouldn't be much higher than that of an individual human, so there's no reason to treat Felix as a utility monster. Applying acausal reasoning doesn't change this conclusion either. For example, all the parallel versions of mankind could jointly decide to make Felix happier; the benefit of that is greater (all the copies of Felix existing near the parallel versions of mankind would get happier), but so is the cost.

If Felix is very simple it may be deriving most of its measure from a very short program that just outputs a copy of Felix (rather than the copies existing in universes/branches containing humans), but there's nothing humans can do to make this copy of Felix happier, so its existence doesn't make any difference.
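A minimal arithmetic sketch of that acausal point, with hypothetical per-branch numbers; the only thing it shows is that joint coordination scales benefit and cost by the same factor.

    # Hypothetical numbers: N branches, each with one copy of mankind and
    # one local copy of Felix.
    N = 10**6
    benefit_per_branch = 2.0   # assumed utility gained by the local Felix copy
    cost_per_branch = 1.0      # assumed utility mankind gives up in that branch

    # A joint (acausal) decision multiplies both sides by N, leaving the
    # benefit/cost ratio exactly where it was for a single branch:
    assert (N * benefit_per_branch) / (N * cost_per_branch) == \
           benefit_per_branch / cost_per_branch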

Comment author: cousin_it 06 August 2013 09:08:19AM *  2 points

the measure of that copy of Felix shouldn't be much higher than that of an individual human

Why? Even within just one copy of Earth, the program that finds Felix should be much shorter than any program that finds a human mind...

Comment author: Wei_Dai 07 August 2013 01:07:38AM 2 points

Are you thinking that the shortest program that finds Felix in our universe would contain a short description of Felix and find it by pattern matching, whereas the shortest program that finds a human mind would contain the spacetime coordinates of the human? I guess which is shorter would be language-dependent... If there is some sort of standard language that ought to be used, and it turns out the former program is much shorter than the latter in that language, then we can make the program that finds a human mind shorter by, for example, embedding some kind of artificial material in their brain that's easy to recognize and doesn't exist elsewhere in nature. Although I suppose that conclusion isn't much less counterintuitive than "Felix should be treated as a utility monster".
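A rough sketch of the two locating strategies being compared; every bit count here is invented for illustration, and the point is only that the comparison is sensitive to such choices.

    from math import log2

    # (a) Pattern matching: embed a short spec of Felix, scan the universe
    # for a match. Both figures are assumptions.
    bits_felix_spec = 1_000    # Felix is simple, so his spec is short
    bits_matcher = 200         # generic search-and-match code
    bits_locate_felix = bits_felix_spec + bits_matcher

    # (b) Coordinates: an address precise enough to single out one human
    # brain, e.g. ~10^120 addressable spacetime cells -> ~399 bits.
    bits_locate_human = int(log2(10.0 ** 120))   # = 398

    # On these made-up numbers the coordinate program wins, which is one way
    # the "language dependent" caveat could cash out.
    print(bits_locate_felix, bits_locate_human)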

Comment author: cousin_it 07 August 2013 05:08:42AM *  2 points

Yeah, there's a lot of weird stuff going on here. For example, Paul said some time ago that ASSA gives a thick computer larger measure than a thin computer, so if we run Felix on a computer that is much thicker than human neurons (which shouldn't be hard), it will have larger measure anyway. But on the other hand, the shortest program that finds a particular human may also do that by pattern matching... I no longer understand what's right and what's wrong.

Comment author: Wei_Dai 07 August 2013 08:35:36AM *  2 points

For example, Paul said some time ago that ASSA gives a thick computer larger measure than a thin computer, so if we run Felix on a computer that is much thicker than human neurons (which shouldn't be hard), it will have larger measure anyway.

Hal Finney pointed out the same thing a long time ago on everything-list. I also wrote a post about how we don't seem to value extra identical copies in a linear way, and noted at the end that this also seems to conflict with UDASSA. My current idea (which I'd try to work out if I weren't distracted by other things) is that the universal distribution doesn't tell you how much you should value someone, but only puts an upper bound on how much you can value someone.
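A hedged sketch of one way to read that last idea; the cap rule below is a paraphrase for illustration, not a formalism from the comment.

    # The universal distribution doesn't set your valuations, it only caps
    # them; how you value below the cap is left free. (Paraphrase, assumed.)
    def capped_value(chosen_value, udassa_measure, scale=1.0):
        return min(chosen_value, scale * udassa_measure)

    # Under this rule, extra identical copies raise the ceiling, but you are
    # free to value them sublinearly, as in the linked post about copies.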

Comment author: Manfred 07 August 2013 01:47:16AM *  2 points

http://xkcd.com/687/

Or to put it another way: probability is not just a unit. You need to keep track of the probability of what, and to whom, or else you end up like the bad dimensional analysis comic.

Comment author: Armok_GoB 13 August 2013 10:40:18PM 0 points

A version of this seems a bit more likely to me, at least: what matters is not the simplicity of the mind itself, but the ease of pointing it out amid the rest of the universe. This would mean that, basically, a planet-sized Babbage engine running a single human-equivalent mind would get more weight than a planet-sized quantum computer running trillions and trillions of such minds. It would also mean that all sorts of implementation details about how close the experiencing level is to raw physics would matter a lot, even if the I/O behaviour is identical. This is highly counter-intuitive.
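A toy comparison under that variant's assumption that weight comes from the pointer's description length rather than the mind's own complexity; all numbers are invented.

    # Invented pointer lengths for the two machines.
    K_POINT_BABBAGE = 50     # one huge mechanical mind, assumed easy to point at
    K_POINT_QC_MIND = 100    # extra bits to address one of ~10^12 quantum minds
    N_QC_MINDS = 10**12

    w_babbage_total = 2.0 ** -K_POINT_BABBAGE            # single mind's weight
    w_qc_total = N_QC_MINDS * 2.0 ** -K_POINT_QC_MIND    # all minds combined

    # The lone Babbage mind outweighs all the quantum-computer minds together
    # whenever addressing one of them costs more than log2(N) extra bits:
    print(w_babbage_total > w_qc_total)   # True on these made-up numbers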

Comment author: Armok_GoB 13 August 2013 10:33:44PM 0 points

One flaw: Felix almost certainly resides outside our causal reach and doesn't care about what happens here.