One scheme with the properties you want is Wei Dai's UDASSA, e.g. see here. I think UDASSA is by far the best formal theory we have to date, although I'm under no delusions about how well it captures all of our intuitions (I'm also under no delusions about how consistent our intuitions are, so I'm resigned to accepting a scheme that doesn't capture them).

I think it would be more fair to call this allocation of measure part of my preferences, instead of "magical reality fluid." Thinking that your preferences are objective facts about the world seems like one of the oldest errors in the book, which is only possibly justified in this case because we are still confused about the hard problem of consciousness.

As other commenters have observed, it seems clear that you should never actually believe that the mugger can influence the lives of 3^^^^3 other folks and will do so at your suggestion, whether or not you've made any special "leverage adjustment." Nevertheless, even though you never believe that you have such influence, you would still need to pass to some bounded utility function if you want to use the normal framework of expected utility maximization, since you need to compare the goodness of whole worlds. Either that, or you would need to make quite significant modifications to your decision theory.

Comment author: drnickbone
10 May 2013 06:47:56AM
1 point

A note: it looks like what Eliezer is suggesting here is not the same as UDASSA. See my analysis here (and endoself's reply), and here.

The big difference is that UDASSA won't impose the same locational penalty on nodes in extreme situations, since the measure is shared unequally between nodes. There are programs q of relatively short length that can select out such extreme nodes (parties getting genuine offers from Matrix Lords with the power of 3^^^3) and so give them much higher relative weight than 1/3^^^3. Combine this with an unbounded utility, and the mugger problem is still there (as is the divergence in expected utility).
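The contrast between the two weightings can be made concrete with a toy log-weight comparison. This is my own illustrative sketch (the function names and the 1000-bit selector length are assumptions, not figures from the thread): a selector program's weight depends only on its own length, while a uniform locational penalty depends on the number of nodes.

```python
import math

# Toy comparison (illustrative numbers, my own construction):
# a UDASSA-style selector program q gets weight ~ 2^-|q|, a fixed cost.
def log2_selector_weight(bits):
    return -bits

# A uniform "locational" penalty spreads weight 1/N over N nodes.
def log2_uniform_penalty(log2_n):
    return -log2_n

# Even 3^^3 = 3^27 gives log2(N) of only ~42.8; 3^^^3 is towers beyond that.
log2_3up3 = 27 * math.log2(3)

print(log2_selector_weight(1000))       # -1000, independent of N
print(log2_uniform_penalty(log2_3up3))  # about -42.8 for N = 3^^3

# As N grows, -log2(N) falls without bound while the selector's -1000 stays
# fixed, so a short q gives extreme nodes weight far above 1/N; paired with
# an unbounded utility, the expected-utility sum diverges.
```

The design point is just that |q| is a constant of the description, not a function of the number of observers the mugger names.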

I agree that what Eliezer described is not exactly UDASSA. At first I thought it was just like UDASSA but with a speed prior, but now I see that that's wrong. I suspect it ends up being within a constant factor of UDASSA, just by considering universes with tiny little demons that go around duplicating all of the observers a bunch of times.

If you are using UDT, the role of UDASSA (or any anthropic theory) is in the definition of the utility function. We define a measure over observers, so that we can say how good a state of affairs is (by looking at the total goodness under that measure). In the case of UDASSA the utility is guaranteed to be bounded, because our measure is a probability measure. Similarly, there doesn't seem to be a mugging issue.

Comment author: drnickbone
11 May 2013 08:21:28AM
0 points

As lukeprog says here, this really needs to be written up. It's not clear to me that the expected utility is bounded just because the measure over observers (or observer moments) sums to one.

Here's a stab. Let's use s to denote a sub-program of a universe program p, following the notation of my other comment. Each s gets a weight w(s) under UDASSA, and we normalize to ensure Sum{s} w(s) = 1.

Then, presumably, an expected utility looks like E(U) = Sum{s} U(s) w(s), and this is clearly bounded provided the utility U(s) for each observer moment s is bounded (and U(s) = 0 for any sub-program which isn't an "observer moment").

But why is U(s) bounded? It doesn't seem obvious to me (perhaps observer moments can be arbitrarily blissful, rather than saturating at some state of pure bliss). Also, what happens if U bears no relationship to experiences/observer moments, but just counts the number of paperclips in the universe p? That's not going to be bounded, is it?
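The boundedness argument above can be checked on a finite toy instance. The weights and utilities below are illustrative numbers of my own, not anything from the thread:

```python
# Finite toy instance of E(U) = Sum{s} U(s) w(s); weights and per-moment
# utilities are made-up illustrative values.
w = {"s1": 0.5, "s2": 0.25, "s3": 0.25}   # normalized: Sum{s} w(s) = 1
U = {"s1": 10.0, "s2": -3.0, "s3": 7.0}   # bounded per-moment utilities

EU = sum(U[s] * w[s] for s in w)

# If |U(s)| <= B everywhere and the weights sum to 1, then |E(U)| <= B:
B = max(abs(u) for u in U.values())
assert sum(w.values()) == 1.0
assert abs(EU) <= B

# A paperclip-counting utility attaches value to the universe p itself rather
# than to measure-weighted observer moments, so no such bound applies to it.
```

The bound only holds because both conditions are used: normalization of w and a uniform cap on U(s); dropping either one (arbitrarily blissful moments, or universe-level counts) breaks it.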

Comment author: cousin_it
06 May 2013 06:05:48PM
1 point

Yeah, I like this solution too. It doesn't have to be based on the universal distribution, any distribution will work. You must have some way of distributing your single unit of care across all creatures in the multiverse. What matters is not the large number of creatures affected by the mugger, but their total weight according to your care function, which is less than 1 no matter what outlandish numbers the mugger comes up with. The "leverage penalty" is just the measure of your care for not losing $5, which is probably more than 1/3^^^^3.
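The "single unit of care" argument can be sketched numerically. All numbers below are illustrative assumptions of mine, not claims from the comment:

```python
# Toy care-function decision (illustrative numbers, my own construction):
# one unit of care is spread over all creatures in the multiverse, so the
# mugger can command at most the total care weight on distant creatures.
care_for_my_5_dollars = 1e-9    # small, but a fixed positive slice of care
care_for_all_distant  = 1e-12   # total care weight on everyone the mugger
                                # could possibly affect; necessarily < 1
p_mugger_truthful     = 1e-6    # generous credence in the mugger's story

# Care-weighted expected loss from each choice:
loss_if_refuse = p_mugger_truthful * care_for_all_distant
loss_if_pay    = care_for_my_5_dollars

# Because care weights form a measure summing to 1, loss_if_refuse is capped
# by care_for_all_distant no matter what number the mugger quotes.
print(loss_if_pay > loss_if_refuse)   # True with these illustrative numbers
```

Note the cap is structural: quoting 3^^^^3 creatures changes how finely the fixed weight care_for_all_distant is subdivided, not its total.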





I agree it would be nice if things were better written up; right now there are the description I linked and Hal Finney's.

If individual moments can be arbitrarily good, then I agree you have unbounded utilities again.

If you count the number of paperclips, you would again get into trouble; the analogous thing to do would be to count the measure of paperclips.
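One way to read "count the measure of paperclips" is sketched below. This construction is my own interpretation, with illustrative numbers, not something spelled out in the thread:

```python
# Hedged sketch: rather than summing raw paperclip counts across locations,
# total the UDASSA-style weight of the locations that contain paperclips.
# That total is capped by the whole measure, which sums to 1.
locations = [
    {"w": 0.4, "clips": 2},
    {"w": 0.1, "clips": 10**100},  # enormous count, but only 0.1 of the measure
    {"w": 0.5, "clips": 0},
]

raw_count = sum(loc["clips"] for loc in locations)  # can grow without bound
clip_measure = sum(loc["w"] for loc in locations if loc["clips"] > 0)

assert clip_measure <= 1.0  # bounded no matter how large the raw counts get
```

The raw count explodes with the second location's 10^100 clips, while the measure-weighted version cannot exceed 1, which is the bounded-utility property being claimed.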


Who might have the time, desire, and ability to write up UDASSA clearly, if MIRI provides them with resources?