Stuart_Armstrong comments on Papers framing anthropic questions as decision problems? - Less Wrong

Post author: jsalvatier 26 April 2012 12:40AM




Comment author: Stuart_Armstrong 26 April 2012 05:31:14PM

Er, no - you gave me an underspecified problem. You told me the agents were selfish (good), but then gave me only anthropic probabilities, without the non-anthropic probabilities. I assumed you meant to use SSA, and worked back from there. This may have been incorrect - were you assuming SIA? In that case the coin odds are (1/2,1/2) and (2/3,1/3), and ADT would reach different conclusions. But only because the problem was underspecified (giving anthropic probabilities without explaining the theory that goes with them is not specifying the problem).

As long as you give a full specification of the problem, ADT doesn't have an issue. You don't need to adjust free parameters or anything.

I feel like I'm missing something here. Can you explain the hole in ADT you seem to find so glaring?

Comment author: Manfred 26 April 2012 06:21:56PM

You told me the agents were selfish (good), but then just gave me anthropic probabilities, without giving me the non-anthropic probabilities.

I intended "In both of these problems there are two worlds, 'H' and 'T,' which have equal 'no anthropics' probabilities of 0.5."

In retrospect, my example of evidence (stopping some of the experiments) wasn't actually what I wanted, since an outside observer would notice it. In order to mess with anthropic probabilities in isolation you'd need to change the structure of coinflips and people-creation.

Comment author: Stuart_Armstrong 27 April 2012 12:13:55PM

In order to mess with anthropic probabilities in isolation you'd need to change the structure of coinflips and people-creation

But you can't mess with the probabilities in isolation. Suppose I were an SIA agent, for instance; then you can't change my anthropic probabilities without changing non-anthropic facts about the world.

Comment author: Manfred 27 April 2012 02:19:01PM

I'm uncertain whether what you're saying is relevant. The question at hand is, is there some change to a problem that changes anthropic probabilities, but is guaranteed not to change ADT decisions? Such a change would have to conserve the number of worlds, the number of people in each world, the possible utilities, and the "no anthropics" probabilities.

For example, if my anthropic knowledge says that I'm an agent at a specific point in time, a change in how long Sleeping Beauty stays awake in different "worlds" will change how likely I am to find myself there overall.

Comment author: Stuart_Armstrong 27 April 2012 04:31:09PM

The question at hand is, is there some change to a problem that changes anthropic probabilities, but is guaranteed not to change ADT decisions?

Is there? It would require some sort of evidence that would change your own anthropic probabilities, but that would not change the opinion of any outside observer if they saw it.

For example, if my anthropic knowledge says that I'm an agent at a specific point in time, a change in how long Sleeping Beauty stays awake in different "worlds" will change how likely I am to find myself there overall.

Doesn't feel like that would work... if you remember how long you've been awake, that makes you into slightly different agents, and if the duration of the awakening gives you any extra info, it would show up in ADT too. And if you forget how long you've been awake, that's just Sleeping Beauty with more awakenings...

Define "individual impact" as the belief that your own actions have no correlations with those of your copies (the belief your decisions control all your copies is "total impact"). Then ADT basically has the following equivalences:

  • ADT + selfless or total utilitarian = SIA + individual impact (= SSA + total impact)
  • ADT + average utilitarian = SSA + individual impact
  • ADT + selfish = SSA + individual impact + complications (e.g. with precommitments)

If those equivalences are true, it seems that we cannot vary the anthropic probabilities without varying the ADT decision.

Comment author: Manfred 28 April 2012 09:25:24PM

EDIT: Expanded first point a bit.

if you remember how long you've been awake, that makes you into slightly different agents, and if the duration of the awakening gives you any extra info, it would show up in ADT too.

Hm. One could try to fix it by splitting each point in time into different "worlds," like you suggest below. But the updating from time (let's assume there's no clock to look at, so the curves are smooth) would rely on the subjective probabilities, which you are avoiding. The update ratio is P(feels like 4 hours | heads) / P(feels like 4 hours). If P(feels like 4 hours | X) is 0.9 when X is heads and 0.8 when X is tails, then if the probabilities are (1/3, 1/3, 1/3) the ratio will be 1.08, while if the probabilities are (1/2, 1/4, 1/4) the update is a factor of about 1.059.
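These two numbers can be checked directly. A minimal sketch, assuming three observer-moments (one under heads, two under tails, as in a Sleeping Beauty setup) and the stated likelihoods of 0.9 and 0.8 for "feels like 4 hours":

```python
def update_ratio(priors, likelihoods):
    """Ratio P(evidence | world_0) / P(evidence), where P(evidence)
    is the prior-weighted mixture over all observer-moments."""
    p_evidence = sum(p * l for p, l in zip(priors, likelihoods))
    return likelihoods[0] / p_evidence

# Likelihood of "feels like 4 hours" in each observer-moment:
# heads, tails (first awakening), tails (second awakening).
likelihoods = [0.9, 0.8, 0.8]

# Thirder-style priors vs. halfer-style priors over observer-moments.
print(update_ratio([1/3, 1/3, 1/3], likelihoods))  # ~1.08
print(update_ratio([1/2, 1/4, 1/4], likelihoods))  # ~1.059
```

Running this reproduces the 1.08 and ~1.059 factors: the same likelihoods yield different update ratios depending on the prior over observer-moments.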

This does lead to a case a bit more complicated than my original examples, though, because the people in different worlds will make different decisions. I'm not even sure how ADT would handle this situation, since it has to avoid the subjective probabilities - do you respond like an outside observer, and use 0.5, 0.5 for everything?

And if you forget how long you're awake, that's just sleeping beauty with more awakenings...

Yes, that would be reasonable.

Then ADT basically has the following equivalences:

Those only hold if things are simple. To say "these might prevent things from getting any more complicated" is to put the cart before the horse.

Comment author: Stuart_Armstrong 30 April 2012 10:52:58AM

ADT does not avoid subjective probabilities - it only avoids anthropic probabilities. P(feels like 4 hours | heads) is perfectly fine. ADT only avoids probabilities that would change if you shifted from SIA to SSA or vice versa.

Comment author: Manfred 30 April 2012 04:56:58PM

It is exactly one of those probabilities.

Comment author: Stuart_Armstrong 30 April 2012 06:57:43PM

Can you spell out the full setup?

Comment author: Manfred 01 May 2012 03:54:33AM

Okay, so let's say you're given some weak evidence about which world you're in - for example, you're asked the question when you've been awake for 4 hours if the coin was Tails, vs. awake for 3.5 hours if Heads. In the Doomsday problem, this would be like learning facts about the Earth that would be different if we were about to go extinct vs. if we weren't (we know lots of these, in fact).

So let's say that your internal chronometer is telling you that it "feels like it's been 4 hours" when you're asked the question, but you're not totally sure - let's say that the only two options are "feels like it's been 4 hours" and "feels like it's been 3.5 hours," and that your internal chronometer is correctly influenced by the world 75% of the time. So P(feels like 4 | heads) = 0.25, P(feels like 3.5 | heads) = 0.75, and vice versa for tails.

A utility-maximizing agent would then make decisions based on P(heads | feels like 4 hours) - but an ADT agent has to do something else. In order to update on the evidence, an ADT agent can just weight the different worlds by the update ratio. For example, if told that the coin is more likely to land heads than tails, an ADT agent successfully updates in favor of heads.

However, what if the update ratio also depended on the anthropic probabilities (that is, on SIA vs. SSA)? That would be bad - we couldn't do the same updating trick. If our new probability is P(A|B), Bayes' rule says that's P(A)*P(B|A)/P(B), so the update ratio is P(B|A)/P(B). The numerator is easy - it's just 0.75 or 0.25. Does the denominator, on the other hand, depend on the anthropic probabilities?
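The denominator can at least be probed numerically. A quick sketch, hypothetically glossing a halfer-style prior as P(heads) = 1/2 and a thirder-style prior as P(heads) = 1/3, with the chronometer likelihoods above (0.25 and 0.75):

```python
def denominator_and_ratio(prior_heads,
                          p_feels4_given_heads=0.25,
                          p_feels4_given_tails=0.75):
    """Return (P(feels like 4 hours), update ratio P(B|heads)/P(B))
    for a given prior on heads, by total probability."""
    p_feels4 = (prior_heads * p_feels4_given_heads
                + (1 - prior_heads) * p_feels4_given_tails)
    return p_feels4, p_feels4_given_heads / p_feels4

# Hypothetical glosses of the two anthropic priors on heads:
for prior in (1/2, 1/3):
    denom, ratio = denominator_and_ratio(prior)
    print(f"P(heads)={prior:.3f}: P(B)={denom:.4f}, ratio={ratio:.4f}")
```

Under these assumed priors the denominator (0.5 vs. ~0.5833) and hence the update ratio come out differently, which illustrates the sensitivity the question is asking about - at least for this choice of priors.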