Jack comments on Solve Psy-Kosh's non-anthropic problem - Less Wrong

Post author: cousin_it 20 December 2010 09:24PM

Comment author: Jack 21 December 2010 03:36:45AM 6 points

It is an anthropic problem. Agents who don't get to make decisions, by definition, don't really exist in the ontology of decision theory. For a decision-theoretic agent, being told you are not the decider is equivalent to dying.

Comment author: wedrifid 21 December 2010 05:02:04AM 4 points

For a decision-theoretic agent, being told you are not the decider is equivalent to dying.

Or more precisely it is equivalent to falling into a crack in spacetime without Amelia Pond having a crush on you. ;)

Comment author: Emile 21 December 2010 11:06:19AM 1 point

It is an anthropic problem.

Depends on what you mean by "anthropic problem". The first Google result for that term right now is this post, so the term doesn't seem to have a widely agreed-upon meaning, though there is some interesting discussion on Wikipedia.

Maybe we could distinguish

  • "Anthropic reasoning", where your reasoning needs to take into account not only the facts you have observed (i.e. standard Bayesian reasoning) but also the fact that you are there to make the decision at all.

  • "Anthropic scenarios" (ugh), where the existence of agents comes into account (like the Sleeping Beauty problem, our universe, etc.).

Anthropic scenarios feature outlandish situations (teleporters, Sleeping Beauty) or are somewhat hard to reproduce (the existence of our universe). So scenarios that aren't outlandish anthropic scenarios but still require anthropic reasoning are nice for intuition (especially in an area like this, where everybody's intuition starts breaking down), even if they don't change anything from a pure decision-theory point of view.

I'm not very happy with this decomposition; it seems to me "is this an anthropic problem?" can be answered by "Well, it does require anthropic reasoning, but it doesn't require outlandish scenarios like most similar problems", but there may be a better way of putting it.

Comment author: Jack 21 December 2010 07:19:54PM 7 points

It is a nice feature of Psy-kosh's problem that it pumps the confusing intuitions we see in scenarios like the Sleeping Beauty problem without recourse to memory-erasing drugs or teleporters; I think it tells us something important about this class of problem. But mathematically the problem is equivalent to one where the coin flip doesn't make nine people deciders but instead copies you nine times, so I don't think there is a good justification for labeling these problems differently.

The interesting question is what this example tells us about the nature of this class of problem, and I'm having trouble putting my finger on just what that is.
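(For concreteness, the intuition pump discussed in this thread can be sketched numerically. The payoff structure below is an assumption, taken from the usual statement of Psy-Kosh's problem rather than restated anywhere in this thread: ten people, a fair coin; heads makes one of them the decider, tails makes nine of them deciders; "yea" donates $1000 on tails and $100 on heads, "nay" donates $700 either way.)

```python
# Sketch of the two calculations in Psy-Kosh's problem. NOTE: the payoffs
# here are assumed from the usual statement of the problem, not quoted
# from this thread.
P_TAILS = 0.5
YEA_TAILS, YEA_HEADS, NAY = 1000.0, 100.0, 700.0

# Ex-ante, before anyone learns whether they are a decider:
ex_ante_yea = P_TAILS * YEA_TAILS + (1 - P_TAILS) * YEA_HEADS

# Anthropic update on "I am a decider": 9 of 10 people are deciders on
# tails but only 1 of 10 on heads, so P(tails | I am a decider) = 0.9.
p_tails = (P_TAILS * 9 / 10) / (P_TAILS * 9 / 10 + (1 - P_TAILS) * 1 / 10)
updated_yea = p_tails * YEA_TAILS + (1 - p_tails) * YEA_HEADS

print(ex_ante_yea, NAY, updated_yea)
```

"Yea" flips from worse than "nay" ex ante (550 < 700) to apparently better after the anthropic update (910 > 700), which is exactly the tension the thread is about.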

Comment author: cousin_it 22 December 2010 06:58:53AM 4 points

Right. That's the question I wanted people to answer, rather than just solving the object-level problem (UDT solves it just fine).