Ezekiel comments on Problematic Problems for TDT - Less Wrong

36 Post author: drnickbone 29 May 2012 03:41PM




Comment author: Ezekiel 23 May 2012 10:51:42PM *  7 points

In what way is Newcomb's Problem "anti-causality"?

If you don't like the superpowerful predictor, it works for human agents as well. Imagine you need to buy something but don't have cash on you, so you tell the shopkeeper you'll pay him tomorrow. If he thinks you're telling the truth, he'll give you the item now and let you come back tomorrow. If not, you lose a day's worth of use, and so some utility.

So your best bet (if you're selfish) is to tell him you'll pay tomorrow, take the item, and never come back. But what if you're a bad liar? Then you'll blush or stammer or whatever, and you won't get your good.

A regular Causal agent, however, having taken the item, will not come back the next day - and you know it, and it will show on your face. So in order to get what you want, you have to actually be the kind of person who respects their past selves' decisions - a TDT agent, or a CDT agent with some pre-commitment system.

The above has the same attitude to causality as Newcomb's Problem - specifically, it includes another agent rewarding you based on that agent's calculations of your future behaviour. But it's a situation I've been in several times.
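The payoff structure of the shopkeeper scenario can be sketched in a few lines of code. This is my own illustrative model, not anything from the comment: the numbers are arbitrary, and it assumes the shopkeeper reads the buyer's disposition perfectly (the bad-liar case), which is what makes it Newcomb-like.

```python
# Hypothetical payoff sketch of the shopkeeper scenario.
# Assumptions (mine): the shopkeeper predicts the buyer's disposition
# perfectly, and the item is worth more than its price.

ITEM_VALUE = 10   # utility of getting the item a day early
PRICE = 6         # utility cost of actually paying tomorrow

def buyer_payoff(pays_tomorrow: bool, shopkeeper_trusts: bool) -> int:
    """Buyer's utility given their disposition and the shopkeeper's
    (assumed accurate) read of that disposition."""
    if not shopkeeper_trusts:
        return 0                   # no credit: lose a day's worth of use
    if pays_tomorrow:
        return ITEM_VALUE - PRICE  # item now, pay later
    return ITEM_VALUE              # item now, never come back

# A transparent (bad-liar) agent is predicted correctly, as in
# Newcomb's Problem: trust goes exactly to dispositions that pay.
cdt_defector = buyer_payoff(pays_tomorrow=False, shopkeeper_trusts=False)
reliable_payer = buyer_payoff(pays_tomorrow=True, shopkeeper_trusts=True)

print(cdt_defector)    # 0 - the would-be defector gets no credit
print(reliable_payer)  # 4 - the agent who keeps commitments comes out ahead
```

The point the code makes explicit: once prediction tracks disposition, the agent who can genuinely commit to paying does strictly better than the one who would defect, even though defecting dominates causally after the item is in hand.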

EDIT: Grammar.

Comment author: ciphergoth 24 May 2012 07:22:37AM 2 points

This example is much like Parfit's Hitchhiker, in a less extreme form.