Gary_Drescher comments on Ingredients of Timeless Decision Theory - Less Wrong

Post author: Eliezer_Yudkowsky 19 August 2009 01:10AM




Comment author: Gary_Drescher 19 August 2009 07:58:22PM 3 points

I don't think DBDT gives the right answer if the predictor's snapshot of the local universe-state was taken before the agent was born (or before humans evolved, or whatever), because the "critical point", as Fisher defines it, occurs too late. But a one-box chooser can still expect a better outcome.
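The expected-value arithmetic behind that last claim can be sketched in a few lines. This is an illustrative calculation only, using the standard Newcomb stakes ($1,000,000 in the opaque box, $1,000 in the transparent box) and an assumed predictor accuracy of 0.99 — none of these specific numbers appear in the comment itself:

```python
def expected_value(one_box: bool, accuracy: float = 0.99) -> float:
    """Expected payoff in Newcomb's problem for a given disposition.

    Assumes the predictor filled the opaque box iff it predicted
    one-boxing, and that its prediction is correct with probability
    `accuracy` (an illustrative assumption, not a fixed feature of
    the problem).
    """
    big, small = 1_000_000, 1_000
    if one_box:
        # One-boxer gets the big box whenever the predictor was right.
        return accuracy * big
    # Two-boxer always gets the small box, plus the big box only
    # in the unlikely case the predictor was wrong.
    return (1 - accuracy) * big + small

print(expected_value(one_box=True))   # ~ 990,000
print(expected_value(one_box=False))  # ~ 11,000
```

The one-boxer's expected payoff dominates for any predictor accuracy above about 0.5005, which is why the timing of the snapshot (even a billion years early) doesn't change which disposition a chooser should want to have.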

Comment author: Eliezer_Yudkowsky 19 August 2009 08:57:51PM 4 points

It looks to me like DBDT is working in the direction of TDT but isn't quite there yet. It looks similar to the sort of reasoning I was talking about earlier, where you try to define a problem class over payoff-determining properties of algorithms.

But this isn't the same as a reflectively consistent decision theory, because you can only maximize on the problem class from outside the system - you presume an existing decision process or ability to maximize, and then maximize the dispositions using that existing decision theory. Why not insert yet another step? What if one were to talk about dispositions to choose particular disposition-choosing algorithms as being rational? In other words, maximizing "dispositions" from outside strikes me as close kin to "precommitment" - it doesn't so much guarantee reflective consistency of viewpoints, as pick one particular viewpoint to have control.

As Drescher points out, if the base theory is a CDT, then there's still a possibility that DBDT will end up two-boxing if Omega takes a snapshot of the (classical) universe a billion years ago, before DBDT places the "critical point". A base theory of TDT, of course, would one-box, but then you don't need the edifice of DBDT on top, because the edifice doesn't add anything. So you could define "reflective consistency" in terms of being a "fixed point under precommitment or disposition-choosing steps".

TDT is validated by the sort of reasoning that goes into DBDT, but the TDT algorithm itself is a plain-vanilla non-meta decision theory which chooses well on-the-fly without needing to step back and consider its dispositions, or precommit, etc. The Buck Stops Immediately. This is what I mean by "reflective consistency". (Though I should emphasize that so far this only works on the simple cases that constitute 95% of all published Newcomblike problems, and in complex cases like the ones Wei Dai and I are talking about, I don't know any good fixed algorithm (let alone a single-step non-meta one).)

Comment author: Gary_Drescher 19 August 2009 11:08:11PM 4 points

Exactly. Unless "cultivating a disposition" amounts to a (subsequent-choice-circumventing) precommitment, you still need a reason, when you make that subsequent choice, to act in accordance with the cultivated disposition. And there's no good explanation for why that reason should care about whether or not you previously cultivated a disposition.

Comment author: Eliezer_Yudkowsky 19 August 2009 11:09:15PM 0 points

(Though I think the paper was trying to use dispositions to define "rationality" more than to implement an agent that would consistently carry out those dispositions?)

Comment author: Gary_Drescher 19 August 2009 11:34:21PM 1 point

I didn't really get the purpose of the paper's analysis of "rationality talk". Ultimately, as I understood the paper, it was making a prescriptive argument about how people (as actually implemented) should behave in the scenarios presented (i.e., the "rational" way for them to behave).

Comment author: timtyler 20 August 2009 06:44:34AM 0 points

I don't know if Justin Fisher's work exactly replicates your own conclusions. However, it seems to have much the same motivations, and to have reached many of the same conclusions.

FWIW, it took me about 15 minutes to find that paper in a literature search.

Another relevant paper:

"No regrets: or: Edith Piaf revamps decision theory".

That one seems to have christened what you tend to refer to as "consistency under reflection" as "desire reflection".

I don't much like either term - but I don't currently have a better alternative to offer.

Comment author: Eliezer_Yudkowsky 20 August 2009 07:14:23AM 0 points

Violation of desire reflection would be a sufficient condition for violation of dynamic consistency, which in turn is a sufficient condition to violate reflective consistency. I don't see a necessity link.

Comment author: timtyler 20 August 2009 07:03:15AM 0 points

I had a look at the Wikipedia "Precommitment" article to see whether precommitment is really as inappropriate as it is being portrayed here.

According to the article, the main issue seems to involve cutting off your own options.

Is a sensible one-boxing agent "precommitting" to one-boxing by "cutting off its own options" - namely the option of two-boxing?

On one hand, they still have the option and a free choice when they come to decide. On the other hand, the choice has been made for them by their own nature - and so they don't really have the option of choosing any more.

My assessment is that the word is not obviously totally inappropriate.

Does "disposition" have the same negative connotations as "precommitting" has? I would say not: "disposition" seems like a fairly appropriate word to me.

Comment author: timtyler 20 August 2009 06:39:09AM -2 points

The most obvious reply to the point about dispositions to have dispositions is to take a behaviourist stance: if a disposition results in particular actions under particular circumstances, then a disposition to have a disposition (plus the ability to self-modify) is just another type of disposition, really.