timtyler comments on Ingredients of Timeless Decision Theory - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
It looks to me like DBDT is working in the direction of TDT but isn't quite there yet. It looks similar to the sort of reasoning I was talking about earlier, where you try to define a problem class over payoff-determining properties of algorithms.
But this isn't the same as a reflectively consistent decision theory, because you can only maximize on the problem class from outside the system - you presume an existing decision process or ability to maximize, and then maximize the dispositions using that existing decision theory. Why not insert yet another step? What if one were to talk about dispositions to choose particular disposition-choosing algorithms as being rational? In other words, maximizing "dispositions" from outside strikes me as close kin to "precommitment" - it doesn't so much guarantee reflective consistency of viewpoints, as pick one particular viewpoint to have control.
As Drescher points out, if the base theory is a CDT, then there's still a possibility that DBDT will end up two-boxing if Omega takes a snapshot of the (classical) universe a billion years ago before DBDT places the "critical point". A base theory of TDT, of course, would one-box, but then you don't need the edifice of DBDT on top because the edifice doesn't add anything. So you could define "reflective consistency" in terms of "fixed point under precommitment or disposition-choosing steps".
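Drescher's point can be illustrated with a toy model. This is a hypothetical sketch (the names `dbdt_choice`, `omega_prediction`, and the boolean timing flag are illustrative assumptions, not anything defined in the post): it assumes a standard Newcomb payoff matrix and an Omega whose snapshot accurately captures the agent's algorithm, and shows how a CDT-based disposition agent's act flips depending on whether the snapshot predates the "critical point".

```python
# Toy model of the timing argument: a disposition-based agent whose
# base theory is CDT one-boxes only if its "critical point" (the moment
# it fixes its disposition) comes before Omega's snapshot.

def newcomb_payoff(choice, prediction):
    """Standard Newcomb payoffs: the opaque box holds $1M iff Omega
    predicted one-boxing; the transparent box always holds $1k."""
    opaque = 1_000_000 if prediction == "one-box" else 0
    transparent = 1_000
    return opaque + (transparent if choice == "two-box" else 0)

def dbdt_choice(snapshot_before_critical_point):
    # If Omega's snapshot predates the critical point, the CDT base
    # theory treats the box contents as causally fixed and two-boxes;
    # otherwise the chosen disposition (one-boxing) governs the act.
    return "two-box" if snapshot_before_critical_point else "one-box"

def omega_prediction(snapshot_before_critical_point):
    # Omega's snapshot captures the agent's algorithm, so its
    # prediction matches the agent's actual act in either timing.
    return dbdt_choice(snapshot_before_critical_point)

for early_snapshot in (True, False):
    act = dbdt_choice(early_snapshot)
    pred = omega_prediction(early_snapshot)
    print(f"snapshot first: {early_snapshot} -> {act}, "
          f"payoff ${newcomb_payoff(act, pred):,}")
```

Under these assumptions the early-snapshot case yields $1,000 (two-boxing against a correct predictor) and the late-snapshot case yields $1,000,000, matching the claim that a TDT base theory, which one-boxes regardless of where the snapshot falls, makes the DBDT edifice redundant.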
TDT is validated by the sort of reasoning that goes into DBDT, but the TDT algorithm itself is a plain-vanilla non-meta decision theory which chooses well on-the-fly without needing to step back and consider its dispositions, or precommit, etc. The Buck Stops Immediately. This is what I mean by "reflective consistency". (Though I should emphasize that so far this only works on the simple cases that constitute 95% of all published Newcomblike problems, and in complex cases like the ones Wei Dai and I are talking about, I don't know of any good fixed algorithm (let alone a single-step non-meta one).)
I had a look at the Wikipedia "Precommitment" article to see whether precommitment is actually as inappropriate a term as it is being portrayed here.
According to the article, the main issue seems to involve cutting off your own options.
Is a sensible one-boxing agent "precommitting" to one-boxing by "cutting off its own options" - namely the option of two-boxing?
On one hand, they still have the option and a free choice when they come to decide. On the other hand, the choice has been made for them by their own nature - and so they don't really have the option of choosing any more.
My assessment is that the word is not obviously inappropriate.
Does "disposition" have the same negative connotations as "precommitting" has? I would say not: "disposition" seems like a fairly appropriate word to me.