
MrMind comments on Open thread, Nov. 24 - Nov. 30, 2014 - Less Wrong Discussion

4 Post author: MrMind 24 November 2014 08:56AM




Comment author: MrMind 26 November 2014 11:02:36AM 0 points [-]

Is TDT accurately described by "CDT + acausal communication through mutual emulation"?

Comment author: wedrifid 26 November 2014 12:34:07PM 2 points [-]

Is TDT accurately described by "CDT + acausal communication through mutual emulation"?

Communication isn't enough. CDT agents can't cooperate in a prisoner's dilemma even if you put them in the same room and let them talk to each other, and they won't be able to cooperate in analogous trades across time no matter how much acausal 'communication' they have.
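A toy illustration of why talking doesn't help a CDT agent: holding the opponent's action fixed, defection is the better causal reply in every case, so a CDT agent defects no matter what was said in the room. (A minimal sketch; the payoff numbers are standard illustrative values, not from the comment.)

```python
# Row player's payoffs in a standard prisoner's dilemma.
# C = cooperate, D = defect. Numbers are illustrative.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def cdt_best_response(opponent_action):
    """A CDT agent treats the opponent's action as fixed and picks
    the causally best reply to it."""
    return max("CD", key=lambda a: PAYOFF[(a, opponent_action)])

# Whatever the opponent does -- even after any amount of
# communication -- defection dominates for the CDT agent.
print(cdt_best_response("C"))  # D
print(cdt_best_response("D"))  # D
```

Since both agents reason this way, (D, D) is the only outcome CDT reaches here, which is the point wedrifid is making.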

Comment author: IlyaShpitser 26 November 2014 05:18:15PM *  1 point [-]

I view TDT as a bit unnatural; UDT is more natural to me (after people explained TDT and UDT to me).

I think of UDT as a decision theory of 'counterfactually equitable rational precommitment' (?controversial phrasing?).

So you (or all counterfactual "you"s) precommit in advance to do the [optimal thing], where this [optimal thing] is defined so as not to give preferential treatment to any specific counterfactual version of you. This is vague; unfortunately, making it less vague is a paper-length project.

:)
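The policy-level precommitment flavor described above can be sketched with a toy Newcomb's problem: instead of choosing an action after seeing the situation, the agent selects the whole policy with the best payoff, under the assumption of a perfectly accurate predictor. (A minimal sketch of the idea only; the setup and dollar figures are the usual illustrative ones, not anything from the comment.)

```python
# Toy Newcomb's problem: the predictor fills the opaque box with
# $1,000,000 iff it predicts the agent's *policy* is one-boxing.
# Assumes a perfectly accurate predictor for simplicity.
def payoff(policy):
    predicted_one_box = (policy == "one-box")
    big_box = 1_000_000 if predicted_one_box else 0
    if policy == "one-box":
        return big_box
    else:  # "two-box": take the opaque box plus the visible $1,000
        return big_box + 1_000

# Optimize over policies, not over actions-given-a-situation.
best_policy = max(["one-box", "two-box"], key=payoff)
print(best_policy)  # one-box ($1,000,000 vs $1,000)
```

A CDT agent evaluating actions after the boxes are filled would two-box; optimizing at the policy level, as in the precommitment picture above, selects one-boxing instead.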


Folks working on UDT, feel free to chime in and correct me if any of the above is false.

Comment author: MrMind 27 November 2014 08:11:59AM 0 points [-]

But doesn't UDT rely on perfect information about the problem at hand?

If so, could UDT be seen as the limit of TDT with complete information?