potato comments on Timeless Decision Theory: Problems I Can't Solve - Less Wrong

Post author: Eliezer_Yudkowsky 20 July 2009 12:02AM


Comment author: potato 23 April 2012 05:02:55AM 2 points

Here's a crack at the coin problem.

First, TDT seems to answer correctly under one condition: if P(some agent will use my choice as evidence about how I am going to act in these situations and make this offer) = 0, then certainly our AI shouldn't give Omega any money. On the other hand, if P(some agent will use my choice as evidence about how I am going to act in these situations and make this offer) = 0.5, then the expected utility of paying = -100 + 0.5(0.5(1,000,000) + 0.5(-100)).

So my general solution is this: add a node that represents the probability of repeating one of these trials, and keep track of its value like any other node, carefully and gradually. Giving money is only winning if you have the opportunity to make more money later because Omega or someone else knows you give money; otherwise you shouldn't give money.
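To make the arithmetic concrete, here's a small sketch of the expected-utility calculation above. The payoffs are assumed from the comment's numbers: paying Omega costs 100 now, and a repeated trial pays 1,000,000 on one coin outcome and costs 100 on the other, each with probability 0.5. The function name and structure are just for illustration.

```python
def expected_utility_of_paying(p_repeat):
    """Expected utility of paying Omega now, given probability
    p_repeat that some agent later repeats one of these trials
    because it knows we are the type who pays."""
    pay_now = -100                                   # cost of paying this time
    repeat_value = 0.5 * 1_000_000 + 0.5 * (-100)    # value of one repeated trial
    return pay_now + p_repeat * repeat_value

print(expected_utility_of_paying(0.0))  # -100.0: with no repeats, paying just loses
print(expected_utility_of_paying(0.5))  # 249875.0: with repeats likely, paying wins
```

At p_repeat = 0 the calculation reduces to the pure loss of -100, matching the first case; at p_repeat = 0.5 it reproduces the -100 + 0.5(0.5(1,000,000) + 0.5(-100)) figure from the comment.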