dankane comments on Timeless Decision Theory: Problems I Can't Solve - Less Wrong

39 Post author: Eliezer_Yudkowsky 20 July 2009 12:02AM


You are viewing a single comment's thread.

Comment author: nawitus 20 July 2009 10:58:38AM 0 points

If you're an AI, you do not have to (and shouldn't) pay the first $1000; you can just self-modify to pay $1000 on all the following coin flips (assuming the AI can easily rewrite its own behaviour in this way). Human brains probably don't have this capability, so I guess paying $1000 even in the first game makes sense.
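The payoff reasoning here can be sketched as a quick expected-value comparison. This is a hedged illustration, not anything from the thread: the payoff numbers ($1,000,000 on heads for agents who would pay, a $1,000 cost on tails) are the standard ones from the counterfactual-mugging setup, and the function name is my own.

```python
def expected_value_per_flip(pays_on_tails, reward=1_000_000, cost=1_000, p_heads=0.5):
    """Expected winnings per coin flip for a fixed policy.

    Omega pays `reward` on heads only to agents who would pay `cost`
    on tails; refusers get nothing either way.
    """
    heads_payoff = reward if pays_on_tails else 0  # reward only committed payers
    tails_payoff = -cost if pays_on_tails else 0   # payers lose the cost on tails
    return p_heads * heads_payoff + (1 - p_heads) * tails_payoff

ev_payer = expected_value_per_flip(True)     # 499500.0 per flip
ev_refuser = expected_value_per_flip(False)  # 0.0 per flip
```

On these numbers a committed payer comes out far ahead per flip, which is why self-modifying to pay on all future flips looks attractive even to an AI that refused the first one.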

Comment author: JamesAndrix 20 July 2009 07:26:33PM 0 points

That assumes you didn't expect to face problems like this before Omega presented you with the problem, but do expect to face them afterwards. It doesn't work at all if you only get one shot at it. (And you should already be a person who would pay, just in case you do.)