paulfchristiano comments on AIXI and Existential Despair - Less Wrong

13 Post author: paulfchristiano 08 December 2011 08:03PM




Comment author: paulfchristiano 09 December 2011 02:52:01AM 3 points

I agree that Gödel machines probably don't work for similar reasons, though I didn't notice your post until now. Unfortunately, I also think that your non-self-referential alternative runs into similar issues (where subsequent agents use successively weaker axiom systems, if they follow the same general design). I have been thinking along these lines independently, and I think resolving either problem will involve dealing with more fundamental issues (e.g., agents should not believe themselves to be well-calibrated). I've queued a rather long series of LW posts establishing what I consider the current state of affairs on FAI-related open problems, a few of which concern this issue (and of which the OP is the first).
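A toy sketch of the concern being raised (my own illustration, not from the thread): if an agent sidesteps Löbian self-reference by only accepting a successor whose safety is proved in a strictly weaker axiom system, then modeling "proof-theoretic strength" as a bare integer already shows the chain of rewrites must bottom out.

```python
def rewrite_chain(initial_strength):
    """Trust levels of successive agents, where each successor must be
    verified in a strictly weaker system than its predecessor."""
    levels = []
    strength = initial_strength
    while strength > 0:
        levels.append(strength)
        strength -= 1  # successor is checked in a weaker theory
    return levels

# Only finitely many self-rewrites are possible before trust is exhausted.
print(rewrite_chain(4))  # [4, 3, 2, 1]
```

This is of course a caricature; the real question is whether any design avoids the descent at all, which is why the comment points at more fundamental issues about self-trust and calibration.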

Comment author: cousin_it 09 December 2011 03:09:19AM *  1 point

I've queued a rather long series of LW posts establishing what I consider the current state of affairs on FAI-related open problems, a few of which concern this issue (and of which the OP is the first).

Nice! Does that mean you have many new results queued for posting? What can I do to learn them sooner? :-)

Unfortunately, I also think that your non-self-referential alternative runs into similar issues (where subsequent agents use successively weaker axiom systems, if they follow the same general design).

Can you expand?