eli_sennesh comments on Naturalistic trust among AIs: The parable of the thesis advisor's theorem - Less Wrong

Post author: Benja | 15 December 2013 08:32AM


Comment author: [deleted] | 15 December 2013 09:44:45AM

The "tiling agents" issue mostly wasn't relevant to this article, but that is the goal of getting around Lob's Theorem. If and only if you can get around Lob's Theorem and prove things regarding formal systems as complex as yourself, then and only then can you construct a reasoning system more powerful than yourself and set it to work on your own task.

Otherwise, you can't prove that an agent reasoning in a higher-order logic than your own will function according to your goals or, put another way, retain your beliefs (i.e., I don't want to replace myself with an agent that will reason that the sky is green when I'm quite sure it's blue).