eli_sennesh comments on Naturalistic trust among AIs: The parable of the thesis advisor's theorem - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
The "tiling agents" issue mostly wasn't relevant to this article, but getting around Löb's Theorem is the goal there. If you can get around Löb's Theorem and prove things about formal systems at least as complex as yourself, then (and only then) can you construct a reasoning system more powerful than yourself and set it to work on your own task.
Otherwise, you can't prove that an agent reasoning in a higher-order logic than your own will pursue your goals or, put another way, retain your beliefs (i.e., I don't want to replace myself with an agent who will reason that the sky is green when I'm quite sure it's blue).
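For reference, the obstacle being discussed can be stated as follows; this is a standard formulation of Löb's Theorem, not something from the original comment:

```latex
% Löb's Theorem: let T be a theory extending Peano Arithmetic, and let
% \Box P abbreviate "P is provable in T". Then for any sentence P:
%
%     if  T \vdash \Box P \rightarrow P,   then  T \vdash P.
%
% Internalized form:
%     T \vdash \Box(\Box P \rightarrow P) \rightarrow \Box P
%
% The relevance to tiling agents: an agent using theory T cannot prove
% the blanket soundness schema "\Box P \rightarrow P for all P" about a
% successor that reasons in T (or anything stronger), since by Löb's
% Theorem that would force T to prove every sentence P, i.e. be
% inconsistent. Hence the difficulty of trusting a successor at least
% as powerful as yourself.
```

This is why "getting around" Löb's Theorem, rather than merely applying it, is what the tiling-agents program requires.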