ShardPhoenix comments on Probabilistic Löb theorem - Less Wrong Discussion

24 Post author: Stuart_Armstrong 26 April 2013 06:45PM

You are viewing a single comment's thread.

Comment author: ShardPhoenix 27 April 2013 08:49:00AM 0 points

I'm still a bit vague on this Löb business. Is it a good thing or a bad thing (from an AI-creation perspective) that "Löb's theorem fails" for the probabilistic version?

edit: The old post linked to suggests that it's good that it doesn't apply here.

Comment author: Qiaochu_Yuan 27 April 2013 05:52:49PM 5 points

It's a good thing. Löb's theorem is an obstacle (the "Löbstacle," if you will).

Comment author: Stuart_Armstrong 27 April 2013 06:52:11PM 2 points

Löb's theorem means an agent cannot trust future copies of itself, or even identical copies of itself, to prove only true statements.
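
For context, the standard statement of Löb's theorem, writing \Box_T P for "theory T proves P" (a sketch in the usual modal form, not anything specific to this thread):

\[
\text{If } T \vdash \Box_T P \rightarrow P \text{ for a sentence } P \text{, then } T \vdash P;
\qquad \text{internalized: } T \vdash \Box_T(\Box_T P \rightarrow P) \rightarrow \Box_T P .
\]

So if an agent's theory T endorsed the full self-trust schema \Box_T P \rightarrow P for every sentence P, Löb's theorem would force T to prove every such P, and taking P to be a contradiction would make T inconsistent.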

Comment author: jsteinhardt 27 April 2013 07:05:54PM 1 point

Er, I don't think this is right. Löb's theorem says that an agent cannot trust future copies of itself unless those future copies use strictly weaker axioms in their reasoning system.
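
A sketch of both halves of this point, using the standard arithmetized provability predicate \mathrm{Prov}_T (ZFC and PA below are only the usual illustrative pair, not systems mentioned in the thread):

\[
T \nvdash \mathrm{Prov}_T(\ulcorner P \urcorner) \rightarrow P \quad \text{whenever } T \nvdash P,
\qquad \text{but} \qquad
\mathrm{ZFC} \vdash \forall \varphi \,\bigl(\mathrm{Prov}_{\mathrm{PA}}(\varphi) \rightarrow \mathrm{True}_{\mathbb{N}}(\varphi)\bigr).
\]

That is, a theory proves the soundness instance for P only when it already proves P itself, whereas a strictly stronger theory (here ZFC) can prove the full soundness of a weaker one (here PA); this is the sense in which trusting a successor requires the successor to be weaker.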

Comment author: Stuart_Armstrong 27 April 2013 07:07:58PM 1 point

The "can" has now been changed into "cannot". D'oh!

Comment author: hairyfigment 28 April 2013 03:56:13PM -1 points

Good for creating AGI, maybe bad for surviving it. Hopefully the knowledge will also help us predict the actions of strong self-modifying AI.

It does seem promising to this layman, since it removes the best reason I could imagine for considering that last goal impossible.