
wwa comments on "Stupid" questions thread - Less Wrong Discussion

40 points | Post author: gothgirl420666 | 13 July 2013 02:42AM


Comment author: wwa 15 July 2013 07:00:47PM * 0 points

Is true precommitment possible at all?

For a human this is an easy question, since human will isn't perfect; but what about an AI? It seems to me that "true precommitment" would require the AI to come up with a probability of 100% when it arrives at the decision to precommit, which would mean at least one prior was 100%, and that in turn means no update to that prior is ever possible.
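
A minimal sketch of the Bayesian point here (not part of the original comment, and assuming a simple binary hypothesis): under Bayes' rule, a prior of exactly 1 cannot be moved by any evidence, whereas any prior strictly between 0 and 1 can.

```python
# Sketch: why a prior of 1.0 is frozen. Bayes' rule can never move it,
# no matter how strong the evidence against the hypothesis is.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior P(H | E) from P(H), P(E | H), and P(E | not-H)."""
    numerator = likelihood_if_true * prior
    denominator = numerator + likelihood_if_false * (1 - prior)
    return numerator / denominator

# Evidence 99x more likely if the hypothesis is false than if it is true:
print(bayes_update(0.99, 0.01, 0.99))  # 0.5  -- a 99% prior drops sharply
print(bayes_update(1.00, 0.01, 0.99))  # 1.0  -- a 100% prior never moves
```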

Comment author: Qiaochu_Yuan 15 July 2013 07:41:48PM * 1 point

It seems to me that "true precommitment" would require the AI to come up with a probability of 100% when it arrives at the decision to precommit

Why? Of what?

Comment author: D_Malik 15 July 2013 11:53:30PM * 2 points

I think wwa means 100% certainty that you'll stick to the precommitted course of action. But that isn't what people mean when they say "precommitment"; they mean deliberately restricting your own future actions in a way that your future self will regret, or would have regretted had you not precommitted, or something like that. The restriction clearly can't be 100% airtight, but it's usually pretty close; it's a fuzzy category.
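
A hypothetical sketch of this reading of precommitment (the names, payoffs, and failure rate below are invented for illustration, not D_Malik's): the present self removes options from the future self's action set, and the restriction holds with high but not perfect reliability.

```python
import random

def best_action(actions, payoffs):
    """The future self simply picks whatever it prefers at the time."""
    return max(actions, key=payoffs.get)

def precommit(actions, banned, failure_prob=0.01):
    """Restrict the future action set; the commitment device can fail."""
    if random.random() < failure_prob:
        return actions                        # device broke: all options back
    return [a for a in actions if a not in banned]

# Ulysses-style example: the future self would prefer 'swim_to_sirens',
# so the present self bans it before the preference flips.
payoffs = {"swim_to_sirens": 10, "stay_on_ship": 1}
remaining = precommit(["swim_to_sirens", "stay_on_ship"],
                      banned={"swim_to_sirens"})
print(best_action(remaining, payoffs))  # almost always 'stay_on_ship'
```

The nonzero `failure_prob` captures the "not 100% airtight" point: the restriction usually binds, but nothing guarantees it with probability exactly 1.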