Wei_Dai comments on A Request for Open Problems - Less Wrong

Post author: MrHen 08 May 2009 01:33PM




Comment author: Wei_Dai 10 May 2009 02:50:16PM 1 point

The idea of agents using UTM-based priors is a human invention, and therefore subject to human error. I'm not claiming to have an uncomputable brain, just that I've found such an error.
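For readers unfamiliar with the term, a UTM-based prior assigns each program a weight that shrinks exponentially with its length, as in Solomonoff induction. A minimal sketch of the weighting (the function and variable names are illustrative, not from any particular formalization):

```python
# Sketch of a UTM-based (Solomonoff-style) prior: each program p,
# encoded as a bitstring, receives prior weight 2^(-|p|).

def program_weight(program_bits: str) -> float:
    """Prior weight 2^-|p| for a program given as a bitstring."""
    return 2.0 ** -len(program_bits)

# Over a prefix-free set of programs (no program is a prefix of another),
# Kraft's inequality guarantees the weights sum to at most 1.
prefix_free_programs = ["0", "10", "110", "111"]
total = sum(program_weight(p) for p in prefix_free_programs)
print(total)  # 1.0 for this complete prefix code
```

Wei Dai's point is that this construction, however elegant, is itself a human artifact: it builds in the assumption that every hypothesis worth considering is computable.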

For a specific example of how human beings might deal with such scenarios, compared to agents using UTM-based priors, see "is induction unformalizable?".

Comment author: Vladimir_Nesov 10 May 2009 03:25:01PM 1 point

A model of the environment deals in observations and behaviors, not in statements about "uncomputability" and the like. No observation should be left out or declared impossible. If you, as a human, decide to trust something you label a "halting oracle", that's your decision, and it's a decision you'd want any trusted AI to carry through as well.

I suspect that the roots of this confusion are something not unlike the mind projection fallacy, with magical properties attributed to models, but I'm not competent to discuss the domain-specific aspects of this question.