
Mark_Friedenbach comments on The Inefficiency of Theoretical Discovery - Less Wrong Discussion

19 Post author: lukeprog 03 November 2013 09:26PM



Comment author: [deleted] 07 November 2013 06:22:18AM  0 points

Of course the game is that we don't want to prove things about the algorithms in question; we are happy to form justified beliefs about them in whatever way we can, including inductive inference. But the point is that there are things we don't understand.

And the question is: who cares? The mechanism by which human beings predict their future behavior is not logical inference. Similar ad-hoc Bayesian extrapolation techniques can be used in any general AI without worrying about Löbian obstacles. So why is it such a pressing issue?
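To make the "ad-hoc Bayesian extrapolation" concrete, here is a toy sketch (my own construction, not anything proposed in the comment or by MIRI): an agent predicts its own future behavior inductively from its past behavior, via a simple Beta-Bernoulli update, with no theorem-proving about its source code and hence no Löbian self-reference involved.

```python
# Hypothetical toy example: self-prediction by inductive inference.
# The agent treats its own past 0/1 actions as Bernoulli draws and
# forms a posterior over its next action -- no proofs about its own
# code, so Löb's theorem never enters the picture.
from fractions import Fraction

def predict_next_action(history, prior_a=1, prior_b=1):
    """Posterior probability of taking action 1 next, given a 0/1 history,
    under a Beta(prior_a, prior_b) prior (defaults to uniform)."""
    ones = sum(history)
    return Fraction(prior_a + ones, prior_a + prior_b + len(history))

# Having cooperated in 8 of its last 10 rounds, the agent expects to
# cooperate next round with probability (1+8)/(2+10) = 3/4.
p = predict_next_action([1, 1, 0, 1, 1, 1, 0, 1, 1, 1])
print(p)  # 3/4
```

This is exactly the kind of justified-but-unproven belief the comment contrasts with formal self-verification: the prediction can be wrong, but forming it raises no obstacle of the Löbian kind.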

I don't wish to take away from the magnitude of your accomplishment. It is an important achievement. But in the long run I don't think it's going to be a very useful result in the construction of superhuman AGIs, specifically. And it's reasonable to ask why MIRI is assigning strategic importance to these issues.