
Mark_Friedenbach comments on The Inefficiency of Theoretical Discovery - Less Wrong Discussion

Post author: lukeprog 03 November 2013 09:26PM




Comment author: [deleted] 04 November 2013 10:47:58PM 0 points

The FAI-related things you would want to prove are of the form: "when given a world state characterized by X and input percepts characterized by Y, the program always generates outputs characterized by Z." For many existing and popular AI architectures, we haven't the foggiest idea how to do this. (It's no surprise that Eliezer Yudkowsky favors Pearl's causal probabilistic graph models, where such analysis is at least known to be possible.)
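To make the parenthetical concrete: one reason analysis of probabilistic graph models is tractable is that a fully specified model determines every marginal and conditional exactly, so claims about the output distribution can be checked mechanically. A minimal sketch (a hypothetical toy model, not from the thread) of exact inference by enumeration on a three-node chain X → Y → Z:

```python
import itertools

# Conditional probability tables for a chain X -> Y -> Z, all variables boolean.
# These numbers are made up for illustration.
P_X = {True: 0.3, False: 0.7}
P_Y_given_X = {True: {True: 0.9, False: 0.1},
               False: {True: 0.2, False: 0.8}}
P_Z_given_Y = {True: {True: 0.8, False: 0.2},
               False: {True: 0.1, False: 0.9}}

def joint(x, y, z):
    """Joint probability factorized along the graph structure."""
    return P_X[x] * P_Y_given_X[x][y] * P_Z_given_Y[y][z]

def p_z(z):
    """Exact marginal P(Z = z), computed by summing out X and Y."""
    return sum(joint(x, y, z)
               for x, y in itertools.product([True, False], repeat=2))
```

Because the model is finite and explicit, any question about the distribution of Z reduces to a finite sum, which is the kind of mechanical analyzability the comment alludes to.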

Comment author: jsteinhardt 05 November 2013 03:40:17AM 0 points [-]

To the extent that X, Y, and Z can be written formally within the programming language, program analysis is, at least in principle, fully capable of proving such a statement. I apologize if this comes off as rude, but your claim that "we haven't the foggiest idea how to do this" is flat-out false. While there are certainly unique challenges to reasoning about the sort of code that gets written in machine learning, it seems to me that the main reason we lack well-developed analysis tools is that most code doesn't look like machine learning code, so there has been little pressure to develop them.
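As an illustration of the kind of guarantee program analysis can deliver, here is a minimal sketch (hypothetical, not from the thread) of abstract interpretation over a sign domain: it establishes that an `abs`-like program's output satisfies the postcondition "output ≥ 0" for *every* possible input, by covering all concrete integers with finitely many abstract values:

```python
from enum import Enum

class Sign(Enum):
    """Abstract values; together they cover every concrete integer input."""
    NEG = "neg"        # value < 0
    ZERO = "zero"      # value == 0
    POS = "pos"        # value > 0
    NONNEG = ">=0"     # value >= 0
    TOP = "any"        # nothing known about the value

def abs_transfer(x: Sign) -> Sign:
    """Abstract semantics of abs(): maps each input sign to the output sign."""
    if x == Sign.NEG:
        return Sign.POS
    if x in (Sign.ZERO, Sign.POS, Sign.NONNEG):
        return x
    return Sign.NONNEG  # abs() of an unknown value is still >= 0

def implies_nonneg(s: Sign) -> bool:
    """Does this abstract value entail the postcondition 'output >= 0'?"""
    return s in (Sign.ZERO, Sign.POS, Sign.NONNEG)

# The "proof": every abstract input maps to an abstract output that entails
# the postcondition, so the statement holds for all concrete inputs.
proved = all(implies_nonneg(abs_transfer(s)) for s in Sign)
```

This is of course a toy; the point is only that once the property Z is stated formally, checking it reduces to a finite computation over the abstract domain.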