
lukeprog comments on AGI and Friendly AI in the dominant AI textbook

Post author: lukeprog, 11 March 2011 04:12AM, 54 points


Comment author: lukeprog, 12 March 2011 07:10:09AM, 1 point

Gotcha.

Comment author: Eliezer_Yudkowsky, 12 March 2011 08:21:28AM, 9 points

Or to look at it another way, Solomonoff induction was the first mathematical specification of a system that could, in principle if not in the physical universe, learn anything learnable by a computable system.
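
[For concreteness, one standard way to formalize the system being described, assuming the usual formulation via a universal monotone Turing machine U; the symbols U, M, and p here are the conventional textbook notation, not anything from the thread. Solomonoff's universal prior weights each finite string x by the total measure of programs that produce it:

    $$M(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-|p|},$$

where the sum ranges over minimal programs p on which U outputs some string beginning with x. Prediction proceeds by conditioning, $M(x_{n+1} \mid x_{1..n}) = M(x_{1..n}\,x_{n+1}) / M(x_{1..n})$. Since M is only lower-semicomputable, no physical machine can evaluate it exactly, which is the sense of "in principle if not in the physical universe."]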

Comment author: cousin_it, 13 March 2011 02:22:34PM, 3 points (edited)

I think the interesting feature of Solomonoff induction is that it does no worse (up to an additive constant) than any other predictor in its own class, the lower-semicomputable semimeasures, not just predictors from a strictly lower class, such as computable humans. I'm currently trying to solve a related problem: it's easy to devise an agent that beats all humans, but difficult to devise one that's optimal within its own class.
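
[The sense in which M "does no worse" within its own class can be stated precisely via the standard dominance result, as in Li and Vitányi's textbook treatment; the constant $c_\mu$ and complexity $K(\mu)$ below are the usual notation, not anything from the thread. For every lower-semicomputable semimeasure $\mu$ there is a constant $c_\mu > 0$ such that

    $$M(x) \;\ge\; c_\mu\, \mu(x) \quad \text{for all strings } x,$$

and $c_\mu$ can be taken on the order of $2^{-K(\mu)}$, where $K(\mu)$ is the complexity of an index for $\mu$. Taking logarithms, M's cumulative log-loss on any sequence exceeds $\mu$'s by at most the fixed constant $\log_2(1/c_\mu)$. So "does no worse" means optimal within its own class up to an additive constant, not exact optimality, which is what makes the analogous within-class optimality hard to achieve for the agent problem described above.]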