Eliezer_Yudkowsky comments on AGI and Friendly AI in the dominant AI textbook - Less Wrong

54 Post author: lukeprog 11 March 2011 04:12AM


Comment author: lukeprog 11 March 2011 08:13:34AM 4 points [-]

One thing I will note is that I'm not sure why they say AGI has its roots in Solomonoff's induction paper. There is such a huge variety of approaches to AGI... what do they all have to do with that paper?

Comment author: Eliezer_Yudkowsky 12 March 2011 06:30:35AM 8 points [-]

AIXI is based on Solomonoff, and to the extent that you regard all other AGIs as approximations to AIXI...

Comment author: lukeprog 12 March 2011 07:10:09AM 1 point [-]

Gotcha.

Comment author: Eliezer_Yudkowsky 12 March 2011 08:21:28AM 9 points [-]

Or to look at it another way, Solomonoff induction was the first mathematical specification of a system that could, in principle if not in the physical universe, learn anything learnable by a computable system.
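[For readers unfamiliar with the construction being discussed: Solomonoff's universal prior can be sketched as follows, using a universal monotone Turing machine $U$. The probability assigned to a finite string $x$ sums over all programs whose output begins with $x$:

```latex
% Solomonoff's universal prior M over finite binary strings x:
% sum over all (minimal) programs p for which U's output starts with x,
% each weighted by 2 to the minus its length.
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|}
```

Here $U(p) = x*$ means $U$ run on program $p$ produces an output beginning with $x$. Predicting the next bit by the conditional $M(x1)/M(x)$ is the sense in which the system "learns anything learnable by a computable system": if the data is generated by any computable process, these predictions converge to the true probabilities. This summary is an editorial gloss, not part of the original thread.]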

Comment author: cousin_it 13 March 2011 02:22:34PM *  3 points [-]

I think the interesting feature of Solomonoff induction is that it does no worse than any other object from the same class (lower-semicomputable semimeasures), not just objects from a lower class (computable humans). I'm currently trying to solve a related problem where it's easy to devise an agent that beats all humans, but difficult to devise one that's optimal in its own class.
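[The dominance property cousin_it refers to can be stated precisely. For every lower-semicomputable semimeasure $\nu$ there is a constant $c_\nu > 0$, independent of $x$, such that:

```latex
% Universal dominance: M multiplicatively dominates every
% lower-semicomputable semimeasure nu, with a constant depending
% only on nu (roughly 2^{-K(nu)}, the complexity of describing nu).
\forall x:\quad M(x) \;\geq\; c_\nu \cdot \nu(x)
```

So $M$'s predictions are at most a constant factor worse than those of any competitor in its own class, not merely better than any computable predictor. This formulation is an editorial addition for context, following the standard presentation of algorithmic probability.]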