
lukeprog comments on AGI and Friendly AI in the dominant AI textbook - Less Wrong Discussion

54 Post author: lukeprog 11 March 2011 04:12AM




Comment author: lukeprog 11 March 2011 08:13:34AM 4 points

One thing I will note is that I'm not sure why they say AGI has its roots in Solomonoff's induction paper. There is such a huge variety of approaches to AGI... what do they all have to do with that paper?

Comment author: Eliezer_Yudkowsky 12 March 2011 06:30:35AM 8 points

AIXI is based on Solomonoff, and to the extent that you regard all other AGIs as approximations to AIXI...

Comment author: lukeprog 12 March 2011 07:10:09AM 1 point

Gotcha.

Comment author: Eliezer_Yudkowsky 12 March 2011 08:21:28AM 9 points

Or to look at it another way, Solomonoff was the first mathematical specification of a system that could, in principle if not in the physical universe, learn anything learnable by a computable system.
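[The specification Eliezer refers to is standardly written as follows; this is the usual textbook formulation of the Solomonoff prior, not something from the thread itself. Here U is a universal prefix Turing machine, p ranges over programs, and ℓ(p) is the length of p in bits.]

```latex
% Solomonoff's universal a priori probability of a binary string x:
% sum over all programs p whose output begins with x.
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

% Prediction is by conditionalization: the probability that the
% next bit is b, given the observed prefix x_{1:n}, is
M(b \mid x_{1:n}) \;=\; \frac{M(x_{1:n} b)}{M(x_{1:n})}
```

Because every computable hypothesis corresponds to some program p contributing weight 2^{-ℓ(p)}, this predictor eventually matches any computable regularity in the data, which is the sense in which it "learns anything learnable by a computable system" — in principle, since M itself is incomputable.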

Comment author: cousin_it 13 March 2011 02:22:34PM 3 points

I think the interesting feature of Solomonoff induction is that it does no worse than any other object from the same class (lower-semicomputable semimeasures), not just objects from a lower class (computable humans). I'm currently trying to solve a related problem where it's easy to devise an agent that beats all humans, but difficult to devise one that's optimal in its own class.
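[The "does no worse than any other object from the same class" property cousin_it mentions is the standard dominance result for the universal semimeasure; the formulation below is the usual one from the algorithmic information theory literature, added here for reference.]

```latex
% For every lower-semicomputable semimeasure \mu there is a
% constant c_\mu > 0, independent of x, such that
M(x) \;\ge\; c_\mu \, \mu(x) \quad \text{for all finite strings } x,

% where one may take c_\mu = 2^{-K(\mu)}, with K(\mu) the length of
% the shortest program computing \mu from below. Dominance bounds
% M's cumulative log-loss regret against \mu by a constant:
\sum_{n} \Big( \ln \mu(x_{n+1} \mid x_{1:n}) - \ln M(x_{n+1} \mid x_{1:n}) \Big) \;\le\; K(\mu) \ln 2
```

So M is within an additive constant of optimal against every predictor in its own class, not merely against computable ones — which is what makes the analogous "optimal in its own class" condition the hard part of the agent problem cousin_it describes.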

Comment author: Vladimir_Nesov 11 March 2011 11:35:08AM 1 point

That paragraph is simply wrong.

Comment author: Manfred 11 March 2011 08:51:56PM 0 points

Well, on the other hand, if AGI is defined as truly universal, Solomonoff seems quite rooty indeed. It's only if you take "general" to mean "general relative to a beaver's brain" that a wide variety of approaches becomes acceptable.

Comment author: timtyler 11 March 2011 09:41:52PM 0 points

I estimate brains spend about 80% of their time doing inductive inference (the rest is evaluation, tree-pruning, etc.). Solomonoff induction is a general theory of inductive inference. Thus the connection.