amcknight comments on MIRI's 2013 Summer Matching Challenge - Less Wrong

Post author: lukeprog 23 July 2013 07:05PM


Comment author: amcknight 28 July 2013 08:54:13PM 4 points

For the goal of eventually creating FAI, the work can be roughly divided into getting the first AGI to (1) have humane values and (2) keep those values. Current attention seems focused on the second category of problems. The work I've seen in the first category includes CEV (nine years old!), Paul Christiano's man-in-a-box indirect normativity, Luke's decision neuroscience, and Daniel Dewey's value learning. I really like these approaches, but they are only very early starting points compared to what will eventually be required.

Do you have any plans to tackle the humane values problem? Do MIRI folk have strong opinions on which direction is most promising? My worry is that if this problem really is as intractable as it seems, then working on problem (2) is not helpful, and our only option might be to prevent AGI from being developed at all, through global regulation and other very difficult means.

Comment author: lukeprog 28 July 2013 10:29:33PM 8 points

Do you have any plans to tackle the humane values problem?

Yes. The next open problem description in Eliezer's writing queue is in this category.