Till_Noonsome comments on The AI design space near the FAI [draft] - Less Wrong

Post author: Dmytry 18 March 2012 10:29AM


You are viewing a single comment's thread.

Comment author: Will_Newsome 19 March 2012 09:11:26AM  10 points

I've been avoiding helping SingInst, and I feel guilty when I do help them, because of a form of this argument. The apparent premature emphasis on CEV, Eliezer's spotty epistemology and ideology (or incredibly deep ploys to make people think he has spotty epistemology and ideology), their firing Steve Rayhawk (who had an extremely low salary) while paying Eliezer about a hundred grand a year, &c., are disturbing enough that I fear supporting them might be the sort of thing that is obviously stupid in retrospect. They have good intentions, but sometimes good intentions aren't enough; sometimes you have to be sane. Thus I'm refraining from supporting or condemning them until I have a much better assessment of the situation. I have a similarly tentative attitude toward Leverage Research.

Comment author: Till_Noonsome 22 June 2012 05:48:34PM  -2 points

Obviously this emphasis on CEV is absurd, but I don't know what the alternatives are. Do you? And what are they? And can thinking about CEV be used to generate better alternatives?

Comment author: Will_Newsome 22 June 2012 07:00:49PM  2 points

Obviously this emphasis on CEV is absurd, but I don't know what the alternatives are. Do you? And what are they?

I'm a fan of the "just solve decision theory and the rest will follow" approach. Some hybrid of "just solve decision theory" and the philosophical intuitions behind CFAI might also do it, and might be less likely to spark AGI by accident. And there's technically the oracle AI option, but I don't like that one.

And can thinking about CEV be used to generate better alternatives?

Maybe, but it seems to me that the opportunity cost is high. CEV wastes people's time on "extrapolation algorithms", on thinking about whether preferences sufficiently converge, and on other problems that generally aren't on the correct meta level. It also makes people think that AGI requires an ethical solution rather than a make-sure-you-solve-everything-ever-because-this-is-your-only-chance-bucko solution to all philosophy ever.