Will_Newsome comments on The Design Space of Minds-In-General - Less Wrong

Post author: Eliezer_Yudkowsky 25 June 2008 06:37AM





Comment author: Mitchell_Porter 07 January 2011 04:25:44AM 1 point

The only really hard part about making a superintelligent paperclip maximizer is the superintelligence. If you think that specifying the goal of a Friendly AI is just as easy, then making a superintelligent FAI will be just as hard. The "fragility of value" thesis argues that Friendliness is significantly harder to specify, because there are many ways to go wrong.

> I suspect that Buddhahood or something close is the only real attractor in mindspace that could be construed as reflectively consistent given the vector humanity seems to be on.

I don't. Simple selfishness is definitely an attractor (in the sense that it's an attitude that many people end up adopting), and it wouldn't take much axiological surgery to make it reflectively consistent.