Will_Newsome comments on The Design Space of Minds-In-General - Less Wrong
The only really hard part about making a superintelligent paperclip maximizer is the superintelligence. If you think that specifying the goal of a Friendly AI is just as easy, then making a superintelligent FAI will be just as hard. The "fragility of value" thesis argues that Friendliness is significantly harder to specify, because there are many ways to go wrong.
I don't. Simple selfishness is definitely an attractor (in the sense that it's an attitude that many people end up adopting), and it wouldn't take much axiological surgery to make it reflectively consistent.