Will_Newsome comments on The Design Space of Minds-In-General - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Relatedly, I've formed the tentative intuition that paperclip maximizers are very hard to build; in fact, harder to build than FAI. What you do get out of a kludge superintelligence is probably just the pure universal AI drives (or something like that), or possibly some sort of approximately objective convergent decision-theoretic policy, perhaps dictated by the acausal economy.
The only really hard part of making a superintelligent paperclip maximizer is the superintelligence. If you think that specifying the goal of a Friendly AI is just as easy as specifying paperclip maximization, then making a superintelligent FAI will be just as hard. The "fragility of value" thesis argues that Friendliness is significantly harder to specify, because there are many ways to go wrong.
I don't think it's just as easy. Simple selfishness is definitely an attractor (in the sense that it's an attitude many people end up adopting), and it wouldn't take much axiological surgery to make it reflectively consistent.