Alexandros comments on Less Wrong: Open Thread, September 2010 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
You have merely redefined the goal from 'the benefit of humanity' to 'non-dead-end goal', which may be just as hairy.
Even more hairy. Any primary goal will, I think, eventually end up with a paperclipper. We need more research into how intelligent beings (i.e., humans) actually function. I do not think people, with rare exceptions, actually have primary goals; they have only temporary, contingent goals adopted to meet temporary ends. That is one reason I don't think much of utilitarianism: people's "utilities" are almost always temporary, contingent, and self-limiting.
This is also one reason why I have said that I think provably Friendly AI is impossible. I will be glad to be proven wrong if it turns out to be possible after all.