Pyramid_Head3 comments on Qualitative Strategies of Friendliness - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (54)
@Caledonian: So if an AI wants to wipe out the human race, we should be happy about it? What if it wants to treat us as cattle? Which/whose preferences *should* it follow? (Notice the weasel words?)
When I was a teenager I used to think just like you: a superintelligence would have better goals than ordinary humans, _because_ it is superintelligent. Then I grew up and realized that minds are not blank slates, and you can't just create a "value-free" AI and wait to see what kinds of terminal values it chooses for itself.