Daniel_Burfoot comments on To signal effectively, use a non-human, non-stoppable enforcer - LessWrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Eliezer's point is not that a paperclip maximizer is bad for the universe; it's that a superintelligent AGI paperclip maximizer is bad for the universe. Clippy's views here actually seem closer to Robin's idea that there is no reason beings with radically divergent value systems cannot live happily together and negotiate through trade.