wedrifid comments on To what degree do we have goals? - Less Wrong

45 Post author: Yvain 15 July 2011 11:11PM




Comment author: wedrifid 16 July 2011 04:07:58PM  3 points

but a paperclip maximizer wouldn't get upset or angry if a supernova destroyed some of its factories, for example.

I probably wouldn't either. It sounds like the sort of amortized risk that I would have accounted for when spreading the factories out across thousands of star systems. The anger would come in only when the destruction was caused by another optimising entity, and more specifically by an entity that I have modelled as 'agenty' rather than one that I have intuitively objectified.