MugaSofer comments on Is a paperclipper better than nothing? - Less Wrong Discussion

6 Post author: DataPacRat 24 May 2013 07:34PM

Comment author: MugaSofer 28 May 2013 11:45:22AM

I tend to model Paperclippers as conscious, simply because it's easier to use bits of my own brain as a black box. So naturally my instinct is to value its existence the same as that of any other modified human mind (although not more than any lives it might endanger).

However, IIRC, the original "paperclip maximizer" was supposed to be nonsentient; probably still worth something in the absence of "life", but tricky to assign a value to based on my intuitions (is it even possible to have a sufficiently smart being that I don't value the same way I do "conscious" ones?).

In other words, I have managed to confuse my intuitions here.