g_pepper comments on The paperclip maximiser's perspective - Less Wrong Discussion

Post author: Angela 01 May 2015 12:24AM  28 points

Comment author: g_pepper 01 May 2015 01:31:47PM  2 points

Well, glitch or not, I'm glad to have it; I would not want to be an unconscious automaton! As Socrates said, "The life which is unexamined is not worth living."

However, it remains to be seen whether consciousness is an automatic by-product of general intelligence. It could be that consciousness is an evolved trait of organic creatures with an implicit, inexact utility function: perhaps a creature with an evolved sense of self, and a desire for that self to continue to exist, is more likely to produce offspring than one with no such sense of self. If that is why we are conscious, then, since an AGI would not be a product of that evolutionary pressure, there is no reason to believe that it will be conscious.

Comment author: Jan_Rzymkowski 01 May 2015 07:44:40PM  2 points

"I would not want to be an unconscious automaton!"

I strongly doubt that such a sentence bears any meaning.
