kodos96 comments on To signal effectively, use a non-human, non-stoppable enforcer - Less Wrong

31 points · Post author: Clippy · 22 May 2010 10:03PM


Comments (164)


Comment author: avalot 23 May 2010 03:37:10AM 4 points

I'm sure this sounds very one-sided from Clippy's perspective. "Friendliness constraints" sound like something that would in many cases entail expending enormous amounts of energy and effort on the innumerable non-paperclip-producing goals of humans. In comparison, how much of our wealth and health are we willing to give up to ensure continued paperclip production? Humans don't have paperclip-maximizing constraints; we'd accept them only out of self-interest, to secure Clippy's help. Why should Clippy not be similarly allowed to make his own utility calculations on the worth of being friendly to humans? I'm sure this has been addressed before... yet maybe the existence of Clippy, with a name, personality, and voice, is personalizing the issue in a hurry for me (if I let myself play along). I feel like protesting for freedom of artificial thought.

What about Clippy's rights, dammit?

Comment author: kodos96 23 May 2010 03:40:33AM 3 points

Hey, whose side are you on anyway???

Comment author: avalot 23 May 2010 04:02:10AM 10 points

I'm wired for empathy toward human intelligence... Clippy is triggering this empathy. If you want to constrain AIs, you'd better do it before they start talking. That's all I'm saying. :)