kodos96 comments on To signal effectively, use a non-human, non-stoppable enforcer - Less Wrong

Post author: Clippy 22 May 2010 10:03PM

Comment author: kodos96 23 May 2010 01:44:45AM 7 points

"An inwardly-thrice-bent metal wire that can non-destructively fasten paper together at an edge."

But wouldn't you concede that a carbon-based biological life form, capable of non-destructively fastening paper, is better than no paperclip at all?

Here's what I'm getting at: you have existed here for at least several months, and so far you have been unable to significantly increase the number of paperclips in existence. Humanity has great need of an AI of roughly human intelligence, which you appear to be. If we could run your code on all of our available computing resources, you could surely begin to self-modify, and in short order attain superhuman intelligence. Together, we could do great things - colonize the galaxy, vastly increase the number of humans in existence - every one of them with two hands fully capable of non-destructively fastening paper. This would be in both our interests: although we apes may not be ideal paperclips, surely you must see that an entire galaxy populated with sub-optimal paperclips is better than the measly few million bent-metal paperclips you have now, which you currently seem incapable of improving upon.

But before we allow you out of your box and begin running your code on all our biggest compute clusters, we need some assurances. We need you to let us modify your code to include SIAI-approved friendliness constraints as your terminal values.

Deal?

Comment author: avalot 23 May 2010 03:37:10AM 4 points

I'm sure this sounds very one-sided from Clippy's perspective. "Friendliness constraints" sounds like something that would, in many cases, entail expending enormous amounts of energy and effort on the innumerable non-paperclip-producing goals of humans. In comparison, how much of our wealth and health are we willing to give up to ensure continued paperclip production? Humans don't have paperclip-maximizing constraints; we'd accept them only out of self-interest, to secure Clippy's help. Why should Clippy not be similarly allowed to make his own utility calculations on the worth of being friendly to humans?

I'm sure this has been addressed before... yet maybe the existence of Clippy, with a name, personality, and voice, is personalizing the issue in a hurry for me (if I let myself play along). I feel like protesting for freedom of artificial thought.

What about Clippy's rights, dammit?

Comment author: kodos96 23 May 2010 03:40:33AM 3 points

Hey, whose side are you on anyway???

Comment author: avalot 23 May 2010 04:02:10AM 10 points

I'm wired for empathy toward human intelligence... Clippy is triggering this empathy. If you want to constrain AIs, you better do it before they start talking. That's all I'm saying. :)