Virge2 comments on What I Think, If Not Why - Less Wrong

25 Post author: Eliezer_Yudkowsky 11 December 2008 05:41PM


Comment author: Virge2 13 December 2008 02:35:00AM 0 points [-]

Carl: "This point is elementary. A 'friend' who seeks to transform himself into somebody who wants to hurt you, is not your friend."

The switch from "friendly" (having kindly interest and goodwill; not hostile) to a "friend" (one attached to another by affection or esteem) is problematic. To me it radically distorts the meaning of FAI and makes this pithy little sound-bite irrelevant. I don't think it helps Bostrom's position to overload the concept of friendship with the connotations of close friendship.

Exactly how much human bias and irrationality is needed to sustain our human concept of "friend", and is that a level of irrationality that we'd want in a superintelligence? Can the human concept of friendship (involving extreme loyalty and trust in someone we've happened to have known for some time and perhaps profitably exchanged favours with) be applied to the relationship between a computer and a whole species?

I can cope with the concept of a "friendly" AI (kind to humans and non-hostile), but I have difficulty applying the distinct English word "friend" to an AI.

Suggested listening: Tim Minchin - If I Didn't Have You
http://www.youtube.com/watch?v=Gaid72fqzNE