NancyLebovitz comments on Open Thread June 2010, Part 2 - Less Wrong

7 Post author: komponisto 07 June 2010 08:37AM




Comment author: [deleted] 13 June 2010 12:09:25AM 1 point [-]

Maybe this has been discussed before -- if so, please just answer with a link.

Has anyone considered the possibility that the only friendly AI may be one that commits suicide?

There's great diversity in human values, but all of them have in common that they take the limitations of Homo sapiens as given. In particular, each Homo sapiens has physical and mental capacities roughly equal to those of every other Homo sapiens. We have developed diverse systems of rules for interpersonal behavior, but all of them are built for dealing with groups of people like ourselves. (For instance, ideas like reciprocity only make sense if the things we can do to other people are similar to the things they can do to us.)

The decision function of a lone, far more powerful AI would not have this quality. So it would be very different from all human decision functions or principles. Maybe this difference should cause us to call it immoral.

Comment author: NancyLebovitz 13 June 2010 10:34:15AM 1 point [-]

It seems unlikely that an FAI would commit suicide if humans need to be protected from UFAI, or if there are other threats that only an FAI could handle.