NancyLebovitz comments on Fusing AI with Superstition - Less Wrong

-6 Post author: Drahflow 21 April 2010 11:04AM


Comments (75)


Comment author: NancyLebovitz 21 April 2010 02:43:13PM 0 points

I think a big problem with FAI is that valuing humans and/or human values (however defined) may itself count as superstition, even if it seems more attractive to us, and less arbitrary, than a red-wire/thermite setup.

If an FAI must value people, and is programmed so that it cannot pursue any line of thought that would lead it to stop valuing people, is it significantly crippled? Relative to what we want, there's no obvious problem, but would it be so weakened that it would lose out to UFAIs?

Comment author: Nick_Tarleton 21 April 2010 03:07:29PM 2 points

What line of thought could lead an FAI not to value people, such that it would have to avoid that line of thought? And what does it mean for a value system to be superstitious? (See also: Ghosts in the Machine, the metaethics sequence.)