NancyLebovitz comments on Human values differ as much as values can differ - Less Wrong

13 points · Post author: PhilGoetz · 03 May 2010 07:35PM



Comment author: NancyLebovitz · 03 May 2010 08:49:16PM · 2 points

Do we need an FAI that does as good a job as possible of satisfying human desires, or would an FAI which protects humanity against devastating threats be enough?

Even devastating threats can be a little hard to define: if people want to transform themselves into Something Very Different, is that the end of the human race, or just an extension of human history?

Still, most devastating threats (uFAI, asteroid strike) aren't such a hard challenge to identify.

Comment author: PhilGoetz · 03 May 2010 09:26:35PM · 1 point

There's a danger, though, in building something that is superhumanly intelligent and acts on goals that don't include some of our own. You would have to make sure it's not an expectation-maximizing agent.

I think an assumption of the FAI project is that you shouldn't do what Nancy is proposing, because you can't reliably build a superhumanly intelligent, self-improving agent and then cripple it in a way that prevents it from trying to maximize its goals.

Comment author: NancyLebovitz · 03 May 2010 10:07:43PM · 1 point

Is it actually more crippled than a wish-fulfilling FAI? Either sort of AI has to leave resources for people.

However, your point makes me realize that a big-threat-only FAI (one such threat being that it might take too much from people) will need a model of, and respect for, human desires, so that we aren't left on a minimal reservation.