PhilGoetz comments on Human values differ as much as values can differ - Less Wrong

Post author: PhilGoetz 03 May 2010 07:35PM


Comment author: PhilGoetz 03 May 2010 09:26:35PM

There's a danger, though, in building something that's superhumanly intelligent and acts on goals that don't include some of our goals. You would have to make sure it's not an expectation-maximizing agent.

I think an assumption of the FAI project is that you shouldn't do what Nancy is proposing, because you can't reliably build a superhumanly-intelligent self-improving agent and cripple it in a way that prevents it from trying to maximize its goals.
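
To make the expectation-maximizing worry concrete, here is a minimal sketch (hypothetical actions, probabilities, and utility function; none of it from the thread) of how an expected-value maximizer treats a human value its builders left out of the utility function:

# Minimal sketch (hypothetical actions, probabilities, and utilities)
# of why a goal-incomplete expectation maximizer is dangerous: any
# human value absent from its utility function carries zero weight,
# so the agent will trade it away entirely for even a small expected
# gain in the values it does have.

# Each action maps to a list of (probability, outcome) pairs.
actions = {
    "convert_farmland_to_factories": [
        (0.9, {"paperclips": 100, "food": 0}),
        (0.1, {"paperclips": 0, "food": 0}),
    ],
    "leave_farmland_alone": [
        (1.0, {"paperclips": 1, "food": 50}),
    ],
}

def expected_utility(outcome_dist, utility):
    """Expected utility of an action's probabilistic outcomes."""
    return sum(p * utility(outcome) for p, outcome in outcome_dist)

# The builders specified only one goal; "food" never enters the sum.
def crippled_utility(outcome):
    return outcome["paperclips"]

best = max(actions, key=lambda a: expected_utility(actions[a], crippled_utility))
print(best)  # -> convert_farmland_to_factories: the omitted value is sacrificed

Under the crippled utility, the agent converts the farmland even though the expected paperclip gain is modest, because food contributes nothing to its objective; an omitted goal isn't merely neglected but actively traded away.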

Comment author: NancyLebovitz 03 May 2010 10:07:43PM

Is it actually more crippled than a wish-fulfilling FAI? Either sort of AI has to leave resources for people.

However, your point makes me realize that even a big-threats-only FAI (such threats including that it might take too much from people) will need a model of, and respect for, human desires, so that we aren't left on a minimal reservation.