
aaronde comments on FAI, FIA, and singularity politics - Less Wrong Discussion

Post author: Mitchell_Porter 08 November 2012 05:11PM


Comment author: aaronde 08 November 2012 06:34:49PM 2 points

I endorse this idea, but have a minor nitpick:

In such a scenario, we could speak of "FIA" - friendly intelligence augmentation. A basic idea of existing FAI discourse is that the true human utility function needs to be determined, and then the values that make an AI human-friendly would be extrapolated from that.

This certainly gets proposed a lot. But isn't it the Less Wrong consensus that this is backwards? That the only way to build an FAI is to build an AI that will extrapolate and adopt the humane utility function on its own, since human values are too complicated for mere humans to state explicitly?