Vladimir_Nesov comments on Interview with Singularity Institute Research Fellow Luke Muehlhauser - Less Wrong Discussion

12 Post author: MichaelAnissimov 15 September 2011 10:23AM

Comment author: Vladimir_Nesov 15 September 2011 02:21:22PM 8 points [-]

Q: What are some of those open problems in Friendly AI theory?

A: ... When extrapolated, will the values of two different humans converge? Will the values of all humans converge? Would the values of all sentient beings converge? ...

I don't think the question about sentient beings should be considered open.

Comment author: [deleted] 15 September 2011 07:24:22PM *  3 points [-]

If we can't consider it open, why do we consider the question of the values of two different human beings open? Unless we choose to define "human" so as to exclude some Homo sapiens brains that occupy certain spaces of neurodiversity and/or madness?

Comment author: Vladimir_Nesov 15 September 2011 07:39:34PM *  2 points [-]

For the question about human values, there are ways to put it so that it's interesting and non-trivial. For values of unrelated minds, the answer is clear however you interpret the question.

Comment author: [deleted] 15 September 2011 07:42:35PM *  2 points [-]

For the question about human values, there are ways to put it so that it's interesting and non-trivial.

Basically for some indeterminate but not too small fraction of all human brains?

Comment author: Vladimir_Nesov 15 September 2011 07:47:16PM 2 points [-]

Sure, brain damage and similar conditions don't seem interesting in this regard.

Comment author: fubarobfusco 17 September 2011 02:47:49AM 2 points [-]

It isn't clear that autism is brain damage, for one.

Comment author: DuncanS 17 September 2011 12:19:08AM 2 points [-]

E.g., Clippy. Clippy's values wouldn't converge with ours, or with those of an otherwise similar AI that preferred thumbtacks. So the answer in the general case is most certainly 'no'.
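The Clippy thought experiment can be put in miniature code form. This is a hypothetical, illustrative sketch (the agents, states, and utility functions are all made up, not anything from the original discussion): two agents whose utility functions reward entirely different resources will pick different preferred outcomes from the same set of options, no matter how far their preferences are extrapolated.

```python
# Candidate world states: (paperclips, thumbtacks) produced.
states = [(10, 0), (5, 5), (0, 10)]

def clippy_utility(state):
    paperclips, _ = state
    return paperclips  # Clippy values only paperclips

def tack_maximizer_utility(state):
    _, thumbtacks = state
    return thumbtacks  # the thumbtack maximizer values only thumbtacks

# Each agent picks the state that maximizes its own utility.
clippy_choice = max(states, key=clippy_utility)
tack_choice = max(states, key=tack_maximizer_utility)

print(clippy_choice)  # (10, 0)
print(tack_choice)    # (0, 10)
```

Because the two utility functions rank the shared option set in opposite orders, no amount of additional reflection by either agent changes which outcome it prefers; the disagreement is built into the values themselves, which is the point of the 'no' answer for unrelated minds.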