Vladimir_Nesov comments on Interview with Singularity Institute Research Fellow Luke Muehlhauser - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I don't think the question about sentient beings should be considered open.
If we can't consider it open, why do we consider open the question of the values of two different human beings? Unless we choose to define "human" so as to exclude some Homo sapiens brains that occupy certain spaces of neurodiversity and/or madness?
For the question about human values, there are ways to put it so that it's interesting and non-trivial. For values of unrelated minds, the answer is clear however you interpret the question.
Basically, for some indeterminate but not-too-small fraction of all human brains?
Sure, brain damage and similar conditions don't seem interesting in this regard.
It isn't clear that autism is brain damage, for one.
E.g., Clippy. Clippy's values wouldn't converge with ours, or with those of an otherwise similar AI that preferred thumbtacks. So in the general case the answer is most certainly "no".