Jiro comments on Open thread, Jan. 18 - Jan. 24, 2016 - Less Wrong Discussion
Does "value the welfare of others" necessarily mean "consciously value the welfare of others"? Is it wrong to say "I know how to interpret human sounds into language and meaning" just because I can do it? Or do I have to demonstrate that I know how by deconstructing the process to the point that I could write an algorithm (or computer code) to do it?
The idea that we cannot value the welfare of computers seems ludicrously naive and misinterpretive. If I can value the welfare of a stranger, then clearly the thing whose welfare I value is not defined too tightly. If a computer (running the right program) displays some of the features that signal to me that a human is something I should value, why couldn't I value the computer? We watch animated shows and value and have empathy for all sorts of animated entities. In all sorts of stories we have empathy for robots and other mechanical things. The idea that we cannot value the welfare of a computer flies in the face of the evidence that we can empathize with all sorts of non-human things, fictional and real. In real life, we value and have human-like empathy for animals, fish, and in many cases even plants.
I think the interpretations and assumptions behind this paper are bad ones. Certainly, they are not brought out explicitly and argued for.
I actually read the paper.
((iii) and (iv) apply to the general case of "people behave as if they are playing with humans", but not to the specific case of "people behave as if they are playing with humans because of empathy with the computer".)