
mwengler comments on Open thread, Jan. 18 - Jan. 24, 2016 - Less Wrong Discussion

Post author: MrMind, 18 January 2016 09:42AM




Comment author: Lumifer, 21 January 2016 09:49:16PM, 8 points

Oh, dear. A paper in PNAS says that the usual psychological experiments that show people have a tendency to cooperate at the cost of not maximizing their own welfare are flawed. People are not cooperative; people are stupid and cooperate just because they can't figure out how the game works X-D

Abstract:

Economic experiments are often used to study whether humans altruistically value the welfare of others. A canonical result from public-good games is that humans vary in how they value the welfare of others, dividing into fair-minded conditional cooperators, who match the cooperation of others, and selfish noncooperators. However, an alternative explanation for the data is that individuals vary in their understanding of how to maximize income, with misunderstanding leading to the appearance of cooperation. We show that (i) individuals divide into the same behavioral types when playing with computers, whose welfare they cannot be concerned with; (ii) behavior across games with computers and humans is correlated and can be explained by variation in understanding of how to maximize income; (iii) misunderstanding correlates with higher levels of cooperation; and (iv) standard control questions do not guarantee understanding. These results cast doubt on certain experimental methods and demonstrate that a common assumption in behavioral economics experiments, that choices reveal motivations, will not necessarily hold.
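For readers unfamiliar with the setup: in a standard linear public-goods game, each token contributed to the common pot returns less than one token to the contributor, so a player who fully understands the game maximizes personal income by contributing nothing, whatever the others do. Here is a minimal sketch of that payoff structure, assuming illustrative parameters (four players, a 20-token endowment, a marginal per-capita return of 0.4), not necessarily those used in the paper:

```python
def payoff(my_contribution, others_contributions, endowment=20, mpcr=0.4):
    """Income = tokens I keep + my per-capita share of the multiplied pot."""
    pot = my_contribution + sum(others_contributions)
    return (endowment - my_contribution) + mpcr * pot

# Because each contributed token returns only mpcr < 1 to the contributor,
# contributing zero maximizes individual income regardless of others' play:
others = [10, 10, 10]
print(payoff(0, others))   # 32.0 <- income-maximizing choice
print(payoff(20, others))  # 20.0 <- full cooperation earns less
```

Note the dilemma: if all four players contributed their full endowment, each would earn 0.4 × 80 = 32 rather than 20, so full cooperation is group-optimal even though zero contribution is individually income-maximizing. This is the "how to maximize income" that the paper argues some subjects fail to understand.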

Comment author: mwengler, 24 January 2016 02:42:12PM, 1 point

Does "value the welfare of others" necessarily mean "consciously value the welfare of others"? Is it wrong to say "I know how to interpret human sounds into language and meaning" just because I can do it? Or do I have to demonstrate I know how because I can deconstruct the process to the point that I can write an algorithm (or computer code) to do it?

The idea that we cannot value the welfare of computers seems ludicrously naive and misinterpretative. If I can value the welfare of a stranger, then clearly the thing for which I value welfare is not defined too tightly. If a computer (running the right program) displays some of the features that signal to me that a human is something I should value, why couldn't I value the computer? We watch animated shows and value and have empathy for all sorts of animated entities. In all sorts of stories we have empathy for robots or other mechanical things. The idea that we cannot value the welfare of a computer flies in the face of the evidence that we can empathize with all sorts of non-human things, fictional and real. In real life, we value and have human-like empathy for animals, fish, and even plants in many cases.

I think the interpretations or assumptions behind this paper are bad ones. Certainly, they are not brought out explicitly and argued for.

Comment author: Jiro, 25 January 2016 11:13:35PM, 0 points

I actually read the paper.

"It might also be argued that people playing with computers cannot help behaving as if they were playing with humans. However, this interpretation would: (i) be inconsistent with other studies showing that people discriminate behaviorally, neurologically, and physiologically between humans and computers when playing simpler games (19, 56–58), (ii) not explain why behavior significantly correlated with understanding (Fig. 2B and Tables S3 and S4)..."

(Points (iii) and (iv) apply to the general case of "people behave as if they are playing with humans," but not to the specific case of "people behave as if they are playing with humans because of empathy with the computer.")

Comment author: Lumifer, 25 January 2016 05:01:55PM, 0 points

"The idea that we cannot value the welfare of computers seems ludicrously naive and misinterpretative."

I am always up for being ludicrous :-P

So, what is the welfare of a computer? Does it involve a well-regulated power supply? Good ventilation in the case? Is overclocking an example of inhumane treatment?

Or maybe you want to talk about software and the awful assault on its dignity by an invasive debugger...