PhilGoetz comments on Human errors, human values - Less Wrong

Post author: PhilGoetz 09 April 2011 02:50AM




Comment author: PhilGoetz 08 April 2011 03:43:05PM 4 points

Saying it's encoding human irrationality amounts to taking the viewpoint that the human reaction to the fat-man trolley problem is an error of reasoning: the particular machinery humans use to decide what to do gives an answer that does not maximize human values.

It makes some sense to say that a human is a holistic entity that can't be divided into "values" and "algorithms". I argued that point in "Only humans can have human values". But taking that view, together with the view that you should cling to human values, means you can't be a transhumanist. You can't talk about improving humans, because implementing human values comes down to being human. Any "improvement" to human reasoning means giving different answers, which means giving "wrong" answers. And you can't have a site like LessWrong that talks about how to avoid errors humans systematically make, because, as in the trolley-problem case, you must claim they aren't errors but value judgements.

Comment author: RichardKennaway 08 April 2011 04:15:15PM 2 points

You can still have a LessWrong, because one can clearly demonstrate that people avoidably draw wrong conclusions from unreliable screening tests, commit conjunction fallacies, and so on. There are agreed ways of getting at the truth on these things, and people are capable of understanding the errors they are making and of avoiding them.
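The screening-test case is a good example of an error with an agreed-upon right answer. A minimal sketch, with hypothetical numbers (the prevalence, sensitivity, and false-positive rate below are made up for illustration), shows how Bayes' theorem gives the answer people avoidably get wrong:

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test), via Bayes' theorem."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# A test that is 99% sensitive with a 5% false-positive rate,
# applied to a condition with 1-in-1000 prevalence:
p = posterior(prior=0.001, sensitivity=0.99, false_positive_rate=0.05)
print(round(p, 3))  # 0.019: a positive result still means under a 2% chance
```

Most people's intuition says "99% accurate test, so a positive result means you almost certainly have it"; the calculation says otherwise, and there is no serious dispute about which is right.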

Values are a harder problem. Our only source of moral knowledge (assuming there is such a thing; those who believe there is not must dismiss this entire conversation as moonshine) is what people generally do and say. If contradictions are found, where does one go for evidence to resolve them?

Comment author: PhilGoetz 08 April 2011 04:51:06PM 2 points

You're right - there is a class of problems for which we can know what the right answer is, like the Monty Hall problem. (Although I notice that the Sleeping Beauty problem is a math problem on which we were unable to agree on the right answer, because people had linguistic disagreements over how to interpret the problem statement.)

Comment author: DSimon 11 April 2011 05:24:28PM 0 points

> And you can't have a site like LessWrong, that talks about how to avoid errors that humans systematically make - because, like in the trolley problem case, you must claim they aren't errors, they're value judgements.

Even on the view that human values can't be improved, rationality techniques are still useful, because human values conflict with one another and have to be prioritized or weighted.

If I value knowing the truth, and I also, in the holistic sense, "value" committing the conjunction fallacy, then LessWrong is still helpful to me provided I value the first more than the second, or, more precisely, provided the weighting is such that my net value score increases even though the individual conjunction-fallacy value decreases.
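The weighting argument can be made concrete with entirely made-up numbers (both weights and the gains/losses below are hypothetical, chosen only to illustrate the structure of the trade-off):

```python
# Hypothetical weights on two conflicting "values", holistically construed.
weights = {"knowing_truth": 0.9, "conjunction_intuition": 0.1}

def net_value_change(truth_gain, intuition_loss):
    """Net change in weighted value from adopting a debiasing technique."""
    return (weights["knowing_truth"] * truth_gain
            - weights["conjunction_intuition"] * intuition_loss)

# Debiasing sacrifices all of the intuitive satisfaction but gains truth:
print(net_value_change(truth_gain=1.0, intuition_loss=1.0))  # 0.8, a net gain
```

As long as the weighted gain exceeds the weighted loss, debiasing serves the agent's values overall even while frustrating one of them.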