Stefan_Schubert comments on [Link] Algorithm aversion - Less Wrong Discussion

17 points · Post author: Stefan_Schubert · 27 February 2015 07:26PM

Comment author: Stefan_Schubert · 27 February 2015 10:25:06PM · 5 points

Here's an article in Harvard Business Review about algorithm aversion:

It’s not all egotism either. When the choice was between betting on the algorithm and betting on another person, participants were still more likely to avoid the algorithm if they’d seen how it performed and therefore, inevitably, had seen it err.

My emphasis.

The authors also have a forthcoming paper on this issue:

If showing results doesn’t help avoid algorithm aversion, allowing human input might. In a forthcoming paper, the same researchers found that people are significantly more willing to trust and use algorithms if they’re allowed to tweak the output a little bit. If, say, the algorithm predicted a student would perform in the top 10% of their MBA class, participants would have the chance to revise that prediction up or down by a few points. This made them more likely to bet on the algorithm, and less likely to lose confidence after seeing how it performed.

Of course, in many cases adding human input made the final forecast worse. We pride ourselves on our ability to learn, but the one thing we just can’t seem to grasp is that it’s typically best to just trust that the algorithm knows better.

Presumably another bias is at play here: the IKEA effect, whereby people place a higher value on products they have partially created themselves.