
gwillen comments on [Link] Algorithm aversion - Less Wrong Discussion

17 Post author: Stefan_Schubert 27 February 2015 07:26PM




Comment author: gwillen 27 February 2015 11:51:32PM 9 points

I would loosely model my own aversion to trusting algorithms as follows: Both human and algorithmic forecasters will have blind spots, not all of them overlapping. (I.e., there will be cases "obvious" to each which the other gets wrong.) We've been dealing with human blind spots for the entire history of civilization, and we're accustomed to them. Algorithmic blind spots, on the other hand, are terrifying: when an algorithm makes a decision that harms you, and the decision is -- to any human -- obviously stupid, the resulting situation would best be described as 'Kafkaesque'.

I suppose there's another psychological factor at work here, too: When an algorithm makes an "obviously wrong" decision, we feel helpless. By contrast, when a human does it, there's someone to be angry at. That doesn't make us any less helpless, but it makes us FEEL less so. (This makes me think of http://lesswrong.com/lw/jad/attempted_telekinesis/ .)

Comment author: torekp 28 February 2015 03:43:09PM 4 points

But wait! If many of the algorithm's mistakes are obvious to any human with some common sense, then there is probably a process of algorithm plus sanity check by a human which will outperform even the algorithm alone. In which case, you yourself can volunteer for the sanity-check role, and this should make you even more eager to use the algorithm.

(Yes, I'm vaguely aware of some research which shows that "sanity check by a human" often makes things worse. But let's just suppose.)

Comment author: gwillen 28 February 2015 08:29:33PM 0 points

I do think an algorithm-supported-human approach will probably beat at least an unassisted human, and I think a lot of people would be more comfortable with it than with the algorithm alone. (As long as the final discretion belongs to a human, the worst fears are ameliorated.)