GuySrinivasan comments on SotW: Avoid Motivated Cognition - Less Wrong

20 Post author: Eliezer_Yudkowsky 28 May 2012 03:57PM




Comment author: GuySrinivasan 30 May 2012 07:49:51PM

I don't always have a problem with motivated cognition, but when I do, my brain usually makes it some or all of the way through the following steps:

  • Notice that I'm becoming more comfortable with (feeling safer about) a decision or action I'm about to make or just made.
  • Notice that the physical cause of my comfort is that I recently had a thought consisting of a reason the decision could have a good outcome.
  • Run some brain process I haven't yet pinned down — it feels like (and may actually be) a mix of noticing optimization by proxy, feeling disdain for non-generalizable reasoning algorithms, and wondering about the true strength of the comforting reason.
  • Apply my bullshit detector to my comforting thought.
  • If appropriate, begin to legitimately think about the decision or action.

If this is a procedure that will work more generally, then these exercises may help:

Comment author: GuySrinivasan 30 May 2012 07:51:31PM

Desire Generalizable Decision Processes

Have everyone read Kahneman's rant on picking the action with the best expected outcome in Thinking, Fast and Slow (chapter 31, "Risk Policies", especially the sermon). Encourage people to play enough poker and read enough poker theory to become at least close to neutral-EV in Vegas. Experiencing the concept of pot odds in a real game was my strongest "passing up positive-EV moves is leaving money on the table no matter how loss-averse you are" learning moment.

One exercise might be to (several times) present a story in which someone makes a decision and ask participants to:

  a) make up a near-mode explanation of why the decision feels legitimate,
  b) give a far-mode explanation of why the reasoning that led to the decision would be disastrous if everyone used it, and
  c) figure out the broadest set of people and circumstances for which that reasoning would generalize and still work well.

Concrete example: Holden focuses on charities that are neglected by traditional funding.

  a) This is great because his marginal actions will actually be at the margin, not pushed back by some giant philanthropist who suddenly funds the whole charity.
  b) This is awful because if everyone focused on neglected charities, the most valuable charities would receive less than optimal attention.
  c) As long as very few people are actually applying this heuristic, there's no danger of high-profile valuable charities suddenly losing all of their attention.

So if Holden makes his decisions according to c), he's doing great; but if only according to a), his algorithm is flawed even though its output might happen to be correct in this case.
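The pot-odds point above can be made concrete with a little arithmetic. A minimal sketch (the pot and bet sizes here are invented for illustration): a call is positive-EV whenever your probability of winning exceeds the fraction of the final pot you must contribute, regardless of how loss-averse you feel.

```python
def pot_odds_threshold(pot: float, call: float) -> float:
    """Minimum win probability at which calling beats folding (fold EV = 0)."""
    return call / (pot + call)

def call_ev(p_win: float, pot: float, call: float) -> float:
    """Expected value of calling: win the pot with p_win, lose the call otherwise."""
    return p_win * pot - (1 - p_win) * call

# Hypothetical spot: $100 pot, facing a $25 bet.
# You need more than 25/125 = 20% equity to call profitably.
threshold = pot_odds_threshold(100, 25)   # 0.2
ev = call_ev(0.30, 100, 25)               # positive, so folding here leaves money on the table
```

With 30% equity the call nets roughly $12.50 on average; passing it up because losing $25 *feels* bad is exactly the loss-averse mistake the comment describes.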