Jonathan_Graehl comments on The Substitution Principle - LessWrong

68 Post author: Kaj_Sotala 28 January 2012 04:20AM




Comment author: Vladimir_Nesov 26 January 2012 09:37:43PM  27 points

This post gives what could be called an "epistemic Hansonian explanation". A normal ("instrumental") Hansonian explanation treats humans as agents that possess hidden goals, whose actions follow closely from those goals, and explains their actual actions in terms of these hypothetical goals. People don't respond to easily available information about quality of healthcare, but (hypothetically) do respond to information about how prestigious a hospital is. Which goal does this behavior optimize for? Affiliation with prestigious institutions, apparently. Therefore, humans don't really care about health, they care about prestige instead. As Anna's recent post discusses, the problem with this explanation is that human behavior doesn't closely follow any coherent goals at all, so even if we posit that humans have goals, these goals can't be found by asking "What goals does the behavior optimize?"

Similarly in this instance, when you ask humans a question, you get an answer. Answers to the question "How happy are you with your life these days?" are (hypothetically) best explained by respondents' current mood. Which question are the responses good answers for? The question about the current mood. Therefore, the respondents don't really answer the question about their average happiness, they answer the question about their current mood instead.

The problem with these explanations seems to be the same: we try to fit the behavior (both actions and responses to questions) to the idea of humans as agents whose behavior closely optimizes the goals they really pursue, and whose answers closely answer the questions they really consider. But there seems to be no reality to the coherent goals and beliefs (or the questions one actually considers) that fall out of a descriptive model of humans as agents — even if coherent goals and beliefs do exist somewhere, they are too loosely connected to actions and anticipations to be apparent in them.

Comment author: Jonathan_Graehl 27 January 2012 10:17:47PM 1 point

Well put.

But we can use one of these explanations ("your hidden goal is to optimize status," etc.) to predict as-yet-unobserved behavior in other contexts.

In the case of "did I answer an easier / more accessible question than was really posed?", you may just be inventing a new just-so story every time. So, as with all self-help/productivity tricks, I can use it hoping that it reminds me to act more deliberately when it matters, more than it wastes my energy, but I can't be sure it's more than placebo.