Conforming To Bias. If people know about status quo bias, the planning fallacy, or the endowment effect, they may feel the need to play into them in order to accomplish their goals. Planners will deliberately make optimistic predictions, even when they know better, in order to appear competitive - even though the customer might prefer planners who make more realistic predictions. Product designers may deliberately sacrifice utility for familiarity, even when the unfamiliar product would be easier for a beginner to use than the familiar one. My guess is that textbook design is an example here.
This suggests that building products and services that don't conform to biases is a positive externality, and a proper target for regulation or subsidy. For example, governments could require major construction projects to submit a time and cost estimate when the contract is signed, and give a tax credit to companies that an external auditor assesses to have achieved above-average accuracy in their estimate.
Governments could offer similar subsidies to combat the endowment effect. They could offer a tax credit for selling your house, moving out of an apartment, or changing your job, perhaps after you've owned the house or worked the job for a reasonable length of time. I'm skeptical of these particular interventions - I'm just brainstorming to illustrate the idea.
Teaching Styles. Teachers can't get much done if kids are being disruptive. Schools have varying populations of kids. They therefore "select" for teachers capable of managing the type and amount of disruption at their particular school. A tough teacher might be perfect for a rowdy school, but harmfully harsh in a more placid environment. A teacher who focuses on positive reinforcement but can't dish out discipline might get steamrolled by the students at a rowdy school, but do well in an elite prep academy. If the teaching styles exhibited at the best-performing schools (i.e. the elite prep academies) become exemplars for teacher training, then we risk attributing to teaching style alone what is actually a teaching style x school culture interaction effect.
Self-Editing. I write in ways that are legible to me, because during the writing process I have access only to the feedback provided by the editor in my mind. Its feedback, particularly in the very beginning stages when the general tone, topic, and form of a piece are being established, is crucial in dictating the direction the post will take. Over time, the partially-written piece becomes more powerful than the editor, but in the beginning the editor is more powerful than the writing. This causes me to select for writing approaches that my internal editor is comfortable with. If I had other external standards or influences - perhaps prompts, a particular audience, or a process of seeking external feedback on a few very brief possible approaches to an article - I might be able to achieve more variety in my writing.
Can you elaborate on your first example a bit more? How does a selection incentive come into play in those situations?
The first example explains why salespeople have such a bad reputation. As for the second example, it seems products can't actually be good or bad by themselves; it's the producer who makes the product. Are we assuming a situation where the producer is able to select one-time users from repeat users?
Credit
An enormous amount of credit goes to johnswentworth who made this new post possible.
This is a framing practicum post. We’ll talk about what selection incentives are, how to recognize selection incentives in the wild, and what questions to ask when you find them. Then, we’ll have a challenge to apply the idea.
Today’s challenge: come up with 3 examples of selection incentives which do not resemble any you’ve seen before. They don’t need to be good, they don’t need to be useful, they just need to be novel (to you).
Expected time: ~15-30 minutes at most, including the Bonus Exercise.
What Are Selection Incentives?
Imagine trying to find great/popular posts on LessWrong. We look for things like high karma values, a high number of comments, or a well-known writer. We don’t really look at the contents of the individual posts (yet); we just look at an overall “score” that helps us choose posts. This overall “score” mechanism encourages writers to write posts that could potentially achieve high “scores” (broad-interest posts, thought-provoking posts, controversial posts, etc.), regardless of the writer’s actual purpose.
This is a selection incentive: something is chosen based on some criteria or a known process. For instance, posts are chosen based on an overall “score”: high karma values, high number of comments, etc. What the writers actually want, on the other hand, might be to present ideas or transfer knowledge, not to pursue high karma values. But the readers’ selection criteria are there regardless of what the writers wanted in the first place.
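To make the post example concrete, here is a minimal sketch in Python (the Post record and the scoring rule are both made up for illustration, not how LessWrong actually works) of a score-based selector; notice that the writer's actual goal never enters the selection at all:

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    karma: int
    num_comments: int
    author_goal: str  # what the writer actually wants, e.g. "transfer knowledge"

def score(post: Post) -> int:
    # A hypothetical overall "score": only the visible signals enter the criterion.
    return post.karma + 2 * post.num_comments

def select_top(posts: list[Post], k: int) -> list[Post]:
    # Selection runs purely on the score; author_goal never appears here,
    # so whatever raises the score is what gets selected for.
    return sorted(posts, key=score, reverse=True)[:k]

posts = [
    Post("Careful technical exposition", karma=40, num_comments=5,
         author_goal="transfer knowledge"),
    Post("Hot take on a controversial topic", karma=90, num_comments=60,
         author_goal="start an argument"),
]
print([p.title for p in select_top(posts, k=1)])  # -> ['Hot take on a controversial topic']
```

Whatever correlates with the score is what the selection pressure pushes writers toward, independent of their goals.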
Another example is corporations maximizing profits. The founder of a corporation has something in mind, for instance sending humans to space or producing affordable cars for the mass market, and they may or may not be trying to maximize profit. What happens in the real business world, however, is that businesses live or die based on how well they maximize profits. Businesses are selected on the basis of how well they maximize profits, regardless of what the founders actually want.
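The same structure can be sketched as a toy survival filter (again in Python; the firms, goals, and profit model are all invented for illustration): whatever the founders intend, the surviving population skews toward whatever the filter rewards.

```python
import random

def simulate_selection(n_firms: int = 10_000, seed: int = 0):
    """Toy model: each firm has a founder goal and a 'profit focus' in [0, 1].
    Profit is the focus plus noise, and firms with negative profit exit.
    The survivors' average focus ends up higher than the starting population's,
    regardless of the distribution of founder goals."""
    rng = random.Random(seed)
    goals = ["send humans to space", "affordable cars", "maximize profit"]
    firms = [{"goal": rng.choice(goals), "focus": rng.random()} for _ in range(n_firms)]
    survivors = [f for f in firms if f["focus"] + rng.gauss(0, 0.2) - 0.5 > 0]

    def avg_focus(group):
        return sum(f["focus"] for f in group) / len(group)

    return avg_focus(firms), avg_focus(survivors)

before, after = simulate_selection()
print(f"average profit focus: {before:.2f} before selection, {after:.2f} among survivors")
```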
What To Look For
In general, selection incentives should spring to mind whenever something is chosen based on some criteria or a known process. We want to know what factors cause something to be more or less likely to be chosen. A few ways this can apply:
Useful Questions To Ask
In the post selection example, posts with high karma values or a high number of comments are more likely to be chosen than ones with low karma values or few comments. But the writers imagine a post that presents ideas and thoughts, transfers knowledge, or starts a conversation. High karma values may correlate with great posts, but they may not align with what the writer actually wants, e.g., to transfer knowledge and/or start a conversation between the writer and the reader. What the writer actually wants diverges from what the selection criterion selects for and incentivizes.
In general, if an agent is involved, we want to know how the things the agent wants diverge from what the selection criteria “want”.
The Challenge
Come up with 3 examples of selection incentives which do not resemble any you’ve seen before. They don’t need to be good, they don’t need to be useful, they just need to be novel (to you).
Any answer must include at least 3 to count, and they must be novel to you. That’s the challenge. We’re here to challenge ourselves, not just review examples we already know.
However, they don’t have to be very good answers or even correct answers. Posting wrong things on the internet is scary, but a very fast way to learn, and I will enforce a high bar for kindness in response-comments. I will personally default to upvoting every complete answer, even if parts of it are wrong, and I encourage others to do the same.
Post your answers inside of spoiler tags. (How do I do that?)
Celebrate others’ answers. This is really important, especially for tougher questions. Sharing exercises in public is a scary experience. I don’t want people to leave this having back-chained the experience “If I go outside my comfort zone, people will look down on me”. So be generous with those upvotes. I certainly will be.
If you comment on someone else’s answers, focus on making exciting, novel ideas work — instead of tearing apart worse ideas. Yes, And is encouraged.
I will remove comments which I deem insufficiently kind, even if I believe they are valuable comments. I want people to feel encouraged to try and fail here, and that means enforcing nicer norms than usual.
If you get stuck, look for:
Bonus Exercise: for each of your three examples from the challenge, explain:
This bonus exercise is great blog-post fodder!
Motivation
Using a framing tool is sort of like using a trigger-action pattern: the hard part is to notice a pattern, a place where a particular tool can apply (the “trigger”). Once we notice the pattern, it suggests certain questions or approximations (the “action”). This challenge is meant to train the trigger-step: we look for novel examples to ingrain the abstract trigger pattern (separate from examples/contexts we already know).
The Bonus Exercise is meant to train the action-step: apply whatever questions/approximations the frame suggests, in order to build the reflex of applying them when we notice selection incentives.
Hopefully, this will make it easier to notice when a selection incentive frame can be applied to a new problem you don’t understand in the wild, and to actually use it.