Why is The Elephant in the Brain conflict theory? IME the elephant is terribly mistaken about lots of things. I mean, you could frame it as a conflict between our selves and our genes, but that doesn't seem like a very helpful framing, compared to just correcting the elephant's mistakes.
The Elephant in the Brain argues that, in many domains, things which look like mistakes actually correspond to self-serving strategies in service of hidden motives. This is a conflict-theoretic way of looking at things; in particular, it implies that in these domains you get better predictions by thinking about people's incentives than by thinking about their cognition or access to information.
I'm not actually sure what to call the practice of attributing rational agency to things for the sake of modeling convenience. I've called it "rational choice theory" in my edit. Zach Davis classifies it as a generalized anti-zombie principle, or "algorithmic intent". But this isn't quite right either.
Clearly it's a form of the "intentional stance", but I think mistake theory also uses an intentional stance; just one where agents are allowed to make mistakes. I can certainly see an argument for viewing mistake theory as taking less of an intentional stance, ie, viewing everything more in terms of cause and effect than agency. But I don't think we want "intentional stance" to imply a theory of mind where no one ever makes mistakes.
But the anti-mistake theory is clearly of use in many domains. Evolution is going to produce near-optimal solutions for a lot of problems. The economy is going to produce near-optimal solutions for a lot of problems. Many psychological phenomena are going to be well-predicted by assuming humans are solving specific problems near-optimally. Many biological phenomena are probably predictable in this way as well. Plus, assuming rationality like this can really simplify and clarify the modeling in many cases -- particularly if you're happy with a toy model.
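To make the "toy model" point concrete, here's a minimal sketch in Python (the options and payoff numbers are entirely made up for illustration; nothing here comes from the book or the SSC post): you write down the options and the payoffs the incentives would assign to them, and predict behavior as whatever maximizes payoff, without modeling cognition or information at all.

```python
# Toy "rational choice" model: predict behavior purely from incentives.
# All options and payoff numbers below are invented for illustration.

def predict_choice(options):
    """Predict the option with the highest payoff.

    options: dict mapping option name -> payoff to the agent.
    """
    return max(options, key=options.get)

# Example: a hidden-motives story about education choices. We don't model
# the agent's beliefs or reasoning process at all -- just the payoffs their
# incentives (signaling, status, actual learning) are assumed to give them.
payoffs = {
    "prestigious degree": 10.0,  # mostly signaling value (assumed number)
    "cheap online course": 4.0,  # similar learning, much less signaling
    "no education": 0.0,
}

print(predict_choice(payoffs))  # -> "prestigious degree"
```

The point isn't that the numbers are right; it's that the entire predictive content lives in the incentive structure, which is what makes this kind of model cheap to build and easy to argue with.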
So I think we want a name for it. And "rational choice theory" is not very good, because it sounds like it might be describing the theory of rational agents (ie, decision theory), rather than the practice of modeling a lot of things as rational agents.
Anyway, rational choice theory (or whatever we call it) is, on the face of it, clearly opposed to mistake theory. But the thing is, many mistake theorists also use it. In the SSC post about conflict vs mistake, mistake theorists are supposedly the people interested in mechanism design, economics, and nuanced arguments about the consequences of actions. I see this as a big contradiction in the conflict theory vs mistake theory dichotomy as described there.