‘Justice’ has got to be one of the worst commonsense concepts.
It is used to ‘prove’ the existence of free will, and it underlies a lot of suboptimal political and economic decision-making.
Taboo ‘justice’ and talk about incentive alignment instead.
Huh. That's weird. My working definition of justice is "treating significantly similar things in appropriately similar ways, while also treating significantly different things in appropriately different ways". I find myself regularly falling back to this concept, and getting use from doing so.
Also, I rarely see anyone else doing anything even slightly similar, so I don't think of myself as using a "common tactic" here? Also, I have some formal philosophic training, and my definition comes from a distillation of Aristotle and Plato and Socrates, and so it m...
"Some truths are outside of science's purview" (as exemplified by e.g. Hollywood shows where a scientist is faced with very compelling evidence of supernatural, but claims it would be "unscientific" to take that evidence seriously).
My favorite way to illustrate this: around the end of the 19th century/beginning of the 20th century [time period is from memory, might be a bit off], belief in ghosts was commonplace, with a lot of interest in holding séances and the like, while rare stories of hot rocks falling from the sky were mostly dismissed as tall tales. Then scientists followed the evidence, and now most everybody knows that meteorites are real and "scientific", while ghosts are not, and are "unscientific".
I tend to agree but only to an extent. To our best understanding, cognition is a process of predictive modelling. Prediction is an intrinsic property of the brain that never stops. A misprediction (usually) causes you to attend to the error and update your model.
Suppose we define science as any process that achieves better map-territory convergence (i.e. minimises predictive error). In that case, it is uncontroversial to say that we are all, necessarily, engaged in the scientific process at all times, whether we like it or not. Defining science this way, it...
But some stuff is explicitly outside of science's purview, though not in the way you're talking about here. That is, some things are explicitly about, for example, personal experience, which science has limited tools for working with, since it has to strip away a lot of information to transform experience into something its methods can handle.
Compare how psychology sometimes can't say much of anything about things people actually experience because it doesn't have a way to turn experience into data.
Commonsense ideas rarely come with information about their domain of applicability. Hence the need to explicitly note the law of equal and opposite advice, and to work out which sorts of people and situations need the antidote to a given piece of commonsense advice.
The tendency toward the fallacy of the single cause: explanations feel more true because they are more compact representations, and thus easier to think about and to generate vivid examples from, feeding further confirmation bias. The modal fallacy is related as well.
Local Optimisation Leads to Global Optimisation
The idea that if everyone takes care of themselves and acts in their own parochial best interest, then everyone will be magically better off sounds commonsensical but is fallacious.
Biological evolution, as Dawkins has put it, is an example of a local optimisation process that "can drive a population to extinction while constantly favouring, to the bitter end, those competitive genes destined to be the last to go extinct."
Parochial self-interest is indirectly self-defeating, but I keep getting presented with the same commonsense-sounding and magical argument that it is somehow :waves-hands: a panacea.
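To see concretely why purely local optimisation can fail globally, here is a minimal sketch (my illustration, not from the quoted sources) of greedy hill-climbing: every step is a local improvement, yet where you end up depends entirely on where you start, and you can settle on a minor peak forever.

```python
from math import exp

def height(x: float) -> float:
    """A landscape with a small peak near x = 1.5 and a much taller one near x = 5."""
    return exp(-(x - 1.5) ** 2) + 3 * exp(-(x - 5.0) ** 2)

def hill_climb(x: float, step: float = 0.1) -> float:
    """Greedily move to whichever neighbour is higher; stop at any local maximum."""
    while True:
        best = max((x - step, x, x + step), key=height)
        if best == x:
            return x
        x = best

print(hill_climb(0.0))  # settles near 1.5: a minor peak, a third of the global height
print(hill_climb(4.0))  # settles near 5.0: the global peak
```

Each individual move is "in the climber's best interest", but a climber starting on the left ends up permanently worse off than it could have been.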
Probably the most persistent and problem-causing is the commonsense habit of treating things as having essences.
By this I mean that people tend to think of things like people, animals, organizations, places, etc. as having properties or characteristics, as if each had a little file inside it with various bits of metadata set that define its behavior. But this is definitely not how the world works! A property like this is at best a useful fiction or abstraction that allows simplified reasoning about complex systems, but it also leads to lots of mistakes, because most people don't seem to realize these are aggregations over complex interactions in the world rather than real things in themselves.
You might say this is mistaking the map for the territory, but I think framing it this way makes it a little clearer just what is going on. People act as if there were essential properties of things, think that's how the world actually is, and as a result make mistakes when that model fails to correspond to what actually happens.
To me some of worst commonsense ideas come from the amateur psychology school: "gaslighting", "blaming the victim", "raised by narcissists", "sealioning" and so on. They just teach you to stop thinking and take sides.
Logical fallacies, like "false equivalence" or "slippery slope", are in practice mostly used to dismiss arguments prematurely.
The idea of "necessary vs contingent" (or "essential vs accidental", "innate vs constructed" etc) is mostly used as an attack tool, and I think even professional usage is more often confusing than not.
I think it would be useful if you edited the answer to add a line or two explaining each of those or at least giving links (for example, Schelling fences on slippery slopes), cause these seem non-obvious to me.
"weird" vs "normal". This concept seems to bundle together "good" and "usual", or at least "bad" with "unusual".
I dislike when people talk about someone "deserving" something when what they mean is that they would like it to happen. The word seems to imply that the person may make a demand on reality (or on reality's subcategory of other people!).
I suggest we talk about what people earn and what we wish for them instead of using this word that imbues them with a sense of "having a right to" things they did not earn.
That is, of course, not saying we should stop wishing others or ourselves well.
I'm just saying we should be honest that that is what we are doing, and use "deserving" only in the rare cases when we want to imbue our wish or opinion with a cosmic sense of purpose. Once the word is no longer commonly used in cases where an expression of goodwill (or "badwill", for that matter) would do, it may actually stand out in those rare cases and have the proper impact.
Of course we are not going to make that change, and we wouldn't even if this reached enough people, because people LOVE to mythically "deserve" things, and it makes them a lot easier to sell to, or to infuriate. We may, however, privately notice when someone tries to sell us something we "deserve"; address our thanks to the person wishing us well, rather than some nebulous "Universe", when someone tells us we "deserve" something good; and consider our actual moral shortcomings when the idea creeps up that we might "deserve" something bad.
Copenhagen Interpretation of Ethics.
The Copenhagen Interpretation of Ethics says that when you observe or interact with a problem in any way, you can be blamed for it. At the very least, you are to blame for not doing more. Even if you don’t make the problem worse, even if you make it slightly better, the ethical burden of the problem falls on you as soon as you observe it. In particular, if you interact with a problem and benefit from it, you are a complete monster. I don’t subscribe to this school of thought, but it seems pretty popular.
Other common problems with blame.
The concept of blame is not totally useless. It can play several important roles:
However, I find that blame discussions often serve none of these purposes. In such a case, you should probably question whether the discussion is useful, and try to guide it to more useful territory.
Self-Fulfilling Prophecy
The idea is that if you think about something, then it is more likely to happen because of some magical and mysterious "emergent" feedback loopiness and complex chaotic dynamics and other buzzwords.
This idea has some merit (e.g. if your thoughts motivate you to take effective actions). I don't deny the power of ideas. Ideas can move mountains. Still, I've come across many people who overstate and misapply the concept of a self-fulfilling prophecy.
I was discussing existential risks with someone, and they confidently said, "The solution to existential risks is not to think about existential risks because thinking about them will make them more likely to happen." This is the equivalent of saying, "Don't take any precautions ever because by doing so, you make the bad thing more likely to happen."
I don't want to do without the concept. I agree that it is abused, but I would simply contest whether those cases are actually self-fulfilling. So maybe what I would point to, as the bad concept, would be the idea that most beliefs are self-fulfilling. However, in my experience, this is not common enough that I would label it "common sense". Although it certainly seems to be something like a human mental predisposition (perhaps due to confirmation bias, or perhaps due to a confusion of cause and effect, since by design, most beliefs are true).
Metric words (e.g. "good", "better", "worse") with an implicit privileged metric. A common implicit metric is "social praise/blame", but people can also have different metrics in mind and argue past each other because "good" is pointing at different metrics. Usually, just making the metric explicit or asking "better in what way?" clears it up.
The same goes for goal words ("should", "ought", "must", "need", etc.) with an implicit privileged goal. Again, you can ask, "You say you 'have to do it', but for what purpose?"
Btw, I'm not against vague goals/metrics that are hard to make legible, just the implicit, privileged ones.
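As a toy illustration of making the metric explicit (my sketch, with made-up data): the same comparison returns opposite answers depending on which metric you pass in, which is exactly why an implicit, privileged "better" lets people argue past each other.

```python
# Hypothetical options and metrics, purely for illustration.
jobs = [
    {"name": "startup", "salary": 70_000, "free_hours_per_week": 30},
    {"name": "bigco", "salary": 120_000, "free_hours_per_week": 10},
]

def best(options, metric):
    """'Best' is undefined until best-at-what is specified, so the metric is an explicit parameter."""
    return max(options, key=metric)["name"]

print(best(jobs, metric=lambda job: job["salary"]))               # bigco
print(best(jobs, metric=lambda job: job["free_hours_per_week"]))  # startup
```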
"I like hummus" is a fact, not an opinion
Qualitative vs. quantitative differences / of kind vs. of degree
It's not like the distinction is meaningless (in some sense liquid water certainly isn't "just ice but warmer") but most of the times in my life I recall having encountered it, it was abused or misapplied in one way or another:
(1) It seems to be very often (usually?) used to downplay some difference between A and B by saying "this is just a difference of degree, not a difference of kind" without explaining why one believes so or pointing out an example of an alternative state of the world in which a difference between A and B would be qualitative.
(2) It is often ignored that differences of degree can become differences of kind after crossing some threshold (probably most, if not all, cases of the latter are like that). At some point ice stops just getting warmer and melts, a rocket stops just accelerating and reaches escape velocity, and a neutron star stops just increasing in mass and collapses into a black hole.
(3) Whenever this distinction is introduced, it should be clear what is meant by a qualitative and a quantitative difference in the particular domain of discourse, either by reference to some qualitativeness/quantitativeness criteria or by having sets of examples of both. For example, when comparing intelligence between species, one could make a case that we see a quantitative difference between ravens and New Caledonian crows but a qualitative one between birds and hookworms. We may not have a single, robust metric for comparing average intelligence between taxa, but in this case we know it when we see it, and we can reasonably expect others to see the distinction as well. (TL;DR: it shouldn't be based on gut feeling when gut feelings about what is being discussed are likely to differ between individuals.)
Related to facts vs opinions, but not quite the same, is the objective/subjective dichotomy, popular in conventional philosophy. I find it extremely misleading, contributing a lot to asking the wrong questions and accepting ridiculous non sequiturs.
For instance, it's commonly assumed that things are either subjective or objective, and moreover that if something is subjective, it's arbitrary, not real, and not meaningful. To understand why this framework is wrong, one needs a good understanding of the map/territory distinction and correspondence: how completely real things like the wings of an airplane can exist only in the map, and how maps themselves are embedded in the territory.
But this isn't part of philosophy 101, so we get confused arguments about the objectiveness of X, and whole schools of philosophy noticing that, in a sense, everything we interact with is subjective, and concluding that objectivity either doesn't exist or that its existence doesn't matter to us, with all kinds of implications, some of which do not add up to normality.
Radical actions. "Radical" properly means trying to find and eliminate the root causes of social problems, rather than just their symptoms. Many people pursue radical goals through peaceful means (spreading ideas, starting a commune, attending a peaceful protest, or boycotting would be examples), yet "radical act" is commonly used as a synonym for "violent act".
Extremism. Means having views far outside the mainstream attitudes of society. But the word also carries a strong negative connotation, is prohibited by law in some countries, is mentioned alongside "terrorism" as if they were synonyms, and is redefined by Wikipedia as "those policies that violate or erode international human rights norms" (but what if one's society is opposed to human rights?!). Someone disagreeing with society is not necessarily bad or violent, so this is a bad concept.
"Outside of politics". Any choice one makes affects the balance of power somehow, so one cannot truly be outside. In practice the phrase often means that supporting the status quo is allowed, but speaking against it is banned.
Fact vs opinion is taught at my kids' school (age ~7 from memory). The lesson left them with exactly the confusion that you are talking about. Talking to them I got the impression that the teacher didn't really have this sorted out in their head themself.
My way of explaining it to them was that there are matters of fact and matters of opinion but often we don't know the truth about matters of fact. We can have opinions about matters of fact but the difference is that there is a true answer to those kinds of questions even when we don't know. This seemed to help them but I couldn't help but feel that it is kind of an unhelpful dichotomy.
I think maybe teachers (and parents) teach this because it's a social tool (we need a category for "hey don't argue about that, it's fine" for peacekeeping, and another category for "but take this very seriously"). Probably we can't get people to stop using these categories without a good replacement social tool.
The fact vs opinion thing is indeed a common thing. One especially common and tricky version of it is a stand that says "What I'm saying is based on science, so it isn't an opinion, it's a fact" - I know because I used to believe and say that myself... then I read the sequences and Scott Alexander and it blew that notion out of the water for me. Scott especially because he has several good posts on how science is hard, and isn't as simple as "ask question > conduct experiment > acquire truth". After reading those posts I immediately lowered my confidence in a bunch of my beliefs.
I'd go further than "fact vs opinion" and claim that the whole concept of there being one truth out there somewhere is quite harmful, given that the best we can do is have models that heavily rely on personal priors and ways to collect data and adjust said models.
I don't understand why shminux's comment was down to -6 (as of 11/17). I think this comment is good for thinking clearly. How reality is perceived by you depends on how you collect data, update, and interpret events. You can get really different results by changing any of those (biased data collection, only updating on positive results, redefining labels in a motte-and-bailey, etc.).
Going from a "one truth" model to a "multiple frames" model helps when communicating with others. I find it easier to tell someone
from a semantics viewpoint, 'purpose' is a word created by people to describe goals in normal circumstances. From this standpoint, to ask "What's my purpose in life?" doesn't make sense since a goal doesn't make sense applied to a whole life [Note: if you believe in a purposeful god, then yes you can ask that question]
than stating more objectively (ie without the "from a semantics viewpoint").
This is also good for clarifying metrics because different frames are better at different metrics, which should be pointed out (for clear communication's sake).
Instead of denying whole viewpoints, this allows zeroing in on what exactly is being valued and why. For example, Bob is wishing people loving-kindness and imagining them actually being happy as a result of his thoughts. I can say this is bad on a predictive metric, but good on a "Bob's subjective well-being" metric.
The concept of "one truth" can be an infohazard, if people decide that they already know the truth, so there is no reason to learn anymore, and all that is left to do is to convert or destroy those who disagree.
To me this seems like an example of the valley of bad rationality. If possible, the solution is more rationality. If not possible, then random things will happen, not all of them good.
There can be a good-enough distinction between fact and opinion, even if there are deep problems with naive objectivism.
"His own stupid" - the idea that if someone is stupid, he deserves all the bad consequences of being stupid.
Disproof:
Let's assume this is true. Then there would have been at least one voluntary action that turned him from wise to stupid. But why would someone voluntarily choose to be stupid? Only because he wouldn't have known what being stupid means, which would mean he was already stupid. Thus there can be no such first action. (Assumption rejected.)
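To make the regress explicit (this formalization is mine, not the original commenter's), write $S(t)$ for "he is stupid at time $t$" and $C(t)$ for "he makes a voluntary, informed choice of stupidity at time $t$":

$$
\begin{aligned}
&(1)\ \ t_0 = \min\{t : C(t)\} \text{ exists} && \text{(the desert claim requires a first such choice)}\\
&(2)\ \ \forall t < t_0:\ \neg S(t) && \text{(before the first choice, he was wise)}\\
&(3)\ \ C(t_0) \Rightarrow \exists t < t_0:\ S(t) && \text{(an informed choice of stupidity presupposes being stupid already)}
\end{aligned}
$$

Premises (2) and (3) give $S(t) \wedge \neg S(t)$ for some $t < t_0$, so no such $t_0$ exists and the desert claim has nothing to attach to.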
Perhaps the main tool of rationality is simply to use explicit reasoning where others don't, as Jacob Falkovich suggests:
However, I also think a big chunk of the value of rationality-as-it-exists-today is in its corrections to common mistakes of explicit reasoning. (To be clear, I'm not accusing Jacob of ignoring that.) For example, Bayesian probability theory is one explicit theory which helps push a lot of bad explicit reasoning to the side.
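To make that concrete, here is a minimal sketch (the standard medical-test textbook example, with numbers I've chosen for illustration) of how an explicit Bayesian calculation corrects a common informal mistake, base-rate neglect:

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test), computed explicitly via Bayes' rule."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# A 99%-sensitive test with a 1% false-positive rate, for a 1-in-1000 condition:
# intuition says "the test is 99% accurate, so ~99%"; the explicit answer is ~9%.
print(posterior(prior=0.001, sensitivity=0.99, false_positive_rate=0.01))  # ≈ 0.09
```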
The point of this question, however, is not to point to the good ways of reasoning. The point here is, rather, to point at bad concepts which are in widespread use.
For example:
These are intended to be the sort of thing which people use unthinkingly -- IE, not popular beliefs like astrology. While astrology has some pretty bad concepts, it is explicitly bundled as a belief package which people consider believing/disbelieving. Very few people have mental categories like "fact-ist" for someone who believes in a fact/opinion divide. It's therefore useful to make explicit belief-bundles for these things, so that we can realize when we are choosing whether to use that belief-bundle.
My hope is that when you encounter a pretty bad (but common) concept out there in the wild, you'll think to return here and add it to the list as a new answer. (IE, as with all LW Questions, I hope this can become a timeless list, rather than just something people interact with once when it is on the front page.)
Properly dissolving the concept by explaining why people (mis)use it is encouraged, but not required for an entry.
Feel free to critique entries in the comments (and critique my above two proposals in the comments to this post), but as a contributor, don't stress out about responding to critiques (particularly if stressing about this makes you not post suggestions -- the voting should keep the worst ones at the top, so don't worry about submitting concepts that aren't literally the worst!).
Ideally, this would become a useful resource for beginners to come and get de-confused about some of the most common confusions.