I think this is a really great post, clearly explaining an important skill and giving it a nice handle.
One place I think we run into smuggled frames a lot is in conversations, like the ones we have in the comments here on LessWrong. Folks often come to a topic with different perspectives or frames that get smuggled into their takes on things, and the art of dialogue is something like learning to find these smuggled frames and bring them out into the light (cf. the double crux method and Hegelian dialectic).
I also find smuggled frames when just trying to explain technical topics. For example, when trying to share a model of how some part of the world works, it's often necessary to work through the assumptions each person is smuggling in and may not be aware they have. Lots of times this looks like noticing one or both people are confused for reasons "below the surface" of the current conversation and taking a step back to address that.
As someone who cares a lot about what I allow to influence my beliefs and opinions, I found this post fascinating and informative, and I'm glad I read it. Strong upvote.
Cross-posted from my blog.
Status: I think the described concept would be more useful if I applied it more often — or at least noticed when I do apply it. Right now it's thinking out loud + "I wonder what will happen if I post on LW".
Intro
I think that "critical thinking" is way too often taken to mean roughly "checking the facts and logical inferences you are presented with".
In addition to focusing on facts — "what people claim to be true" — it seems useful to focus on a certain cluster of claims that are very often left implicit. These claims are related to the process of how humans make decisions in the wild, which involves mushy categories like "legitimacy" and "importance". I will call them "frames".
Those claims can be categorized roughly as:
- existence claims ("X exists")
- importance claims ("X is important", "X is worth talking about")
- possibility claims ("X is possible", "X can be changed")
- legitimacy claims ("X is legitimate", "X deserves to be taken seriously")
For example, whenever somebody starts talking about "the best way to solve a problem X", you might disagree with their solutions — alright! — but you will update slightly towards "problem X exists" and perhaps "problem X is important". These might be very contentious claims, but they get smuggled in without an argument.
(Unless you have a strong emotional reaction, in which case they might be rejected wholesale. Think of anything from the domain of politics, for instance.)
Example: Merriam-Webster
A lot of people believe a) that it's important for the category of "correct spelling" to exist, and, simultaneously, b) that objective rules of spelling don't exist, i.e. grammar (and language in general) is defined by social consensus.
This has a lot of consequences. For instance, people get genuinely upset that Merriam-Webster lists "irregardless" as a word. They also spend a ton of time arguing about spelling, the logic of it, precedents, and so on.
Of course, there are many other reasons to care about spelling, such as "learning to spell, and learning to argue about spelling well, is a somewhat costly way to join certain tribes". These other reasons are more important than the beliefs I listed above.
However, I want to talk exclusively about the Merriam-Webster situation for a moment. Why? Because noticing smuggled frames is an incredibly effective way to snap yourself out of caring about unimportant things and being in a love-hate relationship with certain authorities. Sometimes it's useful. Sometimes it's very useful.
Going back to the actual spelling now:
Another example: great works
Thought experiments like these are an easy way to get rid of otherwise unsolvable emotional patterns that prevent people from thinking clearly, optimizing the right things, and achieving good lives. Here is a longer example.
You are an aspiring filmmaker.
When you were growing up, your dad was constantly referring to certain movies as "great works" and others as "eh, it's a good movie". You heavily disagreed with him about which movies are great works and which aren't, but adopted the position that the category itself is valid — some movies are great and timeless works, and some aren't.
Naturally, you feel very bad about not creating great works. You don't start any project unless it has a chance of becoming a great work.
Counterintuitively, you would even find some consolation if you successfully argued that no movies are great works — because it would free you up from the obligation to create great works, and you could instead do what you like. Filmmaking-specific nihilism, so to speak.
However, it is very hard to argue that no movies are great works, because the category does not have a good definition. But many other categories don't have good definitions either! You can't abandon all categories that don't have good definitions, because then you wouldn't know how to make any decisions at all.
This is where smuggled frames come into play. Once you clearly see where you got the notion of "great works" from, it is much easier to discard it without having to explicitly refute it, and think: "what alternative categories could exist, and which of them do I like?".
Note that here I am going slightly beyond How An Algorithm Feels From Inside. The point is not to decide "oh, okay, art is everything at once, I will just optimize some mix of these". Don't! Good things come out of choosing a definition and sticking to it, temporarily at least, even if this definition is "fake" in some way — see In praise of fake frameworks. You can always choose another definition later if you want.
One more example: evil people
You might have grown up with a category of "good people" and "bad people". Then somebody told you: don't anthropomorphize humans. Everybody thinks they are the good guy. Evil behavior is caused by [reasons]. Etc, etc.
They have robbed you of a category. And it's good! You have been able to empathize with people more, and you've also had a blast arguing on Twitter about not anthropomorphizing people.
However, eventually you notice that this category was useful, so you consciously (or semi-consciously) bring it back. You say things like "I realize there are no evil people, but fuck you and I'm going to block you anyway". You are able to use "don't be evil" to guide your life. And so on.
"All frames are initially smuggled frames" does not necessarily mean "all frames are bad".
The Economist?
Here is The Economist telling you some facts about things that happened in the summer of 2020 (the two headlines are quoted below):
In order for these headlines to make sense, you must accept a number of assumptions. What are those assumptions?
"Justice John Roberts joins the Supreme Court's liberal wing in some key rulings"
Let's start with the Supreme Court headline.
Under the guise of a simple fact, three other claims have been smuggled into your brain. The first claim is the easiest to debate, so I'll go with it.
"The Supreme Court has a liberal wing".
You are invited to believe that there is a framework for analyzing how the Supreme Court operates (divide the justices into wings), and that it is a useful analysis tool.
What if it's not a useful analysis tool?
This single implicit claim about the existence of "the liberal wing" is shaping the kinds of discussions you are inclined to have about the Supreme Court. Brick by brick, your worldview is being built out of others' worldviews.
"Cutting American police budgets might have perverse effects"
I think the implicit claim in this headline is that by talking about the police budgets, people might change something. It's an existence claim, in a way, though it tells you what's possible and not what exists.
Why would The Economist talk about cutting police budgets if there was no point in talking about them? And indeed, if The Economist believed it was impossible to change anything, the headline would look different, e.g. "What will happen after American police budgets are cut".
Let's go further: imagine if The Economist ran a headline in 2016 that went "Accepting Trump's election as legitimate might have perverse effects". That wouldn't be wrong. But they would never phrase it like that — because it'd mean suggesting that there was a serious choice whether to accept the election as legitimate. Not a "hear my radical idea" suggestion in the opinion column, but an actual sober choice.
And there was a choice! You could just say: no, Trump is not a legitimate president. It was a thing you could do, and 23% of Clinton voters did, on the day after the election. If The Economist was, year after year, suggesting to its readers that they had a choice whether to accept the election as legitimate — and, importantly, also implying (by virtue of being discussed in The Economist!) that this choice mattered — that percentage would probably be higher.
This is how The Economist tells you what is possible and what isn't; what is worth taking seriously and what isn't. You might completely disagree with its opinions, but as long as you assume that The Economist is not being irrelevant, some of its worldview is still rubbing off.
Where to go from here?
This has a chance to grow from a sketch into a viable, life-applicable mechanism after several months of observation. So — see you in a few months.