Well, there's also the fact that "true"[1] ontological updates can look like woo prior to the update. Since you can't reliably tell ahead of time whether your ontology is too sparse for what you're trying to understand, truth-seeking requires you find some way of dealing with frames that are "obviously wrong" without just rejecting them. That's not simply a matter of salvaging truth from such frames.
Separate from that:
I think salvage epistemology is infohazardous to a subset of people, and we should use it less, disclaim it more, and be careful to notice when it's leading people in over their heads.
I for one am totally against this kind of policy. People taking responsibility for one another's epistemic states is saviorist fuckery that makes it hard to talk or think. It wraps people's anxiety around what everyone else is doing and how to convince or compel them to follow rules that keep one from feeling ill at ease.
I like the raising of awareness here. That this is a dynamic that seems worth noticing. I like being more aware of the impact I have on the world around me.
I don't buy the disrespectful frame though. Like people who "aren't hyper-analytical programmers" are on par w...
Well, there's also the fact that "true" ontological updates can look like woo prior to the update.
Do you think they often do, and/or have salient non-controversial examples? My guess prior to thinking about it is that it's rare (but maybe the feeling of woo differs between us).
Past true ontological updates that I guess didn't look like woo:
Past true ontological updates that seem like they could have looked like woo, details welcome:
- 'force fields' like gravity
AFAIK gravity was indeed considered at least woo-ish back in the day, e.g.:
Newton’s theory of gravity (developed in his Principia), for example, seemed to his contemporaries to assume that bodies could act upon one another across empty space, without touching one another, or without any material connection between them. This so-called action-at-a-distance was held to be impossible in the mechanical philosophy. Similarly, in the Opticks he developed the idea that bodies interacted with one another by means of their attractive and repulsive forces—again an idea which was dismissed by mechanical philosophers as non-mechanical and even occult.
One example, maybe: I think the early 20th century behaviorists mistakenly (to my mind) discarded the idea that e.g. mice are usefully modeled as having something like (beliefs, memories, desires, internal states), because they lumped this in with something like "woo." (They applied this also to humans, at least sometimes.)
The article Cognition all the way down argues that a similar transition may be useful in biology, where e.g. embryogenesis may be more rapidly modeled if biologists become willing to discuss the "intent" of a given cellular signal or similar. I found it worth reading. (HT: Adam Scholl, for showing me the article.)
I think "you should one-box on Newcomb's problem" is probably an example. By the time it was as formalized as TDT it was probably not all that woo-y looking, but prior to that I think a lot of people had an intuition along the lines of "yes it would be tempting to one-box but that's woo thinking that has me thinking that."
I would amend the OP by saying that “salvage epistemology” is a bad idea for everyone, including “us” (for any value of “us”). I don’t much like labeling things as “infohazards” (folks around here are much too quick to do that, it seems to me), which obfuscates and imbues with an almost mystical air something that is fairly simple: epistemically, this is a bad idea; it reliably doesn’t work, and it makes our thinking worse.
As I’ve said before: avoiding toxic, sanity-destroying epistemologies and practices is not something you do when you’re insufficiently rational, it is how you stay sufficiently rational.
If you think that some kinds of ideas are probably harmful for some people to hear, is acting on that belief always saviorist fuckery or does there exist a healthy form of it?
It seems to me that, just as one can be mindful of one's words and avoid being intentionally hurtful but also not take responsibility for other people's feelings... one could also be mindful of the kinds of concepts one is spreading and acknowledge that there are likely prerequisites for being able to handle exposure to those concepts well, without taking responsibility for anyone's epistemic state.
This post seems to be implying that "salvage epistemology" is somehow a special mode of doing epistemology, and that one either approaches woo from a frame of uncritically accepting it (clearly bad) or from a frame of salvage epistemology (still possibly bad but not as clearly so).
But what's the distinction between salvage epistemology and just ordinary rationalist epistemology?
When I approach woo concepts to see what I might get out of them, I don't feel like I'm doing anything different from what I do when I'm looking at a scientific field and seeing what I might get out of it.
In either case, it's important to remember that hypotheses point to observations and that hypotheses are burdensome details. If a researcher publishes a paper saying they have a certain experimental result, then that's data towards something being true, but it would be dangerous to take their interpretation of the results - or for that matter the assumption that the experimental results are what they seem - as the literal truth. In the same way, if a practitioner of woo reports a certain result, that is informative of something, but that doesn't mean the hypothesis they are offering to explain it is true.
In...
I think the incentives in science and woo are different.
I agree, though I'm not sure how that observation relates to my comment. But yes, certainly evaluating the incentives and causal history of a claim is an important part of epistemology.
Someone who wants to "salvage" e.g. Buddhism is privileging a source that has a replication rate way below 50%.
I'm not sure if it really makes sense to think in terms of salvaging "Buddhism", or saying that it has a particular replication rate (it seems pretty dubious whether the concept of replication rate is well-defined outside a particular narrow context in the first place). There are various claims associated with Buddhism, some of which are better supported and potentially more valuable than others.
E.g. my experience is that much of meditation seems to work the way some Buddhists say it works, and some of their claims seem to be supported by compatible models and lines of evidence from personal experience, neuroscience, and cognitive science. Other claims, very much less so. Talking about the "replication rate of Buddhism" seems to suggest taking a claim and believing it merely on the basis of Buddhism having made such a claim, but that w...
I would guess that a lot (perhaps most) of the time, "salvage epistemology" is a rationalization to give to rationalists to justify their interest in woo, as opposed to being the actual reason they are interested in the woo. (I still agree that the concept is likely hazardous to some people.)
I agree with this.
There is also a related phenomenon: when a community that otherwise/previously accepted only people who bought into that community’s basic principles (aspiration to rationality, belief in the need for clear reasoning, etc.) adopts “salvage epistemology”, that community now opens itself up to all manner of people who are, shall we say, less committed to those basic principles, or perhaps not committed at all. This is catastrophic for the community’s health, sanity, integrity, ability to accomplish anything, and finally its likelihood of maintaining those very basic principles.
In other words, there is a difference between a community of aspiring rationalists of whom some have decided to investigate various forms of woo (to see what might be salvaged therefrom)—and the same community which has a large contingent of woo-peddlers and woo-consumers, of whom none believe in rationalist principles in the first place, but are only there to (at best) hang out with fellow peddlers and consumers of woo. The former community might be able to maintain some semblance of sanity even while they make their salvage attempts; the latter community is doomed.
It is difficult to distinguish between (1) people who think that there may be some value in some woo, and that it is worth exploring it and separating the wheat from the chaff, and (2) people who believe that the woo is useful, and whose only question is how to make it more palatable for the rationalist community. Both of these groups are together opposed to people who would refuse to touch the woo in principle.
The subtle difference between those two groups is the absence or presence of motivated reasoning. If you are willing to follow evidence wherever it may lead you, you are open to the possibility that the horoscopes may actually correlate with something useful, but you are also open to the possibility that they might not. The "salvage at all costs" group, on the other hand, already knows that the horoscopes are useful, and useful in more or less the traditional way; the only question is how to convince the others, who are immune to the traditional astrological arguments. That seems mostly like a question of using the right lingo, so perhaps if we renamed Pisces to "cognitive ichthys", the usual objections would stop and rationalists would finally accept that Pisces might actually be cognitivel...
"Salvage" seems like a very strong frame/asserting what you're trying to prove. Something that needs salvaging has already failed, and the implication is you're putting a bunch of work into fixing it.
An alternate frame would be "mining", where it's accepted that most of the rock in a mine is worthless but you dig through it in the hopes of finding something small but extremely valuable. It might need polishing or processing, but the value is already in it in a way it isn't for something that needs salvaging.
My guess is that you (Jim) would agree with the implications of "salvage", but I wanted to make them explicit.
Do you have a principled model of what an "epistemic immune system" is and why/whether we should have one?
To elaborate a bit where I'm coming from here: I think the original idea with LessWrong was basically to bypass the usual immune system against reasoning, to expect this to lead to some problems, and to look for principles such as "notice your confusion," "if you have a gut feeling against something, look into it and don't just override it," "expect things to usually add up to normality" that can help us survive losing that immune system. (Advantage of losing it: you can reason!)
My guess is that that (having principles in place of a reflexive or socially mimicked immune system) was and is basically still the right idea. I didn't use to think this, but I do now.
An LW post from 2009 that seems relevant (haven't reread it or its comment thread; may contradict my notions of what the original idea was for all I know): Reason as Memetic Immune Disorder
I don't have a complete or principled model of what an epistemic immune system is or ought to be, in the area of woo, but I have some fragments.
One way of looking at it is that we look at a cluster of ideas, form an outside view of how much value and how much crazymaking there is inside it, and decide whether to engage. Part of the epistemic immune system is tracking the cost side of the corresponding cost/benefit. But this cost/benefit analysis doesn't generalize well between people; there's a big difference between a well-grounded well-studied practitioner looking at their tenth fake framework, and a newcomer who's still talking about how they vaguely intend to read the Sequences.
Much of the value, in diving into a woo area, is in the possibility that knowledge can be extracted and re-cast into a more solid form. But the people who are still doing social-mimicking instead of cost/benefit are not going to be capable of doing that, and shouldn't copy strategies from people who are.
(I am trying not to make this post a vagueblog about On Intention Research, because I only skimmed it and I don't know the people involved well, so I can't be sure it fits the pattern, but the parts of it...
I want to state more explicitly where I’m coming from, about LW and woo.
One might think: “LW is one of few places on the internet that specializes in having only scientific materialist thoughts, without the woo.”
My own take is more like: “LW is one of few places on the internet that specializes in trying to have principled, truth-tracking models and practices about epistemics, and on e.g. trying to track that our maps are not the territory, trying to ask what we’d expect to see differently if particular claims are true/not-true, trying to be a “lens that sees its own flaws.””
Something I don’t want to see on LW, that I think at least sometimes happens under both the headings of “fake frameworks” and the headings of “woo” (and some other places on LW too), is something like “let’s not worry about the ultimate nature of the cosmos, or what really cleaves nature at the joints right now. Let’s say some sentences because saying these sentences seems locally useful.”
I worry about this sort of thing being on LW because, insofar as those sentences make truth-claims about the cosmos, deciding to “take in” those sentences “because they’re useful,” without worrying about the nature of th...
Are people here mostly materialists?
Okay, since you seem interested in knowing why people are materialists. I think it's the history of science up until now. The history of science has basically been a constant build-up of materialism.
We started out at prehistoric animism, where everything happening except that rock you just threw at another rock was driven by an intangible spirit. The rock wasn't, since that was just you throwing it. And then people started figuring out successive compelling narratives about how more complex stuff is just rocks being thrown about. Planets being driven by angels? Nope, just gravitation and inertia. Okay, so comets don't have comet spirits, but surely living things have spirits. Turns out no, molecular biology is a bit tricky, but it seems to still paint a (very small) rocks-being-thrown-about picture that convincingly gets you a living tree or a cat. Human minds looked unique until people started building computers. The same story is repeating again: people point to human activities as proof of the indomitable human spirit, then someone builds an AI to do it. Douglas Hofstadter was still predicting that mastering chess would have to involve encompassing t...
This sounds a bit harsher than I really intend, but... self-described rationalists and post-rationalists could mostly use a solid course in something like Jonathan Baron's Thinking and Deciding, i.e., obtaining a broad and basic grounding in practical epistemology in the first place.
This seems roughly on point, but is missing a crucial aspect - whether or not you're currently a hyper-analytical programmer is actually a state of mind which can change. Thinking you're on one side when actually you've flipped can lead to some bad times, for you and others.
Go to such people not for their epistemology, which is junk, but for whatever useful ground-level observations can be separated from the fog.
This is a bit off-topic with respect to the OP, but I really wish we’d more often say “aspiring rationalist” rather than “rationalist.” (Thanks to Said for doing this here.) The use of “rationalist” in parts of this comment thread and elsewhere grates on me. I expect most uses of either term are just people using the phrase other people use (which I have no real objection to), but it seems to me that when we say “aspiring rationalist” we at least sometimes remember that to form a map that is a better predictor of the territory requires aspiration, effort, forming one’s beliefs via mental motions that’ll give different results in different worlds. While when we say “rationalist”, it sounds like it’s just a subculture.
TBC, I don’t object to people describing other people as “self-described rationalists” or similar, just to using “rationalist” as a term to identify with on purpose, or as the term for what LW’s goal is. I’m worried that if we intentionally describe ourselves as “rationalists,” we’ll aim to be a subculture (“we hang with the rationalists”; “we do things the way this subculture does them”) instead of actually asking the question of how we can form accurate beliefs.
I...
I dutifully tried to say "aspiring rationalist" for a while, but in addition to the syllable count thing just being too much of a pain, it... feels like it's solving the wrong problem.
An argument that persuaded me to stop caring about it as much: communities of guitarists don't call themselves "aspiring guitarists". You're either doing guitaring, or you're not. (in some sense similar for being a scientist or researcher).
Meanwhile, I know at least some people definitely meet any reasonable bar for "actually a goddamn rationalist". If you intentionally reflect on and direct your cognitive patterns in ways that are more likely to find true beliefs and accomplish your goals, and you've gone off into the world and solved some difficult problems that depended on you being able to do that... I think you're just plain a rationalist.
I think I myself am right around the threshold where I think it might reasonably make sense to call myself a rationalist. Reasonable people might disagree. I think 10 years ago I was definitely more like "a subculture supporting character." I think Logan Strohl and Jim Babcock and Eliezer Yudkowsky and Elizabeth van Nostrand and Oliver Habryka each have some clea...
I agree that "aspiring rationalist" captures the desired meaning better than "rationalist", in most cases, but... I think language has some properties, studied and documented by linguists, which define a set of legal moves, and rationalist->aspiring rationalist is an invalid move. That is: everyone using "aspiring rationalist" is an unstable state from which people will spontaneously drop the word aspiring, and people in a mixed linguistic environemnt will consistently adopt the shorter one. Aspiring Rationalist just doesn't fit within the syllable-count budget, and if we want to displace the unmodified term Rationalist, we need a different solution.
FWIW, I would genuinely use the term 'aspiring rationalist' more if it struck me as more technically correct — in my head 'person aspiring to be rational' ≈ 'rationalist'. So I parse aspiring rationalist as 'person aspiring to be a person aspiring to be rational'.
'Aspiring rationalist' makes sense if I equate 'rationalist' with 'rational', but that's exactly the thing I don't want to do.
Maybe we just need a new word here. E.g., -esce is a root meaning "to become" (as in coalesce, acquiesce, evanesce, convalescent, iridescent, effervescent, quiescent). We could coin a new verb "rationalesce" and declare it means "to try to become more rational" or "to pursue rationality", then refer to ourselves as the rationalescents.
Like adolescents, except for becoming rational rather than for becoming adult. :P
Crossposted from Facebook:
The term used in the past for a concept close to this was "Fake frameworks" -- see for instance Val's post in favor of it from 2017: https://www.lesswrong.com/.../in-praise-of-fake-frameworks
Unfortunately I think this proved to be a quite misguided idea in practice, and one that was made more dangerous by the fact that it seems really appealing in principle. As you imply, the people most interested in pursuing these frameworks are often not I think the ones who have the most sober and evenhanded evaluations of such, which can lead...
But there's a bad thing that happens when you have a group that is culturally adjacent to the hyper-analytical programmers, but whose members aren't that sort of person themselves.
I... don't think "hyper-analytical programmers" are a thing. We are all susceptible to the risk of "falling into crazy" to a larger degree than we think we are. There is something in the brain where openness, being necessary for Bayesian updating, also means suspending your critical faculties to consider a hypothetical model seriously, and so one runs a risk that the hypothetical takes hold, ...
for me it mostly felt like I and my group of closest friends were at the center of the world, with the last hope for the future depending on our ability to hold to principle. there was a lot of prophecy of varying qualities, and a lot of importance placed suddenly on people we barely knew, then rapidly withdrawn when those people weren't up for being as crazy as we were.
Hmm, I agree that the thing you describe is a problem, and I agree with some of your diagnosis, but I think your diagnosis focuses too much on a divide between different Kinds Of People, without naming the Kinds Of People explicitly but kind of sounding (especially in the comments) like a lot of what you're talking about is a difference in how much Rationality Skill people have, which I think is not the right distinction? Like I think I am neither a hyper-analytic programmer (certainly not a programmer) nor any kind of particularly Advanced rationalist, an...
This sort of “salvage epistemology” can also turn “hyper-analytical programmers”[1] into crazy people. This can happen even with pure ideas, but it’s especially egregious when you apply this “salvage epistemology” approach to, say, taking drugs (which, when I put it like that, sounds completely insane, and yet is apparently rather common among “rationalists”…).
To the extent that such people even exist; actually, I mostly agree with shminux that they basically do not. ↩︎
I know several rationalists who have taken psychedelics, and the description does seem to match them reasonably well.
There's a selection bias in that the people who use psychedelics the least responsibly and go the most crazy are also the ones most likely to be noticed. Whereas the people who are appropriately cautious - caution which commonly also involves not talking about drug use in public - and avoid any adverse effects go unnoticed, even if they form a substantial majority.
it would have to be the case that use of such drugs is, among rationalists, extremely common.
It is, in fact, extremely common, including among sane stable people who don't talk about it.
Both.
(One additional clarification: the common version of psychedelic use is infrequent, low dose and with a trusted sober friend present. Among people I know to use psychedelics often, as in >10x/year, the outcomes are dismal.)
It's been my experience that many more people think they're immune to woo than actually are. I'm not sure the risk is worth the reward.
I dunno... IME, when someone not capable of steelmanning him reads e.g. David Icke, what usually happens is that they just think he must be crazy or something and dismiss him out of hand, not that they start believing in literal reptilian humanoids.
That post increased the probability that I will overcome my laziness and finally write a post about the concept of "bright doublethink" in English. Thanks.
I'm not sure what the right decision process for whether to do salvage epistemology on any given subject should look like. Also, if you see or suspect that this woo-ish thingy X "is a mix of figurative stuff and dumb stuff" but decide that it's not worth salvaging because of the infohazard, how do you communicate it? "There's a 10% probability that the ancient master Changacthulhuthustra discovered something instrumentally useful about the human condition, but reading his philosophy may mess you up, so you shouldn't." How many novices do you expect to follow a general c...
A funny thing happens with woo sometimes, in the rationality community. There's a frame that says: this is a mix of figurative stuff and dumb stuff, let's try to figure out what the figurative stuff is pointing at and salvage it. Let's call this "salvage epistemology". Unambiguous examples include the rationality community's engagement with religions, with cold-reading professions like psychics, with bodywork, and with chaos magic. Ambiguous examples include intensive meditation, Circling, and many uses of psychedelics.
The salvage epistemology frame got locally popular in parts of the rationality community for a while. And this is a basically fine thing to do, in a context where you have hyper-analytical programmers who are not at risk of buying into the crazy, but who do need a lens that will weaken their perceptual filters around social dynamics, body language, and muscle tension.
But there's a bad thing that happens when you have a group that is culturally adjacent to the hyper-analytical programmers, but whose members aren't that sort of person themselves. They can't, or shouldn't, take for granted that they're not at risk of falling into the crazy. For them, salvage epistemology disarms an important piece of their immune system.
I think salvage epistemology is infohazardous to a subset of people, and we should use it less, disclaim it more, and be careful to notice when it's leading people in over their heads.