Yvain's blog: Epistemic learned helplessness.

A friend in business recently complained about his hiring pool, saying that he couldn't find people with the basic skill of believing arguments. That is, if you have a valid argument for something, then you should accept the conclusion. Even if the conclusion is unpopular, or inconvenient, or you don't like it. He told me a good portion of the point of CfAR was to either find or create people who would believe something after it had been proven to them.

And I nodded my head, because it sounded reasonable enough, and it wasn't until a few hours later that I thought about it again and went "Wait, no, that would be the worst idea ever."

I don't think I'm overselling myself too much to expect that I could argue circles around the average high school dropout. Like I mean that on almost any topic, given almost any position, I could totally demolish her and make her look like an idiot. Reduce her to some form of "Look, everything you say fits together and I can't explain why you're wrong, I just know you are!" Or, more plausibly, "Shut up I don't want to talk about this!"

My summary / take: believing arguments if you're below a certain level of rationality makes you susceptible to bad epistemic luck. Status quo bias inoculates you against this. This seems closely related to Reason as memetic immune disorder.

Mistakenly believing that you're above that level of rationality is, then, really bad.

Yep. That is the primary reason I haven't yet signed up for cryonics: I can't tell if I want to because I actually think it's a good idea or because I just believe things that people I like say.

You and I and lots of people will sign up for cryonics when it's the normal thing to do in our social group.

Bad epistemic luck, and also adverse selection. Once you become known as persuadable, you tend to attract some unsavory characters.

"I can't explain why you are wrong. But, honestly, I can't fully explain why you are right either. Until I can, I'm going to go with the trusty old combination of tradition and gut instinct."

When I understand a valid argument, I do believe it. I don't have a choice in the matter. I'm not sure if anyone has a choice in the matter, although some people may be better at ignoring the inner voice that requires you to act in accordance with your beliefs.

But I think there is a big difference between understanding an argument and merely being unable to counter it. Accepting arguments that you are merely unable to counter opens you up to all kinds of manipulation.

I don't think I'm overselling myself to think that I could wield questions accurately enough to knock down false arguments given by someone who is significantly smarter than me, so long as they aren't allowed to fabricate evidence.

The key is that one must recognize when one has not yet understood something. I think most of the posts on Less Wrong are devoted to honing this very skill, even if it is not explicitly mentioned. I'd go so far as to say that a rationalist's self-evaluation is essentially equivalent to the extent to which she trusts herself to accurately assign certainty to statements.

I suppose I'm saying the same things as Yvain in different words... here is my phrasing, which I think is better: be careful about accepting beliefs. But please do take your beliefs seriously.

So, I just spent a few hours today rereading Moldbug, and am amused by the relevance of these paragraphs (from here):

[W]e might say that whether they teach the truth or not, churches are just a bad idea, period. People should think for themselves. They should not have thoughts broadcast into a little antenna in the back of the skull. Therefore, the state should separate itself from the church, just because a good state should separate itself from all evil things.

But fortunately or unfortunately, there is no kingdom of philosophers. Most people do not think for themselves, should not think for themselves, and cannot be expected to think for themselves. They do exactly what they should be doing, and trust others to work out the large philosophical truths of the world for them. This trust may be well-placed or not, but surely this mechanism of delegation is an essential aspect of human society - at least with the humans we have now.

To derail slightly, this is a great point that I repeatedly try to emphasize to fervent anti-theists.

For most people, organized religion is the closest brush with philosophy that they will ever have. Currently, there is no other social institution that makes people ponder what it means to be good, or to seek truths beyond the practical matters of everyday life. College education comes the closest, but not everyone gets that privilege.

I've got to say, though - with the disclaimer that this is the only post I've read from here - Moldbug's post is entertaining, but it has a pronounced pseudo-intellectual feel about it. Pretty writing, but he's tying a lot of separate concepts together into one big picture from very sparse premises.

Although, I suppose that is the state of most political commentary. It's all hollow all the way through, until you get into specifics and data.

"'Cos when their eloquence escapes you
Their logic ties you up and rapes you"

It's a problem. Besides the fact that most arguments are rationalizations and not motivations for a position, you can see why people aren't convinced by arguments. There are smarty pants on both sides of every issue with arguments most can't refute. Argumentation just isn't a reliable means for them to come to the truth.

One thing I've noticed is that in nearly any controversy where the adherents of the heterodox position show signs of basic mental stability, the arguments for heterodoxy are stronger than the arguments for orthodoxy. In the rare cases where this is not true - for instance, creationism - I can take this as a strong indicator of orthodoxy (at least against the particular heresy in question). But how am I to take the general pattern? Should I be more skeptical of orthodoxy in general - of the likelihood of truth coming to orthodoxy given the standards of public truth evaluation which now prevail - or more trusting of it, given that heterodox positions appear to be stronger regardless of context, and are thus likely stronger for reasons other than their truth?

My rough conclusion is that I should either look for me-specific biases in this matter, or else regard orthodoxy with greater skepticism in matters I have not yet investigated and with greater trust in matters I have investigated than the strength of arguments would otherwise lead me to believe. But I haven't thought this through fully.

satt:

One thing I've noticed is that in nearly any controversy where the adherents of the heterodox position show signs of basic mental stability, the arguments for heterodoxy are stronger than the arguments for orthodoxy.

Is this true? A priori I could see this go either way, and my personal experiences don't add much evidence here (I can't recall many controversies where I've probed deeply enough to conclusively weigh orthodoxy against heterodoxy).

A weaker statement I'm more sure of: the arguments for orthodoxy one hears from most people are weaker than the arguments for heterodoxy, because most people have little reason to actually look up whatever factual basis the orthodoxy might have. (I've seen someone make this point somewhere on Yvain's blog but can't remember who.) For example, I haven't bothered to look up the precise scientific arguments that'd justify my belief in plate tectonics, but a shrinking earth theorist probably has, if only to launch a counterattack on them. (Corollary: I'd have a good chance of losing an argument with a shrinking earth theorist, even though plate tectonics is, well, true.)

Of course, this means the supporters of orthodoxy are in the worst position to judge when they should be updating their position based on new evidence.

You'll want to read an earlier Yvain blog post, then, explaining "many reasons to expect that arguments for socially dominant beliefs (which correlate highly with truth) to be worse than the arguments for fringe beliefs (which probably correlate highly with falsehood)".

Why would you expect the social dominance of a belief to correlate with truth? Except in the most trivial cases, society has no particular mechanism that selects for true beliefs in preference to false ones.

The Darwinian competition of memes selects strongly for those that provide psychological benefits, or are politically useful, or serve the self-interest of large segments of the population. But truth is only relevant if the opponents of a belief can easily and unambiguously disprove it, which is only possible in rare cases.

Or if the damage caused by acting on a bad model of reality is worse than the signaling benefit of the false belief.

If the arguments for orthodoxy are stronger, then you dismiss contrarians entirely: they are obviously wrong! So do other people, so you don't get to hear about them to begin with. And so do most potential contrarians themselves.

So by selection effect, we mostly see contrarian arguments which at least appear to be better than the orthodoxy.

One thing I've noticed is that in nearly any controversy where the adherents of the heterodox position show signs of basic mental stability, the arguments for heterodoxy are stronger than the arguments for orthodoxy.

See this and this.

I think it's a version of Berkson's paradox: if a position is both heterodox and not supported by any strong arguments, it's very unlikely that people with “basic mental stability” will embrace it in the first place. See also: “The Majority Is Always Wrong” by EY.
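A minimal simulation of that selection effect (hypothetical numbers and an arbitrary threshold, purely for illustration): if argument strength and adherent stability are independent across heterodox positions, but a position only comes to our attention when at least one of the two is high, then the positions we actually observe show a spurious negative correlation between the two.

```python
import random

# Hypothetical Berkson's-paradox sketch: two traits of a heterodox position,
# drawn independently -- how strong its best arguments are, and how mentally
# stable its typical adherents are.
random.seed(0)
positions = [(random.random(), random.random()) for _ in range(100_000)]

# Selection effect: we only notice positions with strong arguments OR notably
# stable adherents (the 0.8 threshold is arbitrary).
observed = [(arg, stab) for arg, stab in positions if arg > 0.8 or stab > 0.8]

def corr(xs, ys):
    """Pearson correlation, computed from scratch to keep the sketch dependency-free."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    var_x = sum((x - mx) ** 2 for x in xs) / n
    var_y = sum((y - my) ** 2 for y in ys) / n
    return cov / (var_x * var_y) ** 0.5

print(corr(*zip(*positions)))  # ~0: independent in the full population
print(corr(*zip(*observed)))   # clearly negative: induced by the selection alone
```

Restricting further to the observed positions whose adherents look stable leaves mostly the well-argued ones, which is the Berkson-style point above.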

The real problem is the phrase "the skill of taking ideas seriously" - by which they do not mean "can deftly sling remarkable quantities of hypotheticals and work out what they would imply", but "being moved to action by the ideas."

The trouble is that there is a name for this in the normal world - it is the defect of "gullibility" or "being easily led".

If CFAR selects for people prone to this defect - it really, really isn't a "skill" - you will be actively selecting for people who will add 2+2+2 ... and get 666.

This may be a problem.

The writer says "If you insist on telling me anyway, I will nod, say that your argument makes complete sense..." despite knowing perfectly well they can't tell if the argument makes sense or not.

If, even knowing specifically in this case that you can't tell whether an argument is correct or not, you feel the need to announce that "your argument makes complete sense", your problem is that you believe things without understanding them. Fixing that bad habit might remove the need to not take arguments seriously.

"Your argument makes complete sense" may be a polite way of saying "I don't see any obvious holes and am not willing to look for the non-obvious ones. Please stop talking to me now."

Why does no one want their brain cut into pieces and preserved chemically, squishy-pieces-in-a-jar style?

They don't? I do. Well, after I'm clinically dead, preferably. And it's an actual proposed alternative technology to cryonics. I don't think people who are betting on the pattern theory of identity being correct when they go for cryonics care much about how much the physical brain substrate gets sliced and diced during preservation, as long as the information about its structure remains reconstructible.

There are a bunch of cryonicists who are adamantly opposed to anything that keeps the biological brain from staying a single intact body, though.

It'd be a normal thing if water didn't crystallize even at very high cooling rates. The world not being so convenient, you can either cut the brain into pieces and store them in fixatives like good old formaldehyde, or freeze it whole, with some parts vitrifying after being damaged by solvents, other parts getting shredded by ice, and everything cracking apart.

Can you name any 'normal' thing at all where people invest a good sum of money for their personal benefit based on highly uncertain projections of continued scientific progress because the expected value seems good?

Because otherwise, I don't think the more convenient world in which water doesn't crystallize would look very different...

TimS:

From the article:

If I know that a false argument sounds just as convincing as a true argument, argument convincingness provides no evidence either way, and I should ignore it and stick with my prior.

That's true, but it's just a restatement of your ignorance of a topic. When one is sufficiently ignorant of a topic, one isn't capable of evaluating the arguments.
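To spell out why (a sketch in my own notation, not the article's): write H for the claim being argued and C for the event "the argument sounds convincing to me"; Bayes' theorem in odds form gives

```latex
% Odds form of Bayes' theorem:
\frac{P(H \mid C)}{P(\neg H \mid C)} \;=\;
  \frac{P(H)}{P(\neg H)} \cdot \frac{P(C \mid H)}{P(C \mid \neg H)}
% If false arguments sound just as convincing as true ones, then
% P(C | H) = P(C | not-H), the likelihood ratio is 1, and the posterior
% odds equal the prior odds: convincingness carries no evidence, so the
% prior stands -- which is exactly the quoted claim.
```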

But Yvain suggests that continued education left him unable to differentiate the quality of arguments. How much of that was because he was reading only nonsense? Reading competing Timecube-quality arguments on a particular topic doesn't add to one's understanding - but so what? That doesn't imply that the ability to recognize good arguments is some strange quality - one can still aspire to become better at it, and reasonably expect to achieve that goal.

In short, unwillingness to take ideas seriously sounds like a terrible idea. Unwillingness to take bad ideas seriously is worthwhile, but skipping over the mechanisms for filtering good ideas from bad leaves me confused about the point of the post.

skipping over the mechanisms for filtering good ideas from bad leaves me confused about the point of the post.

The point of the post is that most people, in most domains, should not trust that they are good at filtering good ideas from bad.

And the point of CFAR is to help people become better at filtering good ideas from bad. It is plainly not to produce people who automatically believe the best verbal argument anyone presents to them, without regard for what filters that argument has been through or what incentives the Skilled Arguer might have to utter the Very Convincing Argument for X instead of the Very Very Convincing Argument for Y. And it is certainly not to have people ignore their instincts; e.g. CFAR constantly recommends Thinking, Fast and Slow by Kahneman, and teaches exercises to extract more information from emotional and physical senses.

The point of the post is that most people, in most domains, should not trust that they are good at filtering good ideas from bad.

Or good courses from bad courses. People should rely more on empirical evidence; that is to say, they need more empiricism relative to rationalism. E.g. here, in the rationalist community (quoting verbatim from the article linked on the About page): "Epistemic rationality is about forming true beliefs, about getting the map in your head to accurately reflect the territory of the world. We can measure epistemic rationality by comparing the rules of logic and probability theory to the way that a person actually updates their beliefs." Just about anyone else, by contrast, would measure that kind of thing by predicting something hidden and then checking for correctness, which is more empiricist than rationalist.
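As a concrete illustration of that "predict something hidden, then check" standard, here is a minimal Brier-score sketch; the forecasts and outcomes below are made-up numbers, purely for illustration:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts and 0/1 outcomes.
    Lower is better; always answering 0.5 scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical example: probabilities assigned to ten yes/no questions before
# the answers were known, scored once the answers came in.
forecasts = [0.9, 0.8, 0.6, 0.95, 0.3, 0.7, 0.55, 0.85, 0.2, 0.75]
outcomes  = [1,   1,   0,   1,    0,   1,   1,    0,    0,   1]

print(brier_score(forecasts, outcomes))  # ~0.16, better than a flat 0.5 (0.25)
```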

Excellent post by Yvain... your excerpt really doesn't do it justice.

testing this symbol: ∃

[This comment is no longer endorsed by its author]

There's a sandbox you can use for that: below the box where you write a new comment, click "Show help", and there's a link taking you there at the bottom right.

gwern:

I correct my assertion; damage may begin in a minute or two (e.g. loss of consciousness), but the more extreme, permanent levels of damage take a bit longer:

In severe cases it is extremely important to act quickly. Brain cells are very sensitive to reduced oxygen levels. Once deprived of oxygen they will begin to die off within five minutes.

-- http://en.wikipedia.org/wiki/Cerebral_hypoxia

The longest human survival without breathing is 80 minutes.

If you're referring to Anna Bågenholm, you're wrong; she survived in an air pocket and did not freeze but was hypothermic. Hypothermic techniques are already used in medicine, with no visible uptick in cryonics support.

How badly would the brain have to be shredded at microscale until cryonicists wouldn't sign up?

I don't think anyone bothers past a day or so post-death, by which point decay processes have set in.

Why would no one cut the brain into pieces and preserve it chemically, squishy-pieces-in-a-jar style?

Why would you do that? We don't know exactly where the line is, so every additional level of degradation and poor preservation increases the chance of failure.

If you mean chemopreservation or plastination, the answer is, I think, historical convenience: fast freezing and then vitrification were developed long before fast versions of either. Existing techniques of chemopreservation or plastination still don't scale to an entire brain the way cooling can; although Darwin's been working on a proposal for plastination+cryonics, and the Brain Preservation Prize should be producing evidence allowing direct comparison, so 'brain in a jar' methods may yet work out. (Cold comfort for anyone who has already died or will soon die, however.)

gwern:

Not what I asked, and I suspect that it's not very likely, because you would still have problems reheating everything and avoiding anoxia - if the brain dies in a minute or two without any oxygen, icing a living person sounds quite dicey.

More importantly: that point is basically 'well if cryonics already worked, then maybe it'd be more popular'. Yes, one would rather hope so! But that would be a very convenient world indeed, and tells us nothing.
