by [anonymous]

Note: this started as a comment reply, but I thought it became interesting (and long) enough to deserve its own post.

Important note: this post is likely to spark some extreme reactions, because of how human brains are built. I'm including warnings, so please read this post carefully and in the order written, or don't read it at all.

I'm going to attempt to describe my subjective experience of progress in rationality.

Important edit: I learned from the responses to this post that there's a group of people with whom this resonates pretty well, and there's also a substantial group with whom it does not resonate at all, to the degree that they don't know if what I'm saying even makes sense or is correlated with rationality in any meaningful way. If you find yourself in the second group, please notice that trying to verify whether I'm doing "real rationality" or not is not a way to resolve your doubts. There is no reason why you would need to feel the same. It's OK to have different experiences. How you experience things is not a test of your rationality. It's also not a test of my rationality. All in all, because of publishing this and reading the comments, I've found out some interesting stuff about how some clusters of people tend to think about this :)

Also, I need to mention that I am not an advanced rationalist, and my rationality background is mostly reading Eliezer's sequences and self-experimentation.

I'm still going to give this a shot, because I think it's going to be a useful reference for a certain level of rationality progress.

I even expect myself to find all that I write here silly and stupid some time later.

But that's the whole point, isn't it?

What I can say about how rationality feels to me now is going to be pretty irrelevant pretty soon.

I also expect a significant part of readers to be outraged by it, one way or the other.

If you think this has no value, maybe try to imagine a rationality-beginner version of you that would find a description such as this useful. If only as a reference that says, yes, there is a difference. No, rationality does not feel like a lot of abstract knowledge that you remember from a book. Yes, it does change you deeply, probably more deeply than you suspect.

In case you want to downvote this, please do me a favour and write a private message to me, suggesting how I could change this so that it stops offending you.

Please stop any feeling of wanting to compare yourself to me or anyone else, or to prove anyone's superiority or inferiority.

If you can't do this, please bookmark this post and return to it some other time.

...

...

Ready?

So, here we go. If you are free from againstness and competitiveness, please be welcome to read on, and feel free to tell me how this resonates, and how different it feels inside your own head and on your own level.


Part 1. Pastures and fences

Let's imagine a vast landscape, full of vibrant greenery of various sorts.

Now, my visualization of object-level rationality is staking out territories, like small parcels of a pasture surrounded by fences.

Inside the fences, I tend to have more neat grass than anything else. It's never perfect, but when I keep working on an area, it's slowly improving. If neglected, weeds will start growing back sooner or later.

Let's also imagine that the ideas and concepts I generalize as I go about my work become seeds of grass, carried by the wind.

What the work feels like, is that I'm running back and forth between object level (my pastures) and meta-level (scattering seeds).

As a result of this running back and forth I'm able to stake out new territories, or improve previous ones, to have better coverage and fewer weeds.

The progress I make in my pastures feeds back into interesting meta-level insights (more seeds carried by the wind), which in turn tend to spread to new areas even when I'm not helping with this process on purpose.

My pastures tend to concentrate in clusters, in areas that I have worked on the most.

When I have lots of action in one area, the large amounts of seeds generated (meta techniques) are more often carried to other places, and at those times I experience the most change happening in other, especially new and unexplored, areas.

However, even if I can reuse some of my meta-ideas (seeds), to have a nice and clear territory I still need to go over there and put in the manual work of clearing it up.

As I'm getting better and more efficient at this, it becomes less work to gain new territories and improve old ones.

But there's always some amount of manual labor involved.


Part 2. Tells of epistemic high ground

Disclaimer: not using this for the Dark Side requires a considerable amount of self-honesty. I'm only posting this because I believe most of you folks reading this are advanced enough not to shoot yourself in the foot by e.g. using this in arguments.

Note: If you feel the slightest urge to flaunt your rationality level, pause and catch it. (You are welcome.) Please do not start any discussion motivated by this.

So, what clues do I tend to notice when my rationality level is going up, relative to other people?

Important note: This is not the same as "how do I notice if I'm mistaken" or "how do I know if I'm on the right path". These are things I notice after the fact and judge to be correlates, but they are not to be used to choose a direction in learning or in sorting out beliefs. I wrote the list below exactly because it is the less-talked-about part, and it's fun to notice things. Somehow everyone seems to have thought this is more than I meant it to be.

Edit: check Viliam's comment for some concrete examples that make this list better.

In a particular field:

  • My language becomes more precise. Where others use one word, I now use two, or six.
  • I see more confusion all around.
  • Polarization in my evaluations increases. E.g. two sensible sounding ideas become one great idea and one stupid idea.
  • I start getting strong impulses that tell me to educate people who I now see are clearly confused, and could be saved from their mistake in one minute if I could tell them what I know... (spoiler alert, this doesn't work).

Rationality level in general:

  • I stop having problems in my life that seem to be common all around, and that I used to have in the past.
  • I forget how it is to have certain problems, and I need to remind myself constantly that what seems easy to me is not easy for everyone.
  • Writings of other people move forward on the path from intimidating to insightful to sensible to confused to pitiful.
  • I start to intuitively discriminate between rationality levels of more people above me.
  • Intuitively judging someone's level requires less and less data, from reading a book to reading ten articles to reading one article.

Important note: although I am aware that my mind automatically estimates rationality levels of various people, I very strongly discourage anyone (including myself) from ever publishing such scores/lists/rankings. If you ever have an urge to do this, especially in public, think twice, and then think again, and then shut up. The same applies to ever telling your estimates to the people in question.

Note: Growth mindset!


Now let's briefly return to the post I started out replying to. Gram_Stone suggested that:

You might say that one possible statement of the problem of human rationality is obtaining a complete understanding of the algorithm implicit in the physical structure of our brains that allows us to generate such new and improved rules.

After everything I've seen so far, my intuition suggests Gram_Stone's idealized method wouldn't work from inside a human brain.

A generalized meta-technique could become one of the many seeds that help me in my work, or even a very important one that would spread very widely, but it still wouldn't magically turn raw territory into perfect grassland.


Part 3. OK or Cancel?

The closest I've come to Gram_Stone's ideal is when I witnessed a whole cycle of improving in a certain area being executed subconsciously.

It was only brought to my full attention when an already polished solution in verbal form popped into my head when I was taking a shower.

It felt like a popup on a computer screen that had "Cancel" and "OK" buttons, and after I chose OK the rest continued automatically.

After this single short moment, I found that a subconscious habit was already in place, one that changed my previous thought patterns, and it proved to work reliably long after.


That's it! I hope I've left you better off for reading this than for not reading it.

Meta-note about my writing agenda: I've developed a few useful (I hope) and unique techniques and ideas for applied rationality, which I don't (yet) know how to share with the community. To get that chunk of data birthed out of me, I need some continued engagement from readers who would give me feedback and generally show interest (this needs to be done slowly and in the right order, so I would have trouble persisting otherwise). So for now I'm writing separate posts noncommittally, to test reactions and (hopefully) gather some folks that could support me in the process of communicating my more developed ideas.

Comments

what clues do I tend to notice when my rationality level is going up, relative to other people?

How do you distinguish your rationality going up from you becoming ossified in your beliefs with the increased conviction that other people are wrong and stupid?

This I expect to be pretty universal, so if you think about how you do it you'll have a good idea. I'm still going to answer though. Briefly, it seems to be a combination of:

  • monitoring effectiveness, increase in ability to solve actual problems and make predictions,

  • intuition, sense of elegance, feeling that the theory "clicks",

  • checking against other people, both by listening to them and penalizing any solution that breaks existing rules/trends.

so if you think about how you do it you'll have a good idea

The problem is, if I go solely by internal perceptions/feelings I can't reliably distinguish the cases where I'm a beacon of light and reason and where I'm an arrogant self-deluding idiot. What I need is real-life testing.

So yes, I agree with the "effectiveness" point, but at least in my case I have doubts about elegance and "clicks". To figure out whether something "clicks" is easy for me, so that's an early threshold an idea/theory/explanation has to pass. And "checking against other people" is not terribly useful because if I'm right then they are doing it wrong so the check will only confirm that we see things differently.

All of this is true. Though in many cases when people "are doing it wrong", you find not that they have opinions opposed to yours, but that they don't have any consistent opinion at all. Which makes it OK to stick with your version until you find something better.

I'd mention that in many cases the best thing to do might be to lay off the topic for some time, work on other problems, improve your overall thinking, check facts known from respectable science, wait for your feelings of attachment to die, and revisit the original topic with a fresh perspective much later.

This can be repeated many times, and I guess it's actually the core of my description of caring about "pastures". This is a kind of meta-technique that seems to be central to not becoming "stuck" in stupidity.

you find that they don't have any consistent opinion at all.

Well, they might not be expressing any consistent opinion, but if they are doing the same thing over and over, then there is a clear implied position (similar to revealed preferences).

the best thing to do might be to lay off the topic for some time

Might be -- unless you need to make a decision in the near future. If the topic is something you can ponder for a long time without needing to come to any conclusions, well, the question that comes to my mind is "Are you sure it's important?" :-/ (yes, I know that's not applicable to science)

intuition, sense of elegance, feeling that the theory "clicks",

That's also frequently happening with people adopting wrong beliefs.

Yes. I'm not claiming to be infallible, but I also suppose that having done a lot of abstract math helps me to know good thinking when I see it. Especially in cases when I can go deep enough and follow the whole thing from "first principles".

Being convinced that a single theory derived from first principles explains everything about a complex domain seems to me like having a hedgehog perspective on the domain.

That means you are unlikely to be very good at predicting within the domain, going by Tetlock's findings.

You are jumping to assumptions about what I do, and how I think.

Well, thanks for the warning anyway. It's good to keep it in mind.

You are jumping to assumptions about what I do, and how I think.

That's part of trying to understand what somebody else thinks. It's good to make assumptions, to prevent a statement from being too vague to be wrong. If you think I made incorrect assumptions, feel free to say so and correct them.

gjm

I'm not SquirrelInHell, but I'll point out what looks to me like one substantial misunderstanding.

SIH said that s/he finds that mathematical training gives a good sense of good versus bad thinking in cases of the "rigorous reasoning from first principles" kind. You responded as if SIH were claiming to be explaining everything about a complex domain using such reasoning, but s/he made no such claim.

Perhaps this analogy will help. Suppose I write something about improving my abilities in graphic design, and am asked how I distinguish genuine improvements from (say) mere increases in arrogance. I list a number of criteria for distinguishing one from the other, and one of them is something like "When the design has a strong short-term commercial focus, like an advertisement or a page on a merchant's website, we can measure actual sales or conversions and see whether I've successfully increased them". And then you object that it's wrong to reduce everything to counting money. So it is, but that doesn't mean that when something is about money and it can be counted you shouldn't do so.

The situation here is just the same. Not everything is about careful logical reasoning from first principles, but when something is, a good sense of when it's correct is helpful. And yes, mathematicians are good at this. (I don't know how much of that is selection and how much is training.)

SIH said that s/he finds that mathematical training gives a good sense of good versus bad thinking in cases of the "rigorous reasoning from first principles" kind.

That's not the only claim. If you look at the post, there is the claim that there's polarization, that being rational makes him see fewer shades of gray: "two sensible sounding ideas become one great idea and one stupid idea". For that to happen he has to call the ideas that are in line with his first-principles-derived theory great and the ideas that are not in line with it stupid.

Let us take an example. An aspiring rationalist finds that status is important for social interactions. He then rethinks all of his thinking about social interactions based on the first principle of status. That person will see the signs that SquirrelInHell described in the OP as the signs of increased rationality about the domain.

Or take one of those libertarians who try to boil down all of politics to being about violence. That produces those signs that SquirrelInHell describes but has nothing to do with real rationality.

gjm

That's not the only claim.

It's the one I thought you were responding to.

For that to happen he has to call those ideas that are in line with his first principle derived theory great and ideas that are not in line with it stupid.

My interpretation was that all those signs are potentially separate; in a given place, some will apply and some won't. The situation you describe applies, at most, to those cases that (a) SquirrelInHell thinks are resolvable from first principles and (b) SquirrelInHell now feels more polarized about.

So let's suppose we're only talking about those cases -- but note, first, that there's no reason to think that they're very common. (If SquirrelInHell finds that most cases are like that, then I agree that may be a bad sign.)

In that case, I agree that it is possible to go wrong by leaping into some oversimple crackpot theory. But so what? SIH listed intuition/elegance/"clicking" as just one of several signs to distinguish real from fake improvements. Any one of them may lead you astray sometimes. (All of them collectively may lead you astray sometimes. Sometimes the world just screws you over.) The question is not "can I think of counterexamples?" -- of course you can -- but "will this heuristic, overall, make you more or less accurate?".

I don't know whether SquirrelInHell has watched to see whether that sense of elegance does actually correlate with correctness (either globally or in some particular cases -- heuristics can work better in some situations than others). For that matter, I don't know whether you have (but SIH's sense of elegance might differ from yours).

Suppose, as per your first example, someone runs across the notion of social status and completely reframes his thinking about social interactions in terms of status. They may, as you say, feel that "everything makes sense now", even though in fact their thinking about social interactions may have become less effective. So let's look at the other signs SquirrelInHell lists. Does our hypothetical would-be-rationalist become more effective in interacting with others after this status epiphany? (If so, I would take that as evidence that "it's all status" is a better theory than whatever s/he was working with before. Wouldn't you?) Does discussion with other people throw up obvious problems with it -- especially obvious problems that the previous theory didn't have? (If so, then again I would take that as evidence in favour; wouldn't you?)

Note that for "it's all status" to be an improvement in rationality it's not necessary for "it's all status" to be correct. Only that it be more correct than whatever our hypothetical would-be-rationalist thought before. (Kepler noticed that planets seem to move in approximately elliptical orbits with certain nice properties. This was wrong -- because of the gravitational effects of other bodies besides the sun and the planet, and because Newtonian physics is wrong -- but it was a lot better than what had come before.)

Thank you for arguing calmly and patiently. I don't trust myself to do this, seeing how I have already failed once to keep my composure in my line of discussion with ChristianKl.

If it helps, I can imagine how it feels.

It looks to me that you tried to answer a question that is really complex and subjective. Of course you don't have a simple equation where you could just put numbers and say "well, if the result x is positive, it means my rationality has increased; if it is zero, it stayed the same; and if it is negative, it has actually decreased". Instead you looked into your mind and noticed a few patterns that frequently appear in situations where you believe you have become more rational. And then you put it on paper.

In return, instead of discussion like "wow, it feels the same to me, I am so surprised, I thought I was the only person who feels like this" or "for me it is completely different; I usually don't notice anything immediately, but later other people start telling me that I have become smarter, or the smart people whom I respect a lot suddenly become interested at meeting me and talking with me"... in other words, instead of repaying your introspection and sharing with other people's introspection and sharing... you got hit by a full-speed Vulcan train. "Your evidence is not 100% reliable, and we are going to assume that you are an idiot unaware of this." You exposed your sensitive belly, and you got kicked there. (It's not a coincidence that the critics have carefully avoided saying anything about how improving rationality feels to them, and only focused on dissecting you. That's how one plays it safe.)

Yeah, it sucks.

EDIT: And then it's funny to scroll the page down and see a comment saying it's "ordinary and uncontroversial".

Wow. You are good at empathy.

It's the one I thought you were responding to.

I'm responding to a mental model of his position based on what he wrote. No single statement is responsible for the full model.

In that case, I agree that it is possible to go wrong by leaping into some oversimple crackpot theory.

I don't think the concern is simply about crackpot theories. It's about trying to explain everything with one theory. You can do that successfully in physics, but in many contexts you can't do everything with one theory.

The question is not "can I think of counterexamples?" -- of course you can -- but "will this heuristic, overall, make you more or less accurate?".

Yes. I think the heuristic of following the Superforecasting principles is better. That means developing more shades of gray and thinking like a fox instead of like a hedgehog.

Does our hypothetical would-be-rationalist become more effective in interacting with others after this status epiphany?

The status-hedgehog might be better at a few interactions at the cost of not being able to have genuine connections with others anymore. He would be more effective if he were foxy and said: status is important, but there are also other important factors.

I don't think that looking for positive real-world effects, or looking at whether discussions with other people throw up obvious problems, are filters that successfully protect against hedgehog thinking.

There's nothing wrong with using first-principles thinking. If, however, you use it to come up with a view and then call all ideas that align with that view great and all that don't align stupid, you are making a mistake. You are using a bad heuristic.

gjm

I don't think the concern is simply about crackpot theories

No, it isn't. I traded precision for vividness. Sorry if that caused confusion.

but in many contexts you can't do everything with one theory

I agree. I see no sign that SIH is any less aware of this, but you're writing as if you're confident s/he is.

I think the heuristic of following the Superforecasting principles is better.

These are heuristics that apply in different situations, and not alternatives to one another. Perhaps we're at cross purposes. The heuristic I have in mind is "in situations where first-principles deductive reasoning seems appropriate, trust my sense of good reasoning that's been trained by doing mathematics", and not anything like "in general, expect to find good deductive first-principles models that explain everything". The latter would be a terrible heuristic; but, again, I see no reason to think that SquirrelInHell is either using or advocating it.

In any case, I think you are making the same mistake as before. SIH says "here are some signs of improving rationality", and you object that you could exhibit those signs while shifting to a position that's suboptimal. But a position can be both suboptimal and better than what came before it.

If [...] you are making a mistake. You are using a bad heuristic.

Sure. And it looks to me as if you are taking SquirrelInHell to be either advocating that heuristic or admitting to using it regularly, and that just doesn't seem to me to be true.

Actually, I'm going to qualify that "sure" a bit. I use first-principles thinking to determine that there is no integer whose square ends in 2 when written in decimal notation. If someone thinks otherwise then I call them wrong (I'm usually too polite to use words like "stupid", but I might think them). There is nothing wrong with this.
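(For anyone who wants that first-principles check spelled out, a minimal sketch: the last digit of n² depends only on the last digit of n, so enumerating the ten cases settles it.)

```python
# Minimal sketch: the last digit of n*n depends only on the last digit of n,
# so checking the ten possible last digits covers every integer.
possible_last_digits = sorted({(d * d) % 10 for d in range(10)})
print(possible_last_digits)  # [0, 1, 4, 5, 6, 9] -- a square never ends in 2
```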

I agree. I see no sign that SIH is any less aware of this, but you're writing as if you're confident s/he is.

SIH writes about himself getting polarized and starting to judge ideas as either great or stupid, and then feeling the urge to preach to people about how wrong they are.

That's usually what happens with someone who focuses on one theory. It's a sign that that's what he's doing. It's not useful to see either of those two factors as signs of increased rationality, because that orients you toward becoming a hedgehog in more domains.

At the moment neither he nor you has provided a justification why the heuristic of seeing those things as a sign of increased rationality is useful. Instead he tries to dodge having a real discussion in various creative ways.

I use first-principles thinking to determine that there is no integer whose square ends in 2 when written in decimal notation.

If you read what I wrote, I consciously added the word "complex" to indicate that I don't object to that usage.

gjm

haven't provided a justification why the heuristic of seeing those things as a sign of increased rationality is useful.

I think that's a fair criticism. But you're making it deep in a subthread that started with an entirely different and much less fair criticism of a different part of what he said.

he tries to dodge having a real discussion in various creative ways.

From the outside, it looks to me as if you're looking more for a status-fight than for a real discussion with SIH. I find it unsurprising if he responds defensively.

(My perception could be dead wrong, of course. The possibility of such errors is, I take it, one reason why the conventions of polite discussion in many societies include measures to make things look less like status-fights.)

I consciously added the word "complex" to indicate that I don't object to that usage.

Being more explicit might have helped. I, and I'm guessing also SquirrelInHell, took you to be saying not "This may work well in some relatively simple and clear-cut domains, but in more complex ones it can cause trouble" (with which I guess SIH would have agreed) but something more like "Obviously you're using this heuristic in complex domains where it doesn't belong; how silly of you".

As for its application to my comment: your insertion of the word "complex" was 8 comments upthread and a major theme of the intervening discussion has been the possibility that you assumed SIH was intending to apply the "feels simple and elegant" heuristic to a wide range of complex human situations when in fact he was intending to apply it only to simpler situations more amenable to first-principles analysis. So I really don't think it is reasonable for you to suggest that when you now say (without any qualification as to complexity) "if you do X you are making a mistake and using a bad heuristic", I should just assume you are only referring to the class of situations in which, so far as I can tell, we all agree that doing X is likely to be a bad idea.

I think that's a fair criticism. But you're making it deep in a subthread that started with an entirely different and much less fair criticism of a different part of what he said.

I agree 100% that I'm not giving a "justification why the heuristic of seeing those things as a sign of increased rationality is useful".

My answer is that I never intended for what I'm writing to be useful in this way.

I think it becomes anti-useful if you use it as a set of pointers about what is more "rational".

I indicated this much in my "notes", as clearly as I could.

From the outside, it looks to me as if you're looking more for a status-fight than for a real discussion with SIH.

If you look at this thread you see that the first post I wrote was explicitly thanking him and far away from a status-fight. With his increased attempts to dodge debate, I used stronger language.

when in fact he was intending to apply it only to simpler situations more amenable to first-principles analysis.

If that's the case then SIH should be criticized for not making it clear in his OP that he is talking about simple situations. For me, treating the OP as being about complex situations and noting that explicitly is completely reasonable.

If he writes a vague post that doesn't make it clear whether he means complex or simple domains, it's very reasonable for me to say: "I'm assuming you mean complex domains, and here's what follows from that..." That brings him into a discussion to clarify what he means if the assumption doesn't apply. I'm bringing the discussion forward by making that assumption. In this case he instead tried to dodge the debate.

gjm

[Before I say anything else, an entirely separate thing: I have consistently been typing your name as "ChristianKI" when in fact it's "ChristianKl" (the two are pixel-for-pixel identical on my screen, but others using different fonts may see the difference -- the wrong one has a capital eye at the end, the right one a lowercase ell). My apologies for getting your name wrong.]

the first post I wrote was [...] far away from status-fight.

OK, I agree. I'd either not read that one, or forgotten it (it was in a separate thread starting from a different top-level comment on SIH's post).

With increased attempts of him to dodge debate, I used more strong language.

Maybe I'm missing something, but this doesn't look like an accurate description. The actual sequence appears to be (times as displayed to me by LW -- I'm not sure what it does about timezones):

  • 18th, 11:43: friendly comment from CK (which gets a friendly response from SIH; no further discussion there; everything else is in a different thread).
  • 18th, 15:52: challenge from Lumifer (increased rationality versus increased ossification).
  • 18th, 22:24: SIH replies to L listing indications (observed better effectiveness, sense-of-elegance, consonance with others' opinions).
  • 19th, 11:41: CK picks out one of SIH's indications (sense-of-elegance) and says "That is also frequently happening with people adopting wrong beliefs".
  • 19th, 12:20: SIH replies (not claiming infallibility; mathematical experience hones one's sense of elegance, especially in first-principles cases).
    • So far, nothing is notably either hostile or evasive, at least to my eyes.
  • 19th, 12:39: CK replies ("seems like having a hedgehog perspective", "you are unlikely to be very good at predicting").
    • This is where I first get the impression of status-fighting. You seem to leap to the assumption that SIH wants to use first-principles reasoning where it doesn't belong, with (so I still think) no actual justification; you express your criticisms personally ("you are unlikely ...").
  • 19th, 13:12: SIH says CK is jumping to conclusions, and thanks you for the warning.
    • Doesn't seem to me either hostile or evasive (though I think it would have been better if he'd said what wrong conclusions he thought you were jumping to).
  • 19th, 13:46: CK defends conclusion-jumping and invites SIH to say what wrong conclusions.
    • FWIW I tend to disagree with the idea that conclusion-jumping is a good way to find out what someone means, but I don't see anything either hostile or evasive here.
  • 19th, 21:31: SIH says CK is making a fully general counterargument and challenges CK to argue against his own position.
    • That's a weird move, and SIH himself has said (see his edit to that comment) that it was a mistake.
    • From this point I think the prospects of useful discussion were very poor because both parties were trying to win rather than to understand and arrive jointly at truth.

19th, 12:39: CK replies ("seems like having a hedgehog perspective"). This is where I first get the impression of status-fighting.

"Seems" is a word to make the statement less strong.

The statement provides two productive ways for the discussion to continue:

a) He says that I misunderstand him, and that he doesn't advocate hedgehog-style thinking.
b) He defends hedgehog-style thinking as good.

Both of those alternatives lead the discussion to a more substantive place that's less vague. Not wanting to take either of those positions but instead criticizing the fact that there's an assumption is evasive.

Maybe I'm missing something

You are certainly missing the direct messages that SIH started.

gjm

Obviously I can't comment on any private messages between the two of you.

Has anyone noticed that ChristianKl is explaining everything with one theory that says it's bad to explain everything with one theory? ;)

ChristianKl is explaining everything with one theory

There you are wrong. I'm not drawing from a single theory in this discussion. It's the lesson from BPS debating that smart people can find good arguments for any position. It's Tetlock's theory of Superforecasting. It's Eliezer's "Policy Debates Should Not Appear One-Sided". It's the general case for scientific pluralism as made by Kuhn and other HPS people.

That's four theories that I'm thinking about actively, and there are likely more if I spent more time digging.

Lastly, this thread isn't "everything". I write a lot. It's a mistake for you to assume that the tiny bit of my writing that you have read is everything.

That's part of trying to understand what somebody else thinks. It's good to make assumptions, to prevent a statement from being too vague to be wrong. If you think I made incorrect assumptions, feel free to say so and correct them.

Now you have made a general point that can be easily argued both ways.

Tell me the strongest counter-arguments you can think of against what you just said.

(I predict that you will agonize over this, produce strawmen, and have a strong impulse to dodge my request. Am I wrong?)

Edit: This was a bad way to handle this on my part, and I regret it. The flip side to ChristianKl's statement is probably obvious to anyone reading this (confirmed with a neutral third party), and I wanted to somehow make ChristianKl see it too. I don't know a good way to do this, but what I wrote here was certainly not it.

Tell me the strongest counter-arguments you can think of against what you just said.

Why do you think that would be helpful?

It seems to me like you don't want to engage in discussion. As a result it doesn't make sense for me to try to find counter-arguments against what I'm saying.

Notice how I made a successful prediction that you will try to dodge my request.

It would be helpful to you, if you want to improve your rationality, as opposed to feeling good.

Edit: I retract this, since it is not a helpful way to advance the discussion.

[This comment is no longer endorsed by its author]

Notice how I made a successful prediction that you will try to dodge my request.

That happens to be false. You predicted something related but different. But predicting that people won't go along with unreasonable requests doesn't require much skill.

It's also interesting that you call it dodging when I ask you to provide reasons for why you think what you recommend is good.

It would be helpful to you, if you want to improve your rationality, as opposed to feeling good.

I don't see how going along with people who are evasive generally increases my rationality. In general the sequences also recommend against playing devil's advocate and don't see it as raising rationality.

My language becomes more precise. Where others use one word, I now use two, or six.

I have recently read "Science and Sanity" (the book written by the "a map is not the territory" guy), and I got the impression that in the author's opinion the most frequent cause of "insanity" (in the LW sense) is using the same word for two or more different things, and then implicitly treating those things as the same thing.

So yeah, using two labels for two different things is an improvement in situations where the differences matter.

I see more confusion all around.

Reading LW for me almost completely ruined reading online political debates. Now I look there and only see long lists of logical fallacies. I briefly think about saying something, then I remember the thing about inferential distances, and then I just sigh and close the browser tab.

Similarly, when my non-rationalist friends share something about "quantum physics" on facebook.

Polarization in my evaluations increases. E.g. two sensible sounding ideas become one great idea and one stupid idea.

I suspect this is because where you previously had "two ideas that sound sensible, but I have no idea about the details", now you have "an idea that sounds sensible, and the details seem correct" and "an idea that may sound sensible, but the details are completely wrong". That is, the 'polarization' is caused by seeing inside the previously black boxes.

I start getting strong impulses that tell me to educate people who I now see are clearly confused, and could be saved from their mistake in one minute if I could tell them what I know... (spoiler alert, this doesn't work).

Same here. Seemingly educatable people never fail to disappoint. When I think they have become smarter, it's usually because they didn't have time to write me a proper reply yet.

Now I feel like it's best to model people as if their thinking and behavior never change. It's probably partially wrong in the long term (think: years or decades), but in the short term it is usually much better than imagining that people can learn. (Not conducive to growth mindset, though. Should I hypocritically assume that I can grow, but most people can't?)

I stop having problems in my life that seem to be common all around, and that I used to have in the past.

I would describe it as having stopped generating some kinds of problems for myself, because I was partially responsible for a lot of chaos. (On the other hand, this is difficult to disentangle from problems that went away for other reasons, e.g. because I became an adult, because I have more money, or because I met the right people.)

I forget how it is to have certain problems, and I need to remind myself constantly that what seems easy to me is not easy for everyone.

I usually don't write diaries. But I found one that I kept writing for a few weeks, many years ago. I looked there, shivered, and quickly destroyed the evidence.

Writings of other people move forward on the path from intimidating to insightful to sensible to confused to pitiful.

I usually don't get to the last step, because I stop reading at the "confused" moment.

I start to intuitively discriminate between rationality levels of more people above me.

My experience is that I start seeing difference between other people (a) having an expertise in one domain, and (b) being good thinkers in general. (Two independent scales. Or perhaps, being a good thinker usually implies expertise in some domain, because smart people usually have some hobby. But being an expert in some domain does not imply good thinking.) Some people who seemed "smart" in the past were reclassified as merely "domain experts".

Intuitively judging someone's level requires less and less data, from reading a book to reading ten articles to reading one article.

Sometimes an article is enough to illustrate the typical mistakes the person makes. On the other hand, I would expect even a smart person to write a crappy article once in a while; the level of rational behavior may fluctuate, especially when the person is tired or annoyed, etc.

Thank you for putting in the work to write this article.

Polarization in my evaluations increases. E.g. two sensible sounding ideas become one great idea and one stupid idea.

That doesn't necessarily have to happen. Policy Debates Should Not Appear One-Sided

Tetlock writes in Superforecasting that superforecasters are able to make more distinctions than people who are less good at forecasting.

I appreciate your feedback :)

That doesn't necessarily have to happen.

Certainly. That's why I treat trends in these things only as curious observations, and take care never to feed them into the inputs of my mapping/decision processes.

[anonymous]

I don't know if this has any bearing on the question, but periodically I have the feeling 'oh come on, I have already moved on and stopped double-counting this piece of evidence and you still keep harping' when dealing with my household. Increasing rationality then becomes finding ways to power through people's repeated cached answers without sticking to a cached one myself, which is...hopeless.

What I want from the essay is more detail of what you're being rational about-- it's possible you're thinking more clearly, and it's possible you're kidding yourself, and I can't form an opinion about which it is without more information.

Thank you for articulating this clearly and without being aggressive. This is remarkable and I've started to pay more attention to this recently, seeing how the discussion culture on LW could use some niceness (which is not to say, losing the openness about pointing out mistakes etc. - just not being a jerk about it).

Unfortunately I can't give you what you want just yet, or at least not in satisfying quality and quantity. Writing about serious rationality stuff is hard, and I'm in the process of experimenting to figure it out. (As a temporary and poor stand-in, I can point to this post which contains some claims about dual process theory based on literature I never knew existed, but it overlaps very closely with what I generated independently - as indicated in my comment).

Apparently, many people reading my post wanted to know the same thing as you. It is slightly strange to me, because my intention was specifically to write only about the "impressions" side of this, which seemed to be neglected and, as far as I have seen, no one has ever pointed it out clearly.

So I'm not sure why everyone has this approach - I would be grateful if you could give me your thoughts on why you think it is useful to know what exactly I thought I was being more rational about. Does it change something about how you interpret my description, if you assume I was wrong about object-level stuff, or if you assume I was right about it?

gjm

I'm not sure why everyone has this approach

If you say "Here is how it feels to me when I get more rational" and it turns out that what you're actually describing is how it feels to you when you get less rational but fool yourself, other people may want to use the signs you describe as warnings that they may be going astray.

If you say "Here is how it feels to me when I get more rational" and it turns out that you're right, other people may want to use the signs you describe as indications that they're doing something right.

(Of course neither of those will work well for people whose minds are too different from yours.)

Ugh. Thing is, I strongly discourage people from using those signs (from my main post) as indications that they're doing something right.

I wouldn't want myself to use those signs in that way.

I predict it would harm my attempts to be more rational if I did that.

In all of this I just wanted to share my subjective experience, because um, it's fun to share subjective experiences? Or is it something people who are not me do not typically like?

Edit: also see this reply to NancyLebovitz's comment. It's more clear to me now what has happened here.

As I said, at this point, all I know is that you think you're becoming more rational. I can't begin to tell whether your feelings are about becoming more rational unless I know in more detail how your thinking has changed.

As for me, the most obvious change is that I'm less likely to go "Cool new thing that fits with my preconceptions! It must be true!" and more likely to think "Check on whether it actually makes sense and has sufficient evidence".

I can't begin to tell whether your feelings are about becoming more rational

This is extremely interesting!

It means your own feelings are so different from mine that you can't just go "check, mostly check, check, not check, check" (see e.g. Viliam's comment).

I didn't anticipate that a large part of the readers would feel so differently from me that they literally can't tell if what I'm saying correlates positively with rationality or not.

This was the source of my confusion, I guess.

Fun!

Learning stuff!

Progressing from specific to abstract is the recommended way to teach. Ignoring this typically leads to "memorizing passwords" (the student memorizes that "X is Y" and can repeat it successfully, but has actually no idea which parts of the territory correspond to X or Y) or "double illusions of transparency" (the teacher tries to say X, the student thinks the teacher said Y, the teacher believes the student understood X, both leave satisfied without noticing that the transfer of knowledge failed).

Also, stories are easier to remember for human brains.

If I tried to rewrite your article... frankly, I would remove most (not all) of the text before and after the bullet points; and then add a few specific examples, preferably from real life but modified to protect anonymity, illustrating the individual points. (Here is an example of the technique that got upvoted despite being unnecessarily long and violating a local taboo. It doesn't have the bullet points because the whole article has only one point.)

Thanks a lot for this. When I'm explaining something hard, I do tend to start with examples, but this time it didn't trigger for me, because it felt like I was sharing some experiences, so there's nothing to "understand" about them.

In retrospect, I was horribly wrong.

From now on, whenever I feel like I want to share an experience, I will start with stories.

This all seems pretty ordinary and uncontroversial to me. It's about what I'd expect when 'doing it right'.

Well, I was trying to err on the safe side, with warnings and all. So maybe I just succeeded.

Still, I would expect someone to say what you just did also in the world in which I was getting a lot of extreme reactions. (There would certainly be some folks who happened to be in a similar place.)

Updating somewhat.

[anonymous]

Are there any other feelings which are generated as your rationality increases? (At least part of what you listed seems like more clearly articulated impulses, heuristics or however you call it. Which is a good thing in itself, I think, but how do you establish its correspondence to 'real life set-ups'?)

Are there any other feelings which are generated as your rationality increases?

I don't think many feelings are caused by rationality stuff directly, but when I e.g. feel satisfied about a good result I have achieved, some of it propagates back to feeling satisfied with my progress in rationality. Is this what you mean? Can you clarify your question?

At least part of what you listed seems like more clearly articulated impulses, heuristics or however you call it.

It's good that it seems like that, because that's precisely the part I was trying to describe.

Which is a good thing in itself, I think, but how do you establish its correspondence to 'real life set-ups'?)

See my response to Lumifer's comment.

[anonymous]

No, I got that; I want to know, rather, how you know whether you feel something you associate with increasing rationality exactly in the subset of cases where you can be said to be acting rationally.

It should be testable, probably, with some kind of mood-tracking device (smart watch? Sorry, don't know a thing about it.)

If I understand your question correctly, I don't know that and I don't think I could know it. What I wrote is about how it generally feels and this is definitely not concrete enough to guide me in rationality.

We've had enough historical cases of people following what feels right and ending up believing stuff willy nilly.

The only answer is, I guess, you just don't depend on this stuff to guide you, ever. I treat it like a curious post facto observation, not a rationality technique of any kind.

BTW, I'm a sort of maniac about recording high-quality data on my moods/feelings/energy etc. in daily life and doing statistics on them. I found I can get a lot of value out of it, but probably not in the sense you mean. I'll write about it some time.
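(For readers curious what "doing statistics on them" could look like, here is a minimal hypothetical sketch; the file name, the column names, and the plain mood-energy correlation are illustrative assumptions, just one way such daily logs might be analyzed.)

```python
import csv
import statistics

# Hypothetical daily log: one row per day with self-rated mood and energy (1-10).
# The file name and column names are assumptions for illustration only.
with open("daily_log.csv", newline="") as f:
    rows = list(csv.DictReader(f))

mood = [float(r["mood"]) for r in rows]
energy = [float(r["energy"]) for r in rows]

# A simple summary: the mean of each series and the correlation between them.
print("mean mood:", statistics.mean(mood))
print("mean energy:", statistics.mean(energy))
print("mood-energy correlation:", statistics.correlation(mood, energy))
```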