The main danger for LW is that it could become rationalist-porn for daydreamers.

I suggest a pattern of counterattack:

  1. Find a nonrational aspect of your nature that is hindering you right now.

  2. Determine privately to fix it.

  3. Set a short deadline. Do the necessary work.

  4. Write it up on LW at the deadline. Whether or not it worked.

(This used to be a comment, here.)
  1. Find a nonrational aspect of your nature that is hindering you right now.

  2. Determine privately to fix it.

  3. Set a short deadline. Do the necessary work.

  4. Write it up on LW at the deadline. Whether or not it worked.

I would add a step 1.5 or 2.5 -- define in advance what criteria you will use to determine "whether or not it worked". Ideally, select criteria that are based on your automatic responses in the relevant context, rather than what you can do about the problem when you're focused and paying attention.

Otherwise, you run an extremely high risk of false-positive "success" reports. (And I speak from experience.)

Yes, totally agreed. Be precise: define a goal that's both reachable and testable.

"Fix the automatic response" is an interesting criterion. Am I right you're saying "it doesn't count if you can only do it with a special effort?" That's an interestingly subtle point. The improvement has to be pervasive in your life. It agrees with my preference for a private intent - you can't always rely on a gun to your head to make you work at peak ability.

But contrariwise, it's true that the way you learn stuff in general is to do it many, many times deliberately, and it gets cached and you can do it automatically. So fixing an improvement in place through automation could take a long while, longer than a one-shot quick commitment.

I wonder what would be the best criterion to capture the ideal of ongoing use, even if not yet automatic?

Am I right you're saying "it doesn't count if you can only do it with a special effort?"

Doing it with effort is fine; needing to make an effort to do (or remember to do) it in the first place is not, or you're going to forget as soon as it drifts out of your sphere of attention/interest.

But contrariwise, it's true that the way you learn stuff in general is to do it many, many times deliberately, and it gets cached and you can do it automatically.

How many times do you have to practice non-belief in Santa Claus, before you stop trying to stay up and watch the chimney?

Some learning works better fast than slow. Case in point, btw: the book "The Four-Day Win" presents a very strong case that it only takes us four days of doing something to get used to it and treat it as habitual, not the 21-30 days propounded by most self-help material.

The catch, of course, is that it has to be something you're not demotivated by, and that doesn't conflict with other goals of yours. But then, if it has one of those problems, 21-30 days won't make it a "real" habit, either. In effect, 21-30 days is just a survivor-bias test: if you manage to make it that long without giving up, you probably didn't have any conflicts or demotivations, so you "succeeded". Woohoo. And if you didn't make it that far, then obviously you didn't do it long enough to turn it into a habit, so it's your fault.

That's why I think automation improvement via extended-duration repetition is a joke. If your automatic response to something is negative, doing it over and over will only change the response if the response had something to do with reality in the first place.

Whereas, for example, if the real reason you don't exercise is because you believe only shallow people do it, then actually exercising won't change that belief, no matter how long you do it!

At best, it will convince you that you're shallow. Or, more likely, you will make a big deal to yourself about how much of a struggle it is and how much you hate it, because you need to justify that you're not shallow for doing it.

Bleah. Anyway, this entire rat's nest of self-confusion is why I emphasize testing automatic response as a success criterion: automatic responses are repeatable, and they are the result you're really after in the first place! (Who wants to have MORE things to personally, consciously, monitor and control in their life?)

Good call! Though I'd say that for those who know what they're doing, that's part of step 1 and/or 2. But actually writing it down might be essential.

A tangential note: I am currently so biased against anecdotal evidence that I read "And I speak from experience" as making this comment less convincing. Surely that's overcorrecting?

I am currently so biased against anecdotal evidence

You mean, as opposed to personal experience? ISTM that all evidence is either personal experience or anecdotal, i.e., something that someone else told you about.

I read "And I speak from experience" as making this comment less convincing. Surely that's overcorrecting?

I don't know. What's your experience with that? ;-)

More seriously: I wonder, does that mean you now believe that you don't run a risk of false-positives if you don't define your criteria in advance? How does that match your own experience?

That is, does someone else having an experience actually make your own experience less valid? (It wouldn't surprise me, btw, I've believed much weirder things with worse side-effects before.)

You mean, as opposed to personal experience?

No, as opposed to empirical data with some of the usual bias-correction measures like proper sampling.

That is, does someone else having an experience actually make your own experience less valid?

No, just internally flagged the comment as "not worthwhile" because it relied upon anecdotes where clearly data would be more appropriate. But a comment with no such mention should not be more valuable, so this seems to be an overcorrection.

No, as opposed to empirical data with some of the usual bias-correction measures like proper sampling.

My point is that so-called "empirical data" is also either anecdotal or experiential. If you didn't collect it yourself, it's still an anecdote. (Not that all anecdotes come from equally reliable sources, of course -- just pointing out that it's a non-useful/false dichotomy to divide the world into "anecdotes" and "data".)

No, just internally flagged the comment as "not worthwhile" because it relied upon anecdotes where clearly data would be more appropriate.

WTF? Seriously, how in the seven hells is an experience not data?

It's a common-enough and useful distinction (I might grant that it's something of a false dichotomy, but I think that's beside the point). Just to put a point on it:

"My doctor told me I had two weeks to live, so my church prayed for me to get better, and my cancer went away!"

Tells you effectively nothing about the efficacy of prayer. Multiplying this into a thousand anecdotes also tells you effectively nothing about the efficacy of prayer. By contrast, putting together a good study on the efficacy of prayer, with the appropriate controls, quickly tells you that prayer isn't better than chance.

Heh, this reminds me of some study I read about here (or on OB), about how when you give people a fact, they tend to first accept it before evaluating whether it's true, and that if you distract them before they have time to evaluate it, they'll "remember it as true".

Seems like something similar is going on here, where by default you don't check how reliable a piece of information is, unless you're prompted by something that makes you think about it.

Interesting bug! I probably have it too ...

Asking for good bias-correction is an absurd standard of evidence. You don't ask that of most information you use. Moreover, I bet you're very biased about when you think to apply this standard.

It's not entirely clear what pjeby means. If it's just self-experimentation, it's basically a single anecdote and not terribly useful. But I assume that he's talking about his clients, still a biased sample, but as good as it's going to get.

It's not entirely clear what pjeby means. If it's just self-experimentation, it's basically a single anecdote and not terribly useful.

The supreme irony of this train of thought is that my original suggestion was for people to apply good evidentiary standards to their self-experiments. So we are now debating whether I have a good standard of evidence for recommending the use of good standards of evidence. ;-)

But I assume that he's talking about his clients, still a biased sample, but as good as it's going to get.

Sort of. I noticed that if I didn't define what I was testing before I tested it, it was easy to end up thinking I'd changed when I hadn't. And I tend to notice that when my clients aren't moving forward in their personal change efforts, it's usually because they're straying off-process, most commonly in not defining what they are changing and sticking to that definition until they produce a result. (As opposed to deciding midstream that "something else" is the problem.)

"not terribly useful" was wrong. It should have been something more like "not generalizable to other people." We certainly agreed with your standard of evidence, but there's a big gap between a failure mode likely enough to be worth adding steps to fix and an "extremely high risk."

This post makes it sound like there's a lot of room for confirmation bias, but that doesn't bother me so much; in particular, it is a lot better than if it's just you.

Well, I'm not looking for anything other than "rationalist-porn for daydreamers", not really, so...

The path that this conversation took is interesting, and may reflect the fact that this community is taking form around the sharing of ideas, rather than, say, collaborating on experiments. When I look back at the original post here, I see an invitation to privately attempt something, and then share the conclusions. This is valid and interesting but not the only possible approach. I've been talking to Wired, for which I often write, about doing a story on "rationality as a martial art," and the editors' immediate reaction was: if it is about real tools to become more rational, yes; if it is about people merely sharing and confirming their social identity as rationalists, no.

This isn't to say there is anything wrong or embarrassing about identity as a basis for community - that's probably the heart of the matter in most cases. But there are other ways to hold communities together, and my original take on Less Wrong was that it also involved the application of ideas.

"Paranoid debating," at the last meetup, was actually quite instructive in this regard. I think it helped me become more rational to learn that I was the member of our group whose best intentions led the answer to be more wrong, and whose attempt to subvert actually pushed in the correct direction. I don't think this was through stupidity (the questions did not require much intelligence; only knowledge and realism about one's knowledge). Instead, several questions related to geography, and to places I had been, reporting on things that were unrelated to the questions. (Detroit: population; Alaska: size) Had I reported on these topics, I would have had real information. But since they were peripheral, I only experienced them passively, and formed vivid visual impressions. (Detroit: empty; Alaska: huge) My vivid visual impressions led to highly exaggerated estimates, and with no error checking in place, I argued for estimates that were wildly off.

Even though there were no stakes, I felt sorry - probably to just the right degree to make me remember the experience. I am still thinking about it several weeks later, and I am now more skeptical of something that research also confirms is a source of bias: visual impressions formed in the past without (much) rational analysis. This is not a scientific procedure, but an excellent example of rationality training in practice. It is not a private experiment, but a group collaboration.


Good points (and personally the word 'rationalist' irks me). Please use paragraphs.

The main danger for LW is that it could remain rationalist-porn for daydreamers.

I think this is a bit more accurate.

Are we developing a new art of akrasia-fighting, or is this just repackaged garden-variety self-help?

Edit: I don't mean to disparage anyone's efforts to improve themselves. (My only objection to the field of "self-help" is that it's dominated by charlatans.) But there is an existing body of science here, and I fear that if we go down this road the Art of Rationality will turn into nothing more than amateur behavioral psychology.

Simpleton - your comment struck me as right on target, except that I would give this a positive value rather than a negative one. A lot of self help takes the form of akrasia-fighting; the question of course is whether it works. Amateur behavioral psychology would be one of the tools for separating effective from ineffective akrasia-fighting, yes?

The word amateur could perhaps use some re-valuing, especially in this context. The amateur, the non-professional, the person who wants to solve this problem for the personal benefit of enhancing his or her own decision making power, rather than for advancing within an established discipline or business: isn't this very category already subtly implied by the idea of rationality as a martial art?

Up with behavioral psychology + up with amateurs = up with amateur behavioral psychology.

"Amateur" shouldn't have the negative connotation it has. Using science to improve your life and increase your ability to achieve your goals is in no way a bad thing even if you aren't an expert in that field of science (pretending to be an expert is another thing entirely).

Do engineers rely on amateur physics to do their jobs?

If we aren't relying on amateur behavioral psychology in our personal lives aren't we relying on folk psychology instead?

This was more along the lines of "a kick in the pants" plus anecdotal evidence gathering. Advancement and usage should happen together, for obvious reasons.

I think this would be a weak community if going down that road turned it into a non-scientific amateur fest, which is how I understand Simpleton's concern.

So how can we avoid LW becoming rationalist-porn for daydreamers, and leverage the site to actually improve our rationality?

So far LW has been more about discussing the "theory" side of rationality - what constitutes rationality, and what can be done to become more rational. My suggestion is for more posts on the "application" side.

Specifically, posts that outline real-world examples of failures of rationality (or any other cognitive failures) that their author has found.

To illustrate, this morning I saw an article about the serious problem of drug-resistant bacteria in India. Villagers, in particular, tend to stop their courses of antibiotics as soon as their symptoms disappear. The article didn't consider that there might be underlying cognitive factors behind their actions.

I could write a post about that article explaining why I think heuristics and biases are a significant factor behind the villagers' actions. Perhaps I could also apply my understanding of heuristics and biases to suggest what I think might be the best strategies for getting the villagers to avoid this dangerous behavior. If anyone disagreed with me and didn't think heuristics and biases play much of a role in this problem, they could outline their views in the comments and hopefully we could all learn something.

There's heaps of fodder on the web for this kind of analysis: articles, blog posts, news stories, advertisement copy, etc. There are cases where we can point out that heuristics and biases are underlying causes of particular problems - where the original article is likely to have overlooked this possibility. We could also be pointing out cases where people are exploiting heuristics and biases in order to manipulate others.

Writing such posts would give people practice at trying to spot, and trying to analyse, failures of rationality. I think that'd be useful for developing people's capabilities. And I think it'd be a good way for newcomers to be initiated into the subject matter and start developing their skills.

Such posts would also be 'putting our money where our mouth is'. We could be demonstrating that heuristics and biases are actually an important issue for society, by showing that they're an underlying factor in important and topical issues.

"...rationalist-porn for daydreamers..." -- exactly what is wrong with this?

For that matter, what would be wrong with "...rationalist-porn..." or even just "...porn..."? (Well, to be serious, I would understand objections to LW becoming that.)

Books on military jet aircraft, filled with pictures and stats, are compendiums of factual material and perfectly valid reference materials. That they are also used as "aerospace porn" by a certain fandom is only objectionable insofar as this interferes with their function. (Perhaps by greedy publishers pushing to include so many pictures that content suffers.)

Does my analogy apply to LW?

Porn is fine, it's the daydreaming that will bite you.

Walter Mitty is not a famous aviator.

How are the two much different? I find that I sometimes regret the countless hours that I've devoted to stimulating myself with porn instead of becoming a better musician. Yet I've spent far less mental energy regretting my daydreaming, which is about as big a time-sink. I like daydreaming. I don't regret any of my daydreaming concerning sci-fi topics. And why is it that I don't think I'll ever regret all of the time I've "wasted" playing music?

Daydreaming is entertainment and animal-style practice play. That sort is beneficial.

It's also a way to wirehead on the pseudo-utility of an achievement you haven't actually attained. That sort is no damn use at best.


26 upvotes. Plenty of insightful comments. What I'm interested to observe is whether anyone actually does it. I wonder if even Julian has a privately determined short deadline that he is working on. I give the latter about 50/50.

This prescription feels like a Good Thing. That is, it feels like the kind of thing we should passionately approve of, publicly support, and generally try to force onto others without actually doing it ourselves.

I've got a hunch that I have a little Akrasia demon in my brain somewhere going "Hey! That's a Good Thing and it sounds like it would take work. Usually Good Things that take work are egalitarian in nature and best left to other people to the extent that reputation considerations allow. In fact, if I am the one who is seen expending effort changing my behavior to be more Good then I will look like someone with low status! Hell, I'll never get laid if I look like I've got to go around suppressing my instincts and being Good. Screw that! I'm going to flaunt my ability to be stubbornly irrational. The girls dig it."

I've got a hunch that I have a little Akrasia demon in my brain somewhere going "Hey! That's a Good Thing and it sounds like it would take work. Usually Good Things that take work are egalitarian in nature and best left to other people to the extent that reputation considerations allow. In fact, if I am the one who is seen expending effort changing my behavior to be more Good then I will look like someone with low status! Hell, I'll never get laid if I look like I've got to go around suppressing my instincts and being Good. Screw that! I'm going to flaunt my ability to be stubbornly irrational. The girls dig it."

It's not that complicated. It's simply this: action is not an abstraction. The idea, as proposed, is abstract. To make it concrete, you would have to actually pick something, or at least figure out what process or criteria you would use to determine what to pick. You'd also need to know how you would record it, track it, etc. etc.

This stuff is all "near" thinking, and you're not doing it, which leaves this whole matter in the "far" system by default... where nothing is going to happen.

In order for it to actually count as akrasia, you'd have to first have DONE some of the near-system thinking needed to make it a concrete possibility for action. THEN, if you still resisted, it might be for reasons along the lines of what you mentioned... but in that case, it would not be by some complex string of reasoning, but because of a single, cached emotional response of shame (or a related status-lowering emotion).

That's because the "near" system doesn't reason in complex chains: it just spits out query results, and aggressively caches by reducing A-B-C chains to A-C shortcuts. That way, it never has to think things through, it just responds in a single lookup. (It's also how we get "intuitive leaps" and "gut feelings".)


It's not that complicated. It's simply this: action is not an abstraction. The idea, as proposed, is abstract. To make it concrete, you would have to actually pick something, or at least figure out what process or criteria you would use to determine what to pick. You'd also need to know how you would record it, track it, etc. etc.

That's a useful model to describe things with, but I do not believe it is that simple at all. We can break things down and consider only the parts that are most useful for personal development, but the fact remains that our instincts have emerged from a whole heap of competing priorities that we have adapted to evolutionarily, culturally, and personally. Some of these processes are easily traceable and explained, while others are not.

This stuff is all "near" thinking, and you're not doing it, which leaves this whole matter in the "far" system by default... where nothing is going to happen.

True enough, and that may well be all we need to know for the purposes of personally moving ourselves through to action. Meanwhile, if we are simply fascinated with how we, as a species, act, and wonder why we collectively act as we do or how we can expect a tribe of people to act in certain situations, then there are correlations to uncover, causes to deduce, 'just so' traps to avoid, and curiosities to be satisfied!

In order for it to actually count as akrasia, you'd have to first have DONE some of the near-system thinking needed to make it a concrete possibility for action. THEN, if you still resisted, it might be for reasons along the lines of what you mentioned... but in that case, it would not be by some complex string of reasoning, but because of a single, cached emotional response of shame (or a related status-lowering emotion).

I assert that akrasia can occur around the act of moving things from far to near. What I've noticed throughout your posts here and on your own site is that you've made it a core priority in your own development to focus on moving from far thinking to near thinking, rather than just throwing willpower at far thinking and hoping something good happens. I played around with some of the simple visualisation techniques you advocated, and they do seem to be remarkably effective at fostering that 'far-->near' transition. That is, they do if you can get the 'visualisation' into the near system!

As for complex strings of reasoning... well, it's true. There isn't really a 'demon' species by the name 'akrasia', and said demons do not really whisper complex thoughts into my ear! In fact, much of that process does not even occur as thoughts! Rather, the other source of complex optimisation handles it: sustained statistical pressure over many generations of selection. What we are left with is a balance of primitive drives and, in some cases, the groundwork for some 'intuitive leaps' and 'gut feelings'. Unfortunately, when that 'balance' doesn't quite fit our current circumstances, akrasia comes and bites us in the ass. At least, that's what it'd do if it were a demon.

Why not determine publicly to fix it?

I suppose rather than just asking a rhetorical question, I should advocate for publicizing one's plans. So:

It is far too easy to let oneself off the hook, and accept excuses from oneself that one would not want to offer to others. For instance, someone who plans to work out three times a week might fail, and let themselves off the hook because they were relatively busy that week, even though they would not be willing to offer "It was a moderately busy week" as an excuse when another person asked why they didn't exercise three times that week. On the other hand, the genuinely good excuses are the ones that we are willing to offer up: "I broke my leg", "A family member fell ill", etc.

So, for whatever reason, the excuses we are willing to publicly rely on do a better job of tracking legitimate reasons to alter plans. Thus, whenever one is trying to effect a change in one's life, it seems good to rely on one's desire not to be embarrassed in front of one's peers, as it will give one more motivation to stick to the plan. This motivation seems to be, if anything, heightened when the group is one that is specifically attending to whether you are making progress on the goal in question (for instance, if the project is about rationality, this community will be especially attuned to the progress of its members).

So, our rationality "to do" lists should be public (and, to echo something I imagine Robin Hanson would point out) so should our track-records at accomplishing the items on the list.


So, our rationality "to do" lists should be public

Any suggestions as to where we should post them? I tried posting mine to my Drafts, but I cannot access it without being logged in.

Someone could start a thread, I guess.

What makes it a crutch?

Here's what I've gathered from you so far: you have not been more insightful since castration, but you have been calmer, and less influenced by some unspecified bias. You see testosterone as you see blood alcohol, and prefer its absence.

If you're interested in persuading us, stop promoting your brand with single sentences and go into more detail.

Applied rationality? EY developed this rationality stuff as part of his work on AI. In terms of rational behavior, there's the general question of how we change behavior to what we think is rational, and the specific question of what is rational for whatever it is we're dealing with. Now that I write that, I realize that different areas might have different best ways to change behaviors (diet changes versus exercise, perhaps). I think that looking at more specific applied cases - what is a "rational" diet, and how to rationally adopt that diet - might be a good way to both test and develop x-rationality, versus more abstract decision theory (the Prisoner's Dilemma).

Wow, minus 13. Time to get a new handle.