Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

In response to Project Hufflepuff
Comment author: lifelonglearner 19 January 2017 05:16:04AM 2 points [-]

Nice! I think your use of "Hufflepuff virtue" really points at a great group of related memes that seem really helpful for group cohesion and sustainability.

I'll try to add some more examples/bounce off yours and you can let me know if they're in the same spirit?

  • Genuinely being excited when someone else gets a great opportunity because we're on the same side. Referring opportunities to people we know who are a good fit for them.

  • Matching up people with similar goals and handling other third-party coordination tasks that help others. Valuing the third-party actors who link things together.

  • Being positive and vocalizing support, even if it's just a basic "this is cool!" on posts / things (so we don't just assume silence = ambivalence).

  • Making it more of a norm to contribute some fraction of time towards public-good projects (EX: the wiki, beginner how-to's, etc.)

  • Valuing coordination / teamwork in and of itself as a terminal value. (This could be misguided in the limit, but I think it approximates the sort of behavior we want to see more of.)

[Link] Instrumental Rationality and Overriding Defaults

0 lifelonglearner 19 January 2017 04:34AM
Comment author: gwern 09 January 2017 08:50:35PM *  21 points [-]

So apparently the fundamental attribution error may not really exist: "The actor-observer asymmetry in attribution: a (surprising) meta-analysis", Malle 2006. Nor has Thinking, Fast and Slow held up too well under replication or evaluation (maybe half of it has).

I am really discouraged about how the heuristics & biases literature has held up since ~2008. At this point, it seems like if it was written about in Cialdini's Influence, you can safely assume it's not real.

Comment author: lifelonglearner 12 January 2017 06:32:02AM 3 points [-]

Is there a current list of biases that have held up?

I've been looking quite a bit specifically into the planning fallacy / miscalibration / overconfidence, which appears to be well-substantiated across a variety of studies (although I haven't seen any meta-analyses).

Comment author: lifelonglearner 12 January 2017 06:25:09AM 1 point [-]

[Comment related to the title. Post itself is insightful and I enjoyed it]:

I read this thinking that it was going to be about charitably treating interlocutors in the spirit of leaving a line of retreat, à la Yudkowsky. While related, the switch into using fiction as a way to have more serious truthseeking discussions was unexpected.

And it does seem like the fiction part is pretty core to this essay.

Any chance of changing the title to something a little more direct (EX: Fiction Allows Better Truthseeking)?

In response to Planning Fallacy
Comment author: Shakespeare's_Fool 17 September 2007 12:54:36PM 6 points [-]

In Boston we have heard many reporters claim that the delays in the Big Dig (the highways tunneled under the city and the harbor) were increased by contractors stretching out the work to increase their incomes. This suggests that there are strong incentives, particularly in government-funded projects, to over-promise (underbid) and under-deliver (negotiate higher pay once work is under way).

Not so much a planning bias as a pocketbook bias.


Comment author: lifelonglearner 10 January 2017 05:22:49PM 1 point [-]

Very very very late reply.

But there's been research on this in recent years. See Bent Flyvbjerg's papers on "strategic misrepresentation", where he outlines how perverse incentives can lead people to intentionally make overconfident predictions in government work projects.

However, Flyvbjerg also points out that a combination of psychological factors is probably involved too, as we continue to see this kind of overconfidence/optimism in areas like student predictions or trading (where active traders often do worse).

Comment author: moridinamael 09 January 2017 04:05:18PM *  0 points [-]
  1. Feel aversive towards continuing the loop. Mentally shudder at the part of you that tries to continue.

I haven't had much luck turning this sort of mental movement into a sustainable practice. I think training yourself to shudder at, or in some other sense despise, your own mental activity is contraindicated by a number of therapeutic models.

A core assumption of most models of self-care is that approaches should be "integrative". In other words, unacceptable inner voices, impulses and desires should be first compassionately acknowledged, rather than immediately dismissed/ignored. You don't have to act on the impulses, but you do have to listen to them and acknowledge that there is some brain-module that thinks you should be doing this thing right now. Otherwise that brain module is just going to keep sending its message with increasing urgency.

Subjectively, I find that training myself to "clamp down on", "reject", or "aggressively disapprove of" my own mental activity only results in a kind of increased subconscious pressure that tries to force these undesirable objects/impulses into awareness.

My solution has been to accommodate even my most annoying impulses, such as the impulse to browse Facebook, by bargaining with myself ("I'll do that on my lunch break") rather than rejecting the impulse outright ("No, that's a bad thing to want to do"). This is more sustainable for me and results in fewer complete breakdowns of apparent willpower.

Comment author: lifelonglearner 09 January 2017 05:09:28PM 0 points [-]

Yes, thank you for pointing this out.

Since writing this (which was several months ago), I've been moving toward a more wholesome, self-care-oriented approach, where it's important to understand what all the parts of yourself are trying to say. I think CFAR emphasizes this quite a bit in their curriculum.

When it comes to diagnosing action-intention gaps (e.g., you "want" to do something but don't actually do it due to hidden aversions), the sort of attitude you propose leads to helpful dialogues with yourself that are often a much better long-term solution than the brute-force "hate the 'bad' parts of yourself" approach I put in Step 1.

Comment author: RomeoStevens 08 January 2017 05:38:16AM *  0 points [-]

Generating an exo-brain/conceptverse for myself helped me level up by seeing the upstream skills that underlie many subskills more clearly and then practice them, which is much more efficient than practicing lots of highly specific skills. I would advise against feeling like they need to hang together coherently. Trying to generate such a 'perfect system' seems to mostly be a waste of time as your ontology/knowledge representation schemes will change a lot as you improve your mind.

Here's one instantiation of such that I still use sometimes: http://conceptspace.wikia.com/wiki/List_of_Lists_of_Concepts

Comment author: lifelonglearner 08 January 2017 04:18:45PM 0 points [-]

Hey Romeo,

I can see why pouring lots of resources into creating a perfect system might be both costly and easily made obsolete as I update.

However, I don't think that having a mental concept-verse is "enough", in the sense that I wouldn't trust myself to easily find the right instantiation of a meta-skill for a specific situation. I'd rather shortcut the thinking time for that entirely and just add it as a routine / TAP.

I'm curious what sorts of "upstream skills" you find the most value from and what sorts of practice schemes you have tried to integrate them into your life.

At least for me, if I'm not deliberately practicing something, I find that I really won't remember it or get value out of it consistently.

Actually Practicing Rationality and the 5-Second Level

5 lifelonglearner 06 January 2017 06:50AM

[I first posted this as a link to my blog post, but I'm reposting it as a focused article here that trims some of the fat from the original post, which was less accessible]

I think a lot about heuristics and biases, and I admit that many of my ideas on rationality and debiasing get lost in the sea of my own thoughts.  They're accessible if I'm specifically thinking about rationality-esque things, but often invisible otherwise.

That seems highly sub-optimal, considering that the whole point of having usable mental models isn’t to write fancy posts about them, but to, you know, actually use them.

To that end, I’ve been thinking about finding some sort of systematic way to integrate all of these ideas into my actual life.  

(If you’re curious, here’s the actual picture of what my internal “concept-verse” (w/ associated LW and CFAR memes) looks like)


[Image: MLU Mind Map v1.png]

So I have all of these ideas, all of which look really great on paper and in thought experiments.  Some of them even have some sort of experimental backing.  Given this, how do I put them together into a kind of coherent notion?

Equivalently, what does it look like if I successfully implement these mental models?  What sorts of changes might I expect to see?  Then, knowing the end product, what kind of process can get me there?

One way of looking at it would be to say that if I implemented these techniques well, then I'd be better able to tackle my goals and get things done.  Maybe my productivity would go up.  That sort of makes sense.  But this tells us nothing about how I'd actually go about using such skills.

We want to know how to implement these skills and then actually utilize them.

Yudkowsky gives a highly useful abstraction when he talks about the five-second level.  He gives some great tips on breaking down mental techniques into their component mental motions.  It’s a step-by-step approach that really goes into the details of what it feels like to undergo one of the LessWrong epistemological techniques.  We’d like our mental techniques to be actual heuristics that we can use in the moment, so having an in-depth breakdown makes sense.

Here’s my attempt at a 5-second-level breakdown for Going Meta, or "popping" out of one's head to stay mindful of the moment:

  1. Notice the feeling that you are being mentally “dragged” towards continuing an action.
    1. (It can feel like an urge, or your mind automatically making a plan to do something.  Notice your brain simulating you taking an action without much conscious input.)
  2. Remember that you have a 5-second-level series of steps to do something about it.
  3. Feel aversive towards continuing the loop.  Mentally shudder at the part of you that tries to continue.
  4. Close your eyes.  Take in a breath.
  5. Think about what 1-second action you could take to instantly cut off the stimulus from whatever loop you’re stuck in. (EX: Turning off the display, closing the window, moving to somewhere else).
  6. Tense your muscles and clench, actually doing said action.
  7. Run a search through your head, looking for an action labeled “productive”.  Try to remember things you’ve told yourself you “should probably do” lately.  
    1. (If you can’t find anything, pattern-match to find something that seems “productive-ish”.)
  8. Take note of what time it is.  Write it down.
  9. Do the new thing.  Finish.
  10. Note the end time.  Calculate how long you did work.
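Steps 8-10 are just simple timekeeping. As a rough sketch of that bookkeeping (my own illustration in Python, not part of the original technique; the function name is made up):

```python
import time

def timed_work_session(do_work):
    """Wrap a 'productive' action with the bookkeeping from
    steps 8-10: note the start time, do the thing, note the
    end time, and calculate how long you did work."""
    start = time.time()            # step 8: take note of what time it is
    do_work()                      # step 9: do the new thing, finish
    end = time.time()              # step 10: note the end time
    minutes = (end - start) / 60   # calculate how long you worked
    return minutes
```

The point of the explicit start/end record is the same as in the written-down version: it turns a vague sense of "I worked for a while" into a number you can actually review later.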

Next, the other part is actually accessing the heuristic in the situations where you want it.  We want it to be habitual.

After doing some quick searches on the existing habit research, it appears that many of the links go to Charles Duhigg, author of The Power of Habit, or BJ Fogg of Tiny Habits. Both models focus on two things: identifying the Thing you want to do, then setting triggers so you actually do It.  (There's some similarity to CFAR's Trigger Action Plans.)

Fogg's approach focuses on scaffolding new habits onto existing routines, like brushing your teeth, which are already automatic.  Duhigg appears to be focused more on reinforcement and rewards, with several nods to Skinner.  CFAR views actions as self-reinforcing, so the reward isn't even necessary; they see repetition itself as building automaticity.

Overlearning the material also seems to be useful in some contexts, for skills like acquiring procedural knowledge.  And mental techniques do seem to be more like procedural knowledge.

For these mental skills specifically, we'd want them to go off irrespective of time of day, so anchoring them to an existing routine might not be best.  Having them fire in response to an internal state (EX: "When I notice myself being 'dragged' into a spiral, or automatically making plans to do a thing") may be more useful.
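A Trigger-Action Plan, as described above, is essentially a lookup from a noticed internal state to a pre-committed response. Purely as an illustrative sketch (the trigger names and responses below are invented for the example, not CFAR's), it behaves like a small dispatch table:

```python
# A Trigger-Action Plan pairs a noticed internal cue with a
# pre-committed response. These entries are illustrative only.
taps = {
    "dragged_into_loop": "close the window and pick a 'productive' task",
    "making_automatic_plans": "pause, breathe, and run the 5-second steps",
}

def respond(trigger):
    # Fall back explicitly when no plan has been installed for the cue,
    # which mirrors what happens when a habit hasn't been trained yet.
    return taps.get(trigger, "no plan installed for this trigger")
```

The useful property of the internal-state anchor is visible here: the lookup is keyed on the cue itself, not on a time of day or an existing routine.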

(Follow-up post forthcoming on concretely trying to apply habit research to implementing heuristics.)




Comment author: eukaryote 28 December 2016 11:18:46PM 5 points [-]

This resonates. When a group conversation becomes unexpectedly intimate, I've definitely felt that urge to bail, or to interfere and bring the conversation back to a normal level of engagement. It feels like an intense discomfort, maybe a sense of "I shouldn't be here" or "they shouldn't have to answer that question."

I think that's often a good instinct to have. (In this context, 'interesting' seems to mean not just a topic you think is neat, but something like 'substantive and highly relevant to someone' or 'involving querying a person's deeply held beliefs', etc. Correct me if I'm wrong.) Here's where "diplomat mode" might be coming from:

  • The person starting an intensive conversation might be 'inflicting' it on the other person, who can't gracefully duck out

  • Both people are well-acquainted and clearly interested in having the conversation, but haven't considered that they're in public, and in retrospect would prefer not to have everyone else there

  • Even if they seem to be fine with me being there, my role is unclear if I'm not well-versed on the issue - am I supposed to ask questions, chime in with uneducated opinions, just listen to them talk?

  • Relatedly, conversations specific to people's deeply held interests are likely to require more knowledge to engage with, and thus exclude some of those present.

  • If other people are sharing personal stories or details, I might feel pressure to do that too

  • Conversations that run closer to what people really care about are more likely to be upsetting, and I don't want to be upset (or, depending, expect them to want to be upset in front of me)

  • I expect other people are uncomfortable, for whatever (any of the above) reasons

Most of these seem to apply less in small groups, or groups where everybody knows each other quite well. Attempting diplomat --> engineer shifts in a large group seems interesting, but risky if there are near-strangers present; managing or participating in that would also take a whole different set of group-based social skills. (IE: Reducing the risks above, assessing how comfortable everybody is with those increased risks, etc.)

Comment author: lifelonglearner 29 December 2016 12:36:57AM *  2 points [-]

Yep, your listed points are a really good extension of the intuition I sorta had in mind.

In particular, I think there can be a lot of awkwardness when it becomes something that other people might perceive as "not my domain", e.g. of a philosophical nature, which can lead to an uncertain role ("what do I say now?", "will they value my opinion?", etc.)

But the other bullet points you raised are also really, really valid. Thanks for expanding on this!

Comment author: lifelonglearner 28 December 2016 04:35:11PM 4 points [-]

I am skeptical that group conversations tend to fall apart when they get interesting because people have social reasons for leaving.

Rather, it feels like there's an expectation that group conversations are "supposed" to be lighter, and that one-on-one / small-group discussions are really meant for intimacy.

So it might not be so much that people deliberately leave to sabotage interesting conversations; rather, they see the shift as a signal to start a conversation of their own in a small group, or to politely leave and so increase the perceived value of the discussion for those involved.
