The 5-Second Level

Post author: Eliezer_Yudkowsky 07 May 2011 04:51AM

To develop methods of teaching rationality skills, you need to learn to focus on mental events that occur in 5 seconds or less.  Most of what you want to teach is directly on this level; the rest consists of chaining together skills on this level.

As our first example, let's take the vital rationalist skill, "Be specific."

Even with people who've had moderate amounts of exposure to Less Wrong, a fair amount of my helping them think effectively often consists of my saying, "Can you give me a specific example of that?" or "Can you be more concrete?"

A couple of formative childhood readings that taught me to be specific:

"What is meant by the word red?"
"It's a color."
"What's a color?"
"Why, it's a quality things have."
"What's a quality?"
"Say, what are you trying to do, anyway?"

You have pushed him into the clouds.  If, on the other hand, we habitually go down the abstraction ladder to lower levels of abstraction when we are asked the meaning of a word, we are less likely to get lost in verbal mazes; we will tend to "have our feet on the ground" and know what we are talking about.  This habit displays itself in an answer such as this:

"What is meant by the word red?"
"Well, the next time you see some cars stopped at an intersection, look at the traffic light facing them.  Also, you might go to the fire department and see how their trucks are painted."

-- S. I. Hayakawa, Language in Thought and Action

and:

"Beware, demon!" he intoned hollowly.  "I am not without defenses."
"Oh yeah?  Name three."

-- Robert Asprin, Another Fine Myth

And now, no sooner does someone tell me that they want to "facilitate communications between managers and employees" than I say, "Can you give me a concrete example of how you would do that?"  Hayakawa taught me to distinguish the concrete and the abstract; and from that small passage in Asprin, I picked up the dreadful personal habit of calling people's bluffs, often using the specific phrase, "Name three."

But the real subject of today's lesson is how to see skills like this on the 5-second level.  And now that we have a specific example in hand, we can proceed to try to zoom in on the level of cognitive events that happen in 5 seconds or less.

Over-abstraction happens because it's easy to be abstract.  It's easier to say "red is a color" than to pause your thoughts for long enough to come up with the example of a stop sign.  Abstraction is a path of least resistance, a form of mental laziness.

So the first thing that needs to happen on a timescale of 5 seconds is perceptual recognition of a highly abstract statement unaccompanied by any concrete example, paired with an automatic aversion, an ick reaction - this is the trigger which invokes the skill.

Then, you have actionable stored procedures that associate to the trigger.  And "come up with a concrete example" is not a 5-second-level skill, not an actionable procedure, it doesn't transform the problem into a task.  An actionable mental procedure that could be learned, stored, and associated with the trigger would be "Search for a memory that instantiates the abstract statement", or "Try to come up with hypothetical examples, and then discard the lousy examples your imagination keeps suggesting, until you finally have a good example that really shows what you were originally trying to say", or "Ask why you were making the abstract statement in the first place, and recall the original mental causes of your making that statement to see if they suggest something more concrete."

Or to be more specific on the last mental procedure:  Why were you trying to describe redness to someone?  Did they just run a red traffic light?

(And then what kind of exercise can you run someone through, which will get them to distinguish red traffic lights from green traffic lights?  What could teach someone to distinguish red from green?)

When you ask how to teach a rationality skill, don't ask "How can I teach people to be more specific?"  Ask, "What sort of exercise will lead people through the part of the skill where they perceptually recognize a statement as overly abstract?"  Ask, "What exercise teaches people to think about why they made the abstract statement in the first place?"  Ask, "What exercise could cause people to form, store, and associate with a trigger, a procedure for going through hypothetical examples until a good, or at least adequate, one is invented?"

Coming up with good ways to teach mental skills requires thinking on the 5-second level, because until you've reached that level of introspective concreteness, that fineness of granularity, you can't recognize the elements you're trying to teach; you can't recognize the patterns of thought you're trying to build inside a mind.

To come up with a 5-second description of a rationality skill, I would suggest zooming in on a concrete case of a real or hypothetical person who (a) fails in a typical fashion and (b) successfully applies the skill.  Break down their internal experience into the smallest granules you can manage:  perceptual classifications, contexts that evoke emotions, fleeting choices made too quick for verbal consideration.  And then generalize what they're doing while staying on the 5-second level.

Start with the concrete example of the person who starts to say "Red is a color" and cuts themselves off and says "Red is what that stop sign and that fire engine have in common."  What did they do on the 5-second level?

  1. Perceptually recognize a statement they made as overly abstract.
  2. Feel the need for an accompanying concrete example.
  3. Be sufficiently averse to the lack of such an example to avoid the path of least resistance where they just let themselves be lazy and abstract.
  4. Associate to and activate a stored, actionable, procedural skill, e.g:
    4a.  Try to remember a memory which matches that abstract thing you just said.
    4b.  Try to invent a specific hypothetical scenario which matches that abstract thing you just said.
    4c.  Ask why you said the abstract thing in the first place and see if that suggests anything.

and

  • Before even 1:  They recognize that the notion of "concrete" means things like folding chairs, events like a young woman buying a vanilla ice cream, and the number 17, i.e. specific enough to be visualized; and they know "red is a color" is not specific enough to be satisfying.  They perceptually recognize (this is what Hayakawa was trying to teach) the cardinal directions "more abstract" and "less abstract" as they apply within the landscape of the mind.

If you are thinking on this level of granularity, then you're much more likely to come up with a good method for teaching the skill "be specific", because you'll know that whatever exercise you come up with, it ought to cause people's minds to go through events 1-4, and provide examples or feedback to train perception 0.
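For readers who find code easier to stare at than introspection, the trigger-and-procedures structure of events 1-4 can be sketched as a toy program. Everything here - the marker words, the procedure strings - is a made-up illustration of the shape of the skill, not anything from the post:

```python
# Hypothetical sketch: "be specific" as a trigger associated with
# stored, actionable procedures (events 1-4 above).

def is_overly_abstract(statement):
    # Stand-in for perceptual recognition (step 1). Here it is just a
    # crude keyword check; in a mind, it is a trained pattern-matcher.
    abstract_markers = ("quality", "color", "communication", "value")
    return any(m in statement.lower() for m in abstract_markers)

# Step 4: the stored procedures associated with the trigger.
procedures = [
    "search memory for an instance of the abstract statement",
    "invent hypothetical examples; discard the lousy ones",
    "ask why you made the statement in the first place",
]

def respond(statement):
    if is_overly_abstract(statement):  # step 1: the trigger fires
        # Steps 2-3 (the felt need, the aversion) are what make this
        # association actually run instead of being lazily skipped.
        return procedures               # step 4: activate a procedure
    return []

print(respond("Red is a quality things have."))
print(respond("Look at that stop sign."))
```

The point of the sketch is only that the dispatch is keyed off a fast perceptual classification, not off deliberate verbal reasoning.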

Next example of thinking on the 5-second scale:  I previously asked some people (especially from the New York LW community) the question "What makes rationalists fun to be around?", i.e., why is it that once you try out being in a rationalist community you can't bear the thought of going back?  One of the primary qualities cited was "Being non-judgmental."  Two different people came up with that exact phrase, but it struck me as being not precisely the right description - rationalists go around judging and estimating and weighing things all the time.  (Noticing small discordances in an important description, and reacting by trying to find an exact description, is another one of those 5-second skills.)  So I pondered, trying to come up with a more specific image of exactly what it was we weren't doing, i.e. Being Specific, and after further visualization it occurred to me that a better description might be something like this:  If you are a fellow member of my rationalist community and you come up with a proposal that I disagree with - like "We should all practice lying, so that we feel less pressure to believe things that sound good to endorse out loud" - then I may argue with the proposal on consequentialist grounds.  I may judge.  But I won't start saying in immense indignation what a terrible person you must be for suggesting it.

Now I could try to verbally define exactly what it is we don't do, but this would fail to approach the 5-second level, and probably also fail to get at the real quality that's important to rationalist communities.  That would merely be another attempt to legislate what people are or aren't allowed to say, and that would make things less fun.  There'd be a new accusation to worry about if you said the wrong thing - "Hey!  Good rationalists don't do that!" followed by a debate that wouldn't be experienced as pleasant for anyone involved.

In this case I think it's actually easier to define the thing-we-avoid on the 5-second level.  Person A says something that Person B disagrees with, and now in Person B's mind there's an option to go in the direction of a certain poisonous pleasure, an opportunity to experience an emotional burst of righteous indignation and a feeling of superiority, a chance to castigate the other person.  On the 5-second level, Person B rejects this temptation, and instead invokes the procedure of (a) pausing to reflect and then (b) talking about the consequences of A's proposed policy in a tone that might perhaps be worried (for the way of rationality is not to refuse all emotion) but nonetheless is not filled with righteous outrage and indignation which demands that all others share that indignation or be likewise castigated.

(Which in practice, makes a really huge difference in how much rationalists can relax when they are around fellow rationalists.  It's the difference between having to carefully tiptoe through a minefield and being free to run and dance, knowing that even if you make a mistake, it won't socially kill you.  You're even allowed to say "Oops" and change your mind, if you want to backtrack (but that's a whole 'nother topic of 5-second skills)...)

The point of 5-second-level analysis is that to teach the procedural habit, you don't go into the evolutionary psychology of politics or the game theory of punishing non-punishers (by which the indignant demand that others agree with their indignation), which is unfortunately how I tended to write back when I was writing the original Less Wrong sequences.  Rather you try to come up with exercises which, if people go through them, cause them to experience the 5-second events - to feel the temptation to indignation, and to make the choice otherwise, and to associate alternative procedural patterns such as pausing, reflecting, and asking "What is the evidence?" or "What are the consequences?"

What would be an exercise which develops that habit?  I don't know, although it's worth noting that a lot of traditional rationalists not associated with LW also have this skill, and that it seems fairly learnable by osmosis from watching other people in the community not be indignant.  One method that seems worth testing would be to expose people to assertions that seem like obvious temptations to indignation, and get them to talk about evidence or consequences instead.  Say, you propose that eating one-month-old human babies ought to be legal, because one-month-old human babies aren't as intelligent as pigs, and we eat pigs.  Or you could start talking about feminism, in which case you can say pretty much anything and it's bound to offend someone.  (Did that last sentence offend you?  Pause and reflect!)  The point being, not to persuade anyone of anything, but to get them to introspectively recognize the moment of that choice between indignation and not-indignation, and walk them through an alternative response, so they store and associate that procedural skill.  The exercise might fail if the context of a school-exercise meant that the indignation never got started - if the temptation/choice were never experienced.  But we could try that teaching method, at any rate.

(There's this 5-second skill where you respond to mental uncertainty about whether or not something will work, by imagining testing it; and if it looks like you can just go test something, then the thought occurs to you to just go test it.  To teach this skill, we might try showing people a list of hypotheses and asking them to quickly say on a scale of 1-10 how easy they look to test, because we're trying to teach people a procedural habit of perceptually considering the testableness of ideas.  You wouldn't give people lots of time to think, because then that teaches a procedure of going through complex arguments about testability, which you wouldn't use routinely in real life and would end up associating primarily to a school-context where a defensible verbal argument is expected.)
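A minimal sketch of what the scaffolding for such a drill might look like, assuming a `rate` callback standing in for the test subject; the hypotheses and the 5-second limit are invented for illustration:

```python
# Hypothetical sketch of the timed drill: flash a hypothesis, allow only
# a few seconds to rate its testability 1-10, so the habit trained is a
# fast perceptual assessment rather than a slow verbal argument.
import random
import time

hypotheses = [
    "Drinking coffee after 6pm worsens my sleep.",
    "People are intrinsically good.",
    "This sorting function handles empty lists correctly.",
]

def run_drill(rate, time_limit=5.0):
    """rate: a callback taking a hypothesis and returning an int 1-10.
    Returns (hypothesis, rating, answered_within_limit) tuples."""
    results = []
    for h in random.sample(hypotheses, len(hypotheses)):
        start = time.monotonic()
        score = rate(h)                       # subject's snap judgment
        elapsed = time.monotonic() - start
        results.append((h, max(1, min(10, score)), elapsed <= time_limit))
    return results

# With a real subject, `rate` would read from input() under a timer;
# here a dummy rater stands in so the sketch runs on its own.
for h, score, in_time in run_drill(lambda h: 7):
    print(score, in_time, h)
```

The design choice that matters is the time limit: as the post says, unlimited time would train a school-context habit of defensible verbal argument instead of the routine perceptual check.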

I should mention, at this point, that learning to see the 5-second level draws heavily on the introspective skill of visualizing mental events in specific detail, and maintaining that introspective image in your mind's eye for long enough to reflect on it and analyze it.  This may take practice, so if you find that you can't do it right away, instinctively react by feeling that you need more practice to get to the lovely reward, instead of instinctively giving up.

Has everyone learned from these examples a perceptual recognition of what the "5-second level" looks like?  Of course you have!  You've even installed the mental habit that, when you or somebody else comes up with a supposedly 5-second-level description, you automatically inspect each part of the description to see if it contains any block units like "Be specific" which are actually high-level chunks.

Now, as your exercise for learning the skill of "Resolving cognitive events to the 5-second level", take a rationalist skill you think is important (or pick a random LW post from How To Actually Change Your Mind); come up with a concrete example of that skill being used successfully; decompose that usage to a 5-second-level description of perceptual classifications and emotion-evoking contexts and associative triggers to actionable procedures etcetera; check your description to make sure that each part of it can be visualized as a concrete mental process and that there are no non-actionable abstract chunks; come up with a teaching exercise which seems like it ought to cause those sub-5-second events to occur in people's minds; and then post your analysis and proposed exercise in the comments.  Hope to hear from you soon!

Comments (310)

Comment author: mingularity 16 September 2013 06:00:10PM 4 points

"Abstraction is a path of least resistance, a form of mental laziness."

This can be a good thing. Abstractions, such as category theory, often generalize to many other more specific domains. "Abstract nonsense" can teach us much about the world, and it can allow you to transfer your knowledge from one domain to another. I suppose you would now like to see a concrete example of abstract nonsense at work. However, my time on earth is limited.

Comment author: NiceButAngry 15 November 2012 03:50:23PM 1 point

I'm new here and couldn't find a better place to ask this: Are there any exercises to train such skills on the site? For example a list of statements to assess their testability?

Also I was wondering if there is some sort of pleasant way to access this site using an Android phone. I would like to read the sequences on mine.

Oh and hello everybody! :) I hope I can find the time and motivation to spend some time in this place, I think I might like to have your skills. ^^

If I violate any of your rules or anything, just let me know; I have barely scratched the surface of this seemingly massive site.

Comment author: aletheianink 30 November 2013 04:39:38AM 0 points

Your post was over a year ago, but I will reply anyway:

I don't know the answer to the first question, as I am also new.

To the second question: I recommend something like Readability, where you can clip a page (or sequence) and then read it in a really nice interface through the Readability app.

Comment author: hyporational 30 November 2013 12:59:01PM 0 points

Pocket is nice too.

Comment author: Folcon 29 May 2011 08:43:27AM 0 points

Could someone give me the reasoning for why silver lining thinking itself is bad? Making mistakes is inevitable, and so I would have thought this is a way to start to look past the mistake and try to give it a sense of perspective. Falsely rationalising a bad thing into a good thing is not valuable; however, taking a bad thing and working out how to turn the situation you are now in into a more positive experience - or, if you are completely stuck, realising that it is time to move on - I would have thought to be a useful skill. Please explain if you believe that I am wrong.

Comment author: Carinthium 02 August 2011 07:05:36AM 0 points

If you're still here: As far as I can tell, emphasising how to take advantage of a bad situation can be useful, but a tendency to downplay the bad side of a situation reduces objectivity by making it hard to perceive the bad side of a situation. Of course you should try to turn such experiences to your advantage (often - sometimes it's better to 'cut and run', and sometimes to try and minimise losses; in some situations it would be necessary to try and avert a greater catastrophe), but objective awareness of the extent of the problem is useful.

In addition, mistakes can be minimised (for some people, in some areas of life, they are reducible to insignificance). It is best if a person can recognise a mistake, figure out what they did wrong, and be sure not to do it again.

Comment author: Folcon 20 January 2015 01:28:08AM 0 points

A bit late, but thank you for the insight.

Comment author: diegocaleiro 19 May 2011 07:48:29PM 0 points

I decided on using "Motivated stopping" and "Motivated continuation" as my two examples.

To successfully avoid motivated stopping, someone who thinks he can use Solomonoff Induction to simulate "what is it like to be the epistemology of a mind" should consider whether or not he has thought in detail about how much of our understanding of gross-level affective neuroscience can be mapped into a binary '01010001' kind of description, and whether he has sufficiently detailed evidence to go on and write something like http://arxiv.org/PS_cache/arxiv/pdf/0712/0712.4318v1.pdf (This is not a critique of Peter de Blanc, but of Solomonoffist Inductors in general)

To successfully avoid motivated continuation, someone who thinks she can make easy money without much effort should 1) Notice whether her decision to postpone actually doing it, because she believes it doable, is a form of akrasia, or fear of the twinge of starting; 2) Think about whether she would be comfortable explaining her thesis on how to easily make money to a friend (and not embarrassed by it); 3) Wonder if she keeps reading about how to do it in order to feel the warm glow of reading Tim Ferriss-like material and pretending to be awesome, or if there is an actual need for more information than she presently has.

Comment author: MixedNuts 15 May 2011 08:42:51PM 8 points

take a rationalist skill you think is important

Facing Reality, applied to self-knowledge

come up with a concrete example of that skill being used successfully;

"It sure seems I can't get up. Yet this looks a lot like laziness or attention-whoring. No-no-I'm-not-this-can't-be-STOP. Yes, there is a real possibility I could get up but am telling myself I can't, and I should take that into account. But upon introspection, and trying to move the damn things, it does feel like I can't, which is strong evidence.

So I'm going to figure out some tests. Maybe see a doctor; try to invoke reflexes that would make me move (careful, voluntary movement can truly fail even if reflexes don't); ask some trusted people, telling them the whole truth. Importantly, I'm going to refuse to use it as an excuse to slack off. I can crawl!"

crawls to nearest pile of homework, and works lying prone, occasionally trying to get up

decompose that use to a 5-second-level description of perceptual classifications and emotion-evoking contexts and associative triggers to actionable procedures;

  • try to move legs, fail
  • compare with expectation (possibly verbalizing it "Those are legs. They're used to move around.", more likely not), be surprised
  • recognize this as an obstacle to reaching a goal, thwarting the "decide to work => get up => walk to desk => sit down => work" chain
  • recognize this obstacle as unusual and un/insufficiently planned for
  • pattern-match "weird obstacle" to "overly convenient excuse"
  • automatically think "No, other people use convenient excuses, but I don't, I'm sincere"
  • recognize this as wishful thinking (re: self-image)
  • accept the unpleasant hypotheses as possible (this looks litany-of-Gendlin-ish); "I do not want to be a lazy attention whore, but believing I am not won't help" - ick reaction to the process of rejecting the thought before reflecting on it in detail, flinch towards the painful thought
  • recognize you have a hypothesis to test; do a little dance and exclaim "yay, science!"
  • look for ways to test the hypothesis, as triggered by the recognition
  • implement easy tests, note others for later use
  • mark this train of thought with a little [closed] tag
  • go back to the original problem (easy in this example, since the awkward position triggers it)
  • examine the overly convenient excuse and check what it excuses from
  • feel a jolt of determination ("Oh yeah? You think you can stop me?") and look for roundabout ways to reach your goal anyway, partially out of spite and competitiveness
  • implement one of these ways
  • feel good about being The Determinator
  • optionally, reconsider the "I'm a lazy attention whore" hypothesis in light of the (totally rigged) test; move probability mass away from it towards "I have a legitimate problem, which I'm totally overcoming because I'm awesome" and "Sure I am, but look, I'm recovering"; award self a gold star

For a problem previously but rarely encountered, this takes about 5 seconds. For completely new problems it takes longer in tests, and there are a few more steps battling fear.

check your description to make sure that each part of it can be visualized as a concrete mental process and that there are no non-actionable abstract chunks;

Tricky; mental events are hard to visualize. I think "check what it excuses from" is the vaguest step (but it's not a crucial one, anyway), it could be done in more than one way.

come up with a teaching exercise which seems like it ought to cause those 5-second events to occur in people's minds;

Steps that need teaching:

  • pattern-match "weird obstacle" to "overly convenient excuse"
  • recognize this as wishful thinking (re: self-image)
  • accept the unpleasant hypotheses as possible (this looks litany-of-Gendlin-ish); "I do not want to be a lazy attention whore, but believing I am not won't help" - ick reaction to the process of rejecting the thought before reflecting on it in detail, flinch towards the painful thought
  • feel a jolt of determination ("Oh yeah? You think you can stop me?") and look for roundabout ways to reach your goal anyway, partially out of spite and competitiveness

The first is easiest to learn. Show people a lot of cases where people use convenient excuses. Hell, most people probably overfit here; look at all the disabled people told they're just lazy.

The second is crucial. It can be taught with a stern teacher; student describes their life to the teacher, and whenever something looks like self-deception ("No, really, I'm not gay, I just have sex with men sometimes, and of course I don't look at women, that would be cheating on my wife") the teacher calls their bluff. (Is this what therapy does?) That demands a lot of time and trust.

For a more self-teaching route, maybe try to explain every one of your behaviors with a bad character trait, rather than circumstance or a good trait. Might feel too fake, though. At least, reflect upon behavior that looks bad, even if you have good private reasons for it. The point of this step is to notice the possibility you have a bad trait, not to test it.

The third step is to accept it once noticed. I would go with two sets of exercises. One set teaches general flinching towards pain, like talking to strangers and walking on the roofs of tall buildings and resisting delicious cake. The second teaches singlethink; an obvious method is to write down all thoughts and notice flinches away from painful thoughts and rationalizations, and face them squarely, both immediately (with a set topic) and over time. Also, recite the litanies, and freak yourself out with horror stories of self-deception. This may well take more than five seconds for beginners, but I've found it becomes near-instant with comparatively little training.

The fourth step is rather me-specific. You may prefer other attitudes like "I'm so clever!" or "Okay, I noticed, moving on" or "Other people have it so much worse, how dare I whine".

There are standard exercises to teach determination. Pick your favorite shounen character, and use him or her (okay, him) as a role model - what would Edward Elric do? Use motivators liberally, and have a laugh when you outdo them (as in my example; Courage Wolf thinks paralysis is an excuse).

Comment author: MrMind 13 May 2011 01:44:50PM 4 points

I wanted to do the 5-second decomposition on what I think is one of the most important qualities of a rationalist: s/he is able to say "oops!", but I found that it's probably a rationalist primitive. Anyway, here's my attempt:

  • notice the feeling of being wrong, or of having something screwed up, etc
  • don't deny it, stay with the feeling, let it be present in your mind
  • notice that you're still alive, that just because you admit it, nothing changed in the world: you already screwed up, you already experienced the consequences of your failure
  • say oops!
  • get on with your life (correct the mistake / revise your belief / etc)

Comment author: MrMind 13 May 2011 01:53:34PM 7 points

It also seems to me that a general structure for the application of rationality follows a path like this:

  • notice a trigger: usually automatically activated bias has an unpleasant feeling attached to it
  • insert a space of rest so that the bias doesn't get automatically triggered
  • execute instead the rational behaviour

Comment author: loqi 14 May 2011 03:58:23PM 2 points

I really like this breakdown. I do think the first item can be generalized:

usually automatically activated bias has a feeling attached to it

since positive-affect feelings like righteousness are also useful hooks.

Comment author: MrMind 14 May 2011 11:13:37PM 2 points

You're right; they don't even need to be strong emotions: for example, positive-affect-induced biases building incrementally over time, as in affective death spirals.

Comment author: sriku 12 May 2011 01:51:07PM 6 points

I haven't seen meditative practices described much here and I've known first hand how they can help with this level of introspection. So, for those who might wish to try, I'll briefly describe the plain instruction given to zen students. If you want to read in a bit more detail, the thin book "zen in plain English" is an excellent intro.

Sit in a quiet place, with lights dimmed, facing a wall, with your back straight (e.g. use a cushion for lower back support). Half-close your eyelids. Adjust your breathing by taking a few deep breaths, and then fall back to natural, effortless breathing. Count your exhalations: inhale-1, inhale-2, inhale-3 ... 10, and cycle back to 1. If you lose count in the middle (yes, you will), just start again at 1. Try this for at least 5 minutes; you can go up to 30 minutes. That's all!

You can stop reading and try it.

When I began (don't laugh), I could barely count to 3. Here's how it went -

Inhale-1-inhale-2 ... what am I doing? What is this supposed to get me? Never stared at a wall before. Oh drats, back to 1.

Inhale-1-inhale-2... the plaster on the wall looks like a gorgon's face ... wonder what the others are thinking about .... Where was I? .. ok focus. 1..

Inhale-1-inhale-2... Damn is this what the famous sages did day in and day out? ... Oh shit lost it again. Am I that incapable of focusing? .. Ok back to 1

Inhale-1-inhale-2... Wait did I just chastise myself for something so trivial as counting my breath? .. (sigh) back to 1.

(Slowly the noise comes down and you get more real noise.)

Inhale-1-inhale-2 ... should I be taking deep breaths? Was the previous one long enough? ... Ok ok just sit and breathe ... Back to 1 ...

..... and so it goes. Just try it. The "back to 1" breakpoint works like a lens into your thought stream.

PS: apologies for the rough post. Just thought of writing this while on the bus.

Comment author: Charlie_OConnor 12 May 2011 05:40:57AM 3 points

5 second level for evidence as soldiers

  1. Notice that all your evidence favors your belief; or Notice the anger/resentment/fear when coming across evidence against your belief.
  2. Pause and remember that
    1. beliefs are just expectations and truth is a measure of how accurate your expectations are
    2. evidence is not for or against a belief, it is a flow of probability between expectations
  3. Feel aversion to not internalizing all the evidence, to not letting reality constrain your expectations (beliefs)
  4. Make a Bayesian calculation, incrementally incorporating all the evidence, so that your expectations (beliefs) are accurate (true).
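Step 4 can be made concrete with the odds form of Bayes' rule: multiply prior odds by the likelihood ratio of each piece of evidence, favorable or unfavorable alike. A minimal sketch with made-up numbers:

```python
# Sketch of incremental Bayesian updating in odds form.
# The likelihood ratios below are invented purely for illustration.

def update_odds(prior_odds, likelihood_ratios):
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr  # each piece of evidence shifts the odds multiplicatively
    return odds

def odds_to_prob(odds):
    return odds / (1 + odds)

# Prior: even odds. Three pieces of evidence: two favoring the belief
# (LR > 1), one against it (LR < 1). All three get incorporated, not
# just the favorable ones.
posterior = update_odds(1.0, [3.0, 2.0, 0.5])
print(odds_to_prob(posterior))  # odds of 3.0, i.e. probability 0.75
```

The discipline the comment describes is exactly the refusal to drop the LR < 1 factor from the product.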

A recent example for me comes from reading The Nurture Assumption and Selfish Reasons to Have more Kids.

  1. I noticed I was really convinced by a lot of evidence in favor of the view that parental influence is less important than I thought.
  2. My beliefs were being updated, but only by evidence in one direction - in favor of the hypothesis.
  3. Not wanting to be inaccurate about the best way to raise children I searched google scholar for twin/adoption studies and criticisms.
  4. I updated my beliefs based on the criticisms of the studies, and I now feel confident in my expectations about parental influence.

Exercises include picking a belief (maybe one you recently acquired from a convincing friend) and researching all arguments for and against the belief. Write down your expectations before the research. As you research, compare the research to your expectations and update your expectations as you go (I mean actually writing down, so that others can read it, what you actually expect). Repeat. Eventually pick beliefs you have held for a long time and are a part of your identity (after practicing on recent beliefs that matter less).

Comment author: laakeus 20 December 2012 08:32:51PM 0 points

I updated my beliefs based on the criticisms of the studies and I now feel confident in my expectations about parental influence.

I'm curious as to what your updated beliefs are on parental influence. Can you summarize in a couple of paragraphs?

(I think the original description matches how I view the issue, but I feel the topic doesn't have enough importance for me to spend a lot of time trying to update my beliefs.)

Comment author: outofculture 15 May 2011 07:13:30AM 1 point

A variant on this topic:

  1. Notice when providing evidence X for a position P you believe in.
    1. Bonus points for reviewing recent memories to see if you have supported P repeatedly, especially to the exclusion of evidence to the contrary.
  2. Feel revulsion at having become the puppet of P.
  3. Introduce a nudge away from P. Some examples:
    1. Provide some good evidence counter to P.
    2. If you cannot point to specific counter evidence, try to at least describe what counter evidence would look like.
    3. State just how surprised you would be to see the evidence X if the position P were false. Can you rank it relative to other pieces of evidence under consideration? If the evidence is really weak, ask to have it weighted as such.

This seems sloppy, as it relies on the sense of revulsion to determine how much of a counter-nudge to give. It should still be useful, I hope.
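The surprise test in step 3.3 is an informal likelihood ratio: how probable is evidence X if P is true, versus if P is false? A toy sketch of weighting evidence this way (the probabilities are illustrative guesses, not real numbers):

```python
# Weighting a piece of evidence by how surprising it would be under not-P
# (a hypothetical sketch; probabilities here are illustrative guesses).
import math

def evidence_weight(p_x_given_p, p_x_given_not_p):
    """Log-likelihood ratio in bits: positive favors P, near zero means weak evidence."""
    return math.log2(p_x_given_p / p_x_given_not_p)

# "I'd see X about as often whether or not P were true" -> weak evidence
weak = evidence_weight(0.6, 0.5)
# "X would be very surprising if P were false" -> strong evidence
strong = evidence_weight(0.9, 0.1)

print(round(weak, 2), round(strong, 2))  # 0.26 3.17
```

If the weight comes out near zero, that is the cue to ask that the evidence "be weighted as such", as step 3.3 suggests.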

The exercise to train this with:

  1. Propose a character facing a choice, especially on topics that are muddled by being high-profile (e.g. Jane Senator must decide how to vote on extending unemployment benefits).
  2. Provide a small selection of evidence that the character has considered, and state that their position after seeing just that evidence is for, against or undecided.
  3. Ask the participants what additional evidence they think the character should consider.

Comment author: Broggly 11 May 2011 10:58:06AM 2 points [-]

The first fictional example I thought of was the Wax Lips scene from The Simpsons. "Try our wax lips: the candy of 1000 uses!" "Like what?" "One, a humourous substitute for your own lips." "Keep going..." "Two, err...oh, I'm needed in the basement!"

Comment author: novalis 11 May 2011 04:35:02AM 3 points [-]

I was thinking about how Beliefs Must Pay Rent the other day, because my wife is much better than me at noticing when this isn't happening. One major trick to this is that she always asks (at least internally), "So what?"

That is, rather than immediately finding a way to attack whatever it is that the other person said, she considers whether what they've said affects anything in their argument. One line of inquiry is, "can I concede this point and still win?" But "so what?" goes further than that -- it helps her internally understand if there is anything of substance to the argument. If the answer (in her mind) to "so what?" is, "that would be bad", then there at least might be some substance there. But if there is no answer, she asks the question out loud, to see whether she's missing something, or whether there really is no valid belief at all.

Note: this is my paraphrasing of her technique; she may or may not endorse this interpretation.

Comment author: thomblake 09 May 2011 11:27:01PM 0 points [-]

Comment author: AlanCrowe 09 May 2011 09:39:24PM 3 points [-]

For my attempt at the exercise I pick a sub-skill of "reading, pen-in-hand" that I call "spotting opportunities to engage." My attempt runs to 2020 words and was rejected by the LessWrong software for being too long. I've put the raw text on a web page. Sorting out the HTML will have to wait for another day.

Why so long? I see the skill as very important. I'm crap at it. I've just had a success that I'm pleased with, but it is too recent, I haven't had time to boil it down so that I can describe it briefly.

Comment author: thomblake 09 May 2011 06:34:10PM 0 points [-]

Taking a look at Hug the Query for the exercise:

We have an ordered hierarchy:

  • authority
  • argument
  • calculations
  • experiment

In which we should be going as far down the chain as possible when considering a factual dispute.

Thus, if you find yourself thinking about whether someone can be trusted based on reputation or prestige, ask, "Can I look at their arguments instead?". If you find yourself looking at their arguments, ask, "Can I look at their calculations?". If you find yourself looking at their calculations, ask, "Can I perform an experiment?".

An exercise would be difficult in the absence of real factual disputes. If there are real factual disputes amongst the participants: Begin arguing about the factual dispute based on whatever seems most compelling. Ask the above questions, and resolve it to the point where at least in principle an experiment is identified which would answer that question. It would be helpful if the dispute is cut off after a set amount of time (slightly more than 5 seconds, I think) so that it counts as practice for the 5-second skill of determining whether experimental evidence is available.

Did I miss anything?

Comment author: calcsam 09 May 2011 07:43:41AM -1 points [-]

Good post. This invokes, of course, the associated problem of phrasing this in a way that might encourage listening on the other end.

Comment author: Eliezer_Yudkowsky 09 May 2011 05:02:59AM 5 points [-]

The word "moralize" has now been eliminated from the blog post. Apparently putting a big warning sign up saying "Don't argue about how to verbally define this problem behavior, it won't be fun for anyone and it won't get us any closer to having a relaxed rationalist community where people worry less about stepping in potholes" wasn't enough.

Comment author: [deleted] 09 May 2011 07:03:36AM 3 points [-]

"Moralizing is the mind-killer"?

Nah, just kidding. Making a joke.

Comment author: wedrifid 09 May 2011 09:45:18AM 7 points [-]

"Moralizing is the mind-killer"?

Nah, just kidding. Making a joke.

No, that's more or less right. Which is unsurprising since moralizing is just politics.

Comment author: wedrifid 09 May 2011 06:15:12AM *  0 points [-]

The word "moralize" has now been eliminated from the blog post. Apparently putting a big warning sign up saying "Don't argue about how to verbally define this problem behavior, it won't be fun for anyone and it won't get us any closer to having a relaxed rationalist community where people worry less about stepping in potholes" wasn't enough.

In case it isn't clear, let me say that my reply continues to apply to the current version. I refer to the underlying concept described, not the word, so consider my reply to be edited to match.

Comment author: Eugine_Nier 09 May 2011 05:29:29AM *  7 points [-]

Apparently putting a big warning sign up saying "Don't argue about how to verbally define this problem behavior, it won't be fun for anyone and it won't get us any closer to having a relaxed rationalist community where people worry less about stepping in potholes" wasn't enough.

I would just like to point out the irony of telling people you're training to be rationalists not to reason about a concept.

Edit: A better way to express what I find ironic about Eliezer's statement, is that at least half the people here started their journey into rationalism by ignoring the big bright warning sign saying "Don't question God!" This fact is useful to keep in mind when predicting their reactions to big bright warning signs.

Comment author: JamesAndrix 09 May 2011 03:42:20PM 13 points [-]

Rationalists should also strive to be precise, but you should not try to express precisely what time it was that you stopped beating your wife.

Much of rationality is choosing what to think about. We've seen this before in the form of righting a wrong question, correcting logical fallacies (as above), using one method to reason about probabilities in favor of another, and culling non-productive search paths (which might be the most general form here).

The proper meta-rule is not 'jump past warning signs'. I'm not yet ready to propose a good phrasing of the proper rule.

Comment author: lessdazed 09 May 2011 05:12:12PM 0 points [-]

I thoroughly endorse this comment.

Just a note relevant for people involved in the discussion on this page regarding upvoting and downvoting. This is a sort of situation in which I might downvote lessdazed's comment below, simply to increase local contrast between the vote totals of responses to the parent (so long as I did not push the score of the below comment into the negatives). This is true even though I (happen to ;-)) agree with the below comment.

Downvoting is not a personal thing, and if you take it personally, it is probably because it happens to be so for you and you are projecting your voting behavior onto others. In all discussions of voting I've seen, people have different criteria.

Apologies for metaness and thread hijack.

Comment author: lessdazed 09 May 2011 03:00:49PM *  3 points [-]

at least half the people here started their journey into rationalism by ignoring the big bright warning sign saying "Don't question God!"

Your edit is perfectly sufficient and I have no criticisms of it. However, the point can be expanded upon such that it will seem different and it may appear I am disagreeing.

The metaphorical signs that exist invoke the idea "Don't question God!", but in the West, that's not too close to what they actually say. In religious communities at least moderately touched by the Enlightenment, enough distaste for signs reading "Don't question God!" has been absorbed that such signs would be disrespected as low status.

This is something a member of a moderate strain of fundamentalism might pride himself or herself on, as a factor that distinguishes him or her from literalists, perhaps as an important part of his or her identity.

To make someone think "Don't question God (this time)!", the sign might say something like "You don't know what the consequences would have been had those people lived. God does, so rely on his judgment."

The "this time" will happen to be every time, but the universality of it won't be derived from so general a rule; it will be a contingent truth but not a logical one exactly.

Comment author: wedrifid 09 May 2011 06:33:28AM 1 point [-]

I would just like to point out the irony of telling people you're training to be rationalists not to reason about a concept.

Not quite ironic. More just arbitrary.

Comment author: rhollerith_dot_com 09 May 2011 06:04:12AM *  1 point [-]

It's ironic only to those who have different ideas about what it means to reason. Reason need not be applied indiscriminately. (And it's not equivalent to arguing.)

Comment author: Eugine_Nier 09 May 2011 06:46:35AM 2 points [-]

Reason need not be applied indiscriminately.

This is a very interesting statement (with which I agree). I would also like to see your explanation for when it's inappropriate to apply reason; I'll post mine afterwards.

(And it's not equivalent to arguing.)

I don't quite see the distinction you're trying to make. Especially in this context since the posters arguing about morality were certainly trying to reason about it and not just arguing for the sake of arguing.

Comment author: rhollerith_dot_com 09 May 2011 07:52:35PM *  3 points [-]

I (and probably the 2 who upvoted me) misunderstood your use of 'ironic'. I now see that you probably meant it in the sense of 'superficially paradoxical or false, but on closer inspection, interesting'. (I thought you meant it more in the sense of 'incongruous, and consequently suspect'. I.e., I thought you were arguing that it is probably bad pedagogy to advise an aspiring rationalist not to reason about something.)

Comment author: rhollerith_dot_com 09 May 2011 07:15:57AM *  1 point [-]

I would also like to see your explanation for when it's inappropriate to apply reason

It is inappropriate -- well, let us say it is a mistake in reasoning -- to apply reason to something whenever it is obvious that the time and mental energy are better applied to something else. My point is that I do not see the irony in Eliezer's advising his readers that some particular issue is not worth applying reason to.

(And it's not equivalent to arguing.)

I don't quite see the distinction you're trying to make.

Can I just declare my statement in parens above to be withdrawn? :)

Comment author: Eugine_Nier 09 May 2011 07:47:51AM *  7 points [-]

I would also like to see your explanation for when it's inappropriate to apply reason

It is inappropriate -- well, let us say it is a mistake in reasoning -- to apply reason to something whenever it is obvious that the time and mental energy are better applied to something else.

Interesting, I had in mind something much stronger. For example, if you attempt to apply too much reasoning to a Schelling point, you'll discover that the Schelling point's location was ultimately arbitrary and greatly weaken it in the process.

Another related example is that you shouldn't attempt to (re)create hermeneutic truths/traditions from first principles. You won't be able to create a system that will work in practice, but might falsely convince yourself that you have.

Comment author: TimFreeman 09 May 2011 08:04:48PM *  1 point [-]

...you shouldn't attempt to (re)create hermeneutic truths/traditions from first principles. You won't be able to create a system that will work in practice, but might falsely convince yourself that you have.

I didn't see any mentions of examples in Szabo's paper of traditions that have a high instrumental value but can't be derived from first principles, although he does seem to be saying that they exist. The best example that comes to mind is Jews and Moslems not eating pork, but I eat pork and my family has on both sides for multiple generations, and we haven't curled up and died yet, so the present instrumental value of that tradition is unclear to me. Do you have any examples in mind?

I can see that the wellbeing of the population that obeys the tradition would contribute to it doing well in cultural evolution, but it's not at all clear to me that it's a large enough factor that we're unlikely to come out ahead by discarding the tradition and designing a new one.

I suppose the claim that a tradition is one of these truths that one cannot usefully rederive from first principles is testable. Go form an intentional community that, say, has an 8 day week, and if they're still doing well physically and financially in a generation or two, then the 7 day week apparently wasn't such a tradition.

ETA: I suppose the organizational structure of a church is such a tradition.

Comment author: Eugine_Nier 10 May 2011 12:56:02AM *  3 points [-]

Well Szabo's main examples, which he briefly alludes to in this essay, are legal, economic and political systems. He discusses them at length in his other writings.

Comment author: rhollerith_dot_com 09 May 2011 07:49:59PM 2 points [-]

I agree with your 2 examples.

Comment author: byrnema 09 May 2011 03:18:06PM 2 points [-]

You've articulated a couple of ideas that have been lurking in the collective concern here on Less Wrong, but which, as far as I know, haven't been made definite: why some topics shouldn't have too much light directed at them, ironically, as you originally claim, in the interest of reason. It's been a very vague concern, and precisely because it hasn't been articulated, it persists more strongly than it might otherwise. I would encourage development of these points (not specifically by you, or specifically in this thread, but by anyone, wherever).

Comment author: atucker 09 May 2011 03:38:48AM 2 points [-]

Why is so much of the discussion about the "avoid moralizing" statement?

Comment author: Eliezer_Yudkowsky 09 May 2011 04:58:43AM 2 points [-]

I made the mistake of using a word for something people shouldn't do. Then they started disputing the definition of the word, even after I told them not to. I will edit to take out the evil word.

Comment author: HopeFox 09 May 2011 12:04:56AM *  8 points [-]

I think I've started to do this already for Disputing Definitions, as has my girlfriend, just from listening to me discussing that article without reading it herself. So that's a win for rationality right there.

To take an example that comes up in our household surprisingly often, I'll let the disputed definition be "steampunk". Statements of the form "X isn't really steampunk!" come up a lot on certain websites, and arguments over what does or doesn't count as steampunk can be pretty vicious. After reading "Disputing Definitions", though, I learnt how to classify those arguments as meaningless, and get to the real question: "Do I want this thing in my subculture / on my website?" I think the process by which I recognise these questions goes something like this:

1) Make the initial statement. "A hairpin made out of a clock hand isn't steampunk!"

2) Visualise, even briefly, every important element in what I've just said. Visualising a hairpin produces an image of a thing stuck through a woman's hair arrangement. Visualising a clock hand produces a curly, tapered object such as one might see on an antique clock. Visualising "steampunk" produces... no clearly defined mental image.

3) Notice that I am confused. Realise that I've just made a statement about something that I can't properly visualise, something that I don't think I've properly defined in my own brain, so how can I expect anyone else to have a proper definition at all, let alone one that agrees with mine? (Honestly, the fact that I keep writing "steampunk" in quotation marks should have been a clue already.)

4) Correct my mistake. "Hmm, now that I think about it, what I just said didn't actually mean anything. What's the point of this discussion again? Are we arguing about whether or not this picture should be on the website, or whether this person should be going to conventions, or what? If so, let's talk about that specifically. Let's not pretend that "steampunk" exists as a concrete category boundary in the phase space of fashion accessories, okay?"

Now, this process can fall down at step 2 when I, personally, have a very well-defined mental image of what a word means (such as "sound", which I will always take to mean "compression waves of the sort that a human or other animal might detect as auditory input, whether or not a listener is actually present"), but which other people might interpret differently. Here, the trick to step 2 is to imagine my listener's most obvious responses, based on my experience in discussing the topic previously (such as "But there's nobody to hear it, so by definition there's no sound!"). If I can imagine somebody saying this, without also being forced to imagine that the speaker is hopelessly misinformed, mentally deficient, or some other kind of irrational mutant, then what I'm saying must have some defect, and I should re-examine my words.

As for a training exercise, step 2 seems to be the one to train. The "rationalist taboo" technique seems pretty effective here. Discuss a topic with the student, and when they use a word that doesn't seem to mean anything, or means too many things at once, taboo it and get them to restate their point. Encourage the student to visualise everything they say, if only briefly, and explain that anything they can't visualise properly is suspect.

Alternatively, allow the student to get into a couple of disputes over definitions, let them experience firsthand how frustrating it is, then point them to this blog and show them that there's a solution. Their frustration will drive them to adopt a method of implementing the solution in their own discourse. Worked for me!

Comment author: Oscar_Cunningham 08 May 2011 09:43:40AM *  14 points [-]

My attempt at the exercise for the skill "Hold Off On Proposing Solutions"

Example: At a LessWrong meetup, someone talks about some problem they have and asks for advice; someone points out that everyone should explore the problem before proposing solutions. Successful use of the skill involves:

1) Noticing that a solution is being asked for. This is the most important sub-skill. It involves listening to everything you ever hear and sorting it into appropriate categories.

2) Come up with a witty and brilliant solution. This happens automatically.

3) Suppress the urge to explain the solution to everyone, even though it is so brilliant, and will make you look so cool, and (gasp) maybe someone else has thought of it, and you better say it before they do, otherwise it will look like it was their idea!

4) Warn other people to hold off on proposing solutions.

Exercise: Best done in a group, where the pressure to show intelligence is greatest. Read the group a list of questions. Use many different types of questions, some about matters of fact, some about opinion, and some asking for a solution. The first two types are to be answered immediately. The last type are to be met with absolute silence. Anyone found talking after a solution has been requested loses points.

Encourage people to write down any solutions they do come up with. After the exercise is finished, destroy all the written solutions, and forbid discussion of them.

Comment author: alexflint 08 May 2011 02:13:47PM 4 points [-]

Wouldn't it be better to realise, right after step (1), that one needs to avoid coming up with solutions and deliberately focus one's mind on understanding the problem? Avoiding verbalization of solutions is good, but they can still pollute your own thinking, even if not others'.

Comment author: BrandonReinhart 08 May 2011 06:13:20AM 5 points [-]

Grunching. (Responding to the exercise/challenge without reading other people's responses first.)

Letting go is important. A failure in letting go is to cling to the admission of belief in a thing which you have come not to believe, because the admission involves pain. An example of this failure: I suggest a solution to a pressing design problem. Through conversation, it becomes apparent to me that my suggested solution is unworkable or has undesirable side effects. I realize the suggestion is a failure, but defend it to protect my identity as an authority on the subject and to avoid embarrassment.

An example of success: I stop myself, admit that I have changed my mind, that the idea was in error, and then relinquish the belief.

A 5-second-level description:

  • I notice that my actual belief state and my professed belief state do not match. This is a trigger that signals that further conscious analysis is needed. What I believe (the suggestion will have undesirable side effects) and what I desire to profess (the suggestion is good) are in conflict.

  • I notice that I feel impending embarrassment or similar types of social pain. This is also a trigger. The feeling that a particular action may be painful is going to influence me to act in a way to avoid the pain. I may continue to defend a bad idea if I'm worried about pain from retreat.

  • Noticing these states triggers a feeling of caution or revulsion: I may act in a way opposed to what I believe merely to defend my ego and identity.

  • I take a moment to evaluate my internal belief state and what I desire to profess. I actively override my subconscious desire to evade pain with statements that follow from my actual internal belief. I say "I'm sorry. I appear to be wrong."

An exercise to cause these sub-5-second events:

I proposed a scenario to my wife wherein she was leading an important scientific project. She was known among her team as being an intelligent leader, and her team members looked up to her with admiration. A problem on the project was presented: without a solution the project could not move forward. I told my wife that she had had a customary flash of insight and began detailing the solution: a plan to resolve the problem and move the project forward.

Then, I told her that a young member of her team revealed new data about the problem. Her solution wouldn't work. Even worse, the young team member looked smug about the fact she had outsmarted the project lead. Then I asked "what do you do?"

My wife said she would admit her solution was wrong and then praise the young team member for finding a flaw. Then she said this was obviously the right thing to do and asked me what the point of posing the scenario was.

I'm not sure my scenario/exercise is very good. The conversation that followed the scenario was more informative for us than the scenario itself.

Comment author: Charlie_OConnor 11 May 2011 03:45:44AM 1 point [-]

I think your scenario is good. I think the group dynamic and individual personality determine when this is easy and when it is difficult.

I have been in groups where it is easy to admit mistakes and move on; and I have been in groups where admitting a mistake feels like you are no longer part of the group.

So this can be realistic. I find taking the approach of admitting mistakes often helps others follow the same path, and leads to a better group dynamic.

Comment author: Cayenne 08 May 2011 06:31:48AM *  1 point [-]

Don't cherish being right, instead cherish finding out that you're wrong. You learn when you're wrong.

Edit - please disregard this post

Comment author: wedrifid 08 May 2011 07:16:03AM *  4 points [-]

Don't cherish being right, instead cherish finding out that you're wrong. You learn when you're wrong.

I prefer to cherish being right enough that I appreciate finding out that I was wrong. It feels like more of a positive frame! (And the implicit snubbing to the typical "don't care about being right" injunction appeals.)

Comment author: Alicorn 08 May 2011 06:39:25AM 0 points [-]

And under this model, we like learning because...?

Comment author: katydee 08 May 2011 06:59:55AM *  2 points [-]

Well, it isn't being wrong that you cherish under Cayenne's model, just finding out about it so that you can correct it. To put it in other terms, being wrong is bad, but learning that you are wrong is good, because all of a sudden something gets shifted out of the "unknown unknown" category.

Comment author: Cayenne 08 May 2011 07:29:09AM *  0 points [-]

This is it exactly!

Edit - please disregard this post

Comment author: roland 08 May 2011 04:19:45AM 8 points [-]

I know that I'll probably be downvoted again, but nevertheless.

Which in practice, makes a really huge difference in how much rationalists can relax when they are around fellow rationalists. It's the difference between having to carefully tiptoe through a minefield and being free to run and dance, knowing that even if you make a mistake, it won't socially kill you.

Sorry, but I don't feel that I have this freedom on LW. And I feel people moralize here especially using the downvote function.

To give a concrete example of Eliezer himself

http://lesswrong.com/lw/1ww/undiscriminating_skepticism/

I don't believe there were explosives planted in the World Trade Center. ... I believe that all these beliefs are not only wrong but visibly insane.

I politely asked for clarification only to be not only ignored but also downvoted to -4:

Eliezer, could you explain how you arrived at the conclusion that this particular believe is visibly insane?

http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1t7r

On another comment I presented evidence to the contrary (a video interview), to be downvoted to -15: http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1r5v

So when just asking the most basic rationality question (why do you believe what you believe?) and presenting evidence that contradicts a point is downvoted, I don't feel that LW is about rationality as much as others like to believe. And I also feel that basic elements of politeness are missing and yes, I feel like I have to walk on eggshells.

Comment author: jsalvatier 08 May 2011 05:47:16PM 2 points [-]

Are there lots of other topics you feel this way about?

If it's just this topic, that doesn't seem like a very big deal to me. I have no doubt LW has at least a few topics where people have an unproductive moralizing response. However, if such toxicity is uncommon and doesn't affect important topics, then I don't think it's a very big deal (though certainly worth avoiding).

Comment author: [deleted] 08 May 2011 06:14:48PM 4 points [-]

It was made pretty clear in the other thread that the evidence linked was extremely weak.

Maybe that doesn't justify -15, but a priori I'd downvote it.

Comment author: wedrifid 08 May 2011 06:20:19PM 1 point [-]

but a priori I'd downvote it.

ceteris paribus?

Comment author: [deleted] 08 May 2011 07:25:30PM 2 points [-]

If I didn't already know it'd been downvoted into the asthenosphere, I would have downvoted it. But as it stands now, there's no reason for me to downvote it, because it's already been downvoted enough.

Comment author: wedrifid 08 May 2011 10:16:42PM 1 point [-]

If I didn't already know it'd been downvoted into the asthenosphere, I would have downvoted it. But as it stands now, there's no reason for me to downvote it, because it's already been downvoted enough.

I understood the message. But the Latin phrase was off. Ceteris paribus is the one that would fit.

Comment author: [deleted] 08 May 2011 11:54:56PM 1 point [-]

Fair enough.

Comment author: wedrifid 08 May 2011 07:29:30AM 2 points [-]

I upvoted your comment prospectively. That is, it'll be worth an upvote when you edit out the passive aggressive intro and I'm being optimistic. :)

Sorry, but I don't feel that I have this freedom on LW. And I feel people moralize here especially using the downvote function.

We do. Not all the downvoting is moralizing but a significant subset is. And not all the moralizing is undesirable to me, even though a significant subset is.

For what it is worth, believing the WTC was loaded with explosives really is insane.

Comment author: roland 12 June 2011 08:46:22PM 0 points [-]

Following a suggestion from Cayenne:

For what it is worth, believing the WTC was loaded with explosives really is insane.

wedrifid, I don't understand how you arrived at this conclusion, could you explain the reasoning behind it?

Comment author: roland 08 May 2011 07:02:34PM 0 points [-]

For what it is worth, believing the WTC was loaded with explosives really is insane.

How did you arrive at this conclusion? Did you really think it through or is it just a knee-jerk reaction?

Comment author: Mitchell_Porter 11 May 2011 03:41:33AM 1 point [-]

Years ago, I formulated the "No Bullet Hypothesis" of the Kennedy assassination: he wasn't hit by any bullets at all, his head just blew up. I had been thinking it was a peculiar form of spontaneous human combustion, perhaps involving Marilyn Monroe and Tibetan Nazis, but now I realize that his head must have been full of nano-thermite, possibly inserted during a trip to the presidential dentist.

Comment author: TheDave 12 May 2011 04:46:13AM 10 points [-]

I'm not sure that heavy sarcasm like this is constructive. While I thought it was funny, I think it encourages the audience to automatically disregard and deride the subject. In my experience, heavy sarcasm tends to both make the subject angry and reinforce the subject's (erroneous?) beliefs.

My own sarcastic responses (about political or otherwise weighty matters) typically just polarize the group I'm in, making the new in-group like me and the new out-group dislike me.

Comment author: lessdazed 11 May 2011 12:45:59PM 0 points [-]

This comment is awesome, and I'd like to think that if I believed the twin towers were destroyed by demolitions set off by the government I would still upvote it.

Comment author: WrongBot 11 May 2011 02:41:06AM 5 points [-]

  • The WTC being loaded with explosives is a much more complex explanation than the orthodox one - penalty.
  • The explosives theory involves a conspiracy - penalty.
  • The explosives theory can be and is used to score political points - penalty.
  • Explosive-theory advocates seem to prefer videos to text, which raises the time cost I have to pay to investigate it - penalty.
  • The explosives theory doesn't make any goddamn sense - huge penalty.

Comment author: bgaesop 28 July 2011 05:30:41PM 0 points [-]

The explosives theory involves a conspiracy

So does the traditional explanation.

The explosives theory can be and is used to score political points

So is the traditional explanation. War in Iraq, anyone?

Explosive-theory advocates seem to prefer videos to text, which raises the time cost I have to pay to investigate it

This is a very silly reason to reject an idea.

Comment author: Vladimir_Nesov 09 August 2011 10:45:42PM 2 points [-]

Explosive-theory advocates seem to prefer videos to text, which raises the time cost I have to pay to investigate it

This is a very silly reason to reject an idea.

It's a reason to keep the idea rejected, without giving it a chance to become accepted.

Comment author: shokwave 29 July 2011 05:56:54AM 4 points [-]

This is a very silly reason to reject an idea.

Not always. Time-consuming investigations have a disutility value - if the prior for theories in this reference class multiplied by the utility of finding this idea to be true does not overcome that disutility, you ought not investigate. That is a very serious reason to reject an idea. If you do not give some weight to time costs of investigation, I have a reductio ad absurdum here that will monopolise your free time forever.
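The trade-off shokwave describes can be sketched as a one-line expected-value test. A minimal sketch in Python with made-up numbers (the function name and all figures are illustrative, not anything from the thread):

```python
def worth_investigating(prior, value_if_true, investigation_cost):
    """Crude value-of-information test: investigate only when the prior
    probability for this reference class of theories, times the utility
    of finding the idea true, outweighs the time cost of investigating."""
    return prior * value_if_true > investigation_cost

# A fringe theory: tiny prior, modest payoff, hours of videos to watch.
print(worth_investigating(prior=1e-4, value_if_true=100.0, investigation_cost=5.0))  # False
# A plausible claim with a large payoff and a quick check.
print(worth_investigating(prior=0.3, value_if_true=100.0, investigation_cost=5.0))   # True
```

This ignores the option value of partial investigation, but it captures how a high time cost alone can keep a low-prior idea rejected.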

Comment author: bgaesop 09 August 2011 10:25:22PM 1 point [-]

That's true. But that's a reason to not investigate and not read this thread and not think about the subject at all, not a reason to reply in this thread that the idea is unlikely, much less to declare it unlikely.

If your reaction to reading about the truther idea is "the value of knowing the facts about this issue, whatever they are, is rather low, and it would be time consuming to learn them, so I don't care" that is A-OK. If your reaction is "the value of knowing the facts about this issue, whatever they are, is rather low, and it would be time consuming to learn them, therefore I am not going to update whatsoever on this issue and will ignore the evidence I know is available and yet still have a strong, high-confidence belief on it" then that seems kind of silly to me.

Does that make sense? Do you agree, or not? This is not an issue I feel very strongly about, but value of information is something I've been thinking about more recently and so I think that hearing others' opinions on it would be useful. At the very least, worth the time to read them :) Amusing link, by the way.

Comment author: shokwave 10 August 2011 12:55:46AM 0 points [-]

I agree with you that "investigating is time-consuming" is not a defense for declaring ideas you don't like to be unlikely.

Comment author: Vladimir_Nesov 09 August 2011 10:44:14PM 0 points [-]

That's true. But that's a reason to not investigate and not read this thread and not think about the subject at all, not a reason to reply in this thread that the idea is unlikely, much less to declare it unlikely.

If it's a priori deemed unlikely, deciding not to investigate will lead to it staying this way, and one could as well express this state of knowledge in posting to the thread.

Comment author: simplyeric 12 May 2011 09:01:05PM 0 points [-]

A brief continuance on the derailment of the thread:

•The explosives theory involves a conspiracy - penalty.

The 9/11 attack undisputedly did involve a conspiracy.
The question here is, by whom? (a. just by foreign terrorists, b. an "inside job").

•The explosives theory can be and is used to score political points - penalty.

What does that have to do with anything? A reduction in unemployment can be used to score political points... that certainly doesn't make it unlikely.

•The explosives theory doesn't make any goddamn sense - huge penalty.

This is subjective - penalty?

The biggest point is: the orthodox explanation of the collapse seems robust to me on its own merits. There are other questions.

Comment author: roland 12 June 2011 08:43:57PM 0 points [-]

I think your points are all valid but they were downvoted because they are against the group belief.

Comment author: Dorikka 11 May 2011 03:36:06AM 1 point [-]

Labeling these as 1-5 from top to bottom, 2 contributes to 1 (you may be double-penalizing if you're counting them distinctly), and 4 (time cost to investigate) doesn't seem like a valid reason to discount a hypothesis.

I don't know whether I disagree with your conclusion -- I haven't bothered to read arguments about the topic and probably will continue not to do so, because the expected value of such data is pretty low for me -- I just wanted to point out possible errors in your process.

Comment author: WrongBot 11 May 2011 05:14:30AM 1 point [-]

2 contributes to 1, yes, but conspiracy hypotheses are flawed for reasons other than their complexity.

I agree with you on 4: it isn't a reason to discount the hypothesis, but it is a reason to avoid seeking further information on the topic (high opportunity cost).

On reflection, I now regret engaging on this topic. My apologies for time wasted.

Comment author: Viliam_Bur 05 September 2011 11:30:24AM 0 points [-]

On reflection, I now regret engaging on this topic. My apologies for time wasted.

Please don't. Your comment was an example that it is possible to reply politely and rationally even in a discussion on a topic that you (presumably) consider irrational. That is a nice skill to have.

Comment author: lessdazed 08 May 2011 06:05:19AM *  11 points [-]

A point about counteracting evidence: if I believe I have a weighted six sided die that yields a roll of "one" one out of every ten rolls rather than one out of every six rolls as a fair die would, a single roll yielding a "one" is evidence against my theory. In a trial in which I repeatedly roll the die, I should expect to see many rolls of "one", even though each "one" is more likely under the theory the die is fair than it is under the theory the die is weighted against rolls of "one".
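The arithmetic behind this can be made explicit. A minimal sketch in Python using the comment's own numbers (a "one" on 1 in 10 rolls if weighted, 1 in 6 if fair); the function is illustrative only:

```python
def posterior_weighted(prior_weighted, num_ones):
    """Posterior probability the die is weighted, after observing
    num_ones rolls of 'one' (updating only on those rolls)."""
    p_one_weighted = 1 / 10   # weighted die: 'one' on 1 in 10 rolls
    p_one_fair = 1 / 6        # fair die: 'one' on 1 in 6 rolls
    prior_odds = prior_weighted / (1 - prior_weighted)
    # Each 'one' multiplies the odds by the likelihood ratio 0.6,
    # i.e. is evidence *against* the weighted-die theory.
    posterior_odds = prior_odds * (p_one_weighted / p_one_fair) ** num_ones
    return posterior_odds / (1 + posterior_odds)

print(round(posterior_weighted(0.5, 1), 3))  # 0.375: one 'one' already counts against 'weighted'
```

Yet over, say, 600 rolls the weighted theory still predicts about 60 "ones"; expecting many of them is entirely compatible with each one being evidence against the theory.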

You didn't really present evidence that contradicted anything; the most this sort of testimony could be is, as you said, "evidence to the contrary", but not, as you also said, something that "contradicts". One thing to look out for is idiosyncratic word usage. Apparently, I interpret the word "contradict" to be much stronger than you do. It would be great to find out how others interpret it; there are all sorts of possibilities.

When I consider whether or not the things I am directed to are good evidence of a conspiracy behind the destruction of the World Trade Center, I discount apparent evidence indicating a conspiracy against what I would expect to see if there were no actual conspiracy.

As an analogy: if I hear a music album and find 75% of the songs are about troubled relationships or love, I don't conclude the songwriter's life is or was particularly troubled, because that's what gets sung about by people of somewhat normal background, even though much of their lives are spent sleeping, eating, standing in line, etc. Only when every song sounds like the same complaint do I conclude something is uniquely wrong with them. This is somewhat counterintuitive; one might have thought 75% love/troubled songs indicated unique problems, but it's not so.

Similarly, the conspiracy stuff surrounding the Twin Towers has been underwhelming to me. What I see is exactly what I would expect were the towers collapsed by Al-Qaeda-hijacked planes. This absolutely includes what you presented: an interview after the fact by someone saying that in the confusion he heard sounds that sounded like explosions beneath him. Seeing this evidence is like rolling a "one" or hearing a love song on an album: totally expected according to the theories that the die lands on "one" 10% of the time, that the singer is normal, or that the towers were brought down by the planes.

Comment author: roland 08 May 2011 07:08:55PM *  -1 points [-]

I concede the point about language.

Discounting evidence is dangerous, considering we are all biased; if you dismiss any evidence to the contrary, you have to answer: what evidence would be strong enough to change your mind?

But my problem is not with people discounting evidence (everyone is free to close their eyes), but with the outright downvoting of evidence that goes against one's beliefs, which is social punishment.

Comment author: lessdazed 09 May 2011 12:36:51AM *  6 points [-]

if you dismiss any evidence to the contrary you have to answer: what evidence would be strong enough to change your mind?

As a separate point, I have always argued against the validity of a certain argument against theists, that they are obligated to say what would constitute evidence sufficient to change their minds. The demand is an argument from ignorance. Nonetheless, being able to articulate what sufficiently contradictory evidence would be is a point in an arguers favor, even though the inability to do so is not fatal.

In this case, I'd say the question is somewhat ill-formed for two reasons. First, many entirely different things would be sufficient evidence to get me to change my mind, but if other things were also the case, they would no longer be sufficient. Certain statements by the CIA might be sufficient, but not if there were also other statements from the FBI.

Second, there are many sorts of mind-changing possible. The more sane conspiracy theorists simply say the official account is not credible. The others articulate theories that, even granting all of their premises, are still less likely than the official story. A related point is what it means to be wrong according to different logics. If I believe in Coca-Cola's version of Santa Claus and also believe that Kobe Bryant is left-handed, in one sense there is no "Kobe Bryant" in the same way that there is no "Santa Claus". In a more useful sense, we say "Kobe Bryant really exists, but is right-handed, and Santa Claus does not exist." This is so even though there is no one thing preventing us from saying "Santa Claus is really young, not old, tall and thin, not fat, has no beard and shaves his head, is black, and not white, and plays shooting guard for the Lakers under the alias "Kobe Bryant", and does nothing unusual on Christmas." Whether you say things I learn falsify the official story or modify it is a matter of semantics, but certain elements, like the involvement of Al-Qaeda, are more central to it than others. These elements are better established by existing evidence and would take correspondingly more evidence to dislodge.

So the answer to "what evidence would be strong enough to change your mind?" varies a lot depending on exactly what is being asked.

But my problem is not with people discounting evidence (everyone is free to close their eyes), but with the outright downvoting of evidence that goes against one's beliefs, which is social punishment.

I think it is notable and important that the different but similar things you said got different responses. One was downvoted unto automatic hiding (the threshold is set to hide at -3 or less (more negative) by default). One was downvoted much more. We can speculate as to why, but it's important to acknowledge different community responses to different behavior (I won't prejudge it by saying "different going against social beliefs").

Onto speculation: one problem with the video as evidence for explosions was a certain kind of jumping to conclusions. The guy said he heard explosions, but this is skipping a step. I could just as well say I heard people in a box, when I had actually heard sound waves emitted by a speaker attached to a computer. The guy's insistence that explosions were causing the sound is very strange, even granted that he had heard explosions before and the sounds he heard may have sounded exactly like those. Likewise for his claim they were coming from beneath him, considering what was going on.

Similarly, your assumption about the reason for your downvotes is certainly skipping steps. Most noticeable is how you don't distinguish what you are being socially punished for among your several downvoted posts, even though the responses to them were so different.

It's not so simple as that you were "go[ing] against their beliefs". Not everyone uses the voting function identically, but assuming many others use it as I do, I can offer an analysis. I use it to push things to where I think they should be, rather than as an expression that I was glad I read a post (in hopes others will do the same, such that votes reflect what individuals were glad to have read; I believe something like this was the intent of the system's creators). I see -4 and -15 as not inappropriate final marks for your posts, and so didn't weigh in on them through the voting mechanism.

The problem with your first post was that it unfairly pushed the work of argument onto Eliezer. This is the same problem as with the poll sent out by the fundamentalists to philosophers a few months ago (I couldn't find it, but it included questions such as "Do you agree: life begins at conception?" and "Do you agree: humans are unique and unlike the other animals?"). The problem with that kind of question is that the work and number of words needed to adequately disentangle and answer it exceed those required to ask it. Your question also didn't start from anywhere; you would have gotten a better response if you had said you thought the beliefs either actually right or wrong, but not insane.

The tl;dr is that it was a passive-aggressive question. A small sin, for which it gets a -4; implicitly, the one voicing it disagrees with the communal norm, though how important that factor is, I can't know.

The video evidence was a larger sin, as it was basically a waste of time to listen to it. First, the guy emphasized that he certainly heard explosions beneath him, as if disbelieving that would mean thinking him a liar. Like I said above, this is the same thing ghost observers do: I don't necessarily disbelieve you heard what you heard and saw what you saw, I just am unsure about the original cause of that noise, especially considering how humans hear what they hear based on what they are familiar with hearing and expecting to hear (the multiple-drafts model of cognition).

What's more, when the advocate of a position has an opportunity to direct someone to evidence supporting his or her position and must elect to give them one piece of evidence in an attempt to spread the belief, I expect them to go with their best argument, which in turn ought to sound pretty impressive, as even incorrect positions often have one compelling argument in their favor.

If I had come across the video you showed as the first video I saw in the course of randomly watching accounts of 9/11 survivors (if a random sample of survivors were filmed and archived), it would perhaps be somewhat suspicious. As a video cherry-picked by someone trying to justify skepticism, it's catastrophically weak, shockingly so actually. I expect cherry-picked evidence in favor of any conspiracy to at least induce a physiological response, e.g. OMG Bush has reptilian eyes, he is a reptile, he is a lizard person, oh wait, that's stupid, it's an artifact of light being shined on dozens of presidents millions of times and this video has been cherry-picked.

Comment author: [deleted] 08 May 2011 07:20:59PM *  12 points [-]

There was a time, many years ago, when I paid close attention to the arguments of the "truthers", and came to the conclusion that they were wrong. What you're doing now is bringing up the same old arguments with no obviously new evidence. I'm not going to give you my full attention, not because I want to close my eyes to the truth, but because I already looked at the evidence and already, in Bayesian terminology, updated my priors. Revisiting old evidence and arguments as if they were fresh evidence would arguably be an irrational thing for me to do, because it would be treating one piece of evidence as if it were two, updating twice on the same, rehashed, points that I've already considered.
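The double-counting point here is easy to demonstrate with a toy Bayesian update (a sketch with made-up numbers; the odds-form update rule is standard, the figures are illustrative):

```python
def bayes_update(p, likelihood_ratio):
    """One Bayesian update on the odds scale."""
    odds = p / (1 - p) * likelihood_ratio
    return odds / (1 + odds)

prior = 0.05
lr = 3.0                                # strength of one piece of evidence
once = bayes_update(prior, lr)          # correct posterior
twice = bayes_update(once, lr)          # re-updating on the same rehashed evidence
print(round(once, 3), round(twice, 3))  # 0.136 0.321: double-counting overshoots
```

Revisiting old arguments as if they were fresh is the `twice` case: the same evidence gets counted again and the posterior drifts without any new information arriving.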

I did not downvote you, because I have a soft spot for that sort of thing, but if other people have already, long ago, considered the best arguments and evidence, then at this point you really are wasting their time. It's not that they're rejecting evidence, I suspect, but that they're rejecting having their time taken up with old evidence that they've already taken into account.

Comment author: Cayenne 08 May 2011 06:04:15AM *  4 points [-]

I know that I'll probably be downvoted again, but nevertheless.

This is precisely the wrong way to start off a post like this, a very passive-aggressive tone.

Sorry, but I don't feel that I have this freedom on LW. And I feel people moralize here especially using the downvote function.

Are you certain that it isn't simply the tone of your posts?

So when just asking the most basic rationality question (why do you believe what you believe) and presenting evidence that contradicts a point is downvoted I don't feel that LW is about rationality as much as others like to believe. And I also feel that basic elements of politeness are missing and yes, I feel like I have to walk on eggs.

Also bitterness. I think that you would benefit a lot by rephrasing your questions in a less confrontational manner.

Eliezer, could you explain how you arrived at the conclusion that this particular belief is visibly insane?

could have become

Eliezer, I don't understand how you arrived at this conclusion, could you explain the reasoning behind it?

Soften up your posts.

I never downvote, as I think it's counterproductive. Others don't agree, but that is their right. Taking it personally is not the right approach.

Edit - please disregard this post

Comment author: roland 12 June 2011 08:40:36PM *  0 points [-]

Eliezer, could you explain how you arrived at the conclusion that this particular belief is visibly insane?

could have become

Eliezer, I don't understand how you arrived at this conclusion, could you explain the reasoning behind it?

Done. And done.

Edit - please disregard this post

Sorry, I can't unread it.

Comment author: roland 08 May 2011 07:12:32PM 1 point [-]

I would welcome factual criticisms of my posts instead of just attacking the "tone" you read in them.

Right, the posts could be softened up, but isn't it funny that you don't direct the same criticism to the ones who called a certain point of view insane? How confrontational is that?

Comment author: thomblake 09 May 2011 05:24:12PM 5 points [-]

I would welcome factual criticisms of my posts instead of just attacking the "tone" you read in them.

Characterizing helpful criticism as "attacking" is also not good.

Comment author: Cayenne 08 May 2011 07:52:23PM *  4 points [-]

I'm limited in my scope, I'm not going to follow links and criticize every single post. I happened to be reading yours, and thought that I might be able to help you with tone... others are probably better at dealing with actual content. If you would prefer me to not try to help you, let me know and I'll focus my efforts elsewhere.

Edit - please disregard this post

Comment author: LHJablonski 08 May 2011 05:47:37AM 4 points [-]

And I feel people moralize here especially using the downvote function.

Do you think that people use the downvote to tell another user that they are a terrible person... or do they simply use it to express disagreement with a statement?

I think probably both happen, but it's tilted heavily toward the latter. Feel free to downvote if you disagree. :)

Comment author: mendel 08 May 2011 08:52:39PM 3 points [-]

The problem with the downvote is that it mixes the messages "I don't agree" with "I don't think others should see this". There is no way to say "I don't agree, but that post was worth thinking about", is there? Short of posting a comment of your own, that is.

Comment author: lessdazed 09 May 2011 02:35:31AM 2 points [-]

I think there is a positive outcome from the system as it is, at least for sufficiently optimistic people. The feature is that it should be obvious that downvoting is mixed with those and other things, which helps me not take anything personally.

Downvotes could be anything, and individuals have different criteria for voting, and as I am inclined to take things personally, this obviousness helps me. If I knew 50% of downvotes meant "I think the speaker is a bad person", every downvote might make me feel bad. As downvotes currently could mean so many things, I am able to shrug them off. They could currently mean: the speaker is bad, the comment is bad, I disagree with the comment, I expect better from this speaker, it's not fair or useful for this comment to be rated so highly compared to a similar adjacent comment that I would rather people read instead or would like to promote as the communal norm, etc.

If one has an outlook that is pessimistic in a particular way, any mixing of single messages to multiple meanings will cause one to overreact as if the worst meaning were intended by a message, and this sort of person would be most helped by ensuring each message has only one meaning.

Comment author: AdeleneDawner 08 May 2011 08:55:38PM 2 points [-]

I've been known to upvote in such cases, if the post is otherwise neutral-or-better. I like to see things here that are worth thinking about.

Comment author: Swimmer963 08 May 2011 08:54:33PM 3 points [-]

Short of posting a comment of your own, that is.

That's exactly what I do. I try to downvote comments based on how they're written (if they're rude or don't make sense, I downvote them) instead of what they're written about. (Though I may upvote comments based on agreeing with the content.)

Comment author: wedrifid 08 May 2011 11:31:12PM 0 points [-]

That's exactly what I do. I try to downvote comments based on how they're written (if they're rude or don't make sense, I downvote them) instead of what they're written about. (Though I may upvote comments based on agreeing with the content.)

That's exactly what I do too. (Although my downvote threshold is likely a tad more sensitive. :P)

Comment author: Swimmer963 09 May 2011 12:26:05AM 0 points [-]

(Although my downvote threshold is likely a tad more sensitive.)

Likely. Mine will probably become more sensitive with time.

Comment author: TimFreeman 08 May 2011 07:58:30PM 5 points [-]

Do you think that people use the downvote to tell another user that they are a terrible person... or do they simply use it to express disagreement with a statement?

There's another possibility. I downvote when I feel that reading the post was a waste of my time and believe it wasted most other people's time as well.

(This isn't a veiled statement about Roland. I do not recall voting on any of his posts before.)

Comment author: atucker 08 May 2011 03:58:26AM *  7 points [-]

"Don't be stopped by trivial inconveniences"

I used to do really stupid things and waste lots of time by always taking the path of least resistance. I'm not sure if other people have the same problem, but I might as well post.

An example of being stopped: "Hmm, I can't find any legitimate food stands around here. I guess I'll go eat at the ice cream stand right here then."

An example of overcoming: "Hmm, I can't find any legitimate food stands around here. That's weird. Lemme go to the information desk and ask where there is one."

What it feels like:

  1. You have a goal

  2. You realize that there are particular obstacles in your way

  3. You decide to take a suboptimal road as a result

What you do to prevent it:

Notice that the obstacle isn't that big of a deal, and figure out if there are ways to circumvent it. If those ways are easy, do them. Basically, move something from unreachable to reachable.

Comment author: Cayenne 08 May 2011 08:08:56AM *  1 point [-]

Yak Shaving? http://sethgodin.typepad.com/seths_blog/2005/03/dont_shave_that.html

Edit - please disregard this post

Comment author: atucker 08 May 2011 03:15:25PM 1 point [-]

I should have made it clear when a trivial inconvenience ceases to be trivial.

Basically, if you have an object level understanding of what's in your way, can think of a way to avoid the problem, and don't see any other steps involved, then you should go ahead and do it.

I personally am normed to give up waaay too easily compared to what I can do.

Comment author: Cayenne 08 May 2011 06:13:53PM *  -1 points [-]

Oh, ok. I see the difference you mean.

Edit - please disregard this post

Comment author: lessdazed 08 May 2011 03:26:59AM 9 points [-]

When people say they appreciate rationalists for their non-judgmentalism, I think they mean more than just that rationalists tend not to moralize. What they also mean is that rationalists are responsive to people's actual statements and opinions. This is separate from moralizing and in my opinion is more important, both because it precedes it in conversation and because I think people care about it more.

Being responsive to people means not (being interpreted as [inappropriately or] incorrectly) assuming what a person you are listening to thinks.

If someone says, "I think torture, such as sleep deprivation, is effective in getting information," and they support, say, both the government doing it and legalizing it, judging them to be a bad person for that and saying so won't build communal ties, but it's unlikely to be frustrating for the first person.

If, on the other hand, they don't support the legalization or morality of it despite their claim it is effective, indignation will irritate them because it will be based on false assumptions about their beliefs.

If someone says "I'm thinking of killing myself", responding with "That violates my arbitrary and ridiculous deontological system", or some variation thereof, is probably unwelcome.

On the other hand, responding with "You'll get over being depressed", when your interlocutor does not feel depressed, will frustrate them. "Being depressed is a sin" would be an even worse response, combining both misinterpretation and moralizing.

Refraining from filling in the blanks in others' arguments happens to be a good way to avoid moralizing, since in order to be indignant about something you have to believe in its existence.

Scott Adams has a good example of something that only causes offense to some people, supposedly dependent on their general penchant for smashing distinct statements together, which is one way people inappropriately fill in blanks.

The dog might eat your mom's cake if you leave it out. A dog also might eat his own turd.

When you read those two statements, do you automatically suppose I am comparing your mom's cake to a dog turd? Or do you see it as a statement that the dog doesn't care what it eats, be it a delicious cake or something awful?

In this pair, it is easy to get someone to agree with both statements and also say they think they would hypothetically feel offense towards the speaker were it not a mere test... at least I am one for one, and I imagine it would work for others. I also think the person I asked actually felt real offense.

Something like this pair would be good for teaching because the student agrees with the component statements. Offense is a result of inappropriately combining them to infer a particular intent by the speaker.

If you are offended, ask yourself: "What am I assuming about the other person (that makes me think they are innately evil)?"

Comment author: RobinZ 08 May 2011 02:31:46PM 5 points [-]

My usual method when confronted with a situation where a speaker appears to be stupid, crazy, or evil is to assume I misunderstood what they said. Usually by the time I understand what the opposite party is saying, I no longer have any problematic affective judgment.

Comment author: wedrifid 09 May 2011 12:00:29AM *  4 points [-]

My usual method when confronted with a situation where a speaker appears to be stupid, crazy, or evil is to assume I misunderstood what they said. Usually by the time I understand what the opposite party is saying, I no longer have any problematic affective judgment.

I usually find that I do understand what they are saying and it belongs in one of the neglected categories of 'bullshit' or "<OvercomingBias style nonsense/>".

Comment author: RobinZ 09 May 2011 01:38:43PM 2 points [-]

Those don't usually give me much trouble - I find that the nonsense people propose is usually self-consistent in an interesting way, much like speculative fiction. On reflection, what really gives me trouble is viewpoints I understand and disagree with all within five seconds, like [insert politics here].

Comment author: wilkox 09 May 2011 01:09:24AM 2 points [-]

"things that people say that really actionable beliefs even though they may not be clear on the difference"

This sounds interesting, but I can't parse it.

Comment author: wedrifid 09 May 2011 05:38:23AM 0 points [-]

This sounds interesting, but I can't parse it.

That's because you are using an English parser while my words were not valid English.

Comment author: RobinZ 09 May 2011 01:08:56AM 0 points [-]

"things that people say that" what? The grammar gets a little odd toward the latter half of that.

Comment author: wedrifid 09 May 2011 05:39:43AM 0 points [-]

Fixed.

Comment author: RobinZ 09 May 2011 01:15:14PM 0 points [-]

Thanks!

Comment author: endoself 09 May 2011 01:19:54AM 1 point [-]

Presumably "things that people say that aren't really actionable beliefs"; though this reply feels awkward in a discussion about misunderstanding, I'm pretty sure that was the intended phrase.

Comment author: Cayenne 08 May 2011 03:26:37AM *  1 point [-]

It might be useful to form a habit of reflexively trying to think about a problem in the mode you're not currently in, trying to switch to near mode if in far, or vice-versa. Even just a few seconds of imagining a hypothetical situation as if it were imminent and personal could provoke insight, and trying to 'step back' from problems is already a common technique.

I've used this to convince myself that a very long or unbounded life wouldn't get boring. When I try to put myself in near-mode, I simply can't imagine a day 2000 years from now when I wouldn't want to go talk to a friend one last time, or go and reread a favorite book, or cook a favorite meal, or any one of a thousand other small things. I might get bored off and on, but not permanently.

Edit - please disregard this post

Comment author: mendel 08 May 2011 03:02:57AM *  1 point [-]

Eliezer, you state in the intro that the 5-second-level is a "method of teaching rationality skills". I think it is something different.

First, the analysis phase is breaking down behaviour patterns into something conscious; this can apply to my own patterns as I figure out what I need to (or want to) teach, or to other people's patterns that I wish to emulate and instill into myself.

It breaks down "rationality" into small chunks of "behaviour" which can then be taught using some sort of conditioning - you're a bit unclear on how "teaching exercises" for this should be arrived at.

You suggest a form of self-teaching: the 5-second analysis identifies situations when I want some desired behaviour to trigger, and pre-thinks my reaction to the point where it doesn't take me more than 5 seconds to use. In effect, I am installing a memory of thoughts that I wish to have in a future situation. (I could understand this as communicating with "future me" if I like science fiction. ;) Your method of limiting this to the "5-second-level" aims to make this pre-thinking specific enough so that it actually works. With practice, this response will trigger subconsciously, and I'll have modified my behaviour.

It would be nice if that would actually help us talk about rationality more clearly (but won't we be too specific and miss the big picture?), and it would be nice if that would help us arrive at a "rationality syllabus" and a way to teach it. I'm looking forward to reports of using this technique in an educational setting: what the experiences of you and your students were in trying to implement it. Until your theory's tested in that kind of setting, it's no more than a theory, and I'm disinclined to believe your "you need to" from the first sentence in your article.

Is rationality just a behaviour, or is it more? Can we become (more) rational by changing our behaviour, and then have that changed behaviour change our mind?

Comment author: mendel 09 May 2011 10:28:35AM *  0 points [-]

Of course, these analyses and exercises would also serve beautifully as use-cases and tests if you wanted to create an AI that can pass a Turing test for being rational. ;-)

Comment author: Gabriel 08 May 2011 02:12:11AM 2 points [-]

So here is a procedure I actually developed for myself a couple of months ago. It's self-helpy (the purpose was to solve my self-esteem issues) but I think indignant moralizing uses some of the same mental machinery, so it's relevant to the task of becoming less judgemental in general.

I believed that self-esteem doesn't say anything about the actual world, so it would be a good idea to disconnect it from external feedback and permanently set it to a comfortable level. At some point I realized that this idea was too abstract and I had to be specific to actually change something. And here's roughly what it led to:

  1. Notice that I'm engaging in judgement. If the judgement is internally-directed and negative then the trigger will be anxiety. If it were positive then it would be some sort of narcissistic enthusiasm. If the judgement were directed at another person then it could be a feeling of smugness, if negative, and probably some sort of reverential admiration if positive.

  2. Realize that the emotions I'm feeling don't represent objective reality. They are a heuristic hacked together by evolution to guide my behaviour in a savannah-dwelling hunter-gatherer tribe. And I'm definitely not currently a member of such a collective.

  3. Remember that thinking abstractly about a 'sense of self-esteem' doesn't capture the way it is experienced, that thinking it should be disconnected from external stimuli isn't something that can be translated into action, and that I need something specific to target.

  4. Focus on how an algorithm feels from the inside -- that the sense of self-esteem doesn't feel like a sense of self-esteem. It feels like a feature of the world. As if everyone, including me, had an inherent, non-specific aura of awesomeness that I were able to directly perceive, though not with any of the 'standard' senses.

  5. Reflect on the silliness of that way of perceiving. Look at the world and notice the distinct lack of worthiness everywhere I turn. Tell myself, verbally, that there is no inherent awesomeness or worthiness and that therefore nothing can affect it. Don't just try to disconnect the emotions from experience, aim to outright destroy them (note: I don't claim that destroying them is actually possible).

Comment author: Psy-Kosh 07 May 2011 10:02:04PM 4 points [-]

Something I still need to work on, but which I think would be an important one (perhaps instead a general class of 5-second-skills rather than a single one) would be "remember what you know when you need it"

Example: you're potentially about to escalate an already heated political debate and make it personal. 5-second-skill: actually remembering that politics is the mind-killer, thus giving yourself a chance to pause, reconsider what you're about to do, and thus have a chance to avoid doing something stupid.

I'd also apply this notion to what you said about testability. Not so much being able to think of a quick test as much as being able to quickly remember to think about how it could be tested.

Perhaps this general category of 5-second-skills could be called "pause and think" or "pause and remember".

I.e., the critical thing about this 5-second-skill isn't so much being able to swiftly execute some other rationalist skill as remembering to use that skill at all when you actually need it.

Comment author: Cayenne 07 May 2011 10:46:14PM *  2 points [-]

How about 'flinch away from drama'?

Never argue opinions, only facts.

If you must argue an opinion, then pin it down so that it can't wriggle around. Example: if you have the opinion 'AI can/will paperclip', then try to pin down how and why it can as strictly as you can, and then take the argument from 'it can happen' to 'perhaps we can test this'. Bring it out of the clouds and into reality as quickly as possible.

If you manage to kill someone's opinion, showing that it is just wrong, then pause and mourn its passing instead of gloating. It can't hurt to apologize for winning, since feelings are so easily hurt.

Edit - please disregard this post

Comment author: Psy-Kosh 07 May 2011 10:58:21PM 0 points [-]

Hrm... That could work for the specific "remember that politics is the mindkiller" rule (Although, of course, while one can distinguish issues of preference from issues of fact... issues of opinion vs issues of fact seems more questionable. :))

Comment author: Cayenne 07 May 2011 11:10:06PM *  1 point [-]

Well, I view opinions as inherently meaningless to attempt to test. A fact can be looked up or tested, but an opinion either can't be tested yet or is worthless to test.

'The sky is blue' is testable unless you've been stuck for generations underground. 'I like pink' is worthless to test, and really worthless to argue against. 'When we can do X it will then proceed to Y' is hard to do anything about until we can actually X, but if we pin the specifics down enough then it isn't totally useless to argue about it.

Some opinions can also just be completely infeasible to test as well, due to the steps the test would need to take. (Hayek vs. Keynes, I'm looking at you.)

Edit - please disregard this post

Comment author: Psy-Kosh 11 May 2011 12:08:46AM 0 points [-]

Sorry for delayed reply. "I like pink" is an assertion of a preference, rather than an opinion about a fact. (Well, I guess it's asserting the fact that you like pink... and stuff like brain analysis may help test it. ;))

Well, yes, some are difficult to test... but then one can argue the reasoning for having them.

Oh, just to clarify, I was proposing a sort of 5-second-meta-skill of "remembering your rationalist knowledge/skills when you need them", the "remember politics is the mind killer" being an example rather than one I wanted to single out.

*blinks at the edit* erm? disregard which part/aspect of it? (ie, are you retracting a claim, or...?)

Comment author: John_Maxwell_IV 07 May 2011 08:27:25PM *  3 points [-]

I thought of a few five-second skills like this:

  • remembering that a purpose of engaging in argument is to update your map
  • realizing you should actually spend time on activities that have proven to be helpful in the past (related to this)
  • noticing when you have a problem and actually applying your creativity to solve it (similar to this)
  • recognizing a trivial inconvenience for what it is

I noticed that all of my 5-second skills (and Eliezer's also) involve doing more mental work than you're instinctively inclined to do at a key point. This makes sense if the main reason people are irrational is that they take cognitive shortcuts (see this great article; feel free to skip down to "Time for a pop quiz"). So maybe we could save some labor in identifying or at least acquiring 5-second skills if we learn to be comfortable with constant reflectivity and hard mental work.

Comment author: Cayenne 07 May 2011 08:01:08PM *  15 points [-]

I think that the big skill here is not being offended. If someone can say something and control your emotions, literally make you feel something you had no intention to feel beforehand, then perhaps it's time to start figuring out why you're allowing people to do this to you.

At a basic level anything someone can say to you is either true or false. If it's true then it's something you should probably consider and accept. If it's false then it's false and you can safely ignore/gently correct/mock the person saying it to you. In any case there really isn't any reason to be offended and especially there is no reason to allow the other person to provoke you to anger or acting without thought.

This isn't the same as never being angry! This is simply about keeping control for yourself over when and why you get angry or offended, rather than allowing the world to determine that for you.

Edit - please disregard this post

Comment author: wilkox 08 May 2011 12:37:59PM 5 points [-]

In any case there really isn't any reason to be offended and especially there is no reason to allow the other person to provoke you to anger or acting without thought.

It seems really, really difficult to convey to people who don't understand it already that becoming offended is a choice, and it's possible to not allow someone to control you in that way. Maybe "offendibility" is linked to a fundamental personality trait.

Comment author: loqi 10 May 2011 07:01:37PM 5 points [-]

What constitutes a "choice" in this context is pretty subjective. It may be less confusing to tell someone they could have a choice instead of asserting that they do have a choice. The latter connotes a conscious decision gone awry, and in doing so contradicts the subject's experience that no decision-making was involved.

Comment author: wilkox 10 May 2011 11:28:45PM *  1 point [-]

Good point. Reading my comment again, it seems obvious that I committed the typical mind fallacy in assuming that it really is a choice for most people.

Comment author: erikerikson 20 December 2012 11:34:11PM 0 points [-]

I'd take this differently.

I would at least hope that you are claiming that there is, in fact, a choice, whether the subjective experience of the moment provides indication of the choice or not.

Maybe stated differently: you could be claiming that there is the possibility of choice for all people, whether or not a person is aware of or capable of taking advantage of that fact. That a person can alter his or her self in order to provide his or her self with the opportunity to choose in such situations.

Loqi's feedback seems to me to be suggesting that individuals who do not have a belief that they have such a "possibility of choice" could have a more positive phenomenological experience of your assertion and as a result be more likely to integrate the belief into their own belief set and [presumably] gain advantage by encountering it.

That is me asserting that Loqi does not appear to be rejecting your assertion but only suggesting a manner by which it can be improved.

Comment author: erikerikson 20 December 2012 11:50:53PM 0 points [-]

Of course, Loqi's suggestion could contingently be less optimal than the less easy to accept presentation.

While the approach you suggest could provide a more subjectively negative experience, the cognitive dissonance could cause the utterance to gain more attention within the brain as a more aberrant occurrence in its stimuli, and as a result be worthy of further analysis and consideration.

I am generally in favor of delivering notions I believe to be helpful in a manner which can/will be accepted. In some cases however, others are able and more likely to accept a less than pleasant delivery mechanism. This is contingent upon the audience, of course, as well as the level of knowledge you have about your audience. In the absence of such knowledge, the more gentle approach seems advisable.

Comment author: Cayenne 08 May 2011 05:35:39PM *  3 points [-]

It could be. It seems not just difficult but actually against most culture on the planet. Consider that crimes of passion, like killing someone when you find them sleeping around on you, often get a lower sentence than a murder 'in cold blood'. If someone says 'he made me angry' we know exactly what that person means. Responding to a word with a bullet is a very common tactic, even in a joking situation; I've had things thrown at me for puns!

It does seem like a learn-able skill even so. I did not have this skill when I was a child, but I do have it now. The point in my life when I learned it seems to roughly correspond to when I was first trained and working as technical support. I don't know if there's a correlation there.

In any case, merely being aware that this is a skill may help a few people on this forum to learn it, and I can see only benefit in trying. It is possible to not control anger but instead never even feel it in the first place, without effort or willpower.

Edit - please disregard this post

Comment author: bbleeker 09 May 2011 07:41:14PM 2 points [-]

I imagine you wouldn't have lasted long in tech support if you hadn't learned that skill. :-)

Comment author: mendel 08 May 2011 09:27:25PM 0 points [-]

And yet, not to feel an emotion in the first place may obscure you to yourself - it's a two-sided coin. To opt to not know what you're feeling when I struggle to find out seems strange to me.

Comment author: Cayenne 08 May 2011 10:04:27PM *  2 points [-]

I think you're misunderstanding what I said. I'm not obscuring my feelings from myself. I'm just aware of the moment when I choose what to feel, and I actively choose.

I'm not advocating never getting angry, just not doing it when it's likely to impair your ability to communicate or function. If you choose to be offended, that's a valid choice... but it should also be an active choice, not just the default.

I find it fairly easy to be frustrated without being angry at someone. It is, after all, my fault for assuming that someone is able to understand what I'm trying to argue, so there's no point in being angry at them for my assumption. They might have a particularly virulent meme that won't let them understand... should I get mad at them for a parasite? It seems pointless.

Edit - please disregard this post

Comment author: mendel 09 May 2011 12:08:16AM 0 points [-]

Well, it seems I misunderstand your statement, "It is possible to not control anger but instead never even feel it in the first place, without effort or willpower."

I know it is possible to experience anger, but control it and not act angry - there is a difference between having the feeling and acting on it. I know it is also possible to not feel anger, or to only feel anger later, when distanced from the situation. I'm ok with being aware of the feeling and not acting on it, but to get to the point where you don't feel it is where I'm starting to doubt whether it's really a net benefit.

And yes, I do understand that with different understandings of and assumptions about other people, stuff that would otherwise have bothered me (or someone else) is no longer a source of anger. You changed your outlook and understanding of that type of situation so that your emotion is frustration and not anger. If that's what you meant originally, I understand now.

Comment author: Cayenne 10 May 2011 11:46:45AM *  0 points [-]

Mostly I don't even feel frustration, but instead sadness. I'd like to be able to help, but sometimes the best I can do is just be patient and try to explain clearly, and always immediately abandon my arguments if I find that I'm the one with the error.

Edit - please disregard this post

Comment author: wedrifid 07 May 2011 08:09:32PM *  1 point [-]

I (really) like what you're saying here and it is something I often recommend (where appropriate) to people that have no interest in rationality whatsoever.

Well, except for drawing a line at 'true/false' with respect to when it can be wise to take actions to counter the statements. Truth is only one of the relevant factors. This doesn't detract at all from your core point.

I extend this philosophy to evaluating the socially relevant interactions of others. When things become a public scene that for some reason I care about, I do not automatically consider the offense, indignation or anger of the recipient to be the responsibility of the person who provided the stimulus.

Comment author: Cayenne 07 May 2011 08:19:22PM *  0 points [-]

The true/false isn't the only line, but I feel that it's the most important. If something someone says to or about you is true, then no matter what you should own it in some way. Acknowledge that they're right, try to internalize it, try to change it, but never never just ignore it! (edit: If you're getting mad when someone says something truthful about you, then this should raise other warning flags as well! Examine the issue carefully to figure out what's really happening here.)

If the thing they say is false, then don't get mad first! Think it through carefully, and then do the minimum you can to deal with it. The most important thing is to not obsess over it afterward, because if you're doing that you're handing a piece of your life away for a very low or even negative return. Laugh about it, ignore it, get over it, but don't let it sit and fester in your mind.

Edit - please disregard this post

Comment author: wedrifid 08 May 2011 02:16:41AM 3 points [-]

If you're getting mad when someone says something truthful about you, then this should raise other warning flags as well! Examine the issue carefully to figure out what's really happening here.

When it comes to making the most beneficial responses, feeling anger is almost never useful when you have a sufficient foundation in the mechanisms of social competition, regardless of truth. It tends to show weakness: the vulnerability to provocation that you are speaking of gives an opportunity for one-upmanship that social rivals will instinctively home in on.

In terms of the benefits and necessity of making a response it is the connotations that are important. Technical truth is secondary.

Comment author: Cayenne 08 May 2011 03:16:58AM *  2 points [-]

Very true.

I didn't mean to suggest that the truth/falsehood line was as useful socially as I believe it is internally. The social reaction you may decide on is mostly independent from truth.

Internally, it's important to recognize that truth, since it is vital feedback that can tell you when you may need to change.

Edit - please disregard this post

Comment author: wedrifid 08 May 2011 03:19:24AM *  2 points [-]

Internally, it's important to recognize that truth, since it is vital feedback that can tell you when you may need to change.

And, when false, when you may need to change what you do such that others don't get that impression (or don't think they can get away with making the public claim even though they know it is false).

Comment author: scientism 07 May 2011 03:40:35PM 4 points [-]

One of the things I think virtue ethics gets right is that if you think, say, lying is wrong, then you should have a visceral reaction to liars. You shouldn't like liars. I don't think this is irrational at all (the goal isn't to be Mr. Spock). Having a visceral reaction to liars is part of how someone who thinks lying is wrong embodies that principle, as much as not lying is. If somebody claims to follow a moral principle but fails to have a visceral reaction to those who break it, that's an important cue that something is wrong. That goes doubly for yourself. Purposefully breaking that connection by avoiding becoming indignant seems like throwing away important feedback.

Comment author: gjm 08 May 2011 09:35:52AM 2 points [-]

  1. Why do you think merely having a visceral reaction to lying (one's own or others'; actual or hypothetical) isn't enough?

  2. Conditional on having that visceral reaction, what is the advantage of then becoming indignant? Or do you think that becoming indignant is identical to that visceral reaction?

Comment author: Gabriel 07 May 2011 11:57:40PM 5 points [-]

Purposefully breaking that connection by avoiding becoming indignant seems like throwing away important feedback.

Feedback arrives in the form of a split-second impression of "this is wrong". However long you spend being indignant after that, it won't provide you with any new ethical insight. Indignation isn't about ethics, it's about verbally crushing your enemy while signalling virtue to onlookers.

Comment author: cousin_it 07 May 2011 05:04:54PM *  1 point [-]

Why must my personal understanding of right and wrong also apply to other people? What if I think something's wrong for me to do, but I don't care if other people do it (e.g. procrastination)?

Comment author: thomblake 09 May 2011 05:21:07PM 0 points [-]

Why must my personal understanding of right and wrong also apply to other people? What if I think something's wrong for me to do, but I don't care if other people do it (e.g. procrastination)?

Because you care about other people, and other people are relevantly similar to yourself. This applies to both instrumentally relevant details, like the character of a person you're going to hire, and more personal concern, like whether your brother is living a good life.

Comment author: Peterdjones 07 May 2011 05:27:46PM -1 points [-]

If it's purely personal, why call it moral?

Comment author: thomblake 09 May 2011 05:18:35PM 0 points [-]

If it's purely personal, why call it moral?

I'm confused. With Sidgwick, I define 'ethics' as 'the study of what one has most reason to do or to want', and take 'moral' to in most cases be equivalent to 'ethical'.

Then, 'morality' is indeed purely personal, but being very similar creatures we can build off each others' moral successes.

Comment author: Cayenne 07 May 2011 09:09:46PM *  0 points [-]

I tend to think of 'the things I have to do to be me' as moral, and 'the things I have to do to fit into society' to be ethics. In a lot of cases when someone is calling someone else immoral, it seems to me that they're saying that that person has done something that they couldn't do and remain who they are.

Edit - please disregard this post

Comment author: wedrifid 07 May 2011 06:47:59PM 0 points [-]

If it's purely personal, why call it moral?

Why not? (A somewhat quirky twist that seems to crop up is that of having a powerful moral intuition that people's morals should be personal. It can sometimes get contradictory but morals are like that.)

Comment author: Peterdjones 07 May 2011 06:50:10PM 0 points [-]

Usual reasons... for one thing, there are other ways of describing it, such as "personal code". For another, it renders morality pretty meaningless if someone can say "murder's OK for me".

Comment author: eugman 08 May 2011 06:52:17PM 0 points [-]

I think it makes sense in the negative sense, as things that aren't OK. What's wrong with holding oneself to a higher standard? What's wrong with saying "It'd be immoral for ME to murder?"

Comment author: a363 08 May 2011 12:04:27PM -1 points [-]

What about "war is OK for me"?

It really gets to me that when a bunch of people gather together under some banner then it suddenly becomes moral for them to do lots of things that would never be allowed if they were acting independently: the difference between war and murder...

The only morality I want is the kind where people stop doing terrible things and then saying "they were following orders". Personal responsibility is the ONLY kind of responsibility.

Comment author: wedrifid 07 May 2011 07:28:11PM 0 points [-]

for one thing, there are other ways of describing it, such as "personal code". For another, it renders morality pretty meaningless if someone can say "murder's OK for me".

And yet if the same neurological hardware is being engaged in order to make social moves of a similar form, 'morality' still seems appropriate. Especially since morals like "people should not force their view of right and wrong on others" legitimize instances of moralizing even when the moralizer tends to take other actions which aren't consistent with the ideal. Because, as I tend to say, morals are like that.

Comment author: shokwave 07 May 2011 05:07:40PM 0 points [-]

Is there some law of nature saying my personal understanding of right and wrong should also apply to other people?

Principles derivable from game theory, maybe.

Comment author: jimrandomh 07 May 2011 03:09:29PM *  50 points [-]

IAWYC, and introspective access to what my mind was doing on this timescale was one of the bigger benefits I got out of meditation. (Note: Probably not one of the types of meditation you've read about). However, I don't think you've correctly identified what went wrong in the example with red. Consider this analogous conversation:

What's a Slider? It's a Widget.
What's a Widget? It's a Drawable.
What's a Drawable? It's an Object.

In this example, as with the red/color example, the first question and answer was useful and relevant (albeit incomplete), while the next two were useless. The lesson you seem to have drawn from this is that looking down (subclassward) is good, and looking up (superclassward) is bad. The lesson I draw from this is that relevance falls off rapidly with distance, and that each successive explanation should be of a different type. It is better to look a short distance in each direction rather than to look far in any one direction. Compare:

X is a color. This object is X. (One step up, one step down)
X is a color. A color is a quality that things have. (Two steps up)
This object is X. That object is also X. (Two steps down)

I would expect the first of these three explanations to succeed, and the other two to fail miserably.
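The Slider/Widget/Drawable hierarchy can be written out directly; here's a minimal Python sketch (only the class names come from the comment above, the bodies are invented for illustration):

```python
class Drawable:
    """Anything that can be rendered to the screen."""
    def draw(self):
        raise NotImplementedError


class Widget(Drawable):
    """A Drawable that occupies a rectangle on screen."""
    def __init__(self, width, height):
        self.width, self.height = width, height


class Slider(Widget):
    """A Widget that lets the user pick a value by dragging."""
    def __init__(self, width, height, value=0.0):
        super().__init__(width, height)
        self.value = value


# "One step up, one step down": explain Slider by naming its immediate
# superclass (Widget) and pointing at a concrete instance, rather than
# walking two or more steps in either direction.
s = Slider(width=200, height=20, value=0.5)
assert isinstance(s, Widget)    # one step up: useful, relevant
assert isinstance(s, Drawable)  # two steps up: true, but less relevant
```

The `isinstance` checks mirror the abstraction ladder: each additional step up the chain is true but conveys less about what a Slider actually is.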

Comment author: TrE 08 May 2011 07:08:36PM *  11 points [-]

Also, it is very important to give counter-examples: 'This crow over there belongs to the bird category. But the plane in the sky and the butterfly over there do not.' Or, to better fit the 'red' example: 'That stop sign and that traffic light are red. But this other traffic sign (can't think of an example) isn't.'

And as well, this could be done with categories. 'Red is a color. Red is not a sound.'

I guess this one has something to do with confirmation bias, as cwillu suggested.

Comment author: Eliezer_Yudkowsky 07 May 2011 07:29:41PM 13 points [-]

"One step up and one step down" sounds like a valuable heuristic; it's what I actually did in the post, in fact. Upvoted.

Comment author: Eliezer_Yudkowsky 09 September 2011 12:21:59AM 14 points [-]

A few months later, I've been teaching Anna and Luke and Will Ryan and others this rule as the "concrete-abstract pattern". Give a specific example with enough detail that the listener can visualize it as an image rather than as a proposition, and then describe it on the level of abstraction that explains what made it relevant. I.e., start with an application of Bayes's Theorem, then show the abstract equation that circumscribes what is or isn't an example of Bayes's Theorem.
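The concrete-abstract pattern can be demonstrated with Bayes's Theorem itself: a concrete application first, then the abstract equation it instantiates. A minimal Python sketch (the sensitivity, false-positive rate, and base rate are made-up numbers for illustration):

```python
# Concrete application: a test that is 90% sensitive with a 5%
# false-positive rate, for a condition with a 1% base rate.
# What does a positive result actually tell you?
p_disease = 0.01             # P(D), base rate
p_pos_given_disease = 0.90   # P(+|D), sensitivity
p_pos_given_healthy = 0.05   # P(+|~D), false-positive rate

# Total probability of a positive result, P(+):
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Abstract equation that circumscribes the example:
#   P(D|+) = P(+|D) * P(D) / P(+)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # 0.154
```

The concrete numbers give the listener an image (a positive test, a posterior of about 15%); the equation then explains what made the example an instance of Bayes's Theorem rather than a one-off fact.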

Comment author: Anny1 07 May 2011 12:28:09PM *  1 point [-]

What would be an exercise which develops that habit?

Speaking from personal experience, I would propose that moralizing is mostly caused by anger about the presumed stupidity/irrationality behind the statement we want to moralize about. The feeling of "Oh no they didn't just say that, how could they!". What I try to do against it is simply to let that anger pass by following simple rules like taking a breath, counting to 10 or whatever works. When the anger is gone, usually the need for moralizing is as well.

Also I feel there is a lot of discussion about Eliezer moralizing in his posts that can be broken down to the distinction between moralizing as an automated response and moralizing after careful deliberation (as in blog posts). I wouldn't say that the latter is wrong per se.

In daily life I often meet people that I feel are so far off, so tangled up in their rationalizations, that even after my anger about their comments has passed I decide that a discussion would be a waste of everybody's time. In this case I use a sarcastic remark to at least get back at them. Maybe if the person in question gets a similar reaction from enough people, they will reconsider. It can also be for the benefit of bystanders.

So I think this would be the steps that work for me:

1) Recognize anger

2) Wait it out

3) Ask some questions to clarify/falsify your understanding of the questionable statement

4) Think about good, precise counterarguments and/or find the errors that I think the other person made.

5) Decide whether or not your arguing will probably be productive and then

a) Do it (in a civilized manner of course) or

b) Make a sarcastic comment that pinpoints the irrationality you see or simply say that you don't agree and leave.

I realize that this can't really be done in 5 seconds, but I think I've gotten far enough myself that I can do the first two steps in a couple of seconds, and keeping option 5b) in mind helps me calm down.

Comment author: Eliezer_Yudkowsky 07 May 2011 07:31:17PM 2 points [-]

The goal invoked in the post, though, is to avoid moralizing in conversations between rationalists so that they don't feel like they're walking through a minefield. Having the anger and suppressing it doesn't work for that. The person next to you is still walking the minefield. They're just not getting feedback.

Comment author: Anny1 08 May 2011 10:07:35AM *  0 points [-]

From some of the above posts I get the impression that at least in a community of aspiring rationalists, there is still some anger around. I think it is one of the hardest things to get rid of.

There is a point about my personal technique I wanted to make that I feel I didn't really convey so far... I find it hard to explain though. Thinking about something like option 5b) somehow helps me to combat the feeling of helplessness that is often mixed in with the anger. Somehow, in saying to myself "you can act on that later, if you still feel it is necessary", I take the edge off. Can someone relate to that and maybe help in clarifying?

Also there is a difference between suppressing anger and what I am trying to describe that feels totally clear internally but is also hard to explain.

The point about the missing feedback is a very good one and I'm wondering if and how and how often rationalists give each other feedback about how the discussion makes them feel.

Comment author: Alicorn 08 May 2011 10:33:57AM 1 point [-]

Somehow in saying myself "you can act on that later, if you still feel it is necessary" I take the edge off. Can someone relate to that and maybe help in clarifying?

I think I may know what you're talking about. I find it immensely helpful to tell myself (when it's true) "there is no hurry", sometimes repeatedly. When there's no hurry, I can double-check. When there's no hurry, I can ask someone for help. When there's no hurry, there's no reason to panic. When there's no hurry, I can put it down, come back to it later whenever I feel like it, and see if anything's changed about how I want to react to it. So it's more general than just anger, but perhaps the same class of thing.

Comment author: Anny1 09 May 2011 06:03:36PM 0 points [-]

Yes that's what I mean, thank you.

Comment author: twanvl 07 May 2011 12:08:58PM 4 points [-]

Answering "a color" to the question "what is red?" is not irrational or wrong in any way. In fact, it is the answer that is usually expected. Often when people ask "what is X?" they do in fact mean "to what category does X belong?". I think this is especially true when teaching. A teacher will be happy with the answer "red is a color".

Comment author: DSimon 07 May 2011 08:31:56PM *  4 points [-]

Agreed, though I think this depends a lot on who you're talking to and what they already know. Typically if someone I know asks me something like "What is red?" they're trying to start some kind of philosophical conversation, and in that case "It's a color" is the proper response (because it lets them move on to their next Socratic question, and eventually to the point they're making).

On the other hand, if we were talking to color-blind aliens, answering "It's what is in common between light reflected by the stop sign there and the fire truck yonder, but not the light reflected by this mailbox here" is a lot more useful starting response than "it's a color". If I answered "It's a color", and the alien is fairly smart and thinks like a human, the conversation would probably then go:

Alien: So what's a color then?

Me: Well a color is a particular kind of light...

Alien: Wait, hold on. Light, like the stuff that bounces off objects and that I use to see with?

Me: Yep, that's it.

Alien: What distinguishes light of one color from that of another?

Me: The wavelength of the light wave.

Alien: What wavelength is red light?

Me: Off the top of my head, I don't know. If you have a way to measure the wavelength of light, though, then that stop sign there and the fire truck yonder are both red to my eyes, so the light they're reflecting is in that wavelength.

Alien: Gotcha.

... If I went straight to the examples, I'd have ended up at pretty much the same point, but a lot quicker.

Comment author: mendel 08 May 2011 02:28:30AM 3 points [-]

Assuming the person who asks the question wants to learn something and not hold a Socratic argument, what they need is context. They need context to anchor the new information (the word "red", in this case) to what they already know. You can give this context in the abstract or the specific (the "one step up, one step down" method that jimrandomh describes above achieves this), but it doesn't really matter which. The more different ways you can find, the better the other person will understand, and the richer a concept they will take away from your conversation. (I'm obviously bad at doing this.)

An example is language learning: a toddler doesn't learn language by getting words explained, they learn language by hearing sounds used in certain contexts and recalling the association where appropriate.

I suspect that the habit of answering questions badly is taught in school, where an answer is often meant not to transfer knowledge but to display it. If asked "What is a car?", answering that it has wheels and an engine will get you a better grade than stating that your mom drives a Ford, even though talking about your experience with your mom's car would have helped a car-less friend better understand what it means to have one.

So what we need to learn (and what good teachers have learned) is to take questions and, in a subconscious reaction, translate them into a realisation of what the asking person needs to know: what knowledge they are missing that made them ask the question, and then to provide it. And that depends on context as well: the question "what is red" could be properly answered by explaining when the DHS used to issue red alerts (they don't color-code any more), by explaining the relation of a traffic light to traffic, or by explaining what red means in Lüscher's color psychology or in Chinese chromotherapy. If I see a person nicknamed Red enter at the far side of the room wearing a red sweater, and I shudder and remark "I don't like red", and someone then asks me "what do you mean, red?", I ought to simply say that I meant the color; any talk of stop signs or fire engines would be very strange. To be specific, I would answer "that sweater".

To wrap this overlong post up, I don't think there's an innate superiority of the specific over the abstract. What I'll employ depends on what the person I'm explaining stuff to already understands. A 5-second "exercise" designed to emphasise the specific over the abstract can help me overcome a mental bias of not considering specifics in my explanations (possibly instilled by the education system). It widens the pool that I can draw my answers from, and that makes me a potentially better answerer.

Comment author: cousin_it 07 May 2011 11:55:49AM *  27 points [-]

"Be specific" is a nice flinch, I've always had it and it helps a lot. "Don't moralize" is a flinch I learned from experience and it also helps. Here are some other nice flinches I have:

  1. "Don't wait." Waiting for something always takes more time than I thought it would, so whenever I notice myself waiting, I switch to doing something useful in the meanwhile and push the waiting task into the background. Installing the habit took a little bit of effort, but by now it's automatic.

  2. "Don't hesitate." With some effort I got a working version of this flinch for tasks like programming, drawing or physical exercise. If something looks like it would make a good code fix or a good sketch, do it immediately. Would be nice to have this behavior for all other tasks too, but the change would take a lot of effort and I'm hesitating about it (ahem).

  3. "Don't take on debt." Anything that looks even vaguely similar to debt, I instinctively run away from it. Had this flinch since as far as I can remember. In fact I don't remember ever owing >100$ to anyone. So far it's served me well.

Comment author: MartinB 13 May 2011 12:47:20PM 4 points [-]

"Don't wait."

A nice hack from GTD is to keep a 'waiting-for' list. I use that for orders, reactions to inquiries, anything where someone has to get back to me. Put it on the list and forget about it.

Extra points if you do not check the arrival time of your internet purchases at all during the first week of waiting.

Comment author: Swimmer963 08 May 2011 01:03:18PM 2 points [-]

"Don't take on debt." Anything that looks even vaguely similar to debt, I instinctively run away from it. Had this flinch since as far as I can remember. In fact I don't remember ever owing >100$ to anyone. So far it's served me well.

Same. And it has also served me well, although maybe not solely because of that preference–I was in a better financial situation to start with than many university students, and I'm a workaholic with a part-time job that I enjoy, and I also enjoy living frugally and don't consider it to diminish my quality of life the way some people do.

Comment author: gjm 08 May 2011 09:49:11AM 6 points [-]

I have the same debt-flinch, and the same feeling about how well it works, but with one qualification: I was persuaded to treat mortgage debt differently (though I've always been very conservative about how much I'd take on) and that seems to have served me very well too.

This isn't meant as advice about mortgages: housing markets vary both spatially and temporally. More as a general point: it's probably difficult to make very sophisticated flinch-triggers, which means that even good flinching habits are likely to have exceptions from time to time, and sometimes they might be big ones.

Comment author: taryneast 09 May 2011 08:46:05PM 8 points [-]

Agreed. Kiyosaki's "Rich Dad, Poor Dad" has lots of advice about the difference between "good debt" and "bad debt".

AFAI recall it boiled down to "only borrow money for assets, not liabilities"

i.e. good debt is borrowing for things that will continue to make you more money (including your appreciating house or your business), and bad debt is for things like holidays or house redecorating projects: things that simply take cash out of your hand.

This has worked pretty well for me so far too.

Comment author: rhollerith_dot_com 26 May 2011 03:45:03PM 2 points [-]

AFAI recall it boiled down to "only borrow money for assets, not liabilities"

Only borrow money for assets, not expenses.

Comment author: taryneast 26 May 2011 09:39:03PM 0 points [-]

The book defines a liability as "something that takes money from your pocket" - so the two can be considered roughly equivalent.

Comment author: rhollerith_dot_com 26 May 2011 10:51:22PM *  3 points [-]

OK, but that's not the standard definition of a liability used by accountants and such.

Comment author: taryneast 28 May 2011 08:53:26AM *  1 point [-]

Yes, that is discussed in the book. He makes a big deal about the difference. In fact he discusses the seeming inconsistency of accountants putting large items into the "assets" column that do nothing but depreciate in value...

I'd argue that the main point of Rich dad, poor dad can be summarised as:

1) assets put money into your pocket, liabilities take money out of it

2) you gain wealth by adding to your assets instead of your liabilities

It's roughly equivalent to the dietary advice of "you lose weight by making sure there are more calories being spent than eaten"

Comment author: rhollerith_dot_com 28 May 2011 09:58:04AM *  6 points [-]

Well, it makes me sad to see a very standardized and crisp term like "liability" used in such a confusing and nonstandard way. Especially when there is another equally crisp and very standardized term ("expense") that could be used instead. And I do not want to talk about it anymore.

Comment author: gjm 10 May 2011 06:27:54PM 5 points [-]

Kiyosaki's "Rich Dad, Poor Dad" has also received some extremely harsh criticism, some of it at least from people who seem to have a clue what they're talking about. I haven't looked at it myself and am not a financial expert, but would advise anyone considering reading it and/or taking Kiyosaki's advice to exercise caution.

Comment author: JohnH 15 May 2011 06:15:44AM 3 points [-]

The same can and should be said about any book that purports to advise people on how to become rich.

I wish people were required to include in the appendix of such a book their net worth as independently assessed by an external audit, along with tax returns and other filings, to show that they are wealthy and actually gained that wealth in the manner described by the book.

Even then caution would still be needed: if markets are efficient (or even slightly efficient), then something that provided market-beating returns 3-5 years ago (or however long it has been since they gained their wealth) should be expected to provide only market rates of return now.

Comment author: lukeprog 15 May 2011 03:27:06AM 11 points [-]

The classic takedown of Kiyosaki is from John T. Reed.

Comment author: taryneast 26 May 2011 08:51:21AM *  2 points [-]

Thanks for the link. ok, that made me reconsider entirely. Lots of good points here.

I guess I liked the motivational tone of the book - but yep, it looks like his facts are not so hot (and in a lot of cases entirely fictional).

Comment author: BillyOblivion 13 May 2011 10:54:24AM 0 points [-]

Is there any financial advisor or financial book that you can recommend without reservation and that people can take without exercising caution?

Comment author: Blueberry 30 March 2012 11:23:27AM 0 points [-]

The classic is Andrew Tobias, "The Only Investment Guide You'll Ever Need." You can trust it because he's not selling anything and teaches common-sense, conservative advice: no risky speculation or anything.

Comment author: BillyOblivion 16 April 2012 12:01:48PM 0 points [-]

Sorry, I was attempting to be clever, cynical and hip. This apparently impeded effective communication.

Let me rephrase it so that it is more difficult to misunderstand:

All financial advice should be received with reservation and taken with caution.

Better?

Comment author: gjm 13 May 2011 02:19:49PM 2 points [-]

I doubt it. But there are some for which no more caution is needed than could be taken largely for granted with an intelligent bunch of people like the readership of Less Wrong, and some that aren't very approachable by anyone who isn't quite expert already. There's no need to say "exercise caution" about those. It appears that Kiyosaki's book is very approachable and may be very unreliable. That's an especially dangerous combination, if true.

Comment author: MartinB 13 May 2011 12:44:35PM -1 points [-]

Ramit Sethi: iwillteachyoutoberich.com

Kiyosaki is nice for some mindset and basic approach, but horrible on the concrete advice. Do not go into buying houses because of his books.

My small favorite is George Clason: The Richest Man in Babylon. Then there are plenty of more modern books. Check out Ramit's recommended readings.

Comment author: Swimmer963 08 May 2011 01:16:07PM 2 points [-]

This is what my mother said to me: all types of debt are bad, but mortgage debt is unavoidable. My chosen career field is nursing, which is a pretty reliable income source, so I'm not worried about taking on a mortgage when the time comes.

Comment author: gjm 08 May 2011 09:44:51AM 1 point [-]

The trouble with not waiting is that it increases your number of mental context switches, and they can be really expensive. Whether "don't wait" is good advice probably depends on details like the distribution of waiting times, what sort of tasks one's working on, and one's mental context-switch speed.

Comment author: Sniffnoy 08 May 2011 08:09:23AM 0 points [-]

"Don't wait." Waiting for something always takes more time than I thought it would, so whenever I notice myself waiting, I switch to doing something useful in the meanwhile and push the waiting task into the background. Installing the habit took a little bit of effort, but by now it's automatic.

For purposes of avoiding ambiguity this might be better phrased as "don't block" or "don't busy-wait". Although combined with #2 it might indeed become "don't wait" in the more general sense to some extent!

Comment author: [deleted] 07 May 2011 01:51:10PM 3 points [-]

"Don't wait." Waiting for something always takes more time than I thought it would, so whenever I notice myself waiting, I switch to doing something useful in the meanwhile and push the waiting task into the background. Installing the habit took a little bit of effort, but by now it's automatic.

Could you elaborate a bit on that?

I noticed that I often wait for small tasks that end up taking a lot of time. For example, I need to compile a library or finish a download and estimate that it won't take long, maybe a few minutes at most. But I find it really hard to just do something else instead of waiting. I can't just go read a book or do some Anki reps. Whenever I tried that, I either have the urge to constantly check up on the blocking task or I get caught up in the replacement (or on reddit). So I end up staring at a screen, doing nothing, just so I don't lose my mental context. At worst, I can sit for half an hour and get really frustrated with myself.

Comment author: Antisuji 07 May 2011 07:00:41PM *  8 points [-]

I find that I worry a lot less about checking up on background tasks (compiles, laundry, baking pies, brewing tea, etc.) if I know I'll get a clear notification when the process is complete. If it's something that takes a fixed amount of time I'll usually just set a timer on my phone — this is a new habit that works well for tea in particular. Incidentally, owning an iPhone has done a surprising amount for my effectiveness just by reducing trivial inconveniences for this sort of thing.

For compiles, do something like

$ make; growlnotify -m "compile done!"

or run a script that sends you an SMS or something. This is something that I'm not in the habit of doing, but I just wrote myself a note to figure something out when I get into work on Monday.[1] (For most of my builds it's already taken care of, since it brings up a window when it's done. This would be for things like building the server, which runs in a terminal, and for svn updates, which are often glacial.)
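The "wrap the build in a notifier" idea above can be sketched as a small shell function. This is a minimal sketch, not Antisuji's actual setup: `growlnotify` is the macOS Growl command-line tool mentioned above, and the terminal-bell fallback for systems without it is my own assumption.

```shell
#!/bin/sh
# Run any long command and announce when it finishes, including
# whether it succeeded. Usage: notify_done make
notify_done() {
    "$@"                        # run the real command (e.g. make)
    status=$?
    if [ "$status" -eq 0 ]; then
        msg="done: $*"
    else
        msg="FAILED ($status): $*"
    fi
    # Prefer growlnotify if installed; otherwise ring the terminal
    # bell and print the message.
    if command -v growlnotify >/dev/null 2>&1; then
        growlnotify -m "$msg"
    else
        printf '\a%s\n' "$msg"
    fi
    return "$status"
}

notify_done true
```

Because the wrapper passes the command's exit status through, it still composes with other shell logic (e.g. `notify_done make && notify_done make install`).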

[1] This is another thing that helps me a lot. Write things down in a place that you look at regularly. Could be a calendar app, could be a text file in Dropbox, whatever.

Comment author: rhollerith_dot_com 10 May 2011 01:21:28AM *  0 points [-]

I couldn't get growlnotify to work reliably on my Snow Leopard. And some of Growl's preference panes are absurd. And Growl insists on growling at you every time it auto-updates itself, with no way to turn that off. My friend Darius dislikes it, too.

Comment author: Antisuji 10 May 2011 05:22:30AM 1 point [-]

Is there a better alternative?