The 5-Second Level

Post author: Eliezer_Yudkowsky 07 May 2011 04:51AM

To develop methods of teaching rationality skills, you need to learn to focus on mental events that occur in 5 seconds or less.  Most of what you want to teach is directly on this level; the rest consists of chaining together skills on this level.

As our first example, let's take the vital rationalist skill, "Be specific."

Even with people who've had moderate amounts of exposure to Less Wrong, a fair amount of my helping them think effectively often consists of my saying, "Can you give me a specific example of that?" or "Can you be more concrete?"

A couple of formative childhood readings that taught me to be specific:

"What is meant by the word red?"
"It's a color."
"What's a color?"
"Why, it's a quality things have."
"What's a quality?"
"Say, what are you trying to do, anyway?"

You have pushed him into the clouds.  If, on the other hand, we habitually go down the abstraction ladder to lower levels of abstraction when we are asked the meaning of a word, we are less likely to get lost in verbal mazes; we will tend to "have our feet on the ground" and know what we are talking about.  This habit displays itself in an answer such as this:

"What is meant by the word red?"
"Well, the next time you see some cars stopped at an intersection, look at the traffic light facing them.  Also, you might go to the fire department and see how their trucks are painted."

-- S. I. Hayakawa, Language in Thought and Action

and:

"Beware, demon!" he intoned hollowly.  "I am not without defenses."
"Oh yeah?  Name three."

-- Robert Asprin, Another Fine Myth

And now, no sooner does someone tell me that they want to "facilitate communications between managers and employees" than I say, "Can you give me a concrete example of how you would do that?"  Hayakawa taught me to distinguish the concrete and the abstract; and from that small passage in Asprin, I picked up the dreadful personal habit of calling people's bluffs, often using the specific phrase, "Name three."

But the real subject of today's lesson is how to see skills like this on the 5-second level.  And now that we have a specific example in hand, we can proceed to try to zoom in on the level of cognitive events that happen in 5 seconds or less.

Over-abstraction happens because it's easy to be abstract.  It's easier to say "red is a color" than to pause your thoughts for long enough to come up with the example of a stop sign.  Abstraction is a path of least resistance, a form of mental laziness.

So the first thing that needs to happen on a timescale of 5 seconds is perceptual recognition of highly abstract statements unaccompanied by concrete examples, paired with an automatic aversion, an ick reaction - this is the trigger which invokes the skill.

Then, you have actionable stored procedures that associate to the trigger.  And "come up with a concrete example" is not a 5-second-level skill, not an actionable procedure; it doesn't transform the problem into a task.  An actionable mental procedure that could be learned, stored, and associated with the trigger would be "Search for a memory that instantiates the abstract statement", or "Try to come up with hypothetical examples, and then discard the lousy examples your imagination keeps suggesting, until you finally have a good example that really shows what you were originally trying to say", or "Ask why you were making the abstract statement in the first place, and recall the original mental causes of your making that statement to see if they suggest something more concrete."

Or to be more specific on the last mental procedure:  Why were you trying to describe redness to someone?  Did they just run a red traffic light?

(And then what kind of exercise can you run someone through, which will get them to distinguish red traffic lights from green traffic lights?  What could teach someone to distinguish red from green?)

When you ask how to teach a rationality skill, don't ask "How can I teach people to be more specific?"  Ask, "What sort of exercise will lead people through the part of the skill where they perceptually recognize a statement as overly abstract?"  Ask, "What exercise teaches people to think about why they made the abstract statement in the first place?"  Ask, "What exercise could cause people to form, store, and associate with a trigger, a procedure for going through hypothetical examples until a good, or at least adequate, one is invented?"

Coming up with good ways to teach mental skills requires thinking on the 5-second level, because until you've reached that level of introspective concreteness, that fineness of granularity, you can't recognize the elements you're trying to teach; you can't recognize the patterns of thought you're trying to build inside a mind.

To come up with a 5-second description of a rationality skill, I would suggest zooming in on a concrete case of a real or hypothetical person who (a) fails in a typical fashion and (b) successfully applies the skill.  Break down their internal experience into the smallest granules you can manage:  perceptual classifications, contexts that evoke emotions, fleeting choices made too quick for verbal consideration.  And then generalize what they're doing while staying on the 5-second level.

Start with the concrete example of the person who starts to say "Red is a color" and cuts themselves off and says "Red is what that stop sign and that fire engine have in common."  What did they do on the 5-second level?

  1. Perceptually recognize a statement they made as overly abstract.
  2. Feel the need for an accompanying concrete example.
  3. Be sufficiently averse to the lack of such an example to avoid the path of least resistance where they just let themselves be lazy and abstract.
  4. Associate to and activate a stored, actionable, procedural skill, e.g.:
    4a.  Try to remember a memory which matches that abstract thing you just said.
    4b.  Try to invent a specific hypothetical scenario which matches that abstract thing you just said.
    4c.  Ask why you said the abstract thing in the first place and see if that suggests anything.

and

  0. Before even 1:  They recognize that the notion of "concrete" means things like folding chairs, events like a young woman buying a vanilla ice cream, and the number 17, i.e. specific enough to be visualized; and they know "red is a color" is not specific enough to be satisfying.  They perceptually recognize (this is what Hayakawa was trying to teach) the cardinal directions "more abstract" and "less abstract" as they apply within the landscape of the mind.
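The trigger-plus-stored-procedure structure above can be caricatured in a few lines of code. This is purely an illustrative sketch of my own, not anything from the post: the marker-word list and the trivial "abstraction" check are invented stand-ins for a genuinely perceptual recognition, and the function names are hypothetical.

```python
# Hypothetical stand-in for perceptual recognition (steps 0-1): flag
# statements built from category words with no visualizable specifics.
ABSTRACT_MARKERS = {"color", "quality", "property", "concept", "thing"}

def too_abstract(statement: str) -> bool:
    words = {w.strip('.,"').lower() for w in statement.split()}
    return bool(words & ABSTRACT_MARKERS)

# Stored, actionable procedures associated with the trigger (steps 4a-4c).
def recall_memory(statement):
    return f"Search memory for an instance of: {statement!r}"

def invent_hypothetical(statement):
    return f"Imagine, then refine, a concrete scenario matching: {statement!r}"

def ask_why(statement):
    return f"Ask why you said {statement!r} and trace its original mental cause"

PROCEDURES = [recall_memory, invent_hypothetical, ask_why]

def respond(statement: str) -> list:
    # Steps 1-3: the trigger fires on an overly abstract statement,
    # with enough aversion that we don't just leave it abstract.
    if not too_abstract(statement):
        return []  # trigger doesn't fire; nothing to do
    # Step 4: activate the associated procedures instead of staying abstract.
    return [proc(statement) for proc in PROCEDURES]
```

The real skill is of course not a keyword match; the point of the sketch is only that the trigger and the procedures are separate pieces, each of which has to be trained and then bound together.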

If you are thinking on this level of granularity, then you're much more likely to come up with a good method for teaching the skill "be specific", because you'll know that whatever exercise you come up with, it ought to cause people's minds to go through events 1-4, and provide examples or feedback to train perception 0.

Next example of thinking on the 5-second scale:  I previously asked some people (especially from the New York LW community) the question "What makes rationalists fun to be around?", i.e., why is it that once you try out being in a rationalist community you can't bear the thought of going back?  One of the primary qualities cited was "Being non-judgmental."  Two different people came up with that exact phrase, but it struck me as being not precisely the right description - rationalists go around judging and estimating and weighing things all the time.  (Noticing small discordances in an important description, and reacting by trying to find an exact description, is another one of those 5-second skills.)  So I pondered, trying to come up with a more specific image of exactly what it was we weren't doing, i.e. Being Specific, and after further visualization it occurred to me that a better description might be something like this:  If you are a fellow member of my rationalist community and you come up with a proposal that I disagree with - like "We should all practice lying, so that we feel less pressure to believe things that sound good to endorse out loud" - then I may argue with the proposal on consequentialist grounds.  I may judge.  But I won't start saying in immense indignation what a terrible person you must be for suggesting it.

Now I could try to verbally define exactly what it is we don't do, but this would fail to approach the 5-second level, and probably also fail to get at the real quality that's important to rationalist communities.  That would merely be another attempt to legislate what people are or aren't allowed to say, and that would make things less fun.  There'd be a new accusation to worry about if you said the wrong thing - "Hey!  Good rationalists don't do that!" followed by a debate that wouldn't be experienced as pleasant for anyone involved.

In this case I think it's actually easier to define the thing-we-avoid on the 5-second level.  Person A says something that Person B disagrees with, and now in Person B's mind there's an option to go in the direction of a certain poisonous pleasure, an opportunity to experience an emotional burst of righteous indignation and a feeling of superiority, a chance to castigate the other person.  On the 5-second level, Person B rejects this temptation, and instead invokes the procedure of (a) pausing to reflect and then (b) talking about the consequences of A's proposed policy in a tone that might perhaps be worried (for the way of rationality is not to refuse all emotion) but nonetheless is not filled with righteous outrage and indignation which demands that all others share that indignation or be likewise castigated.

(Which in practice, makes a really huge difference in how much rationalists can relax when they are around fellow rationalists.  It's the difference between having to carefully tiptoe through a minefield and being free to run and dance, knowing that even if you make a mistake, it won't socially kill you.  You're even allowed to say "Oops" and change your mind, if you want to backtrack (but that's a whole 'nother topic of 5-second skills)...)

The point of 5-second-level analysis is that to teach the procedural habit, you don't go into the evolutionary psychology of politics or the game theory of punishing non-punishers (by which the indignant demand that others agree with their indignation), which is unfortunately how I tended to write back when I was writing the original Less Wrong sequences.  Rather, you try to come up with exercises which, if people go through them, cause them to experience the 5-second events - to feel the temptation to indignation, and to make the choice otherwise, and to associate alternative procedural patterns such as pausing, reflecting, and asking "What is the evidence?" or "What are the consequences?"

What would be an exercise which develops that habit?  I don't know, although it's worth noting that a lot of traditional rationalists not associated with LW also have this skill, and that it seems fairly learnable by osmosis from watching other people in the community not be indignant.  One method that seems worth testing would be to expose people to assertions that seem like obvious temptations to indignation, and get them to talk about evidence or consequences instead.  Say, you propose that eating one-month-old human babies ought to be legal, because one-month-old human babies aren't as intelligent as pigs, and we eat pigs.  Or you could start talking about feminism, in which case you can say pretty much anything and it's bound to offend someone.  (Did that last sentence offend you?  Pause and reflect!)  The point being, not to persuade anyone of anything, but to get them to introspectively recognize the moment of that choice between indignation and not-indignation, and walk them through an alternative response, so they store and associate that procedural skill.  The exercise might fail if the context of a school-exercise meant that the indignation never got started - if the temptation/choice were never experienced.  But we could try that teaching method, at any rate.

(There's this 5-second skill where you respond to mental uncertainty about whether or not something will work, by imagining testing it; and if it looks like you can just go test something, then the thought occurs to you to just go test it.  To teach this skill, we might try showing people a list of hypotheses and asking them to quickly say on a scale of 1-10 how easy they look to test, because we're trying to teach people a procedural habit of perceptually considering the testableness of ideas.  You wouldn't give people lots of time to think, because then that teaches a procedure of going through complex arguments about testability, which you wouldn't use routinely in real life and would end up associating primarily to a school-context where a defensible verbal argument is expected.)

I should mention, at this point, that learning to see the 5-second level draws heavily on the introspective skill of visualizing mental events in specific detail, and maintaining that introspective image in your mind's eye for long enough to reflect on it and analyze it.  This may take practice, so if you find that you can't do it right away, instinctively react by feeling that you need more practice to get to the lovely reward, instead of instinctively giving up.

Has everyone learned from these examples a perceptual recognition of what the "5-second level" looks like?  Of course you have!  You've even installed a mental habit that when you or somebody else comes up with a supposedly 5-second-level description, you automatically inspect each part of the description to see if it contains any block units like "Be specific" which are actually high-level chunks.

Now, as your exercise for learning the skill of "Resolving cognitive events to the 5-second level", take a rationalist skill you think is important (or pick a random LW post from How To Actually Change Your Mind); come up with a concrete example of that skill being used successfully; decompose that usage to a 5-second-level description of perceptual classifications and emotion-evoking contexts and associative triggers to actionable procedures etcetera; check your description to make sure that each part of it can be visualized as a concrete mental process and that there are no non-actionable abstract chunks; come up with a teaching exercise which seems like it ought to cause those sub-5-second events to occur in people's minds; and then post your analysis and proposed exercise in the comments.  Hope to hear from you soon!

Comments (310)

Comment author: JohnH 07 May 2011 06:49:47AM 7 points

"Red is a color" and cuts themselves off and says "Red is what that stop sign and that fire engine have in common."

They are both physical objects, usually containing some metal and of roughly the same height, that have the ability to stop traffic, thus are found on a road, and have the colors of silver and white and (presumably by the specification of "that") also red in common?

(by which the indignant demand that others agree with their indignation), which is unfortunately how I tended to write back when I was writing the original Less Wrong sequences

(sarcasm) Really? I hadn't noticed in the slightest... (/sarcasm)

What would be an exercise which develops that habit?

Talking with people who do not agree with you as though they were people. That is, taking what they say seriously and trying to understand why they are saying what they say. Asking questions helps. Also, assume that they have reasons that seem rational to them for what they say or do, even if you disagree.

This also helps in actually reasoning with people. To show that something is irrational, one needs to show that it is irrational within the system that they are using, not your own. Bashing someone over the head with one's reasonings in one's own system doesn't (usually) work (unless one believes there is an absolute correct reasoning system that is universally verifiable, understandable, and acceptable to everyone (and the other person thinks likewise, or one happens to actually be right about that assumption)). Oftentimes, such reasonings, when translated into the other person's system, become utter nonsense. This is why materialists have such a hard time dealing with much of religion and platonic thought, and vice versa.

Taking as an assumption that the thing one is trying to show is irrational (or doesn't exist) actually is irrational (or actually doesn't exist) is perhaps the worst thing to do when constructing an argument meant to convince people who believe otherwise. For example, see The Amazing Virgin Birth and try to think of it from a Catholic's perspective.

Comment author: jimmy 07 May 2011 06:53:55AM 28 points

I'm a big fan of breaking things down to the finest grain thoughts possible, but it still surprises me how quickly this gets complicated when trying to actually write it down.

http://lesswrong.com/lw/2l6/taking_ideas_seriously/

Example: Bob is overweight and an acquaintance mentions some "shangri-la" diet that helps people lose weight through some "flavor/calorie association". Instead of dismissing it immediately, he looks into it, adopts the diet, and comfortably achieves his desired weight.

1) Notice the feeling of surprise when encountering a claim that runs counter to your expectations.

2) Check in far mode the importance of the claim if it were true by running through a short list of concrete implications (eg "I can use this diet and as a result, I can enjoy exercise more, I can feel better about my body, etc")

  • If any thoughts along the lines of "but it's not true!" come up, remind yourself that you need to be able to clearly understand the implications of the statement and its importance separately from deciding its truth value, and that this is good practice even if this example is obviously false.

3) Imagine reaping the benefits in near mode to help build appropriate motivation.

  • Ask yourself "What would the world look like if this were true?", and if no glaring contradictions come up, mentally explore this world.

  • If necessary, imagine a reversal test to help the situation feel normal, even if flagged with uncertainty.

  • Cultivate and keep this mindset and feeling of "this is really important!" for things that are really important.

4) Use this calculation (i.e., NOT the 'gut feel' calculation) and Econ-101-based heuristics to determine how much effort to put into verifying/analyzing implications of the statement.

Done thoroughly and sequentially, this will take much more than 5 seconds, but a crude nonverbal run through can be done quickly, and the process can be repeated in increasing detail once the ball is rolling.

To train people, start by having them run through an example of taking things seriously in an obvious case (e.g., "What? I was supposed to drive south!?"). Do this in as much imaginative detail as possible to help bring to mind the associated mindset and feelings as strongly as possible.

Encourage them to 'try out' this new habit by having them imagine increasingly non obvious examples with the mindset of "this is how I really think and it's crazy to not have this habit" until it can be done quickly, 'automatically', and in a way that feels natural. Keep going until they have a sense of what it would feel like for some important and confidently held beliefs to be wrong.

This "imagine it as if it were real" part is really really important, and I have personally had success with that method in general.

Comment author: Eliezer_Yudkowsky 07 May 2011 07:37:52PM 9 points

Upvoted for being the only one to try the exercise.

Comment author: NancyLebovitz 07 May 2011 07:54:29AM 6 points

why is it that once you try out being in a rationalist community you can't bear the thought of going back

Nitpick: It took me a bit to realize you meant "going back to being among non-rationalists" rather than "going back to the meeting".

Or you could start talking about feminism, in which case you can say pretty much anything and it's bound to offend someone. (Did that last sentence offend you? Pause and reflect!)

Unfortunately I recognize that as the bitter truth, so it's of no use to me for training purposes.

Here's something which might work as an indignation test-- could it be a good move for an FAI to set a limit on human intelligence?

If an AI can be built, it has been shown that humanity is an AI-creating species. As technology and the promulgation of human knowledge improves, it will become easier and easier to make AIs, and the risk of creating a UFAI that the FAI can't defeat goes up.

It will be easier to have people who can't make AIs than to try to control the tech and knowledge comprehensively enough to make sure there are no additional FOOMs.

I considered limiting initiative (imposing akrasia) rather than intelligence, but I think that would impact a wider range of human values.

Comment author: ArisKatsaris 07 May 2011 01:57:39PM 5 points

Nitpick: It took me a bit to realize you meant "going back to being among non-rationalists" rather than "going back to the meeting".

Same here. I suggest Eliezer edit it to make the intent more clear at first reading.

Comment author: wedrifid 07 May 2011 07:54:50AM 12 points

rationalists don't moralize

I like the theory but 'does not moralize' is definitely not a feature I would ascribe to Eliezer. We even have people quoting Eliezer's moralizing for the purpose of spreading the moralizing around!

"Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever."

In terms of general moralizing tendencies of people who identify as rationalists, they seem to moralize slightly less than average, but the most notable difference is what they choose to moralize about. When people happen to have similar morals to yourself, it doesn't feel like they are moralizing as much.

Comment author: BenAlbahari 07 May 2011 12:57:04PM 5 points

people who identify as rationalists they seem to moralize slightly less than average

Really? The LW website attracts aspergers types and apparently morality is stuff aspergers people like.

Comment author: wedrifid 07 May 2011 03:11:20PM 4 points

Really? The LW website attracts aspergers types and apparently morality is stuff aspergers people like.

That's true, and usually I say 'a lot more' rather than 'slightly less'. However in this instance Eliezer seemed to be referring to a rather limited subset of 'moralizing'. He more or less excluded being obnoxiously judgemental but phrasing your objections with consequentialist language. So the worst of nerd-moralizing was cut out.

Comment author: Eliezer_Yudkowsky 07 May 2011 07:37:15PM 5 points

Not everything that is not purely consequentialist reasoning is moralizing. You can have consequentialist justifications of virtue ethics or even consequentialist justifications of deontological injunctions, and you are allowed to feel strongly about them, without moralizing. It's a 5-second-level emotional direction, not a philosophical style.

Sigh. This is why I said, "But trying to define exactly what constitutes 'moralizing' isn't going to get us any closer to having nice rationalist communities."

Comment author: wedrifid 07 May 2011 07:43:59PM 3 points

I agree with the parent but maintain everything in the grandparent. There just isn't any kind of contradiction of the kind that from the sigh I assume is intended.

Comment author: matt 08 May 2011 04:40:58AM 3 points

I find myself frequently confused by Eliezer's "sigh"s.

Comment author: katydee 08 May 2011 05:12:38AM 0 points
Comment author: wedrifid 08 May 2011 07:14:08AM 2 points

Noticing your confusion is the first step to understanding.

Poster child for ADBOC.

Comment author: katydee 08 May 2011 07:45:02AM 0 points

Good point, link added.

Comment author: fubarobfusco 07 May 2011 10:11:12PM 10 points

Eliezer, did you mean something different by the "does not get bullet" line than I thought you did? I took it as meaning: "If your thinking leads you to the conclusion that the right response to criticism of your beliefs is to kill the critic, then it is much more likely that you are suffering from an affective death spiral about your beliefs, or some other error, than that you have reasoned to a correct conclusion. Remember this, it's important."

This seems to be a pretty straightforward generalization from the history of human discourse, if nothing else. Whether it fits someone's definition of "moralizing" doesn't seem to be a very interesting question.

Comment author: Eliezer_Yudkowsky 08 May 2011 01:10:21AM 1 point

Agreed.

Comment author: BenAlbahari 08 May 2011 10:39:12AM 19 points

Sigh.

A 5-second method (that I employ to varying levels of success) is whenever I feel the frustration of a failed interaction, I question how it might have been made more successful by me, regardless of whose "fault" it was. Your "sigh" reaction comes across as expressing the sentiment "It's your fault for not getting me. Didn't you read what I wrote? It's so obvious". But could you have expressed your ideas almost as easily without generating confusion in the first place? If so, maybe your reaction would be instead along the lines of "Oh that's interesting. I thought it was obvious but I guess I can see how that might have generated confusion. Perhaps I could...".

FWIW I actually really like the central idea in this post, and arguably too many of the comments have been side-tracked by digressions on moralizing. However, my hunch is that you probably could have easily gotten the message across AND avoided this confusion. My own specific suggestion here is that stipulative definitions are semantic booby traps, so if possible avoid them. Why introduce a stipulative definition for "moralize" when a less loaded phrase like "suspended judgement" could work? My head hurts reading these comments trying to figure out how each person is using the term "moralize" and I now have to think twice when reading the term on LW, including even your old posts. This is an unnecessary cognitive burden. In any case, my final note here would be to consider that you'd be lucky if your target audience for your upcoming book(s) was anywhere near as sharp as wedrifid. So if he's confused, that's a valuable signal.

Comment author: BenAlbahari 07 May 2011 08:13:29AM 4 points

don't go into the evolutionary psychology of politics or the game theory of punishing non-punishers

OK, so you're saying that to change someone's mind, identify mental behaviors that are "world view building blocks", and then to instill these behaviors in others:

...come up with exercises which, if people go through them, cause them to experience the 5-second events

Such as:

...to feel the temptation to moralize, and to make the choice not to moralize, and to associate alternative procedural patterns such as pausing, reflecting...

Or:

...to feel the temptation to doubt, and to make the choice not to doubt, and to associate alternative procedural patterns such as pausing, prayer...

The 5-second method is sufficiently general to coax someone into believing any world view, not just a rationalist one.

Comment author: Eliezer_Yudkowsky 07 May 2011 09:15:20AM 13 points

The 5-second method is sufficiently general to coax someone into believing any world view, not just a rationalist one.

Um, yes. This is supposed to increase your general ability to teach a human to do anything, good or bad. In much the same way, having lots of electricity increases your general ability to do anything that requires electricity, good or bad. This does not make electrical generation a Dark Art.

Comment author: Eliezer_Yudkowsky 07 May 2011 07:26:20PM 6 points

Actually, it occurs to me that this can be generalized. We might feel morally worried about a technique for initial epistemic persuasion which can operate equally to convince people of true statements or false statements, which is being used without the person's knowledge and before they've come to an initial decision about the worth of the idea (i.e., it's not like they already believe it and you're trying to help them alieve it). This is what some people (not me, please note) termed the Dark Arts.

Instrumental techniques which are useful for accomplishing anything, good or bad, depending on the user's utility function? Those are fine. Those are great. Nothing Dark about them.

Comment author: Cyan 07 May 2011 07:38:55PM 2 points

I think the usual statement of this idea is something like, "Tool X can be used for good or evil."

Comment author: Eliezer_Yudkowsky 08 May 2011 01:11:54AM 3 points

Most tools can be. Tools with moral dimensions are rare.

Comment author: Mitchell_Porter 07 May 2011 09:00:38AM 11 points

On the topic of the "poisonous pleasure" of moralistic critique:

I am struck by the will to emotional neutrality which appears to exist among many "aspies". It's like a passive, private form of nonviolent resistance directed against neurotypical human nature; like someone who goes limp when being beaten up. They refuse to take part in the "emotional games", and they refuse to resist in the usual way when those games are directed against them - the usual form of defense being a counterattack - because that would make them just as bad as the aggressor normals.

For someone like that, it may be important to get in touch with their inner moralizer! Not just for the usual reason - that being able to fight back is empowering - but because it's actually a healthy part of human nature. The capacity to denounce, and to feel the sting of being denounced without exploding or imploding, is not just some irrational violent overlay on our minds, without which there would be nothing but mutual satisficing and world peace. It has a function and we neutralize it at our peril.

Comment author: Leonhart 07 May 2011 12:27:54PM 7 points

It's like a passive, private form of nonviolent resistance directed against neurotypical human nature; like someone who goes limp when being beaten up.

Mitchell, yes, that was me back in high school. But IIRC I thought I was doing this.

Comment author: lukstafi 07 May 2011 12:34:47PM 0 points

Could you be more specific? Is the "inner moralizer", as opposed to, say, the "inner consequentialist", a virtue of the human condition (by how the brain is wired), or is it an "objectively good solution given limited cognitive resources"? Is your statement rather about humans, or rather about moralization?

Comment author: Mitchell_Porter 08 May 2011 08:32:48AM 1 point

I am still thinking this through. It's a very subtle topic. But having begun to think about it, the sheer number of arguments that I have found (which are in favor of preserving and employing the moral perspective) encourages me to believe that I was right - I'm just not sure where to place the emphasis! Of course there is such a thing as moral excess, addiction to moralizing, and so forth. But eschewing moral categories is psychologically and socially utopian (in a bad sense), the intersubjective character of the moral perspective has a lot going for it (it's cognitively holistic since it is about whole agents criticizing whole agents; you can't forgive someone unless you admit that they have wronged you; something about how you can't transcend the moral perspective, in the attractive emotional sense, unless you understand it by passing through it)... I wouldn't say it's just about computational utility.

Comment author: lukstafi 08 May 2011 12:28:39PM 0 points

I must clarify that I've been concerned with contrasting the function of moralization, and the mechanism of moralization, which is ingrained very deeply to the effect that without enough praise children develop dysfunctionally, etc.

Comment author: Barry_Cotter 07 May 2011 01:03:36PM 3 points [-]

You don't need to be angry to hit someone, or to spread gossip, or to otherwise retaliate against them. If you recognise that someone is a threat or an obstacle you can deal with them as such without the cloud of rage that makes you stupider. You do not need to be angry to decide that someone is in your way and that it will be necessary to fuck them up.

Comment author: fiddlemath 07 May 2011 08:13:32PM 7 points [-]

You do not need to be angry to decide that someone is in your way and that it will be necessary to fuck them up.

No; but it certainly makes it likelier that you will bring yourself to action.

Comment author: Swimmer963 07 May 2011 08:19:06PM 3 points [-]

You don't need to be angry to hit someone, or to spread gossip, or to otherwise retaliate against them.

If you're not angry, what would motivate you to do any of those things? If someone injures me in some way or takes something that I wanted, usually neither hitting them nor spreading gossip about them will in any way help me repair my injury or get back what they took from me. So I don't. Unless I'm angry, in which case it kind of just happens, and then I regret it because it usually makes the situation worse.

Comment author: TheOtherDave 07 May 2011 09:11:04PM 2 points [-]

I might hit someone because they're pointing a gun at me and I believe hitting them is the most efficient way to disarm them. I might hit someone because they did something dangerous and I believe hitting them is the most efficient way to condition them out of that behavior. I might spread gossip about them because they are using their social status in dangerous ways and I believe gossiping about them is the best available way of reducing their status.

None of those cases require anger, and they might even make the situation better. (Or they might not.)

Or, less nobly, I might hit someone because they have $100 I want, and I think that's the most efficient way to rob them. I might spread gossip about them because we're both up for the same promotion and I want to reduce their chance of getting it.

None of those cases require anger, either. (And, hey, they might make the situation better, too. Or they might not.)

Comment author: Swimmer963 08 May 2011 01:00:14AM 2 points [-]

I suppose the context of my comment was limited to a) me personally (I don't have any desire to steal money or reduce other people's chances of promotion) and b) to the situations I have encountered in the past (no guns or danger involved). Your points are very valid though.

Comment author: Barry_Cotter 08 May 2011 09:14:37PM 0 points [-]

If you're not angry, what would motivate you to do any of those things?

If you are dealing with someone in your social circle, or can be seen by someone in your social circle and you want to build or maintain a reputation as someone it is not wise to cross. Even if it's more or less a one shot game, if you make a point of not being a doormat it is likely to impact your self-image, which will impact your behaviour, which will impact how others treat you.

Even if in the short run retaliating helps nobody and slightly harms you, it can be worth it for reputational and self-concept reasons.

Comment author: Swimmer963 08 May 2011 10:23:25PM 3 points [-]

Point taken. I am a doormat. People have told me this over and over again, so I probably have a reputation as a doormat, but that has a certain value in itself: I have a reputation as someone who is dependable, loyal, and does whatever is asked of me, which is useful in a work context.

Comment author: AdeleneDawner 08 May 2011 09:31:45PM 5 points [-]

If you're not angry, what would motivate you to do any of those things?

Put simply, sometimes displaying a strong emotional response (genuine or otherwise) is the only way to convince someone that you're serious about something. This seems to be particularly true when dealing with people who aren't inclined to use more 'intellectual' communication methods.

Comment author: Swimmer963 08 May 2011 10:17:37PM 0 points [-]

Seems true. Nevertheless I've never used it in this way. This may have more to do with my personality than anything: from what I've read here, I'm more of a conformist than the average Less Wrong reader, and I put a higher value on social harmony. I hate arguments that turn personal and emotional.

Comment author: wedrifid 08 May 2011 11:22:59PM 1 point [-]

Put simply, sometimes displaying a strong emotional response (genuine or otherwise) is the only way to convince someone that you're serious about something. This seems to be particularly true when dealing with people who aren't inclined to use more 'intellectual' communication methods.

I think you're right. Mind you, as someone who is interested in communication that doesn't involve control via strong emotional responses, I most definitely don't reward bad behaviour by giving the other what they want. This applies especially if they use the aggressive tactics of the kind mentioned here. I treat those as attacks and respond in such a way as to discourage any further aggression by them or other witnesses.

This is not to say I don't care about the other's experience or desires, nor does it mean that a strong emotional response will rule out me giving them what they want. If the other is someone that I care about I will encourage them towards expressions that actually might work for getting me to give them what they want. I'll guide them towards asking me for something and perhaps telling me why it matters to them. This is more effective than making demands or attempting to emotionally control.

I'm far more generous than I am vulnerable to dominance attempts, and I'm actually willing to consciously make myself vulnerable to personal requests, up to just behind the line of being an outright weakness, because I have a strong preference for that mode of communication. Mind you, even this tends to be strongly conditional on a certain degree of reciprocation.

Point being that I agree with the sometimes qualifier; the benefit to such displays (genuine or otherwise) is highly variable. We also have the ability to influence whether people make such displays to us. Partly by the incentive they have and partly by simple screening.

Comment author: Vladimir_M 07 May 2011 10:08:31PM *  8 points [-]

If you recognise that someone is a threat or an obstacle you can deal with them as such without the cloud of rage that makes you stupider.

Then why didn't humans evolve to perform rational calculations of whether retaliation is cost-effective instead of uncontrollable rage? The answer, of course, is largely in Schelling. The propensity to lose control when enraged is a strategic precommitment to lash out if certain boundaries are overstepped.

Now of course, in the modern world there are many more situations where this tendency is maladaptive than in the human environment of evolutionary adaptedness. Nevertheless, I'd say that in most situations in which it enters the strategic calculations it's still greatly beneficial.

Comment author: Barry_Cotter 08 May 2011 09:07:11PM 2 points [-]

Now of course, in the modern world there are many more situations where this tendency is maladaptive than in the human environment of evolutionary adaptedness. Nevertheless, I'd say that in most situations in which it enters the strategic calculations it's still greatly beneficial.

I agree, or at least agree for situations where people are in their native culture or one they're intimately familiar with, so that they're relatively well-calibrated. What I wrote was poorly phrased to the point of being wrong without lawyerly cavilling.

To rephrase more carefully; you can act in a manner that gets the same results as anger without being angry. You can have a better, more strategic response. I'm not claiming it's easy to rewire yourself like this, but it's possible. If your natural anger response is anomalously low, as is the case for myself and many others on the autism spectrum, and you're attempting some relatively hardcore rewiring anyway, why not go for the strategic analysis instead of trying to decrease your threshold for blowing up?

Comment author: Vladimir_M 09 May 2011 06:15:09AM *  7 points [-]

I'm not sure if you understand the real point of precommitment. The idea is that your strategic position may be stronger if you are conditionally committed to act in ways that are irrational if these conditions are actually realized. Such precommitment is rational on the whole because it eliminates the opponent's incentives to create these conditions, so if the strategy works, you don't actually have to perform the irrational act, which remains just a counterfactual threat.

In particular, if you enter confrontations only when it is cost-effective to do so, this may leave you vulnerable to a strategy that maneuvers you into a situation where surrender is less costly than fighting. However, if you're precommitted to fight even irrationally (i.e. if the cost of fighting is higher than the prize defended), this makes such strategies ineffective, so the opponent won't even try them.

So for example, suppose you're negotiating the price you'll charge for some work, and given the straightforward cost-benefit calculations, it would be profitable for you to get anything over $10K, while it would be profitable for the other party to pay anything under $20K, so the possible deals are in that range. Now, if your potential client resolutely refuses to pay more than $11K, and if it's really impossible for you to get more, it is still rational for you to take that price rather than give up on the deal. However, if you are actually ready to accept this price given no other options, this gives the other party the incentive to insist with utter stubbornness that no higher price is possible. On the other hand, if you signal credibly that you'd respond to such a low offer by getting indignant that your work is valued so little and leaving angrily, then this strategy won't work, and you have improved your strategic position -- even though getting angry and leaving is irrational assuming that $11K really is the final offer.

(Clearly, the strategy goes both ways, and the buyer is also better off if he gets "irrationally" indignant at high prices that still leave him with a net plus. Real-life negotiations are complicated by countless other factors as well. Still, this is a practically relevant example of the basic principle.)
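The logic of the $10K-$20K example can be sketched in a few lines of code. (The $10K and $20K bounds are from the comment above; the $15K walk-away threshold for the precommitted seller is my own illustrative assumption.)

```python
# Toy model of the negotiation example: a buyer who knows the seller's true
# walk-away point will offer exactly that point, since any profitable deal
# beats no deal for both sides.

SELLER_COST = 10_000   # seller profits at any price above this
BUYER_VALUE = 20_000   # buyer profits at any price below this

def best_offer(seller_min_acceptable):
    """The price a fully informed buyer offers a seller with the given walk-away point."""
    if seller_min_acceptable <= BUYER_VALUE:
        return seller_min_acceptable
    return None  # no mutually profitable deal exists

# A transparently "rational" seller accepts anything over cost,
# so the buyer pushes the price all the way down.
rational_price = best_offer(SELLER_COST)

# A seller credibly precommitted to walk away (indignantly) below $15K
# forces the buyer to offer that much, even though accepting less would
# still have been profitable in isolation.
precommitted_price = best_offer(15_000)

print(rational_price)      # 10000
print(precommitted_price)  # 15000
```

The point of the sketch is that the "irrational" disposition never has to fire: the buyer, anticipating it, simply never makes the lowball offer.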

Now of course, an ideally rational agent with a perfect control of his external behavior would play the double game of signaling such precommitment convincingly but falsely and yielding if the bluff is called (or perhaps not if there would be consequences on his reputation). This however is normally impossible for humans, so you're better off with real precommitment that your emotional propensity to anger provides. Of course, if your emotional propensities are miscalibrated in any way, this can lead to strategic blunders instead of benefits -- and the quality of this calibration is a very significant part of what differentiates successful from unsuccessful people.

Comment author: wedrifid 09 May 2011 06:29:39AM 1 point [-]

I'm not sure if you understand the real point of precommitment. The idea is that your strategic position may be stronger if you are conditionally committed to act in ways that are irrational if these conditions are actually realized. Such precommitment is rational on the whole because it eliminates the opponent's incentives to create these conditions, so if the strategy works, you don't actually have to perform the irrational act, which remains just a counterfactual threat.

I agree with what you are saying and would perhaps have described it as "ways that would otherwise have been irrational".

Comment author: TimFreeman 07 May 2011 01:21:16PM *  0 points [-]

For someone [with at least a shade of Asperger's Syndrome], it may be important to get in touch with their inner moralizer!

Agreed, although I don't know that I have any Asperger's. Here's a sample dialogue I actually had that would have gone better if I had been in touch with my inner moralizer. I didn't record it, so it's paraphrased from memory:

X: It's really important to me what happens to the species a billion years from now. (X actually made a much longer statement, with examples.)

Me: Well, you're human, so I don't think you can really have concerns about what happens a billion years from now because you can't imagine that period of time. It seems much more likely that you perceive talking about things a billion years off to be high status, and what you really want is the short term status gain from saying you have impressive plans. People aren't really that altruistic.

X: I hate it when people point out that there are two of me. The status-gaming part is separate from the long-term planning part.

Me: There is only one of you, and only one of me.

X: You're selfish! (This actually made more sense in the real conversation than it does here. This was some time ago and my memory has faded.)

Me: (I exited the conversation at this point. I don't remember how.)

I exited because I judged that X was making something he perceived to be an ad-hominem argument, and I knew that X knew that ad-hominem arguments were fallacious, and I couldn't deal with the apparent dishonesty. It is actually true that I am selfish, in the sense that I acknowledge no authority over my behavior higher than my own preferences. This isn't so bad given that some of my preferences are that other people get things they probably want. Today I'm not sure X was intending to make an ad-hominem argument. This alternative for my last step would have been better:

Me if I were in touch with my inner moralizer: Do I correctly understand that you are trying to make an ad-hominem argument?

If I had taken that path, I would either have clear evidence that X is dishonest, or a more interesting conversation if he wasn't; either way would have been better.

When I visualize myself taking the alternative I presently prefer, I also imagine myself stepping back so I would be just out of X's reach. I really don't like physical confrontation.

My original purpose here was to give an example, but the point at the end is interesting: if you're going to denounce, there's a small chance that things might escalate, so you need to get clear on what you want to do if things escalate.

Comment author: Peter_de_Blanc 07 May 2011 02:09:52PM 3 points [-]

Me: Well, you're human, so I don't think you can really have concerns about what happens a billion years from now because you can't imagine that period of time.

In what sense are you using the word imagine, and how hard have you tried to imagine a billion years?

Comment author: TimFreeman 07 May 2011 08:37:22PM 1 point [-]

In what sense are you using the word imagine, and how hard have you tried to imagine a billion years?

I have a really poor intuition for time, so I'm the wrong person to ask.

I can imagine a thousand things as a 10x10x10 cube. I can imagine a million things as a 10x10x10 arrangements of 1K cubes. My visualization for a billion looks just like my visualization for a million, and a year seems like a long time to start with, so I can't imagine a billion years.

In order to have desires about something, you have to have a compelling internal representation of that something so you can have a desire about it.

X didn't say "I can too imagine a billion years!", so none of this pertains to my point.

Comment author: Peter_de_Blanc 08 May 2011 02:33:45AM 0 points [-]

First, I imagine a billion bits. That's maybe 15 minutes of high quality video, so it's pretty easy to imagine a billion bits. Then I imagine that each of those bits represents some proposition about a year - for example, whether or not humanity still exists. If you want to model a second proposition about each year, just add another billion bits.
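The "15 minutes of video" equivalence can be back-calculated (the ~1.1 Mbit/s rate below is my own arithmetic, not a figure from the comment):

```python
# Rough check: what bitrate makes a billion bits equal 15 minutes of video?
bits = 10 ** 9
seconds = 15 * 60
bitrate = bits / seconds  # bits per second

print(round(bitrate))  # 1111111, i.e. about 1.1 Mbit/s
```

That rate is plausible for compressed video of the era, so the equivalence holds up.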

Comment author: ArisKatsaris 09 May 2011 09:59:20AM 4 points [-]

That's maybe 15 minutes of high quality video, so it's pretty easy to imagine a billion bits.

Perhaps I don't understand your usage of the word 'imagine', because this example doesn't really help me 'imagine' them at all. I can imagine their result (the high-quality video), sure, but not the bits themselves.

Comment author: RichardKennaway 09 May 2011 11:31:39AM *  8 points [-]

My visualization for a billion looks just like my visualization for a million, and a year seems like a long time to start with, so I can't imagine a billion years.

Would it help to be more specific? Imagine a little cube of metal, 1mm wide. Imagine rolling it between your thumb and fingertip, bigger than a grain of sand, smaller than a peppercorn. Yes?

A one-litre bottle holds 1 million of those. (If your first thought was the packing ratio, your second thought should be to cut the corners off to make cuboctahedra.)

Now imagine a cubic metre. A typical desk has a height of around 0.75m, so if its top is a metre deep and 1.33 metres wide (quite a large desk), then there is 1 cubic metre of space between the desktop and the floor.

It takes 1 billion of those millimetre cubes to fill that volume.

Now find an Olympic-sized swimming pool and swim a few lengths in it. It takes 2.5 trillion of those cubes to fill it.

Fill it with fine sand of 0.1mm diameter, and you will have a few quadrillion grains.
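The arithmetic in this visualization checks out (all figures are from the comment, except the 2 m pool depth, which is my own assumption for an Olympic pool):

```python
# Counting 1 mm cubes in progressively larger volumes.
mm_cubes_per_litre = 100 ** 3          # a litre is a 100 mm cube: 1 million
mm_cubes_per_cubic_metre = 1000 ** 3   # a cubic metre: 1 billion

# Olympic pool: 50 m x 25 m x 2 m, in millimetres.
pool_volume_mm3 = 50_000 * 25_000 * 2_000  # 2.5 trillion mm cubes

# 0.1 mm sand grains, treated as tiny cubes: 1000 fit in each cubic millimetre.
sand_grains = pool_volume_mm3 * 1000       # a few quadrillion

print(mm_cubes_per_litre)        # 1000000
print(mm_cubes_per_cubic_metre)  # 1000000000
print(pool_volume_mm3)           # 2500000000000
print(sand_grains)               # 2500000000000000
```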

A bigger problem I have with the original is where X says "It's really important to me what happens to the species a billion years from now." The species, a billion years from now? That sounds like a failure to comprehend just what a billion years is: the time that life has existed on Earth so far. I confidently predict that a billion years hence, not a single presently existing species, including us, will still exist in anything much like its present form, even imagining "business as usual" and leaving aside existential risks and singularities.

Comment author: TimFreeman 09 May 2011 01:28:12PM 2 points [-]

Excellent. I can visualize a billion now. Thank you.

Comment author: wedrifid 08 May 2011 03:17:35AM 2 points [-]

Agreed, although I don't know that I have any Asperger's. Here's a sample dialogue I actually had that would have gone better if I had been in touch with my inner moralizer.

One of the great benefits that being in touch with the inner moralizer can have is that it can warn you about how what you say will be interpreted by another. It would probably recommend against speaking your first paragraph, for example.

I suspect the inner moralizer would also probably not treat the "You're selfish" as an ad hominem argument. It technically does apply but from within a moral model what is going on isn't of the form of the ad hominem fallacy. It is more of the form:

  • Not expressing and expecting others to express a certain moral position is bad.
  • You are bad.
  • You should fear the social consequences of being considered bad.
  • You should change your moral position.

I'm not saying the above is desirable reasoning - it's annoying and has its own logical problems. But it is also a different underlying mistake than the typical ad hominem.

Comment author: TimFreeman 08 May 2011 12:30:40PM 0 points [-]

One of the great benefits that being in touch with the inner moralizer can have is that it can warn you about how what you say will be interpreted by another. It would probably recommend against speaking your first paragraph, for example.

If it works that way, I don't want it. My relationship with X has no value to me if the relevant truths cannot be told, and so far as I can tell that first paragraph was both true and relevant at the time.

Now if that had been a coworker with whom I needed ongoing practical cooperation, I would have made some minimal polite response just like I make minimal polite responses to statements about who is winning American Idol.

...But it is also a different underlying mistake than the typical ad hominem.

Okay, there might be some detailed definition of ad hominem that doesn't exactly match the mistake you described. I presently fail to see how the difference is important. The purpose of both ad hominem and your offered interpretation is to use emotional manipulation to get the target (me in this example) to shut up. Would I benefit in some way from making a distinction between the fallacy you are describing and ad hominem?

Comment author: shokwave 09 May 2011 02:48:41PM 3 points [-]

Well, you're human, so I don't think you can really have concerns about what happens a billion years from now because you can't imagine that period of time.

I can't imagine the difference between sixteen million dollars and ten million dollars - in my imagination, the stuff I do with the money is exactly the same. I definitely prefer 16 to 10 though. In much the same way, my imagination of a million dollars and a billion dollars doesn't differ too much; I would also prefer the billion. I don't know if I need to imagine a billion years accurately in order to prefer it, or have concerns about it becoming less likely.

Comment author: mutterc 08 May 2011 12:22:41AM 2 points [-]

With Aspies it's probably less that they won't take part in emotional games than that they can't.

Comment author: Plasmon 08 May 2011 04:34:19PM 7 points [-]

If the message you intend to send is "I am secure in my status. The attacker's pathetic attempts at reducing my status are beneath my notice.", what should you do? You don't seem to think that ignoring the "attacks" is the correct course of action.

This is a genuine question. I do not know the answer and I would like to know what others think.

Comment author: mendel 08 May 2011 09:37:52PM 0 points [-]

My opinion? I'd not lie. You've noticed the attempt, why claim you didn't? Display your true reaction.

Comment author: wedrifid 09 May 2011 12:02:06AM 4 points [-]

My opinion? I'd not lie. You've noticed the attempt, why claim you didn't? Display your true reaction.

Noticing the attempt and doing nothing is not a lie. It is a true reaction.

Comment author: mendel 09 May 2011 10:21:43AM 0 points [-]

beneath my notice

I'm referring to that. Sending that message is an implicit lie -- well, you could call it a "social fiction", if you like a less loaded word.

It is also a message that is very likely to be misunderstood (I don't yet know my way around lesswrong well enough to find it again, but I think there's an essay here someplace that deals with the likelihood of recipients understanding something completely different than what you intended to mean, but you not being able to detect this because the interpretation you know shapes your perception of what you said).

So if your true reaction is "you are just trying to reduce my status, and I don't think it's worth it for me to discuss this further", my choice, given the option to not display it or to display it, would usually be to display it, if a reaction was expected of me.

I hope I was able to clarify my distinction between having a true reaction, and displaying it. In a nutshell, if you notice something, you have a reaction, and by not displaying it (when it is expected of you), you create an ambiguous situation that is not likely to communicate to the other person what you want it to communicate.

Comment author: wedrifid 09 May 2011 11:46:42AM *  1 point [-]

So if your true reaction is "you are just trying to reduce my status, and I don't think it's worth it for me to discuss this further", my choice, given the option to not display it or to display it, would usually be to display it, if a reaction was expected of me.

This can work sometimes, but in most contexts it is difficult to pull off without sounding awkward or crude. At best it conveys that you are aware that social dynamics exist but aren't quite able to navigate them smoothly yet. Mind you, unless there is a pre-existing differential in status or social skills in their favour, they will tend to come off slightly worse than you in the exchange. A costly punishment.

Comment author: wedrifid 09 May 2011 12:07:37AM 0 points [-]

If the message you intend to send is "I am secure in my status. The attacker's pathetic attempts at reducing my status are beneath my notice.", what should you do?

Ignoring the attempts is a good default. It gives a decent payoff while being easy to implement. More advanced alternatives are the witty, incisive comeback or the smooth, delicately calibrated communication of contempt for the attacker to the witnesses. In the latter case especially body language is the critical component.

Comment author: Wei_Dai 09 May 2011 05:40:53AM 1 point [-]

It has a function and we neutralize it at our peril.

Can you be more specific? What exactly are the dangers of neutralizing our "inner moralizers"?

Also, see my previous comments, which may be applicable here. I speculate that "aspies" free up a large chunk of the brain for other purposes when they ignore "emotional games", and it's not clear to me that they should devote more of their cognitive resources toward such games.

Comment author: Mitchell_Porter 09 May 2011 08:53:55AM 1 point [-]

Can you be more specific? What exactly are the dangers of neutralizing our "inner moralizers"?

Having brought up this topic, I find that I'm reluctant to now do the hard work of organizing my thoughts on the matter. It's obvious that the ability to moralize has a tactical value, so doing without it is a form of personal or social disarmament. However, I don't want to leave the answer at that Nietzschean or Machiavellian level, which easily leads to the view that morality is a fraud but a useful fraud, especially for deceptive amoralists. I also don't want to just say that the human utility function has a term which attaches significance to the actions, motives and character of other agents, in such a way that "moralizing" is sometimes the right thing to do; or that labeling someone as Bad is an efficient heuristic.

I have glimpsed two rather exotic reasons for retaining one's capacity for "judging people". The first is ontological. Moral judgments are judgments about persons and appeal to an ontology of persons. It's important and useful to be able to think at that level, especially for people whose natural inclination is to think in terms of computational modules and subpersonal entities. The second is that one might want to retain the capacity to moralize about oneself. This is an intriguing angle because the debate about morality tends to revolve around interactions between persons, whether morality is just a tool of the private will to power, etc. If the moral mode can be applied to one's relationship to reality in general (how you live given the facts and uncertainties of existence, let's say), and not just to one's relationship to other people, that gives it an extra significance.

The best answer to your question would think through all that, present it in an ordered and integrated fashion, and would also take account of all the valid reasons for not liking the moralizing function. It would also have to ground the meaning of various expressions that were introduced somewhat casually. But - not today.

Comment author: mendel 09 May 2011 09:55:14AM *  3 points [-]

In another comment on this post, Eugine Nier linked to Schelling. I read that post, and the Slate page that mentions Schelling vs. Vietnam, and it became clear to me that acting morally serves as an "antidote" to these underhanded strategies that count on your opponent being rational. (It also serves as a Gödelian meta-layer to decide problems that can't be decided rationally.)

If, in Schelling's example, the guy who is left with the working radio set is moral, he might reason that "the other guy doesn't deserve the money if he doesn't work for it", and from that moral strongpoint refuse to cooperate. Now if the rationalist knows he's working with a moralist, he'll also know that his immoral strategy won't work, so he won't attempt it in the first place - a victory for the moralist in a conflict that hasn't even occurred (in fact, the moralist need never know that the rationalist intended to cheat him).

This is different from simply acting irrationally in that the moralist's reaction remains predictable.

So it is possible that moral indignation helps me to prevent other people from manoeuvring me into a position where I don't want to be.

Comment author: gjm 07 May 2011 09:23:37AM 2 points [-]

I suggest a different and possibly better way of thinking about what Eliezer says about "moralizing" and "judging": don't judge other people. Enabling reasonable discussion and being fun to be around depend more on whether one turns disagreement into personal disdain than on what sort of disagreements one has.

(Some "moralizing" talk doesn't explicitly pass judgement on another person's worth, but I think such judgement, even if implicit, is the thing that's corrosive.)

Comment author: TrE 07 May 2011 10:59:41AM *  0 points [-]

What could one do about rationalization? It probably won't be enough to ask oneself what arguments there are for the opposite position. One could also think about why one wants to confirm one's position, and whether doing so is worth more or less than coming to know the truth (it will almost always be worth less). Do you have more ideas on how to beat this one?

Comment author: Cayenne 07 May 2011 10:22:08PM *  0 points [-]

Always play devil's advocate, and really try to destroy your position?

Whenever you argue, make a point of looking up information regarding your argument, and if you find that you were mistaken, immediately let the other person know that you were wrong. The more certain you are about the information, the easier it should be to look up.

Think about who you know that would argue against your position, and how they would do it, and make sure that their (hypothetical) argument doesn't apply.

Make sure the null hypothesis 1) makes sense, and 2) isn't right.

Don't view an argument as a chance to be right. View it as an attempt to find facts or a useful model, or as John Maxwell IV says in another comment:

remembering that a purpose of engaging in argument is to update your map

I'm not sure how many of these things you can do reflexively, but I do look up facts as I argue, and I find that I am frequently wrong. I try not to care about being right as much as finding out something useful.

Edit - please disregard this post

Comment author: Vladimir_Nesov 07 May 2011 11:00:57PM *  1 point [-]

Ask, "What exactly do I believe? Why do I believe it?", separately from "Why is what I believe true? Is it true?". This will call attention to the process that could or could not privilege your hypotheses, before they are granted special rights. Also, a lot of confusion originates from vague ideas that don't correspond to any clear meaning, so that the question of their correctness is mostly a matter of ambiguity.

Comment author: wedrifid 08 May 2011 02:41:14AM 2 points [-]
Comment author: Vladimir_Nesov 08 May 2011 10:17:29AM *  0 points [-]

Both questions are important, and have potential for bringing good info. They shouldn't be mixed up, one of them shouldn't be considered while forgetting the other, and where one of them can't be readily answered, you should just work with the other. Pursuing "Why" is how you improve on a faulty heuristic, for example, fixing a bug in a program without rewriting it from scratch.

Comment author: wedrifid 08 May 2011 10:32:25AM 0 points [-]

Both questions

All three. You already had two, neither of which matches Eliezer's.

Comment author: Vladimir_Nesov 08 May 2011 10:42:05AM 0 points [-]

I don't see it, list the three. When applied to the context of these comments, the post says, "If you don't remember why you decided to believe X, ask yourself, is X true? (That is, should you believe X?)". Which is one of the options I listed. What I didn't explicitly consider here is the condition of not remembering the reasons, in which case, Eliezer suggests, you are safer off not going there lest you come up with new rationalizations, and stick to the question you have a better chance of answering based on the facts.

Comment author: thomblake 09 May 2011 02:22:04PM 1 point [-]

I notice wedrifid still did not explicitly answer you, so for completeness:

  • What exactly do I believe? Why do I believe it?
  • Why is what I believe true? Is it true?
  • Whatever question was brought up by linking to "Ask whether, not why".

(Given the abundance of question marks, I'm not sure how that obviously parses into "three" questions)

And what Vladimir_Nesov meant by "both" was presumably:

  • Whether
  • Why
Comment author: TrE 08 May 2011 07:05:26PM 0 points [-]

Thank you, Vladimir, wedrifid, Cayenne.

Now, what would an exercise to train this 5-second-skill look like?

Read out to a group questions of the form 'why X?', where X itself is a controversial statement with arguments both for and against. This should encourage them to consider whether X is true in the first place. X could be very probable, something like 'rationality is the best way of life', or something improbable. This way, the group learns to resist the urge to rationalize while at the same time avoiding the opposite urge, namely to crush every statement.

Could this work? How could one modify it?

Comment author: cousin_it 07 May 2011 11:55:49AM *  27 points [-]

"Be specific" is a nice flinch, I've always had it and it helps a lot. "Don't moralize" is a flinch I learned from experience and it also helps. Here are some other nice flinches I have:

  1. "Don't wait." Waiting for something always takes more time than I thought it would, so whenever I notice myself waiting, I switch to doing something useful in the meanwhile and push the waiting task into the background. Installing the habit took a little bit of effort, but by now it's automatic.

  2. "Don't hesitate." With some effort I got a working version of this flinch for tasks like programming, drawing or physical exercise. If something looks like it would make a good code fix or a good sketch, do it immediately. Would be nice to have this behavior for all other tasks too, but the change would take a lot of effort and I'm hesitating about it (ahem).

  3. "Don't take on debt." Anything that looks even vaguely similar to debt, I instinctively run away from it. Had this flinch since as far as I can remember. In fact I don't remember ever owing >100$ to anyone. So far it's served me well.

Comment author: [deleted] 07 May 2011 01:51:10PM 3 points [-]

"Don't wait." Waiting for something always takes more time than I thought it would, so whenever I notice myself waiting, I switch to doing something useful in the meanwhile and push the waiting task into the background. Installing the habit took a little bit of effort, but by now it's automatic.

Could you elaborate a bit on that?

I noticed that I often wait for small tasks that end up taking a lot of time. For example, I need to compile a library or finish a download and estimate that it won't take long, maybe a few minutes at most. But I find it really hard to just do something else instead of waiting. I can't just go read a book or do some Anki reps. Whenever I tried that, I either have the urge to constantly check up on the blocking task or I get caught up in the replacement (or on reddit). So I end up staring at a screen, doing nothing, just so I don't lose my mental context. At worst, I can sit for half an hour and get really frustrated with myself.

Comment author: cousin_it 07 May 2011 03:47:03PM *  1 point [-]

I usually continue coding during long recompiles (over a minute or so), just don't save my edits until it's finished.

Comment author: John_Maxwell_IV 07 May 2011 08:00:47PM 1 point [-]

You could also make a version control commit before compiling and then use "git stash" or equivalent to save your while-compiling edits.
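A minimal sketch of that flow in a disposable repo (file name and messages here are made up; substitute your actual build for the `sleep` stand-in):

```shell
# Demo of the commit-then-stash flow in a throwaway repo.
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "you"

echo "original" > main.c
git add main.c
git commit -qm "checkpoint before compile"   # snapshot the state being compiled

sleep 0 &                                    # stand-in for a long compile
echo "while-compiling edit" > main.c         # keep editing during the build
wait                                         # build finishes

git stash -q        # shelve the new edits; working tree returns to the checkpoint
cat main.c          # -> original
git stash pop -q    # bring the in-progress edits back
cat main.c          # -> while-compiling edit
```

This way the compile output corresponds to the committed checkpoint, while the edits you made during the build survive in the stash.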

Comment author: Antisuji 07 May 2011 07:00:41PM *  8 points [-]

I find that I worry a lot less about checking up on background tasks (compiles, laundry, baking pies, brewing tea, etc.) if I know I'll get a clear notification when the process is complete. If it's something that takes a fixed amount of time I'll usually just set a timer on my phone — this is a new habit that works well for tea in particular. Incidentally, owning an iPhone has done a surprising amount for my effectiveness just by reducing trivial inconveniences for this sort of thing.

For compiles, do something like

$ make; growlnotify -m "compile done!"

or run a script that sends you an SMS or something. This is something that I'm not in the habit of doing, but I just wrote myself a note to figure something out when I get into work on Monday.[1] (For most of my builds it's already taken care of, since it brings up a window when it's done. This would be for things like building the server, which runs in a terminal, and for svn updates, which are often glacial.)
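The growlnotify one-liner generalizes into a small wrapper so any command notifies when it finishes. This is only a sketch: `NOTIFY` and `notify_done` are names I made up, and `NOTIFY` defaults to plain `echo` so it runs anywhere; on your own machine you'd point it at `growlnotify -m`, `notify-send`, or an SMS script.

```shell
# Generic "tell me when it's done" wrapper; the notifier is pluggable.
NOTIFY="${NOTIFY:-echo}"   # default to echo; swap in growlnotify -m, notify-send, ...

notify_done() {
  "$@"                     # run the command exactly as given
  local status=$?          # capture its exit code
  if [ "$status" -eq 0 ]; then
    $NOTIFY "done: $*"
  else
    $NOTIFY "FAILED ($status): $*"
  fi
  return "$status"
}

notify_done true           # prints: done: true
```

Then `notify_done make -j4` (or `notify_done svn update`) pings you either way, and the wrapper preserves the command's exit code for scripting.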

[1] This is another thing that helps me a lot. Write things down in a place that you look at regularly. Could be a calendar app, could be a text file in Dropbox, whatever.

Comment author: matt 08 May 2011 04:31:57AM *  1 point [-]

If it's something that takes a fixed amount of time I'll usually just set a timer on my phone

Consider…

<ctrl><space> invokes Quicksilver.app
. enters text mode
<message>
<tab> to action pane
Large Type
<ctrl><enter> to make a compound object
Run after Delay… or Run at Time…

… and Quicksilver.app does this very nicely without your fingers ever leaving the keyboard (if you're making tea… your fingers probably already left the keyboard).

Consider also

<ctrl><space> invokes Quicksilver.app
. enters text mode
<message>
<tab> to action pane
Speak Text (Say)

(These suggestions live in mac land. If you live in Windows land, consider moving. If you live in Linux land you'll probably figure out how to do this yourself pretty quickly :)

Comment author: MBlume 08 May 2011 05:34:14AM 1 point [-]

This would be for things like building the server, which runs in a terminal, and for svn updates, which are often glacial.

I assume someone's already told you you'll be better off with Git?

Comment author: Sniffnoy 08 May 2011 08:09:23AM 0 points [-]

"Don't wait." Waiting for something always takes more time than I thought it would, so whenever I notice myself waiting, I switch to doing something useful in the meanwhile and push the waiting task into the background. Installing the habit took a little bit of effort, but by now it's automatic.

For purposes of avoiding ambiguity this might be better phrased as "don't block" or "don't busy-wait". Although combined with #2 it might indeed become "don't wait" in the more general sense to some extent!

Comment author: gjm 08 May 2011 09:44:51AM 1 point [-]

The trouble with not waiting is that it increases your number of mental context switches, and they can be really expensive. Whether "don't wait" is good advice probably depends on details like the distribution of waiting times, what sort of tasks one's working on, and one's mental context-switch speed.

Comment author: gjm 08 May 2011 09:49:11AM 6 points [-]

I have the same debt-flinch, and the same feeling about how well it works, but with one qualification: I was persuaded to treat mortgage debt differently (though I've always been very conservative about how much I'd take on) and that seems to have served me very well too.

This isn't meant as advice about mortgages: housing markets vary both spatially and temporally. More as a general point: it's probably difficult to make very sophisticated flinch-triggers, which means that even good flinching habits are likely to have exceptions from time to time, and sometimes they might be big ones.

Comment author: Swimmer963 08 May 2011 01:16:07PM 2 points [-]

This is what my mother said to me: all types of debt are bad, but mortgage debt is unavoidable. My chosen career field is nursing, which is a pretty reliable income source, so I'm not worried about taking on a mortgage when the time comes.

Comment author: Swimmer963 08 May 2011 01:03:18PM 2 points [-]

"Don't take on debt." Anything that looks even vaguely similar to debt, I instinctively run away from it. Had this flinch since as far as I can remember. In fact I don't remember ever owing >100$ to anyone. So far it's served me well.

Same. And it has also served me well, although maybe not solely because of that preference–I was in a better financial situation to start with than many university students, and I'm a workaholic with a part-time job that I enjoy, and I also enjoy living frugally and don't consider it to diminish my quality of life the way some people do.

Comment author: twanvl 07 May 2011 12:08:58PM 4 points [-]

Answering "a color" to the question "what is red?" is not irrational or wrong in any way. In fact, it is the answer that is usually expected. Often when people ask "what is X?" they do in fact mean "to what category does X belong?". I think this is especially true when teaching. A teacher will be happy with the answer "red is a color".

Comment author: DSimon 07 May 2011 08:31:56PM *  4 points [-]

Agreed, though I think this depends a lot on who you're talking to and what they already know. Typically if someone I know asks me something like "What is red?" they're trying to start some kind of philosophical conversation, and in that case "It's a color" is the proper response (because it lets them move on to their next Socratic question, and eventually to the point they're making).

On the other hand, if we were talking to color-blind aliens, answering "It's what is in common between light reflected by the stop sign there and the fire truck yonder, but not the light reflected by this mailbox here" is a lot more useful starting response than "it's a color". If I answered "It's a color", and the alien is fairly smart and thinks like a human, the conversation would probably then go:

Alien: So what's a color then?

Me: Well a color is a particular kind of light...

Alien: Wait, hold on. Light, like the stuff that bounces off objects and that I use to see with?

Me: Yep, that's it.

Alien: What distinguishes light of one color from that of another?

Me: The wavelength of the light wave.

Alien: What wavelength is red light?

Me: Off the top of my head, I don't know. If you have a way to measure the wavelength of light, though, then that stop sign there and the fire truck yonder are both red to my eyes, so the light they're reflecting is in that wavelength.

Alien: Gotcha.

... If I went straight to the examples, I'd have ended up at pretty much the same point, but a lot quicker.

Comment author: mendel 08 May 2011 02:28:30AM 3 points [-]

Assuming the person who asks the question wants to learn something and not hold a socratic argument, what they need is context. They need context to anchor the new information (there's a word "red", in this case) to what they already know. You can give this context in the abstract and the specific (the "one step up, one step down" method that jimrandomh describes above achieves this), but it doesn't really matter. The more different ways you can find, the better the other person will understand, and the richer a concept they will take away from your conversation. (I'm obviously bad at doing this.)

An example is language learning: a toddler doesn't learn language by getting words explained, they learn language by hearing sounds used in certain contexts and recalling the association where appropriate.

I suspect that the habit of answering questions badly is being taught in school, where an answer is often not meant to transfer knowledge, but to display it. If asked "What is a car?", answering that it has wheels and an engine will get you a better grade than to state that your mom drives a Ford, even though talking about your experience with your mom's car would have helped a car-less friend to better understand what it means to have one.

So what we need to learn (and what good teachers have learned) is to take questions and, in a subconscious reaction, translate them to a realisation what the asking person needs to know: what knowledge they are missing that made them ask the question, and to provide it. And that depends on context as well: the question "what is red" could be properly answered by explaining when the DHS used to issue red alerts (they don't color code any more), it could be explaining the relation of a traffic light to traffic, it could be explaining what red means in Lüscher's color psychology or in Chinese chromotherapy. If I see a person nicknamed Red enter at the far side of the room wearing a red sweater, and I shudder and remark "I don't like red", then someone asks me "what do you mean, red" I ought to simply say that I meant the color - any talk of stop signs or fire engines would be very strange. To be specific, I would answer "that sweater".

To wrap this overlong post up, I don't think there's an innate superiority of the specific over the abstract. What I'll employ depends on what the person I'm explaining stuff to already understands. A 5-second "exercise" designed to emphasise the specific over the abstract can help me overcome a mental bias of not considering specifics in my explanations (possibly instilled by the education system). It widens the pool that I can draw my answers from, and that makes me a potentially better answerer.

Comment author: Anny1 07 May 2011 12:28:09PM *  1 point [-]

What would be an exercise which develops that habit?

Speaking from personal experience, I would propose that moralizing is mostly caused by anger about the presumed stupidity/irrationality behind the statement we want to moralize about. The feeling of "Oh no they didn't just say that, how could they!". What I try to do against it is simply to let that anger pass by following simple rules like taking a breath, counting to 10 or whatever works. When the anger is gone, usually the need for moralizing is as well.

Also I feel there is a lot of discussion about Eliezer moralizing in his posts that can be broken down to the distinction between moralizing as an automated response and moralizing after careful deliberation (as in blog posts). I wouldn't say that the latter is wrong per se.

In daily life I often meet people that I feel are so far off, so tangled up in their rationalizations, that even after my anger about their comments has passed I decide that a discussion would be a waste of everybody's time. In this case I use a sarcastic remark to at least get back at them. Maybe if the person in question gets a similar reaction from enough people, they will reconsider. It can also be for the benefit of bystanders.

So I think this would be the steps that work for me:

1) Recognize anger

2) Wait it out

3) Ask some questions to clarify/falsify your understanding of the questionable statement

4) Think about good, precise counterarguments and/or find the errors you think the other person made.

5) Decide whether or not your arguing will probably be productive and then

a) Do it (in a civilized manner of course) or

b) Make a sarcastic comment that pinpoints the irrationality you see or simply say that you don't agree and leave.

I realize that this can't really be done in 5 seconds, but I think I've gotten far enough that I can do the first two steps in a couple of seconds, and keeping option 5b) in mind helps me calm down.

Comment author: Eliezer_Yudkowsky 07 May 2011 07:31:17PM 2 points [-]

The goal invoked in the post, though, is to avoid moralizing in conversations between rationalists so that they don't feel like they're walking through a minefield. Having the anger and suppressing it, doesn't work for that. The person next to you is still walking the minefield. They're just not getting feedback.

Comment author: Anny1 08 May 2011 10:07:35AM *  0 points [-]

From some of the above posts I get the impression that at least in a community of aspiring rationalists, there is still some anger around. I think it is one of the hardest things to get rid of.

There is a point about my personal technique that I wanted to make and that I feel I haven't really conveyed so far... I find it hard to explain though. Thinking about something like option 5b) somehow helps me to combat the feeling of helplessness that is often mixed in with the anger. Somehow, in saying to myself "you can act on that later, if you still feel it is necessary", I take the edge off. Can someone relate to that and maybe help in clarifying?

Also there is a difference between suppressing anger and what I am trying to describe that feels totally clear internally but is also hard to explain.

The point about the missing feedback is a very good one and I'm wondering if and how and how often rationalists give each other feedback about how the discussion makes them feel.

Comment author: Alicorn 08 May 2011 10:33:57AM 1 point [-]

Somehow, in saying to myself "you can act on that later, if you still feel it is necessary", I take the edge off. Can someone relate to that and maybe help in clarifying?

I think I may know what you're talking about. I find it immensely helpful to tell myself (when it's true) "there is no hurry", sometimes repeatedly. When there's no hurry, I can double-check. When there's no hurry, I can ask someone for help. When there's no hurry, there's no reason to panic. When there's no hurry, I can put it down, come back to it later whenever I feel like it, and see if anything's changed about how I want to react to it. So it's more general than just anger, but perhaps the same class of thing.

Comment author: jimrandomh 07 May 2011 03:09:29PM *  50 points [-]

IAWYC, and introspective access to what my mind was doing on this timescale was one of the bigger benefits I got out of meditation. (Note: Probably not one of the types of meditation you've read about). However, I don't think you've correctly identified what went wrong in the example with red. Consider this analogous conversation:

What's a Slider? It's a Widget.
What's a Widget? It's a Drawable.
What's a Drawable? It's an Object.

In this example, as with the red/color example, the first question and answer was useful and relevant (albeit incomplete), while the next two were useless. The lesson you seem to have drawn from this is that looking down (subclassward) is good, and looking up (superclassward) is bad. The lesson I draw from this is that relevance falls off rapidly with distance, and that each successive explanation should be of a different type. It is better to look a short distance in each direction rather than to look far in any one direction. Compare:

X is a color. This object is X. (One step up, one step down)
X is a color. A color is a quality that things have. (Two steps up)
This object is X. That object is also X. (Two steps down)

I would expect the first of these three explanations to succeed, and the other two to fail miserably.

Comment author: Eliezer_Yudkowsky 07 May 2011 07:29:41PM 13 points [-]

"One step up and one step down" sounds like a valuable heuristic; it's what I actually did in the post, in fact. Upvoted.

Comment author: TrE 08 May 2011 07:08:36PM *  11 points [-]

Also, it is very important to give counter-examples: 'This crow over there belongs to the bird category. But the plane in the sky and the butterfly over there do not.' Or, more fitting the 'red' example: 'That stop sign and that traffic light are red. But this other traffic sign (can't think of an example) isn't.'

The same could be done with categories: 'Red is a color. Red is not a sound.'

I guess this one has something to do with confirmation bias, as cwillu suggested.

Comment author: scientism 07 May 2011 03:40:35PM 4 points [-]

One of the things I think virtue ethics gets right is that if you think, say, lying is wrong then you should have a visceral reaction to liars. You shouldn't like liars. I don't think this is irrational at all (the goal isn't to be Mr. Spock). Having a visceral reaction to liars is part of how someone who thinks lying is wrong embodies that principle, as much as not lying is. If somebody claims to follow a moral principle but fails to have a visceral reaction to those who break it, that's an important cue that something is wrong. That goes doubly for yourself. Purposefully breaking that connection by avoiding becoming indignant seems like throwing away important feedback.

Comment author: cousin_it 07 May 2011 05:04:54PM *  1 point [-]

Why must my personal understanding of right and wrong also apply to other people? What if I think something's wrong for me to do, but I don't care if other people do it (e.g. procrastination)?

Comment author: shokwave 07 May 2011 05:07:40PM 0 points [-]

Is there some law of nature saying my personal understanding of right and wrong should also apply to other people?

Principles derivable from game theory, maybe.

Comment author: Peterdjones 07 May 2011 05:27:46PM -1 points [-]

If it's purely personal, why call it moral?

Comment author: wedrifid 07 May 2011 06:47:59PM 0 points [-]

If it's purely personal, why call it moral?

Why not? (A somewhat quirky twist that seems to crop up is that of having a powerful moral intuition that people's morals should be personal. It can sometimes get contradictory but morals are like that.)

Comment author: Peterdjones 07 May 2011 06:50:10PM 0 points [-]

Usual reasons... for one thing, there are other ways of describing it, such as "personal code". For another, it renders morality pretty meaningless if someone can say "murder's OK for me".

Comment author: wedrifid 07 May 2011 07:28:11PM 0 points [-]

for one thing, there are other ways of describing it, such as "personal code". For another, it renders morality pretty meaningless if someone can say "murder's OK for me".

And yet if the same neurological hardware is being engaged in order to make social moves of a similar form, 'morality' still seems appropriate. Especially since morals like "people should not force their view of right and wrong on others" legitimize instances of moralizing even when the moralizer tends to take other actions which aren't consistent with the ideal. Because, as I tend to say, morals are like that.

Comment author: a363 08 May 2011 12:04:27PM -1 points [-]

What about "war is OK for me"?

It really gets to me that when a bunch of people gather together under some banner then it suddenly becomes moral for them to do lots of things that would never be allowed if they were acting independently: the difference between war and murder...

The only morality I want is the kind where people stop doing terrible things and then saying "they were following orders". Personal responsibility is the ONLY kind of responsibility.

Comment author: eugman 08 May 2011 06:52:17PM 0 points [-]

I think it makes sense in the negative sense, as things that aren't OK. What's wrong with holding oneself to a higher standard? What's wrong with saying "It'd be immoral for ME to murder?"

Comment author: Cayenne 07 May 2011 09:09:46PM *  0 points [-]

I tend to think of 'the things I have to do to be me' as morals, and 'the things I have to do to fit into society' as ethics. In a lot of cases when someone is calling someone else immoral, it seems to me that they're saying that that person has done something that they themselves couldn't do and remain who they are.

Edit - please disregard this post

Comment author: Gabriel 07 May 2011 11:57:40PM 5 points [-]

Purposefully breaking that connection by avoiding becoming indignant seems like throwing away important feedback.

Feedback arrives in the form of a split-second impression of "this is wrong". However long you spend being indignant after that, it won't provide you with any new ethical insight. Indignation isn't about ethics, it's about verbally crushing your enemy while signalling virtue to onlookers.

Comment author: gjm 08 May 2011 09:35:52AM 2 points [-]
  1. Why do you think merely having a visceral reaction to lying (one's own or others'; actual or hypothetical) isn't enough?

  2. Conditional on having that visceral reaction, what is the advantage of then becoming indignant? Or do you think that becoming indignant is identical to that visceral reaction?

Comment author: Cayenne 07 May 2011 08:01:08PM *  15 points [-]

I think that the big skill here is not being offended. If someone can say something and control your emotions, literally make you feel something you had no intention to feel beforehand, then perhaps it's time to start figuring out why you're allowing people to do this to you.

At a basic level anything someone can say to you is either true or false. If it's true then it's something you should probably consider and accept. If it's false then it's false and you can safely ignore/gently correct/mock the person saying it to you. In any case there really isn't any reason to be offended and especially there is no reason to allow the other person to provoke you to anger or acting without thought.

This isn't the same as never being angry! This is simply about keeping control for yourself over when and why you get angry or offended, rather than allowing the world to determine that for you.

Edit - please disregard this post

Comment author: wedrifid 07 May 2011 08:09:32PM *  1 point [-]

I (really) like what you're saying here and it is something I often recommend (where appropriate) to people that have no interest in rationality whatsoever.

Well, except for drawing a line at 'true/false' with respect to when it can be wise to take actions to counter the statements. Truth is only one of the relevant factors. This doesn't detract at all from your core point.

I extend this philosophy to evaluating socially relevant interactions between others. When something becomes a public scene that for some reason I care about, I do not automatically treat the offense, indignation or anger of the recipient as the responsibility of the person who provided the stimulus.

Comment author: Cayenne 07 May 2011 08:19:22PM *  0 points [-]

The true/false isn't the only line, but I feel that it's the most important. If something someone says to or about you is true, then no matter what you should own it in some way. Acknowledge that they're right, try to internalize it, try to change it, but never never just ignore it! (edit: If you're getting mad when someone says something truthful about you, then this should raise other warning flags as well! Examine the issue carefully to figure out what's really happening here.)

If the thing they say is false, then don't get mad first! Think it through carefully, and then do the minimum you can to deal with it. The most important thing is to not obsess over it afterward, because if you're doing that you're handing a piece of your life away for a very low or even negative return. Laugh about it, ignore it, get over it, but don't let it sit and fester in your mind.

Edit - please disregard this post

Comment author: wedrifid 08 May 2011 02:16:41AM 3 points [-]

If you're getting mad when someone says something truthful about you, then this should raise other warning flags as well! Examine the issue carefully to figure out what's really happening here.

When it comes to making the most beneficial responses, feeling anger is almost never useful when you have a sufficient foundation in the mechanisms of social competition, regardless of truth. It tends to show weakness - the vulnerability to provocation that you are speaking of gives an opportunity for one-upmanship that social rivals will instinctively home in on.

In terms of the benefits and necessity of making a response it is the connotations that are important. Technical truth is secondary.

Comment author: Cayenne 08 May 2011 03:16:58AM *  2 points [-]

Very true.

I didn't mean to suggest that the truth/falsehood line is as useful socially as I believe it is internally. The social reaction you decide on is mostly independent of truth.

Internally, it's important to recognize that truth, since it is vital feedback that can tell you when you may need to change.

Edit - please disregard this post

Comment author: wedrifid 08 May 2011 03:19:24AM *  2 points [-]

Internally, it's important to recognize that truth, since it is vital feedback that can tell you when you may need to change.

And, when false, when you may need to change what you do such that others don't get that impression (or don't think they can get away with making the public claim even though they know it is false).

Comment author: wilkox 08 May 2011 12:37:59PM 5 points [-]

In any case there really isn't any reason to be offended and especially there is no reason to allow the other person to provoke you to anger or acting without thought.

It seems really, really difficult to convey to people who don't understand it already that becoming offended is a choice, and it's possible to not allow someone to control you in that way. Maybe "offendibility" is linked to a fundamental personality trait.

Comment author: Cayenne 08 May 2011 05:35:39PM *  3 points [-]

It could be. It seems not just difficult but actually against most culture on the planet. Consider that crimes of passion, like killing someone when you find them sleeping around on you, often get a lower sentence than a murder 'in cold blood'. If someone says 'he made me angry' we know exactly what that person means. Responding to a word with a bullet is a very common tactic, even in a joking situation; I've had things thrown at me for puns!

It does seem like a learnable skill even so. I did not have this skill as a child, but I do have it now. The point in my life at which I learned it seems to roughly correspond to when I was first trained and working in technical support. I don't know if there's a correlation there.

In any case, merely being aware that this is a skill may help a few people on this forum to learn it, and I can see only benefit in trying. It is possible to not control anger but instead never even feel it in the first place, without effort or willpower.

Edit - please disregard this post

Comment author: mendel 08 May 2011 09:27:25PM 0 points [-]

And yet, not to feel an emotion in the first place may obscure you to yourself - it's a two-sided coin. To opt to not know what you're feeling when I struggle to find out seems strange to me.

Comment author: Cayenne 08 May 2011 10:04:27PM *  2 points [-]

I think you're misunderstanding what I said. I'm not obscuring my feelings from myself. I'm just aware of the moment when I choose what to feel, and I actively choose.

I'm not advocating never getting angry, just not doing it when it's likely to impair your ability to communicate or function. If you choose to be offended, that's a valid choice... but it should also be an active choice, not just the default.

I find it fairly easy to be frustrated without being angry at someone. It is, after all, my fault for assuming that someone is able to understand what I'm trying to argue, so there's no point in being angry at them for my assumption. They might have a particularly virulent meme that won't let them understand... should I get mad at them for a parasite? It seems pointless.

Edit - please disregard this post

Comment author: mendel 09 May 2011 12:08:16AM 0 points [-]

Well, it seems I misunderstand your statement, "It is possible to not control anger but instead never even feel it in the first place, without effort or willpower."

I know it is possible to experience anger, but control it and not act angry - there is a difference between having the feeling and acting on it. I know it is also possible to not feel anger, or to only feel anger later, when distanced from the situation. I'm ok with being aware of the feeling and not acting on it, but to get to the point where you don't feel it is where I'm starting to doubt whether it's really a net benefit.

And yes, I do understand that with understanding of / assumptions about other people, stuff that would otherwise have bothered me (or someone else) is no longer a source of anger. You changed your outlook on and understanding of that type of situation so that your emotion is frustration and not anger. If that's what you meant originally, I understand now.

Comment author: John_Maxwell_IV 07 May 2011 08:27:25PM *  3 points [-]

I thought of a few five-second skills like this:

  • remembering that a purpose of engaging in argument is to update your map
  • realizing you should actually spend time on activities that have proven to be helpful in the past (related to this)
  • noticing when you have a problem and actually applying your creativity to solve it (similar to this)
  • recognizing a trivial inconvenience for what it is

I noticed that all of my 5-second skills (and Eliezer's also) involve doing more mental work than you're instinctively inclined to do at a key point. This makes sense if the main reason people are irrational is due to taking cognitive shortcuts (see this great article; feel free to skip down to "Time for a pop quiz"). So maybe we could save some labor identifying or at least acquiring 5-second skills if we learn to be comfortable with constant reflectivity and hard mental work.

Comment author: Psy-Kosh 07 May 2011 10:02:04PM 4 points [-]

Something I still need to work on, but which I think would be an important one (perhaps instead a general class of 5-second-skills rather than a single one), would be "remember what you know when you need it".

Example: you're potentially about to escalate an already heated political debate and make it personal. 5-second-skill: actually remembering that politics is the mind-killer, thus giving yourself a chance to pause, reconsider what you're about to do, and thus have a chance to avoid doing something stupid.

I'd also apply this notion to what you said about testability. Not so much being able to think of a quick test as much as being able to quickly remember to think about how it could be tested.

Perhaps this general category of 5-second-skills could be called "pause and think" or "pause and remember".

i.e., the critical thing about this 5-second-skill isn't so much being able to swiftly execute some other rationalist skill as remembering to use that skill at all when you actually need it.

Comment author: Cayenne 07 May 2011 10:46:14PM *  2 points [-]

How about 'flinch away from drama'?

Never argue opinions, only facts.

If you must argue an opinion, then pin it down so that it can't wriggle around. Example: if you have the opinion 'AI can/will paperclip', then try to pin down how and why it can as strictly as you can, and then take the argument from 'it can happen' to 'perhaps we can test this'. Bring it out of the clouds and into reality as quickly as possible.

If you manage to kill someone's opinion, showing that it is just wrong, then pause and mourn its passing instead of gloating. It can't hurt to apologize for winning, since feelings are so easily hurt.

Edit - please disregard this post

Comment author: Psy-Kosh 07 May 2011 10:58:21PM 0 points [-]

Hrm... That could work for the specific "remember that politics is the mindkiller" rule. (Although, of course, one can distinguish issues of preference from issues of fact... issues of opinion vs. issues of fact seems more questionable. :))

Comment author: Cayenne 07 May 2011 11:10:06PM *  1 point [-]

Well, I view opinions as inherently meaningless to attempt to test. A fact can be looked up or tested, but an opinion either can't be tested yet or is worthless to test.

'The sky is blue' is testable unless you've been stuck for generations underground. 'I like pink' is worthless to test, and really worthless to argue against. 'When we can do X it will then proceed to Y' is hard to do anything about until we can actually X, but if we pin the specifics down enough then it isn't totally useless to argue about it.

Some opinions can also just be completely infeasible to test as well, due to the steps the test would need to take. (Hayek vs. Keynes, I'm looking at you.)

Edit - please disregard this post

Comment author: Gabriel 08 May 2011 02:12:11AM 2 points [-]

So here is a procedure I actually developed for myself couple of months ago. It's self-helpy (the purpose was to solve my self-esteem issues) but I think indignant moralizing uses some of the same mental machinery so it's relevant to the task of becoming less judgemental in general.

I believed that self-esteem doesn't say anything about the actual world, so it would be a good idea to disconnect it from external feedback and permanently set it to a comfortable level. At some point I realized that this idea was too abstract and I had to be specific to actually change something. And here's roughly what it led to:

  1. Notice that I'm engaging in judgement. If the judgement is internally-directed and negative, the trigger will be anxiety; if positive, some sort of narcissistic enthusiasm. If the judgement is directed at another person, the trigger could be a feeling of smugness, if negative, and probably some sort of reverential admiration, if positive.

  2. Realize that the emotions I'm feeling don't represent objective reality. They are a heuristic hacked together by evolution to guide my behaviour in a savannah-dwelling hunter-gatherer tribe. And I'm definitely not currently a member of such a collective.

  3. Remember that thinking abstractly about a 'sense of self-esteem' doesn't capture the way it is experienced and that thinking that it should be disconnected from external stimuli isn't something that can be translated into action and I need something specific to target.

  4. Focus on how an algorithm feels from the inside -- that the sense of self-esteem doesn't feel like a sense of self-esteem. It feels like a feature of the world. As if everyone, including me, had an inherent, non-specific aura of awesomeness that I were able to directly perceive, though not with any of the 'standard' senses.

  5. Reflect on the silliness of that way of perceiving. Look at the world and notice the distinct lack of worthiness everywhere I turn. Tell myself, verbally, that there is no inherent awesomeness or worthiness and that therefore nothing can affect it. Don't just try to disconnect the emotions from experience, aim to outright destroy them (note: I don't claim that destroying them is actually possible).

Comment author: mendel 08 May 2011 03:02:57AM *  1 point [-]

Eliezer, you state in the intro that the 5-second-level is a "method of teaching rationality skills". I think it is something different.

First, the analysis phase is breaking down behaviour patterns into something conscious; this can apply to my own patterns as I figure out what I need to (or want to) teach, or to other people's patterns that I wish to emulate and instill into myself.

It breaks down "rationality" into small chunks of "behaviour" which can then be taught using some sort of conditioning - you're a bit unclear on how "teaching exercises" for this should be arrived at.

You suggest a form of self-teaching: The 5-second analysis identifies situations when I want some desired behaviour to trigger, and to pre-think my reaction to the point where it doesn't take me more than 5 seconds to use. In effect, I am installing a memory of thoughts that I wish to have in a future situation. (I could understand this as communicating with "future me" if I like science fiction. ;) Your method of limiting this to the "5-second-level" aims to make this pre-thinking specific enough so that it actually works. With practice, this response will trigger subconsciously, and I'll have modified my behaviour.

It would be nice if that would actually help us talk about rationality more clearly (but won't we be too specific and miss the big picture?), and it would be nice if that would help us arrive at a "rationality syllabus" and a way to teach it. I'm looking forward to reports of using this technique in an educational setting - what your and your students' experiences were in trying to implement this. Until your theory's tested in that kind of setting, it's no more than a theory, and I'm disinclined to believe the "you need to" from the first sentence of your article.

Is rationality just a behaviour, or is it more? Can we become (more) rational by changing our behaviour, and then have that changed behaviour change our mind?

Comment author: mendel 09 May 2011 10:28:35AM *  0 points [-]

Of course, these analyses and exercises would also serve beautifully as use-cases and tests if you wanted to create an AI that can pass a Turing test for being rational. ;-)

Comment author: Cayenne 08 May 2011 03:26:37AM *  1 point [-]

It might be useful to form a habit of reflexively trying to think about a problem in the mode you're not currently in, trying to switch to near mode if in far, or vice-versa. Even just a few seconds of imagining a hypothetical situation as if it were imminent and personal could provoke insight, and trying to 'step back' from problems is already a common technique.

I've used this to convince myself that a very long or unbounded life wouldn't get boring. When I try to put myself in near-mode, I simply can't imagine a day 2000 years from now when I wouldn't want to go talk to a friend one last time, or go and reread a favorite book, or cook a favorite meal, or any one of a thousand other small things. I might get bored off and on, but not permanently.

Edit - please disregard this post

Comment author: lessdazed 08 May 2011 03:26:59AM 9 points [-]

When people say they appreciate rationalists for their non-judgmentalism, I think they mean more than just that rationalists tend not to moralize. What they also mean is that rationalists are responsive to people's actual statements and opinions. This is separate from moralizing and in my opinion is more important, both because it precedes it in conversation and because I think people care about it more.

Being responsive to people means not (being interpreted as [inappropriately or] incorrectly) assuming what a person you are listening to thinks.

If someone says "I think torture, such as sleep deprivation, is effective in getting information," and they support, say, both the government doing it and legalizing it, judging them to be a bad person for that and saying so won't build communal ties, but it's unlikely to be frustrating for them.

If, on the other hand, they don't support the legalization or morality of it despite their claim it is effective, indignation will irritate them because it will be based on false assumptions about their beliefs.

If someone says "I'm thinking of killing myself", responding with "That violates my arbitrary and ridiculous deontological system", or some variation thereof, is probably unwelcome.

On the other hand, responding with "You'll get over being depressed", when your interlocutor does not feel depressed, will frustrate them. "Being depressed is a sin" would be an even worse response, combining both misinterpretation and moralizing.

Refraining from filling in the blanks in others' arguments happens to be a good way to avoid moralizing, since in order to be indignant about something you have to believe in its existence.

Scott Adams has a good example of something that only causes offense to some people, supposedly dependent on their general penchant for smashing distinct statements together, which is one way people inappropriately fill in blanks.

The dog might eat your mom's cake if you leave it out. A dog also might eat his own turd.

When you read those two statements, do you automatically suppose I am comparing your mom's cake to a dog turd? Or do you see it as a statement that the dog doesn't care what it eats, be it a delicious cake or something awful?

In this pair, it is easy to get someone to agree with both statements and also say they think they would hypothetically feel offense towards the speaker were it not a mere test... at least I am one for one, and I imagine it would work for others. I also think the person I asked actually felt real offense.

Something like this pair would be good for teaching because the student agrees with the component statements. Offense is a result of inappropriately combining them to infer a particular intent by the speaker.

If you are offended, ask yourself: "What am I assuming about the other person (that makes me think they are innately evil)?"

Comment author: RobinZ 08 May 2011 02:31:46PM 5 points [-]

My usual method when confronted with a situation where a speaker appears to be stupid, crazy, or evil is to assume I misunderstood what they said. Usually by the time I understand what the opposite party is saying, I no longer have any problematic affective judgment.

Comment author: wedrifid 09 May 2011 12:00:29AM *  4 points [-]

My usual method when confronted with a situation where a speaker appears to be stupid, crazy, or evil is to assume I misunderstood what they said. Usually by the time I understand what the opposite party is saying, I no longer have any problematic affective judgment.

I usually find that I do understand what they are saying and it belongs in one of the neglected categories of 'bullshit' or "<OvercomingBias style nonsense/>".

Comment author: RobinZ 09 May 2011 01:08:56AM 0 points [-]

"things that people say that" what? The grammar gets a little odd toward the latter half of that.

Comment author: endoself 09 May 2011 01:19:54AM 1 point [-]

Presumably "things that people say that aren't really actionable beliefs"; though this reply feels awkward in a discussion about misunderstanding, I'm pretty sure that was the intended phrase.

Comment author: wedrifid 09 May 2011 05:39:43AM 0 points [-]

Fixed.

Comment author: RobinZ 09 May 2011 01:15:14PM 0 points [-]

Thanks!

Comment author: wilkox 09 May 2011 01:09:24AM 2 points [-]

"things that people say that really actionable beliefs even though they may not be clear on the difference"

This sounds interesting, but I can't parse it.

Comment author: wedrifid 09 May 2011 05:38:23AM 0 points [-]

This sounds interesting, but I can't parse it.

That's because you are using an English parser while my words were not valid English.

Comment author: RobinZ 09 May 2011 01:38:43PM 2 points [-]

Those don't usually give me much trouble - I find that the nonsense people propose is usually self-consistent in an interesting way, much like speculative fiction. On reflection, what really gives me trouble is viewpoints I understand and disagree with all within five seconds, like [insert politics here].

Comment author: atucker 08 May 2011 03:58:26AM *  7 points [-]

"Don't be stopped by trivial inconveniences"

I used to do really stupid things and waste lots of time by always taking the path of least resistance. I'm not sure if other people have the same problem, but I might as well post.

An example of being stopped: "Hmm, I can't find any legitimate food stands around here. I guess I'll go eat at the ice cream stand right here then."

An example of overcoming: "Hmm, I can't find any legitimate food stands around here. That's weird. Lemme go to the information desk and ask where there is one."

What it feels like:

  1. You have a goal

  2. You realize that there are particular obstacles in your way

  3. You decide to take a suboptimal road as a result

What you do to prevent it:

Notice that the obstacle isn't that big of a deal, and figure out if there are ways to circumvent it. If those ways are easy, do them. Basically, move something from unreachable to reachable.

Comment author: Cayenne 08 May 2011 08:08:56AM *  1 point [-]

Yak Shaving? http://sethgodin.typepad.com/seths_blog/2005/03/dont_shave_that.html

Edit - please disregard this post

Comment author: atucker 08 May 2011 03:15:25PM 1 point [-]

I should have made it clear when a trivial inconvenience ceases to be trivial.

Basically, if you have an object level understanding of what's in your way, can think of a way to avoid the problem, and don't see any other steps involved, then you should go ahead and do it.

I personally am normed to give up waaay too easily compared to what I can do.

Comment author: Cayenne 08 May 2011 06:13:53PM *  -1 points [-]

Oh, ok. I see the difference you mean.

Edit - please disregard this post

Comment author: roland 08 May 2011 04:19:45AM 8 points [-]

I know that I'll probably be downvoted again, but nevertheless.

Which in practice, makes a really huge difference in how much rationalists can relax when they are around fellow rationalists. It's the difference between having to carefully tiptoe through a minefield and being free to run and dance, knowing that even if you make a mistake, it won't socially kill you.

Sorry, but I don't feel that I have this freedom on LW. And I feel people moralize here especially using the downvote function.

To give a concrete example of Eliezer himself

http://lesswrong.com/lw/1ww/undiscriminating_skepticism/

I don't believe there were explosives planted in the World Trade Center. ... I believe that all these beliefs are not only wrong but visibly insane.

I politely asked for clarification only to be not only ignored but also downvoted to -4:

Eliezer, could you explain how you arrived at the conclusion that this particular belief is visibly insane?

http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1t7r

On another comment I presented evidence to the contrary(a video interview) to be downvoted to -15: http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1r5v

So when just asking the most basic rationality question (why do you believe what you believe) and presenting evidence that contradicts a point are downvoted, I don't feel that LW is about rationality as much as others like to believe. And I also feel that basic elements of politeness are missing, and yes, I feel like I have to walk on eggshells.

Comment author: LHJablonski 08 May 2011 05:47:37AM 4 points [-]

And I feel people moralize here especially using the downvote function.

Do you think that people use the downvote to tell another user that they are a terrible person... or do they simply use it to express disagreement with a statement?

I think probably both happen, but it's tilted heavily toward the latter. Feel free to downvote if you disagree. :)

Comment author: TimFreeman 08 May 2011 07:58:30PM 5 points [-]

Do you think that people use the downvote to tell another user that they are a terrible person... or do they simply use it to express disagreement with a statement?

There's another possibility. I downvote when I feel that reading the post was a waste of my time and I believe it wasted most other people's time as well.

(This isn't a veiled statement about Roland. I do not recall voting on any of his posts before.)

Comment author: mendel 08 May 2011 08:52:39PM 3 points [-]

The problem with the downvote is that it mixes the messages "I don't agree" with "I don't think others should see this". There is no way to say "I don't agree, but that post was worth thinking about", is there? Short of posting a comment of your own, that is.

Comment author: Swimmer963 08 May 2011 08:54:33PM 3 points [-]

Short of posting a comment of your own, that is.

That's exactly what I do. I try to downvote comments based on how they're written (if they're rude or don't make sense, I downvote them) instead of what they're written about. (Though I may upvote comments based on agreeing with the content.)

Comment author: wedrifid 08 May 2011 11:31:12PM 0 points [-]

That's exactly what I do. I try to downvote comments based on how they're written (if they're rude or don't make sense, I downvote them) instead of what they're written about. (Though I may upvote comments based on agreeing with the content.)

That's exactly what I do too. (Although my downvote threshold is likely a tad more sensitive. :P)

Comment author: Swimmer963 09 May 2011 12:26:05AM 0 points [-]

(Although my downvote threshold is likely a tad more sensitive. :P)

Likely. Mine will probably become more sensitive with time.

Comment author: AdeleneDawner 08 May 2011 08:55:38PM 2 points [-]

I've been known to upvote in such cases, if the post is otherwise neutral-or-better. I like to see things here that are worth thinking about.

Comment author: lessdazed 09 May 2011 02:35:31AM 2 points [-]

I think there is a positive outcome from the system as it is, at least for sufficiently optimistic people. The feature is that it should be obvious that downvoting is mixed with those and other things, which helps me not take anything personally.

Downvotes could be anything, and individuals have different criteria for voting, and as I am inclined to take things personally, this obviousness helps me. If I knew 50% of downvotes meant "I think the speaker is a bad person", every downvote might make me feel bad. As downvotes currently could mean so many things, I am able to shrug them off. They could currently mean: the speaker is bad, the comment is bad, I disagree with the comment, I expect better from this speaker, it's not fair/useful for this comment to be rated so highly compared to a similar adjacent comment that I would rather people read instead / would like to promote as the communal norm, etc.

If one has an outlook that is pessimistic in a particular way, any mixing of single messages with multiple meanings will cause one to overreact as if the worst meaning were intended, and this sort of person would be most helped by ensuring each message has only one meaning.

Comment author: Cayenne 08 May 2011 06:04:15AM *  4 points [-]

I know that I'll probably be downvoted again, but nevertheless.

This is precisely the wrong way to start off a post like this; it sets a very passive-aggressive tone.

Sorry, but I don't feel that I have this freedom on LW. And I feel people moralize here especially using the downvote function.

Are you certain that it isn't simply the tone of your posts?

So when just asking the most basic rationality question (why do you believe what you believe) and presenting evidence that contradicts a point are downvoted, I don't feel that LW is about rationality as much as others like to believe. And I also feel that basic elements of politeness are missing, and yes, I feel like I have to walk on eggshells.

Also bitterness. I think that you would benefit a lot by rephrasing your questions in a less confrontational manner.

Eliezer, could you explain how you arrived at the conclusion that this particular belief is visibly insane?

could have become

Eliezer, I don't understand how you arrived at this conclusion, could you explain the reasoning behind it?

Soften up your posts.

I never downvote, as I think it's counterproductive. Others don't agree, but that is their right. Taking it personally is not the right approach.

Edit - please disregard this post

Comment author: roland 08 May 2011 07:12:32PM 1 point [-]

I would welcome factual criticisms of my posts instead of just attacking the "tone" you read in them.

Right, the posts could be softened up, but isn't it funny that you don't direct the same criticism to the ones who called a certain point of view insane? How confrontational is that?

Comment author: Cayenne 08 May 2011 07:52:23PM *  4 points [-]

I'm limited in my scope, I'm not going to follow links and criticize every single post. I happened to be reading yours, and thought that I might be able to help you with tone... others are probably better at dealing with actual content. If you would prefer me to not try to help you, let me know and I'll focus my efforts elsewhere.

Edit - please disregard this post

Comment author: lessdazed 08 May 2011 06:05:19AM *  11 points [-]

A point about counteracting evidence: if I believe I have a weighted six sided die that yields a roll of "one" one out of every ten rolls rather than one out of every six rolls as a fair die would, a single roll yielding a "one" is evidence against my theory. In a trial in which I repeatedly roll the die, I should expect to see many rolls of "one", even though each "one" is more likely under the theory the die is fair than it is under the theory the die is weighted against rolls of "one".
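The die example can be made concrete with a short Bayesian sketch (not from the thread; the function name, priors, and trial sizes are illustrative). It computes the posterior probability of the "weighted" hypothesis after observing some number of "one"s, showing that a single "one" counts against the weighted die even though a long trial is still expected to contain many "one"s:

```python
# Two hypotheses about the die:
#   weighted: P(one) = 1/10     fair: P(one) = 1/6
P_ONE_WEIGHTED = 1 / 10
P_ONE_FAIR = 1 / 6

def posterior_weighted(prior_weighted, ones, rolls):
    """Posterior probability that the die is weighted, after seeing
    `ones` rolls of 'one' out of `rolls` total rolls."""
    # Likelihood of the data under each hypothesis (binomial kernel;
    # the shared binomial coefficient cancels out of the ratio).
    like_w = P_ONE_WEIGHTED ** ones * (1 - P_ONE_WEIGHTED) ** (rolls - ones)
    like_f = P_ONE_FAIR ** ones * (1 - P_ONE_FAIR) ** (rolls - ones)
    num = prior_weighted * like_w
    return num / (num + (1 - prior_weighted) * like_f)

# A single 'one' is evidence AGAINST the weighted die, because a 'one'
# is more likely (1/6 > 1/10) if the die is fair:
assert posterior_weighted(0.5, ones=1, rolls=1) < 0.5

# Yet in 60 rolls the weighted hypothesis EXPECTS about six 'one's,
# and observing exactly six favors it over the fair die:
assert posterior_weighted(0.5, ones=6, rolls=60) > 0.5
```

Each individual "one" lowers the odds of the weighted hypothesis, but the accompanying non-"one" rolls raise them by more, which is the point: expected counterevidence in a long trial does not refute a theory.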

You really didn't present evidence that contradicted anything; the most this sort of testimony could be is, as you said, "evidence to the contrary", but not, as you also said, something that "contradicts". One thing to look out for is idiosyncratic word usage. Apparently, I interpret the word "contradict" to be much stronger than you do. It would be great to find out how others interpret it; there are all sorts of possibilities.

When I consider whether or not the things I am directed to are good evidence of a conspiracy behind the destruction of the World Trade Center, I discount apparent evidence indicating a conspiracy against what I would expect to see if there were no actual conspiracy.

As an analogy: if I hear a music album and find 75% of the songs are about troubled relationships or love, I don't conclude the songwriter's life is or was particularly troubled, because that's what gets sung about by people of somewhat normal background, even though much of their lives are spent sleeping, eating, standing in line, etc. Only when every song sounds like the same complaint do I conclude something is uniquely wrong with them. This is somewhat counterintuitive - one might have thought 75% love/troubled songs indicated unique problems, but it's not so.

Similarly, the conspiracy stuff surrounding the Twin Towers has been underwhelming to me. What I see is exactly what I would expect were the towers collapsed by Al-Qaeda-hijacked planes. This absolutely includes what you presented, an interview after the fact by someone saying that in the confusion he heard sounds that sounded like explosions beneath him. Seeing this evidence is like rolling a "one" or hearing a love song on an album: totally expected according to the theories that the die lands on "one" 10% of the time, that the singer is normal, or that the towers were brought down by the planes.

Comment author: roland 08 May 2011 07:08:55PM *  -1 points [-]

I concede the point about language.

Discounting evidence is dangerous, considering we are all biased; if you dismiss any evidence to the contrary, you have to answer: what evidence would be strong enough to change your mind?

But my problem is not with people discounting evidence (everyone is free to close their eyes); it's that outright downvoting evidence that goes against one's beliefs is social punishment.

Comment author: [deleted] 08 May 2011 07:20:59PM *  12 points [-]

There was a time, many years ago, when I paid close attention to the arguments of the "truthers", and came to the conclusion that they were wrong. What you're doing now is bringing up the same old arguments with no obviously new evidence. I'm not going to give you my full attention, not because I want to close my eyes to the truth, but because I already looked at the evidence and already, in Bayesian terminology, updated my priors. Revisiting old evidence and arguments as if they were fresh evidence would arguably be an irrational thing for me to do, because it would be treating one piece of evidence as if it were two, updating twice on the same, rehashed, points that I've already considered.

I did not downvote you, because I have a soft spot for that sort of thing, but if other people have already, long ago, considered the best arguments and evidence, then at this point you really are wasting their time. It's not that they're rejecting evidence, I suspect, but that they're rejecting having their time being taken up with old evidence that they've already taken into account.

Comment author: lessdazed 09 May 2011 12:36:51AM *  6 points [-]

if you dismiss any evidence to the contrary you have to answer: what evidence would be strong enough to change your mind?

As a separate point, I have always argued against the validity of a certain argument against theists, that they are obligated to say what would constitute evidence sufficient to change their minds. The demand is an argument from ignorance. Nonetheless, being able to articulate what sufficiently contradictory evidence would be is a point in an arguer's favor, even though the inability to do so is not fatal.

In this case, I'd say the question is somewhat ill-formed for two reasons. First, many entirely different things would be sufficient evidence to get me to change my mind, but if other things were also the case, they would no longer be sufficient. Certain statements by the CIA might be sufficient, but not if there were also other statements from the FBI.

Second, there are many sorts of mind changing possible. The more sane conspiracy theorists simply say the official account is not credible. The others articulate theories that, even granting all of their premises, are still less likely than the official story. A related point is what it means to be wrong according to different logics. If I believe in Coca-Cola's version of Santa Claus and also believe that Kobe Bryant is left-handed, in one sense there is no "Kobe Bryant" in the same way that there is no "Santa Claus". In a more useful sense, we say "Kobe Bryant really exists, but is right-handed, and Santa Claus does not exist." This is so even though there is no one thing preventing us from saying "Santa Claus is really young, not old, tall and thin, not fat, has no beard and shaves his head, is black, and not white, and plays shooting guard for the Lakers under the alias "Kobe Bryant", and does nothing unusual on Christmas." Whether you say things I learn falsify the official story or modify it is a matter of semantics, but certain elements - like the involvement of Al-Qaeda - are more central to it than others. These elements are better established by existing evidence and would take correspondingly more evidence to dislodge.

So the answer to "what evidence would be strong enough to change your mind?" varies a lot depending on exactly what is being asked.

But my problem is not with people discounting evidence(everyone is free to close their eyes) but outright downvoting evidence that goes against their beliefs is social punishment.

I think it is notable and important that the different but similar things you said got different responses. One was downvoted unto automatic hiding (the threshold is set to hide at -3 or less (more negative) by default). One was downvoted much more. We can speculate as to why, but it's important to acknowledge different community responses to different behavior (I won't prejudge it by just saying they were different ways of "going against social beliefs").

Onto speculation: one problem with the video as evidence for explosions was a certain kind of jumping to conclusions. The guy said he heard explosions, but this is skipping a step. I could just as well say I heard people in a box, when I had actually heard sound waves emitted by a speaker attached to a computer. The guy's insistence that explosions were causing the sound is very strange, even granted that he had heard explosions before and the sounds he heard may have sounded exactly like those. Likewise for his claim they were coming from beneath him, considering what was going on.

Similarly, your assumption about the reason for your downvotes is certainly skipping steps. Most noticeable is how you don't distinguish what you are being socially punished for among your several downvoted posts, even though the responses to them were so different.

It's not so simple as that you were "go[ing] against their beliefs". Not everyone uses the voting function identically, but assuming many others use it as I do I can offer an analysis. I use it to push things to where I think they should be, rather than as an expression that I was glad I read a post (in hopes others will do the same, such that votes reflect what individuals were glad to have read. I believe something like this was the intent of the system's creators). I see -4 and -15 as not inappropriate final marks for your posts, and so didn't weigh in on them through the voting mechanism.

The problem with your first post was that it unfairly pushed the work of argument onto Eliezer. This is the same problem as with the poll sent out by the fundamentalists to philosophers a few months ago (I couldn't find it, but it included questions such as "Do you agree: life begins at conception?" and "Do you agree: humans are unique and unlike the other animals?"). The problem with that kind of question is that the work/number of words needed to adequately disentangle and answer it exceeds what is required to ask it. Your question also didn't start from anywhere; you would have gotten a better response if you had said you thought the beliefs either actually right or wrong, but not insane.

The tl;dr is that it was a passive-aggressive question. A small sin, for which it gets a -4, as the one voicing it implicitly disagrees with the belief and goes against the communal norm; how important that factor is, I can't know.

The video evidence was a larger sin, as it was basically a waste of time to listen to it. First, the guy emphasized that he certainly heard explosions beneath him, as if to disbelieve that would be to call him a liar. As I said above, this is the same thing ghost observers do: I don't necessarily disbelieve that you heard what you heard and saw what you saw; I'm just unsure about the original cause of that noise, especially considering that humans hear what they hear based on what they are familiar with hearing and expect to hear (the multiple-drafts model of cognition).

What's more, when the advocate of a position has an opportunity to direct someone to supporting evidence and must pick a single piece of it in an attempt to spread the belief, I expect them to go with their best argument, which in turn ought to sound pretty impressive, since even incorrect positions often have one compelling argument in their favor.

If I had come across the video you showed as the first video I saw in the course of randomly watching accounts of 9/11 survivors (if a random sample of survivors were filmed and archived), it would perhaps be somewhat suspicious. As a video cherry-picked by someone trying to justify skepticism, it's catastrophically weak, shockingly so actually. I expect cherry-picked evidence in favor of any conspiracy to at least induce a physiological response, e.g. OMG bush has reptilian eyes he is a reptile he is a lizard person, oh wait that's stupid, it's an artifact of light being shined on dozens of presidents millions of times and this video has been cherry-picked.

Comment author: wedrifid 08 May 2011 07:29:30AM 2 points [-]

I upvoted your comment prospectively. That is, it'll be worth an upvote when you edit out the passive-aggressive intro, and I'm being optimistic. :)

Sorry, but I don't feel that I have this freedom on LW. And I feel people moralize here especially using the downvote function.

We do. Not all the downvoting is moralizing but a significant subset is. And not all the moralizing is undesirable to me, even though a significant subset is.

For what it is worth, believing the WTC was loaded with explosives really is insane.

Comment author: roland 08 May 2011 07:02:34PM 0 points [-]

For what it is worth, believing the WTC was loaded with explosives really is insane.

How did you arrive at this conclusion? Did you really think it through or is it just a knee-jerk reaction?

Comment author: jsalvatier 08 May 2011 05:47:16PM 2 points [-]

Are there lots of other topics you feel this way about?

If it's just this topic, that doesn't seem like a very big deal to me. I have no doubt LW has at least a few topics where people have an unproductive moralizing response. However, if such toxicity is uncommon and doesn't affect important topics, then I don't think it's a very big deal (though certainly worth avoiding).

Comment author: [deleted] 08 May 2011 06:14:48PM 4 points [-]

It was made pretty clear in the other thread that the evidence linked was extremely weak.

Maybe that doesn't justify -15, but a priori I'd downvote it.

Comment author: wedrifid 08 May 2011 06:20:19PM 1 point [-]

but a priori I'd downvote it.

ceteris paribus?

Comment author: [deleted] 08 May 2011 07:25:30PM 2 points [-]

If I didn't already know it'd been downvoted into the asthenosphere, I would have downvoted it. But as it stands now, there's no reason for me to downvote it, because it's already been downvoted enough.

Comment author: wedrifid 08 May 2011 10:16:42PM 1 point [-]

If I didn't already know it'd been downvoted into the asthenosphere, I would have downvoted it. But as it stands now, there's no reason for me to downvote it, because it's already been downvoted enough.

I understood the message. But the Latin phrase was off. Ceteris paribus is the one that would fit.

Comment author: [deleted] 08 May 2011 11:54:56PM 1 point [-]

Fair enough.

Comment author: BrandonReinhart 08 May 2011 06:13:20AM 5 points [-]

Grunching. (Responding to the exercise/challenge without reading other people's responses first.)

Letting go is important. A failure of letting go is to keep professing belief in a thing you have come not to believe, because admitting the change involves pain. An example of this failure: I suggest a solution to a pressing design problem. Through conversation, it becomes apparent to me that my suggested solution is unworkable or has undesirable side effects. I realize the suggestion is a failure, but defend it to protect my identity as an authority on the subject and to avoid embarrassment.

An example of success: I stop myself, admit that I have changed my mind, that the idea was in error, and then relinquish the belief.

A 5-second-level description:

  • I notice that my actual belief state and my professed belief state do not match. This is a trigger that signals that further conscious analysis is needed. What I believe (the suggestion will have undesirable side effects) and what I desire to profess (the suggestion is good) are in conflict.

  • I notice that I feel impending embarrassment or similar types of social pain. This is also a trigger. The feeling that a particular action may be painful is going to influence me to act in a way to avoid the pain. I may continue to defend a bad idea if I'm worried about pain from retreat.

  • Noticing these states triggers a feeling of caution or revulsion: I may act in a way opposed to what I believe merely to defend my ego and identity.

  • I take a moment to evaluate my internal belief state and what I desire to profess. I actively override my subconscious desire to evade pain with statements that follow from my actual internal belief. I say "I'm sorry. I appear to be wrong."

An exercise to cause these sub-5-second events:

I proposed a scenario to my wife wherein she was leading an important scientific project. She was known among her team as an intelligent leader, and her team members looked up to her with admiration. A problem on the project was presented: without a solution, the project could not move forward. I told my wife that she had had a customary flash of insight and began detailing the solution: a plan to resolve the problem and move the project forward.

Then, I told her that a young member of her team revealed new data about the problem. Her solution wouldn't work. Even worse, the young team member looked smug about the fact that she had outsmarted the project lead. Then I asked, "What do you do?"

My wife said she would admit her solution was wrong and then praise the young team member for finding a flaw. Then she said this was obviously the right thing to do and asked me what the point of posing the scenario was.

I'm not sure my scenario/exercise is very good. The conversation that followed the scenario was more informative for us than the scenario itself.

Comment author: Cayenne 08 May 2011 06:31:48AM *  1 point [-]

Don't cherish being right, instead cherish finding out that you're wrong. You learn when you're wrong.

Edit - please disregard this post

Comment author: Alicorn 08 May 2011 06:39:25AM 0 points [-]

And under this model, we like learning because...?

Comment author: katydee 08 May 2011 06:59:55AM *  2 points [-]

Well, it isn't being wrong that you cherish under Cayenne's model, just finding out about it so that you can correct it. To put it in other terms, being wrong is bad, but learning that you are wrong is good, because all of a sudden something gets shifted out of the "unknown unknown" category.

Comment author: Cayenne 08 May 2011 07:29:09AM *  0 points [-]

This is it exactly!

Edit - please disregard this post

Comment author: wedrifid 08 May 2011 07:16:03AM *  4 points [-]

Don't cherish being right, instead cherish finding out that you're wrong. You learn when you're wrong.

I prefer to cherish being right enough that I appreciate finding out that I was wrong. It feels like more of a positive frame! (And the implicit snubbing to the typical "don't care about being right" injunction appeals.)

Comment author: Oscar_Cunningham 08 May 2011 09:43:40AM *  14 points [-]

My attempt at the exercise for the skill "Hold Off On Proposing Solutions"

Example: At a LessWrong meetup, someone talks about some problem they have and asks for advice; someone points out that everyone should explore the problem before proposing solutions. Successful use of the skill involves:

1) Noticing that a solution is being asked for. This is the most important sub-skill. It involves listening to everything you ever hear and sorting it into appropriate categories.

2) Come up with a witty and brilliant solution. This happens automatically.

3) Suppress the urge to explain the solution to everyone, even though it is so brilliant, and will make you look so cool, and (gasp) maybe someone else has thought of it, and you better say it before they do, otherwise it will look like it was their idea!

4) Warn other people to hold off on proposing solutions.

Exercise: Best done in a group, where the pressure to show intelligence is greatest. Read the group a list of questions. Use many different types of questions, some about matters of fact, some about opinion, and some asking for a solution. The first two types are to be answered immediately. The last type are to be met with absolute silence. Anyone found talking after a solution has been requested loses points.

Encourage people to write down any solutions they do come up with. After the exercise is finished, destroy all the written solutions, and forbid discussion of them.

Comment author: alexflint 08 May 2011 02:13:47PM 4 points [-]

Wouldn't it be better to realise, right after step (1), that one needs to avoid coming up with solutions, and to deliberately focus one's mind on understanding the problem? Avoiding verbalization of solutions is good, but they can still pollute your own thinking, even if not others'.

Comment author: HopeFox 09 May 2011 12:04:56AM *  8 points [-]

I think I've started to do this already for Disputing Definitions, as has my girlfriend, just from listening to me discussing that article without reading it herself. So that's a win for rationality right there.

To take an example that comes up in our household surprisingly often, I'll let the disputed definition be "steampunk". Statements of the form "X isn't really steampunk!" come up a lot on certain websites, and arguments over what does or doesn't count as steampunk can be pretty vicious. After reading "Disputing Definitions", though, I learnt to classify those arguments as meaningless and get to the real question: "Do I want this thing in my subculture / on my website?" I think the process by which I recognise these questions goes something like this:

1) Make the initial statement. "A hairpin made out of a clock hand isn't steampunk!"

2) Visualise, even briefly, every important element in what I've just said. Visualising a hairpin produces an image of a thing stuck through a woman's hair arrangement. Visualising a clock hand produces a curly, tapered object such as one might see on an antique clock. Visualising "steampunk" produces... no clearly defined mental image.

3) Notice that I am confused. Realise that I've just made a statement about something that I can't properly visualise, something that I don't think I've properly defined in my own brain, so how can I expect anyone else to have a proper definition at all, let alone one that agrees with mine? (Honestly, the fact that I keep writing "steampunk" in quotation marks should have been a clue already.)

4) Correct my mistake. "Hmm, now that I think about it, what I just said didn't actually mean anything. What's the point of this discussion again? Are we arguing about whether or not this picture should be on the website, or whether this person should be going to conventions, or what? If so, let's talk about that specifically. Let's not pretend that "steampunk" exists as a concrete category boundary in the phase space of fashion accessories, okay?"

Now, this process can fall down at step 2 when I, personally, have a very well-defined mental image of what a word means (such as "sound", which I will always take to mean "compression waves of the sort that a human or other animal might detect as auditory input, whether or not a listener is actually present"), but which other people might interpret differently. Here, the trick to step 2 is to imagine my listener's most obvious responses, based on my experience in discussing the topic previously (such as "But there's nobody to hear it, so by definition there's no sound!"). If I can imagine somebody saying this, without also being forced to imagine that the speaker is hopelessly misinformed, mentally deficient, or some other kind of irrational mutant, then what I'm saying must have some defect, and I should re-examine my words.

As for a training exercise, step 2 seems to be the one to train. The "rationalist taboo" technique seems pretty effective here. Discuss a topic with the student, and when they use a word that doesn't seem to mean anything, or means too many things at once, taboo it and get them to restate their point. Encourage the student to visualise everything they say, if only briefly, and explain that anything they can't visualise properly is suspect.

Alternatively, allow the student to get into a couple of disputes over definitions, let them experience firsthand how frustrating it is, then point them to this blog and show them that there's a solution. Their frustration will drive them to adopt a method of implementing the solution in their own discourse. Worked for me!

Comment author: atucker 09 May 2011 03:38:48AM 2 points [-]

Why is so much of the discussion about the "avoid moralizing" statement?

Comment author: Eliezer_Yudkowsky 09 May 2011 04:58:43AM 2 points [-]

I made the mistake of using a word for something people shouldn't do. Then they started disputing the definition of the word, even after I told them not to. I will edit to take out the evil word.

Comment author: Eliezer_Yudkowsky 09 May 2011 05:02:59AM 5 points [-]

The word "moralize" has now been eliminated from the blog post. Apparently putting a big warning sign up saying "Don't argue about how to verbally define this problem behavior, it won't be fun for anyone and it won't get us any closer to having a relaxed rationalist community where people worry less about stepping in potholes" wasn't enough.

Comment author: Eugine_Nier 09 May 2011 05:29:29AM *  7 points [-]

Apparently putting a big warning sign up saying "Don't argue about how to verbally define this problem behavior, it won't be fun for anyone and it won't get us any closer to having a relaxed rationalist community where people worry less about stepping in potholes" wasn't enough.

I would just like to point out the irony of telling people you're training to be rationalists not to reason about a concept.

Edit: A better way to express what I find ironic about Eliezer's statement, is that at least half the people here started their journey into rationalism by ignoring the big bright warning sign saying "Don't question God!" This fact is useful to keep in mind when predicting their reactions to big bright warning signs.

Comment author: rhollerith_dot_com 09 May 2011 06:04:12AM *  1 point [-]

It's ironic only to those who have different ideas about what it means to reason. Reason need not be applied indiscriminately. (And it's not equivalent to arguing.)

Comment author: Eugine_Nier 09 May 2011 06:46:35AM 2 points [-]

Reason need not be applied indiscriminately.

This is a very interesting statement (with which I agree). I would also like to see your explanation for when it's inappropriate to apply reason; I'll post mine afterwards.

(And it's not equivalent to arguing.)

I don't quite see the distinction you're trying to make. Especially in this context since the posters arguing about morality were certainly trying to reason about it and not just arguing for the sake of arguing.

Comment author: rhollerith_dot_com 09 May 2011 07:15:57AM *  1 point [-]

would also like to see your explanation for when it's inappropriate to apply reason

It is inappropriate -- well, let us say it is a mistake in reasoning -- to apply reason to something whenever it is obvious that the time and mental energy are better applied to something else. My point is that I do not see the irony in Eliezer's advising his readers that some particular issue is not worth applying reason to.

(And it's not equivalent to arguing.)

I don't quite see the distinction you're trying to make.

Can I just declare my statement in parens above to be withdrawn? :)

Comment author: Eugine_Nier 09 May 2011 07:47:51AM *  7 points [-]

would also like to see your explanation for when it's inappropriate to apply reason

It is inappropriate -- well, let us say it is a mistake in reasoning -- to apply reason to something whenever it is obvious that the time and mental energy are better applied to something else.

Interesting, I had in mind something much stronger. For example, if you attempt to apply too much reasoning to a Schelling point, you'll discover that the Schelling point's location was ultimately arbitrary and greatly weaken it in the process.

Another related example: you shouldn't attempt to (re)create hermeneutic truths/traditions from first principles. You won't be able to create a system that works in practice, but you might falsely convince yourself that you have.

Comment author: byrnema 09 May 2011 03:18:06PM 2 points [-]

You've articulated a couple of ideas that have been lurking in the collective consciousness here on Less Wrong but which, as far as I know, haven't been made definite: why some topics shouldn't have too much light directed at them -- ironically, as you originally point out, in the interest of reason. The concern has been very vague, and precisely because it hasn't been articulated, it persists more strongly than it might otherwise. I would encourage development of these points (not specifically by you, or specifically in this thread, but by anyone, wherever).

Comment author: wedrifid 09 May 2011 06:33:28AM 1 point [-]

I would just like to point out the irony of telling people you're training to be rationalists not to reason about a concept.

Not quite ironic. More just arbitrary.

Comment author: lessdazed 09 May 2011 03:00:49PM *  3 points [-]

at least half the people here started their journey into rationalism by ignoring the big bright warning sign saying "Don't question God!"

Your edit is perfectly sufficient and I have no criticisms of it. However, the point can be expanded upon such that it will seem different and it may appear I am disagreeing.

The metaphorical signs that exist invoke the idea "Don't question God!", but in the West, that's not too close to what they actually say. In religious communities at least moderately touched by the Enlightenment, enough distaste for signs reading "Don't question God!" has been absorbed that such signs would be disrespected as low status.

This is something a member of a moderate strain of fundamentalism might pride himself or herself on, as a factor that distinguishes him or her from literalists, perhaps as an important part of his or her identity.

To make someone think "Don't question God (this time)!", the sign might say something like "You don't know what the consequences would have been had those people lived. God does, so rely on his judgment."

The "this time" will happen to be every time, but the universality of it won't be derived from so general a rule; it will be a contingent truth but not a logical one exactly.

Comment author: wedrifid 09 May 2011 06:15:12AM *  0 points [-]

The word "moralize" has now been eliminated from the blog post. Apparently putting a big warning sign up saying "Don't argue about how to verbally define this problem behavior, it won't be fun for anyone and it won't get us any closer to having a relaxed rationalist community where people worry less about stepping in potholes" wasn't enough.

In case it isn't clear, let me say that my reply continues to apply to the current version. I refer to the underlying concept described, not the word, so consider my reply to be edited to match.

Comment author: [deleted] 09 May 2011 07:03:36AM 3 points [-]

"Moralizing is the mind-killer"?

Nah, just kidding. Making a joke.

Comment author: wedrifid 09 May 2011 09:45:18AM 7 points [-]

"Moralizing is the mind-killer"?

Nah, just kidding. Making a joke.

No, that's more or less right. Which is unsurprising since moralizing is just politics.

Comment author: calcsam 09 May 2011 07:43:41AM -1 points [-]

Good post. This invokes, of course, the associated problem of phrasing this in a way that might encourage listening on the other end.