IAWYC, and introspective access to what my mind was doing on this timescale was one of the bigger benefits I got out of meditation. (Note: Probably not one of the types of meditation you've read about). However, I don't think you've correctly identified what went wrong in the example with red. Consider this analogous conversation:
What's a Slider? It's a Widget.
What's a Widget? It's a Drawable.
What's a Drawable? It's an Object.
In this example, as with the red/color example, the first question and answer was useful and relevant (albeit incomplete), while the next two were useless. The lesson you seem to have drawn from this is that looking down (subclassward) is good, and looking up (superclassward) is bad. The lesson I draw from this is that relevance falls off rapidly with distance, and that each successive explanation should be of a different type. It is better to look a short distance in each direction rather than to look far in any one direction. Compare:
X is a color. This object is X. (One step up, one step down)
X is a color. A color is a quality that things have. (Two steps up)
This object is X. That object is also X. (Two steps down)
I would expect the first of these three explanations to succeed, and the other two to fail miserably.
"One step up and one step down" sounds like a valuable heuristic; it's what I actually did in the post, in fact. Upvoted.
A few months later, I've been teaching Anna and Luke and Will Ryan and others this rule as the "concrete-abstract pattern". Give a specific example with enough detail that the listener can visualize it as an image rather than as a proposition, and then describe it on the level of abstraction that explains what made it relevant. I.e., start with an application of Bayes's Theorem, then show the abstract equation that circumscribes what is or isn't an example of Bayes's Theorem.
Also, it is very important to give counter-examples: 'This crow over there belongs to the bird category. But the plane in the sky and the butterfly over there do not.' Or, more fitting the 'red' example: 'That stop sign and that traffic light are red. But this other traffic sign (can't think of an example) isn't.'
The same could be done with categories: 'Red is a color. Red is not a sound.'
I guess this one has something to do with confirmation bias, as cwillu suggested.
I'm a big fan of breaking things down into the finest-grained thoughts possible, but it still surprises me how quickly this gets complicated when trying to actually write it down.
http://lesswrong.com/lw/2l6/taking_ideas_seriously/
Example: Bob is overweight and an acquaintance mentions some "shangri-la" diet that helps people lose weight through some "flavor/calorie association". Instead of dismissing it immediately, he looks into it, adopts the diet, and comfortably achieves his desired weight.
1) Notice the feeling of surprise when encountering a claim that runs counter to your expectations.
2) Check in far mode the importance of the claim if it were true by running through a short list of concrete implications (eg "I can use this diet and as a result, I can enjoy exercise more, I can feel better about my body, etc")
3) Imagine reaping the benefits in ...
"Be specific" is a nice flinch, I've always had it and it helps a lot. "Don't moralize" is a flinch I learned from experience and it also helps. Here's some other nice flinches I have:
"Don't wait." Waiting for something always takes more time than I thought it would, so whenever I notice myself waiting, I switch to doing something useful in the meanwhile and push the waiting task into the background. Installing the habit took a little bit of effort, but by now it's automatic.
"Don't hesitate." With some effort I got a working version of this flinch for tasks like programming, drawing or physical exercise. If something looks like it would make a good code fix or a good sketch, do it immediately. Would be nice to have this behavior for all other tasks too, but the change would take a lot of effort and I'm hesitating about it (ahem).
"Don't take on debt." I instinctively run away from anything that looks even vaguely like debt. I've had this flinch for as long as I can remember. In fact I don't remember ever owing anyone more than $100. So far it's served me well.
Well, it makes me sad to see a very standardized and crisp term like "liability" used in such a confusing and nonstandard way. Especially when there is another equally crisp and very standardized term ("expense") that could be used instead. And I do not want to talk about it anymore.
I find that I worry a lot less about checking up on background tasks (compiles, laundry, baking pies, brewing tea, etc.) if I know I'll get a clear notification when the process is complete. If it's something that takes a fixed amount of time I'll usually just set a timer on my phone — this is a new habit that works well for tea in particular. Incidentally, owning an iPhone has done a surprising amount for my effectiveness just by reducing trivial inconveniences for this sort of thing.
For compiles, do something like
$ make; growlnotify -m "compile done!"
or run a script that sends you an SMS or something. This is something that I'm not in the habit of doing, but I just wrote myself a note to figure something out when I get into work on Monday.[1] (For most of my builds it's already taken care of, since it brings up a window when it's done. This would be for things like building the server, which runs in a terminal, and for svn updates, which are often glacial.)
[1] This is another thing that helps me a lot. Write things down in a place that you look at regularly. Could be a calendar app, could be a text file in Dropbox, whatever.
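For setups where growlnotify isn't available (it's Mac-specific), here is a sketch of the same trick as a small Python wrapper. The notifier command names (growlnotify, notify-send) are assumptions about what might be installed; it falls back to a plain print so it always tells you something:

```python
import shutil
import subprocess

def notify(message: str) -> None:
    """Send a desktop notification with whatever notifier is installed."""
    if shutil.which("growlnotify"):
        subprocess.run(["growlnotify", "-m", message])
    elif shutil.which("notify-send"):
        subprocess.run(["notify-send", message])
    else:
        print(f"NOTIFY: {message}")  # fallback: at least print something

def run_and_notify(*cmd: str) -> int:
    """Run a command (e.g. a build) and notify when it finishes."""
    status = subprocess.run(cmd).returncode
    outcome = "done" if status == 0 else f"FAILED ({status})"
    notify(f"{outcome}: {' '.join(cmd)}")
    return status

# Usage: run_and_notify("make")
```

The point of notifying on failure too is that a silent failed build is exactly the kind of background task you forget to check on.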
My attempt at the exercise for the skill "Hold Off On Proposing Solutions"
Example: At a LessWrong meet up someone talks about some problem they have and asks for advice, someone points out that everyone should explore the problem before proposing solutions. Successful use of the skill involves:
1) Noticing that a solution is being asked for. This is the most important sub-skill. It involves listening to everything you ever hear and sorting it into appropriate categories.
2) Come up with a witty and brilliant solution. This happens automatically.
3) Suppress the urge to explain the solution to everyone, even though it is so brilliant, and will make you look so cool, and (gasp) maybe someone else has thought of it, and you better say it before they do, otherwise it will look like it was their idea!
4) Warn other people to hold off on proposing solutions.
Exercise: Best done in a group, where the pressure to show intelligence is greatest. Read the group a list of questions. Use many different types of questions, some about matters of fact, some about opinion, and some asking for a solution. The first two types are to be answered immediately. The last type are to be met with absolute silence. Anyone found talking after a solution has been requested loses points.
Encourage people to write down any solutions they do come up with. After the exercise is finished, destroy all the written solutions, and forbid discussion of them.
I think that the big skill here is not being offended. If someone can say something and control your emotions, literally make you feel something you had no intention to feel beforehand, then perhaps it's time to start figuring out why you're allowing people to do this to you.
At a basic level anything someone can say to you is either true or false. If it's true then it's something you should probably consider and accept. If it's false then it's false and you can safely ignore/gently correct/mock the person saying it to you. In any case there really isn't any reason to be offended and especially there is no reason to allow the other person to provoke you to anger or acting without thought.
This isn't the same as never being angry! This is simply about keeping control for yourself over when and why you get angry or offended, rather than allowing the world to determine that for you.
Edit - please disregard this post
rationalists don't moralize
I like the theory but 'does not moralize' is definitely not a feature I would ascribe to Eliezer. We even have people quoting Eliezer's moralizing for the purpose of spreading the moralizing around!
"Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever."
In terms of the general moralizing tendencies of people who identify as rationalists, they seem to moralize slightly less than average, but the most notable difference is what they choose to moralize about. When people happen to have similar morals to yourself, it doesn't feel like they are moralizing as much.
Sigh.
A 5-second method (that I employ to varying levels of success) is whenever I feel the frustration of a failed interaction, I question how it might have been made more successful by me, regardless of whose "fault" it was. Your "sigh" reaction comes across as expressing the sentiment "It's your fault for not getting me. Didn't you read what I wrote? It's so obvious". But could you have expressed your ideas almost as easily without generating confusion in the first place? If so, maybe your reaction would be instead along the lines of "Oh that's interesting. I thought it was obvious but I guess I can see how that might have generated confusion. Perhaps I could...".
FWIW I actually really like the central idea in this post, and arguably too many of the comments have been side-tracked by digressions on moralizing. However, my hunch is that you probably could have easily gotten the message across AND avoided this confusion. My own specific suggestion here is that stipulative definitions are semantic booby traps, so if possible avoid them. Why introduce a stipulative definition for "moralize" when a less loaded phrase like "susp...
Eliezer, did you mean something different by the "does not get bullet" line than I thought you did? I took it as meaning: "If your thinking leads you to the conclusion that the right response to criticism of your beliefs is to kill the critic, then it is much more likely that you are suffering from an affective death spiral about your beliefs, or some other error, than that you have reasoned to a correct conclusion. Remember this, it's important."
This seems to be a pretty straightforward generalization from the history of human discourse, if nothing else. Whether it fits someone's definition of "moralizing" doesn't seem to be a very interesting question.
When people say they appreciate rationalists for their non-judgmentalism, I think they mean more than just that rationalists tend not to moralize. What they also mean is that rationalists are responsive to people's actual statements and opinions. This is separate from moralizing and in my opinion is more important, both because it precedes it in conversation and because I think people care about it more.
Being responsive to people means not (being interpreted as [inappropriately or] incorrectly) assuming what a person you are listening to thinks.
If someone says "I think torture, such as sleep deprivation, is effective in getting information," and they support, say, both the government doing it and legalizing it, judging them to be a bad person for that and saying so won't build communal ties, but it's unlikely to be frustrating for the first person.
If, on the other hand, they don't support the legalization or morality of it despite their claim it is effective, indignation will irritate them because it will be based on false assumptions about their beliefs.
If someone says "I'm thinking of killing myself", responding with "That violates my arbitrary an...
On the topic of the "poisonous pleasure" of moralistic critique:
I am struck by the will to emotional neutrality which appears to exist among many "aspies". It's like a passive, private form of nonviolent resistance directed against neurotypical human nature; like someone who goes limp when being beaten up. They refuse to take part in the "emotional games", and they refuse to resist in the usual way when those games are directed against them - the usual form of defense being a counterattack - because that would make them just as bad as the aggressor normals.
For someone like that, it may be important to get in touch with their inner moralizer! Not just for the usual reason - that being able to fight back is empowering - but because it's actually a healthy part of human nature. The capacity to denounce, and to feel the sting of being denounced without exploding or imploding, is not just some irrational violent overlay on our minds, without which there would be nothing but mutual satisficing and world peace. It has a function and we neutralize it at our peril.
If the message you intend to send is "I am secure in my status. The attacker's pathetic attempts at reducing my status are beneath my notice.", what should you do? You don't seem to think that ignoring the "attacks" is the correct course of action.
This is a genuine question. I do not know the answer and I would like to know what others think.
My visualization for a billion looks just like my visualization for a million, and a year seems like a long time to start with, so I can't imagine a billion years.
Would it help to be more specific? Imagine a little cube of metal, 1mm wide. Imagine rolling it between your thumb and fingertip, bigger than a grain of sand, smaller than a peppercorn. Yes?
A one-litre bottle holds 1 million of those. (If your first thought was the packing ratio, your second thought should be to cut the corners off to make cuboctahedra.)
Now imagine a cubic metre. A typical desk has a height of around 0.75m, so if its top is a metre deep and 1.33 metres wide (quite a large desk), then there is 1 cubic metre of space between the desktop and the floor.
It takes 1 billion of those millimetre cubes to fill that volume.
Now find an Olympic-sized swimming pool and swim a few lengths in it. It takes 2.5 trillion of those cubes to fill it.
Fill it with fine sand of 0.1mm diameter, and you will have a few quadrillion grains.
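For anyone who would rather check the arithmetic above than take it on faith, here is a quick sketch. The pool dimensions (50 m × 25 m × 2 m) are my own assumption: "Olympic-sized" fixes the length and width, but depths vary, so the trillions figure is approximate.

```python
# Counting 1 mm metal cubes in the volumes described above.

litre_mm3 = 100 ** 3                 # a litre is a 10 cm = 100 mm cube
print(litre_mm3)                     # 1000000: a million cubes per bottle

cubic_metre_mm3 = 1000 ** 3          # a metre is 1000 mm
print(cubic_metre_mm3)               # 1000000000: a billion under the desk

desk_m3 = 0.75 * 1.0 * 1.33          # height x depth x width of the desk
print(desk_m3)                       # roughly 1 cubic metre

pool_m3 = 50 * 25 * 2                # assumed Olympic pool dimensions
pool_mm3 = pool_m3 * cubic_metre_mm3
print(f"{pool_mm3:.1e}")             # 2.5e+12: 2.5 trillion cubes

grains = pool_mm3 * 1000             # ~1000 grains of 0.1 mm sand per mm^3
print(f"{grains:.1e}")               # 2.5e+15: a few quadrillion
```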
A bigger problem I have with the original is where X says "It's really important to me what happens to the species a billion years from now." The species, a billion years from now? Tha...
take a rationalist skill you think is important
Facing Reality, applied to self-knowledge
come up with a concrete example of that skill being used successfully;
"It sure seems I can't get up. Yet this looks a lot like laziness or attention-whoring. No-no-I'm-not-this-can't-be-STOP. Yes, there is a real possibility I could get up but am telling myself I can't, and I should take that into account. But upon introspection, and trying to move the damn things, it does feel like I can't, which is strong evidence.
So I'm going to figure out some tests. Maybe see a doctor; try to invoke reflexes that would make me move (careful, voluntary movement can truly fail even if reflexes don't); ask some trusted people, telling them the whole truth. Importantly, I'm going to refuse to use it as an excuse to slack off. I can crawl!"
crawls to nearest pile of homework, and works lying prone, occasionally trying to get up
decompose that use to a 5-second-level description of perceptual classifications and emotion-evoking contexts and associative triggers to actionable procedures;
I think I've started to do this already for Disputing Definitions, as has my girlfriend, just from listening to me discussing that article without reading it herself. So that's a win for rationality right there.
To take an example that comes up in our household surprisingly often, I'll let the disputed definition be "steampunk". Statements of the form "X isn't really steampunk!" come up a lot on certain websites, and arguments over what does or doesn't count as steampunk can be pretty vicious. After reading "Disputing Definitions", though, I learnt how to classify those arguments as meaningless, and get to the real question, being "Do I want this thing in my subculture / on my website"? I think the process by which I recognise these questions goes something like this:
1) Make the initial statement. "A hairpin made out of a clock hand isn't steampunk!"
2) Visualise, even briefly, every important element in what I've just said. Visualising a hairpin produces an image of a thing stuck through a woman's hair arrangement. Visualising a clock hand produces a curly, tapered object such as one might see on an antique clock. Visualising "...
"Don't be stopped by trivial inconveniences"
I used to do really stupid things and waste lots of time by always taking the path of least resistance. I'm not sure if other people have the same problem, but I might as well post.
An example of being stopped: "Hmm, I can't find any legitimate food stands around here. I guess I'll go eat at the ice cream stand right here then."
An example of overcoming: "Hmm, I can't find any legitimate food stands around here. That's weird. Lemme go to the information desk and ask where there is one."
What it feels like:
You have a goal
You realize that there are particular obstacles in your way
You decide to take a suboptimal road as a result
What you do to prevent it:
Notice that the obstacle isn't that big of a deal, and figure out if there are ways to circumvent this. If those ways are easy, do them. Basically, move something from not reachable to reachable.
I haven't seen meditative practices described much here, and I've known firsthand how they can help with this level of introspection. So, for those who might wish to try, I'll briefly describe the plain instruction given to zen students. If you want to read in a bit more detail, the thin book "Zen in Plain English" is an excellent intro.
Sit in a quiet place, with lights dimmed, facing a wall, with your back straight (ex: use a cushion for lower back support). Half-close your eye lids. Adjust your breathing by taking a few deep breaths and then fa...
why is it that once you try out being in a rationalist community you can't bear the thought of going back
Nitpick: It took me a bit to realize you meant "going back to being among non-rationalists" rather than "going back to the meeting".
Or you could start talking about feminism, in which case you can say pretty much anything and it's bound to offend someone. (Did that last sentence offend you? Pause and reflect!)
Unfortunately I recognize that as the bitter truth, so it's of no use for me for training purposes.
Here's something ...
Grunching. (Responding to the exercise/challenge without reading other people's responses first.)
Letting go is important. A failure in letting go is to cling to the admission of belief in a thing which you have come not to believe, because the admission involves pain. An example of this failure: I suggest a solution to a pressing design problem. Through conversation, it becomes apparent to me that my suggested solution is unworkable or has undesirable side effects. I realize the suggestion is a failure, but defend it to protect my identity as an authority ...
"Red is a color" and cuts themselves off and says "Red is what that stop sign and that fire engine have in common"
They are both physical objects, usually containing some metal and of roughly the same height, that have the ability to stop traffic, thus are found on a road, and have the colors of silver and white and (presumably by the specification of "that") also red in common?
...(by which the indignant demand that others agree with their indignation), which is unfortunately how I tended to write back when I was writing the original Le
The word "moralize" has now been eliminated from the blog post. Apparently putting a big warning sign up saying "Don't argue about how to verbally define this problem behavior, it won't be fun for anyone and it won't get us any closer to having a relaxed rationalist community where people worry less about stepping in potholes" wasn't enough.
Rationalists should also strive to be precise, but you should not try to express precisely what time it was that you stopped beating your wife.
Much of rationality is choosing what to think about. We've seen this before in the form of righting a wrong question, correcting logical fallacies (as above), using one method to reason about probabilities in favor of another, and culling non-productive search paths (which might be the most general form here).
The proper meta-rule is not 'jump past warning signs'. I'm not yet ready to propose a good phrasing of the proper rule.
would also like to see your explanation for when it's inappropriate to apply reason
It is inappropriate -- well, let us say it is a mistake in reasoning -- to apply reason to something whenever it is obvious that the time and mental energy are better applied to something else.
Interesting, I had in mind something much stronger. For example, if you attempt to apply too much reasoning to a Schelling point, you'll discover that the Schelling point's location was ultimately arbitrary and greatly weaken it in the process.
Another related example is that you shouldn't attempt to (re)create hermeneutic truths/traditions from first principles. You won't be able to create a system that will work in practice, but might falsely convince yourself that you have.
"Moralizing is the mind-killer"?
Nah, just kidding. Making a joke.
No, that's more or less right. Which is unsurprising since moralizing is just politics.
don't go into the evolutionary psychology of politics or the game theory of punishing non-punishers
OK, so you're saying that to change someone's mind, identify mental behaviors that are "world view building blocks", and then to instill these behaviors in others:
...come up with exercises which, if people go through them, causes them to experience the 5-second events
Such as:
...to feel the temptation to moralize, and to make the choice not to moralize, and to associate alternative procedural patterns such as pausing, reflecting...
Or:
......
The 5-second method is sufficiently general to coax someone into believing any world view, not just a rationalist one.
Um, yes. This is supposed to increase your general ability to teach a human to do anything, good or bad. In much the same way, having lots of electricity increases your general ability to do anything that requires electricity, good or bad. This does not make electrical generation a Dark Art.
I wanted to do the 5-second decomposition on what I think is one of the most important qualities of a rationalist: s/he is able to say "oops!", but I found that it's probably a rationalist primitive. Anyway, here's my attempt:
I know that I'll probably be downvoted again, but nevertheless.
Which in practice, makes a really huge difference in how much rationalists can relax when they are around fellow rationalists. It's the difference between having to carefully tiptoe through a minefield and being free to run and dance, knowing that even if you make a mistake, it won't socially kill you.
Sorry, but I don't feel that I have this freedom on LW. And I feel people moralize here especially using the downvote function.
To give a concrete example of Eliezer himself
A point about counteracting evidence: if I believe I have a weighted six sided die that yields a roll of "one" one out of every ten rolls rather than one out of every six rolls as a fair die would, a single roll yielding a "one" is evidence against my theory. In a trial in which I repeatedly roll the die, I should expect to see many rolls of "one", even though each "one" is more likely under the theory the die is fair than it is under the theory the die is weighted against rolls of "one".
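The likelihood ratios here are easy to compute exactly. A sketch using the 1/10 and 1/6 figures from the example above (the 60-roll trial length is just for illustration):

```python
from fractions import Fraction

p_weighted = Fraction(1, 10)   # weighted die: "one" on 1/10 of rolls
p_fair = Fraction(1, 6)        # fair die: "one" on 1/6 of rolls

# Likelihood ratio for a single observed "one" (weighted : fair):
lr = p_weighted / p_fair
print(lr)                      # 3/5: each "one" is evidence against "weighted"

# Yet the weighted theory still predicts many ones over a long trial:
n = 60
print(n * p_weighted)          # 6 expected ones if weighted
print(n * p_fair)              # 10 expected ones if fair
```

So after 60 rolls you should expect to have seen a handful of ones either way; it's the ratio of ones to total rolls, not their mere presence, that distinguishes the theories.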
You really didn't present evidence that contradicted anything; the most this sort of testimony could be is, as you said, "evidence to the contrary", but not, as you also said, something that "contradicts". One thing to look out for is idiosyncratic word usage. Apparently, I interpret the word "contradict" to be much stronger than you do. It would be great to find out how others interpret it; there are all sorts of possibilities.
When I consider whether or not the things I am directed to are good evidence of a conspiracy behind the destruction of the World Trade Center, I discount apparent evidence indicating a conspiracy against what I w...
There was a time, many years ago, when I paid close attention to the arguments of the "truthers", and came to the conclusion that they were wrong. What you're doing now is bringing up the same old arguments with no obviously new evidence. I'm not going to give you my full attention, not because I want to close my eyes to the truth, but because I already looked at the evidence and already, in Bayesian terminology, updated my priors. Revisiting old evidence and arguments as if they were fresh evidence would arguably be an irrational thing for me to do, because it would be treating one piece of evidence as if it were two, updating twice on the same, rehashed, points that I've already considered.
I did not downvote you, because I have a soft spot for that sort of thing, but if other people have already, long ago, considered the best arguments and evidence, then at this point you really are wasting their time. It's not that they're rejecting evidence, I suspect, but that they're rejecting having their time being taken up with old evidence that they've already taken into account.
5 second level for evidence as soldiers
I was thinking about how Beliefs Must Pay Rent the other day, because my wife is much better than me at noticing when this isn't happening. One major trick to this is that she always asks (at least internally), "So what?"
That is, rather than immediately finding a way to attack whatever it is that the other person said, she considers whether what they've said affects anything in their argument. One line of inquiry is, "can I concede this point and still win?" But "so what?" goes further than that -- it helps her internall...
For my attempt at the exercise I pick a sub-skill of "reading, pen-in-hand" that I call "spotting opportunities to engage." My attempt runs to 2020 words and was rejected by the LessWrong software for being too long. I've put the raw text on a web page. Sorting out the html will have to wait for another day.
Why so long? I see the skill as very important. I'm crap at it. I've just had a success that I'm pleased with, but it is too recent, I haven't had time to boil it down so that I can describe it briefly.
Something I still need to work on, but which I think would be an important one (perhaps instead a general class of 5-second-skills rather than a single one) would be "remember what you know when you need it"
Example: you're potentially about to escalate an already heated political debate and make it personal. 5-second-skill: actually remembering that politics is the mind-killer, thus giving yourself a chance to pause, reconsider what you're about to do, and avoid doing something stupid.
I'd also apply this notion to what you sa...
One of the things I think virtue ethics gets right is that if you think, say, lying is wrong, then you should have a visceral reaction to liars. You shouldn't like liars. I don't think this is irrational at all (the goal isn't to be Mr. Spock). Having a visceral reaction to liars is part of how someone who thinks lying is wrong embodies that principle, as much as not lying is. If somebody claims to follow a moral principle but fails to have a visceral reaction to those who break it, that's an important cue that something is wrong. That goes doubly for yourself. Purposefully breaking that connection by avoiding becoming indignant seems like throwing away important feedback.
Answering "a color" to the question "what is red?" is not irrational or wrong in any way. In fact, it is the answer that is usually expected. Often when people ask "what is X?" they do in fact mean "to what category does X belong?". I think this is especially true when teaching. A teacher will be happy with the answer "red is a color".
I suggest a different and possibly better way of thinking about what Eliezer says about "moralizing" and "judging": don't judge other people. Enabling reasonable discussion and being fun to be around depend more on whether one turns disagreement into personal disdain than on what sort of disagreements one has.
(Some "moralizing" talk doesn't explicitly pass judgement on another person's worth, but I think such judgement, even if implicit, is the thing that's corrosive.)
The first fictional example I thought of was the Wax Lips scene from The Simpsons. "Try our wax lips: the candy of 1000 uses!" "Like what?" "One, a humourous substitute for your own lips." "Keep going..." "Two, err...oh, I'm needed in the basement!"
I thought of a few five-second skills like this:
I noticed that all of my 5-second skills (and Eliezer's also) involve doing more mental work than you're instinctively inclined to do at a key point. This makes sense if...
So here is a procedure I actually developed for myself a couple of months ago. It's self-helpy (the purpose was to solve my self-esteem issues) but I think indignant moralizing uses some of the same mental machinery, so it's relevant to the task of becoming less judgemental in general.
I believed that self-esteem doesn't say anything about the actual world so it would be a good idea to disconnect it from external feedback and permanently set to a comfortable level. At some point I realized that this idea was too abstract and I had to be specific to actually ch...
This post was almost useless for me - I learned much less from it than from any post in the Sequences. What I did learn: what over-generalization looks like; that someone thinks other people learn rationality skills in a way I have never seen anyone learn, with a totally different language and way of thinking about it; and that translating is important.
The way I see it: people look at the world with different lenses. My rationality skills are the lenses that are instinctive to me and fall within the rationality-skills subset.
I learned them mostly by s...
Example
Recognizing the distractions. I'm struggling to come up with an idea on how to do this other than a form of awareness or attention meditation.
I'm going to attempt the exercise of turning "Don't dominate conversations" into something that you can train yourself to do on the 5-second level.
I count the number of active (or would-be active) members of a conversation and convert that into a rough percentage that I can hold in my mind quite easily. Three people = 33%, Four people = 25% and so on.
I keep an approximate running guess of the percentage of the conversation time where I was speaking (no more accurate than an amateur card counter in Blackjack needs to be) and use it to guide my behaviour whe...
I'm new here and couldn't find a better place to ask this: Are there any exercises to train such skills on the site? For example a list of statements to assess their testability?
Also I was wondering if there is some sort of pleasant way to access this site using an Android phone. I would like to read the sequences on mine.
Oh and hello everybody! :) I hope I can find the time and motivation to spend some time in this place, I think I might like to have your skills. ^^
If I violate any of your rules or anything just let me know I have barely scratched the surface of this seemingly massive site.
Could someone give me the reasoning for why silver-lining thinking itself is bad? Making mistakes is inevitable, so I would have thought this is a way to start to look past the mistake and give it a sense of perspective. Falsely rationalising a bad thing into a good thing is not valuable. However, I would have thought it a useful skill to take a bad thing and work out how to turn the situation you are now in into a more positive experience, or, if you are completely stuck, to realise that it is time to move on. Please explain if you believe that I am wrong.
It might be useful to form a habit of reflexively trying to think about a problem in the mode you're not currently in, trying to switch to near mode if in far, or vice-versa. Even just a few seconds of imagining a hypothetical situation as if it were imminent and personal could provoke insight, and trying to 'step back' from problems is already a common technique.
I've used this to convince myself that a very long or unbounded life wouldn't get boring. When I try to put myself in near-mode, I simply can't imagine a day 2000 years from now when I wouldn't ...
Eliezer, you state in the intro that the 5-second-level is a "method of teaching rationality skills". I think it is something different.
First, the analysis phase is breaking down behaviour patterns into something conscious; this can apply to my own patterns as I figure out what I need to (or want to) teach, or to other people's patterns that I wish to emulate and instill into myself.
It breaks down "rationality" into small chunks of "behaviour" which can then be taught using some sort of conditioning - you're a bit unclear on ...
What would be an exercise which develops that habit?
Speaking from personal experience, I would propose that moralizing is mostly caused by anger about the presumed stupidity/irrationality behind the statement we want to moralize about: the feeling of "Oh no they didn't just say that, how could they!". What I try to do against it is simply to let that anger pass, by following simple rules like taking a breath, counting to 10, or whatever works. When the anger is gone, usually the need for moralizing is as well.
Also I feel there is a lot of dis...
I decided on using "Motivated stopping" and "Motivated continuation" as my two examples.
To successfully avoid motivated stopping, someone who thinks he can use Solomonoff Induction to simulate "what it is like to be the epistemology of a mind" should ask whether or not he has considered in detail how much of our understanding of gross-level affective neuroscience can be mapped into a binary '01010001' kind of description, and whether he has sufficiently detailed evidence to go on and write something like http://arxiv.org/PS_cache/arxiv/pd...
Taking a look at Hug the Query for the exercise:
We have an ordered hierarchy:
We should go as far down this chain as possible when considering a factual dispute.
Thus, if you find yourself thinking about whether someone can be trusted based on reputation or prestige, ask, "Can I look at their arguments instead?". If you find yourself looking at their arguments, ask, "Can I look at their calculations?". If you find yourself looking at their calculations, ask, "Can I perform a...
What could one do about rationalization? It probably won't be enough to ask oneself what arguments there are for the opposite position. Also, one could think about why one would want to confirm their position and if this is worth less or more than coming to know the truth (it will almost always be worth less). Do you have more ideas on how to beat this one?
Good post. This invokes, of course, the associated problem, of phrasing this in a way that might encourage listening on the other end.
To develop methods of teaching rationality skills, you need to learn to focus on mental events that occur in 5 seconds or less. Most of what you want to teach is directly on this level; the rest consists of chaining together skills on this level.
As our first example, let's take the vital rationalist skill, "Be specific."
Even with people who've had moderate amounts of exposure to Less Wrong, a fair amount of my helping them think effectively often consists of my saying, "Can you give me a specific example of that?" or "Can you be more concrete?"
A couple of formative childhood readings that taught me to be specific:
and:
And now, no sooner does someone tell me that they want to "facilitate communications between managers and employees" than I say, "Can you give me a concrete example of how you would do that?" Hayakawa taught me to distinguish the concrete and the abstract; and from that small passage in Asprin, I picked up the dreadful personal habit of calling people's bluffs, often using the specific phrase, "Name three."
But the real subject of today's lesson is how to see skills like this on the 5-second level. And now that we have a specific example in hand, we can proceed to try to zoom in on the level of cognitive events that happen in 5 seconds or less.
Over-abstraction happens because it's easy to be abstract. It's easier to say "red is a color" than to pause your thoughts for long enough to come up with the example of a stop sign. Abstraction is a path of least resistance, a form of mental laziness.
So the first thing that needs to happen on a timescale of 5 seconds is perceptual recognition of highly abstract statements unaccompanied by concrete examples, accompanied by an automatic aversion, an ick reaction - this is the trigger which invokes the skill.
Then, you have actionable stored procedures that associate to the trigger. And "come up with a concrete example" is not a 5-second-level skill, not an actionable procedure, it doesn't transform the problem into a task. An actionable mental procedure that could be learned, stored, and associated with the trigger would be "Search for a memory that instantiates the abstract statement", or "Try to come up with hypothetical examples, and then discard the lousy examples your imagination keeps suggesting, until you finally have a good example that really shows what you were originally trying to say", or "Ask why you were making the abstract statement in the first place, and recall the original mental causes of your making that statement to see if they suggest something more concrete."
Or to be more specific on the last mental procedure: Why were you trying to describe redness to someone? Did they just run a red traffic light?
(And then what kind of exercise can you run someone through, which will get them to distinguish red traffic lights from green traffic lights? What could teach someone to distinguish red from green?)
When you ask how to teach a rationality skill, don't ask "How can I teach people to be more specific?" Ask, "What sort of exercise will lead people through the part of the skill where they perceptually recognize a statement as overly abstract?" Ask, "What exercise teaches people to think about why they made the abstract statement in the first place?" Ask, "What exercise could cause people to form, store, and associate with a trigger, a procedure for going through hypothetical examples until a good one or at least adequate one is invented?"
Coming up with good ways to teach mental skills requires thinking on the 5-second level, because until you've reached that level of introspective concreteness, that fineness of granularity, you can't recognize the elements you're trying to teach; you can't recognize the patterns of thought you're trying to build inside a mind.
To come up with a 5-second description of a rationality skill, I would suggest zooming in on a concrete case of a real or hypothetical person who (a) fails in a typical fashion and (b) successfully applies the skill. Break down their internal experience into the smallest granules you can manage: perceptual classifications, contexts that evoke emotions, fleeting choices made too quick for verbal consideration. And then generalize what they're doing while staying on the 5-second level.
Start with the concrete example of the person who starts to say "Red is a color" and cuts themselves off and says "Red is what that stop sign and that fire engine have in common." What did they do on the 5-second level?
4a. Try to remember a memory which matches that abstract thing you just said.
4b. Try to invent a specific hypothetical scenario which matches that abstract thing you just said.
4c. Ask why you said the abstract thing in the first place and see if that suggests anything.
and
If you are thinking on this level of granularity, then you're much more likely to come up with a good method for teaching the skill "be specific", because you'll know that whatever exercise you come up with, it ought to cause people's minds to go through events 1-4, and provide examples or feedback to train perception 0.
Next example of thinking on the 5-second scale: I previously asked some people (especially from the New York LW community) the question "What makes rationalists fun to be around?", i.e., why is it that once you try out being in a rationalist community you can't bear the thought of going back? One of the primary qualities cited was "Being non-judgmental." Two different people came up with that exact phrase, but it struck me as being not precisely the right description - rationalists go around judging and estimating and weighing things all the time. (Noticing small discordances in an important description, and reacting by trying to find an exact description, is another one of those 5-second skills.) So I pondered, trying to come up with a more specific image of exactly what it was we weren't doing, i.e. Being Specific, and after further visualization it occurred to me that a better description might be something like this: If you are a fellow member of my rationalist community and you come up with a proposal that I disagree with - like "We should all practice lying, so that we feel less pressure to believe things that sound good to endorse out loud" - then I may argue with the proposal on consequentialist grounds. I may judge. But I won't start saying in immense indignation what a terrible person you must be for suggesting it.
Now I could try to verbally define exactly what it is we don't do, but this would fail to approach the 5-second level, and probably also fail to get at the real quality that's important to rationalist communities. That would merely be another attempt to legislate what people are or aren't allowed to say, and that would make things less fun. There'd be a new accusation to worry about if you said the wrong thing - "Hey! Good rationalists don't do that!" followed by a debate that wouldn't be experienced as pleasant for anyone involved.
In this case I think it's actually easier to define the thing-we-avoid on the 5-second level. Person A says something that Person B disagrees with, and now in Person B's mind there's an option to go in the direction of a certain poisonous pleasure, an opportunity to experience an emotional burst of righteous indignation and a feeling of superiority, a chance to castigate the other person. On the 5-second level, Person B rejects this temptation, and instead invokes the procedure of (a) pausing to reflect and then (b) talking about the consequences of A's proposed policy in a tone that might perhaps be worried (for the way of rationality is not to refuse all emotion) but nonetheless is not filled with righteous outrage and indignation which demands that all others share that indignation or be likewise castigated.
(Which in practice, makes a really huge difference in how much rationalists can relax when they are around fellow rationalists. It's the difference between having to carefully tiptoe through a minefield and being free to run and dance, knowing that even if you make a mistake, it won't socially kill you. You're even allowed to say "Oops" and change your mind, if you want to backtrack (but that's a whole 'nother topic of 5-second skills)...)
The point of 5-second-level analysis is that to teach the procedural habit, you don't go into the evolutionary psychology of politics or the game theory of punishing non-punishers (by which the indignant demand that others agree with their indignation), which is unfortunately how I tended to write back when I was writing the original Less Wrong sequences. Rather you try to come up with exercises which, if people go through them, causes them to experience the 5-second events - to feel the temptation to indignation, and to make the choice otherwise, and to associate alternative procedural patterns such as pausing, reflecting, and asking "What is the evidence?" or "What are the consequences?"
What would be an exercise which develops that habit? I don't know, although it's worth noting that a lot of traditional rationalists not associated with LW also have this skill, and that it seems fairly learnable by osmosis from watching other people in the community not be indignant. One method that seems worth testing would be to expose people to assertions that seem like obvious temptations to indignation, and get them to talk about evidence or consequences instead. Say, you propose that eating one-month-old human babies ought to be legal, because one-month-old human babies aren't as intelligent as pigs, and we eat pigs. Or you could start talking about feminism, in which case you can say pretty much anything and it's bound to offend someone. (Did that last sentence offend you? Pause and reflect!) The point being, not to persuade anyone of anything, but to get them to introspectively recognize the moment of that choice between indignation and not-indignation, and walk them through an alternative response, so they store and associate that procedural skill. The exercise might fail if the context of a school-exercise meant that the indignation never got started - if the temptation/choice were never experienced. But we could try that teaching method, at any rate.
(There's this 5-second skill where you respond to mental uncertainty about whether or not something will work, by imagining testing it; and if it looks like you can just go test something, then the thought occurs to you to just go test it. To teach this skill, we might try showing people a list of hypotheses and asking them to quickly say on a scale of 1-10 how easy they look to test, because we're trying to teach people a procedural habit of perceptually considering the testableness of ideas. You wouldn't give people lots of time to think, because then that teaches a procedure of going through complex arguments about testability, which you wouldn't use routinely in real life and would end up associating primarily to a school-context where a defensible verbal argument is expected.)
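The parenthetical exercise above (flash a hypothesis, collect a snap 1-10 testability rating, and disallow slow deliberation) could be prototyped as a small drill. This is a hypothetical sketch, not anything proposed in the post: the function names, the 5-second cutoff, and the canned rater standing in for a human are all my assumptions.

```python
import time

def run_drill(hypotheses, rate, time_limit=5.0):
    """Present each hypothesis, collect a 1-10 testability rating,
    and flag any rating that took longer than the snap-judgment limit."""
    results = []
    for h in hypotheses:
        start = time.monotonic()
        score = rate(h)  # snap judgment, 1-10; a human would press a key here
        elapsed = time.monotonic() - start
        results.append((h, score, elapsed <= time_limit))
    return results

hypotheses = [
    "This coin lands heads 60% of the time.",
    "People are happier in nearby possible worlds.",
]
# A canned rater stands in for a human answering quickly.
report = run_drill(hypotheses, rate=lambda h: 8 if "coin" in h else 2)
for h, score, in_time in report:
    print(score, in_time)
```

The design choice worth noting is the timing flag: per the post's reasoning, a slow answer is evidence the trainee fell back into verbal argument mode, so the drill records it rather than merely the score.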
I should mention, at this point, that learning to see the 5-second level draws heavily on the introspective skill of visualizing mental events in specific detail, and maintaining that introspective image in your mind's eye for long enough to reflect on it and analyze it. This may take practice, so if you find that you can't do it right away, instinctively react by feeling that you need more practice to get to the lovely reward, instead of instinctively giving up.
Has everyone learned from these examples a perceptual recognition of what the "5-second level" looks like? Of course you have! You've even installed a mental habit that when you or somebody else comes up with a supposedly 5-second-level description, you automatically inspect each part of the description to see if it contains any block units like "Be specific" which are actually high-level chunks.
Now, as your exercise for learning the skill of "Resolving cognitive events to the 5-second level", take a rationalist skill you think is important (or pick a random LW post from How To Actually Change Your Mind); come up with a concrete example of that skill being used successfully; decompose that usage to a 5-second-level description of perceptual classifications and emotion-evoking contexts and associative triggers to actionable procedures etcetera; check your description to make sure that each part of it can be visualized as a concrete mental process and that there are no non-actionable abstract chunks; come up with a teaching exercise which seems like it ought to cause those sub-5-second events to occur in people's minds; and then post your analysis and proposed exercise in the comments. Hope to hear from you soon!