The other side of this is to be aware of whether people are trying to load up your mind with fake experiences to influence your intuition.
This can also happen unintentionally. One thing that might have caused the original haunted-rationalist problem is watching or reading too much horror fiction: if most experiences you've seen involving an old house end with people tortured and dead, then even if you know those experiences were fictitious, you will still, to some degree, anticipate bad things happening in old houses. This also makes me wary that my anticipations about the future are highly influenced by all the science fiction I read, so I know to watch my aliefs in that regard very closely.
Want to alieve snakes are generally not dangerous?
No! Those things can kill you! Perhaps I am safe here in Berkeley for the next month or so, but back home I expect most of the snakes I encounter to be capable of killing me if they bite me. They aren't particularly likely to bite me unless I touch them, corner them, or stand on them; that's where the fear comes in handy. It makes me feel uncomfortable when walking through long grass, particularly when wearing light footwear. That way I at least pay attention to movements and sounds, and so give the snake a chance to move out of the way before I tread on him.
This example was intended as a possible alief you might want to hold, whether or not it is accurate to your beliefs. There are some people who can reasonably expect never to encounter a dangerous snake in the wild who are nonetheless very afraid of them (and of all other snakes as well); while respect and fear for dangerous and potentially venomous animals is worthwhile for some, for others it can be a handicap.
I should also mention (though I took this part out of the article) that there are some situations where one might want to alieve things entirely counter to one's beliefs. The technique allows for cultivation of these types of aliefs as well, and not fearing snakes might be one of them. Other examples could be the alief that cake is not delicious, or that drinking/being drunk is boring and often painful. Note that I do not personally advocate lying to oneself in an overly convincing manner, as that way darkness lies.
You mean alieve, not believe. This is a technique to alieve what you already believe.
Fixed.
Rationalist Judo, or Using the Availability Heuristic to Win
During the sessions at the 2011 rationality minicamp, we learned that some of our biases can be used constructively, rather than just tolerated and avoided.
For example, in an excellent article discussing intuitions and the way they are formed, psychologist Robin Hogarth recommends that "if people want to shape their intuitions, [they should] make conscious efforts to inhabit environments that expose them to the experiences and information that form the intuitions that they want."
Another example: Carl Shulman remarked that due to the availability heuristic we anticipate car crashes with frequencies determined by how many people we know of or have heard about who have gotten into one. So if you don't fear car crashes but you want to acquire a more accurate level of concern about driving, you could seek out news or footage of car crashes. Video footage may work best, because experiential data unconsciously inform our intuitions more effectively than, say, written data.
This fact may lie behind many effective strategies for getting your brain to do what you want it to do:
- Establishing 'pull' motivation works best with strong visualization, and is reinforced upon experiencing the completion of the task.
- Rejection therapy, which many of us minicampers found helpful and effective. The point is to ask people for things they will probably deny you, which trains your body to realize that nothing bad happens when you are rejected. After a time, this improves social confidence.
- As looking-glass self theory states,1 we are shaped by how others see us. This is largely due to the experience of having people react to us in certain ways.
In The Mystery of the Haunted Rationalist we see someone whose stated beliefs don't match their anticipations. Now we can actually use the brain's machinery to get it to do what we want it to: alieve that ghosts aren't real or dangerous. One method would be for our ghost-stricken friend to get people to tell her detailed stories about pleasant nights they spent in haunted houses (complete with spooky details) where nothing bad happened. Alternatively, she could read some books or watch some videos with similar content. Best of all would be if she spent a month living in a 'haunted' house, perhaps after doing some of the other things to soothe her nerves. There are many who will attest that eventually one 'gets used to' the scary noises and frightening atmosphere of an old house, and ceases to be scared when sleeping in similar houses.
I attribute the effectiveness of these tactics mostly to successful persuasion of the non-conscious brain using experiential data.
So, it seems we have a (potentially very powerful) new technique to add to our rationalist arsenal. To summarize:
- Find something you want to alieve.
- Determine what experiences that alief should cause you to anticipate.
- Have those experiences, by proxy if necessary, artificial or not.
- Test whether you now anticipate what you want to.
- If the test reveals progress, but not enough, repeat.
Examples:
- Want to alieve that boxing is dangerous2? Watch some footage of boxers being punched painfully in the face, and ask a good boxer to win a fight against you in a painful but non-damaging manner. Now are you reluctant to box someone you have a good chance of beating?
- Want to alieve that driving is dangerous? Watch footage of lots of car crashes, see Red Asphalt, and take a class from professional stunt drivers on how to crash safely. Now are you more reluctant to drive?
- Want to alieve that flying is not very dangerous? Get a pilot's view of a flight, and pay attention to how boring it is. Sit next to a pilot while they undergo a very realistic flight simulation that covers many possible accidents, and watch them successfully navigate each scenario. Now are you more willing to fly?
- Want to alieve snakes are generally not dangerous? Watch videos of safe snake interactions. Watch a pet store employee deal with a snake safely. Play with a snake under supervision without incident. Now do you exhibit less fear when encountering a snake?
- Want to alieve you are part of the Less Wrong community? Interact with other community members as though you are one, attend meetups, make friends in the community. Now do you empathize more strongly with contributors on Less Wrong than with those elsewhere on the internet?
It can be annoying when our unconsciously moderated aliefs don't match our rationality-influenced beliefs, but luckily our aliefs can be trained.
1 Thanks to Hugh Ristik for talking about this at minicamp.
2 Credit for this example goes to Brandon Reinhart.
Special thanks to Luke for all the help.
Not that being right means you're necessarily not uninformed, ignorant, or in denial. And being right is probably positively correlated with being a jerk, as most people measure things.
True. I was actually considering omitting the last sentence, as it doesn't really contribute much, but I wasn't sure if that would have been misleading as to the original meaning.
It would be really convenient if rationality, the meme-cluster that we most enjoy and are best-equipped to participate in, also happened to be the best for winning at life.
As I've seen it used here, "rationality" most commonly refers to "the best meme-cluster for winning at life," whatever that meme-cluster may actually be. If it could be shown that believing in the Christian god uniformly improved (or did not affect) every aspect of believers' lives regardless of any other beliefs held, I think a majority of LessWrongers would make every effort necessary to actually believe in a Christian god. The problem seems to be how rationality and "the meme-cluster that we most enjoy and are best-equipped to participate in" are equated: these two are currently very similar meme-clusters for the current LessWrong demographic, but they are not necessarily so. "It would be really convenient if the meme-cluster that we most enjoy and are best-equipped to participate in also happened to be the best for winning at life: rationality" seems to make more sense.
So, due to bad luck, bad timing, and lack of proper foresight, it seems this attempt was a total bust (well, not total; I got some work done). I'll try another one sometime this month. Any feedback would be helpful.
I had a bit of car trouble, but I managed to get here, get my coffee and the wifi password, and then realize I forgot to bring a sign or anything of the kind. I'm sitting in the corner near the register if anybody happens to be waiting.
I'm wearing a dark red shirt and jeans and typing on a white laptop if that helps.
Hmm. My brain seems to do something very similar automatically, and I can't think of any clear problems that I have of this type (at the moment; that doesn't necessarily mean there aren't any). There is the possibility that some other, less positive factor causes my abnormally high apparent alief-belief correlation, though. Still, figuring out what I did to acquire this habit might be useful to others.
Do you read/watch a lot of fiction? I personally end up selecting for fiction that matches my beliefs fairly closely, which in retrospect has likely strongly reinforced the connection. This seems like a reasonable candidate for an automatic yet unnoticed process with those results.