The Proper Use of Humility

73 Eliezer_Yudkowsky 01 December 2006 07:55PM

It is widely recognized that good science requires some kind of humility.  What sort of humility is more controversial.

Consider the creationist who says: "But who can really know whether evolution is correct? It is just a theory. You should be more humble and open-minded." Is this humility? The creationist practices a very selective underconfidence, refusing to integrate massive weights of evidence in favor of a conclusion he finds uncomfortable. I would say that whether you call this "humility" or not, it is the wrong step in the dance.


Baltimore Meetup 4/10 1PM

7 groupuscule 31 March 2011 09:01PM

A LessWrong "rationality workbook" idea

21 jwhendy 09 January 2011 05:52PM

Note: This was originally posted in the discussion area, but it was suggested that it be moved to the top level.
-----

My own desire to improve my rationality coupled with some posts criticizing LessWrong not too long ago led to an idea. For reference, the posts I mean are these:

The comment stream on that last link brought up whether or not LessWrong is fulfilling EY's vision of a "rationality dojo" of sorts (along with what I assume is the idea that LessWrong is supposed to be fulfilling that vision).
Now, I'm not the one to evaluate whether LessWrong is or is not a true "rationality dojo" (not to mention that I'd rather not argue over semantics), but I do have an idea for how it could perhaps become one, and an idea I personally find quite exciting.
The idea is simple:  that of a LessWrong workbook.
What does that mean? For one, it means an awful lot of effort, primarily at distillation. I quite enjoy reading posts and appreciate the overall prose style used here. A lot of posts are easy reading, contain nice anecdotes, hypothetical real-world scenarios, and the like. Distillation would entail extraction of the "nuggets" that an aspiring rationalist should know. Teach me rationality in bullet points, flash card modules, and other mini-homework-sized methods (which I'll get to).
I see the above as reducing to some form of "Rationality Overview." What is rationality? What's the point? What is the end result? What are the typical tools used to be rational? Finally, perhaps, what does the life of a rationalist look like? Perhaps pages containing some helpful "mantras" or questions to ask one's self periodically, such as during some period of daily meditation? Inspirational/helpful prose like the  Twelve Virtues,  The Simple Truth, and the Litanies of  Gendlin  and  Tarski.
Secondly, I envision a literal "workbook" type of section. I was intrigued by a comment on my first post, in which JenniferRM wrote [in discussing my ongoing de-conversion]:
Unfortunately, I don't know of  any resources to help people traverse the path you're facing in a series of small safe steps.
Why not? Shouldn't there be some set of guidelines for evaluating beliefs and attempting to rationally adjust them according to new evidence, thought processes, or trustworthy tools? Hence the birth of my workbook idea.
Most specifically, I envision the following:
  • Most interesting to me was the idea of some form of "rationality comb." An iterative evaluation process. Again, I hardly consider myself the one to design this, but perhaps something like:
    • Take 5 minutes and brainstorm about the beliefs you think affect your actions the most
    • Focus on the first belief, set(1):belief(1)
    • Can you recall how you came to hold this belief?
    • What are some common alternative views to your belief?
    • Do you think you could provide  testable  justification for your current belief over the above alternatives?
    • If not, can you  imagine  leaving your belief for one of the alternatives?
    • And so on...
    • Then repeat with set(1):belief(2). When set(1):belief(n) is finished... re-brainstorm for 5min to come up with set(2):belief(1)...belief(n).
  • A series of "homework" problems on Bayesian Probability, perhaps including EY's  tutorial  and other helpful material.
  • Brain teasers or similar items to focus on attentiveness to details, weighing evidence, knowing the limits of what you can know given certain information, etc. I think LW has already provided some good  examples  of neat things like this (even if they would require refinement).
  • Questions that intentionally try to deceive the reader with some form of  fallacy  or  bias
  • Tutorials on how to have rational discussions, rules of engagement,  reaching a mutual conclusion, etc.
This is just a stab at what the "meat" of the workbook could contain.
My primary interest is in the first bullet: a step-by-step guide to examining one's life. I think it needs to be iterative, since not all beliefs in need of rational attention will be apparent on the first pass, and each subsequent step needs to be able to be done in bite-sized chunks. Steps should have concrete starting and ending points that seem achievable, to promote continuing effort and a sense of accomplishment. I left out quite a few steps... suggestions could be made as to what kind of information should be sought in attempting to confirm or disconfirm one's belief/hypothesis (and being cautious if you find yourself looking for expensive evidence in favor vs. cheap evidence against), how to be wary of hypotheses that explain too much, and so on.
The point is to have a handy reference guide to not only becoming a rationalist, but also to have a concrete guide to applying those skills to your own life. Steps summarized from trusted methods help avoid pitfalls, help us learn from what's already documented, and (I would suspect) help us avoid subconscious biases or shields we might be tempted to apply to cherished beliefs if left to our own devices/methods.
By this time, I'm hoping I've gotten the point across. I've tried to link heavily to show that I'm not really proposing new material -- the content seems to be here. My idea consists of rearranging this content into a compact "dojo-like" format.
So, what are your thoughts? Is this done elsewhere already (if so, my apologies for wasting your time)? Does this sound helpful? Is it plausible/feasible? Would one workbook be able to apply to varied personalities/learning methods? And what are your thoughts (if you like the idea) on what it would contain?

Back to the Basics of Rationality

80 lukeprog 11 January 2011 07:05AM

My deconversion from Christianity had a large positive impact on my life. I suspect it had a small positive impact on the world, too. (For example, I no longer condemn gays or waste time and money on a relationship with an imaginary friend.) And my deconversion did not happen because I came to understand the Bayesian concept of evidence or Kolmogorov complexity or Solomonoff induction. I deconverted because I encountered some very basic arguments for non-belief, for example those in Dan Barker's Losing Faith in Faith.

Less Wrong has at least two goals. One goal is to raise the sanity waterline. If most people understood just the basics: Occam's razor, what constitutes evidence and why, general trends of science, reductionism, and cognitive biases, the world would be greatly improved. Yudkowsky's upcoming books are aimed at this first goal of raising the sanity waterline. So are most of the sequences. So are learning-friendly posts like References & Resources for LessWrong.

A second goal is to attract some of the best human brains on the planet and make progress on issues related to the Friendly AI problem, the problem with the greatest leverage in the universe. I have suggested that Less Wrong would make faster progress toward this goal if it worked more directly with the community of scholars already tackling the exact same problems. I don't personally work toward this goal because I'm not mathematically sophisticated enough to do so, but I'm glad others are!

Still, I think the first goal could be more explicitly pursued. There are many people like myself and jwhendy who can be massively impacted for the better not by coming to a realization about algorithmic learning theory, but by coming to understand the basics of rationality like probability and the proper role of belief and reductionism.


Five-minute rationality techniques

55 sketerpot 10 August 2010 02:24AM

Less Wrong tends toward long articles with a lot of background material. That's great, but the vast majority of people will never read them. What would be useful for raising the sanity waterline in the general population is a collection of simple-but-useful rationality techniques that you might be able to teach to a reasonably smart person in five minutes or less per technique.

Carl Sagan had a slogan: "Extraordinary claims require extraordinary evidence." He would say this phrase and then explain how, when someone claims something extraordinary (i.e. something for which we have a very low probability estimate), they need correspondingly stronger evidence than if they'd made a higher-likelihood claim, like "I had a sandwich for lunch." We can talk about this very precisely, in terms of Bayesian updating and conditional probability, but Sagan was able to get a lot of this across to random laypeople in about a minute. Maybe two minutes.
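Sagan's slogan can be made precise in the odds form of Bayes' theorem: posterior odds equal prior odds times the likelihood ratio of the evidence. A minimal sketch, with made-up illustrative numbers rather than real probability estimates:

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' theorem in odds form: posterior = prior * likelihood ratio."""
    return prior_odds * likelihood_ratio

# Ordinary claim ("I had a sandwich for lunch"), prior odds about 1:1.
# Weak evidence (a casual report, likelihood ratio 10) settles it:
ordinary = posterior_odds(1.0, 10)       # 10:1 in favor

# Extraordinary claim, prior odds 1:1,000,000.
# The very same evidence barely moves it:
extraordinary = posterior_odds(1e-6, 10)  # still about 1:100,000 against
```

The asymmetry Sagan pointed at falls right out: the lower your prior odds, the larger the likelihood ratio (the more "extraordinary" the evidence) must be before the posterior favors the claim.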

What techniques for rationality can be explained to a normal person in under five minutes? I'm looking for small and simple memes that will make people more rational, on average. Here are some candidates, to get the discussion started:

Candidate 1 (suggested by DuncanS): Unlikely events happen all the time. Someone gets in a car-crash and barely misses being impaled by a metal pole, and people say it's a million-to-one miracle -- but events occur all the time that are just as unlikely. If you look at how many highly unlikely things could happen, and how many chances they have to happen, then it's obvious that we're going to see "miraculous" coincidences, purely by chance. Similarly, with millions of people dying of cancer each year, there are going to be lots of people making highly unlikely miracle recoveries. If they didn't, that would be surprising.
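The arithmetic behind Candidate 1 is worth spelling out. Under some made-up but plausible assumptions about how many distinct "events" people experience, million-to-one coincidences are an everyday expectation:

```python
# Hypothetical numbers for illustration: a city of 1,000,000 people,
# each experiencing 100 distinct events per day, where a "miracle"
# is any event with a one-in-a-million chance.
people = 1_000_000
events_per_person = 100
p_miracle = 1e-6

# Expected number of "miracles" in the city per day:
expected_daily_miracles = people * events_per_person * p_miracle
print(expected_daily_miracles)  # about 100 per day
```

With numbers like these, the surprising outcome would be a day with *no* million-to-one coincidences anywhere in the city.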

Candidate 2: Admitting that you were wrong is a way of winning an argument. (The other person wins, too.) There's a saying that "It takes a big man to admit he's wrong," and when people say this, they don't seem to realize that it's a huge problem! It shouldn't be hard to admit that you were wrong about something! It shouldn't feel like defeat; it should feel like success. When you lose an argument with someone, it should be time for high fives and mutual jubilation, not shame and anger. The hard part of retraining yourself to think this way is just realizing that feeling good about conceding an argument is even an option.

Candidate 3: Everything that has an effect in the real world is part of the domain of science (and, more broadly, rationality). A lot of people have the truly bizarre idea that some theories are special, immune to whatever standards of evidence they may apply to any other theory. My favorite example is people who believe that prayers for healing actually make people who are prayed for more likely to recover, but that this cannot be scientifically tested. This is an obvious contradiction: they're claiming a measurable effect on the world and then pretending that it can't possibly be measured. I think that if you pointed out a few examples of this kind of special pleading to people, they might start to realize when they're doing it.

Anti-candidate: "Just because something feels good doesn't make it true." I call this an anti-candidate because, while it's true, it's seldom helpful. People trot out this line as an argument against other people's ideas, but rarely apply it to their own. I want memes that will make people actually be more rational, instead of just feeling that way.

 

This was adapted from an earlier discussion in an Open Thread. One suggestion, based on the comments there: if you're not sure whether something can be explained quickly, just go for it! Write a one-paragraph explanation, and try to keep the inferential distances short. It's good practice, and if we can come up with some really catchy ones, it might be a good addition to the wiki. Or we could use them as rationalist propaganda, somehow. There are a lot of great ideas on Less Wrong that I think can and should spread beyond the usual LW demographic.

What Cost for Irrationality?

59 Kaj_Sotala 01 July 2010 06:25PM

This is the first part in a mini-sequence presenting content from Keith E. Stanovich's excellent book What Intelligence Tests Miss: The psychology of rational thought. It will culminate in a review of the book itself.

People who care a lot about rationality may frequently be asked why they do so. There are various answers, but I think that many of the ones discussed here won't be very persuasive to people who don't already have an interest in the issue. But in real life, most people don't try to stay healthy because of various far-mode arguments for the virtue of health: instead, they try to stay healthy in order to avoid various forms of illness. In the same spirit, I present you with a list of real-world events that have been caused by failures of rationality, so that you might better persuade others that this is important.

What happens if you, or the people around you, are not rational? Well, in order from least serious to worst, you may...

Have a worse quality of living. Status quo bias is a general human tendency to prefer the default state, regardless of whether the default is actually good or not. In the 1980s, Pacific Gas and Electric conducted a survey of their customers. Because the company was serving a lot of people in a variety of regions, some of their customers suffered from more outages than others. Pacific Gas asked customers with unreliable service whether they'd be willing to pay extra for more reliable service, and customers with reliable service whether they'd be willing to accept a less reliable service in exchange for a discount. The customers were presented with increases and decreases of various percentages, and asked which ones they'd be willing to accept. The percentages were the same for both groups, except that one group was offered increases and the other decreases. Even though both groups had the same income, customers of both groups overwhelmingly wanted to stay with their status quo. Yet the service difference between the groups was large: the unreliable service group suffered 15 outages per year of 4 hours' average duration and the reliable service group suffered 3 outages per year of 2 hours' average duration! (Though note caveats.)

A study by Philips Electronics found that one half of their returned products had nothing wrong with them; the consumers simply couldn't figure out how to use the devices. This can be partially explained by egocentric bias on behalf of the engineers. Cognitive scientist Chip Heath notes that he has "a DVD remote control with 52 buttons on it, and every one of them is there because some engineer along the line knew how to use that button and believed I would want to use it, too. People who design products are experts... and they can't imagine what it's like to be as ignorant as the rest of us."

Suffer financial harm. John Allen Paulos is a professor of mathematics at Temple University. Yet he fell prey to serious irrationality which began when he purchased WorldCom stock at $47 per share in early 2000. As bad news about the industry began mounting, WorldCom's stock price started falling - and as it did so, Paulos kept buying, regardless of accumulating evidence that he should be selling. Later on, he admitted that his "purchases were not completely rational" and that "I bought shares even though I knew better". He was still buying - partially on borrowed money - when the stock price was $5. When it momentarily rose to $7, he finally decided to sell. Unfortunately, he didn't get off from work until the market closed, and on the next market day the stock had lost a third of its value. Paulos finally sold everything, at a huge loss.


Seven Shiny Stories

104 Alicorn 01 June 2010 12:43AM

It has come to my attention that the contents of the luminosity sequence were too abstract, to the point where explicitly fictional stories illustrating the use of the concepts would be helpful.  Accordingly, there follow some such stories.

1. Words (an idea from Let There Be Light, in which I advise harvesting priors about yourself from outside feedback)

Maria likes compliments.  She loves compliments.  And when she doesn't get enough of them to suit her, she starts fishing, asking plaintive questions, making doe eyes to draw them out.  It's starting to annoy people.  Lately, instead of compliments, she's getting barbs and criticism and snappish remarks.  It hurts - and it seems to hurt her more than it hurts others when they hear similar things.  Maria wants to know what it is about her that would explain all of this.  So she starts taking personality tests and looking for different styles of maintaining and thinking about relationships, looking for something that describes her.  Eventually, she runs into a concept called "love languages" and realizes at once that she's a "words" person.  Her friends aren't trying to hurt her - they don't realize how much she thrives on compliments, or how deeply insults can cut when they're dealing with someone who transmits affection verbally.  Armed with this concept, she has a lens through which to interpret patterns of her own behavior; she also has a way to explain herself to her loved ones and get the wordy boosts she needs.

2. Widgets (an idea from The ABC's of Luminosity, in which I explain the value of correlating affect, behavior, and circumstance)

Tony's performance at work is suffering.  Not every day, but most days, he's too drained and distracted to perform the tasks that go into making widgets.  He's in serious danger of falling behind his widget quota and needs to figure out why.  Having just read a fascinating and brilliantly written post on Less Wrong about luminosity, he decides to keep track of where he is and what he's doing when he does and doesn't feel the drainedness.  After a week, he's got a fairly robust correlation: he feels worst on days when he doesn't eat breakfast, which reliably occurs when he's stayed up too late, hit the snooze button four times, and had to dash out the door.  Awkwardly enough, having been distracted all day tends to make him work more slowly at making widgets, which makes him less physically exhausted by the time he gets home and enables him to stay up later.  To deal with that, he starts going for long runs on days when his work hasn't been very tiring, and pops melatonin; he easily drops off to sleep when his head hits the pillow at a reasonable hour, gets sounder sleep, scarfs down a bowl of Cheerios, and arrives at the widget factory energized and focused.


On Enjoying Disagreeable Company

49 Alicorn 26 May 2010 01:47AM

Bears resemblance to: Ureshiku Naritai; A Suite of Pragmatic Considerations In Favor of Niceness

In this comment, I mentioned that I can like people on purpose.  At the behest of the recipients of my presentation on how to do so, I've written up in post form my tips on the subject.  I have not included, and will not include, any specific real-life examples (everything below is made up), because I am concerned that people who I like on purpose will be upset to find that this is the case, in spite of the fact that the liking (once generated) is entirely sincere.  If anyone would find more concreteness helpful, I'm willing to come up with brief fictional stories to cover this gap.

It is useful to like people.  For one thing, if you have to be around them, liking them makes this far more pleasant.  For another, well, they can often tell, and if they know you to like them this will often be instrumentally useful to you.  As such, it's very handy to be able to like someone you want to like deliberately when it doesn't happen by itself.  There are three basic components to liking someone on purpose.  First, reduce salience of the disliked traits by separating, recasting, and downplaying them; second, increase salience of positive traits by identifying, investigating, and admiring them; and third, behave in such a way as to reap consistency effects.

1. Reduce salience of disliked traits.

Identify the traits you don't like about the person - this might be a handful of irksome habits or a list as long as your arm of deep character flaws, but make sure you know what they are.  Notice that however immense a set of characteristics you generate, it's not the entire person.  ("Everything!!!!" is not an acceptable entry in this step.)  No person can be fully described by a list of things you have noticed about them.  Note, accordingly, that you dislike these things about the person; but that this does not logically entail disliking the person.  Put the list in a "box" - separate from how you will eventually evaluate the person.


Cached Selves

172 AnnaSalamon 22 March 2009 07:34PM

by Anna Salamon and Steve Rayhawk (joint authorship)

Related to: Beware identity

A few days ago, Yvain introduced us to priming, the effect where, in Yvain’s words, "any random thing that happens to you can hijack your judgment and personality for the next few minutes."

Today, I’d like to discuss a related effect from the social psychology and marketing literatures: “commitment and consistency effects”, whereby any random thing you say or do in the absence of obvious outside pressure can hijack your self-concept for the medium- to long-term future.

To sum up the principle briefly: your brain builds you up a self-image. You are the kind of person who says, and does... whatever it is your brain remembers you saying and doing.  So if you say you believe X... especially if no one’s holding a gun to your head, and it looks superficially as though you endorsed X “by choice”... you’re liable to “go on” believing X afterwards.  Even if you said X because you were lying, or because a salesperson tricked you into it, or because your neurons and the wind just happened to push in that direction at that moment.

For example, if I hang out with a bunch of Green Sky-ers, and I make small remarks that accord with the Green Sky position so that they’ll like me, I’m liable to end up a Green Sky-er myself.  If my friends ask me what I think of their poetry, or their rationality, or of how they look in that dress, and I choose my words slightly on the positive side, I’m liable to end up with a falsely positive view of my friends.  If I get promoted, and I start telling my employees that of course rule-following is for the best (because I want them to follow my rules), I’m liable to start believing in rule-following in general.

All familiar phenomena, right?  You probably already discount other peoples’ views of their friends, and you probably already know that other people mostly stay stuck in their own bad initial ideas.  But if you’re like me, you might not have looked carefully into the mechanisms behind these phenomena.  And so you might not realize how much arbitrary influence consistency and commitment is having on your own beliefs, or how you can reduce that influence.  (Commitment and consistency isn’t the only mechanism behind the above phenomena; but it is a mechanism, and it’s one that’s more likely to persist even after you decide to value truth.)


Too busy to think about life

85 Academian 22 April 2010 03:14PM

Many adults maintain their intelligence through a dedication to study or hard work.  I suspect this is related to sub-optimal levels of careful introspection among intellectuals.

If someone asks you what you want for yourself in life, do you have the answer ready at hand?  How about what you want for others?  Human values are complex, which means your talents and technical knowledge should help you think about them.  Just as in your work, complexity shouldn't be a curiosity-stopper.  It means "think", not "give up now."

But there are so many terrible excuses stopping you...

Too busy studying?  Life is the exam you are always taking.  Are you studying for that?  Did you even write yourself a course outline?

Too busy helping?  Decision-making is the skill you are always using, or always lacking, as much when you help others as yourself.  Isn't something you use constantly worth improving on purpose?

Too busy thinking to learn about your brain?  That's like being too busy flying an airplane to learn where the engines are.  Yes, you've got passengers in real life, too: the people whose lives you affect.

Emotions too irrational to think about them?  Irrational emotions are things you don't want to think for you, and therefore are something you want to think about.  By analogy, children are often irrational, and no one sane concludes that we therefore shouldn't think about their welfare, or that they shouldn't exist.

So set aside a date.  Sometime soon.  Write yourself some notes.  Find that introspective friend of yours, and start solving for happiness.  Don't have one?  For the first time in history, you've got LessWrong.com!

Reasons to make the effort:

Happiness is a pairing between your situation and your disposition. Truly optimizing your life requires adjusting both variables: what happens, and how it affects you.

You are constantly changing your disposition.  The question is whether you'll do it with a purpose.  Your experiences change you, and you affect those, as well as how you think about them, which also changes you.  It's going to happen.  It's happening now.  Do you even know how it works?  Put your intelligence to work and figure it out!

The road to harm is paved with ignorance.  Using your capability to understand yourself and what you're doing is a matter of responsibility to others, too.  It makes you better able to be a good friend.

You're almost certainly suffering from Ugh Fields: unconscious don't-think-about-it reflexes that form via Pavlovian conditioning.  The issues most in need of your attention are often ones you just happen not to think about for reasons undetectable to you.

How not to waste the effort:

Don't wait till you're sad.  Only thinking when you're sad gives you a skewed perspective.  Don't infer that you can think better when you're sad just because that's the only time you try to be thoughtful.  Sadness often makes it harder to think: you're farther from happiness, which can make happiness more difficult to empathize with and understand.  Nonetheless, we often have to think when sad, because something bad may have happened that needs addressing.

Introspect carefully, not constantly.  Don't interrupt your work every 20 minutes to wonder whether it's your true purpose in life.  Respect that question as something that requires concentration, note-taking, and solid blocks of scheduled time.  In those times, check over your analysis by trying to confound it, so lingering doubts can be justifiably quieted by remembering how thorough you were.

Re-evaluate on an appropriate time-scale.  Try devoting a few days before each semester or work period to look at your life as a whole.  At these times you'll have accumulated experience data from the last period, ripe and ready for analysis.  You'll have more ideas per hour that way, and feel better about it.  Before starting something new is also the most natural and opportune time to affirm or change long term goals.  Then, barring large unexpected opportunities, stick to what you decide until the next period when you've gathered enough experience to warrant new reflection.

(The absent minded driver is a mathematical example of how planning outperforms constant re-evaluation.  When not engaged in a deep and careful introspection, we're all absent minded drivers to a degree.)
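In the standard formulation of the absent-minded driver problem, exiting at the first intersection pays 0, exiting at the second pays 4, and driving past both pays 1; the driver cannot tell the intersections apart, so a plan is just a single exit probability p. A quick sketch of the planning calculation (the payoffs are the textbook values, not anything specific to this post):

```python
def expected_payoff(p):
    """Expected payoff for a pre-committed exit probability p:
    exit at X (payoff 0) with prob p, exit at Y (payoff 4) with
    prob (1-p)*p, drive past both (payoff 1) with prob (1-p)**2."""
    return 0 * p + 4 * (1 - p) * p + 1 * (1 - p) ** 2

# Grid-search the plan chosen in advance; the optimum is p = 1/3,
# with expected payoff 4/3.
best_p = max((i / 1000 for i in range(1001)), key=expected_payoff)
```

The point of the example is that committing to p = 1/3 up front does better than any attempt to re-decide afresh at each intersection, since the intersections are indistinguishable from the inside.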

Lost about where to start?  I think Alicorn's story is an inspiring one.  Learn to understand and defeat procrastination/akrasia.  Overcome your cached selves so you can grow freely (definitely read their possible strategies at the end).  Foster an everyday awareness that you are a brain, and in fact more like two half-brains.

These suggestions are among the top-rated LessWrong posts, so they'll be of interest to lots of intellectually-minded, rationalist-curious individuals.  But you have your own task ahead of you, that only you can fulfill.

So don't give up.  Don't procrastinate it.  If you haven't done it already, schedule a day and time right now when you can realistically assess

  • how you want your life to affect you and other people, and
  • what you must change to better achieve this.

Eliezer has said I want you to live.  Let me say:

I want you to be better at your life.
