
[Link] Choosing prediction over explanation in psychology: Lessons from machine learning

1 Kaj_Sotala 17 January 2017 09:23PM

[Link] Be someone – be recognized by the system and promoted – or do something

3 James_Miller 15 January 2017 09:22PM

Categorizing Has Consequences

25 Eliezer_Yudkowsky 19 February 2008 01:40AM

Followup to: Fallacies of Compression

Among the many genetic variations and mutations you carry in your genome, there are a very few alleles you probably know—including those determining your blood type: the presence or absence of the A, B, and + antigens.  If you receive a blood transfusion containing an antigen you don't have, it will trigger an allergic reaction.  It was Karl Landsteiner's discovery of this fact, and how to test for compatible blood types, that made it possible to transfuse blood without killing the patient.  (1930 Nobel Prize in Medicine.)  Also, if a mother with blood type A (for example) bears a child with blood type A+, the mother may acquire an allergic reaction to the + antigen; if she has another child with blood type A+, the child will be in danger, unless the mother takes an allergic suppressant during pregnancy.  Thus people learn their blood types before they marry.

Oh, and also: people with blood type A are earnest and creative, while people with blood type B are wild and cheerful.  People with type O are agreeable and sociable, while people with type AB are cool and controlled. (You would think that O would be the absence of A and B, while AB would just be A plus B, but no...)  All this, according to the Japanese blood type theory of personality.  It would seem that blood type plays the role in Japan that astrological signs play in the West, right down to blood type horoscopes in the daily newspaper.

This fad is especially odd because blood types have never been mysterious, not in Japan and not anywhere.  We only know blood types even exist thanks to Karl Landsteiner.  No mystic witch doctor, no venerable sorcerer, ever said a word about blood types; there are no ancient, dusty scrolls to shroud the error in the aura of antiquity.  If the medical profession claimed tomorrow that it had all been a colossal hoax, we layfolk would not have one scrap of evidence from our unaided senses to contradict them.

There's never been a war between blood types.  There's never even been a political conflict between blood types.  The stereotypes must have arisen strictly from the mere existence of the labels.


Buckets and memetic immune disorders

11 Tyrrell_McAllister 03 January 2017 11:51PM

AnnaSalamon's recent post on "flinching" and "buckets" nicely complements PhilGoetz's 2009 post Reason as memetic immune disorder. (I'll be assuming that readers have read Anna's post, but not necessarily Phil's.) Using Anna's terminology, I take Phil to be talking about the dangers of merging buckets that started out as separate. Anna, on the other hand, is talking about how to deal with one bucket that should actually be several.

Phil argued (paraphrasing) that rationality can be dangerous because it leads to beliefs of the form "P implies Q". If you convince yourself of that implication, and you believe P, then you are compelled to believe Q. This is dangerous because your thinking about P might be infected by a bad meme. Now rationality has opened the way for this bad meme to infect your thinking about Q, too.

It's even worse if you reason yourself all the way to believing "P if and only if Q". Now any corruption in your thinking about either one of P and Q will corrupt your thinking about the other. In terms of buckets: If you put "Yes" in the P bucket, you must put "Yes" in the Q bucket, and vice versa. In other words, the P bucket and the Q bucket are now effectively one and the same.
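As a loose programming analogy (my own gloss, not anything from Phil's or Anna's posts), a merged bucket behaves like an aliased variable: once two names point at the same object, a corruption written through either name is read back through both.

    class Bucket:
        """A belief 'bucket' holding a single answer."""
        def __init__(self, answer):
            self.answer = answer

    p = Bucket("Yes")
    q = p            # "P if and only if Q": the two buckets are now aliases

    q.answer = "No"  # a bad meme corrupts the Q bucket...
    print(p.answer)  # ...and the P bucket now reads "No" as well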

In this sense, Phil was pointing out that rationality merges buckets. (More precisely, rationality creates dependencies among buckets; in the extreme case, buckets become effectively identical.) This can be bad for the reasons that Anna gives. Phil argues that some people resist rationality because their "memetic immune system" realizes that rational thinking might merge buckets inappropriately. To avoid this danger, people often operate on the principle that it's suspect even to consider merging buckets from different domains (e.g., religious scripture and personal life).

This suggests a way in which Anna's post works at the meta-level, too.

Phil's argument is that people resist rationality because, in effect, they've identified the two buckets "Think rationally" and "Spread memetic infections". They fear that saying "Yes" to "Think rationally" forces them to say "Yes" to the dangers inherent to merged buckets.

But Anna gives techniques for "de-merging" buckets in general if it turns out that some buckets were inappropriately merged, or if one bucket should have been several in the first place.

In other words, Anna's post essentially de-merges the two particular buckets "Think rationally" and "Spread memetic infections". You can go ahead and use rational thinking, even though you will risk inappropriately merging buckets, because you now have techniques for de-merging those buckets if you need to.

In this way, Anna's post may diminish the "memetic immune system" obstacle to rational thinking that Phil observed.

Leave a Line of Retreat

59 Eliezer_Yudkowsky 25 February 2008 11:57PM

"When you surround the enemy
Always allow them an escape route.
They must see that there is
An alternative to death."
        —Sun Tzu, The Art of War, Cloud Hands edition

"Don't raise the pressure, lower the wall."
        —Lois McMaster Bujold, Komarr

Last night I happened to be conversing with a nonrationalist who had somehow wandered into a local rationalists' gathering.  She had just declared (a) her belief in souls and (b) that she didn't believe in cryonics because she believed the soul wouldn't stay with the frozen body.  I asked, "But how do you know that?"  From the confusion that flashed on her face, it was pretty clear that this question had never occurred to her.  I don't say this in a bad way—she seemed like a nice person with absolutely no training in rationality, just like most of the rest of the human species.  I really need to write that book.

Most of the ensuing conversation was on items already covered on Overcoming Bias—if you're really curious about something, you probably can figure out a good way to test it; try to attain accurate beliefs first and then let your emotions flow from that—that sort of thing.  But the conversation reminded me of one notion I haven't covered here yet:

"Make sure," I suggested to her, "that you visualize what the world would be like if there are no souls, and what you would do about that.  Don't think about all the reasons that it can't be that way, just accept it as a premise and then visualize the consequences.  So that you'll think, 'Well, if there are no souls, I can just sign up for cryonics', or 'If there is no God, I can just go on being moral anyway,' rather than it being too horrifying to face.  As a matter of self-respect you should try to believe the truth no matter how uncomfortable it is, like I said before; but as a matter of human nature, it helps to make a belief less uncomfortable, before you try to evaluate the evidence for it."


Rationality Considered Harmful (In Politics)

9 The_Jaded_One 08 January 2017 10:36AM

Why you should be very careful about trying to openly seek truth in any political discussion


1. Rationality considered harmful for Scott Aaronson in the great gender debate

In 2015, complexity theorist and rationalist Scott Aaronson was foolhardy enough to step into the Gender Politics war on his blog, with a comment stating that the extreme feminism he had bought into made him hate himself and seek out ways to chemically castrate himself. The feminist blogosphere got hold of this and crucified him for it, and he has written a few followup blog posts about it. Recently I saw this comment by him on his blog:

As the comment 171 affair blew up last year, one of my female colleagues in quantum computing remarked to me that the real issue had nothing to do with gender politics; it was really just about the commitment to truth regardless of the social costs—a quality that many of the people attacking me (who were overwhelmingly from outside the hard sciences) had perhaps never encountered before in their lives. That remark cheered me more than anything else at the time.

 

2. Rationality considered harmful for Sam Harris in the islamophobia war

I recently heard a very angry, exasperated two-hour podcast by the New Atheist and political commentator Sam Harris about how badly he has been straw-manned, misrepresented, and trash-talked by his intellectual rivals (whom he collectively refers to as the "regressive left"). Sam Harris likes to tackle hard questions such as when torture is justified, which religions are more or less harmful than others, defence of freedom of speech, etc. Several times, Harris goes to the meta-level and sees clearly what is happening:

Rather than a searching and beautiful exercise in human reason to have conversations on these topics [ethics of torture, military intervention, Islam, etc], people are making it just politically so toxic, reputationally so toxic to even raise these issues that smart people, smarter than me, are smart enough not to go near these topics

Everyone on the left at the moment seems to be a mind reader... no matter how much you try to take their foot out of your mouth, the mere effort itself is going to be counted against you - you're someone who's in denial, or you don't even understand how racist you are, etc.

 

3. Rationality considered harmful when talking to your left-wing friends about genetic modification

In the SlateStarCodex comments, I posted a complaint that many left-wing people were responding very personally (and negatively) to my political views.

One long-term friend openly and pointedly asked whether we should still be friends over the subject of eugenics and genetic engineering. When I made a rational argument that some modifications of the human germ line, for example to permanently cure a genetic disease, may in fact be a good thing, this friend responded that "(s)he was beginning to wonder whether we should still be friends".

A large comment thread ensued, but the best comment I got was this one:

One of the useful things I have found when confused by something my brain does is to ask what it is *for*. For example: I get angry, the anger is counterproductive, but recognizing that doesn’t make it go away. What is anger *for*? Maybe it is to cause me to plausibly signal violence by making my body ready for violence or some such.

Similarly, when I ask myself what moral/political discourse among friends is *for* I get back something like “signal what sort of ally you would be/broadcast what sort of people you want to ally with.” This makes disagreements more sensible. They are trying to signal things about distribution of resources, I am trying to signal things about truth value, others are trying to signal things about what the tribe should hold sacred etc. Feeling strong emotions is just a way of signaling strong precommitments to these positions (i.e. I will follow the morality I am signaling now because I will be wracked by guilt if I do not. I am a reliable/predictable ally.) They aren’t mad at your positions. They are mad that you are signaling that you would defect when push came to shove about things they think are important.

Let me repeat that last one: moral/political discourse among friends is for "signalling what sort of ally you would be/broadcasting what sort of people you want to ally with". Moral/political discourse probably activates specially evolved brainware in human beings; that brainware has a purpose, and it isn't truthseeking. Politics is not about policy.

 

4. Takeaways

This post is already getting too long, so I deleted the section on lessons to be learned, but if there is interest I'll do a followup. Let me know what you think in the comments!

Meetup Discussion

7 luminosity 08 January 2017 05:14AM

One of the most valuable services the Less Wrong community has to offer is its meetup groups. However, it strikes me that there isn't a lot of knowledge sharing between different meetup groups. Presumably there's a lot that the different groups could learn from each other -- things that can be done, experiments that have or haven't worked out, procedural and organisational tips. Hence this post. Please go ahead and write a summary about your local Less Wrong meetup below:

  • What meetups do you run?
  • What's worked?
  • What hasn't?
  • How is the group organised?

 

Welcome to LessWrong (10th Thread, January 2017) (Thread A)

11 folkTheory 07 January 2017 05:43AM

(Thread B for January is here, created as a duplicate by accident)

Hi! Do you read the LessWrong website but haven't commented yet (or not very much)? Are you a bit scared of the harsh community, or do you worry that questions which are new and interesting to you could be old and boring to the older members?

This is the place for new members to pluck up their courage and ask what they've been wanting to ask. Or just to say hi.

The older members are strongly encouraged to be gentle and patient (or just skip the entire discussion if they can't).

Newbies, welcome!

 

The long version:

 

If you've recently joined the Less Wrong community, please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, what you value, how you came to identify as an aspiring rationalist or how you found us. You can skip right to that if you like; the rest of this post consists of a few things you might find helpful. More can be found at the FAQ.

 

A few notes about the site mechanics

To post your first comment, you must first confirm your e-mail address: when you signed up to create your account, an e-mail was sent to the address you provided, with a link that you need to follow. You must do this before you can post!

Less Wrong comments are threaded for easy following of multiple conversations. To respond to any comment, click the "Reply" link at the bottom of that comment's box. Within the comment box, links and formatting are achieved via Markdown syntax (you can click the "Help" link below the text box to bring up a primer).

You may have noticed that all the posts and comments on this site have buttons to vote them up or down, and all the users have "karma" scores which come from the sum of all their comments and posts. This immediate easy feedback mechanism helps keep arguments from turning into flamewars and helps make the best posts more visible; it's part of what makes discussions on Less Wrong look different from those anywhere else on the Internet.

However, it can feel really irritating to get downvoted, especially if one doesn't know why. It happens to all of us sometimes, and it's perfectly acceptable to ask for an explanation. (Sometimes it's the unwritten LW etiquette; we have different norms than other forums.) Take note when you're downvoted a lot on one topic, as it often means that several members of the community think you're missing an important point or making a mistake in reasoning, not just that they disagree with you! If you have any questions about karma or voting, please feel free to ask here.

Replies to your comments across the site, plus private messages from other users, will show up in your inbox. You can reach it via the little mail icon beneath your karma score on the upper right of most pages. When you have a new reply or message, it glows red. You can also click on any user's name to view all of their comments and posts.

All recent posts (from both Main and Discussion) are available here. At the same time, it's definitely worth your time commenting on old posts; veteran users look through the recent comments thread quite often (there's a separate recent comments thread for the Discussion section, for whatever reason), and a conversation begun anywhere will pick up contributors that way.  There's also a succession of open comment threads for discussion of anything remotely related to rationality.

Discussions on Less Wrong tend to end differently than in most other forums; a surprising number end when one participant changes their mind, or when multiple people clarify their views enough and reach agreement. More commonly, though, people will just stop when they've better identified their deeper disagreements, or simply "tap out" of a discussion that's stopped being productive. (Seriously, you can just write "I'm tapping out of this thread.") This is absolutely OK, and it's one good way to avoid the flamewars that plague many sites.

EXTRA FEATURES:

There's actually more than meets the eye here: look near the top of the page for the "WIKI", "DISCUSSION" and "SEQUENCES" links.

  • LW WIKI: This is our attempt to make searching by topic feasible, as well as to store information like common abbreviations and idioms. It's a good place to look if someone's speaking Greek to you.
  • LW DISCUSSION: This is a forum just like the top-level one, with two key differences: in the top-level forum, posts require the author to have 20 karma in order to publish, and any upvotes or downvotes on the post are multiplied by 10. Thus there's a lot more informal dialogue in the Discussion section, including some of the more fun conversations here.
  • SEQUENCES: A huge corpus of material mostly written by Eliezer Yudkowsky in his days of blogging at Overcoming Bias, before Less Wrong was started. Much of the discussion here will casually depend on or refer to ideas brought up in those posts, so reading them can really help with present discussions. Besides which, they're pretty engrossing in my opinion. They are also available in book form.

A few notes about the community

If you've come to Less Wrong to discuss a particular topic, this thread would be a great place to start the conversation. By commenting here, and checking the responses, you'll probably get a good read on what, if anything, has already been said here on that topic, what's widely understood, and what you might still need to take some time explaining.

If your welcome comment starts a huge discussion, then please move to the next step and create a LW Discussion post to continue the conversation; we can fit many more welcomes onto each thread if fewer of them sprout 400+ comments. (To do this: click "Create new article" in the upper right corner next to your username, then write the article, then at the bottom take the menu "Post to" and change it from "Drafts" to "Less Wrong Discussion". Then click "Submit". When you edit a published post, clicking "Save and continue" does correctly update the post.)

If you want to write a post about a LW-relevant topic, awesome! I highly recommend you submit your first post to Less Wrong Discussion; don't worry, you can later promote it from there to the main page if it's well-received. (It's much better to get some feedback before every vote counts for 10 karma—honestly, you don't know what you don't know about the community norms here.)

Alternatively, if you're still unsure where to submit a post, whether to submit it at all, would like some feedback before submitting, or want to gauge interest, you can ask / provide your draft / summarize your submission in the latest open comment thread. In fact, Open Threads are intended for anything 'worth saying, but not worth its own post', so please do dive in! Informally, there is also the unofficial Less Wrong IRC chat room, and you might also like to take a look at some of the other regular special threads; they're a great way to get involved with the community!

If you'd like to connect with other LWers in real life, we have meetups in various parts of the world. Check the wiki page for places with regular meetups, or the upcoming (irregular) meetups page. There's also a Facebook group. If you have your own blog or other online presence, please feel free to link it.

If English is not your first language, don't let that make you afraid to post or comment. You can get English help on Discussion- or Main-level posts by sending a PM to one of the following users (use the "send message" link on the upper right of their user page). Either put the text of the post in the PM, or just say that you'd like English help and you'll get a response with an email address. 
Normal_Anomaly 
Randaly 
shokwave 
Barry Cotter

A note for theists: you will find the Less Wrong community to be predominantly atheist, though not completely so, and most of us are genuinely respectful of religious people who keep the usual community norms. It's worth saying that we might think religion is off-topic in some places where you think it's on-topic, so be thoughtful about where and how you start explicitly talking about it; some of us are happy to talk about religion, some of us aren't interested. Bear in mind that many of us really, truly have given full consideration to theistic claims and found them to be false, so starting with the most common arguments is pretty likely just to annoy people. Anyhow, it's absolutely OK to mention that you're religious in your welcome post and to invite a discussion there.

A list of some posts that are pretty awesome

I recommend the major sequences to everybody, but I realize how daunting they look at first. So for purposes of immediate gratification, the following posts are particularly interesting/illuminating/provocative and don't require any previous reading:

More suggestions are welcome! Or just check out the top-rated posts from the history of Less Wrong. Most posts at +50 or more are well worth your time.

Welcome to Less Wrong, and we look forward to hearing from you throughout the site!

Actually Practicing Rationality and the 5-Second Level

5 lifelonglearner 06 January 2017 06:50AM

[I first posted this as a link to my blog post, but I'm reposting it as a focused article here that trims some of the fat from the original post, which was less accessible.]


I think a lot about heuristics and biases, and I admit that many of my ideas on rationality and debiasing get lost in the sea of my own thoughts.  They’re accessible, if I’m specifically thinking about rationality-esque things, but often invisible otherwise.  

That seems highly sub-optimal, considering that the whole point of having usable mental models isn’t to write fancy posts about them, but to, you know, actually use them.

To that end, I’ve been thinking about finding some sort of systematic way to integrate all of these ideas into my actual life.  

(If you’re curious, here’s the actual picture of what my internal “concept-verse” (w/ associated LW and CFAR memes) looks like)

 

[Image: MLU Mind Map v1.png, a mind map of the concept-verse described above. Open the image in a new tab for all the details.]

So I have all of these ideas, all of which look really great on paper and in thought experiments.  Some of them even have some sort of experimental backing.  Given this, how do I put them together into a kind of coherent notion?

Equivalently, what does it look like if I successfully implement these mental models?  What sorts of changes might I expect to see?  Then, knowing the end product, what kind of process can get me there?

One way of looking at it would be to say that if I implemented the techniques well, then I'd be better able to tackle my goals and get things done. Maybe my productivity would go up. That sort of makes sense. But this tells us nothing about how I'd actually go about using such skills.

We want to know how to implement these skills and then actually utilize them.

Yudkowsky gives a highly useful abstraction when he talks about the five-second level.  He gives some great tips on breaking down mental techniques into their component mental motions.  It’s a step-by-step approach that really goes into the details of what it feels like to undergo one of the LessWrong epistemological techniques.  We’d like our mental techniques to be actual heuristics that we can use in the moment, so having an in-depth breakdown makes sense.

Here’s my attempt at a 5-second-level breakdown for Going Meta, or "popping" out of one's head to stay mindful of the moment:

  1. Notice the feeling that you are being mentally “dragged” towards continuing an action.
    1. (It can feel like an urge, or your mind automatically making a plan to do something.  Notice your brain simulating you taking an action without much conscious input.)
  2. Remember that you have a 5-second-level series of steps to do something about it.
  3. Feel aversive towards continuing the loop.  Mentally shudder at the part of you that tries to continue.
  4. Close your eyes.  Take in a breath.
  5. Think about what 1-second action you could take to instantly cut off the stimulus from whatever loop you’re stuck in. (EX: Turning off the display, closing the window, moving to somewhere else).
  6. Tense your muscles and clench, actually doing said action.
  7. Run a search through your head, looking for an action labeled “productive”.  Try to remember things you’ve told yourself you “should probably do” lately.  
    1. (If you can’t find anything, pattern-match to find something that seems “productive-ish”.)
  8. Take note of what time it is.  Write it down.
  9. Do the new thing.  Finish.
  10. Note the end time.  Calculate how long you did work.

Next, the other part is actually accessing the heuristic in the situations where you want it.  We want it to be habitual.

Some quick searches of the existing research on habits turn up Charles Duhigg, author of The Power of Habit, and B J Fogg of Tiny Habits. Both models focus on two things: identifying the Thing you want to do, then setting triggers so you actually do it. (There's some similarity to CFAR's Trigger Action Plans.)

B J’s approach focuses on scaffolding new habits into existing routines, like brushing your teeth, which are already automatic.  Duhigg appears to be focused more on reinforcement and rewards, with several nods to Skinner.  CFAR views actions as self-reinforcing, so the reward isn’t even necessary— they see repetition as building automation.

Overlearning the material also seems to be useful in some contexts, particularly for acquiring procedural knowledge. And mental techniques do seem to be more like procedural knowledge.

For these mental skills specifically, we'd want them to go off irrespective of the time of day, so anchoring them to an existing routine might not be best. Having them fire in response to an internal state (EX: "When I notice myself being 'dragged' into a spiral, or automatically making plans to do a thing") may be more useful.
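To make this concrete, here is a minimal sketch of my own: an illustration of the trigger-action idea under hypothetical names (TriggerActionPlan, going_meta), not CFAR's actual technique or any existing library. The routine stands in for the 5-second-level breakdown above.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class TriggerActionPlan:
        trigger: Callable[[str], bool]  # predicate over a noticed internal state
        action: Callable[[], None]      # the rehearsed five-second routine

    def going_meta() -> None:
        # Stand-in for the breakdown above: breathe, cut the stimulus, switch tasks.
        print("Close your eyes, breathe, cut the stimulus, pick a productive task.")

    # The trigger keys off what the internal state feels like, not the clock
    # or an existing routine.
    tap = TriggerActionPlan(
        trigger=lambda state: "dragged" in state,
        action=going_meta,
    )

    noticed = "being dragged into a browsing spiral"
    if tap.trigger(noticed):
        tap.action()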


(Follow-up post forthcoming on concretely trying to apply habit research to implementing heuristics.)


[Link] I'm Not An Effective Altruist Because I Prefer...

5 ozymandias 28 December 2016 10:39PM
