Dumb Deplaning
So I just traveled to Portsmouth, VA for an experimental conference - in the sense that I don't expect conferences of this type to prove productive, but maybe I should try at least once - in the unlikely event that there are any local Overcoming Bias readers who want to drive out to Portsmouth for a meeting on, say, the evening of the 20th, email me - anyway, I am struck, for the Nth time, by how uncooperative people are in getting off planes.
Most people, as soon as they have a chance to make for the exit, do so - even if they need to take down luggage first. At any given time after the initial rush to the aisles, usually a single person is taking down luggage, while the whole line behind them waits. Then the line moves forward a little and the next person starts taking down their luggage.
In programming we call this a "greedy local algorithm". But since everyone does it, no one seems to feel "greedy".
How would I do it? Off the top of my head:
"Left aisle seats, please rise and move to your luggage. (Pause.) Left aisle seats, please retrieve your luggage. (Pause.) Left aisle seats, please deplane. (Pause.) Right aisle seats, please rise and move to your luggage..."
Making My Peace with Belief
I grew up in an atheistic household.
Almost needless to say, I was relatively hostile towards religion for most of my early life. A few things changed that.
First, the apology of a pastor. A friend of mine was proselytizing at me, and apparently discussed it with his pastor; the pastor apologized to my parents, and explained to my friend he shouldn't be trying to convert people. My friend apologized to me after considering the matter. We stayed friends for a little while afterwards, although I left that school, and we lost contact.
I think that was around the time that I realized that religion is, in addition to being a belief system, a way of life, and not necessarily a bad one.
The next was actually South Park's Mormonism episode, which pointed out that a belief system could be desirable on the merits of the way of life it represented, even if the beliefs themselves are stupid. This tied into Douglas Adams's comment on Feng Shui, that "...if you disregard for a moment the explanation that's actually offered for it, it may be there is something interesting going on" - which is to say, the explanation for the belief is not necessarily the -reason- for the belief, and that stupid beliefs may actually have something useful to offer - which then requires us to ask whether the beliefs are, in fact, stupid.
Which is to say, beliefs may be epistemically irrational while being instrumentally rational.
The next peace I made with belief actually came from quantum physics, and reading about how there were several disparate and apparently contradictory mathematical systems, which all predicted the same thing. It later transpired that they could all be generalized into the same mathematical system, but I hadn't read that far before the isomorphic nature of truth occurred to me; you can have multiple contradictory interpretations of the same evidence that all predict the same thing.
Up to this point, however, I still regarded beliefs as irrational, at least on an epistemological basis.
The next peace came from experiences living in a house that would have convinced most people that ghosts are real, which I have previously written about here. I think there are probably good explanations for every individual experience even if I don't know them, but am still somewhat flummoxed by the fact that almost all the bizarre experiences of my life revolve around the same physical location. I don't know if I would accept money to live in that house again, which I guess means that I wouldn't put money on the bet that there wasn't something fundamentally odd about the house itself - a quality of the house which I think the term "haunted" accurately conveys, even if its implications are incorrect.
If an AI in a first person shooter dies every time it walks into a green room, and experiences great disutility for death, how many times must it walk into a green room before it decides not to do that anymore? I'm reasonably confident on a rational level that there was nothing inherently unnatural about that house, nothing beyond explanation, but I still won't "walk into the green room."
That was the point at which I concluded that beliefs can be -rational-. Disregard for a moment the explanation that's actually offered for them, and just accept the notion that there may be something interesting going on underneath the surface.
If we were to hold scientific beliefs to the same standard we hold religious beliefs - holding the explanation responsible rather than the predictions - scientific beliefs really don't come off looking that good. The sun isn't the center of the universe; the heliocentric theory has been called "less wrong" than an earth-centric model of the universe, but that's because the -predictions- are better; the explanation itself is still completely, 100% wrong.
Likewise, if we hold religious beliefs to the same standard we hold scientific beliefs - holding the predictions responsible rather than the explanations - religious beliefs might just come off better than we'd expect.
LessWrong 2.0
Alternate titles: What Comes Next?, LessWrong is Dead, Long Live LessWrong!
You've seen the articles and comments about the decline of LessWrong. Why pay attention to this one? Because this time, I've talked to Nate at MIRI and Matt at Trike Apps about development for LW, and they're willing to make changes and fund them. (I've even found a developer willing to work on the LW codebase.) I've also talked to many of the prominent posters who've left about the decline of LW, and pointed out that the coordination problem could be deliberately solved if everyone decided to come back at once. Everyone who responded expressed displeasure that LW had faded and interest in a coordinated return, and often had some material that they thought they could prepare and have ready.
But before we leap into action, let's review the problem.
Take the EA survey, help the EA movement grow and potentially win $250 to your favorite charity
This year's EA Survey is now ready to be shared! This is a survey of all EAs to learn about the movement and how it can improve. The data collected in the survey is used to help EA groups improve and grow EA. Data is also used to populate the map of EAs, create new EA meetup groups, and create EA Profiles and the EA Donation Registry.
If you are an EA or otherwise familiar with the community, we hope you will take it using this link. All results will be anonymised and made publicly available to members of the EA community. As an added bonus, one random survey taker will be selected to win a $250 donation to their favorite charity.
Please share the survey with others who might be interested using this link rather than the one above: http://bit.ly/1OqsVWo
Vegetarianism Ideological Turing Test Results
Back in August I ran a Caplan Test (or more commonly an "Ideological Turing Test") both on Less Wrong and in my local rationality meetup. The topic was diet, specifically: Vegetarian or Omnivore?
If you're not familiar with Caplan Tests, I suggest reading Palladias' post on the subject or reading Wikipedia. The test I ran was pretty standard; thirteen blurbs were presented to the judges, each selected by the toss of a coin to be from either a vegetarian or an omnivore, and also randomly selected to be either genuine or written by an impostor trying to pass themselves off as a member of the other group. My main contribution, which I haven't seen in previous tests, was using credence/probability instead of a simple "I think they're X".
I originally chose vegetarianism because I felt like it's an issue which splits our community (and particularly my local community) pretty well. A third of test participants were vegetarians, and according to the 2014 census, only 56% of LWers identify as omnivores.
Before you see the results of the test, please take a moment to say aloud how well you think you can do at predicting whether someone participating in the test was genuine or a fake.
.
.
.
.
.
.
.
.
.
.
.
.
.
If you think you can do better than chance you're probably fooling yourself. If you think you can do significantly better than chance you're almost certainly wrong. Here are some statistics to back that claim up.
I got 53 people to judge the test. 43 were from LessWrong, and 10 were from my local group. Averaging across the entire group, 51.1% of judgments were correct. If my Chi^2 math is correct, the p-value for the null hypothesis is 57% on this data. (Note that this includes people who judged an entry as 50%. If we don't include those folks the success rate drops to 49.4%.)
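As a sanity check, the quoted p-value is easy to reproduce, assuming each of the 53 judges rated all thirteen entries (the post doesn't state this explicitly):

```python
from scipy.stats import chi2

n = 53 * 13                     # 689 total judgments, assuming no judge skipped entries
correct = round(0.511 * n)      # ~352 correct judgments
expected = n / 2                # null hypothesis: judges perform at chance
stat = sum((obs - expected) ** 2 / expected for obs in (correct, n - correct))
print(chi2.sf(stat, df=1))      # ~0.57, matching the quoted p-value
```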
In retrospect, this seemed rather obvious to me. Vegetarians aren't significantly different from omnivores. Unlike a religion or a political party, there aren't many cultural centerpieces to diet. Vegetarian judges did no better than omnivore judges, even when judging vegetarian entries. In other words, in this instance the minority doesn't possess any special powers for detecting other members of the in-group. This test shows null results; the thing that distinguishes vegetarians from omnivores is not familiarity with the other side's arguments or culture, at least not to the degree that we can distinguish at a glance.
More interesting, in my opinion, than the null results were the results I got on the calibration of the judges. Back when I asked you to say aloud how good you'd be, what did you say? Did the last three paragraphs seem obvious? Would it surprise you to learn that not a single one of the 53 judges held their guesses to a confidence band of 40%-60%? In other words, every single judge thought themselves decently able to discern genuine writing from fakery. The numbers suggest that every single judge was wrong.
(The flip-side to this is, of course, that every entrant to the test won! Congratulations rationalists: signs point to you being able to pass as vegetarians/omnivores when you try, even if you're not in that category. The average credibility of an impostor entry was 59%, while the average credibility of a genuine response was 55%. No impostors got an average credibility below 49%.)
Using the logarithmic scoring rule for the calibration game we can measure the error of the community. The average judge got a score of -543. For comparison, a judge that answered 50% ("I don't know") to all questions would've gotten a score of 0. Only eight judges got a positive score, and only one had a score higher than 100 (consistent with random chance). This is actually one area where Less Wrong should feel good. We're not at all calibrated... but for this test at least, the judges from the website were much better calibrated than my local community (who mostly just lurk). If we separate the two groups we see that the average score for my community was -949, while LW had an average of -448. Given that I restricted the choices to multiples of 10, a random selection of credences gives an average score of -921.
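The post doesn't spell out the scoring formula, but the facts that answering 50% scores zero and that scores swing hugely negative are consistent with the standard calibration-game rule of 100 * log2(2p) per question, where p is the credence assigned to the correct answer. A sketch under that assumption:

```python
import math

def log_score(p):
    """Assumed calibration-game rule: 100 * log2(2p), where p is the
    credence placed on the true answer. p = 0.5 scores exactly 0."""
    return 100 * math.log2(2 * p)

print(log_score(0.9))   # ~ +85: being 90% confident and right gains a little
print(log_score(0.1))   # ~ -232: being 90% confident and wrong loses a lot
```

The asymmetry is the point: a handful of confident misses swamps many confident hits, which is how averages like -543 come about.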
In short, the LW community didn't prove to be any better at discerning fact from fiction, but it was significantly less overconfident. More de-biasing needs to be done, however! The next time you think of a probability to reflect your credence, ask yourself "Is this the sort of thing that anyone would know? Is this the sort of thing I would know?" That answer will probably be "no" a lot more than it feels like from the inside.
Full data (minus contact info) can be found here.
Those of you who submitted a piece of writing that I used, or who judged the test and left their contact information: I will be sending out personal scores very soon (probably by this weekend). Deep apologies regarding the delay on this post. I had a vacation in late August and it threw off my attention to this project.
EDIT: Here's a histogram of the identification accuracy.

EDIT 2: For reference, here are the entries that were judged.
Two Growth Curves
Sometimes, it helps to take a model that part of you already believes, and to make a visual image of your model so that more of you can see it.
One of my all-time favorite examples of this:
I used to often hesitate to ask dumb questions, to publicly try skills I was likely to be bad at, or to visibly/loudly put forward my best guesses in areas where others knew more than me.
I was also frustrated with this hesitation, because I could feel it hampering my skill growth. So I would try to convince myself not to care about what people thought of me. But that didn't work very well, partly because what folks think of me is in fact somewhat useful/important.
Then, I got out a piece of paper and drew how I expected the growth curves to go.

In blue, I drew the apparent-coolness level that I could achieve if I stuck with the "try to look good" strategy. In brown, I drew the apparent-coolness level I'd have if I instead made mistakes as quickly and loudly as possible -- I'd look worse at first, but then I'd learn faster, eventually overtaking the blue line.
Suddenly, instead of pitting my desire to become smart against my desire to look good, I could pit my desire to look good now against my desire to look good in the future :)
I return to this image of two growth curves often when I'm faced with an apparent tradeoff between substance and short-term appearances. (E.g., I used to often find myself scurrying to get work done, or to look productive / not-horribly-behind today, rather than trying to build the biggest chunks of capital for tomorrow. I would picture these growth curves.)
Vegetarian/Omnivore Ideological Turing Test Judging Round!
Come one, come all! Test your prediction skills in my Caplan Test (more commonly called an Ideological Turing Test). To read more about such tests, check out palladias' post here.
The Test: http://goo.gl/forms/7f4pQfxB8I
In the test, you will be asked to read responses written by rationalists from LessWrong (and the Columbus Ohio LW group). These responses are either from a vegetarian or omnivore (as decided by a coin flip) and are either their genuine response or a fake response where they pretend to be a member of the other group (also decided by coin flip). If you'd like to participate (and the more, the merrier) you'll be asked to distinguish fake from real by assigning a credence to the proposition that a given response is genuine.
I'll be posting general statistics on how people did at a later date (probably early September). Please use the comments on this thread to discuss or ask questions. Do not make predictions in the comments. I got more entries than would be reasonable to ask people to judge, so if your entry didn't make it into the test, I'm sorry. We might be able to run a second round of judging. If you're interested in judging more entries, send me a PM or leave a comment. I tended to favor the first entries I got, when selecting who got in.
Experiences in applying "The Biodeterminist's Guide to Parenting"
I'm posting this because LessWrong was very influential on how I viewed parenting, particularly the emphasis on helping one's brain work better. In this context, creating and influencing another person's brain is an awesome responsibility.
It turned out to be a lot more anxiety-provoking than I expected. I don't think that's necessarily a bad thing, as the possibility of screwing up someone's brain should make a parent anxious, but it's something to be aware of. I've heard some blithe "Rational parenting could be a very high-impact activity!" statements from childless LWers who may be interested to hear some experiences in actually applying that.
One thing that really scared me about trying to raise a child with the healthiest-possible brain and body was the possibility that I might not love her if she turned out to not be smart. 15 months in, I'm no longer worried. Evolution has been very successful at producing parents and children that love each other despite their flaws, and our family is no exception. Our daughter Lily seems to be doing fine, but if she turns out to have disabilities or other problems, I'm confident that we'll roll with the punches.
Cross-posted from The Whole Sky.
Before I got pregnant, I read Scott Alexander's (Yvain's) excellent Biodeterminist's Guide to Parenting and was so excited to have this knowledge. I thought how lucky my child would be to have parents who knew and cared about how to protect her from things that would damage her brain.
Real life, of course, got more complicated. It's one thing to intend to avoid neurotoxins, but another to arrive at the grandparents' house and find they've just had ant poison sprayed. What do you do then?
Here are some tradeoffs Jeff and I have made between things that are good for children in one way but bad in another, or things that are good for children but really difficult or expensive.
Germs and parasites
The hygiene hypothesis states that lack of exposure to germs and parasites increases the risk of auto-immune disease. Our pediatrician recommended letting Lily play in the dirt for this reason.
While exposure to animal dander and pollution increases asthma risk later in life, it seems that being exposed to these in the first year of life actually protects against asthma. Apparently if you're going to live in a house with roaches, you should do it in the first year or not at all.
Except some stuff in dirt is actually bad for you.
Scott writes:
Parasite-infestedness of an area correlates with national IQ at about r = -0.82. The same is true of US states, with a slightly reduced correlation coefficient of -0.67 (p<0.0001). . . . When an area eliminates parasites (like the US did for malaria and hookworm in the early 1900s) the IQ for the area goes up at about the right time.
Living with cats as a child seems to increase risk of schizophrenia, apparently via toxoplasmosis. But in order to catch toxoplasmosis from a cat, you have to eat its feces during the two weeks after it first becomes infected (which it’s most likely to do by eating birds or rodents carrying the disease). This makes me guess that most kids get it through tasting a handful of cat litter, dirt from the yard, or sand from the sandbox rather than simply through cat ownership. We live with indoor cats who don’t seem to be mousers, so I’m not concerned about them giving anyone toxoplasmosis. If we build Lily a sandbox, we’ll keep it covered when not in use.
The evidence is mixed about whether infections like colds during the first year of life increase or decrease your risk of asthma later. After the newborn period, we defaulted to being pretty casual about germ exposure.
Toxins in buildings
Our experiences with lead. Our experiences with mercury.
In some areas, it’s not that feasible to live in a house with zero lead. We live in Boston, where 87% of the housing was built before lead paint was banned. Even in a new building, we’d need to go far out of town before reaching soil that wasn’t near where a lead-painted building had been.
It is possible to do some renovations without exposing kids to lead. Jeff recently did some demolition of walls with lead paint, very carefully sealed off and cleaned up, while Lily and I spent the day elsewhere. Afterwards her lead level was no higher than it had been.
But Jeff got serious lead poisoning as a toddler while his parents did major renovations on their old house. If I didn’t think I could keep the child away from the dust, I wouldn’t renovate.
Recently a house across the street from us was gutted, with workers throwing debris out the windows and creating big plumes of dust (presumably lead-laden) that blew all down the street. Later I realized I should have called city building inspection services, which would have at least made them carry the debris into the dumpster instead of throwing it from the second story.
Floor varnish releases formaldehyde and other nasties as it cures. We kept Lily out of the house for a few weeks after Jeff redid the floors. We found it worthwhile to pay rent at our previous house in order to not have to live in the new house while this kind of work was happening.
Pressure-treated wood was treated with arsenic and chromium until around 2004 in the US. It has a greenish tint, though this may have faded with time. Playing on playsets or decks made of such wood increases children's cancer risk. It should not be used for furniture (I thought this would be obvious, but apparently it wasn't to some of my handyman relatives).
I found it difficult to know how to deal with fresh paint and other fumes in my building at work while I was pregnant. Women of reproductive age have a heightened sense of smell, and many pregnant women have heightened aversion to smells, so you can literally smell things some of your coworkers can’t (or don’t mind). The most critical period of development is during the first trimester, when most women aren’t telling the world they’re pregnant (because it’s also the time when a miscarriage is most likely, and if you do lose the pregnancy you might not want to have to tell the world). During that period, I found it difficult to explain why I was concerned about the fumes from the roofing adhesive being used in our building. I didn’t want to seem like a princess who thought she was too good to work in conditions that everybody else found acceptable. (After I told them I was pregnant, my coworkers were very understanding about such things.)
Food
Recommendations usually focus on what you should eat during pregnancy, but obviously children’s brain development doesn’t stop there. I’ve opted to take precautions with the food Lily and I eat for as long as I’m nursing her.
Claims that pesticide residues are poisoning children scare me, although most scientists seem to think the paper cited is overblown. Other sources say the levels of pesticides in conventionally grown produce are fine. We buy organic produce at home but eat whatever we’re served elsewhere.
I would love to see a study with families randomly selected to receive organic produce for the first 8 years of the kids’ lives, then looking at IQ and hyperactivity. But no one’s going to do that study because of how expensive 8 years of organic produce would be.
The Biodeterminist’s Guide doesn’t mention PCBs in the section on fish, but fish (particularly farmed salmon) are a major source of these pollutants. They don’t seem to be as bad as mercury, but are neurotoxic. Unfortunately their half-life in the body is around 14 years, so if you have even a vague idea of getting pregnant ever in your life you shouldn’t be eating farmed salmon (or Atlantic/farmed salmon, bluefish, wild striped bass, white and Atlantic croaker, blackback or winter flounder, summer flounder, or blue crab).
I had the best intentions of eating lots of the right kind of high-omega-3, low-pollutant fish during and after pregnancy. Unfortunately, fish was the only food I developed an aversion to. Now that Lily is eating food on her own, we tried several sources of omega-3 and found that kippered herring was the only success. Lesson: it’s hard to predict what foods kids will eat, so keep trying.
In terms of hassle, I underestimated how long I would be “eating for two” in the sense that anything I put in my body ends up in my child’s body. Counting pre-pregnancy (because mercury has a half-life of around 50 days in the body, so sushi you eat before getting pregnant could still affect your child), pregnancy, breastfeeding, and presuming a second pregnancy, I’ll probably spend about 5 solid years feeding another person via my body, sometimes two children at once. That’s a long time in which you have to consider the effect of every medication, every cup of coffee, every glass of wine on your child. There are hardly any medications considered completely safe during pregnancy and lactation—most things are in Category C, meaning there’s some evidence from animal trials that they may be bad for human children.
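To put the mercury figure in perspective: with a half-life of about 50 days, the fraction of a dose remaining after $t$ days is roughly

$$\left(\tfrac{1}{2}\right)^{t/50},$$

so about a quarter of the mercury from a pre-conception sushi dinner is still present 100 days later, and about an eighth after 150 days.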
Fluoride
Too much fluoride is bad for children’s brains. The CDC recently recommended lowering fluoride levels in municipal water (though apparently because of concerns about tooth discoloration more than neurotoxicity). Around the same time, the American Dental Association began recommending the use of fluoride toothpaste as soon as babies have teeth, rather than waiting until they can rinse and spit.
Cavities are actually a serious problem even in baby teeth, because of the pain and possible infection they cause children. Pulling them messes up the alignment of adult teeth. Drilling on children too young to hold still requires full anesthesia, which is dangerous itself.
But Lily isn’t particularly at risk for cavities. 20% of children get a cavity by age six, and they are disproportionately poor, African-American, and particularly Mexican-American children (presumably because of different diet and less ability to afford dentists). 75% of cavities in children under 5 occur in 8% of the population.
We decided to have Lily brush without toothpaste, avoid juice and other sugary drinks, and see the dentist regularly.
Home pesticides
One of the most commonly applied insecticides makes kids less smart. This isn’t too surprising, given that it kills insects by disabling their nervous system. But it’s not something you can observe on a small scale, so it’s not surprising that the exterminator I talked to brushed off my questions with “I’ve never heard of a problem!”
If you get carpenter ants in your house, you basically have to choose between poisoning them or letting them structurally damage the house. We’ve only seen a few so far, but if the problem progresses, we plan to:
1) remove any rotting wood in the yard where they could be nesting
2) have the perimeter of the building sprayed
3) place gel bait in areas kids can’t access
4) only then spray poison inside the house.
If we have mice we’ll plan to use mechanical traps rather than poison.
Flame retardants
Since the 1970s, California has required a high degree of flame resistance from furniture. In practice this meant that US manufacturers sprayed flame-retardant chemicals on anything made of polyurethane foam, such as sofas, rug pads, nursing pillows, and baby mattresses.
The law recently changed, due to growing acknowledgement that the carcinogenic and neurotoxic chemicals were more dangerous than the fires they were supposed to be preventing. Even firefighters opposed the use of the flame retardants, because when people die in fires it’s usually from smoke inhalation rather than burns, and firefighters don’t want to breathe the smoke from your toxic sofa (which will eventually catch fire even with the flame retardants).
We’ve opted to use furniture from companies that have stopped using flame retardants (like Ikea and others listed here). Apparently futons are okay if they’re stuffed with cotton rather than foam. We also have some pre-1970s furniture that tested clean for flame retardants. You can get foam samples tested for free.
The main vehicle for children ingesting flame retardants is that the chemicals settle into dust on the floor, and children crawl around in the dust. If you don’t want to get rid of your furniture, frequent damp-mopping would probably help.
The standards for mattresses are so stringent that the chemical sprays aren’t generally used, and instead most mattresses are wrapped in a flame-resistant barrier which apparently isn’t toxic. I contacted the companies that made our mattresses and they’re fine.
Ratings for chemical safety of children’s car seats here.
Thoughts on IQ
A lot of people, when I start talking like this, say things like “Well, I lived in a house with lead paint/played with mercury/etc. and I’m still alive.” And yes, I played with mercury as a child, and Jeff is still one of the smartest people I know even after getting acute lead poisoning as a child.
But I do wonder if my mind would work a little better without the mercury exposure, and if Jeff would have had an easier time in school without the hyperactivity (a symptom of lead exposure). Given the choice between a brain that works a little better and one that works a little worse, who wouldn’t choose the one that works better?
We’ll never know how an individual’s nervous system might have been different with a different childhood. But we can see population-level effects. The Environmental Protection Agency, for example, is fine with calculating the expected benefit of making coal plants stop releasing mercury by looking at the expected gains in terms of children’s IQ and increased earnings.
Scott writes:
A 15 to 20 point rise in IQ, which is a little more than you get from supplementing iodine in an iodine-deficient region, is associated with half the chance of living in poverty, going to prison, or being on welfare, and with only one-fifth the chance of dropping out of high-school (“associated with” does not mean “causes”).
Salkever concludes that for each lost IQ point, males experience a 1.93% decrease in lifetime earnings and females experience a 3.23% decrease. If Lily would earn about what I do, saving her one IQ point would save her $1600 a year or $64000 over her career. (And that’s not counting the other benefits she and others will reap from her having a better-functioning mind!) I use that for perspective when making decisions. $64000 would buy a lot of the posh prenatal vitamins that actually contain iodine, or organic food, or alternate housing while we’re fixing up the new house.
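For the curious, the arithmetic behind those figures is consistent with roughly a $50,000 salary and a 40-year career (my assumptions, not numbers she states):

$$0.0323 \times \$50{,}000 \approx \$1{,}600 \text{ per year}, \qquad \$1{,}600 \times 40 \approx \$64{,}000.$$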
Conclusion
There are times when Jeff and I prioritize social relationships over protecting Lily from everything that might harm her physical development. It’s awkward to refuse to go to someone’s house because of the chemicals they use, or to refuse to eat food we’re offered. Social interactions are good for children’s development, and we value those as well as physical safety. And there are times when I’ve had to stop being so careful because I was getting paralyzed by anxiety (literally perched in the rocker with the baby trying not to touch anything after my in-laws scraped lead paint off the outside of the house).
But we also prioritize neurological development more than most parents, and we hope that will have good outcomes for Lily.
In praise of gullibility?
I was recently re-reading a piece by Yvain/Scott Alexander called Epistemic Learned Helplessness. It's a very insightful post, as is typical for Scott, and I recommend giving it a read if you haven't already. In it he writes:
When I was young I used to read pseudohistory books; Immanuel Velikovsky's Ages in Chaos is a good example of the best this genre has to offer. I read it and it seemed so obviously correct, so perfect, that I could barely bring myself to bother to search out rebuttals.
And then I read the rebuttals, and they were so obviously correct, so devastating, that I couldn't believe I had ever been so dumb as to believe Velikovsky.
And then I read the rebuttals to the rebuttals, and they were so obviously correct that I felt silly for ever doubting.
And so on for several more iterations, until the labyrinth of doubt seemed inescapable.
He goes on to conclude that the skill of taking ideas seriously - often considered one of the most important traits a rationalist can have - is a dangerous one. After all, it's very easy for arguments to sound convincing even when they're not, and if you're too easily swayed by argument you can end up with some very absurd beliefs (like that Venus is a comet, say).
This post really resonated with me. I've had several experiences similar to what Scott describes, of being trapped between two debaters who both had a convincingness that exceeded my ability to discern truth. And my reaction in those situations was similar to his: eventually, after going through the endless chain of rebuttals and counter-rebuttals, changing my mind at each turn, I was forced to throw up my hands and admit that I probably wasn't going to be able to determine the truth of the matter - at least, not without spending a lot more time investigating the different claims than I was willing to. And so in many cases I ended up adopting a sort of semi-principled stance of agnosticism: unless it was a really really important question (in which case I was sort of obligated to do the hard work of investigating the matter to actually figure out the truth), I would just say I don't know when asked for my opinion.
[Non-exhaustive list of areas in which I am currently epistemically helpless: geopolitics (in particular the Israel/Palestine situation), anthropics, nutrition science, population ethics]
All of which is to say: I think Scott is basically right here, in many cases we shouldn't have too strong of an opinion on complicated matters. But when I re-read the piece recently I was struck by the fact that his whole argument could be summed up much more succinctly (albeit much more pithily) as:
"Don't be gullible."
Huh. Sounds a lot more obvious that way.
Now, don't get me wrong: this is still good advice. I think people should endeavour to not be gullible if at all possible. But it makes you wonder: why did Scott feel the need to write a post denouncing gullibility? After all, most people kind of already think being gullible is bad - who exactly is he arguing against here?
Well, recall that he wrote the post in response to the notion that people should believe arguments and take ideas seriously. These sound like good, LW-approved ideas, but note that unless you're already exceptionally smart or exceptionally well-informed, believing arguments and taking ideas seriously is tantamount to...well, to being gullible. In fact, you could probably think of gullibility as a kind of extreme and pathological form of lightness; a willingness to be swept away by the winds of evidence, no matter how strong (or weak) they may be.
There seems to be some tension here. On the one hand we have an intuitive belief that gullibility is bad; that the proper response to any new claim should be skepticism. But on the other hand we also have some epistemic norms here at LW that are - well, maybe they don't endorse being gullible, but they don't exactly not endorse it either. I'd say the LW memeplex is at least mildly friendly towards the notion that one should believe conclusions that come from convincing-sounding arguments, even if they seem absurd. A core tenet of LW is that we change our mind too little, not too much, and we're certainly all in favour of lightness as a virtue.
Anyway, I thought about this tension for a while and came to the conclusion that I had probably just lost sight of my purpose. The goal of (epistemic) rationality isn't to not be gullible or not be skeptical - the goal is to form correct beliefs, full stop. Terms like gullibility and skepticism are useful to the extent that people tend to be systematically overly accepting or dismissive of new arguments - individual beliefs themselves are simply either right or wrong. So, for example, if we do studies and find out that people tend to accept new ideas too easily on average, then we can write posts explaining why we should all be less gullible, and give tips on how to accomplish this. And if on the other hand it turns out that people actually accept far too few new ideas on average, then we can start talking about how we're all much too skeptical and how we can combat that. But in the end, in terms of becoming less wrong, there's no sense in which gullibility would be intrinsically better or worse than skepticism - they're both just words we use to describe deviations from the ideal, which is accepting only true ideas and rejecting only false ones.
This answer basically wrapped the matter up to my satisfaction, and resolved the sense of tension I was feeling. But afterwards I was left with an additional interesting thought: might gullibility be, if not a desirable end point, then an easier starting point on the path to rationality?
That is: no one should aspire to be gullible, obviously. That would be aspiring towards imperfection. But if you were setting out on a journey to become more rational, and you were forced to choose between starting off too gullible or too skeptical, could gullibility be an easier initial condition?
I think it might be. It strikes me that if you start off too gullible you begin with an important skill: you already know how to change your mind. In fact, changing your mind is in some ways your default setting if you're gullible. And considering that like half the freakin sequences were devoted to learning how to actually change your mind, starting off with some practice in that department could be a very good thing.
I consider myself to be...well, maybe not more gullible than average in absolute terms - I don't get sucked into pyramid schemes or send money to Nigerian princes or anything like that. But I'm probably more gullible than average for my intelligence level. There's an old discussion post I wrote a few years back that serves as a perfect demonstration of this (I won't link to it out of embarrassment, but I'm sure you could find it if you looked). And again, this isn't a good thing - to the extent that I'm overly gullible, I aspire to become less gullible (Tsuyoku Naritai!). I'm not trying to excuse any of my past behaviour. But when I look back on my still-ongoing journey towards rationality, I can see that my ability to abandon old ideas at the (relative) drop of a hat has been tremendously useful so far, and I do attribute that ability in part to years of practice at...well, at believing things that people told me, and sometimes gullibly believing things that people told me. Call it epistemic deferentiality, or something - the tacit belief that other people know better than you (especially if they're speaking confidently) and that you should listen to them. It's certainly not a character trait you're going to want to keep as a rationalist, and I'm still trying to do what I can to get rid of it - but as a starting point? You could do worse, I think.
Now, I don't pretend that the above is anything more than a plausibility argument, and maybe not a strong one at that. For one, I'm not sure how well this idea carves reality at its joints - after all, gullibility isn't quite the same thing as lightness, even if they're closely related. For another, if the above were true, you would probably expect LWers to be more gullible than average. But that doesn't seem quite right - while LW is admirably willing to engage with new ideas, no matter how absurd they might seem, the default attitude towards a new idea on this site is still one of intense skepticism. Post something half-baked on LW and you will be torn to shreds. Which is great, of course, and I wouldn't have it any other way - but it doesn't really sound like the behaviour of a website full of gullible people.
(Of course, on the other hand it could be that LWers really are more gullible than average, but they're just smart enough to compensate for it.)
Anyway, I'm not sure what to make of this idea, but it seemed interesting and worth a discussion post at least. I'm curious to hear what people think: does any of the above ring true to you? How helpful do you think gullibility is, if it is at all? Can you be "light" without being gullible? And for the sake of collecting information: do you consider yourself to be more or less gullible than average for someone of your intelligence level?
Approximating Solomonoff Induction
Solomonoff Induction is a sort of mathematically ideal specification of machine learning. It works by considering every possible computer program, checking which ones would have produced the data, and weighting each by its prior probability, with shorter programs getting exponentially more weight.
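In symbols (a standard formulation, not spelled out in this post): the weight Solomonoff Induction assigns to a data string $x$ is

$$M(x) \;=\; \sum_{p \,:\, U(p) \text{ starts with } x} 2^{-\ell(p)},$$

where $U$ is a fixed universal computer, the sum runs over programs $p$ whose output begins with $x$, and $\ell(p)$ is the length of $p$ in bits. Each extra bit of program length halves a hypothesis's weight, which is where the preference for simple programs comes from.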
Obviously Solomonoff Induction is impossible to do in the real world. But it forms the basis of AIXI and other theoretical work in AI. It's also a counterargument to the no-free-lunch theorem: we don't care about the space of all possible datasets, only the ones generated by some algorithm. It's even been proposed as a basis for a universal intelligence test.
Many people believe that trying to approximate Solomonoff Induction is the way forward in AI. And any machine learning algorithm that actually works, to some extent, must be an approximation of Solomonoff Induction.
But how do we go about trying to approximate true Solomonoff Induction? It's basically an impossible task, even if you add restrictions to remove all the obvious problems like infinite loops and non-halting behavior. The space of possibilities is just too huge to search through reasonably. And it's discrete - you can't just flip a few bits in a program and find another similar program.
We can simplify the problem a great deal by searching through logic circuits instead. Some people disagree about whether logic circuits should be classified as Turing complete, but it's not really important. We still get the best property of Solomonoff Induction: most interesting problems can be modelled much more naturally. In the worst case you have some overhead to specify the memory cells you need to emulate a Turing machine.
Logic circuits have some nicer properties than arbitrary computer programs, but they are still discrete and hard to do inference on. To fix this we can make continuous versions of logic circuits - go back to analog. These are capable of computing all the same functions, but work with real-valued states instead of binary ones.
Instead of flipping between discrete states, we can slightly adjust the strength of a connection between gates, and the behavior changes only slightly. This is very nice, because we have algorithms like MCMC that can efficiently approximate true Bayesian inference on continuous parameters.
And we are no longer restricted to boolean gates; we can use any function of real numbers, like one that takes a weighted sum of all of its inputs, or one that squishes a real number into the interval between 0 and 1.
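A minimal sketch of such a continuous gate (the function names are mine, for illustration): a weighted sum of real-valued inputs squashed into the interval (0, 1).

```python
import numpy as np

def sigmoid(z):
    # squishes any real number into the interval (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def continuous_gate(inputs, weights, bias):
    # A "soft" gate: weighted sum of real-valued inputs, then squash.
    # With large weights it approximates a boolean gate.
    return sigmoid(inputs @ weights + bias)

# With weights of +10 and a bias of -15, the gate behaves like AND:
on, off = 1.0, 0.0
w, b = np.array([10.0, 10.0]), -15.0
print(continuous_gate(np.array([on, on]), w, b))    # ~1: both inputs on
print(continuous_gate(np.array([on, off]), w, b))   # ~0: only one input on
```

Shrinking the weights smoothly interpolates between this AND-like behavior and something gentler, which is exactly the continuity we wanted.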
We can also ask how much slightly changing a gate's input changes its output. Then we can go to each of the gates that feed into it from the previous time step, and ask how much changing each of their inputs changes their outputs, and therefore the output of the first gate.
And we can go to those gates' inputs, and so on, chaining all the way back through the whole circuit, finding out how much a slight change to each connection will change the final output. This is called the gradient, and with it we can do gradient descent: change each parameter slightly in the direction that moves the output the way we want.
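Here is that chain-rule bookkeeping in miniature, for a circuit of a single continuous gate (a toy example with made-up numbers, not anything from the post):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 2))                        # 100 examples, 2 real-valued inputs
target = sigmoid(x @ np.array([1.5, -2.0]) + 0.5)    # behavior we want the circuit to copy

w, b = np.zeros(2), 0.0                              # connection strengths, initially zero
for step in range(5000):
    out = sigmoid(x @ w + b)                         # forward pass through the gate
    err = out - target                               # how wrong each output is
    grad_pre = err * out * (1.0 - out)               # chain rule back through the squash
    w -= 0.5 * (x.T @ grad_pre) / len(x)             # nudge each connection against the error
    b -= 0.5 * grad_pre.mean()

print(w, b)  # drifts toward [1.5, -2.0] and 0.5
```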
This is a very efficient optimization algorithm. With it we can rapidly find circuits that fit functions we want. Like predicting the price of a stock given the past history, or recognizing a number in an image, or something like that.
But this isn't quite Solomonoff Induction, since we are finding the single best model instead of weighting the whole space of possible models. This matters because each model is essentially a hypothesis, and there can be multiple hypotheses that fit the data equally well yet predict different things.
There are many tricks for approximating this. For example, you can randomly turn off each gate with 50% probability and optimize the whole circuit to cope with that; somewhat surprisingly, this approximates the results of true Bayesian inference. You can also fit a distribution over each parameter, instead of a single value, and approximate Bayesian inference that way.
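The first trick is known in the literature as dropout, and the Bayesian-flavored use of it samples many random on/off masks at prediction time, treating each mask as one sampled sub-circuit. A sketch (the circuit sizes and weights here are arbitrary placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(2, 16))       # connections into 16 hidden gates
W2 = rng.normal(size=(16, 1))       # connections into one output gate

def forward(x, drop=0.5):
    h = np.maximum(x @ W1, 0.0)     # hidden gates (here: rectified linear)
    mask = rng.random(h.shape) > drop
    h = h * mask / (1.0 - drop)     # randomly switch off gates, rescale survivors
    return (h @ W2).item()

x = np.array([0.3, -0.7])
samples = [forward(x) for _ in range(1000)]   # each mask is one sampled sub-model
print(np.mean(samples), np.std(samples))      # a predictive mean and spread,
                                              # rather than a single best guess
```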
Although I never said it, everything I've mentioned about continuous circuits is equivalent to Artificial Neural Networks. I've shown how they can be derived from first principles. My goal was to show that ANNs do approximate true Solomonoff Induction. I've found the Bayes-Structure.
It's worth mentioning that Solomonoff Induction has some problems. It's still an ideal way to do inference on data; it just has problems with self-reference. An AI based on SI might do bad things like believe in an afterlife, or replace its reward signal with an artificial one (e.g. drugs). It might not fully comprehend that it's just a computer, and exists inside the world that it is observing.
Interestingly, humans also have these problems to some degree.
Reposted from my blog here.