It only indicates that experienced rationalists and proto-rationalists treat their beliefs in different ways. Proto-rationalists form a belief, play with it in their heads, and then do whatever they were going to do anyway - usually some variant on what everyone else does. Experienced rationalists form a belief, examine the consequences, and then act strategically to get what they want.
Alternate hypothesis: the experienced rationalists are also doing what everyone else (in their community) is doing; they just consider a different group of people to be their community.
My immediate thought was that there is a third variable controlling both experience in rationality and willingness to pay for cryonics, such as 'living or hanging out in the Bay Area'.
and rationalist training probably makes you less likely to believe cryonics will work.
I like this post, but this conclusion seems too strong. There could e.g. be a selection effect, in that people with certain personality traits were less likely to believe in cryonics, more likely to take ideas seriously, and more likely to stick around on LW instead of forgetting the site after the first few months. In that case, "rationalist training" wouldn't be the cause anymore.
If we distinguish between
"experienced rationalists" who are signed up for cryonics
and
"experienced rationalists" who are not signed up for cryonics
... what is the average value of P(Cryonics) for each of these subpopulations?
Going by only the data Yvain made public, I defined "experienced rationalists" as those people who have 1000 karma or more (this might be slightly different from Yvain's sample, but it looked as if most who had that much karma had been in the community for at least 2 years), and looked only at those experienced rationalists who recorded both a cryonics probability and their cryonics status. Here is what we get; note that all data are given as percentages, so 50 means 50% confidence (1 in 2), while 0.5 means 0.5% confidence (1 in 200):
For those who said "No - and do not want to sign up for cryonics", the cryonics success probability estimates (conditional on no global catastrophe) had (Q1, median, Q3) = (0.03, 1, 1), with mean 0.849 and standard deviation 0.728 (N = 32).
For those who said "No - still considering it", the figures were (5, 5, 10), with mean 7.023 and standard deviation 2.633 (N = 44).
For those who wanted to sign up but for some reason hadn't yet (either it's not available in their area - maybe worth moving for? - or they are otherwise procrastinating), the figures were (15, 25, 37), with mean 32.0...
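For anyone who wants to reproduce these summaries from the published spreadsheet, here is a minimal sketch; the file name and column headers are hypothetical stand-ins for whatever Yvain's CSV actually uses:

```python
import csv
import statistics

def summarize(probs):
    """Return (Q1, median, Q3, mean, sample stdev) for a list of percentage estimates."""
    q1, median, q3 = statistics.quantiles(probs, n=4)
    return q1, median, q3, statistics.mean(probs), statistics.stdev(probs)

# Hypothetical path and column names; substitute whatever headers the real survey CSV uses.
groups = {}
with open("survey.csv", newline="") as f:
    for row in csv.DictReader(f):
        try:
            karma = float(row["Karma"])
            p_cryonics = float(row["PCryonics"])  # in percent, e.g. 50 means 50%
        except (KeyError, ValueError):
            continue  # skip respondents who left either field blank
        if karma >= 1000:  # the 1000-karma-or-more cutoff used above
            groups.setdefault(row["CryonicsStatus"], []).append(p_cryonics)

for status, probs in groups.items():
    q1, med, q3, mean, sd = summarize(probs)
    print(f"{status}: N={len(probs)}, (Q1,median,Q3)=({q1:.2f},{med:.2f},{q3:.2f}), "
          f"mean={mean:.3f}, sd={sd:.3f}")
```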
Thanks for the calculations... and for causing me to learn about quartiles.
Part of Yvain's argument is that "proto-rationalists" have an average confidence in cryonics of 21%, but "experienced rationalists", only 15%. The latter group is thereby described as "less credulous", because the average confidence is lower, but "better at taking ideas seriously", because more of them are actually signed up for cryonics.
Meanwhile, your analysis (if I am parsing the figures correctly!) suggests that "experienced rationalists" who don't sign up for cryonics have an average confidence in cryonics of 12%, and "experienced rationalists" who do sign up for cryonics, an average confidence of 26%.
This breaks apart the combination of contrary traits that forms the headline of this article. We don’t see a single group of people who are simultaneously more cryo-skeptical than the LW newbies, and yet more willing to sign up for cryonics. Instead, we see two groups: one that is more cryo-skeptical and which doesn’t sign up for cryonics; and another which is less cryo-skeptical, and which does sign up for cryonics.
Yvain, could you give a real-life example analogous to your Goofus & Gallant story?
That is, could you provide an example (or several, even better) of a situation wherein:
Note that cryonics does not fit that bill (it fails point 5), which is why I'm asking for one or more actual examples.
Slightly different but still-important questions -- what about when you remove the requirement that the idea be strange or unconventional? How much of taking ideas seriously here is just about acting strategically, and how much is non-compartmentalization? To what extent can you train the skill of going from thinking "I should do X" to actually doing X?
Other opportunities for victory, not necessarily weird, possibly worth investigating: wearing a bike helmet when biking, using spaced repetition to study, making physical backups of data, staying in touch with friends and family, flossing.
making physical backups of data
Oh boy, is this ever a good example.
I used to work retail, selling and repairing Macs and Mac accessories. When I'd sell someone a computer, I'd tell them — no, beg them — to invest in a backup solution. "I'm not trying to sell you anything!", I'd say. "You don't have to buy your backup device from us — though we'd be glad to sell you one for a decent price — but please, get one somewhere! Set it up — heck, we'll set it up for you — and please... back up! When you come to us after your hard drive has inevitably failed — as all hard drives do eventually, sure as death or taxes — with your life's work on it, you'll be glad you backed up."
And they'd smile, and nod, and come back some time later with a failed hard drive, no backup, and full of outrage that we couldn't magic their data back into existence. And they'd pay absurd amounts of money for data recovery.
Back up your data, people. It's so easy (if you've got a Mac, anyway). The pain of losing months or years of work is really, really, really painful.
This post convinced me to make a physical backup of a bunch of short stories I've been working on. At first I was going to read through the rest of the comment thread and then go do the backup, but further consideration made me realize how silly that was - burning them to a DVD and writing "Short Story Drafts" on it with a Sharpie didn't take more than five minutes and made the odds of me forever losing that part of my personal history tremendously smaller. Go go gadget Taking Ideas Seriously!
I am not a salesman.
I am, however, reasonably competent with technology. Growing up in a congregation of all age groups, this made me one of the go-to people whenever somebody had computer problems. I'm talking middle-aged and above, the kind of people who fall for blatant phishing scams, have 256MB of RAM, and don't know what right-clicking is.
Without fail, these people had been aware that losing all their data would be very painful, and that it could happen to them, and that backing up their data could prevent that. Their reaction was universally "this is embarrassing, I should've taken that more seriously", not "I didn't know a thing like this could happen/that I could have done something simple to prevent it". Procrastination, trivial inconveniences, and not-taking-the-idea-seriously-enough are the culprits in a large majority of cases.
In short, I think it requires some contortion to construe the typical customer as rational here.
I note an amusing and strange contradiction in the sibling comments to this one:
VAuroch says the above is explained by hindsight bias; that the people in question actually didn't know about data loss and prevention thereof (but only later confabulated that they did).
Eugine_Nier says the above is explained by akrasia: the people did know about data loss and prevention, but didn't take action.
These are contradictory explanations.
Both VAuroch and Eugine_Nier seem to suggest, by their tone ("Classic hindsight bias", "That's just akrasia") that their respective explanations are obvious.
What's going on?
The example in the thread is real-life-ish - compare to the story of Voltaire and friends winning the French lottery. But if you want more:
It's easy to think of trivial examples of one-time victories - for example, an early Bitcoin investor realizing that crypto-currency had potential and buying some when it was still worth fractions of a cent. But you can justly accuse me of cherry-picking here and demand repeatable examples.
Nothing guarantees that there will be repeatable examples - it could be that people are bad at taking ideas seriously until the ideas succeed once, at which point they realize they were wrong and jump on the bandwagon.
But in fact I think there are such examples. One such is investing in index funds rather than mutual funds/picking your own stocks. There are strong reasons to believe you'll do better, most people know those reasons but don't credit them, and some people do credit them and end up with more money.
Occasional use of modafinil might fall in this category as well, depending on whether we define people's usual reasons for not taking it as irrational or rational-given-different-utility-functions.
I don't think most of these examples will end up as "such obvious wins no one could possibly disagree with them" - with the possible exception of index funds it's never as purely mathematical as the lottery example - but I think for most people the calculus is clear.
I seriously doubt most people know the reasons they should be investing in index funds. Remember, the average American has an IQ of 100, doesn't have a degree higher than a high school diploma, and rarely reads books. I'm not sure I'd know the reasons for buying index funds if not for spending a fair amount of time reading econ blogs.
Isn't it a little bit self-contradictory, to propose that smart people have beaten the market by investing in Bitcoin, and at the same time, that smart people invest in index funds rather than trying to beat the market? Or in other words, are those who got rich off Bitcoin really different from those who picked some lucky stocks in 1997 and cashed out in time?
That's a good point but I'm going to argue against it anyway.
Unlike a lucky stock, Bitcoin wasn't accounted for by mainstream markets at the time. An index fund amortizes the chances of lucky success and catastrophic failure across all the stocks into a single number, giving roughly the same expected value but with much lower variance. Bitcoin wasn't something that could be indexed at that point, so there was no way you could have hedged your bet in the same way that an index fund would let you hedge.
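To make the variance point concrete, here is a toy simulation; the return distribution is invented purely for illustration and does not model any real market:

```python
import random
import statistics

random.seed(0)

def one_stock_return():
    """A toy yearly return: mostly ordinary moves, with rare blowups and windfalls."""
    r = random.random()
    if r < 0.05:
        return -0.9                      # catastrophic failure
    if r < 0.10:
        return 2.0                       # lucky multi-bagger
    return random.gauss(0.07, 0.20)      # ordinary year

TRIALS = 10_000
single = [one_stock_return() for _ in range(TRIALS)]
indexed = [statistics.mean(one_stock_return() for _ in range(500)) for _ in range(TRIALS)]

print(f"single stock:    mean {statistics.mean(single):.3f}, stdev {statistics.stdev(single):.3f}")
print(f"500-stock index: mean {statistics.mean(indexed):.3f}, stdev {statistics.stdev(indexed):.3f}")
# Roughly the same expected return, but (with these made-up, uncorrelated returns)
# the index's standard deviation shrinks by about a factor of sqrt(500).
```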
It's easy to think of trivial examples of one-time victories - for example, an early Bitcoin investor realizing that crypto-currency had potential and buying some when it was still worth fractions of a cent.
Actually, I've been working on a mini-essay on exactly this topic: because of my PredictionBook use, I have a long paper trail of explicit predictions on Bitcoin which implied major +EV at every time period, but I failed to meaningfully exploit my beliefs and so my gains have been far smaller than they could have been.
UPDATE:
I think index funds are a good example of something that fits my criteria #s 1, 2, and 3. (Thank you to the commenters who've explained to me both why they are a good idea and why many/most people may not understand or believe this.)
Do index funds fit #s 4 and 5? It might be interesting to ask, in the next survey: do you invest? If so, in index funds, or otherwise? If the former, how much money have you made as a result? In the absence of survey data, is there other evidence that rationalists (or "rationalists") invest in index funds more than the general population, and that they win thusly (i.e. make more money)?
I think modafinil is clearly a good example of my #s 2 and 3; I am not so sure about #1. I am still researching the matter. Gwern's article, though very useful, has not convinced me. (Of course, whether it fits #s 4 and 5 also remains to be demonstrated.)
I remain unsure about whether the Bitcoin investment is a good example of anything. Again, if anyone cares to elucidate the matter, I would be grateful.
I'd like to hear this from a financial expert. Do we have any who'd like to speak on this?
I'm one (PhD in economics), and yes, ordinary investors should use low-fee index funds.
An index fund is intended to go up or down by the same amount as the index it tracks. For example, you might hear that the S&P 500 rose a total of 7% last year. If that happened, then your S&P 500 index fund would also go up by 7%.
The main reason people don't invest in index funds is because they want to "beat the market." They see some stocks double or triple within a year and think "oh man, if only I'd bought that stock a bit earlier, I'd be rich!" So some people try to pick individual stocks, but the majority of laypeople want to let "experts" do it for them.
Mutual funds generally have a fund manager and tons of analysts working to figure out how to beat the market (get a return greater than the market itself). They all claim to be able to do this, and some have a track record to point to as proof that they have done it in the past. For example, Fund A may have beaten the market in the previous 3 years, so investors think that by investing in Fund A over an index fund, they will come out ahead.
But unfortunately, markets are anti-inductive so past success of individual stocks, mutual funds, and even index funds is no guarantee of...
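(Tangentially, on the "low fee" point above: a rough sketch of why the fee alone matters. The 7% gross return and the specific expense ratios are illustrative assumptions, not data.)

```python
def final_balance(principal, years, annual_return, annual_fee):
    """Compound growth with a flat yearly fee deducted from the return."""
    balance = principal
    for _ in range(years):
        balance *= 1 + annual_return - annual_fee
    return balance

principal, years, gross_return = 10_000, 30, 0.07   # assumed, for illustration only
index_fund = final_balance(principal, years, gross_return, 0.001)  # ~0.1% expense ratio
active_fund = final_balance(principal, years, gross_return, 0.01)  # ~1% expense ratio

print(f"low-fee index fund after {years} years: ${index_fund:,.0f}")
print(f"1%-fee active fund after {years} years: ${active_fund:,.0f}")
# Before even asking whether the active manager beats the market, the fee gap
# alone compounds into a difference of over 20% of the final balance.
```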
There's lots of modafinil info at gwern's page. Wikipedia is also a pretty good source...
How much data is there behind this conclusion?
Why are you asking, instead of looking?
I'm not Yvain, but his Goofus and Gallant parable did remind me of the time some dude noticed that the uncapped jackpot rollover of the Irish Lotto made it vulnerable to a brute force attack.
I note that the National Lottery responded by attempting (with partial success) to block the guy from his victory, and also making such things unfeasible in the future.
As a general rule, when you game the system, the system changes to stop the game, because the organisers have a goal beyond the rules of the day. So there's only a certain window of opportunity to profit. If there are high stakes, you need to be really sure that there is a gap to work with, in between "no-one has done this before, so maybe it doesn't work for reasons I haven't seen" and "everyone's doing it, so does it still work?"
Your conclusion is possible. But I'll admit I find it hard to believe that non-rationalists really lack the ability to take ideas seriously. The 1 = 2 example is a little silly, but I've known lots of not-very-rational people who take ideas seriously. For example, people who stopped using a microwave when they heard about an experiment supposedly showing that microwaved water kills plants. People who threw out all their plastic dishes after the media picked up a study about health dangers caused by plastics. People who spent a lot of time thinking positive thoughts because they have heard it will make them successful.
Could it be that proto-rationalists are just bad at quantifying their level of belief? Normally, I'd trust somebody's claim to believe something more if they're willing to bet on it; and if they aren't willing to bet on it, then I'd think their real level of belief is lower.
Making things happen with positive thinking requires magic. But myths about the health effects of microwaves or plastic bottles are dressed up to look like science as usual. The microwave thing is supposedly based on the effect of radiation on the DNA in your food or something -- nonsense, but to someone with little science literacy not necessarily distinguishable from talk about the information-theoretic definition of death.
I'm not sure that signing papers to have a team of scientists stand by and freeze your brain when you die is more boring than cooking your food without a microwave oven. I would guess that cryonics being "weird", "gross", and "unnatural" would be more relevant.
We investigate with a cross-sectional study, looking at proto-rationalists versus experienced rationalists. Define proto-rationalists as those respondents to the Less Wrong survey who indicate they have been in the community for less than six months and have zero karma (usually indicative of never having posted a comment). And define experienced rationalists as those respondents to the Less Wrong survey who indicate they have been in the community for over two years and have >1000 karma (usually indicative of having written many well-received posts).
This is an incredibly bad definition of a rationalist. What you are actually studying here is people who fit into the mainstream of LW.
Define proto-rationalists as those respondents to the Less Wrong survey who indicate they have been in the community for less than six months and have zero karma (usually indicative of never having posted a comment). And define experienced rationalists as those respondents to the Less Wrong survey who indicate they have been in the community for over two years and have >1000 karma (usually indicative of having written many well-received posts).
I don't like this appropriation of the term "rational" (even with the "-ist" suffix), and in fact I find it somewhat offensive.
[ Warning: Trolling ahead ]
But since words are arbitrary placeholders, let's play a little game and replace the word "rationalist" with another randomly generated string, such as "cultist" (which you might possibly find offensive, but remember, it's just a placeholder).
So what does your data say?
Proto-cultists give a higher average probability of cryonics success than committed cultists.
But this isn't necessarily particularly informative, because averaging probabilities from different estimators doesn't really tell us much (consider scenario A where half of the res...
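To illustrate the general point that an arithmetic mean of probabilities can hide very different belief profiles, a small sketch with invented numbers:

```python
import statistics

# Two invented groups with the same arithmetic-mean estimate of 21%,
# but very different belief profiles.
group_a = [0.42] * 50 + [0.00] * 50   # half fairly confident, half certain it fails
group_b = [0.21] * 100                # everyone mildly uncertain

print(statistics.mean(group_a), statistics.mean(group_b))  # 0.21 and 0.21
# The two means are identical, so the group average by itself says little
# about how any individual respondent is reasoning.
```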
Is it possible that the difference you're seeing is just lack of knowledge of probabilities? I am a new person, and I don't really understand percentages. My brain just doesn't work that way. I don't know how I would even begin to assign a probability to how likely cryonics is to work.
Well, yes, membership in the LW community makes one more likely to sign up for cryonics, even after correcting for selection - because the LW community promotes cryonics. Yes, it is that simple. It is basic human behaviour and doesn't have much to do with rationality. Remove all the positive portrayal, all the emotions, all the "value life" and "true rationalist" talk, leaving only cold facts and numbers - and a few years later the cryonics subscription rate among new LW members will drop much closer to the average among the "people who know about cryonics" group.
Yes, the fact that the LW community convinces people to sign up for cryonics is not mysterious. The mysterious thing is that the LW community manages, at the same time, to convince people that cryonics is unlikely to work.
Less credulous than whom? All your groups are, on average, far, far more credulous about cryonics than, say, me, or neurobiology experts, or most people I know. More credulous than many cryonics proponents, too.
As for the rather minor differences between the averages within your groups... Said groups joined the site at different times, are of different ages, and discovered this site for different reasons (I gather you get more scifi fans now). You even got a general trend towards increased estimates.
That you go on and ignore all signs of co-foundin...
Did you check if there was a significant age difference between the two groups? I would expect proto-rationalists to be younger, so they would have less money and fewer chances to have signed up for cryonics.
The relevant difference is that Gallant knows how to take ideas seriously.
Flagging that the author no longer endorses this post.
For how long did you deliberate over whether to go with 'Gallant' and 'Goofus', and what did you think about whilst deciding?
It's a classic pair of Lazy Bad Planner and Shining Example of Humanity, which has been used in the children's magazine Highlights to put morals on display for decades.
I might have gone with Simplicio and Salviati, but that would go over many people's heads for no real benefit.
This whole article makes a sleight-of-hand assumption that more rational = more time on LW.
I'm a proto-rationalist by these criteria. I don't see any reason cryonics can't eventually work. I've no interest in it, and I think it is kinda weird.
Some of that weirdness is the typical frozen dead body stuff. But, more than that, I'm weirded out by the immortality-ism that seems to be a big part of (some of) the tenured LW crowd (i.e. rationalists).
I've yet to hear one compelling argument for why hyper-long life = better. The standard answers seems to be "...
Given a choice between remaining alive for as long as novelty and risk and challenges and obstacles to overcome and joy remain present, or dying before that point, would you choose to die before that point?
Proto-rationalists thought that, on average, there was a 21% chance of an average cryonically frozen person being revived in the future. Experienced rationalists thought that, on average, there was a 15% chance of same. The difference was marginally significant (p < 0.1).
Both of these numbers are higher than I would have expected, and I'd at least weakly say they at least weakly support the claim "rationalists are gullible, but experienced rationalists are less gullible than proto-rationalists".
Out of curiosity, I took an average in decibel...
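For readers unfamiliar with the convention, here is a minimal sketch of what averaging in decibels (log-odds) looks like; the estimates in it are invented for illustration, not taken from the survey:

```python
import math

def to_decibels(p):
    """Log-odds in decibels: 10 * log10(p / (1 - p))."""
    return 10 * math.log10(p / (1 - p))

def from_decibels(db):
    odds = 10 ** (db / 10)
    return odds / (1 + odds)

# Invented estimates, purely to show how a decibel average differs from an
# arithmetic one: extreme confidence pulls the log-odds mean much harder.
estimates = [0.01, 0.05, 0.15, 0.50]
arithmetic_mean = sum(estimates) / len(estimates)
decibel_mean = sum(to_decibels(p) for p in estimates) / len(estimates)

print(f"arithmetic mean of probabilities: {arithmetic_mean:.3f}")                    # about 0.18
print(f"mean taken in decibels, converted back: {from_decibels(decibel_mean):.3f}")  # about 0.09
```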
You say there were 93 proto-rationalists; I'm curious to know how many experienced rationalists there were.
Consider the following commonly-made argument: cryonics is unlikely to work. Trained rationalists are signed up for cryonics at rates much greater than the general population. Therefore, rationalists must be pretty gullible people, and their claims to be good at evaluating evidence must be exaggerations at best.
This argument is wrong, and we can prove it using data from the last two Less Wrong surveys.
The question at hand is whether rationalist training - represented here by extensive familiarity with Less Wrong material - makes people more likely to believe in cryonics.
We investigate with a cross-sectional study, looking at proto-rationalists versus experienced rationalists. Define proto-rationalists as those respondents to the Less Wrong survey who indicate they have been in the community for less than six months and have zero karma (usually indicative of never having posted a comment). And define experienced rationalists as those respondents to the Less Wrong survey who indicate they have been in the community for over two years and have >1000 karma (usually indicative of having written many well-received posts).
By these definitions, there are 93 proto-rationalists, who have been in the community an average of 1.3 months, and 134 experienced rationalists, who have been in the community an average of 4.5 years. Proto-rationalists generally have not read any rationality training material - only 20/93 had read even one-quarter of the Less Wrong Sequences. Experienced rationalists are, well, more experienced: two-thirds of them have read pretty much all the Sequence material.
Proto-rationalists thought that, on average, there was a 21% chance of an average cryonically frozen person being revived in the future. Experienced rationalists thought that, on average, there was a 15% chance of same. The difference was marginally significant (p < 0.1).
Marginal significance is a copout, but this isn't our only data source. Last year, using the same definitions, proto-rationalists assigned a 15% probability to cryonics working, and experienced rationalists assigned a 12% chance. We see the same pattern.
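(For concreteness, here is a minimal sketch of how a difference like this could be checked, assuming access to the raw per-respondent estimates. The scipy calls are real, but the arrays below are invented placeholders rather than the actual survey responses.)

```python
from scipy import stats

# Invented stand-in data, purely to show the call pattern; the real inputs
# would be each respondent's cryonics probability estimate from the survey.
proto_estimates = [5, 10, 15, 20, 25, 30, 40, 23]
experienced_estimates = [2, 5, 8, 10, 12, 15, 20, 18]

# Welch's t-test, which does not assume equal variances between the groups.
t_stat, t_p = stats.ttest_ind(proto_estimates, experienced_estimates, equal_var=False)
print(f"Welch t = {t_stat:.2f}, p = {t_p:.3f}")

# Probability estimates tend to be heavily skewed, so a rank-based test
# (Mann-Whitney U) is a sensible robustness check.
u_stat, u_p = stats.mannwhitneyu(proto_estimates, experienced_estimates, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {u_p:.3f}")
```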
So experienced rationalists are consistently less likely to believe in cryonics than proto-rationalists, and rationalist training probably makes you less likely to believe cryonics will work.
On the other hand, 0% of proto-rationalists had signed up for cryonics compared to 13% of experienced rationalists. 48% of proto-rationalists rejected the idea of signing up for cryonics entirely, compared to only 25% of experienced rationalists. So although rationalists are less likely to believe cryonics will work, they are much more likely to sign up for it. Last year's survey shows the same pattern.
This is not necessarily surprising. It only indicates that experienced rationalists and proto-rationalists treat their beliefs in different ways. Proto-rationalists form a belief, play with it in their heads, and then do whatever they were going to do anyway - usually some variant on what everyone else does. Experienced rationalists form a belief, examine the consequences, and then act strategically to get what they want.
Imagine a lottery run by an incompetent official who accidentally sets it up so that the average payoff is far more than the average ticket price. For example, maybe the lottery sells only ten $1 tickets, but the jackpot is $1 million, so that each $1 ticket gives you a 10% chance of winning $1 million.
Goofus hears about the lottery and realizes that his expected gain from playing the lottery is $99,999. "Huh," he says, "the numbers say I could actually win money by playing this lottery. What an interesting mathematical curiosity!" Then he goes off and does something else, since everyone knows playing the lottery is what stupid people do.
Gallant hears about the lottery, performs the same calculation, and buys up all ten tickets.
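(The arithmetic both of them run, as a quick sketch:)

```python
tickets, ticket_price, jackpot = 10, 1.0, 1_000_000.0
p_win_per_ticket = 1 / tickets

# Expected gain from buying a single ticket: 0.1 * $1,000,000 - $1.
ev_one_ticket = p_win_per_ticket * jackpot - ticket_price
print(ev_one_ticket)        # 99999.0

# Buying every ticket removes the uncertainty entirely: a guaranteed jackpot for $10.
profit_all_tickets = jackpot - tickets * ticket_price
print(profit_all_tickets)   # 999990.0
```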
The relevant difference between Goofus and Gallant is not skill at estimating the chances of winning the lottery. We can even change the problem so that Gallant is more aware of the unlikelihood of winning than Goofus - perhaps Goofus mistakenly believes there are only five tickets, and so Gallant's superior knowledge tells him that winning the lottery is even more unlikely than Goofus thinks. Gallant will still play, and Goofus will still pass.
The relevant difference is that Gallant knows how to take ideas seriously.
Taking ideas seriously isn't always smart. If you're the sort of person who falls for proofs that 1 = 2, then refusing to take ideas seriously is a good way to avoid ending up actually believing that 1 = 2, and a generally excellent life choice.
On the other hand, progress depends on someone somewhere taking a new idea seriously, so it's nice to have people who can do that too. Helping people learn this skill and when to apply it is one goal of the rationalist movement.
In this case it seems to have been successful. Proto-rationalists think there is a 21% chance of a new technology making them immortal - surely an outcome as desirable as any lottery jackpot - consider it an interesting curiosity, and go do something else because only weirdos sign up for cryonics.
Experienced rationalists think there is a lower chance of cryonics working, but some of them decide that even a pretty low chance of immortality sounds pretty good, and act strategically on this belief.
This is not to either attack or defend the policy of assigning a non-negligible probability to cryonics working. This is meant to show only that the difference in cryonics status between proto-rationalists and experienced rationalists is based on meta-level cognitive skills in the latter whose desirability is orthogonal to the object-level question about cryonics.
(an earlier version of this article was posted on my blog last year; I have moved it here now that I have replicated the results with a second survey)