
Rationalists Are Less Credulous But Better At Taking Ideas Seriously

43 points | Post author: Yvain | 21 January 2014 02:18AM

Consider the following commonly-made argument: cryonics is unlikely to work. Trained rationalists are signed up for cryonics at rates much greater than the general population. Therefore, rationalists must be pretty gullible people, and their claims to be good at evaluating evidence must be exaggerations at best.

This argument is wrong, and we can prove it using data from the last two Less Wrong surveys.

The question at hand is whether rationalist training - represented here by extensive familiarity with Less Wrong material - makes people more likely to believe in cryonics.

We investigate with a cross-sectional study, looking at proto-rationalists versus experienced rationalists. Define proto-rationalists as those respondents to the Less Wrong survey who indicate they have been in the community for less than six months and have zero karma (usually indicative of never having posted a comment). And define experienced rationalists as those respondents to the Less Wrong survey who indicate they have been in the community for over two years and have >1000 karma (usually indicative of having written many well-received posts).

By these definitions, there are 93 proto-rationalists, who have been in the community an average of 1.3 months, and 134 experienced rationalists, who have been in the community an average of 4.5 years. Proto-rationalists generally have not read any rationality training material - only 20/93 had read even one-quarter of the Less Wrong Sequences. Experienced rationalists are, well, more experienced: two-thirds of them have read pretty much all the Sequence material.

Proto-rationalists thought that, on average, there was a 21% chance of an average cryonically frozen person being revived in the future. Experienced rationalists thought that, on average, there was a 15% chance of same. The difference was marginally significant (p < 0.1).

Marginal significance is a copout, but this isn't our only data source. Last year, using the same definitions, proto-rationalists assigned a 15% probability to cryonics working, and experienced rationalists assigned a 12% chance. We see the same pattern.

So experienced rationalists are consistently less likely to believe in cryonics than proto-rationalists, and rationalist training probably makes you less likely to believe cryonics will work.

On the other hand, 0% of proto-rationalists had signed up for cryonics compared to 13% of experienced rationalists. 48% of proto-rationalists rejected the idea of signing up for cryonics entirely, compared to only 25% of experienced rationalists. So although rationalists are less likely to believe cryonics will work, they are much more likely to sign up for it. Last year's survey shows the same pattern.
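(For readers who want to check how strong that signup difference is: below is a minimal sketch, not Yvain's actual analysis, with the counts reconstructed from the rounded percentages above - so they are approximations, not the real survey tallies. Testing the p < 0.1 difference in mean probability estimates would additionally require the raw per-respondent numbers.)

    # Hedged sketch: a Fisher exact test on the cryonics signup counts,
    # reconstructed from the rounded figures in the post.
    from scipy.stats import fisher_exact

    table = [[0, 93],     # proto-rationalists: 0% of 93 signed up
             [17, 117]]   # experienced: ~13% of 134 is about 17 signed up
    odds_ratio, p_value = fisher_exact(table)
    print(f"Fisher exact p = {p_value:.5f}")  # far below 0.05 for these counts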

This is not necessarily surprising. It only indicates that experienced rationalists and proto-rationalists treat their beliefs in different ways. Proto-rationalists form a belief, play with it in their heads, and then do whatever they were going to do anyway - usually some variant on what everyone else does. Experienced rationalists form a belief, examine the consequences, and then act strategically to get what they want.

Imagine a lottery run by an incompetent official who accidentally sets it up so that the average payoff is far more than the average ticket price. For example, maybe the lottery sells only ten $1 tickets, but the jackpot is $1 million, so that each $1 ticket gives you a 10% chance of winning $1 million.

Goofus hears about the lottery and realizes that his expected gain from playing the lottery is $99,999. "Huh," he says, "the numbers say I could actually win money by playing this lottery. What an interesting mathematical curiosity!" Then he goes off and does something else, since everyone knows playing the lottery is what stupid people do.

Gallant hears about the lottery, performs the same calculation, and buys up all ten tickets.

The relevant difference between Goofus and Gallant is not skill at estimating the chances of winning the lottery. We can even change the problem so that Gallant is more aware of the unlikelihood of winning than Goofus - perhaps Goofus mistakenly believes there are only five tickets, and so Gallant's superior knowledge tells him that winning the lottery is even more unlikely than Goofus thinks. Gallant will still play, and Goofus will still pass.
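(To make the story's arithmetic explicit, here is the expected-value calculation under each character's belief - a toy computation using only the numbers above.)

    # Expected net gain from one $1 ticket, under each belief about how many
    # tickets exist (exactly one of which wins the $1 million jackpot).
    jackpot, ticket_price = 1_000_000, 1

    for who, tickets in [("Goofus (believes 5 tickets)", 5),
                         ("Gallant (knows there are 10)", 10)]:
        p_win = 1 / tickets
        ev = p_win * jackpot - ticket_price
        print(f"{who}: P(win) = {p_win:.0%}, expected net gain = ${ev:,.0f}")

    # Both numbers ($199,999 and $99,999) are enormously positive, so
    # Gallant's more pessimistic estimate still says: buy.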

The relevant difference is that Gallant knows how to take ideas seriously.

Taking ideas seriously isn't always smart. If you're the sort of person who falls for proofs that 1 = 2, then refusing to take ideas seriously is a good way to avoid ending up actually believing that 1 = 2, and a generally excellent life choice.

On the other hand, progress depends on someone somewhere taking a new idea seriously, so it's nice to have people who can do that too. Helping people learn this skill and when to apply it is one goal of the rationalist movement.

In this case it seems to have been successful. Proto-rationalists think there is a 21% chance of a new technology making them immortal - surely an outcome as desirable as any lottery jackpot - consider it an interesting curiosity, and go do something else because only weirdos sign up for cryonics.

Experienced rationalists think there is a lower chance of cryonics working, but some of them decide that even a pretty low chance of immortality sounds pretty good, and act strategically on this belief.

This is not to either attack or defend the policy of assigning a non-negligible probability to cryonics working. It is meant only to show that the difference in cryonics status between proto-rationalists and experienced rationalists comes down to a meta-level cognitive skill in the latter group, one whose desirability is orthogonal to the object-level question about cryonics.

(an earlier version of this article was posted on my blog last year; I have moved it here now that I have replicated the results with a second survey)

Comments (285)

Comment author: Kaj_Sotala 21 January 2014 05:33:21AM 24 points [-]

and rationalist training probably makes you less likely to believe cryonics will work.

I like this post, but this conclusion seems too strong. There could e.g. be a selection effect, in that people with certain personality traits were less likely to believe in cryonics, more likely to take ideas seriously, and more likely to stick around on LW instead of forgetting the site after the first few months. In that case, "rationalist training" wouldn't be the cause anymore.

Comment author: private_messaging 22 January 2014 10:03:31PM *  2 points [-]

Proto-rationalists thought that, on average, there was a 21% chance of an average cryonically frozen person being revived in the future.

....

Last year, using the same definitions, proto-rationalists assigned a 15% probability to cryonics working

...

I think we have a general trend of decreasing skepticism among newcomers.

As for signing up for cryonics, LW used to advocate signing up for cryonics far more loudly back in the day. edit: also, did those 13% people sign up for cryonics after their "rationalist training", or before?

Comment author: ESRogs 22 January 2014 12:50:00AM 1 point [-]

Yes, and in particular, I would expect age to possibly be a common cause behind being on LessWrong longer, and having signed up for cryonics after being convinced of its plausibility.

Comment author: FourFire 23 January 2014 02:52:15PM 0 points [-]

Age, and economic status, at least in my case, and I am one of the survey takers.

Comment author: Mitchell_Porter 21 January 2014 03:30:04AM 23 points [-]

If we distinguish between

"experienced rationalists" who are signed up for cryonics

and

"experienced rationalists" who are not signed up for cryonics

... what is the average value of P(Cryonics) for each of these subpopulations?

Comment author: notsonewuser 21 January 2014 03:25:15PM *  19 points [-]

Going by only the data Yvain made public, and defining "experienced rationalists" as those people who have 1000 karma or more (this might differ slightly from Yvain's sample, but it looked as if most who had that much karma had been in the community for at least 2 years), and looking only at those experienced rationalists who recorded both a cryonics probability and their cryonics status, we get the following data. Note that all figures are given as percentages: 50 means 50% confidence (1 in 2), while 0.5 means 0.5% confidence (1 in 200).

For those who said "No - and do not want to sign up for cryonics", the cryonics success probability estimates (conditional on no global catastrophe) come out to (Q1, median, Q3) = (0.03, 1, 1), with mean 0.849 and standard deviation 0.728. This group was size N = 32.

For those who said "No - still considering it", we have (5,5,10), with mean 7.023 and standard deviation 2.633. This group was size N = 44.

For those who wanted to but for some reason hadn't signed up yet (either not available in the area (maybe worth moving for?) or otherwise procrastinating), we have (15,25,37), with mean 32.069 and standard deviation 23.471. This group was size N = 29.

Finally, for the people who have signed up, we have (7,21.5,33), with mean 26.556 and standard deviation 22.389. This group was size N = 18.

If we put all of the "no" people together (those procrastinating, those still thinking, and those who just don't want to), we get (2,5,15), with mean 12.059 and standard deviation 17.741. This group is size N = 105.

I'll leave the interpretation of this data to Mitchell_Porter, since he's the one who made the original comment. I presume he had some point to make.

(I used Excel's population standard deviation computation to get the standard deviations. Sorry if I should have used a different computation. The sample standard deviation yielded very similar numbers.)
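(For anyone who wants to reproduce these summaries from the public survey CSV, something along the following lines should work. The file name and column names here are hypothetical - substitute whatever the actual CSV uses for karma, cryonics status, and the cryonics probability.)

    import pandas as pd

    df = pd.read_csv("lw_survey.csv")  # hypothetical file/column names below
    exp = df[df["Karma"] >= 1000].dropna(subset=["CryonicsStatus", "PCryonics"])

    summary = exp.groupby("CryonicsStatus")["PCryonics"].agg(
        Q1=lambda s: s.quantile(0.25),
        median="median",
        Q3=lambda s: s.quantile(0.75),
        mean="mean",
        sd=lambda s: s.std(ddof=0),  # population SD, matching the Excel figures
        N="count",
    )
    print(summary)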

Comment author: Mitchell_Porter 22 January 2014 09:23:23AM 16 points [-]

Thanks for the calculations... and for causing me to learn about quartiles.

Part of Yvain's argument is that "proto-rationalists" have an average confidence in cryonics of 21%, but "experienced rationalists", only 15%. The latter group is thereby described as "less credulous", because the average confidence is lower, but "better at taking ideas seriously", because more of them are actually signed up for cryonics.

Meanwhile, your analysis – if I am parsing the figures correctly! – suggests that "experienced rationalists" who don't sign up for cryonics have an average confidence in cryonics of 12%, and "experienced rationalists" who do sign up for cryonics, an average confidence of 26%.

This breaks apart the combination of contrary traits that forms the headline of this article. We don’t see a single group of people who are simultaneously more cryo-skeptical than the LW newbies, and yet more willing to sign up for cryonics. Instead, we see two groups: one that is more cryo-skeptical and which doesn’t sign up for cryonics; and another which is less cryo-skeptical, and which does sign up for cryonics.

Comment author: Vaniver 22 January 2014 07:01:54PM 4 points [-]

This breaks apart the combination of contrary traits that forms the headline of this article. We don’t see a single group of people who are simultaneously more cryo-skeptical than the LW newbies, and yet more willing to sign up for cryonics. Instead, we see two groups: one that is more cryo-skeptical and which doesn’t sign up for cryonics; and another which is less cryo-skeptical, and which does sign up for cryonics.

It seems like you should do the same quartile breakdown for the newbies, because I read Yvain's core point as being about the existence of high-probability newbies who aren't signed up - that is, a failure to act on their beliefs.

I haven't separated out the newbie cryocrastinators from the newbie considerers, though, and it seems that among the experienced the cryocrastinators give higher numbers than those who have signed up, which also seems relevant to a comparison.

Comment author: private_messaging 22 January 2014 10:36:25PM *  3 points [-]

Maybe procrastinators are trying to over-estimate it to get themselves to do it...

The probabilities are nuts though. For the whole thing to be of use,

1: you must die in the right way to get frozen soon enough and well enough. (Rather unlikely for a young person, by the way).

2: cryonics must preserve enough data.

3: no event that causes you to lose cooling

4: the revival technology must arise and become cheap enough (before you are unfrozen)

5: someone should dispose of the frozen head by revival rather than by garbage disposal or something even nastier (e.g. someone using frozen heads as expired-copyright data).

Note that it's the whole combined probability that matters for the decision to sign up. edit: and not just that, but compared to the alternatives - i.e. you can improve your chances by trying harder not to die, and you can use money/time for that instead of cryonics.

edit2: also, just 3 independent-ish components (freezing works, company doesn't bust, revival available) at a maximum-ignorance 50% each get you down to 0.5^3 = 12.5%
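(The arithmetic in that edit is just multiplying the component probabilities; here is a toy version, with placeholder numbers rather than anyone's actual estimates.)

    from math import prod

    # Three "independent-ish" components at maximum-ignorance 50% each:
    print(prod([0.5, 0.5, 0.5]))  # 0.125, i.e. 12.5%

    # The same logic over all five steps listed above (the probabilities
    # are placeholders for illustration only):
    steps = {
        "die in a way that allows prompt, good freezing": 0.5,
        "freezing preserves enough data": 0.5,
        "no loss-of-cooling event": 0.8,
        "revival tech arrives and becomes cheap enough": 0.5,
        "custodians choose revival over disposal": 0.7,
    }
    print(f"combined: {prod(steps.values()):.1%}")  # conjunctions shrink fast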

Comment author: fortyeridania 25 January 2014 06:50:14AM 1 point [-]

You might be interested in reading some other breakdowns of the conditions required for cryonics to work (and their estimates of the relevant probabilities):

Break Cryonics Down (March 2009, at Overcoming Bias)

How Likely Is Cryonics To Work? (September 2011, here at LW)

More Cryonics Probability Estimates (December 2012, also at LW)

Comment author: V_V 28 January 2014 01:01:08AM 1 point [-]

That's what I thought.

Comment author: SaidAchmiz 21 January 2014 03:14:30AM *  20 points [-]

Yvain, could you give a real-life example analogous to your Goofus & Gallant story?

That is, could you provide an example (or several, even better) of a situation wherein:

  1. There is some opportunity for clear, unambiguous victory;
  2. Taking advantage of it depends primarily on taking a strange/unconventional/etc. idea seriously (as distinct from e.g. not having the necessary resources/connections, being risk-averse, having a different utility function, etc.);
  3. Most people / normal people / non-rationalists do not take the idea seriously, and as a consequence have not taken advantage of said opportunity;
  4. Some people / smart people / rationalists take the idea seriously, and have gone for the opportunity;
  5. And, most importantly, doing so has (not "will"! already has!) caused them to win, in a clear, unambiguous, significant way.

Note that cryonics does not fit that bill (it fails point 5), which is why I'm asking for one or more actual examples.

Comment author: whales 21 January 2014 04:17:58AM *  16 points [-]

Slightly different but still-important questions -- what about when you remove the requirement that the idea be strange or unconventional? How much of taking ideas seriously here is just about acting strategically, and how much is non-compartmentalization? To what extent can you train the skill of going from thinking "I should do X" to actually doing X?

Other opportunities for victory, not necessarily weird, possibly worth investigating: wearing a bike helmet when biking, using spaced repetition to study, making physical backups of data, staying in touch with friends and family, flossing.

Comment author: SaidAchmiz 21 January 2014 05:19:13AM 27 points [-]

making physical backups of data

Oh boy, is this ever a good example.

I used to work retail, selling and repairing Macs and Mac accessories. When I'd sell someone a computer, I'd tell them — no, beg them — to invest in a backup solution. "I'm not trying to sell you anything!", I'd say. "You don't have to buy your backup device from us — though we'd be glad to sell you one for a decent price — but please, get one somewhere! Set it up — heck, we'll set it up for you — and please... back up! When you come to us after your hard drive has inevitably failed — as all hard drives do eventually, sure as death or taxes — with your life's work on it, you'll be glad you backed up."

And they'd smile, and nod, and come back some time later with a failed hard drive, no backup, and full of outrage that we couldn't magic their data back into existence. And they'd pay absurd amounts of money for data recovery.

Back up your data, people. It's so easy (if you've got a Mac, anyway). The pain of losing months or years of work is really, really, really painful.

Comment author: poiuyt 22 January 2014 08:09:25AM 12 points [-]

This post convinced me to make a physical backup of a bunch of short stories I've been working on. At first I was going to read through the rest of the comment thread and then do the backup, but further consideration made me realize how silly that was - burning them to a DVD and writing "Short Story Drafts" on it with a sharpie didn't take more than five minutes and made the odds of me forever losing that part of my personal history tremendously smaller. Go go gadget Taking Ideas Seriously!

Comment author: byrnema 22 January 2014 04:02:06AM *  1 point [-]

(This is a stream of consciousness where I explore why I haven't backed up my data. This proceeds in stages, with evolution to the next stage only because the writing of this comment forced me to keep going. Thus, it's a data point in response to this comment.)

Back up your data, people. It's so easy

Interesting. I have a very dense 'ugh field' around backing up my data, come to think of it. Based on this population of one, it has nothing to do with not trusting the salesperson, or not being aware that my hard drive is going to fail.

... in fact, I know my hard drive is about to fail (upon reboot I get those dooming system error messages that cycle, etc.), and it has occurred to me several times that I might want to back up my data. Yes, there's some important stuff I need to back up.

Maybe the hurdle is that most stuff on my computer is useless, and I don't want to prioritize the material. I just want it all there if I need it, so I wish my computer wouldn't break.

Since I know my computer is likely to break, or in case the power goes out or I accidentally close without saving, I save files very frequently while working, and I make hard copies if losing a particular document would cause any pain within, say, 72 hours. The pain of any loss more than a few days out is discounted. (Is that hyperbolic discounting? Or just akrasia, as another commenter suggested?)

But I do know I won't spend 20 minutes tomorrow investigating how to back up my hard drive. I know someone will say it is "easy", but there will instead be some obstacle that will mean my data won't actually get backed up and I'll have wasted my twenty minutes. Right?

... OK, fine. (sigh) Let's suppose my budget is $20 and 20 minutes. What should I do?

(reading online)

...OK, I buy a hard drive, connect it via USB, and drag and drop the files I want to save once the computer recognizes the device. Although I still need to determine which folders are worth saving, and this is a continuous, ongoing chore, there are some folders I know I need to save right away. I should go ahead and store those.

(I'll report back tomorrow whether this back-up actually happened.)

Comment author: RichardKennaway 22 January 2014 09:03:30AM 2 points [-]

Let's suppose my budget is $20 and 20 minutes. What should I do?

Others have mentioned Dropbox, but it's so wonderful I'll mention it again. Dropbox. It's almost as awesome in its just-works-ness as Time Machine (Apple's awesome backup solution). Free up to 2GB, $10/month gets you 100GB. Runs on everything.

Note that Dropbox isn't designed as a backup solution, it's really for sharing files across multiple devices. It only preserves the current version of a file, so offers no protection against deleting a file you didn't mean to. As soon as you edit a file, the changes are uploaded to the Dropbox cloud.

A point to remember is that every backup solution protects against some threats but not others, and you have to decide what you need to defend against. I have a Time Capsule (external drive for Time Machine backup), but it's in the same room as the computer, so it provides excellent protection against disc failure or accidental deletion, but none against theft. So I also have an external drive that I plug in once a week and the rest of the time leave hidden elsewhere. If the files on your computer are your livelihood, you need an off-site backup to survive risks such as your house burning down, or serious burglars doing a complete house clearance.

Although I still need to determine which folders are worth saving, and this is a continuous, ongoing chore

A backup solution that presents a continuous, ongoing chore is not going to work. It has to be something that once you set it up, Just Works. I don't know if there's anything as awesome as Time Machine in this respect for Windows. Ideally a solution should automatically backup everything, except possibly some things you specifically exclude. If you only back up things you specifically decide to, you will inevitably leave things out, that you'll only discover when you need the backup you don't have.

Comment author: Vaniver 22 January 2014 06:52:40PM 3 points [-]

It only preserves the current version of a file, so offers no protection against deleting a file you didn't mean to. As soon as you edit a file, the changes are uploaded to the Dropbox cloud.

Dropbox actually does version control, which has saved several files I've accidentally deleted or overwritten. It's only up to 30 days, though.

Comment author: SaidAchmiz 22 January 2014 04:10:05AM *  0 points [-]

I take it you've got a Windows or Linux machine? Because if you have a Mac, there's a much easier solution. Edit: I mean easier than a continuous, ongoing chore of deciding what files to save, drag-and-dropping stuff, etc. You do still need to buy a device, though. For a $20 budget I recommend this 32 GB USB flash drive.

Comment author: byrnema 22 January 2014 04:17:40AM 0 points [-]

I have a Windows machine, but I know there are automatic back-up schedules that can be done. I just don't want to do it... I don't want to think about a complex automatic process or make decisions about scheduling. Trying to pinpoint why ... it feels messy and discontinuous and inconvenient, to keep saving iterations of all my old junk.

Comment author: RichardKennaway 22 January 2014 01:54:44PM 4 points [-]

it feels messy and discontinuous and inconvenient, to keep saving iterations of all my old junk.

When dealing with old data, what I find most stressful is deciding which things to keep. So as far as possible I don't. It's a wasted effort. I keep everything, or I delete everything. It doesn't matter that there's gigabytes of stuff on my machine that I'll never look at, as long as I never have to see it or think about it. Disc space is measured in terabytes these days.

Comment author: SaidAchmiz 22 January 2014 03:21:06PM 3 points [-]

When dealing with old data, what I find most stressful is deciding which things to keep.

In case this wasn't clear, for the benefit of any Mac users reading this:

Time Machine makes all these decisions for you. That's one of the things that makes it awesome.

Comment author: Lumifer 22 January 2014 05:27:09PM 2 points [-]

Disc space is measured in terabytes these days.

This.

Typically when I change machines, the data from the old one goes into the /old folder on the new one. You get a nesting hierarchy and down at the bottom there are some files from many years ago that I would need to get a simulator to even read :-/

Comment author: byrnema 22 January 2014 10:06:22PM 1 point [-]

So that's what I am going to do. I actually ordered an external hard drive, and every few weeks I'll back up my hard drive. The whole thing (no decisions).

I also understand that I don't need to worry about versions -- the external hard drive just saves the latest version.

I also talked to a friend today and found out they back up their data regularly. I was surprised; I didn't know regular people did this.

Comment author: Lumifer 22 January 2014 05:25:08PM 2 points [-]

keep saving iterations of all my old junk.

Backups aren't about saving your old junk. Backups are about saving everything that you have on your hard drive in case it goes to the Great Write-Only Memory In The Sky.

If you're talking about staggered backups or snapshots, their usefulness lies mostly in being a (very primitive) versioning system, as well as a possible lifeline in case your data gets silently corrupted and you don't notice fast enough.

Comment author: SaidAchmiz 22 January 2014 06:25:29AM 1 point [-]

Well, the way it works on the Mac — and I'm only describing this because I speculate that similar, if not quite as awesome, solutions exist for Windows — is this:

  1. Scheduling: backups happen every hour if the backup drive is plugged in; or, whenever you plug it in; plus, you can trigger them manually. You pretty much don't have to think about it; just either keep the thing plugged in (easy with a desktop), or plug it in once in a while.

  2. Multiple iterations of your stuff: there's a "history" of backups, maintained automatically. You can go back to any backed-up prior version (to a certain point; how long a history you can keep is dictated by available storage space). The interface for restoring things hides the messy complexity of the multiple versions from you, and just lets you go back to the latest version, or any previous available version, sorted by time.

With good backup software, it's really quite smooth and easy. The process is not complex; decisions to be made are minimal; your backup feels nice and non-messy; restoring is easy as pie.

Unfortunately I can't recommend good Windows backup software, but maybe someone else can chime in.

Comment author: Jiro 21 January 2014 06:46:33PM 0 points [-]

If the person doesn't know anything about computers or backups, he can't distinguish "I'm not trying to sell you something" from "I am trying to sell you something and I'm lying about it", and he'd have to do a Bayesian update based on the chance that you're trying to sell him something. Furthermore, he knows that if you are trying to sell him something, that fact would make anything you say untrustworthy (and lying about your intent to sell him something would increase the probability of untrustworthiness even more).

So the customer is being rational by not listening to you.

Comment author: Wes_W 21 January 2014 09:25:53PM 14 points [-]

I am not a salesman.

I am, however, reasonably competent with technology. Growing up in a congregation of all age groups, this made me one of the go-to people whenever somebody had computer problems. I'm talking middle-aged and above, the kind of people who fall for blatant phishing scams, have 256mb of RAM, and don't know what right-clicking is.

Without fail, these people had been aware that losing all their data would be very painful, and that it could happen to them, and that backing up their data could prevent that. Their reaction was universally "this is embarrassing, I should've taken that more seriously", not "I didn't know a thing like this could happen/that I could have done something simple to prevent it". Procrastination, trivial inconveniences, and not-taking-the-idea-seriously-enough are the culprits in a large majority of cases.

In short, I think it requires some contortion to construe the typical customer as rational here.

Comment author: SaidAchmiz 22 January 2014 01:22:16AM *  7 points [-]

I note an amusing and strange contradiction in the sibling comments to this one:

VAuroch says the above is explained by hindsight bias; that the people in question actually didn't know about data loss and prevention thereof (but only later confabulated that they did).

Eugine_Nier says the above is explained by akrasia: the people did know about data loss and prevention, but didn't take action.

These are contradictory explanations.

Both VAuroch and Eugine_Nier seem to suggest, by their tone ("Classic hindsight bias", "That's just akrasia") that their respective explanations are obvious.

What's going on?

Comment author: Eugine_Nier 23 January 2014 02:26:58AM 0 points [-]

Well, it depends on what precisely we mean by them "knowing" about data loss.

Comment author: CCC 06 February 2014 04:29:45AM 0 points [-]

Limits of language, I think. Both explanations are possible, given what the parent post said; both VAuroch and Eugine_Nier may have had experience with similar cases caused, respectively, by hindsight bias and akrasia, which makes their explanation appear obvious to them.

A lot of the time, I've noticed that "it's obvious" means "I have seen this pattern before (sometimes multiple times), and this extra element is part of the same pattern every time that I have seen it".

Comment author: Eugine_Nier 21 January 2014 11:09:03PM 0 points [-]

That's just akrasia.

Comment author: SaidAchmiz 21 January 2014 07:10:51PM 5 points [-]

Well, look, of course I'd prefer to sell the customer something. If, knowing this, you take everything out of my mouth to be a lie, then you are not, in fact, being rational. The fact that I would specifically say "buy it elsewhere if you like!", and offer to set the backup system up for free, ought to tell you something.

The other part of this is that the place where I worked was a small, privately owned shop, many of whose customers were local, and which made a large chunk (perhaps the majority) of its revenue from service. (Profit margins on Apple machines are very slim.) It was to our great advantage not to lie to people in the interest of selling them one more widget. Doing so would have been massively self-defeating. As a consequence of all of this, our regular customers generally trusted us, and were quite right to do so.

Finally, even if the customer decided that the chance was too great that I was trying to sell them something, and opted not to buy anything on the spot, it is still ridiculously foolish not to follow up on the salesperson's suggestion that you do something to protect yourself from losing months or years of work. If that is even a slight possibility, you ought to investigate, get second and third opinions, get your backup solution as cheaply as you like, and then take me up on my offer to install it for free (or have a friend install it). To not back up at all, because clearly the salesperson is lying and the truth must surely be the diametrical opposite of what they said, is a ludicrously bad plan.

Comment author: Desrtopa 21 January 2014 09:15:33PM *  4 points [-]

Well, look, of course I'd prefer to sell the customer something. If, knowing this, you take everything out of my mouth to be a lie, then you are not, in fact, being rational. The fact that I would specifically say "buy it elsewhere if you like!", and offer to set the backup system up for free, ought to tell you something.

It tells customers something, but considering that these are plausible marketing techniques, it's not very strong evidence.

If you tell the customers that something is really important, that they should buy it, even if from somewhere else, this signals trustworthiness and consideration, but it's a cheap signal considering that if they decide, right in your store, to buy a product which your store offers, they probably will buy it from you unless they're being willfully perverse. Most of the work necessary to get them to buy the product from you is done in convincing them to buy it at all, and nearly all the rest is done by having them in your store when you do it.

Offering to provide services for free is also not very strong evidence, because in marketing, "free" is usually free*, a foot-in-the-door technique used to extract money from customers via some less obvious avenue. Indeed, the customers might very plausibly reason that if the service was so important that they would be foolish to do without it, you wouldn't be offering it for free.

Comment author: SaidAchmiz 21 January 2014 09:27:34PM 1 point [-]

Indeed, the customers might very plausibly reason that if the service was so important that they would be foolish to do without it, you wouldn't be offering it for free.

Given that setting up backups on a Mac is so easy that, as I suggested in my quoted spiel, the customer could even do it themselves, this is not a very well-supported conclusion.

foot-in-the-door technique used to extract money from customers via some less obvious avenue.

Well, duh. You "extract" money from customers by the fact of them liking you, trusting you, and getting all their service done at your shop, and buying future things they need from you, also.

if they decide, right in your store, to buy a product which your store offers, they probably will buy it from you unless they're being willfully perverse.

I think you underestimate how doggedly many people hunt for deals. I don't even blame them; being a retail shop, my place of work sometimes couldn't compete with mail-order houses on prices.

You're right, though: if they decided then and there that they would buy the thing, the customers often in fact went ahead and bought it then and there.

But you might plausibly think "hmm, suspicious. I'll wait to buy this until I can do some research." Fine and well; that's exactly what I'd do. Do the research. Buy the thing online. But dismissing the entire notion, based on the idea that "bah, he was just trying to sell me something", is foolishness.

Comment author: Caspian 29 January 2014 03:22:14AM 1 point [-]

Back up your data, people. It's so easy (if you've got a Mac, anyway).

Thanks for the encouragement. I decided to do this after reading this and other comments here, and yes it was easy. I used a portable hard drive many times larger than the Mac's internal drive, dedicated just to this, and was guided through the process when I plugged it in. I did read up a bit on what it was doing but was pretty satisfied that I didn't need to change anything.

Comment author: christopherj 25 January 2014 01:44:25AM 1 point [-]

I can verify this -- as an acknowledged "computer person" and "rational person", I still didn't back up my data, even while advising my friends that they should and that they'd be sorry when they didn't. Fortunately, my hard drive started making interesting new noises, rather than failing without warning, so I didn't embarrass myself too badly. It is fairly common for someone to acknowledge the importance of backups and advise others to make them, while failing to do so themselves.

I think it's a combination of procrastination, laziness, being super-cheap, optimism/arrogance, and not having especially valuable data. Though people with valuable data do it too.

Comment author: wallowinmaya 05 February 2014 11:16:51PM 0 points [-]

You got me kinda scared. I just use Evernote or wordpress for all my important writing. That should be enough, right?

Comment author: RichardKennaway 06 February 2014 09:15:41AM 1 point [-]

Some hazards your online data are exposed to:

  • Your account could be hacked.

  • Their service could be hacked.

  • They might decide that you're in breach of their ToS and close your account.

  • They could go out of business.

Anywhere your data are, they are exposed to some risks. The trick is to have multiple copies, such that no event short of the collapse of civilisation will endanger all of them together.

Comment author: SaidAchmiz 06 February 2014 03:13:50PM 0 points [-]

Precisely. My most immediately critical data — the stuff on which my current employment and professional success/advancement depends — exists in no less than seven places:

  1. My desktop's primary drive (an SSD).
  2. My desktop's backup hard drive.
  3. My laptop's primary drive (an SSD).
  4. My laptop's backup hard drive.
  5. The primary drive (an SSD) of a different computer, in a different part of the country.
  6. That computer's backup hard drive.
  7. A cloud-based storage service.

I worry that that's not enough. I am considering investing in some sort of NAS, or two, and placing them in more secure areas of both of the dwellings to which I have access.

Comment author: gwern 07 February 2014 01:47:54AM *  2 points [-]

How much time are you spending keeping all of that in sync...?

Just having a lot of drives is not a good use of resources from the data protection standpoint. It ensures you protection against the catastrophic failure of one or two drives simultaneously, but you seem unprotected against most other forms of data loss: for example, silent corruption of files (what are you using to ensure integrity? I don't see any mention of hashes or DVCSes), or mistaken deletions/modifications (what stops a file deletion from percolating through each of the 7 before you realize 6 months later that it was a critical file?).

For improving general safety, you should probably drop some of those drives in favor of adding protection in the form of read-only media and error detection + forward error correction (eg periodically making a full backup with PAR2 redundancy to BluRays), and more frequent backups to the backup drives.
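(To make the error-detection half of that concrete, here is a minimal sketch - not gwern's actual setup - that writes a SHA-256 manifest you can re-check later to catch silent corruption. Forward error correction would come from a separate tool such as the par2 command-line utility; the backup path below is hypothetical.)

    import hashlib
    from pathlib import Path

    def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
        """Stream a file through SHA-256 so large files don't fill memory."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            while chunk := f.read(chunk_size):
                h.update(chunk)
        return h.hexdigest()

    backup_root = Path("~/backup").expanduser()  # hypothetical location
    with open("manifest.sha256", "w") as out:
        for p in sorted(backup_root.rglob("*")):
            if p.is_file():
                # "<hash>  <path>" is the format `sha256sum -c` can verify.
                out.write(f"{sha256_of(p)}  {p}\n")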

Comment author: SaidAchmiz 07 February 2014 02:52:18AM 0 points [-]

Synchronization is automatic. It does not take up any of my time.

I have enough drive space to maintain backups going back several months, which protects against both file corruption (volume corruption is taken care of by redundancy) and mistaken deletion/modification. In any case, the files in question are mostly text or text-based, not binary formats, so corruption is less of a concern.

Code, specifically, is of course also kept in git repositories.

Backups to read-only media are a good idea, and I do them periodically as well (not blurays, though; DVDs or even CDs suffice, as the amount of truly critical data is not that large).

Comment author: Lumifer 06 February 2014 03:36:48PM 1 point [-]

I can't resist the temptation... :-D

"Only wimps use tape backup: real men just upload their important stuff on ftp, and let the rest of the world mirror it" -- Linus Torvalds

Comment author: SaidAchmiz 06 February 2014 03:09:53PM 0 points [-]

Certainly not.

Comment author: EndlessStrategy 05 February 2014 11:22:54PM 0 points [-]

No.

Comment author: jazmt 22 January 2014 04:12:16AM 0 points [-]

What method of backing up data do you recommend for a computer with windows? How often do you recommend doing it?

Comment author: zedzed 22 January 2014 05:20:46AM *  4 points [-]

It depends on your use case. My "life work" consists exclusively of things I've typed. These types of files tend to be small, and lend themselves to being written in Google Documents. If I use Emacs, then the files are tiny and I back them up to Google Drive in about 2 seconds. This costs me all of $0 and is very easy.

But maybe your life work also includes a bunch of pictures documenting your experiences. These, and other large files, will quickly exceed your 15 gigs of free storage. Then you're probably looking at an external hard drive or cloud storage. The better fit will depend on things like your internet connection, which USB standard your computer has, your tech level, how much stuff you need backed up, whether you travel a lot, whether you'll lose or damage the external hard drive, etc.

And then just use Yvain's method to find the best one.

Of course, there are more elaborate solutions for power users, but by the time you're high enough level for them, you're a power user and don't need to ask.

Comment author: jazmt 23 January 2014 01:11:57AM 0 points [-]

Thank you, I basically use this method now and am glad to have it corroborated by an expert.

Comment author: SaidAchmiz 22 January 2014 06:28:08AM 1 point [-]

I don't use Windows nearly as much, but one idea (depending on use case, as zedzed said) is cloud storage. Dropbox is free up to 2 GB. Paid services exist. Synchronization is regular and automatic; some services keep some file history, as well.

Comment author: Cyan 21 January 2014 03:45:32AM *  9 points [-]

I'm not Yvain, but his Goofus and Gallant parable did remind me of the time some dude noticed that the uncapped jackpot rollover of the Irish lotto made it vulnerable to a brute force attack.

Comment author: SaidAchmiz 21 January 2014 05:09:11AM *  3 points [-]

Interesting. Pretty niche (in that it doesn't seem to be an example of behavior that the average rationalist will often, or ever, have a chance to emulate), but interesting.

I note that the National Lottery responded by attempting (with partial success) to block the guy from his victory, and also making such things unfeasible in the future. So someone who thought "nah, that would never be allowed to work" (i.e. didn't take the idea seriously), would have been at least partly correct.

Comment author: RichardKennaway 21 January 2014 08:47:00AM 9 points [-]

I note that the National Lottery responded by attempting (with partial success) to block the guy from his victory, and also making such things unfeasible in the future.

As a general rule, when you game the system, the system changes to stop the game, because the organisers have a goal beyond the rules of the day. So there's only a certain window of opportunity to profit. If there are high stakes, you need to be really sure that there is a gap to work with, in between "no-one has done this before, so maybe it doesn't work for reasons I haven't seen" and "everyone's doing it, so does it still work?"

Comment author: Yvain 21 January 2014 05:09:50AM *  10 points [-]

The example in the thread is real-life-ish - compare to the story of Voltaire and friends winning the French lottery. But if you want more:

It's easy to think of trivial examples of one-time victories - for example, an early Bitcoin investor realizing that crypto-currency had potential and buying some when it was still worth fractions of a cent. But you can justly accuse me of cherry-picking here and demand repeatable examples.

Nothing guarantees that there will be repeatable examples - it could be that people are bad at taking ideas seriously until the ideas succeed once, at which point they realize they were wrong and jump on the bandwagon.

But in fact I think there are such examples. One such is investing in index funds rather than mutual funds/picking your own stocks. There are strong reasons to believe you'll do better, most people know those reasons but don't credit them, and some people do credit them and end up with more money.

Occasional use of modafinil might fall in this category as well, depending on whether we define people's usual reasons for not taking it as irrational or rational-given-different-utility-functions.

I don't think most of these examples will end up as "such obvious wins no one could possibly disagree with them" - with the possible exception of index funds it's never as purely mathematical as the lottery example - but I think for most people the calculus is clear.

Comment author: Aleksander 21 January 2014 06:39:41PM 12 points [-]

Isn't it a little bit self-contradictory, to propose that smart people have beaten the market by investing in Bitcoin, and at the same time, that smart people invest in index funds rather than trying to beat the market? Or in other words, are those who got rich off Bitcoin really different from those who picked some lucky stocks in 1997 and cashed out in time?

Comment author: VAuroch 21 January 2014 08:44:57PM 12 points [-]

That's a good point but I'm going to argue against it anyway.

Unlike a lucky stock, Bitcoin wasn't accounted for by mainstream markets at the time. An index fund amortizes the chances of lucky success and catastrophic failure across all the stocks into a single number, giving roughly the same expected value but with much lower variance. Bitcoin wasn't something that could be indexed at that point, so there was no way you could have hedged your bet in the same way that an index fund would let you hedge.
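(The variance claim is easy to see in a toy simulation. This is an illustration of averaging, not a market model: the 7% mean and 30% standard deviation are made-up figures, and real stocks are correlated, which shrinks the benefit.)

    import numpy as np

    rng = np.random.default_rng(0)
    n_stocks, n_draws = 500, 100_000

    # Independent annual returns: 7% mean, 30% sd per stock (made up).
    returns = rng.normal(loc=0.07, scale=0.30, size=(n_draws, n_stocks))

    single = returns[:, 0]        # holding one stock
    index = returns.mean(axis=1)  # equal-weight "index" of all 500

    print(f"single stock: mean {single.mean():.3f}, sd {single.std():.3f}")
    print(f"index:        mean {index.mean():.3f}, sd {index.std():.3f}")
    # Same expected return; under independence the index's sd shrinks by
    # roughly sqrt(500), i.e. about 22x.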

Comment author: gwern 22 January 2014 07:15:16PM *  7 points [-]

It's easy to think of trivial examples of one-time victories - for example, an early Bitcoin investor realizing that crypto-currency had potential and buying some when it was still worth fractions of a cent.

Actually, I've been working on a mini-essay on exactly this topic: because of my PredictionBook use, I have a long paper trail of explicit predictions on Bitcoin which implied major +EV at every time period, but I failed to meaningfully exploit my beliefs and so my gains have been far smaller than they could have been.

Comment author: SaidAchmiz 21 January 2014 06:53:06PM 7 points [-]

UPDATE:

I think index funds are a good example of something that fits my criteria #s 1, 2, and 3. (Thank you to the commenters who've explained to me both why they are a good idea and why many/most people may not understand or believe this.)

Do index funds fit #s 4 and 5? It might be interesting to ask, in the next survey: do you invest? If so, in index funds, or otherwise? If the former, how much money have you made as a result? In the absence of survey data, is there other evidence that rationalists (or "rationalists") invest in index funds more than the general population, and that they win thusly (i.e. make more money)?

I think modafinil is clearly a good example of my #s 2 and 3; I am not so sure about #1. I am still researching the matter. Gwern's article, though very useful, has not convinced me. (Of course, whether it fits #s 4 and 5 also remains to be demonstrated.)

I remain unsure about whether the Bitcoin investment is a good example of anything. Again, if anyone cares to elucidate the matter, I would be grateful.

Comment author: ChrisHallquist 21 January 2014 06:02:28AM 11 points [-]

I seriously doubt most people know the reasons they should be investing in index funds. Remember, the average American has an IQ of 100, doesn't have a degree higher than a high school diploma, and rarely reads books. I'm not sure I'd know the reasons for buying index funds if not for spending a fair amount of time reading econ blogs.

Comment author: SaidAchmiz 21 January 2014 06:18:13AM 3 points [-]

Agreed: I have no idea why I should be investing in index funds (if, indeed, I were investing in anything). My skepticism about that example, though, actually comes from a slightly different place:

If I decided to do some investing, went to five financial experts, asked them what I should invest in, and they all said "Yep, index funds, definitely the way to go", then I would invest in index funds. Right? Where would I even get the idea to do anything else?

And thus, why does "invest in index funds" qualify as a counterintuitive idea? Why is it a thing that some people might not take seriously? Wouldn't it just be the default?

Comment author: VAuroch 21 January 2014 08:40:34PM *  6 points [-]

Because this

If I decided to do some investing, went to five financial experts, asked them what I should invest in, and they all said "Yep, index funds, definitely the way to go", then I would invest in index funds

probably wouldn't happen. If you asked uninvolved experts, it would, but the most accessible experts aren't uninvolved. What is much more likely is that you (the average American with some money to invest) go to invest your money with an investment firm. And that investment firm pushes you toward actively-managed funds, since that's where their incentives are. In order for the idea of investing solely in index funds to be available, you have to put in meaningful thought, if only enough to look for non-corporate advice on how to invest well.

Comment author: SaidAchmiz 21 January 2014 09:28:48PM 2 points [-]

Huh. That makes sense, I suppose. Do people generally not seek advice from uninvolved experts? Is that true only in investing, or in other domains?

Comment author: VAuroch 21 January 2014 09:34:36PM *  5 points [-]

I'm not an expert, but my impression is that most people don't think about this kind of thing without prompting. Which means that they don't think about it unless they, for example, see an ad for Charles Schwab and call them to look into investing. Getting to the point of considering whether the expert has an incentive to lie to you seems to mark you as of substantially above-average reasoning skills.

Comment author: SaidAchmiz 21 January 2014 05:31:18AM 1 point [-]

Thank you for the response.

investing in index funds rather than mutual funds/picking your own stocks. There are strong reasons to believe you'll do better, most people know those reasons but don't credit them, and some people do credit them and end up with more money.

I'd like to hear this from a financial expert. Do we have any who'd like to speak on this?

Occasional use of modafinil might fall in this category as well, depending on whether we define people's usual reasons for not taking it as irrational or rational-given-different-utility-functions.

Oh? What will modafinil do for me? (Will google and return to this thread, but if someone wants to recommend some links with concentrated useful info, it would be appreciated.)

I also have some objections to this sort of "obvious win" that do not depend on what modafinil's specific effects are: namely, that "deciding to start taking a drug without the advice and supervision of a licensed medical professional is bad" seems to be a decent heuristic to live by. It's not unalterable, but it seems good to a first approximation. Do you disagree?

an early Bitcoin investor realizing that crypto-currency had potential and buying some when it was still worth fractions of a cent.

Forgive me for my ignorance: so this guy has lots of Bitcoin now? What can you buy with Bitcoin? Can you just convert the Bitcoin into dollars? If so, how much money did this person make from this?

I don't think most of these examples will end up as "such obvious wins no one could possibly disagree with them" - with the possible exception of index funds it's never as purely mathematical as the lottery example - but I think for most people the calculus is clear.

My suspicion is that these examples are actually more like "it's not clear whether these things are, in fact, even wins for the people who did them, never mind whether they will be wins for other people who are considering doing them". I was really looking for something more unambiguous than that.

I will comment more when I've investigated / received clarifications on the examples you've provided. In the meantime I would love to see more examples.

Comment author: James_Miller 21 January 2014 05:47:19AM *  13 points [-]

I'd like to hear this from a financial expert. Do we have any who'd like to speak on this?

I'm one (PhD in economics), and yes, ordinary investors should use low-fee index funds.

Comment author: jazmt 23 January 2014 01:19:27AM 1 point [-]

For ordinary investors won't there still be an issue of buying these funds at the right time, so as not to buy when the market is unusually high?

Comment author: memoridem 23 January 2014 04:28:09AM 3 points [-]

You can mitigate the problem by making the investment gradually.
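(A toy illustration of why investing gradually helps, with made-up prices: a fixed dollar amount each month buys more shares when prices are low, so your average cost per share works out to the harmonic mean of the prices, which is below their ordinary average.)

    prices = [100, 80, 125, 90, 110]  # hypothetical monthly fund prices ($)
    monthly = 1_000                   # dollars invested each month

    shares = sum(monthly / p for p in prices)
    avg_cost = (monthly * len(prices)) / shares
    print(f"average cost per share: ${avg_cost:.2f}")                   # ~$98.62
    print(f"average market price:   ${sum(prices) / len(prices):.2f}")  # $101.00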

Comment author: James_Miller 23 January 2014 04:48:29AM 1 point [-]

Yes

Comment author: SaidAchmiz 21 January 2014 05:55:17AM 1 point [-]

Thank you. A couple of follow-up questions, if you don't mind:

  1. Do most ordinary investors not do this?

  2. If not, do you know why? Do most people not know about the advantage of index funds? Or do they know, but don't use them anyway?

  3. If the latter, why don't they? That seems strange. What makes index funds the "non-default" idea, so to speak? If index funds are known by financial experts to be superior to mutual funds (or other investing strategies), where would an ordinary person get the idea that they should be using anything other than index funds?

Comment author: TylerJay 21 January 2014 07:23:18AM 16 points [-]

An index fund is intended to go up or down by the exact same amount as the index it tracks. For example, you might hear that the S&P 500 rose a total of 7% last year. If that happened, then an S&P 500 index fund would also go up by 7%.

The main reason people don't invest in index funds is because they want to "beat the market." They see some stocks double or triple within a year and think "oh man, if only I'd bought that stock a bit earlier, I'd be rich!" So some people try to pick individual stocks, but the majority of laypeople want to let "experts" do it for them.

Mutual funds generally have a fund manager and tons of analysts working to figure out how to beat the market (get a return greater than the market itself). They all claim to be able to do this, and some have a record to point to that shows they have done it in the past. For example, fund A may have beaten the market in the previous 3 years, so investors think that by investing in Fund A over an index fund, they will come out ahead.

But unfortunately, markets are anti-inductive so past success of individual stocks, mutual funds, and even index funds is no guarantee of future performance.

If you look at the performance of all funds over the past 20+ years and correct for survivorship bias (take into account all the funds that went out of business as well as the ones that are still around today), it becomes very clear that almost no mutual funds actually beat the market in terms of your ACTUAL RETURN when averaged over each year.

The final big problems with actively managed funds are fees and taxes. Actively managed funds charge higher percentage rates each year to cover their work. That's how they make money. They also tend to sell a percentage of your stocks each year and buy new ones in their attempt to beat the market. This gives a certain "portfolio turnover" percentage and the higher that is, the more you have to pay in taxes (capital gains), which lessens your return even more.

The bottom line is that mutual funds claim to be able to beat the market, and in any given year many do. People chase the money and pay more in capital gains and fees to try to make a higher return. Over time, though, the index fund beats nearly all of them in total return.
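(The fee drag compounds more than it looks. A quick illustration with made-up but typical numbers: 7% gross annual return, a 0.1% index-fund fee versus a 1% active-fund fee, ignoring taxes and turnover.)

    def final_value(principal: float, gross: float, fee: float, years: int) -> float:
        """Compound `principal` at the gross return minus the annual fee."""
        return principal * (1 + gross - fee) ** years

    p, r, years = 10_000, 0.07, 30
    print(f"index fund:  ${final_value(p, r, 0.001, years):,.0f}")  # ~$74,000
    print(f"active fund: ${final_value(p, r, 0.010, years):,.0f}")  # ~$57,400
    # A 0.9-point annual fee gap eats over a fifth of the final balance in
    # 30 years, before counting the extra capital-gains taxes from turnover.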

Comment author: James_Miller 21 January 2014 06:15:06PM 2 points [-]

But unfortunately, markets are anti-inductive

But mutual funds are. I don't remember the citation, but I recall that mutual funds that do very poorly one year are more likely to do so in the future when you take into account fees and taxes.

Comment author: V_V 22 January 2014 10:12:06PM -1 points [-]

Clearly, there are actively managed funds that do consistently worse than index funds, otherwise index funds wouldn't be able to make money, since financial markets are negative-sum.

Comment author: James_Miller 21 January 2014 04:55:09PM 5 points [-]

1) No, but I'm doing my best as a columnist for Better Investing Magazine to tell them. Still, lots of money is in index funds.

2 and 3) Actively managed mutual funds put a lot of money into marketing, and the explanation for index funds is probably beyond most people. A huge number of financial experts would be out of jobs if all non-professional investors switched to index funds.

Comment author: Vaniver 22 January 2014 07:18:09PM 0 points [-]

the explanation for index funds is probably beyond most people.

I don't know, the simple explanation for index funds is "on average, you will get the market average. So why not avoid the fees?", though it requires people being self-aware enough to recognize situations where they are, in fact, average.

Comment author: James_Miller 22 January 2014 09:34:06PM 0 points [-]

But the actively managed mutual fund you are considering investing in has consistently outperformed the market even when taking into account taxes and fees.

Comment author: Vaniver 22 January 2014 09:58:28PM *  1 point [-]

But the actively managed mutual fund you are considering investing in has consistently outperformed the market even when taking into account taxes and fees.

Am I above average at picking actively-managed mutual funds?

Comment author: James_Miller 22 January 2014 10:02:25PM 1 point [-]

What if you are the kind of person who is above average in most things? It's far from obvious why you shouldn't think you would be above average at picking stocks or mutual funds.

Comment author: Lumifer 22 January 2014 09:51:54PM 0 points [-]

ordinary investors should use low fee index funds

Two questions:

  • Doesn't this ignore the very important question of "which indices?"

  • Is this advice different from the "hold a sufficiently diversified portfolio" one?

Comment author: ygert 22 January 2014 10:04:18PM *  1 point [-]

Not an economist or otherwise particularly qualified, but these are easy questions.

I'll answer the second one first: this advice is exactly the same as the advice to hold a diversified portfolio. An index fund is, in concept, a tiny little piece of each and every thing that's on the market. The reasoning behind buying index funds is exactly the reasoning behind holding a diversified portfolio.

As for the first question, remember the idea is to buy a little bit of everything, to diversify. So go meta, and buy little bits of many different index funds. In fact, as this is considered a good idea, people have made such meta-index funds (indices of indices) that you can buy in order to get a little bit of each index fund.

But as an index is defined as "a little bit of everything", the question of which one fades a lot in importance. There are indices of different markets, so one might ask which market to invest in, but even there you want to go meta and diversify. (Say, with one of those meta-indices.) And yes, you want to find one with low fees, which invests as widely as possible, etc. All the standard stuff. But while fiddling with the minutiae may matter, it pales when compared to the difference between buying indices and stupidly trying to pick stocks yourself.

Comment author: Lumifer 22 January 2014 10:08:30PM 2 points [-]

An index fund is, in concept, a tiny little piece of each and every thing that's on the market.

This is not true. An index fund tracks a particular index, which generally does not represent "every thing that's on the market".

For a simple example, consider the most common index -- the S&P 500. This index holds the 500 largest-capitalization stocks in the US. If you invest in an S&P 500 index fund, you can be fairly described as investing in US large-cap stocks. The point is that you are NOT investing in small-cap stocks, and neither are you investing in the large variety of other financial assets (e.g. bonds).

Comment author: ygert 28 January 2014 11:40:32AM 0 points [-]

Yes. What I wrote was a summary, and not as perfectly detailed as one may wish. One can quibble about details ("the market"/"a market"), and those quibbles may be perfectly legitimate. Yes, one who buys S&P 500 index funds is only buying shares in the large-cap market, not in all the many other things in the US (or world) economy. It would be silly to try to define an index fund as something that invests in every single thing on the face of the planet, and some indices are more diversified than others.

That said, the archetypal ideal of an index fund is that imaginary one piece of everything in the world. A fund is more "indexy" the more diversified it is. In other words, when one buys index funds, what one is buying is diversity. To a greater or lesser extent, of course, and one should buy not only the broadest index funds available but also many different (non-overlapping?) index funds, if one wants to reap the full benefit of diversification.

Comment author: Lumifer 29 January 2014 05:07:02PM *  1 point [-]

the archetypal ideal of an index fund is that imaginary one piece of everything in the world.

Maybe in your mind. Not in mine. I think of indices (and index funds) as portfolios assembled under a particular set of rules. None of them tries to reach everything in the world; in fact, a lot of them are designed to be quite narrow.

A fund is more "indexy" the more diversified it is.

I still disagree. An index fund's most striking feature is that it invests passively; that is, its managers generally don't have to make any decisions, they just have to follow publicly announced rules. I don't think a fund is more "indexy" if it owns more, or more diverse, assets.

In other words, when one buys index funds, what one is buying is diversity.

Sigh. Still no. You're buying a portfolio composed under certain rules. Some of these portfolios (= index funds) are reasonably diversified, some aren't, and that depends on how you think of diversification, too.

The "classic" index fund, one that invests into S&P500, is not diversified particularly well. It invests in only a single asset class in a single country.

Comment author: hyporational 29 January 2014 05:30:33PM 0 points [-]

An index fund's most striking feature is that it invests passively; that is, its managers generally don't have to make any decisions, they just have to follow publicly announced rules. I don't think a fund is more "indexy" if it owns more, or more diverse, assets.

Yup. Take an actively managed fund that seems to be indexy by ygert's standards today. It might not be so indexy tomorrow.

Comment author: DanArmak 21 January 2014 07:57:38PM 7 points [-]

Forgive me for my ignorance: so this guy has lots of Bitcoin now? What can you buy with Bitcoin? Can you just convert the Bitcoin into dollars? If so, how much money did this person make from this?

The hypothetical investor probably has the same number of Bitcoins he always had, but Bitcoins are worth many more dollars now than previously, a difference of three orders of magnitude.

Comment author: SaidAchmiz 21 January 2014 08:18:28PM 0 points [-]

Noted. And as for the other things I asked?

Comment author: [deleted] 22 January 2014 04:51:17PM *  1 point [-]

You can easily sell Bitcoins for US dollars on mtgox.com, but after mid-2013 you need a verified account (which IIRC requires sending them proof of residence) to transfer the proceeds to your bank account, which is a heck of a trivial inconvenience. (For all I know there might be an easier way, though.)

Comment author: Eugine_Nier 23 January 2014 02:32:51AM 1 point [-]

I've heard other exchanges, e.g. BitStamp, don't have this problem.

Comment author: Ben_LandauTaylor 21 January 2014 06:06:53AM 2 points [-]

There's lots of modafinil info at gwern's page. Wikipedia is also a pretty good source. The short (and only slightly inaccurate) version is that it gives you the good effects of caffeine, but stronger, and with no withdrawal or other drawbacks. It's had positive effects on my mood and focus.

"deciding to start taking a drug without the advice and supervision of a licensed medical professional is bad" seems to be a decent heuristic to live by

Reasonable! Which is why I'm taking modafinil with the advice and supervision of a licensed medical professional. If you're wary of self-medication, you might want to look into that route.

Comment author: SaidAchmiz 21 January 2014 06:11:28AM 3 points [-]

Thank you for the link; I will look into that.

If you are so inclined, I would be interested in hearing how you approached the "advice of a medical professional" aspect; did you go to your GP and say "So I'm considering taking modafinil"? (If you'd prefer not to answer, I entirely understand, no need to even respond to say no; thank you in any case for your comment.)

Comment author: Ben_LandauTaylor 21 January 2014 07:26:51AM 3 points [-]

I'd been seeing a psychiatrist to get treated for anhedonia. We tried a few different SSRIs, which didn't help. Then I read about modafinil, and it seemed like it could plausibly help treat some of my symptoms (although not their causes), so I brought it up. He agreed it was a reasonable thing to try and prescribed it. I've been taking modafinil regularly for a year, now. It's not a giant boost for me, but it is a boost, and the drawbacks are negligible.

Comment author: Eugine_Nier 21 January 2014 11:24:14PM 0 points [-]

with no withdrawal or other drawbacks.

How much data is there behind this conclusion? Is it comparable to the centuries of experience we have with caffeine?

Comment author: gwern 22 January 2014 02:45:50AM 8 points [-]

There's lots of modafinil info at gwern's page. Wikipedia is also a pretty good source...

How much data is there behind this conclusion

Why are you asking, instead of looking?

Comment author: MugaSofer 21 January 2014 05:57:06PM *  1 point [-]

So - holding up Said, and for that matter my own memories, as evidence - most people simply haven't considered these options.

Which ... checks ... does fit with the original criteria:

1. There is some opportunity for clear, unambiguous victory;

2. Taking advantage of it depends primarily on taking a strange/unconventional/etc. idea seriously (as distinct from e.g. not having the necessary resources/connections, being risk-averse, having a different utility function, etc.);

3. Most people / normal people / non-rationalists do not take the idea seriously, and as a consequence have not taken advantage of said opportunity;

4. Some people / smart people / rationalists take the idea seriously, and have gone for the opportunity;

5. And, most importantly, doing so has (not "will"! already has!) caused them to win, in a clear, unambiguous, significant way.

Comment author: SaidAchmiz 21 January 2014 06:44:03PM *  1 point [-]

I don't think it does, actually. The following are three distinct scenarios (as they pertain to my point #2):

1. Being entirely unaware of what options/possibilities exist in some domain.

Example: I don't do any investing, and so, prior to this thread, had no opinion on index funds whatsoever, nor on mutual funds, nor on anything related.

2. Being unaware of some particular (potentially counterintuitive) idea or option.

Example: I'd never had anyone recommend modafinil to me, or suggest that I should take it, or explain what benefits it might have.

3. Being aware of some idea or option, but not taking it seriously.

Example: I have no idea. Gaming poorly-designed lotteries? I suspect this example fails for other reasons, but it does fit criterion #2.


The claim, as I understand it, was:

There are numerous cases like scenario 3 above, where the main thing that keeps people from taking advantage of an opportunity, and winning thusly, is not taking some idea seriously — despite being aware of that idea. Rationalists, on the other hand, do take the idea seriously, and win thusly.

Index funds are not a good example for people who have no knowledge of investing, because what kept me, for instance, from taking advantage of the profit opportunities offered by the idea "invest in index funds" was not having any knowledge of investing whatsoever, not some failure to take things seriously.

Modafinil is not a good example for people not aware of modafinil or its (alleged) positive effects, because what kept me, for instance, from taking advantage of the cognitive boosts offered by the idea "take modafinil" was not being aware of modafinil, not some failure to take things seriously.

I haven't gotten a good response about Bitcoin, so I won't comment on that.

Now, don't get me wrong: I think index funds are a good example in general, based on the very helpful and clear comments I've gotten on that topic (thank you, commenters!). (Modafinil is not as clearly a good example. I'm still researching.) But my case, and similar others, are not good evidence for those examples.

Comment author: MugaSofer 21 January 2014 07:35:00PM 1 point [-]

Oh, indeed! Sorry, I didn't mean to state that they proved his point or anything like that. I was just observing that they do seem to fit the criteria listed in the original comment Yvain was replying to.

Comment author: SaidAchmiz 21 January 2014 07:51:05PM 0 points [-]

Well... my point is that they do not, in fact, fit the criteria — specifically, criterion #2 — in the case of people who haven't considered these ideas as options.

Comment author: MugaSofer 22 January 2014 09:13:55AM 1 point [-]

Really?

Unless they're not considering them as options because they wouldn't work for them (e.g. not having the necessary resources/connections, being risk-averse, having a different utility function, etc.), but rather because they're unusual in some fashion...

I guess perhaps you weren't clear on why, exactly, you wanted them to have been ignored?

Comment author: Yvain 25 January 2014 05:06:50PM 1 point [-]

I'm not claiming that a majority of the people who don't do these things avoid them because they're aware of them but don't take them seriously. I'm claiming that a majority (or at least many) of the people who possess enough knowledge about them to be able to figure out that they should do them, don't.

My source is mainly anecdotes from people I've talked to who know all the arguments for these but don't do them.

Comment author: SaidAchmiz 26 January 2014 12:00:51AM 2 points [-]

So, concretizing your claim, we get:

  • A majority of the people who know enough about investing to know that they should invest in index funds rather than something else, do not do so, instead continuing to invest in other, less-optimal financial instruments.

I find this hard to believe. Do you really have anecdotes supporting this? (And a lack of a comparable or greater quantity of anecdotes to the contrary?)

  • A majority of the people who possess enough knowledge about nootropic drugs to be able to figure out that they should take modafinil, do not take modafinil.

I am entirely unconvinced that taking modafinil is a good idea, so you would have to first demonstrate that.

  • Something about Bitcoin. I don't know what your claim even means in this case, honestly. Please explain.

I think your post would greatly benefit from the inclusion of some of those anecdotes you allude to. In other words, why do you believe this thing you believe? What has caused you to come to this conclusion? I would love to know!

Comment author: Solvent 29 January 2014 08:05:39PM *  3 points [-]

ETA: Note that I work for App Academy. So take all I say with a grain of salt. I'd love it if one of my classmates would confirm this for me.

Further edit: I retract the claim that this is strong evidence of rationalists winning. So it doesn't count as an example of this.

I just finished App Academy. App Academy is a 9-week intensive course in web development. Almost everyone who goes through the program gets a job, with an average salary above $90k. You only pay if you get a job. As such, it seems to be a fantastic opportunity with very little risk, apart from the nine weeks of your life. (EDIT: They let you live at the office on an air mattress if you want, so living expenses aren't much of an issue.)

There are a bunch of bad reasons to not do the program. To start with, there's the sunk cost fallacy: many people here have philosophy degrees or whatever, and won't get any advantage from that. More importantly, it's a pretty unusual life move at this point to move to San Francisco and learn programming from a non-university institution.

LWers are massively overrepresented at AA. There were 4/40 at my session, and two of those had higher karma than me. I know other LWers from other sessions of AA.

This seems like a decent example of rationalists winning.

EDIT:

My particular point is that for a lot of people, this seems like a really good idea: even if there's a 50% chance of it being a scam, and you're making $50k doing whatever else you were doing with your life, and a failed job search takes 3 months, you're almost better off in expectation over the course of one year (a rough version of the arithmetic is sketched below).

And most of the people I know who disparaged this kind of course didn't do so because they disagreed with my calculation, but because it "didn't offer real accreditation" or whatever. So I feel that this was a good gamble, which seemed weird, and which rationalists were more likely to take.
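
A back-of-the-envelope version of that calculation, with every number assumed (the placement fee and living costs are ignored, so treat this as a sketch of the shape of the argument rather than Solvent's actual figures):

    # 50% chance the course is a "scam" (teaches nothing employers value).
    p_works = 0.5

    stay_put = 50_000                # keep the $50k job all year

    # If it works: ~2 unpaid months of course, then ~10 months at $90k.
    if_works = (10 / 12) * 90_000    # = $75,000
    # If it's a scam: course plus a 3-month job search = ~5 months lost,
    # then back to the old salary for the remaining ~7 months.
    if_scam = (7 / 12) * 50_000      # ~ $29,200

    expected = p_works * if_works + (1 - p_works) * if_scam
    print(f"expected ${expected:,.0f} vs. staying put ${stay_put:,.0f}")
    # ~ $52,100 vs. $50,000

On these assumptions the gamble roughly breaks even in year one, and the higher salary persists in later years, which is where it clearly pays off.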

Comment author: SaidAchmiz 29 January 2014 09:27:16PM 3 points [-]

move to San Francisco

Unrelatedly to my other response: uh, move to San Francisco? That... costs a lot of money. Even if only for nine weeks. Where did you live for the duration?

Comment author: Solvent 29 January 2014 10:42:04PM *  3 points [-]

They let you live at the office. I spent less than $10 a day. Good point though.

Comment author: Jiro 30 January 2014 01:03:20AM *  3 points [-]

Moving to San Francisco has a lot of expenses other than housing expenses, including costs for movers, travel costs (and the costs of moving back if you fail), costs to stop and start utilities, storage costs to store your possessions for 9 weeks if you live in the office, and the excess everyday costs that come from living in an area where everything is expensive. It's also a significant disruption to your social life (which could itself decrease your chances of finding a job, and is a cost even if it doesn't.)

Comment author: Solvent 30 January 2014 01:23:19AM 1 point [-]

You make a good point. But none of the people I've discussed this with who didn't want to do App Academy cited those reasons.

Comment author: Jiro 31 January 2014 05:26:13AM *  1 point [-]

I think this falls into the category of not assuming everyone talks like a LW-er.

Someone who has moved in the past or known someone who has moved might not remember (at least without prompting) each of the individual items which make moving cost. They may just retain a generalized memory that moving is something to be avoided without a good reason.

But guess what? When it comes to making decisions that should take into account the cost of moving, remembering "moving should be avoided without a good reason" will, if their criteria for "good reason" are well-calibrated, lead to exactly the same conclusion as having a shopping list of moving costs in their mind and knowing that the movers are $500 and the loss of social links is worth 1000 utilons etc. even if they can't articulate any numbers or any specific disadvantages of moving. Just because the people didn't actually cite those reasons, and wouldn't be able to cite those reasons, doesn't mean that they weren't in effect rejecting it for those reasons.

And yes, this generalizes to people being unable to articulate reasons to avoid other things that they've learned to avoid.

Comment author: jsteinhardt 11 February 2014 05:46:24PM 0 points [-]

This is an extremely cogent articulation of something I've been wanting to articulate for a while (but couldn't, because I'm the sort of person who just remembers "you shouldn't move without a good reason"). I would strongly encourage you to write a top level post about this.

Comment author: SaidAchmiz 29 January 2014 11:03:43PM 0 points [-]

... huh. Could you elaborate on this, please? How's that work? Do they have actual housing? What is living at the office like?

Comment author: troll 31 January 2014 09:40:23AM *  1 point [-]

They don't have actual housing.

There are three rooms and one open space to put beds / storage in.

80%+ of beds are air mattresses people bought at Target.

Living at the office means you have to sign up at a nearby gym if you wish to shower.

It also means no privacy.

The showers in the nearest gym occasionally turn cold (about 1 in 15 times).

The nearest gym is a ~7-minute walk away and costs $130 for three months' membership.

There are no housing costs.

Lights typically go off at 11 pm - 12 am.

Residents have to wash dishes and take out the trash, and generally pick up after themselves.

There are ~15 residents per active cohort.

Food costs are ~$10 / day if you eat out for lunch and dinner, and ~$4 / day if you make food.

Each sleeping space is ~20 square meters (there are four).

If you sleep in the last sleeping space, you have to move your shit during the day.

Comment author: SaidAchmiz 31 January 2014 01:38:52PM 2 points [-]

Thank you for the info.

I guess the takeaway here is that when someone on LessWrong talks about something being an obvious win, I should take it with a grain of salt, and assume a strong prior probability of this person just having very different values from me.

Comment author: troll 31 January 2014 08:30:27PM 0 points [-]

Possible things to consider are:

It's assumed that you go to App Academy with the aim of getting a high-paying job without paying too much for that opportunity, and that you are fairly confident of your success.

It's also assumed you want to be able to program, and imagine it would be fun in the future, if it is not already.

Humans acclimate to conditions relatively quickly.

It's relatively easy to improve your living conditions with earplugs, night eyewear, and a mattress cover.

Having people around to debug your code when you are too exhausted to do it yourself is a significant boon for progression in programming skill.

That said, it's understandable if your values differ.

Comment author: ChrisHallquist 30 January 2014 01:26:04AM *  2 points [-]

I'm one of the App Academy grads Solvent mentioned. It's unclear to me whether this is indicative of LWers' superior rationality, and to what extent it's because word about App Academy has gotten around within the LessWrong community. For me, the decision process went something like:

  1. Luke recommended it to me.
  2. I asked Luke if he knew anyone who'd been through it who could vouch for the program. He didn't, but could recommend someone within the LessWrong community who'd done a lot of research into coding bootcamps.
  3. I talked to Luke's contact, everything checked out.
  4. After getting in, I sent the contract to my uncle (a lawyer) to look at. He verified there were no "gotcha" clauses in the contract.

So I don't know how much of my decision was driven by superior rationality and how much was driven by information I had that others might not (due in large part to the LessWrong community). Though this certainly played a role.

(EDIT: And in case anyone was wondering, it was a great decision and I'd highly recommend it.)

Comment author: Jiro 30 January 2014 12:58:28AM 1 point [-]

Don't dismiss what non-LWers are trying to say just because they don't phrase it as a LWer would. "Didn't offer real accreditation" means that they 1) are skeptical about whether the plan teaches useful skills (doing a Bayesian update on how likely that is, conditional on the fact that you are not accredited), or 2) are skeptical that the plan actually has the success rate you claim (based on their belief that employers prefer accreditation, which ultimately boils down to Bayesianism as well).

Furthermore, it's hard to figure the probability that something is a scam. I can't think of any real-world situations where I would estimate (with reasonable error bars) that something has a 50% chance of being a scam. How would I be able to tell the difference between something with a 50% chance of being a scam and a 90% chance of being a scam?

Comment author: Solvent 30 January 2014 01:21:29AM 4 points [-]

I don't think that they're thinking rationally and just saying things wrong. They're legitimately thinking wrong.

If they're skeptical about whether the place teaches useful skills, the evidence that it actually gets people jobs should remove that worry entirely. Their point about accreditation usually came up after I had cited the jobs statistics. My impression was that they were just falling back on their cached thoughts about dodgy-looking training programs, without considering the evidence that this one worked.

Comment author: V_V 11 February 2014 03:20:54PM *  1 point [-]

This is the first time I've heard about this training program, but my impression (as somebody living outside the US) is that at the moment there is a shortage of programmers in Silicon Valley, and therefore it is relatively easy, at least for people with the appropriate cognitive makeup (those who can "grok" programming), to get a relatively high-paying programming job, even with minimal training.
I suppose this is especially true in the web app/mobile app industry, since these tend to be highly commodified, non-critical products, which can be developed and deployed incrementally and often have very short lifecycles, hence a "quantity over quality" production process is used, employing a large number of relatively low-skilled programmers (*).

Since the barriers to entry to the industry are low, evaluating the effectiveness of a commercial training program is not trivial: just noting that most people who complete the program get a job isn't great evidence.
You would have to check whether people who complete the program are more likely to get a job, or get higher average salaries, than people who taught themselves programming by reading a few tutorials or completing free online courses like those offered by Code.org, Coursera, etc.
If there was no difference, or the difference was not large enough to pay back the cost of the training program, then paying for it would be sub-optimal.

(* I'm not saying that all app programmers are low-skilled, just that high skill is not a requirement for most of these jobs)

Comment author: ChristianKl 11 February 2014 04:16:33PM *  3 points [-]

Few people have the mental stamina to just teach themselves 8 hours a day by reading tutorials and completing free online courses.

If you go with your mattress to App Academy, it takes effort to not spend time programming when all the people around you are programming.

It's also likely that the environment will make it easy to network with other programmers.

Comment author: Lumifer 11 February 2014 04:59:18PM 1 point [-]

Few people have the mental stamina to just teach themselves 8 hours a day

It's actually a defining characteristic of hackers, except that it's more like 16 hours a day.

Comment author: ChristianKl 11 February 2014 10:56:18PM *  0 points [-]

It depends on the teacher. If you have a specific, well-defined project, then a good hacker can work his 16 hours focused on the project.

Of the people I know, few have the same ability for the kind of general tutorial learning that provides broad knowledge.

I have certainly spent many days where most of my time went to learning, but it wasn't the kind of focused learning you have in school.

Comment author: Lumifer 12 February 2014 02:40:53AM *  1 point [-]

It depends on the teacher.

Which teacher? "...mental stamina to just teach themselves"

Comment author: hyporational 11 February 2014 06:04:15PM 0 points [-]

If that's the case, do you have any idea what makes them so exceptional?

Comment author: Lumifer 11 February 2014 06:22:18PM 1 point [-]

Are you asking what makes people self-motivated, have burning curiosity, and be willing to just dive headlong into new fields of study?

I have no idea, but I suspect carefully choosing one's parents helps :-)

There is also the standard stereotype of high-functioning autistics with superhuman ability to focus, but I don't know how well it corresponds to reality.

You might consider this interesting.

Comment author: hyporational 11 February 2014 06:44:38PM 0 points [-]

I do, thanks.

Comment author: V_V 11 February 2014 04:35:27PM *  0 points [-]

Few people have the mental stamina to just teach themselves 8 hours a day by reading tutorials and completing free online courses.

True, but I suspect that the effect of training time runs into diminishing returns well before you reach 8 hours a day, in particular after you have been doing it for a few days.

It's also likely that the environment will make it easy to network with other programmers.

Agreed.

Comment author: ChristianKl 11 February 2014 04:56:05PM 1 point [-]

I think there are many smart people who have issues with akrasia. Being in an environment with other people who are also working makes it much easier to just sit down and follow the course.

The fact that App Academy's deal is that you only pay when you get a job also makes it in their interest that the logistics of the job search are settled.

For someone without a programming job the way to find work as a programmer might not seem straightforward even after completing a bunch of tutorials.

From this description, the only reason I won't go to App Academy is that it's in the US. If I could do this in a European city, I would likely pursue it, because it's a path that's much more straightforward than my current one.

Comment author: V_V 11 February 2014 05:00:08PM *  0 points [-]

I'm not saying that they offer no value; I'm saying that the fact that they have high hiring rates is, by itself, not strong evidence that they offer enough value to justify their price.

Comment author: Jiro 11 February 2014 08:32:03PM 1 point [-]

"Shortage of programmers" often means "shortage of programmers willing to work for the salaries we offer".

Comment author: Nornagest 12 February 2014 01:26:05AM 1 point [-]

And/or "shortage of programmers ticking all the boxes on this highly specific technology stack we're using". I get the impression that the greatest advantage of these development bootcamps from a hiring perspective is having a turnaround time short enough that they can focus narrowly on whatever technologies are trendy at the moment, as opposed to a traditional CS degree which is much more theory-centric and often a couple years out of date in its practical offerings.

Comment author: V_V 12 February 2014 01:10:27AM *  0 points [-]

It seems to me they already tend to offer quite high salaries.
Further increasing salaries could increase the number of available programmers, although there are going to be both short-term and long-term availability limits. And obviously, companies can't afford to pay arbitrarily high salaries.

More specifically, I suppose that much of this labor demand comes from startups, which often operate on the brink of financial viability.
Startups have high failure rates, but a few of them generate a very high return on investment, which is what makes the whole startup industry viable: VCs are as risk-averse as anybody else, but by diversifying their investments across many startups they reduce the variance of their return and thus obtain a positive expected utility. However, if the failure rate goes up (for instance due to increased labor costs) without the other parameters changing, it could kill the whole industry, and I would expect this to occur in a very non-linear fashion, essentially as a threshold effect.
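
A minimal simulation of the diversification point, with all numbers assumed (a 10% success rate and a 20x payoff per successful startup; nothing here is calibrated to real VC data):

    import random

    # Average per-dollar return of an equal-weight portfolio of n startups,
    # where each startup returns 20x with probability 0.1 and 0 otherwise.
    def portfolio_return(n, p_success=0.1, payoff=20.0):
        hits = sum(random.random() < p_success for _ in range(n))
        return hits * payoff / n

    random.seed(0)
    for n in (1, 10, 100):
        samples = [portfolio_return(n) for _ in range(10_000)]
        mean = sum(samples) / len(samples)
        var = sum((s - mean) ** 2 for s in samples) / len(samples)
        print(f"n={n:3d}  mean={mean:.2f}  variance={var:.2f}")

The expected return stays around 2.0x regardless of n, while the variance shrinks roughly as 1/n: diversification buys less risk at no cost in expectation, which is V_V's point.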

Comment author: Jack 30 January 2014 05:15:48AM 1 point [-]

App Academy was a great decision for me. Though I've just started looking for work, I've definitely become a very competent web developer in a short period of time. Speaking of which, if anyone in the Bay Area is looking for a Rails or Backbone dev, give me a shout.

I don't know if I agree that my decision to do App Academy had a lot to do with rationalism. 4/40 is a high percentage but a small n, and the fact that it was definitely discussed here, or at least around the community, pretty much means it isn't evidence of much. People in my life I've told about it have all been enthusiastic, even people who are pretty focused on traditional credentialism.

Comment author: Aleksander 29 January 2014 11:11:23PM 1 point [-]

I've wondered why more people don't train to be software engineers. According to Wikipedia, 1 in 200 workers is a software engineer. A friend of mine who teaches programming classes estimates 5% of people could learn how to program. If he's right, 9 out of 10 people who could be software engineers aren't, and I'm guessing 8 of them make less in their current job than they would if they decided to switch.
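
The arithmetic behind "9 out of 10", using only the figures cited above:

    software_engineers = 1 / 200   # 0.5% of workers (the Wikipedia figure)
    could_program = 0.05           # 5% of people (the friend's estimate)

    fraction_realized = software_engineers / could_program   # = 0.1
    print(f"{1 - fraction_realized:.0%} of potential programmers aren't")  # 90%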

One explanation is that most people would really hate the anti-social aspect of software engineering. We like to talk a lot about how it's critical for that job to be a great communicator etc., but the reality is, most of the time you sit at your desk and don't talk to anyone. It's possible most people couldn't stand it. Most jobs have a really big social factor in comparison: you talk to clients, students, patients, supervisors, etc.

Comment author: SaidAchmiz 29 January 2014 11:28:11PM 2 points [-]

This...

5% of people could learn how to program

does not imply that all those people can learn to be software engineers. Software engineering is not just programming. There are a lot of terrible software engineers out there.

Comment author: Solvent 29 January 2014 11:20:43PM 1 point [-]

I suspect that most people don't think of making the switch.

Comment author: ChristianKl 11 February 2014 04:16:03PM 0 points [-]

Almost everyone who goes through the program gets a job, with an average salary above $90k.

What does "almost" mean in percentages?

How many people drop out of the program and how many complete it?

Comment author: Solvent 12 February 2014 10:44:35PM 2 points [-]

Of the people who graduated more than 6 months ago and looked for jobs (as opposed to going to university or something), all have jobs.

About 5% of people drop out of the program.

Comment author: RichardKennaway 31 January 2014 11:36:16AM *  0 points [-]

ETA: Note that I work for App Academy.

Any comment on this? (News article a couple of days ago on gummint regulators threatening to shut down App Academy and several similarly named organisations.)

Comment author: Solvent 01 February 2014 09:54:31PM *  0 points [-]

It will probably be fine. See here.

Comment author: SaidAchmiz 29 January 2014 09:25:17PM *  0 points [-]

5. And, most importantly, doing so has (not "will"! already has!) caused them to win, in a clear, unambiguous, significant way.

You have, I take it, already gotten a job as a result of finishing App Academy?

Comment author: Solvent 29 January 2014 10:42:28PM *  1 point [-]

I did, but the job I got was being a TA for App Academy, so that might not count in your eyes.

Their figures are telling the truth: I don't know anyone from the previous cohort who was dissatisfied with their experience of job search.

Comment author: SaidAchmiz 29 January 2014 11:09:40PM 0 points [-]

I did, but the job I got was being a TA for App Academy, so that might not count in your eyes.

Indeed it does not. I don't count your experience as an example of the OP.

dissatisfied with their experience of job search.

That's... an awfully strange phrasing. Do you mean they all found a web development job as a result of attending App Academy? Or what?

Comment author: Solvent 29 January 2014 11:19:53PM 1 point [-]

Pretty much all of them, yes. I should have phrased that better.

My experience was unusual, but if they hadn't hired me, I expect I would have been hired like my classmates.

Comment author: SaidAchmiz 29 January 2014 11:42:33PM 0 points [-]

Out of curiosity, why did you take the TA job? Does it pay more than $90k a year?

Comment author: Morendil 21 January 2014 10:25:34PM 2 points [-]

Google "The Pudding Guy".

Comment author: jkaufman 23 January 2014 11:19:37PM 1 point [-]

You could argue earning to give fits this pattern, though I'm not sure the victory/win is unambiguous enough.

Comment author: Vivificient 21 January 2014 02:28:55PM *  15 points [-]

Your conclusion is possible. But I'll admit I find it hard to believe that non-rationalists really lack the ability to take ideas seriously. The 1 = 2 example is a little silly, but I've known lots of not-very-rational people who take ideas seriously. For example, people who stopped using a microwave when they heard about an experiment supposedly showing that microwaved water kills plants. People who threw out all their plastic dishes after the media picked up a study about health dangers caused by plastics. People who spent a lot of time thinking positive thoughts because they have heard it will make them successful.

Could it be that proto-rationalists are just bad at quantifying their level of belief? Normally, I'd trust somebody's claim to believe something more if they're willing to bet on it; and if they aren't willing to bet on it, then I'd think their real level of belief is lower.

Comment author: Viliam_Bur 21 January 2014 06:39:51PM 2 points [-]

Your examples require magic, pseudoscience, or conspiracy theories. Perhaps the advantage of rationalists is the ability to take boring ideas seriously. (Even immortality is boring when all you have to do is buy life insurance, sign a few papers, and wait. And admit that it most likely will not work. And that if it does work, it will pretty much be science as usual.)

Comment author: Vivificient 21 January 2014 06:57:12PM *  11 points [-]

Making things happen with positive thinking requires magic. But myths about the health effects of microwaves or plastic bottles are dressed up to look like science as usual. The microwave thing is supposedly based on the effect of radiation on the DNA in your food or something -- nonsense, but to someone with little science literacy not necessarily distinguishable from talk about the information-theoretic definition of death.

I'm not sure that signing papers to have a team of scientists stand by and freeze your brain when you die is more boring than cooking your food without a microwave oven. I would guess that cryonics being "weird", "gross", and "unnatural" would be more relevant.

Comment author: Kaj_Sotala 21 January 2014 06:53:20PM 4 points [-]

"There's a health danger involved with plastic dishes" sounds quite boring to me. ("Oh, yet another study about some random substance causing cancer? Yawn.")

Comment author: Sanji 23 January 2014 01:52:01PM 6 points [-]

Is it possible that the difference you're seeing is just lack of knowledge of probabilities? I am a new person, and I don't really understand percentages. My brain just doesn't work that way. I don't know how I would even begin to assign a probability to how likely cryonics is to work.

Comment author: Eugine_Nier 21 January 2014 04:43:34AM 37 points [-]

It only indicates that experienced rationalists and proto-rationalists treat their beliefs in different ways. Proto-rationalists form a belief, play with it in their heads, and then do whatever they were going to do anyway - usually some variant on what everyone else does. Experienced rationalists form a belief, examine the consequences, and then act strategically to get what they want.

Alternate hypothesis: the experienced rationalists are also doing what everyone else (in their community) is doing, they just consider a different group of people their community.

Comment author: zslastman 21 January 2014 07:56:02AM 26 points [-]

My immediate thought was that there is a third variable controlling both experience in rationality and willingness to pay for cryonics, such as 'living or hanging out in the Bay Area'.

Comment author: [deleted] 22 January 2014 07:21:51AM 3 points [-]

Well, only 13% of “experienced rationalists” are signed up for cryonics, which hardly counts as “everyone else” -- unless the thing they do because everyone else is doing it is “I'll sign up for cryonics iff I think it's worth it”, which kind of dilutes the meaning.

(Anecdata: in each of my social circles before I entered university, to a very good zeroth approximation either everyone smoked or nobody did, but nowadays it's not uncommon for me to be among smokers and non-smokers at the same time. Sure, you could say that in some circles people smoke because everybody does, in some circles people don't smoke because nobody does, and in some circles people smoke iff they like because everybody does that, but...)

Comment author: MTGandP 17 April 2014 11:57:05PM 0 points [-]

As army1987 said, only a small percentage of experienced rationalists sign up for cryonics, so I wouldn't expect there to be social pressure to sign up. I think a more likely explanation is that experienced rationalists feel less social pressure against signing up for cryonics.

Comment author: Lalartu 21 January 2014 11:37:22PM 4 points [-]

Well, yes, membership in the LW community makes one more likely to sign up for cryonics, even after correcting for selection, because the LW community promotes cryonics. Yes, it is that simple. It is basic human behaviour and doesn't have much to do with rationality. Remove all the positive portrayal, all the emotions, all that "value life" and "true rationalist" talk, leaving only cold facts and numbers, and a few years later the cryonics signup rate among new LW members will drop much closer to the average among the "people who know about cryonics" group.

Comment author: itaibn0 22 January 2014 12:50:59AM *  8 points [-]

Yes, the fact that the LW community convinces people to sign up for cryonics is not mysterious. The mysterious thing is that the LW community manages at the same time to convince people that cryonics is unlikely to work.

Comment author: jkaufman 23 January 2014 11:28:13PM *  3 points [-]

Except the people who are signed up and the people who think it's less likely to work are not the same people. For example, I meet the criteria for "experienced lesswronger", and I am both not signed up for cryonics and think it's very unlikely to work. There are similarly other people in the Boston meetup group who are signed up and think it's somewhat likely to work. It's only mysterious if you assume we're homogeneous.

Comment author: itaibn0 24 January 2014 01:44:11AM *  0 points [-]

Yes, that's another possible explanation, and here's yet another one. I'm not saying the Yvain's theory is correct, only that Lalartu's comment fails to fully account for the situation Yvain describes (though their later comment seems to make up for it if I understood it correctly).

Comment author: Lalartu 22 January 2014 02:04:17PM 3 points [-]

Because "it will never work" is not the main reason why people don't subscribe for cryonics.

Comment author: itaibn0 22 January 2014 10:55:06PM 0 points [-]

I'm not sure I understand you. Are you saying that the reason most people don't subscribe to cryonics is not that they think that it is unlikely to work but some other reason, and so convincing people that it is unlikely to work is compatible with convincing people to do it? In that case that seems to me like a reasonable point of view.

Comment author: Lalartu 24 January 2014 09:24:07AM 4 points [-]

Yes. The effect of convincing people that cryonics is socially acceptable far outweighs the lower success estimates.

Comment author: private_messaging 26 January 2014 12:31:35AM *  3 points [-]

Less credulous than whom? All of your groups are far, far more credulous about cryonics on average than, say, me, or neurobiology experts, or most people I know. More credulous than many cryonics proponents, too.

As for the rather minor differences between the averages within your groups... said groups joined the site at different times, have different ages, and discovered this site for different reasons (I gather you get more scifi fans now). You even got a general trend towards increased estimates.

That you go on and ignore all signs of confounding, even ones as blatantly in-your-face as last year's "proto-rationalists" and this year's "experienced rationalists" having the same average of 15%, and instead conclude some lower credulity due to training, is clearly an example of the kind of reasoning that shouldn't be taken seriously.

The human brain is big and messy; it consists of many regions that do different things. The notion of "taking ideas seriously", coupled with a thought disorder, is a recipe for utter disaster; coupled with some odd lesion in the right hemisphere it might be helpful, though.

My hypothesis is that normally belief updates and expected-value estimates are done in a way that is not available to introspection, much like how visual object recognition (a form of very advanced evidence processing) is outside introspection. For cryonics, those processes normally report a very low expected value.

edit: also see this. It's mostly about the ratios between extreme cryonics believers, the cryonics subscribers possibly suffering from some buyer's remorse, and folks who give the usual (quite low) estimate. Why and how the ratios changed is a very different question.

Comment author: Locaha 21 January 2014 07:28:13AM 11 points [-]

We investigate with a cross-sectional study, looking at proto-rationalists versus experienced rationalists. Define proto-rationalists as those respondents to the Less Wrong survey who indicate they have been in the community for less than six months and have zero karma (usually indicative of never having posted a comment). And define experienced rationalists as those respondents to the Less Wrong survey who indicate they have been in the community for over two years and have >1000 karma (usually indicative of having written many well-received posts).

This is an incredibly bad definition of a rationalist. What you are actually researching here is people who fit into the mainstream of LW.

Comment author: MugaSofer 21 January 2014 05:42:07PM 3 points [-]

... which is somewhat relevant to whether LW-style "rationalist training" makes one irrational, yes?

Comment author: V_V 21 January 2014 06:25:32PM *  13 points [-]

Define proto-rationalists as those respondents to the Less Wrong survey who indicate they have been in the community for less than six months and have zero karma (usually indicative of never having posted a comment). And define experienced rationalists as those respondents to the Less Wrong survey who indicate they have been in the community for over two years and have >1000 karma (usually indicative of having written many well-received posts).

I don't like this appropriation of the term "rational" (even with the "-ist" suffix), and in fact I find it somewhat offensive.

[ Warning: Trolling ahead ]
But since words are arbitrary placeholders, let's play a little game and replace the word "rationalist" with another randomly generated string, such as "cultist" (which you might possibly find offensive, but remember, it's just a placeholder).

So what does your data say?

Proto-cultists give a higher average probability of cryonics success than committed cultists.
But this isn't necessarily particularly informative, because averaging probabilities from different estimators doesn't really tell us much (consider scenario A, where half of the respondents say p = 1 and half say p = 0, and scenario B, where all the respondents say p = 0.5: the arithmetic mean is the same, but the scenarios are completely different). The harmonic mean can be a better way of averaging probabilities.
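
A quick sketch of those two scenarios, using the values from the parenthetical above (the zero-handling convention in hmean is an assumption of this illustration, not part of V_V's comment):

    from statistics import mean

    def hmean(ps):
        # harmonic mean, with the convention that any zero estimate drives it to 0
        return 0.0 if 0 in ps else len(ps) / sum(1 / p for p in ps)

    scenario_a = [1.0, 0.0]   # half the respondents say p = 1, half say p = 0
    scenario_b = [0.5, 0.5]   # every respondent says p = 0.5

    print(mean(scenario_a), mean(scenario_b))    # 0.5 0.5 -- indistinguishable
    print(hmean(scenario_a), hmean(scenario_b))  # 0.0 0.5 -- distinguishable
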
But anyway, let's assume that the distribution of the responses is well-behaved enough that a randomly sampled proto-cultist is more likely to assign a higher probability of cryonics success than a randomly sampled committed cultist (you can test this hypothesis on the data).

On the other hand, proto-cultists are much less likely to be signed up for cryonics than committed cultists (in fact, none of the proto-cultists are signed up).

What is the correlation between belief in cryonics success and being signed up for cryonics? I don't know, since it isn't reported either here or in the survey results post (maybe it was computed but found to be not significant, since IIUC there was a significance cutoff for correlations in the survey results post).
Do committed cultists who sign up for cryonics do it because they assign a high probability to its success, or despite assigning it a low probability? I have no way of knowing.
Or, actually, I could look at the data, but I won't, since you wrote the post trying to make a point from the data, hence the burden of providing a meaningful statistical analysis was on you.

Let's try to interpret this finding:
Cryonics is a weird belief. Proto-cultists haven't spent much time researching it or thinking about it; their typical background (mostly computer science students) makes them find it somewhat plausible, but they don't really trust their estimate very much.
Committed cultists, on the other hand, have more polarized beliefs. Being in the cult might have actually stripped them of their instrumental rationality (or selected for irrational people), so they act against their explicit beliefs. Or they respond to social pressures, since cryonics is high status in the cult and has been explicitly endorsed by one of the cult elders and author of the Sacred Scrip-...Sequences. Or both.
Oops.

[ End of trolling ]

The bottom line of my deliberately uncharitable post is:

  • Don't use words lightly. Words aren't really just syntactic labels; they convey implicit meaning. Using words with an implicit positive meaning ("experienced rationalist") to refer to the core members of a community naturally suggests a charitable interpretation (that they are smart). Using words with an implicit negative meaning ("committed cultist") suggests an uncharitable interpretation (that they are brainwashed, groupthinking, too preoccupied with costly status signalling that has no value outside the group).

  • If you are trying to make a point from data, provide relevant statistics.

Comment author: private_messaging 26 January 2014 01:13:36AM *  2 points [-]

Yeah. Suppose we were talking about a newage-ish cult whose founder has arranged to be flown to Tibet for a sky burial when he dies. They could very well have the exact same statistics on their online forum.

Comment author: Brillyant 21 January 2014 03:56:04PM *  2 points [-]

This whole article makes a sleight of hand assumption that more rational = more time on LW.

I'm a proto-rationalist by these criteria. I don't see any reason cryonics can't eventually work. I've no interest in it, and I think it is kinda weird.

Some of that weirdness is the typical frozen dead body stuff. But, more than that, I'm weirded out by the immortality-ism that seems to be a big part of (some of) the tenured LW crowd (i.e. rationalists).

I've yet to hear one compelling argument for why hyper-long life = better. The standard answers seem to be "death is obviously bad and the only way you could disagree is because you are biased" and "more years can equal more utilons".

In the case of the former, yeah, death sucks 'cuz it is an end and often involves lots of pain and inconvenience in the run-up to it. To the latter, yeah, I get the gist: More utilons = better. Shut up and do math. Okay.

I'm totally on board with getting rid of gratuitous pain and inconvenience that comes with aging. But, as I said, the "I want to live forever! 'cuz that is winning!" thing is just plain weird to me, at least as much so as the frozen body/head bit.

But what could I know... I'm not rational.

Comment author: TheOtherDave 21 January 2014 04:04:18PM 12 points [-]

If you could remain healthy indefinitely, when do you expect you would choose to die?
Why?

Comment author: Brillyant 21 January 2014 04:37:53PM *  3 points [-]

My first thought is that the number of years lived is relatively arbitrary. 100, 1000, whatever. I'd imagine someone smarter than I could come up with a logical number. Maybe when you could meet your great grandkids or something. Don't know. 10 seems way too small & 1,000,000 way too big, but that is likely just because I'm anchored to 75-85 as an average lifespan.

I think my choice to cease conscious experience would involve a few components:

  • Realization of the end of true novelty. I've read some good stuff on why this might not be an issue given sufficient technology, but I'm ultimately not convinced. It seems to me a perpetual invention of new novelties (new challenges to be overcome, etc.) is still artificial and would not work to extend novelty to the extent I was aware it was artificial. I suspect it might feel like how a particular sandbox video game tends to lose its appeal...and even sandbox video games in general lose their appeal. All this despite the potential for perpetual novelty within the games' engines.

  • I suppose this is related to the first, but it feels a bit separate in my mind... All risk would be lost. With a finite period of time in which to work, all my accomplishments and failures have some scope. I have the body and mind I've been given, and X amount of years to squeeze as much lemonade as I can out of the lemons life throws at me. With the option for infinite time, I'd imagine everything would become an eventuality. Once in a lifetime experiences would be mathematically bound to occur given enough time, of which I'd have an innumerable sum. I'd sum this component up by saying that games are not fun if you can't lose... in fact, they aren't even games. During a philosophical discussion, a former co-worker of mine told me he thought life's meaning was in overcoming obstacles and challenges and finding joy in it. I thought that was the stupidest thing I'd ever heard at the time, but now I basically agree. Infinite availability of time makes this whole purpose kinda moot, in my view.

  • One other component I can think of is a recognition of what death really means. It is only the end of my conscious experience. It is not, in a very real & literal sense, the end of the world. All that happens (presumably) is that I no longer observe. Period. Death isn't nearly as scary or grandiose as we make it out to be.

Comment author: TheOtherDave 21 January 2014 04:47:43PM 8 points [-]

Given a choice between remaining alive for as long as novelty and risk and challenges and obstacles to overcome and joy remain present, or dying before that point, would you choose to die before that point?

Comment author: Vivificient 21 January 2014 05:38:28PM 6 points [-]

Upvoted for providing a clear counterexample to Yvain's assertion that people would find immortality to be "surely an outcome as desirable as any lottery jackpot".

This suggests that a partial explanation for the data is that "experienced rationalists" (high karma, long time in community) are more likely to find immortality desirable, and so more likely to sign up for cryonics despite having slightly lower faith in the technology itself.

Comment author: Yvain 25 January 2014 05:12:20PM 4 points [-]

This whole article makes a sleight of hand assumption that more rational = more time on LW.

Not particularly. If we found that people who spent more time at church are more likely to believe in Jesus, one possible explanation (albeit not proven to be causal) is that going to church makes one believe in Jesus. Likewise, if we find that people who spend more time on Less Wrong are more likely to take a strange idea seriously, one possible (unproven, but reasonable to hypothesize) explanation is that going to Less Wrong makes one more likely to take strange ideas seriously.

Although it's perfectly reasonable not to want to sign up for cryonics (and I haven't signed up myself), the high probability of success but low signup rate among newcomers, versus the lower probability of success and higher signup rate among veterans, suggests the variable changing is "taking ideas seriously"; this is orthogonal to whether you should or shouldn't want to sign up for cryonics.

(unless your claim is that veterans are more anti-deathist than newbies, which would also explain the data and should probably be tested on the next survey. But I think my point that the higher signup rate among veterans does not mean they are more credulous but reflects a change in thought process still stands.)

"Rationalist" here is used to mean "exposed to rationalist ideas", not "is a rationalist person". I realize that's confusing but I don't have better terminology.

Comment author: Brillyant 25 January 2014 11:08:54PM 2 points [-]

Although it's perfectly reasonable not to want to sign up for cryonics (and I haven't signed up myself)

Would you please explain your rationale?

"Rationalist" here is used to mean "exposed to rationalist ideas", not "is a rationalist person". I realize that's confusing but I don't have better terminology.

I understood, and then used, "rationalist" to mean "has an accurate map of the territory". I'd agree exposure to LW helps eliminate some biases and, in that way, is rationalist training that improves one's rationality. I'm not yet willing to say Less Wrong = More Right in every case, however.

Maybe more time on LW leads to improved rationality... up to the point where it doesn't? I find the dogmatic-ish acceptance of certain ideas around here reminds me of religion. It is funny to me you used that example...

Comment author: memoridem 25 January 2014 11:14:45PM 1 point [-]

I find the dogmatic-ish acceptance of certain ideas around here reminds me of religion

Did you actually look at the statistics? Whatever dogma you're seeing isn't there. It's more likely you're thinking some people you've had discussions with here are more representative of LW than they actually are.

Comment author: Brillyant 25 January 2014 11:24:53PM 4 points [-]

As in the church, dogma doesn't need widespread acceptance among the adherents of a particular faith in order to be dogma.

What is far more important to establishing dogma is having de facto authority and/or status leaders accept it and voice their support.

Comment author: memoridem 25 January 2014 11:46:54PM 0 points [-]

Doesn't this apply to any system where power is tilted and the high status members have ideologies? Should we call them all religions?

Comment author: Brillyant 26 January 2014 12:23:07AM *  1 point [-]

I suppose this happens in the way you note. I don't advocate labeling LW, or anyone else, a religion. I just meant to say certain aspects remind me of religion. Other aspects are nothing like religion.

I don't think cryonics is impossible. In fact, I'm probably in the proto-rationalist group that doesn't really understand the science but thinks it has a high probability of working someday. I just don't understand why it is so appealing.

The dogma seems to be that "cryonics and the option of indefinite life extension are good", more than that "cryonics is possible".

Comment author: christopherj 25 January 2014 02:51:12AM 2 points [-]

I've yet to hear one compelling argument for why hyper-long life = better.

It makes dying an optional choice, rather than an inevitable necessity. Talk to an 80-year-old person about the "joys" of aging -- any proper immortality means that you don't age. With a longer lifespan, people will tend toward a long-term view (at least a little). You can enjoy more things, or accomplish more things, with a longer life.

Even people who have said they'd rather die than live as an invalid, almost always change their tune when they become an invalid -- so why should I believe that you'd rather die than live as a healthy man in the prime of life? Go ahead, research this one thing.

If, as you fear, immortality drains motivation, the immortals will be out-competed by the mortals, so the world won't be harmed. And remember also that full immortality means finding a way around the laws of thermodynamics and the death of the universe -- "forever" might necessarily be limited to a few billion years.

Comment author: Brillyant 25 January 2014 04:50:33AM 1 point [-]

It makes dying an optional choice, rather than an inevitable necessity.

Yes.

Talk to an 80-year-old person about the "joys" of aging -- any proper immortality means that you don't age. With a longer lifespan, people will tend toward a long-term view (at least a little). You can enjoy more things, or accomplish more things, with a longer life.

Okay. Eliminating aging and all the negatives involved with it makes sense.

Even people who have said they'd rather die than live as an invalid, almost always change their tune when they become an invalid -- so why should I believe that you'd rather die than live as a healthy man in the prime of life? Go ahead, research this one thing.

I'm not sure what research you think I should do. I accept that many circumstances we can imagine are much different when we actually have to deal with them in the present reality.

If, as you fear, immortality drains motivation, the immortals will be out-competed by the mortals, so the world won't be harmed.

I'm not worried about the world being harmed by immortality, per se. I suppose there are lots of interesting implications that would arise, but I'm not concerned.

And remember also that full immortality means finding a way around the laws of thermodynamics and the death of the universe -- "forever" might necessarily be limited to a few billion years.

Sure. That makes sense.

With a longer lifespan, people will tend toward a long-term view (at least a little). You can enjoy more things, or accomplish more things, with a longer life.

This seems to be the argument. I don't find it compelling at all. Can you help me understand why "tending toward a long-term view" is valuable? And how is accomplishing and enjoying more things always good indefinitely? I think enjoying and accomplishing things is cool, but I'd imagine there are some diminishing returns on almost anything.

I'm hearing... "death is obviously bad and the only way you could disagree is because you are biased" and "more years can equal more utilons".

Am I off base? How?

Death is just the end of your conscious experience. You won't know you're dead. Life is cool, but it isn't as if the stakes on the table are life or eternal torture. That would be a HUGE problem worth freezing bodies or severed heads over.

Comment author: christopherj 29 January 2014 07:10:07AM *  3 points [-]

This seems to be the argument. I don't find it compelling at all. Can you help me understand why "tending toward a long-term view" is valuable? And how is accomplishing and enjoying more things always good indefinitely? I think enjoying and accomplishing things is cool, but I'd imagine there are some diminishing returns on almost anything.

A longer-term view is valuable because it would decrease things like "it's OK to pollute, I'll be dead by the time it gets bad".

I'm hearing... "death is obviously bad and the only way you could disagree is because you are biased" and "more years can equal more utilons".

It's just that many of us don't see any benefit to involuntary death. (Voluntary death also remains unpopular, even in surprisingly bad circumstances.) In fact, I don't know of any product that is marketed as being superior to another product due to having a shorter lifespan ("Because our product will cease to function unexpectedly, you can enjoy it more now before it does!"), while things like "lifetime guarantee" are routinely praised as positive. I mean, for houses, tools, toys, vehicles, and pet animals, longer lifespan == better, and I don't see why it should be different for my children.

As a thought experiment: Most people would, if they could, take a pill that eliminates the effects of aging but causes multiple organ failure at about their original life expectancy. You seem to agree that aging is inconvenient, so I assume you'd take this pill. Would you?

But what if that pill also extended your lifespan indefinitely, as well as curing aging? Not true immortality, of course, since your body would still be susceptible to disease and accident, but it would mean that every year you're as likely to die as you were last year, i.e. your chance of dying doesn't increase with age. Now, there are a lot of people who say death is a good thing. In the interest of pleasing these people, while also providing the elimination of aging, scientists develop a second substance, which causes multiple organ failure at about your expected lifespan. By combining this substance with the immortality pill, they create a cure for aging that does not have immortality as a side-effect. Which of these pills would you prefer, or would you reject both?

Now, if you're not a consequentialist, the second pill no doubt seems to carry the stigma of suicide, even though its effects are identical to those of the previous example, which perhaps seemed both positive and non-suicidal. This stigma would vanish, even if the pill were identical, if the pills had been developed in reverse order, with the immortality pill being a refinement of the anti-aging pill to remove a substance that causes eventual multiple organ failure. Or perhaps simply the existence of both options would make them both repugnant to you, one because it stinks of suicide, and the other because you don't want immortality?


On a different note, there are in fact some legitimate advantages of death by limited lifespan, and some that might be considered both advantageous and disadvantageous.

A limited lifespan allows for permanent retirement; solving death would be a huge problem for the politicians who would have to kick people off retirement, with a risk that they'd rather go bankrupt than anger our elderly. A huge chunk of our taxes comes from estate taxes (aka the "death tax"). Death is also a great equalizer: it will eliminate any specific tyrant and any specific individual who is accumulating "too much" wealth. On the other hand, making death technically not inevitable would decrease our courage to do dangerous or violent things, such as soldiering, volunteering to test drugs, or violent or non-violent resistance to a corrupt regime -- the combination of an immortal tyrant with decreased opposition from internal resistance or external liberators is particularly worrisome. Unlimited lifespan will increase procrastination. Death removes old people set in their ways from positions of power and authority, making way for new ideas. Death makes all your problems go away or become someone else's problems. And with limited lifespans, you won't outlive your friends by more than ~100 years.

Even with all that, there's an equally impressive list of benefits of a longer lifespan, plus I can point to about 7 billion people who think living is better than dying.

Comment author: memoridem 25 January 2014 06:42:34AM *  1 point [-]

If everyone was immortal and healthy by default, do you think it would even occur to you to suggest death as a harmless alternative?

If someone tried to convince you that a 50-year lifespan is better than what we have now, what would be your reaction? Don't you find it interesting that your intuitions support a very narrow optimum that just happens to be what you already have?

Do you argue that "death is just the end of your conscious experience" in the case of anyone who dies prematurely? Try to imagine actual deaths in real life and their outcomes.

Have you read this fable by Bostrom?

Comment author: Nornagest 21 January 2014 04:22:42PM *  1 point [-]

This whole article makes a sleight of hand assumption that more rational = more time on LW.

Yvain isn't talking about rationality, he's talking about membership in a rationalist group. (He says "training", but he's looking at time and status in community, not any specific training regime.) That "-ist" is important: it denotes a specific ideology or methodology. In this case, that's one that's strongly associated with the LW community, so using time and karma isn't a bad measure of one's exposure to it.

Myself, I'd be interested to see how these numbers compare to CFAR alumni. There's some overlap, but not so much as to rule out important differences.

Comment author: Brillyant 21 January 2014 05:04:05PM 2 points [-]

Yvain isn't talking about rationality, he's talking about membership in a rationalist group.

My understanding is that one's rationality (or ability to be rational) would increase as a result of participation in rationalist training. Hence, I see your distinction, but little, if any, difference.

In this case, he assumes (1) LW is rationalist and (2) LW is good at providing training that makes a participating member more rational.

Karma does not necessarily have anything to do with rationality, being rational, rationalist training, etc. It is a point system in which members of LW give points to stuff they want more of. It has also been used as a reward for doing tasks for LW for free, for mass downvoting of dissenting political views, and even as a reward for filling out the survey we are talking about in this post.

Comment author: TheAncientGeek 21 January 2014 05:41:32PM *  8 points [-]

In this case, he assumes (1) LW is rationalist and (2) LW is good at providing training that makes a participating member more rational.

...(3) No one turns up as a newbie at LW having already learnt rationality.

Comment author: MugaSofer 21 January 2014 05:39:15PM 1 point [-]

My understanding is that one's rationality (or ability to be rational) would increase as a result of participation in rationalist training.

That is, in fact, the question Yvain is discussing.

Comment author: V_V 21 January 2014 05:11:00PM 3 points [-]

I dislike this usage, and in fact I find it offensive.
Even with the "-ist" appended, it's an appropriation of a term that has a general meaning of "thinking clearly", which gets redefined as a label of membership in a given community.

Comment author: Nornagest 21 January 2014 07:37:29PM 3 points [-]

Personally, I'm more bothered by the fact that it shares a name with an epistemological stance that's in most ways unrelated and in some ways actually opposed to the LW methodology. (We tend to favor empiricist approaches in most situations.) But that ship has sailed.

Comment author: Bugmaster 12 February 2014 12:20:12AM 1 point [-]

I agree with you that the article engages in sleight of hand; however, I disagree with you regarding immortality.

While I do believe that "living longer (assuming high levels of physical and mental health) is always better" is too strong a statement, I would argue that "having the choice to live as long as you want is always better" is much closer to the truth.

There are many projects that I will leave unfinished when I die; many things I will never get to experience. If I had the choice to live long enough to finish everything I wanted to do, I would gladly take it. I fully expect that, by the time I'm done with all that stuff, I'll find a lot more stuff that would require even more of my time -- but I could be wrong, in which case I'd want the option to end my life voluntarily.

I fully accept that there exist people for whom 80 or so years (or fewer) would be enough. Perhaps they lead much more efficient lives than I do, or perhaps they lack imagination or curiosity, or perhaps their lives are so terrible that death would come as a welcome release. But I have difficulty believing that the majority of people are like that. I'm pretty average, so it seems more likely that most people are like me.

Comment author: Zaine 21 January 2014 07:05:48PM 1 point [-]

For how long did you deliberate upon whether to go with 'Gallant' and 'Goofus', and what did you think about whilst deciding?

Comment author: VAuroch 21 January 2014 09:10:11PM *  6 points [-]

It's a classic pair of Lazy Bad Planner and Shining Example of Humanity, which has been used in the children's magazine Highlights to put morals on display for decades.

I might have gone with Simplicio and Salviati, but that would go over many people's heads for no real benefit.

Comment author: gwern 22 January 2014 07:23:23PM 1 point [-]

put morals on display for decades.

More specifically, 'Goofus and Gallant' has been running since 1948 (or 66 years now).

Comment author: Zaine 23 January 2014 06:15:48PM 2 points [-]

Ah, then the purpose of my question is rendered moot. If it had been an original coining, I wished to know the thought process that went into deciding, "Yes, I shall prime thusly."

Comment author: philh 21 January 2014 02:19:29PM *  1 point [-]

Proto-rationalists thought that, on average, there was a 21% chance of an average cryonically frozen person being revived in the future. Experienced rationalists thought that, on average, there was a 15% chance of same. The difference was marginally significant (p < 0.1).

Both of these numbers are higher than I would have expected, and I'd say they at least weakly support the claim "rationalists are gullible, but experienced rationalists are less gullible than proto-rationalists".

Out of curiosity, I took an average in decibels instead of percents, for people with > 1000 karma. Leaving out two people who gave 100% (really?) and four who gave 0%, if I did the calculations right we get -12 dB = ~5.5%.

(But since 100% and 0% presumably stand in for 100% minus epsilon and 0% plus epsilon, leaving them out might not accurately reflect beliefs.)

(I didn't check time in community, and of course I don't have the full dataset, but the percent average was 14.1 instead of 15, so the results probably don't change much.)

(I might also check whether the 100%ers were trolls, but if we look more closely for trolls among people who profess silly beliefs...)
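
(For concreteness, here's a minimal sketch of that calculation in Python -- the probabilities below are made up for illustration, not the actual survey responses:)

    import math

    # Made-up probabilities for illustration -- NOT the actual survey data.
    probs = [0.15, 0.05, 0.10, 0.02, 0.30, 0.01]

    def to_decibels(p):
        # Probability -> decibels of evidence: 10 * log10 of the odds.
        return 10 * math.log10(p / (1 - p))

    def from_decibels(db):
        # Decibels back to a probability.
        odds = 10 ** (db / 10)
        return odds / (1 + odds)

    mean_db = sum(map(to_decibels, probs)) / len(probs)
    print("decibel mean:    %.1f dB -> %.1f%%" % (mean_db, 100 * from_decibels(mean_db)))
    print("arithmetic mean: %.1f%%" % (100 * sum(probs) / len(probs)))

(On these made-up numbers the decibel mean lands well below the arithmetic mean, as it did above, because the log-odds average is pulled down hard by the low answers.)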

Comment author: jkaufman 23 January 2014 11:31:11PM 0 points [-]

Does averaging in decibels give you a geometric mean? I think it does, in which case it's a better average to take here.
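
(For what it's worth, a quick check confirms this, up to one wrinkle: averaging in decibels takes the geometric mean of the odds rather than of the probabilities themselves, since

    \frac{1}{n}\sum_i 10\log_{10} o_i = 10\log_{10}\Bigl(\prod_i o_i\Bigr)^{1/n}, \qquad o_i = \frac{p_i}{1-p_i}.

For small probabilities the odds roughly equal the probabilities, so the two geometric means nearly coincide.)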