

A vote against spaced repetition

42 ancientcampus 10 March 2014 07:27PM

LessWrong seems to be a big fan of spaced-repetition flashcard programs like Anki, Supermemo, or Mnemosyne. I used to be. After using them religiously for 3 years in medical school, I now categorically advise against using them for large volumes of memorization.


[A caveat before people get upset: I think they are appropriate in certain situations, and I have not tried to use them to learn a language, which seems to be their most popular use. More at the bottom.]


A bit more history: I and 30 other students tried using Mnemosyne (and some used Anki) for multiple tests. At my school, we have a test approximately every 3 weeks, and each test covers about 75 pages of high-density outline-format notes. Many stopped after 5 or so such tests, citing that they simply did not get enough returns from their time. I stuck with it longer and used them more than anyone else, using them for 3 years.


Incidentally, I failed my first year and had to repeat.


By the end of that third year (and studying for my Step 1 boards, a several-month process), I lost faith in spaced-repetition cards as an effective tool for my memorization demands. I later met with a learning-skills specialist, who felt the same way, and had better reasons than my intuition/trial-and-error:

  • Flashcards are less useful to learning the “big picture”
  • Specifically, if you are memorizing a large amount of information, there is often a hierarchy, organization, etc. that can make learning the whole thing easier, and you lose the constant visual reminder of the larger context when using flashcards.
  • Flashcards do not take advantage of spatial, mapping, or visual memory, all of which the human mind is much better optimized for. It is not so well built to memorize pairings between seemingly arbitrary concepts with few or no intuitive links. My preferred methods are, in essence, hacks that use your visual and spatial memory rather than rote.


Here are examples of the typical kind of things I memorize every day and have found flashcards to be surprisingly worthless for:

  • The definition of Sjögren's syndrome
  • The contraindications of Metronidazole
  • The significance of a rise in serum αFP


Here is what I now use in place of flashcards:

  1. Venn diagrams, etc., to compare and contrast similar lists. (This is more specific to medical school, where you learn subtly different diseases.)
  2. Mnemonic pictures. I have used this myself for years to great effect, and later learned it was taught by my study-skills expert, though I'm surprised I haven't found them formally named and taught anywhere else. The basic concept is to make a large picture, where each detail on the picture corresponds to a detail you want to memorize.
  3. Memory palaces. I recently learned how to properly use these, and I'm a true believer. When I only had the general idea to “pair things you want to memorize with places in your room” I found it worthless, but after I was taught a lot of do's and don'ts, they're now my favorite way to memorize any list of 5+ items. If there's enough demand on LW I can write up a summary.


Spaced repetition is still good for knowledge you need to retrieve immediately, when a 2-second delay would make it useless. I would still consider spaced repetition for memorizing some of the more rarely-used notes on the treble and bass clefs, if I ever decide to learn to sight-read music properly. I make no comment on its usefulness for learning a foreign language, as I haven't tried it, but if I were to pick one up I personally would start with a Rosetta-Stone-esque program.


Your mileage may vary, but after seeing so many people try and reject them, I figured it was enough data to share. Mnemonic pictures and memory palaces are slightly time consuming when you're learning them. However, if someone has the motivation and discipline to make a stack of flashcards and study them every day indefinitely, then I believe learning and using those skills is a far better use of time.

Optimal Exercise

38 RomeoStevens 10 March 2014 03:37AM

Followup to: Lifestyle interventions to increase longevity.

What does it mean for exercise to be optimal?

  • Optimal for looks
  • Optimal for time
  • Optimal for effort
  • Optimal for performance
  • Optimal for longevity

There may be even more criteria.

We're all likely going for a mix of outcomes, and optimal exercise is going to change depending on your weighting of different factors. So I'm going to discuss something close to a minimum viable routine based on meta-analyses of exercise studies.

Not knowing which sort of exercise yields the best results gives our brains an excuse to stop thinking about it. The intent of this post is to go over the dose responses to various types of exercise. We’re going to break through vague notions like “exercise is good” and “I should probably exercise more” with a concrete plan where you understand the relevant parameters that will cause dramatic improvements.


Beware Trivial Fears

36 Stabilizer 04 February 2014 05:40AM

Does the surveillance state affect us? It has affected me, and I didn't realize that it was affecting me until recently. I give a few examples of how it has affected me:

  1. I was once engaged in a discussion on Facebook about Obama's foreign policy. Around that time, I was going to apply for a US visa. I stopped the discussion early. Semi-consciously, I was worried that what I was writing would be checked by US visa officials and would lead to my visa being denied.
  2. I was once really interested in reading up on the Unabomber and his manifesto, because somebody mentioned that he had some interesting ideas, and though fundamentally misguided, he might have been onto something. I didn't explore much because I was worried---again semi-consciously---that my traffic history would be logged on some NSA computer somewhere, and that I'd pattern match to the Unabomber (I'm a physics grad student, the Unabomber was a mathematician).
  3. I didn't visit Silk Road as I was worried that my visits would be traced, even though I had no plans of buying anything.
  4. Just generally, I try to not search for some really weird stuff that I want to search for (I'm a curious guy!). 
  5. I was almost not going to write this post. 
And these are just the ones that I became conscious of. I wonder how many more have slipped under the radar.

Yes, I know these fears are silly. In fact, writing them out makes them feel even more silly. But they still affected my behavior. Now, I may be atypical. But I'm sure I'm not that atypical. I'm sure many, many people refrain from visiting and exploring parts of the Internet and writing things on different forums and blogs because of the fear of being recorded and the data being used against them. Especially susceptible to this fear are immigrants.

In Beware Trivial Inconveniences, Yvain points out that the Great Firewall of China is very easy to bypass but the vast majority of Chinese people don't bypass it because it's a trivial inconvenience.

I would like to introduce the analogous and very related concept of a trivial fear: a fear of low-probability events that affects behavior in a major way, especially over a large population. Much more insidiously, the people experiencing these fears don't even realize they're experiencing them: because the fear is of small magnitude, it can be rationalized away easily.

In this particular case, the fear acts in a way so as to restrict the desire for information and free speech.

In a recent conversation, a friend mentioned that calling the modern surveillance state 'Orwellian' is hyperbole. Maybe so. I don't know if the surveillance state is a Good Thing or a Bad Thing. I'm not an economist or a political scientist or a moral philosopher. I simply want to point out that the main lesson from 1984 is not the exact details of the dystopia, but the fact that the people living in the dystopia weren't even remotely aware that they were living in one.

Humans can drive cars

33 Apprentice 30 January 2014 11:55AM

There's been a lot of fuss lately about Google's gadgets. Computers can drive cars - pretty amazing, eh? I guess. But what amazed me as a child was that people can drive cars. I'd sit in the back seat while an adult controlled a machine taking us at insane speeds through a cluttered, seemingly quite unsafe environment. I distinctly remember thinking that something about this just doesn't add up.

It looked to me like there was just no adequate mechanism to keep the car on the road. At the speeds cars travel, a tiny deviation from the correct course would take us flying off the road in just a couple of seconds. Yet the adults seemed pretty nonchalant about it - the adult in the driver's seat could have relaxed conversations with other people in the car. But I knew that people were pretty clumsy. I was an ungainly kid but I knew even the adults would bump into stuff, drop things and generally fumble from time to time. Why didn't that seem to happen in the car? I felt I was missing something. Maybe there were magnets in the road?

Now that I am a driving adult I could more or less explain this to a 12-year-old me:

1. Yes, the course needs to be controlled very exactly and you need to make constant tiny course corrections or you're off to a serious accident in no time.

2. Fortunately, the steering wheel is a really good instrument for making small course corrections. The design is somewhat clumsiness-resistant.

3. Nevertheless, you really are just one misstep away from death and you need to focus intently. You can't take your eyes off the road for even one second. Under good circumstances, you can have light conversations while driving but a big part of your mind is still tied up by the task.

4. People can drive cars - but only just barely. You can't do it safely even while only mildly inebriated. That's not just an arbitrary law - the hit to your reflexes substantially increases the risks. You can do pretty much all other normal tasks after a couple of drinks, but not this.

So my 12-year-old self was not completely mistaken but still ultimately wrong. There are no magnets in the road. The explanation for why driving works out is mostly that people are just somewhat more capable than I'd thought. In my more sunny moments I hope that I'm making similar errors when thinking about artificial intelligence. Maybe creating a safe AGI isn't as impossible as it looks to me. Maybe it isn't beyond human capabilities. Maybe.

Edit: I intended no real analogy between AGI design and driving or car design - just the general observation that people are sometimes more competent than I expect. I find it interesting that multiple commenters note that they have also been puzzled by the relative safety of traffic. I'm not sure what lesson to draw.

On not getting a job as an option

32 diegocaleiro 11 March 2014 02:44AM

This was originally a comment on VipulNaik's recent inquiries about the academic lifestyle versus the job lifestyle. Instead of calling them lifestyles he called them career options, but I'm taking a different emphasis here on purpose.

Due to information-hazard risks, I recommend that Effective Altruists who are still wavering back and forth not read this. EA spoiler alert. 

I'd just like to point out a cultural difference that I have consistently noticed between Americans and Brazilians which seems relevant here. 

To have a job and work in the US is taken as a *de facto* biological need. It is as abnormal for an American, in my experience, to consider not working, as it is to consider not breathing, or not eating.  It just doesn't cross people's minds. 

If anyone has insight above and beyond "Protestant ethics and the spirit of capitalism" let me know about it, I've been waiting for the "why?" for years. 

So yeah, let me remind people that you can spend years and years not working. That not getting a job isn't going to kill you or make you less healthy. That ultravagabonding is possible and feasible, and many do it for over six months a year. That I have a friend who lives as the boyfriend of his sponsor's wife in a triad and somehow has never worked a day in his life (the husband of the triad pays for it all; both men are straight). That I've hosted, through Couchsurfing, an Argentinian who left graduate economics for two years to randomly travel the world, ended up in Rome, and passed through here on his way back. That Puneet Sahani has now been travelling the world for well over two years with no money and an Indian passport. I've also hosted a lovely Estonian gentleman who works on computers four months a year in London to earn pounds, and spends eight months a year getting to know countries while learning their cultures. Brazil was his third country. 

Oh, and never forget the Uruguayan couple I just met at a dance festival, who have been travelling as hippies around and around South America for five years now, and showed no sign of owning more than 500 dollars' worth of stuff. 

Also, in case you'd like to live in a paradise valley taking Santo Daime (a religious ritual involving DMT) about twice a week, you can do it on a salary of approximately 500 dollars per month in Vale do Gamarra, where I just spent carnival; that is what the guy who drove us back does. Given Brazilian or Turkish returns on investment, that would cost you 50,000 bucks in case you refused to work within the land itself for the 500. 


Oh, I forgot to mention that though it certainly makes you unable to do expensive stuff, thus removing the paradox of choice and part of your existential angst (woohoo, fewer choices!), there is nearly no loss of status from not having a job. In fact, during these years in which I was either being an EA and directing an NGO, or studying on my own, or doing a Masters (which, let's agree, is not very time-consuming), my status has increased steadily, and many opportunities would have been lost if I had had a job that wouldn't let me move freely. Things like being invited as a Visiting Scholar to the Singularity Institute, giving a TED talk, directing IERFH, and spending a month working at FHI with Bostrom, Sandberg, and the classic LessWrong poster Stuart Armstrong. 

So when thinking about what to do with your future, my dear fellow Americans, please at least consider not getting a job. At least admit what everyone knows from the bottom of their hearts: that jobs are abundant for high-IQ people (especially you, my programmer lurker readers... I know you are there... and you native English speakers, I can see you there, unnecessarily worrying about your earning potential). 

A job is truly an instrumental goal, and your terminal goals certainly do have chains of causation leading to them that do not contain a job for 330 days a year.  Unless you are a workaholic who experiences flow in virtue of pursuing instrumental goals. Then please, work all day long, donate as much as you can, and may your life be awesome! 

Be comfortable with hypocrisy

31 The_Duck 08 April 2014 10:03AM

Neal Stephenson's The Diamond Age takes place several decades in the future and this conversation is looking back on the present day:

"You know, when I was a young man, hypocrisy was deemed the worst of vices,” Finkle-McGraw said. “It was all because of moral relativism. You see, in that sort of a climate, you are not allowed to criticise others-after all, if there is no absolute right and wrong, then what grounds is there for criticism?" [...]

"Now, this led to a good deal of general frustration, for people are naturally censorious and love nothing better than to criticise others’ shortcomings. And so it was that they seized on hypocrisy and elevated it from a ubiquitous peccadillo into the monarch of all vices. For, you see, even if there is no right and wrong, you can find grounds to criticise another person by contrasting what he has espoused with what he has actually done. In this case, you are not making any judgment whatsoever as to the correctness of his views or the morality of his behaviour-you are merely pointing out that he has said one thing and done another. Virtually all political discourse in the days of my youth was devoted to the ferreting out of hypocrisy." [...]

"We take a somewhat different view of hypocrisy," Finkle-McGraw continued. "In the late-twentieth-century Weltanschauung, a hypocrite was someone who espoused high moral views as part of a planned campaign of deception-he never held these beliefs sincerely and routinely violated them in privacy. Of course, most hypocrites are not like that. Most of the time it's a spirit-is-willing, flesh-is-weak sort of thing."

"That we occasionally violate our own stated moral code," Major Napier said, working it through, "does not imply that we are insincere in espousing that code."

I'm not sure if I agree with this characterization of the current political climate; in any case, that's not the point I'm interested in. I'm also not interested in moral relativism.

But the passage does point out a flaw which I recognize in myself: a preference for consistency over actually doing the right thing. I place a lot of stock--as I think many here do--in self-consistency. After all, clearly any moral code which is inconsistent is wrong. But dismissing a moral code for inconsistency or a person for hypocrisy is lazy. Morality is hard. It's easy to get a warm glow from the nice self-consistency of your own principles and mistake this for actually being right.

Placing too much emphasis on consistency led me to at least one embarrassing failure. I decided that no one who ate meat could be taken seriously when discussing animal rights: killing animals because they taste good seems completely inconsistent with placing any value on their lives. Furthermore, I myself ignored the whole concept of animal rights because I eat meat, so that it would be inconsistent for me to assign animals any rights. Consistency between my moral principles and my actions--not being a hypocrite--was more important to me than actually figuring out what the correct moral principles were. 

To generalize: holding high moral ideals is going to produce cognitive dissonance when you are not able to live up to those ideals. It is always tempting--for me at least--to resolve this dissonance by backing down from those high ideals. An alternative we might try is to be more comfortable with hypocrisy. 


Related: Self-deception: Hypocrisy or Akrasia?

The January 2013 CFAR workshop: one-year retrospective

31 Qiaochu_Yuan 18 February 2014 06:41PM

About a year ago, I attended my first CFAR workshop and wrote a post about it here. I mentioned in that post that it was too soon for me to tell if the workshop would have a large positive impact on my life. In the comments to that post, I was asked to follow up on that post in a year to better evaluate that impact. So here we are!

Very short summary: overall I think the workshop had a large and persistent positive impact on my life. 

Important caveat

However, anyone using this post to evaluate the value of going to a CFAR workshop themselves should be aware that I'm local to Berkeley and have had many opportunities to stay connected to CFAR and the rationalist community. More specifically, in addition to the January workshop, I also

  • visited the March workshop (and possibly others),
  • attended various social events held by members of the community,
  • taught at the July workshop, and
  • taught at SPARC.

These experiences all went a long way toward helping me digest and reinforce the workshop material (which was also improving over time), and a typical workshop participant might not have these advantages. 

Answering a question

pewpewlasergun wanted me to answer the following question:

I'd like to know how many techniques you were taught at the meetup you still use regularly. Also which has had the largest effect on your life.

The short answer is: in some sense very few, but a lot of the value I got out of attending the workshop didn't come from specific techniques. 

In more detail: to be honest, many of the specific techniques are kind of a chore to use (at least as of January 2013). I experimented with a good number of them in the months after the workshop, and most of them haven't stuck (but that isn't so bad; the cost of trying a technique and finding that it doesn't work for you is low, while the benefit of trying a technique and finding that it does work for you can be quite high!). One that has is the idea of a next action, which I've found incredibly useful. Next actions are the things that to-do list items should be, say in the context of using Remember The Milk. Many to-do list items you might be tempted to write down are difficult to actually do because they're either too vague or too big and hence trigger ugh fields. For example, you might have an item like

  • Do my taxes

that you don't get around to until right before you have to because you have an ugh field around doing your taxes. This item is both too vague and too big: instead of writing this down, write down the next physical action you need to take to make progress on this item, which might be something more like

  • Find tax forms and put them on desk

which is both concrete and small. Thinking in terms of next actions has been a huge upgrade to my GTD system (as was Workflowy, which I also started using because of the workshop) and I do it constantly. 

But as I mentioned, a lot of the value I got out of attending the workshop was not from specific techniques. Much of the value comes from spending time with the workshop instructors and participants, which had effects that I find hard to summarize, but I'll try to describe some of them below: 

Emotional attitudes

The workshop readjusted my emotional attitudes towards several things for the better, and at several meta levels. For example, a short conversation with a workshop alum completely readjusted my emotional attitude towards both nutrition and exercise, and I started paying more attention to what I ate and going to the gym (albeit sporadically) for the first time in my life not long afterwards. I lost about 15 pounds this way (mostly from the eating part, not the gym part, I think). 

At a higher meta level, I did a fair amount of experimenting with various lifestyle changes (cold showers, not shampooing) after the workshop and overall they had the effect of readjusting my emotional attitude towards change. I find it generally easier to change my behavior than I used to because I've had a lot of practice at it lately, and am more enthusiastic about the prospect of such changes. 

(Incidentally, I think emotional attitude adjustment is an underrated component of causing people to change their behavior, at least here on LW.)

Using all of my strength

The workshop is the first place I really understood, on a gut level, that I could use my brain to think about something other than math. It sounds silly when I phrase it like that, but at some point in the past I had incorporated into my identity that I was good at math but absentminded and silly about real-world matters, and I used it as an excuse not to fully engage intellectually with anything that wasn't math, especially anything practical. One way or another the workshop helped me realize this, and I stopped thinking this way. 

The result is that I constantly apply optimization power to situations I wouldn't have even tried to apply optimization power to before. For example, today I was trying to figure out why the water in my bathroom sink was draining so slowly. At first I thought it was because the strainer had become clogged with gunk, so I cleaned the strainer, but then I found out that even with the strainer removed the water was still draining slowly. In the past I might've given up here. Instead I looked around for something that would fit farther into the sink than my fingers and saw the handle of my plunger. I pumped the handle into the sink a few times and some extra gunk I hadn't known was there came out. The sink is fine now. (This might seem small to people who are more domestically talented than me, but trust me when I say I wasn't doing stuff like this before last year.)

Reflection and repair

Thanks to the workshop, my GTD system is now robust enough to consistently enable me to reflect on and repair my life (including my GTD system). For example, I'm quicker to attempt to deal with minor medical problems I have than I used to be. I also think more often about what I'm doing and whether I could be doing something better. In this regard I pay a lot of attention in particular to what habits I'm forming, although I don't use the specific techniques in the relevant CFAR unit.

For example, at some point I had recorded in RTM that I was frustrated by the sensation of hours going by without remembering how I had spent them (usually because I was mindlessly browsing the internet). In response, I started keeping a record of what I was doing every half hour and categorizing each hour according to a combination of how productively and how intentionally I spent it (in the first iteration it was just how productively I spent it, but I found that this was making me feel too guilty about relaxing). For example:

  • a half-hour intentionally spent reading a paper is marked green.
  • a half-hour half-spent writing up solutions to a problem set and half-spent on Facebook is marked yellow. 
  • a half-hour intentionally spent playing a video game is marked with no color.
  • a half-hour mindlessly browsing the internet when I had intended to do work is marked red. 

The act of doing this every half hour itself helps make me more mindful about how I spend my time, but having a record of how I spend my time has also helped me notice interesting things, like how less of my time is under my direct control than I had thought (but instead is taken up by classes, commuting, eating, etc.). It's also easier for me to get into a success spiral when I see a lot of green. 


Being around workshop instructors and participants is consistently intellectually stimulating. I don't have a tactful way of saying what I'm about to say next, but: two effects of this are that I think more interesting thoughts than I used to and also that I'm funnier than I used to be. (I realize that these are both hard to quantify.) 


I worry that I haven't given a complete picture here, but hopefully anything I've left out will be brought up in the comments one way or another. (Edit: this totally happened! Please read Anna Salamon's comment below.) 

Takeaway for prospective workshop attendees

I'm not actually sure what you should take away from all this if your goal is to figure out whether you should attend a workshop yourself. My thoughts are roughly this: I think attending a workshop is potentially high-value and therefore that even talking to CFAR about any questions you might have is potentially high-value, in addition to being relatively low-cost. If you think there's even a small chance you could get a lot of value out of attending a workshop I recommend that you at least take that one step. 

How long will Alcor be around?

28 Froolow 17 April 2014 03:28PM

The Drake equation for cryonics is pretty simple: work out all the things that need to happen for cryonics to succeed one day, estimate the probability of each thing occurring independently, then multiply all those numbers together. Here’s one example of the breakdown from Robin Hanson. According to the 2013 LW survey, LW believes the average probability that cryonics will be successful for someone frozen today is 22.8% assuming no major global catastrophe. That seems startlingly high to me – I put the probability at at least two orders of magnitude lower. I decided to unpick some of the assumptions behind that estimate, particularly focussing on assumptions which I could model.
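The multiplication itself is trivial to sketch in code; the factor names and probabilities below are purely illustrative placeholders, not Hanson's breakdown or the survey's numbers:

```python
# Drake-style estimate for cryonics: multiply independent success factors.
# All probabilities here are made-up placeholders for illustration only.
factors = {
    "cryopreservation preserves the relevant brain structure": 0.5,
    "revival technology is eventually developed": 0.5,
    "your provider keeps your body frozen until then": 0.3,
    "no global catastrophe intervenes": 0.8,
}

p_success = 1.0
for name, p in factors.items():
    p_success *= p

print(f"P(success) = {p_success:.3f}")
```

Note that the product is extremely sensitive to each term: halving any single factor halves the final estimate, which is why the time-dependent terms discussed below matter so much.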

EDIT: This needs a health warning; here be overconfidence dragons. There are psychological biases that can lead you to estimating these numbers badly based on the number of terms you're asked to evaluate, statistical biases that lead to correlated events being evaluated independently by these kind of models and overall this can lead to suicidal overconfidence if you take the nice neat number these equations spit out as gospel.

Every breakdown includes a component for ‘the probability that the company you freeze with goes bankrupt’ for obvious reasons. In fact, the probability of bankruptcy (and global catastrophe) are particularly interesting terms because they are the only terms which are ‘time-dependent’ in the usual Drake equation. What I mean by this is that if you know your body will be frozen intact forever, then it doesn’t matter to you when effective unfreezing technology is developed (except to the extent you might have a preference to live in a particular time period). By contrast, if you know safe unfreezing techniques will definitely be developed one day it matters very much to you that it occurs sooner rather than later because if you unfreeze before the development of these techniques then they are totally wasted on you.

The probability of bankruptcy is also very interesting because – I naively assumed last week – we must have excellent historical data on the probability of bankruptcy given the size, age and market penetration of a given company. From this – I foolishly reasoned – we must be able to calculate the actual probability of the ‘bankruptcy’ component in the Cryo-Drake equation and slightly update our beliefs.

I began by searching for the expected lifespan of an average company and got two estimates which I thought would be a useful upper- and lower-bound. Startup companies have an average lifespan of four years. S&P 500 companies have an average lifespan of fifteen years. My logic here was that startups must be the most volatile kind of company, S&P 500 must be the least volatile and cryonics firms must be somewhere in the middle. Since the two sources only report the average lifespan, I modelled the average as a half-life. The results really surprised me; take a look at the following graph:


Even assuming cryonics firms are as well managed as S&P 500 companies, a 22.8% chance of success depends on every single other factor in the Drake equation being absolutely certain AND unfreezing technology being developed in 37 years.
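The half-life model behind this can be sketched in a few lines; the 4- and 15-year half-lives are the startup and S&P 500 average lifespans cited above, interpreted as exponential decay:

```python
# Survival probability under a half-life model of company lifespan:
# after one half-life, half of companies remain; after two, a quarter; etc.

def survival_prob(years, half_life):
    """P(company still exists after `years`) if average lifespan acts as a half-life."""
    return 0.5 ** (years / half_life)

# Startup-like (4-year) vs S&P-500-like (15-year) firms over various horizons.
for half_life in (4, 15):
    for years in (10, 37, 100):
        print(f"half-life {half_life:>2}y, {years:>3}y horizon: "
              f"P(survival) = {survival_prob(years, half_life):.4f}")
```

Even under the generous 15-year half-life, survival probability decays geometrically, which is what makes the overall estimate so sensitive to how long you need the company to last.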

But I noticed I was confused; Alcor has been around forty-ish years. Assuming it started life as a small company, the chance of that happening was one in ten thousand. That both Alcor AND The Cryonics Institute have been successfully freezing people for forty years seems literally beyond belief. I formed some possible hypotheses to explain this:

  1. Many cryo firms have been set up, and I only know about the successes (a kind of anthropic argument)
  2. Cryonics firms are unusually well-managed
  3. The data from one or both of my sources was wrong
  4. Modelling an average life expectancy as a half-life was wrong
  5. Some extremely unlikely event that is still more likely than the one-in-a-billion chance my model predicts – for example, the BBC article is an April Fools’ joke that I don’t understand.

I’m pretty sure I can rule out 1; if many cryo firms were set up I’d expect to see four lasting twenty years and eight lasting ten years, but in fact we see one lasting about five years and two lasting indefinitely. We can also probably rule out 2; if cryo firms were demonstrably better managed than S&P 500 companies, the CEO of Alcor could go and run Microsoft and use the pay differential to support cryo research (if he was feeling altruistic). Since I can’t do anything about 5, I decided to focus my analysis on 3 and 4. In fact, I think 3 and 4 are both correct explanations; my source for the S&P 500 companies counted dropping out of the S&P 500 as a company ‘death’, when in fact you might drop out because you got taken over, because your industry became less important (but kept existing) or because other companies overtook you – your company can’t do anything about Facebook or Apple displacing them from the S&P 500, but Facebook and Apple don’t make you any more likely to fail. Additionally, modelling as a half-life must have been flawed; a company that has survived one hundred years and a company that has survived one year are not equally likely to collapse!

Consequently I searched Google Scholar for a proper academic source. I found one, but I should introduce the following caveats:

  1. It is UK data, so may not be comparable to the US (my understanding is that the US is a lot more forgiving of a business going bankrupt, so the UK businesses may liquidate slightly less frequently).
  2. It uses data from 1980. As well as being old data, there are specific reasons to believe that this time period overestimates the true survival of companies. For example, the mid-1980s was an economic boom in the UK, and 1980-1985 misses both major UK financial crashes of modern times (Black Wednesday and the Sub-Prime Crash). If the BBC is to be believed, the trend has been for companies to go bankrupt more and more frequently since the 1920s.

I found it really shocking that this question was not better studied. Anyway, the key table that informed my model was this one, which unfortunately seems to break the website when I try to embed it. The source is Dunne, Paul, and Alan Hughes. "Age, size, growth and survival: UK companies in the 1980s." The Journal of Industrial Economics (1994): 115-140.

You see on the left the size of the company in 1980 (£1 in 1980 is worth about £2.5 now). On the top is the size of the company in 1985, with additional columns for ‘taken over’, ‘bankrupt’ or ‘other’. Even though a takeover might signal the end of a particular product line within a company, I have only counted bankruptcies as representing a threat to a frozen body; it is unlikely Alcor will be bought out by anyone unless they have an interest in cryonics.

The model is a Discrete Time Markov Chain analysis in five-year increments. What this means is that I start my hypothetical cryonics company at <£1m and then allow it to either grow or go bankrupt at the rate indicated in the article. After the first period I look at the new size of the company and allow it to grow, shrink or go bankrupt in accordance with the new probabilities. The only slightly confusing decision was what to do with takeovers. In the end I decided to ignore takeovers completely, and redistribute the probability mass they represented to all other survival scenarios.
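For concreteness, here is a minimal sketch of that calculation. The size bands and transition probabilities below are illustrative placeholders, not the actual Dunne & Hughes figures (which are in the table I couldn't embed), and takeover probability is assumed already redistributed across the surviving states:

```python
import numpy as np

# Discrete-time Markov chain in five-year steps, as described above.
# States: three company-size bands plus an absorbing 'bankrupt' state.
# NOTE: these transition probabilities are ILLUSTRATIVE placeholders,
# not the real Dunne & Hughes (1994) numbers; takeover probability has
# already been folded back into the surviving states.
P = np.array([
    [0.70, 0.15, 0.02, 0.13],  # < £1m: small firms fail most often
    [0.10, 0.72, 0.13, 0.05],  # £1m-£8m
    [0.00, 0.08, 0.90, 0.02],  # > £8m: failure is much rarer once large
    [0.00, 0.00, 0.00, 1.00],  # bankruptcy is absorbing
])

dist = np.array([1.0, 0.0, 0.0, 0.0])  # start as a small (< £1m) company
for period in range(83):               # 83 five-year steps = 415 years
    dist = dist @ P
p_survive = 1.0 - dist[-1]
print(f"P(still solvent after 415 years) = {p_survive:.3f}")
```

The qualitative point survives the placeholder numbers: because per-period bankruptcy risk falls as the firm grows, long-run survival is far higher than a constant-hazard (half-life) model extrapolated from the small-firm failure rate would predict.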

The results are astonishingly different:


Now your body can survive 415 years for the same 22.8% chance of revival (assuming all other probabilities are certain). Perhaps more usefully, if you estimate the year you expect revival to occur, you can read across the x axis to find the probability that your cryo company will still exist by then. For example, in the OvercomingBias link above, Hanson estimates that this will occur in 2090, meaning he should probably assign something like 0.65 to the probability that his cryo company is still around.

Remember you don’t actually need to estimate the year YOUR revival will occur, but only the year in which the first successful revival proves that cryogenically frozen bodies are ‘alive’ in a meaningful sense and therefore receive protection under the law in case your company goes bankrupt. In fact, you could instead estimate the year Congress passes a ‘right to not-death’ law which would protect your body in the event of a bankruptcy even before routine unfreezing, or the year when brain-state scanning becomes advanced enough that it doesn’t matter what happens to your meatspace body because a copy of your brain exists on the internet.

My conclusion is that the survival of your cryonics firm is a lot more likely than the average person in the street thinks, but probably a lot less likely than you think if you are strongly into cryonics. This is probably not news to you; most of you will be aware of over-optimism bias and have tried to correct for it. Hopefully these concrete numbers will be useful next time you consider the Cryo-Drake equation and the net present value of investing in cryonics.

Business Networking through LessWrong

28 JoshuaFox 02 April 2014 05:39PM

Is anyone interested in contacting other people in the LessWrong community to find a job, employee, business partner, co-founder, adviser, or investor?

Connections like this develop inside ethnic and religious groups, as well as among university alums or members of a fraternity. I think that LessWrong can provide the same value.

For example, LessWrong must have plenty of skilled software developers in dull jobs, who would love to work with smart, agenty rationalists. Likewise, there must be some company founders or managers who are having a very hard time finding good software developers. 

A shared commitment to instrumental and epistemic rationality should be a good starting point, not to mention a shared memeplex to help break the ice. (Paperclips! MoR!)

Besides being fun, working together with other rationalists could be a good business move.

As a side-benefit, it also has good potential to raise the sanity waterline and help further develop new rationality skills, both personally and as a community.

Naturally, such a connection is not guaranteed to produce results. But it's hard to find the right people to work with, so why not try this route? And although you can cold-contact someone you've seen online, you don't know who's interested in what you have to offer, so I think more effort is needed to bootstrap such networking.

I'd like to gauge interest. (Alexandros has volunteered to help.) If you might be interested in this sort of networking, please fill out this short Google Form [Edit: Survey closed as of April 15]. I'll post an update about what sort of response we get.

Privacy: Although the main purpose of this form is to gauge interest, and other details may be needed to form good connections, the info might be enough to get some contacts going. So, we might use this information to personally connect people. We won't share the info or build any online group with it. If we get a lot of interest we may later create some sort of online mechanism, but we’ll be sure to get your permission before adding you.


Edit April 6: We're still seeing that people are filling out the form, so we'll wait a week or two, and report on results.


Edit April 15: See some comments on the results at this comment, below.


Two arguments for not thinking about ethics (too much)

28 Kaj_Sotala 27 March 2014 02:15PM

I used to spend a lot of time thinking about formal ethics, trying to figure out whether I was leaning more towards positive or negative utilitarianism, about the best courses of action in light of the ethical theories that I currently considered the most correct, and so on. From the discussions that I've seen on this site, I expect that a lot of others have been doing the same, or at least something similar.

I now think that doing this has been more harmful than useful, for two reasons: there's no strong evidence that this will give us much insight into our preferred ethical theories, and more importantly, thinking in those terms easily leads to akrasia.

1: Little expected insight

This seems like a relatively straightforward inference from all the discussion we've had about complexity of value and the limits of introspection, so I'll be brief. I think that attempting to come up with a verbal formalization of our underlying logic and then doing what that formalization dictates is akin to "playing baseball with verbal probabilities". Any introspective access we have into our minds is very limited, and at best, we can achieve an accurate characterization of the ethics endorsed by the most verbal/linguistic parts of our minds. (At least at the moment; future progress in moral psychology or neuroscience may eventually change this.) Because our morals are also derived from parts of our brains to which we don't have such access, our theories will unavoidably be incomplete. We are also prone to excessive rationalization when it comes to thinking about morality: see Joshua Greene and others for evidence suggesting that much of our verbal reasoning is actually just post-hoc rationalization for underlying moral intuitions.

One could try to make the argument from Dutch Books and consistency, and argue that if we don't explicitly formulate our ethics and work out possible contradictions, we may end up doing things that work at cross-purposes. E.g. maybe my morality says that X is good, but I don't realize this and therefore end up doing things that go against X. This is probably true to some extent, but I think that evaluating the effectiveness of various instrumental approaches (e.g. the kind of work that GiveWell is doing) is much more valuable for people who have at least a rough idea of what they want, and that the kinds of details that formal ethics focuses on (including many of the discussions on this site, such as this post of mine) are akin to trying to calculate something to the 6th digit of precision when our instruments only measure things at 3 digits of precision.

To summarize this point, I've increasingly come to think that living one's life according to the judgments of any formal ethical system gets it backwards - any such system is just a crude attempt at formalizing our various intuitions and desires, and such systems are mostly useless in determining what we should actually do. To the extent that the things that I do resemble the recommendations of utilitarianism (say), it's because my natural desires happen to align with utilitarianism's recommended courses of action, and if I say that I lean towards utilitarianism, it just means that utilitarianism produces the fewest recommendations that conflict with what I would want to do anyway.

2: Leads to akrasia

Trying to follow the formal theories can be actively harmful towards pretty much any of the goals we have, because the theories and formalizations that the verbal parts of our minds find intellectually compelling are different from the ones that actually motivate us to action.

For example, Carl Shulman comments on why one shouldn't try to follow utilitarianism to the letter:

As those who know me can attest, I often make the point that radical self-sacrificing utilitarianism isn't found in humans and isn't a good target to aim for. Almost no one would actually take on serious harm with certainty for a small chance of helping distant others. Robin Hanson often presents evidence for this, e.g. this presentation on "why doesn't anyone create investment funds for future people?" However, sometimes people caught up in thoughts of the good they can do, or a self-image of making a big difference in the world, are motivated to think of themselves as really being motivated primarily by helping others as such. Sometimes they go on to an excessive smart sincere syndrome, and try (at the conscious/explicit level) to favor altruism at the severe expense of their other motivations: self-concern, relationships, warm fuzzy feelings.

Usually this doesn't work out well, as the explicit reasoning about principles and ideals is gradually overridden by other mental processes, leading to exhaustion, burnout, or disillusionment. The situation winds up worse according to all of the person's motivations, even altruism. Burnout means less good gets done than would have been achieved by leading a more balanced life that paid due respect to all one's values. Even more self-defeatingly, if one actually does make severe sacrifices, it will tend to repel bystanders.

Even if one avoided that particular failure mode, there remains the more general problem that very few people find it easy to be generally motivated by things like "what does this abstract ethical theory say I should do next". Rather, they are motivated by e.g. a sense of empathy and a desire to prevent others from suffering. But if we focus too much on constructing elaborate ethical theories, it becomes much too easy to start thinking excessively in terms of "what would this theory say I should do" and forget entirely about the original motivation that led us to formulate that theory. Then, because an abstract theory isn't intrinsically compelling in the same way that an empathetic concern over suffering is, we end up with a feeling of obligation that we should do something (e.g. some concrete action that would reduce the suffering of others), but not an actual intrinsic desire to really do it. Which leads to the kinds of actions that optimize towards the goal of making that feeling of obligation go away, rather than towards the actual goal. This can manifest itself via things such as excessive procrastination. (See also this discussion of how "have-to" goals require willpower to accomplish, whereas "want-to" goals are done effortlessly.)

The following is an excerpt from Trying Not To Try by Edward Slingerland that makes the same point, discussing the example of an ancient king who thought himself selfish because he didn't care about his subjects, but who did care about his family, and who did spare the life of an ox when he couldn't bear to see its distress as it was about to be slaughtered:

Mencius also suggests trying to expand the circle of concern by beginning with familial feelings. Focus on the respect you have for the elders in your family, he tells the king, and the desire you have to protect and care for your children. Strengthen these feelings by both reflecting on them and putting them into practice. Compassion starts at home. Then, once you’re good at this, try expanding this feeling to the old and young people in other families. We have to imagine the king is meant to start with the families of his closest peers, who are presumably easier to empathize with, and then work his way out to more and more distant people, until he finally finds himself able to respect and care for the commoners. “One who is able to extend his kindness in this way will be able to care for everyone in the world,” Mencius concludes, “while one who cannot will find himself unable to care for even his own wife and children. That in which the ancients greatly surpassed others was none other than this: they were good at extending their behavior, that is all.”

Mencian wu-wei cultivation is about feeling and imagination, not abstract reason or rational arguments, and he gets a lot of support on this from contemporary science. The fact that imaginative extension is more effective than abstract reasoning when it comes to changing people’s behavior is a direct consequence of the action-based nature of our embodied mind. There is a growing consensus, for instance, that human thought is grounded in, and structured by, our sensorimotor experience of the world. In other words, we think in images. This is not to say that we necessarily think in pictures. An “image” in this sense could be the feeling of what it’s like to lift a heavy object or to slog in a pair of boots through some thick mud. [...]

Here again, Mencius seems prescient. The Mohists, like their modern utilitarian cousins, think that good behavior is the result of digital thinking. Your disembodied mind reduces the goods in the world to numerical values, does the math, and then imposes the results onto the body, which itself contributes nothing to the process. Mencius, on the contrary, is arguing that changing your behavior is an analog process: education needs to be holistic, drawing upon your embodied experience, your emotions and perceptions, and employing imagistic reflection and extension as its main tools. Simply telling King Xuan of Qi that he ought to feel compassion for the common people doesn’t get you very far. It would be similarly ineffective to ask him to reason abstractly about the illogical nature of caring for an ox while neglecting real live humans who are suffering as a result of his misrule. The only way to change his behavior—to nudge his wu-wei tendencies in the right direction—is to lead him through some guided exercises. We are analog beings living in an analog world. We think in images, which means that both learning and teaching depend fundamentally on the power of our imagination.

In his popular work on cultivating happiness, Jonathan Haidt draws on the metaphor of a rider (the conscious mind) trying to work together with and tame an elephant (the embodied unconscious). The problem with purely rational models of moral education, he notes, is that they try to “take the rider off the elephant and train him to solve problems on his own,” through classroom instruction and abstract principles. They take the digital route, and the results are predictable: “The class ends, the rider gets back on the elephant, and nothing changes at recess.” True moral education needs to be analog. Haidt brings this point home by noting that, as a philosophy major in college, he was rationally convinced by Peter Singer’s arguments for the moral superiority of vegetarianism. This cold conviction, however, had no impact on his actual behavior. What convinced Haidt to become a vegetarian (at least temporarily) was seeing a video of a slaughterhouse in action—his wu-wei tendencies could be shifted only by a powerful image, not by an irrefutable argument.

My personal experience of late has also been that thinking in terms of "what does utilitarianism dictate I should do" produces recommendations that feel like external obligations, "shoulds" that are unlikely to get done; whereas thinking about e.g. the feelings of empathy that motivated me to become utilitarian in the first place produces motivations that feel like internal "wants". I was very close to (yet another) burnout and serious depression some weeks back: a large part of what allowed me to avoid it was that I stopped entirely asking the question of what I should do, and began to focus entirely on what I want to do, including the question of which of my currently existing wants are ones that I'd wish to cultivate further. (Of course there are some things like doing my tax returns that I do have to do despite not wanting to, but that's a question of necessity, not ethics.) It's too soon to say whether this actually leads to increased productivity in the long term, but it feels great for my mental health, at least for the time being.

The Problem of "Win-More"

26 katydee 26 March 2014 06:32PM

In Magic: the Gathering and other popular card games, advanced players have developed the notion of a "win-more" card. A "win-more" card is one that works very well, but only if you're already winning. In other words, it never helps turn a loss into a win, but it is very good at turning a win into a blowout. This type of card seems strong at first, but since these games usually do not use margin of victory scoring in tournaments, they end up being a trap-- instead of using cards that convert wins into blowouts, you want to use cards that convert losses into wins.

This concept is useful and important and you should never tell a new player about it, because it tends to make them worse at the game. Without a more experienced player's understanding of core concepts, it's easy to make mistakes and label cards that are actually good as being win-more.

This is an especially dangerous mistake to make because it's relatively uncommon for an outright bad card to seem like a win-more card; win-more cards are almost always cards that look really good at first. That means that if you end up being too wary of win-more cards, you're going to end up misclassifying good cards as bad, and that's an extremely dangerous mistake to make. Misclassifying bad cards as good is relatively easy to deal with, because you'll use them and see that they aren't good; misclassifying good cards as bad is much more dangerous, because you won't play them and therefore won't get the evidence you need to update your position.

I call this the "win-more problem." Concepts that suffer from the win-more problem are those that-- while certainly useful to an advanced user-- are misleading or net harmful to a less skillful person. Further, they are wrong or harmful in ways that are difficult to detect, because they screen off feedback loops that would otherwise allow someone to realize the mistake.

LINK: In favor of niceness, community, and civilisation

26 Solvent 24 February 2014 04:13AM

Scott, known on LessWrong as Yvain, recently wrote a post complaining about an inaccurate rape statistic.

Arthur Chu, who is notable for winning money on Jeopardy recently, argued against Scott's stance that we should be honest in arguments in a comment thread on Jeff Kaufman's Facebook profile, which can be read here.

Scott just responded here, with a number of points relevant to the topic of rationalist communities.

I am interested in what LW thinks of this.

Obviously, at some point being polite in our arguments is silly. I'd be interested in people's opinions of how dire the real world consequences have to be before it's worthwhile debating dishonestly.

Gunshot victims to be suspended between life and death [link]

24 Dr_Manhattan 27 March 2014 04:33PM

- First "official" program to practice suspended animation

- The article naturally goes on to ask whether longer SA (months, years) is possible 

- Amazing quote: "Every day at work I declare people dead. They have no signs of life, no heartbeat, no brain activity. I sign a piece of paper knowing in my heart that they are not actually dead. I could, right then and there, suspend them. But I have to put them in a body bag. It's frustrating to know there's a solution."

- IMO, if (I hope!) successful, this will go a long way toward bridging the emotional gap for cryonics

Google may be trying to take over the world

23 eli_sennesh 27 January 2014 09:33AM

So I know we've already seen them buying a bunch of ML and robotics companies, but now they're purchasing Shane Legg's AGI startup.  This is after they've acquired Boston Dynamics, several smaller robotics and ML firms, and started their own life-extension firm.


Is it just me, or are they trying to make Accelerando or something closely related actually happen?  Given that they're buying up real experts and not just "AI is inevitable" prediction geeks (who shall remain politely unnamed out of respect for their real, original expertise in machine learning), has someone had a polite word with them about not killing all humans by sheer accident?

Items to Have In Case of Emergency...Maybe.

22 daenerys 03 April 2014 12:23AM

This post is inspired by a recent comment thread on my Facebook. I asked people to respond with whether or not they kept fire/lock boxes in their homes for their important documents (mainly to prove to a friend that this is a Thing People Do). It was pretty evenly divided, with slightly more people having them than not. The interesting pattern I noticed was that almost ALL of my non-rationality community friends DID have them, almost NONE of my rationality community friends did, and some hadn't even considered it.

This could be because getting a lock box is not an optimal use of time or money, OR it could be because rationalists overlook mundane household-y things more than the average person does. I'm actually not certain which it is, so I'm writing this post presenting the case for keeping certain emergency items, in the hope that either I'll get some interesting points against prepping that I haven't thought of yet, OR I'll get even better ideas in the comments.

General Case

Many LWers are concerned about x-risks that have a small chance of causing massive damage. We may or may not see this occur in our lifetime. However, there are small problems that occur every 2-3 years or so (extended blackouts, being snowed in, etc.), and there are mid-sized catastrophes that you might see a couple times in your life (blizzards, hurricanes, etc.). It is likely that at least once in your life you will be snowed into your house and the pipes will burst or freeze (or whatever the local equivalent is, if you live in a warmer climate). Having the basic preparations ready for these occurrences is low cost (many minor emergencies require a similar set of preparations), and high payoff.

Medicine and Hospitality

This category is so minor, you probably don't consider it to be "emergency", but it's still A Thing To Prepare For. It really sucks having to go to the store when you're sick because you don't already have the medicine you need at hand. It's better to keep the basics always available, just in case. You, or a guest, are likely to be grateful that you have these on hand. Even if you personally never get sick, I consider a well-stocked medicine cabinet to be a point of hospitality. If you have people over to your place with any frequency, it is nice to have:


  • Pain Reliever (ibuprofen, NSAID)
  • Zyrtec (Especially if you have cats. Guests might be allergic!)
  • Antacids, Chewable Pepto, Gas-X (Especially if you have people over for food)
  • Multipurpose contact solution (getting something in your contact lens without any solution nearby is both rare and awful)
  • Neosporin/bandaids (esp. if your cats scratch :P)


  • Spare toothbrush (esp. if you might have a multi-day guest)
  • Single-use disposable toothbrushes (such as Wisp). These are also good to carry with you in your backpack or purse.
  • Pads/tampons (Yes, even if you're a guy. They should be somewhere obvious such as under the sink, so that your guest doesn't need to ask)

Of course, you can also go all out with your First Aid kit and include less common items like epi pens, bandages, etc.

Vehicle Kits

The stuff you keep at home isn't going to be very helpful if you have a minor emergency while travelling. Some things that are useful to keep in your car include:

  • Blanket
  • Water
  • Protein/ granola bar
  • Jumper Cables
  • Spare Tire and jack
  • If you get frequent headaches or the like, you might also want to keep your preferred pain reliever or whatnot in the car

Minor Catastrophe Preparation

These are somewhat geography dependent. Adjust for whatever catastrophes are common in your area. There are places where if you don't have 4 wheel drive, you're just not going to be able to leave your house during a snowstorm. There are places where tornadoes or earthquakes are common. There are places where a bad hurricane rolls through every couple years. If you're new to an area, make sure you know what the local "regular" emergency is.

Some of these are a bit of a harder sell, I think. 

  • Flashlights (that you can find in the dark)
  • Spare batteries
  • Candles/Lighter
  • Water (the standard recommendation is one gallon per person per day, with enough for 3 days)
  • Non perishable food (ideally that doesn't need to be cooked, e.g. canned goods)
  • Manual can opener
  • Fire Extinguisher
  • Action: look up which emergencies are most common for your area, and read the official preparedness recommendations for them

Bigger Preparations

This list goes a bit beyond the basics:
  • A "Go Bag" (something pre-packed that you can grab and go)
  • A fire-safe lock box (not only does this protect your documents, but it helps in organizing that there is an obvious place where these important documents go, and not just "somewhere in that file drawer...or somewhere else")
  • Back up your data in the cloud
  • Moar water, moar food


Arthur Chu: Jeopardy! champion through exemplary rationality

22 syllogism 02 February 2014 08:02AM

I'm not sure I've ever seen such a compelling "rationality success story". There's so much that's right here.

The part that really grabs me about this is that there's no indication that his success has depended on "natural" skill or talent. And none of the strategies he's using are from novel research. He just studied the "literature" and took the results seriously. He didn't arbitrarily deviate from the known best practice based on aesthetics or intuition. And he kept a simple, single-minded focus on his goal. No lost purposes here --- just win as much money as possible, bank the winnings, and use it to self-insure. It's rationality-as-winning, plain and simple.

Recommendations for donating to an anti-death cause

20 fowlertm 09 April 2014 02:56AM

I've recently had the bad luck of having numerous people close to me die. Though I've wanted to contribute to anti-aging and anti-death research for a while, I'm only now in the position of being stable and materially well-off enough to throw around semi-serious cash.

Who should I donate to? I don't want to do anything with cryonics yet; I haven't given cryonics enough thought to be convinced it'd be worth the money. But I was considering the Methuselah Foundation.


Discovering Your Secretly Secret Sensory Experiences

20 seez 18 March 2014 10:12AM

In his recent excellent blog post, Yvain discusses a few "universal" (commonplace) human experiences that many people never notice they don't have, such as the ability to smell, see some colors, see mental pictures, and feel emotions.  I was reminded of a longstanding argument I had with a friend.  She always insisted that she would rather be blind than deaf.  I could not understand how that was possible, since the visual world is so much richer and more interesting.  We later found out that I can see an order of magnitude more colors than she can, but have subpar ability to distinguish tones.  And I thought she was just being a contrarian for its own sake.  I thought the experience of that many colors was universal, and had rarely seen evidence that challenged that belief.  

More seriously, a good friend of mine went through the first three decades of his life without realizing he suffered from a serious genetic disorder that caused him extreme body pain and terrible headaches whenever he became tired or dehydrated. He thought everyone felt that way, but considered it whiny to talk about it. He almost never mentioned it, and never realized what it was, until <bragging> I noticed how tense his expressions became when he got tired, asked him about it, then put it together with some other unusual physical experiences I knew he had </bragging>.

This got me thinking about when it is likely we might be having unusual sensory experiences and not realize for long periods of time.  I am calling these "secretly secret experiences."  Here are the factors that might increase the likelihood of having a secretly secret experience. 

1) When they are rarely consciously mentally examined: experiences such as the ability to distinguish subtle differences in shades of color are tested occasionally (when choosing paint or ripe fruit), but few people besides interior decorators think about how good their shade-distinguishing skills are. Other examples include the feeling of being in different moods or mental states, breathing, and sensing commonly-sensed things (the look of roads, the sound of voices, etc.). Most of the examples from the blog post fall under this category. People might not notice that they over-experience, under-experience, or differently experience such feelings relative to others.

2) When they are rarely discussed in everyday life: If my experience of pooping feels very different from other peoples' I may never know, because I don't discuss the experience in detail with anyone.  If people talked about their experiences, I would probably notice if mine didn't match up, but that's unlikely to happen.  The same might apply for other experiences that are taboo to discuss, such as masturbation, sex (in some cultures), anything considered gross or unhygienic, or socially awkward experiences (in some cultures).

3) When there is social pressure to experience something a certain way: it may be socially dangerous to admit you don't find members of the opposite sex attractive, or you didn't enjoy The Godfather or whatever.  Depending on your sensitivity to social pressure (see 4) and the strength of the pressure, this could lead to unawareness about true rare preferences.  

4) Sensitivity to external influences:  Some people pick up on social cues more easily than others.  Some notice social norms more readily, and some seem more or less willing to violate some norms (partly because of how well they perceive them, plus some other factors). I can imagine that a deeply autistic person might be influenced far less by mainstream descriptions of different experiences.  Exceptionally socially attuned people might (perhaps) take social influences to heart and be less able to distinguish their own from those they know about.  

5) When skills are redundant or you have good substitutes:  For example, if we live in a world with only fish and mammals, and all mammals are brown and warm and all fish are cold and silver, you might never notice that you can't feel temperature because you are still a perfectly good mammal and fish distinguisher.  In the real world, it's harder to find clear examples, but I can think of substitutes for color-sightedness such as shade and textural cues that increase the likelihood of a color-blind person not realizing zir blindness.  Similarly, empathy and social adeptness may increase someone's ability both to mask that ze is having a different experience than others, and the likelihood that ze will believe all others are good at hiding a different experience than the one they portray openly.

What else can people think of?

Special thanks to JT for his feedback and for letting me share his story.

What we learned about Less Wrong from Cognito Mentoring advising

20 VipulNaik 06 March 2014 09:40PM

In late December 2013, Jonah, my collaborator at Cognito Mentoring, announced the service on LessWrong. Information about the service was also circulated in other venues with high concentrations of gifted and intellectually curious people. Since then, we've received ~70 emails asking for mentoring from learners across all ages, plus a few parents. At least 40 of our advisees heard of us through LessWrong, and the number is probably around 50. Of the 23 who responded to our advisee satisfaction survey, 16 filled in information on where they'd heard of us, and 14 of those 16 had heard of us from LessWrong. The vast majority of student advisees with whom we had substantive interactions, and the ones we felt we were able to help the most, came from LessWrong (we got some parents through the Davidson Forum post, but that's a very different sort of advising).

In this post, I discuss some common themes that emerged from our interaction with these advisees. Obviously, this isn't a comprehensive picture of the LessWrong community the way that Yvain's 2013 survey results were.

  • A significant fraction of the people who contacted us via LessWrong aren't active LessWrong participants, and many don't even have user accounts on LessWrong. The prototypical advisees we got through LessWrong don't have many distinctive LessWrongian beliefs. Many of them use LessWrong primarily as a source of interesting stuff to read, rather than a community to be part of.
  • About 25% of the advisees we got through LessWrong were female, and a slightly higher proportion of the advisees with whom we had substantive interaction (and subjectively feel we helped a lot) were female. You can see this by looking at the sex distribution of the public reviews of us from students.
  • Our advisees included people in high school (typically, grades 11 and 12) and college. Our advisees in high school tended to be interested in mathematics, computer science, physics, engineering, and entrepreneurship. We did have a few who were interested in economics, philosophy, and the social sciences as well, but this was rarer. Our advisees in college and graduate school were also interested in the above subjects but skewed a bit more in the direction of being interested in philosophy, psychology, and economics.
  • Somewhat surprisingly and endearingly, many of our advisees were interested in effective altruism and social impact. Some had already heard of the cluster of effective altruist ideas. Others were interested in generating social impact through entrepreneurship or choosing an impactful career, even though they weren't familiar with effective altruism until we pointed them to it. Of those who had heard of effective altruism as a cluster of ideas, some had either already consulted with or were planning to consult with 80,000 Hours, and were connecting with us largely to get a second opinion or to get opinion on matters other than career choice.
  • Some of our advisees had had some sort of past involvement with MIRI/CFAR/FHI. Some were seriously considering working in existential risk reduction or on artificial intelligence. The two subsets overlapped considerably.
  • Our advisees were somewhat better educated about rationality issues than we'd expect others of similar academic accomplishment to be, and more than the advisees we got from sources other than LessWrong. That's obviously not a surprise at all.
  • We hadn't been expecting it, but many advisees asked us questions related to procrastination, social skills, and other life skills. We were initially somewhat ill-equipped to handle these, but we've built a base of recommendations, with some help from LessWrong and other sources.
  • One thing that surprised me personally is that many of these people had never spent time exploring Quora. I'd have expected Quora to be much more widely known and used by the sort of people who were sufficiently aware of the Internet to know LessWrong. But it's possible there's not that much overlap.

My overall takeaway is that LessWrong seems to still be one of the foremost places that smart and curious young people interested in epistemic rationality visit. I'm not sure of the exact reason, though HPMOR probably gets a significant fraction of the credit. As long as things stay this way, LessWrong remains a great way to influence a subset of the young population today that's likely to be disproportionately represented among the decision-makers a few years down the line.

It's not clear to me why they don't participate more actively on LessWrong. Maybe no special reasons are needed: the ratio of lurkers to posters is huge for most Internet fora. Maybe the people who contacted us were relatively young and still didn't have an Internet presence, or were being careful about building one. On the other hand, maybe there is something about the comments culture that dissuades people from participating (this need not be a bad feature per se: one reason people may refrain from participating is that comments are held to a high bar and this keeps people from offering off-the-cuff comments). That said, if people could somehow participate more, LessWrong could transform itself into an interactive forum for smart and curious people that's head and shoulders above all the others.

PS: We've now made our information wiki publicly accessible. It's still in beta and a lot of content is incomplete and there are links to as-yet-uncreated pages all over the place. But we think it might still be interesting to the LessWrong audience.

How my math skills improved dramatically

20 JonahSinick 05 March 2014 08:27PM

When I was a freshman in high school, I was a mediocre math student: I earned a D in second semester geometry and had to repeat the course. By the time I was a senior in high school, I was one of the strongest few math students in my class of ~600 students at an academic magnet high school. I went on to earn a PhD in math. Most people wouldn't have guessed that I could have improved so much, and the shift that occurred was very surreal to me. It’s all the more striking in that the bulk of the shift occurred in a single year. I thought I’d share what strategies facilitated the change.

continue reading »

L-zombies! (L-zombies?)

19 Benja 07 February 2014 06:30PM

Reply to: Benja2010's Self-modification is the correct justification for updateless decision theory; Wei Dai's Late great filter is not bad news

"P-zombie" is short for "philosophical zombie", but here I'm going to re-interpret it as standing for "physical philosophical zombie", and contrast it to what I call an "l-zombie", for "logical philosophical zombie".

A p-zombie is an ordinary human body with an ordinary human brain that does all the usual things that human brains do, such as the things that cause us to move our mouths and say "I think, therefore I am", but that isn't conscious. (The usual consensus on LW is that p-zombies can't exist, but some philosophers disagree.) The notion of p-zombie accepts that human behavior is produced by physical, computable processes, but imagines that these physical processes don't produce conscious experience without some additional epiphenomenal factor.

An l-zombie is a human being that could have existed, but doesn't: a Turing machine which, if anybody ever ran it, would compute that human's thought processes (and its interactions with a simulated environment); that would, if anybody ever ran it, compute the human saying "I think, therefore I am"; but that never gets run, and therefore isn't conscious. (If it's conscious anyway, it's not an l-zombie by this definition.) The notion of l-zombie accepts that human behavior is produced by computable processes, but supposes that these computational processes don't produce conscious experience without being physically instantiated.

Actually, there probably aren't any l-zombies: The way the evidence is pointing, it seems like we probably live in a spatially infinite universe where every physically possible human brain is instantiated somewhere, although some are instantiated less frequently than others; and if that's not true, there are the "bubble universes" arising from cosmological inflation, the branches of many-worlds quantum mechanics, and Tegmark's "level IV" multiverse of all mathematical structures, all suggesting again that all possible human brains are in fact instantiated. But (a) I don't think that even with all that evidence, we can be overwhelmingly certain that all brains are instantiated; and, more importantly actually, (b) I think that thinking about l-zombies can yield some useful insights into how to think about worlds where all humans exist, but some of them have more measure ("magical reality fluid") than others.

So I ask: Suppose that we do indeed live in a world with l-zombies, where only some of all mathematically possible humans exist physically, and only those that do have conscious experiences. How should someone living in such a world reason about their experiences, and how should they make decisions — keeping in mind that if they were an l-zombie, they would still say "I have conscious experiences, so clearly I can't be an l-zombie"?

If we can't update on our experiences to conclude that someone having these experiences must exist in the physical world, then we must of course conclude that we are almost certainly l-zombies: After all, if the physical universe isn't combinatorially large, the vast majority of mathematically possible conscious human experiences are not instantiated. You might argue that the universe you live in seems to run on relatively simple physical rules, so it should have high prior probability; but we haven't really figured out the exact rules of our universe, and although what we understand seems compatible with the hypothesis that there are simple underlying rules, that's not really proof that there are such underlying rules, if "the real universe has simple rules, but we are l-zombies living in some random simulation with a hodgepodge of rules (that isn't actually run)" has the same prior probability; and worse, if you don't have all we do know about these rules loaded into your brain right now, you can't really verify that they make sense, since there is some mathematically possible simulation whose initial state has you remember seeing evidence that such simple rules exist, even if they don't; and much worse still, even if there are such simple rules, what evidence do you have that if these rules were actually executed, they would produce you? Only the fact that you, like, exist, but we're asking what happens if we don't let you update on that.

I find myself quite unwilling to accept this conclusion that I shouldn't update, in the world we're talking about. I mean, I actually have conscious experiences. I, like, feel them and stuff! Yes, true, my slightly altered alter ego would reason the same way, and it would be wrong; but I'm right...

...and that actually seems to offer a way out of the conundrum: Suppose that I decide to update on my experience. Then so will my alter ego, the l-zombie. This leads to a lot of l-zombies concluding "I think, therefore I am", and being wrong, and a lot of actual people concluding "I think, therefore I am", and being right. All the thoughts that are actually consciously experienced are, in fact, correct. This doesn't seem like such a terrible outcome. Therefore, I'm willing to provisionally endorse the reasoning "I think, therefore I am", and to endorse updating on the fact that I have conscious experiences to draw inferences about physical reality — taking into account the simulation argument, of course, and conditioning on living in a small universe, which is all I'm discussing in this post.

NB. There's still something quite uncomfortable about the idea that all of my behavior, including the fact that I say "I think therefore I am", is explained by the mathematical process, but actually being conscious requires some extra magical reality fluid. So I still feel confused, and using the word l-zombie in analogy to p-zombie is a way of highlighting that. But this line of reasoning still feels like progress. FWIW.

But if that's how we justify believing that we physically exist, that has some implications for how we should decide what to do. The argument is that nothing very bad happens if the l-zombies wrongly conclude that they actually exist. Mostly, that also seems to be true if they act on that belief: mostly, what l-zombies do doesn't seem to influence what happens in the real world, so if only things that actually happen are morally important, it doesn't seem to matter what the l-zombies decide to do. But there are exceptions.

Consider the counterfactual mugging: Accurate and trustworthy Omega appears to you and explains that it just has thrown a very biased coin that had only a 1/1000 chance of landing heads. As it turns out, this coin has in fact landed heads, and now Omega is offering you a choice: It can either (A) create a Friendly AI or (B) destroy humanity. Which would you like? There is a catch, though: Before it threw the coin, Omega made a prediction about what you would do if the coin fell heads (and it was able to make a confident prediction about what you would choose). If the coin had fallen tails, it would have created an FAI if it has predicted that you'd choose (B), and it would have destroyed humanity if it has predicted that you would choose (A). (If it hadn't been able to make a confident prediction about what you would choose, it would just have destroyed humanity outright.)

There is a clear argument that, if you expect to find yourself in a situation like this in the future, you would want to self-modify into somebody who would choose (B), since this gives humanity a much larger chance of survival. Thus, a decision theory stable under self-modification would answer (B). But if you update on the fact that you consciously experience Omega telling you that the coin landed heads, (A) would seem to be the better choice!
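The ex ante argument for (B) can be made concrete with a quick expected-value sketch. This is my own illustration, not part of the original post, and it assumes (for simplicity) that all we care about is the probability that humanity ends up with a Friendly AI rather than destroyed:

```python
# Expected outcome of each precommitted policy in the counterfactual mugging.
# Simplifying assumption (mine): the only thing we score is the probability
# that humanity gets a Friendly AI (FAI) rather than being destroyed.

P_HEADS = 1 / 1000  # the biased coin's chance of landing heads

def p_fai(policy):
    """Probability of ending up with an FAI, given the policy you are
    predicted to follow in the heads branch.

    policy 'A': if heads, ask Omega for the FAI.
    policy 'B': if heads, ask Omega to destroy humanity.
    Omega's rule for tails: predicted 'B' -> creates FAI,
                            predicted 'A' -> destroys humanity.
    """
    if policy == "A":
        return P_HEADS * 1 + (1 - P_HEADS) * 0  # FAI only if heads
    else:  # policy == "B"
        return P_HEADS * 0 + (1 - P_HEADS) * 1  # FAI only if tails

print(p_fai("A"))  # 0.001
print(p_fai("B"))  # 0.999
```

Viewed before the coin flip, committing to (B) gives humanity a 999/1000 chance of survival versus 1/1000 for (A), which is why a self-modification-stable decision theory answers (B).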

One way of looking at this is that if the coin falls tails, the l-zombie that is told the coin landed heads still exists mathematically, and this l-zombie now has the power to influence what happens in the real world. If the argument for updating was that nothing bad happens even though the l-zombies get it wrong, well, that argument breaks here. The mathematical process that is your mind doesn't have any evidence about whether the coin landed heads or tails, because as a mathematical object it exists in both possible worlds, and it has to make a decision in both worlds, and that decision affects humanity's future in both worlds.

Back in 2010, I wrote a post arguing that yes, you would want to self-modify into something that would choose (B), but that that was the only reason why you'd want to choose (B). Here's a variation on the above scenario that illustrates the point I was trying to make back then: Suppose that Omega tells you that it actually threw its coin a million years ago, and if it had fallen tails, it would have turned Alpha Centauri purple. Now throughout your history, the argument goes, you would never have had any motive to self-modify into something that chooses (B) in this particular scenario, because you've always known that Alpha Centauri isn't, in fact, purple.

But this argument assumes that you know you're not an l-zombie; if the coin had in fact fallen tails, you wouldn't exist as a conscious being, but you'd still exist as a mathematical decision-making process, and that process would be able to influence the real world, so you-the-decision-process can't reason that "I think, therefore I am, therefore the coin must have fallen heads, therefore I should choose (A)." Partly because of this, I now accept choosing (B) as the (most likely to be) correct choice even in that case. (The rest of my change in opinion has to do with all ways of making my earlier intuition formal getting into trouble in decision problems where you can influence whether you're brought into existence, but that's a topic for another post.)

However, should you feel cheerful while you're announcing your choice of (B), since with high (prior) probability, you've just saved humanity? That would lead to an actual conscious being feeling cheerful if the coin has landed heads and humanity is going to be destroyed, and an l-zombie computing, but not actually experiencing, cheerfulness if the coin has landed heads and humanity is going to be saved. Nothing good comes out of feeling cheerful, not even alignment of a conscious being's map with the physical territory. So I think the correct thing is to choose (B), and to be deeply sad about it.

You may be asking why I should care what the right probabilities to assign or the right feelings to have are, since these don't seem to play any role in making decisions; sometimes you make your decisions as if updating on your conscious experience, but sometimes you don't, and you always get the right answer if you don't update in the first place. Indeed, I expect that the "correct" design for an AI is to fundamentally use (more precisely: approximate) updateless decision theory (though I also expect that probabilities updated on the AI's sensory input will be useful for many intermediate computations), and "I compute, therefore I am"-style reasoning will play no fundamental role in the AI. And I think the same is true for humans' decisions — the correct way to act is given by updateless reasoning. But as a human, I find myself unsatisfied by not being able to have a picture of what the physical world probably looks like. I may not need one to figure out how I should act; I still want one, not for instrumental reasons, but because I want one. In a small universe where most mathematically possible humans are l-zombies, the argument in this post seems to give me a justification to say "I think, therefore I am, therefore probably I either live in a simulation or what I've learned about the laws of physics describes how the real world works (even though there are many l-zombies who are thinking similar thoughts but are wrong about them)."

And because of this, even though I disagree with my 2010 post, I also still disagree with Wei Dai's 2010 post arguing that a late Great Filter is good news, which my own 2010 post was trying to argue against. Wei argued that if Omega gave you a choice between (A) destroying the world now and (B) having Omega destroy the world a million years ago (so that you are never instantiated as a conscious being, though your choice as an l-zombie still influences the real world), then you would choose (A), to give humanity at least the time it's had so far. Wei concluded that this means that if you learned that the Great Filter is in our future, rather than our past, that must be good news, since if you could choose where to place the filter, you should place it in the future. I now agree with Wei that (A) is the right choice, but I don't think that you should be happy about it. And similarly, I don't think you should be happy about news that tells you that the Great Filter is later than you might have expected.

True numbers and fake numbers

19 cousin_it 06 February 2014 12:29PM

In physical science the first essential step in the direction of learning any subject is to find principles of numerical reckoning and practicable methods for measuring some quality connected with it. I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the state of Science, whatever the matter may be.

-- Lord Kelvin

If you believe that science is about describing things mathematically, you can fall into a strange sort of trap where you come up with some numerical quantity, discover interesting facts about it, use it to analyze real-world situations - but never actually get around to measuring it. I call such things "theoretical quantities" or "fake numbers", as opposed to "measurable quantities" or "true numbers".

An example of a "true number" is mass. We can measure the mass of a person or a car, and we use these values in engineering all the time. An example of a "fake number" is utility. I've never seen a concrete utility value used anywhere, though I always hear about nice mathematical laws that it must obey.

The difference is not just about units of measurement. In economics you can see fake numbers happily coexisting with true numbers using the same units. Price is a true number measured in dollars, and you see concrete values and graphs everywhere. "Consumer surplus" is also measured in dollars, but good luck calculating the consumer surplus of a single cheeseburger, never mind drawing a graph of aggregate consumer surplus for the US! If you ask five economists to calculate it, you'll get five different indirect estimates, and it's not obvious that there's a true number to be measured in the first place.

Another example of a fake number is "complexity" or "maintainability" in software engineering. Sure, people have proposed different methods of measuring it. But if they were measuring a true number, I'd expect them to agree to the 3rd decimal place, which they don't :-) The existence of multiple measuring methods that give the same result is one of the differences between a true number and a fake one. Another sign is what happens when two of these methods disagree: do people say that they're both equally valid, or do they insist that one must be wrong and try to find the error?
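To make the disagreement concrete, here is a toy illustration (mine, not the author's; neither metric is a standard tool) showing how two plausible-sounding "complexity" measures can rank the same two snippets in opposite orders:

```python
# Two crude, hypothetical "complexity" metrics applied to the same code.
# The point is only that reasonable-sounding measures of a fake number
# need not agree, even on which of two snippets is more complex.

snippet_a = """x = 1
y = 2
z = 3
w = x + y + z
print(w)"""

snippet_b = """if a and b or c:
    print(1 if d else 2)"""

def metric_lines(code):
    """Complexity as raw line count."""
    return len(code.splitlines())

def metric_branches(code):
    """Complexity as a naive count of branching/boolean keywords."""
    return sum(code.count(kw) for kw in ("if ", "and ", "or ", "else"))

# metric_lines says snippet_a is more complex (5 lines vs 2);
# metric_branches says snippet_b is more complex (several branches vs none).
print(metric_lines(snippet_a), metric_lines(snippet_b))
print(metric_branches(snippet_a), metric_branches(snippet_b))
```

With a true number like mass, two independent measuring methods disagreeing like this would be treated as an error to hunt down; with "complexity", both metrics are shrugged at as equally valid.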

It's certainly possible to improve something without measuring it. You can learn to play the piano pretty well without quantifying your progress. But we should probably try harder to find measurable components of "intelligence", "rationality", "productivity" and other such things, because we'd be better at improving them if we had true numbers in our hands.

Explanations for Less Wrong articles that you didn't understand

18 Kaj_Sotala 31 March 2014 11:19AM

ErinFlight said:

I'm struggling to understand anything technical on this website. I've enjoyed reading the sequences, and they have given me a lot to think about. Still, I've read the introduction to Bayes theorem multiple times, and I simply can't grasp it. Even starting at the very beginning of the sequences I quickly get lost because there are references to programming and cognitive science which I simply do not understand.

Thinking about it, I realized that this might be a common concern. There are probably plenty of people who've looked at various more-or-less technical or jargony Less Wrong posts, tried understanding them, and then given up (without posting a comment explaining their confusion).

So I figured that it might be good to have a thread where you can ask for explanations for any Less Wrong post that you didn't understand and would like to, but don't want to directly comment on for any reason (e.g. because you're feeling embarrassed, because the post is too old to attract much traffic, etc.). In the spirit of various Stupid Questions threads, you're explicitly encouraged to ask even for the kinds of explanations that you feel you "should" be able to get yourself, or where you feel like you could get it if you just put in the effort (but then never did).

You can ask to have some specific confusing term or analogy explained, or to get the main content of a post briefly summarized in plain English and without jargon, or anything else. (Of course, there are some posts that simply cannot be explained in non-technical terms, such as the ones in the Quantum Mechanics sequence.) And of course, you're encouraged to provide explanations to others!

The Rationality Wars

18 Stefan_Schubert 27 February 2014 05:08PM

Ever since Tversky and Kahneman started to gather evidence purporting to show that humans suffer from a large number of cognitive biases, other psychologists and philosophers have criticized these findings. For instance, philosopher L. J. Cohen argued in the 80's that there was something conceptually incoherent with the notion that most adults are irrational (with respect to a certain problem). By some sort of Wittgensteinian logic, he thought that the majority's way of reasoning is by definition right. (Not a high point in the history of analytic philosophy, in my view.) See chapter 8 of this book (where Gigerenzer, below, is also discussed).

Another attempt to resurrect human rationality is due to Gerd Gigerenzer and other psychologists. They have a) shown that if you tweak some of the heuristics and biases (i.e. the research program led by Tversky and Kahneman) experiments but a little - for instance by expressing probabilities in terms of frequencies - people make far fewer mistakes and b) argued, on the back of this, that the heuristics we use are in many situations good (and fast and frugal) rules of thumb (which explains why they are evolutionarily adaptive). Regarding this, I don't think that Tversky and Kahneman ever doubted that the heuristics we use are quite useful in many situations. Their point was rather that there are lots of naturally occurring set-ups which fool our fast and frugal heuristics. Gigerenzer's findings are not completely uninteresting - it seems to me he does nuance the thesis of massive irrationality a bit - but his claims to the effect that these heuristics are rational in a strong sense are wildly overblown in my opinion. The Gigerenzer vs. Tversky/Kahneman debates are well discussed in this article (although I think they're too kind to Gigerenzer).

A strong argument against attempts to save human rationality is the argument from individual differences, championed by Keith Stanovich. He argues that the fact that some intelligent subjects consistently avoid falling prey to the Wason selection task, the conjunction fallacy, and other fallacies indicates that the answers psychologists have traditionally regarded as normatively correct really are correct - contrary to the claim that those norms are themselves misguided.

Hence I side with Tversky and Kahneman in this debate. Let me just mention one interesting and possibly successful method for disputing some supposed biases. This method is to argue that people have other kinds of evidence than the standard interpretation assumes, and that given this new interpretation of the evidence, the supposed bias in question is in fact not a bias. For instance, it has been suggested that the "false consensus effect" can be re-interpreted in this way:

The False Consensus Effect

Bias description: People tend to imagine that everyone responds the way they do. They tend to see their own behavior as typical. The tendency to exaggerate how common one’s opinions and behavior are is called the false consensus effect. For example, in one study, subjects were asked to walk around on campus for 30 minutes, wearing a sign board that said "Repent!". Those who agreed to wear the sign estimated that on average 63.5% of their fellow students would also agree, while those who disagreed estimated 23.3% on average.

Counterclaim (Dawes & Mulford, 1996): The correctness of reasoning is not estimated on the basis of whether or not one arrives at the correct result. Instead, we look at whether people reach reasonable conclusions given the data they have. Suppose we ask people to estimate whether an urn contains more blue balls or red balls, after allowing them to draw one ball. If one person first draws a red ball, and another person draws a blue ball, then we should expect them to give different estimates. In the absence of other data, you should treat your own preferences as evidence for the preferences of others. Although the actual mean for people willing to carry a sign saying "Repent!" probably lies somewhere in between the estimates given, these estimates are quite close to the one-third and two-thirds estimates that would arise from a Bayesian analysis with a uniform prior distribution of belief. A study by the authors suggested that people do actually give their own opinion roughly the right amount of weight.

(The quote is from an excellent Less Wrong article on this topic due to Kaj Sotala. See also this post by him, this by Andy McKenzie, this by Stuart Armstrong and this by lukeprog on this topic. I'm sure there are more that I've missed.)
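The one-third and two-thirds figures in the counterclaim fall out of a standard Bayesian update with a uniform prior (Laplace's rule of succession). A minimal sketch, assuming each person treats their own choice as a single Bernoulli observation:

```python
from fractions import Fraction

def posterior_mean(successes, trials, alpha=1, beta=1):
    """Posterior mean of a Bernoulli rate under a Beta(alpha, beta) prior.

    Beta(1, 1) is the uniform prior; the posterior mean is
    (alpha + successes) / (alpha + beta + trials),
    i.e. Laplace's rule of succession when alpha = beta = 1.
    """
    return Fraction(alpha + successes, alpha + beta + trials)

# Each subject's only data point is their own choice.
# An "agree" subject has observed 1 agreement in 1 trial;
# a "disagree" subject has observed 0 agreements in 1 trial.
print(posterior_mean(1, 1))  # 2/3
print(posterior_mean(0, 1))  # 1/3
```

So estimates of roughly 63.5% from those who agreed and 23.3% from those who refused are close to what a well-calibrated Bayesian with no other data would report, which is the heart of the counterclaim.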

It strikes me that the notion that people are "massively flawed" is something of an intellectual cornerstone of the Less Wrong community (e.g. note the names "Less Wrong" and "Overcoming Bias"). In the light of this it would be interesting to hear what people have to say about the rationality wars. Do you all agree that people are massively flawed?

Let me make two final notes to keep in mind when discussing these issues. Firstly, even though the heuristics and biases program is sometimes seen as pessimistic, one could turn the tables around: if they're right, we should be able to improve massively (even though Kahneman himself seems to think that that's hard to do in practice). I take it that CFAR and lots of LessWrongers who attempt to "refine their rationality" assume that this is the case. On the other hand, if Gigerenzer or Cohen are right, and we already are very rational, then it would seem that it is hard to do much better. So in a sense the latter are more pessimistic (and conservative) than the former.

Secondly, note that parts of the rationality wars seem to be merely verbal and revolve around how "rationality" is to be defined (tabooing this word is very often a good idea). The real question is not if the fast and frugal heuristics are in some sense rational, but whether there are other mental algorithms which are more reliable and effective, and whether it is plausible to assume that we could learn to use them on a large scale instead.

Human capital or signaling? No, it's about doing the Right Thing and acquiring karma

17 VipulNaik 20 April 2014 09:04PM

There's a huge debate among economists of education on whether the positive relationship between educational attainment and income is due to human capital, signaling, or ability bias. But what do the students themselves believe? Bryan Caplan has argued that students' actions (for instance, their not sitting in for free on classes and their rejoicing at class cancellation) suggest a belief in the signaling model of education. At the same time, he notes that students may not fully believe the signaling model, and that shifting in the direction of that belief might improve individual educational attainment.

Still, something seems wrong about the view that most people believe in the signaling model of education. While their actions are consistent with that view, I don't think they frame it quite that way. I don't think they usually think of it as "education is useless, but I'll go through it anyway because that allows me to signal to potential employers that I have the necessary intelligence and personality traits to succeed on the job." Instead, I believe that people's model of school education is linked to the idea of karma: they do what the System wants them to do, because that's their duty and the Right Thing to do. Many of them also expect that if they do the Right Thing, and fulfill their duties well, then the System shall reward them with financial security and a rewarding life. Others may take a more fatalistic stance, saying that it's not up to them to judge what the System has in store for them, but they still need to do the Right Thing.

The case of the devout Christian

Consider a reasonably devout Christian who goes to church regularly. For such a person, going to church, and living a life in accordance with (his understanding of) Christian ethics is part of what he's supposed to do. God will take care of him as long as he does his job well. In the long run, God will reward good behavior and doing the Right Thing, but it's not for him to question God's actions.

Such a person might look bemused if you asked him, "Are you a practicing Christian because you believe in the prudential value of Christian teachings (the "human capital" theory) or because you want to give God the impression that you are worthy of being rewarded (the "signaling" theory)?" Why? Partly because the person attributes omniscience, omnipotence, and omnibenevolence to God, so that the very idea of having a conceptual distinction between what's right and how to impress God seems wrong. Yes, he does expect that God will take care of him and reward him for his goodness (the "signaling" theory). Yes, he also believes that the Christian teachings are prudent (the "human capital" theory). But to him, these are not separate theories but just parts of the general belief in doing right and letting God take care of the rest.

Surely not all Christians are like this. Some might be extreme signalers: they may be deliberately trying to optimize for (what they believe to be) God's favor and maximizing the probability of making the cut to Heaven. Others might believe truly in the prudence of God's teachings and think that any rewards that flow are because the advice makes sense at the worldly level (in terms of the non-divine consequences of actions) rather than because God is impressed by the signals they're sending him through those actions. There are also a number of devout Christians I personally know who, regardless of their views on the matter, would be happy to entertain, examine, and discuss such hypotheses without feeling bemused. Still, I suspect the majority of Christians don't separate the issue, and many might even be offended at second-guessing God.

Note: I chose Christianity and a male subject just for ease of description; similar ideas apply to other religions and to women. Also note that in theory, some religious sects emphasize free will and others emphasize determinism, but it's not clear to me how much effect this has on people's mental models on the ground.

The schoolhouse as church: why human capital and signaling sound ridiculous

Just as many people believe in following God's path and letting Him take care of the rewards, many people believe that by doing the Right Thing educationally (being a Good Student and jumping through the appropriate hoops through correctly applied sincere effort) they're doing their bit for the System. These people might be bemused at the cynicism involved in separating out "human capital" and "signaling" theories of education.

Again, not everybody is like this. Some people are extreme signalers: they openly claim that school builds no useful skills, but grades are necessary to impress future employers, mates, and society at large. Some are human capital extremists: they openly claim that the main purpose is to acquire a strong foundation of knowledge, and they continue to do so even when the incentive from the perspective of grades is low. Some are consumption extremists: they believe in learning because it's fun and intellectually stimulating. And some strategically combine these approaches. Yet, none of these categories describes most people.

I've had students who worked considerably harder on courses than the bare minimum effort needed to get an A. This is despite the fact that they aren't deeply interested in the subject, don't believe it will be useful in later life, and aren't likely to remember it for too long anyway. I think that the karma explanation fits best: people develop an image of themselves as Good Students who do their duty and fulfill their role in the system. They strive hard to fulfill that image, often going somewhat overboard beyond the bare minimum needed for signaling purposes, while still not trying to learn in ways that optimize for human capital acquisition. There are of course many other people who claim to aspire to the label of Good Student because it's the Right Thing, and consider it a failing of virtue that they don't currently qualify as Good Students. Of course, that's what they say, and social desirability bias might play a role in individuals' statements, but the very fact that people consider such views socially desirable indicates the strong societal belief in being a Good Student and doing one's academic duty.

If you presented the signaling hypothesis to self-identified Good Students they'd probably be insulted. It's like telling a devout Christian that he's in it only to curry favor with God. At the same time, the human capital hypothesis might also seem ridiculous to them in light of their actual actions and experiences: they know they don't remember or understand the material too well. Thinking of it as doing their bit for the System because it's the Right Thing to do seems both noble and realistic.

The impressive success of this approach

At the individual level, this works! Regardless of the relative roles of human capital, signaling, and ability bias, people who go through higher levels of education and get better grades tend to earn more and hold higher-status jobs than others. People who transform themselves from being bad students to good students often see rewards both academically and in later life in the form of better jobs. This could again be human capital, signaling, or ability bias. The ability bias explanation is plausible because it requires a lot of ability to turn from a bad student into a good student, about the same as it does to be a good student from the get-go, or perhaps even more, because transforming oneself is a difficult task.

Can one do better?

Doing what the System commands can be reasonably satisfying, and even rewarding. But for many people, and particularly for the people who do the most impressive things, it's not necessarily the optimal path. This is because the System isn't designed to maximize every individual's success or life satisfaction, or even to optimize things for society as a whole. It's based on a series of adjustments driven by squabbling between competing interests. It could be a lot worse, but a motivated person could do better.

Also note that being a Good Student is fundamentally different from being a Good Worker. A worker, whether directly serving customers or reporting to a boss, is producing stuff that other people value. So, at least in principle, being a better worker translates to more gains for the customers. This means that a Good Worker is contributing to the System in a literal sense, and by doing a better job, directly adds more value. But this sort of reasoning doesn't apply to Good Students, because the actions of students qua students aren't producing direct value. Their value is largely their consumption value to the students themselves and their instrumental value to the students' current and later life choices.

Many of the qualities that define a Good Student are qualities that are desirable in other contexts as well. In particular, good study habits are valuable not just in school but in any form of research that relies on intellectual comprehension and synthesis (this may be an example of the human capital gains from education, except that I don't think most students acquire good study habits). So, one thing to learn from the Good Student model is good study habits. General traits of conscientiousness, hard work, and willingness to work beyond the bare minimum needed for signaling purposes are also valuable to learn and practice.

But the Good Student model breaks down when it comes to acquiring perspective about how to prioritize between different subjects, and how to actually learn and do things of direct value. A common example is perfectionism. The Good Student may spend hours practicing calculus to get a perfect score on the test, far beyond what's necessary to get an A in the class or a 5 on the AP Calculus BC exam, and yet not acquire a conceptual understanding of calculus or learn calculus in a way that would stick. Such a student has acquired a lot of karma, but has failed from both the human capital perspective (in not acquiring durable human capital) and the signaling perspective (in spending more effort than is needed for the signal). In an ideal world, material would be taught in a way that one can score highly on tests if and only if it serves useful human capital or signaling functions, but this is often not the case.

Thus, I believe it makes sense to critically examine the activities one is pursuing as a student, and ask: "does this serve a useful purpose for me?" The purpose could be human capital, signaling, pure consumption, or something else (such as networking). Consider the following four extreme answers a student may give to why a particular high school or college course matters:

  • Pure signaling: A follow-up might be: "how much effort would I need to put in to get a good return on investment as far as the signaling benefits go?" And then one has to stop at that level, rather than overshoot or undershoot.
  • Pure human capital: A follow-up might be: "how do I learn to maximize the long-term human capital acquired and retained?" In this world, test performance matters only as feedback rather than as the ultimate goal of one's actions. Rather than trying to practice for hours on end to get a perfect score on a test, more effort will go into learning in ways that increase the probability of long-term retention in ways that are likely to prove useful later on. (As mentioned above, in an ideal world, these goals would converge).
  • Pure consumption: A follow-up might be: "how much effort should I put in in order to get the maximum enjoyment and stimulation (or other forms of consumptive experience), without feeling stressed or burdened by the material?"
  • Pure networking: A follow-up might be: "how do I optimize my course experience to maximize the extent to which I'm able to network with fellow students and instructors?"

One might also believe that some combination of these explanations applies. For instance, a mixed human capital-cum-signaling explanation might recommend that one study all topics well enough to get an A, and then concentrate on acquiring a durable understanding of the few subtopics that one believes are needed for long-term knowledge and skills. For instance, a mastery of fractions matters a lot more than a mastery of quadratic equations, so a student preparing for a middle school or high school algebra course might choose to learn both at a basic level but get a really deep understanding of fractions. Similarly, in calculus, having a clear idea of what a function and derivative means matters a lot more than knowing how to differentiate trigonometric functions, so a student may superficially understand all aspects (to get the signaling benefits of a good grade) but dig deep into the concept of functions and the conceptual definition of derivatives (to acquire useful human capital). By thinking clearly about this, one may realize that perfecting one's ability to differentiate complicated trigonometric function expressions or integrate complicated rational functions may not be valuable from either a human capital perspective or a signaling perspective.

Ultimately, the changes wrought by consciously thinking about these issues are not too dramatic. Even though the System is suboptimal, it's locally optimal in small ways and one is constrained in one's actions in any case. But the changes can nevertheless add up to lead one to be more strategic and less stressed, do better on all fronts (human capital, signaling, and consumption), and discover opportunities one might otherwise have missed.


17 peter_hurford 28 March 2014 09:27PM

by Patrick Brinich-Langlois and Ozzie Gooen


Communities once kept our ancestors from being torn apart by mountain lions and tyrannosaurus rexes. Dinosaur violence has declined greatly since the Cretaceous, but the world has become more complex and interconnected. Communities remain essential.

Effective altruists have a lot to offer one another. But we're geographically dispersed, so it's hard to know whom to ask for help. is built to fix this. is a place for effective altruists to share their skills, items, and couches with one another.


Offer skills or things that you're willing to share. Request items that other people have offered. Here are a few things people have offered on the site:

  • access to academic papers
  • advice on fundraising, careers, nutrition, productivity, startups, investments, etc.
  • French translation (two people!)
  • math tutoring
  • lodging in Switzerland, the Bay Area, London, Melbourne, and Oxford


As of this writing, we already have 59 offers from 55 people. With your help, we can make it 60 offers from 56 people!

Why use, instead of getting the things you need the normal way? Certain things, like career advice or study buddies, can be hard to get. Even if you can find someone who has what you're looking for, you might enjoy the opportunity to build relationships with other altruists. Plus, by participating in, you show that the community of do-gooders is welcoming and supportive, qualities that may draw in new people.

You can be notified of new offers and requests by Twitter or RSS. As with all .impact software, the source code is available on GitHub. We use a publicly accessible Trello board to track bugs and features.


We'd love to hear what you think about the site. Is it awesome, or a horrifically inefficient use of our resources? What could be improved? Send us an e-mail or leave a comment.

Channel factors

17 benkuhn 12 March 2014 04:52AM

Or, “how not to make a fundamental attribution error on yourself;” or, “how to do that thing that you keep being frustrated at yourself for not doing;” or, “finding and solving trivial but leveraged inconveniences.”

continue reading »

How do you approach the problem of social discovery?

16 InquilineKea 21 April 2014 09:05PM

As in, how do you find the right people to talk to? Presumably, they would have personality fit with you, and be high in both intelligence and openness. Furthermore, they would be at a point in their lives where they are willing to spend time with you (although sometimes you can learn a lot from people simply by friending them on Facebook and observing their feeds from time to time).

Historically, I've made myself extremely stalkable on the Internet. In retrospect, I believe this "decision" is one of the very best decisions I've ever made in my life, and it has made me better at social discovery than most people I know, despite my social anxiety and Asperger's. In fact, if a more extroverted non-Aspie could do the same thing, I think they could do WONDERS with developing an online profile.

I've also realized more that social discovery is often more rewarding when done with teenagers. You can do so much to impact teenagers, and they often tend to be a lot more open to your ideas/musings (just as long as you're responsible).

But I've wondered - how else have you done it? Especially in real life? What are some other questions you ask with respect to social discovery? I tend to avoid real life for social discovery simply because it's extremely hit-and-miss, but I've discovered (from Richard Florida's books) that the Internet often strengthens real-life interaction because it makes it so much easier to discover other people in real life (and then it's in real life when you can really get to know people).

Community overview and resources for modern Less Wrong meetup organisers

16 BraydenM 04 April 2014 08:53PM

I've been travelling around the US for the past month since arriving from Australia, and have had the chance to see how a number of different Less Wrong communities operate. As a departing organiser for the Melbourne Less Wrong community, it has been interesting to make comparisons between the different Less Wrong groups all over the US, and I suspect sharing the lessons learned by different communities will benefit the global movement.

For aspiring organisers, or leaders looking at making further improvements to their community, there already exists an excellent meetup organisers handbook, list of meetups, and NYC case study. I'd also recommend one super useful ability: rapid experimentation. This is a relatively low cost way to find out exactly what format of events attracts the most people and are the most beneficial. Once you know how to win, spam it! This ability is sometimes even better than just asking people what they want out of the community, but you should probably do both.

I'll summarise a few types of meetup that I have seen here. Please feel free to help out by adding descriptions of other types of events you have seen, or variations on the ones already posted if you think there is something other communities could learn. 

Public Practical Rationality Meetups (Melbourne)

Held monthly on a Friday in Matthew Fallshaw's offices at TrikeApps. Advertised on Facebook, LessWrong, and the Melbourne LW Mailing List. About 25-40 attendees. Until January, these meetups were also advertised publicly, but since then the format has changed significantly. The audience was 50% Less Wrongers and 50% newcomers, so this served as our outreach event. 

6:30pm-7:30pm Doors open, usually most people arrive around 7:15pm

7:30pm sharp-9:00pm: Content introduced. Usually around 3 topics have been prepared by 3 separate Less Wrongers, for discussion in groups of about 10 people each. After 30 minutes the groups rotate, so the presenters present the same thing multiple times. Topics have included: effective communication, giving and receiving feedback, sequence summaries, cryonics, habit formation, etc.

9:00pm - Late: Unstructured socialising, with occasional 'rationality therapy' where a few friends get together to think about a particular issue in someone's life in detail. Midnight souvlaki runs are a tradition.

Monthly Social Games Meetup (Melbourne)

Held in a private residence on a Friday, close to central city public transport. Advertised on Facebook, LessWrong, and the Melbourne LW Mailing List. About 15-25 attendees. Snacks provided by the host.

6:30pm - Late: People show up whenever and there are lots of great conversations. Mafia, (science themed) Zendo, and a variety of board games are popular, but the majority of the night is usually spent talking about what people have learned or read recently. There are enough discussions happening that it is usually easy to find an interesting group to join. Delivery dinner is often ordered, and many people stay quite late.

Large public salons (from Rafael Cosman, Stanford University)

Held on campus in a venue provided by the university. Advertised on a custom mailing list, and presumably Facebook/word of mouth. The audience is mostly unfamiliar with Less Wrong material, and this event has not yet officially become associated with Less Wrong, but Rafael is in the process of getting a spin-off LW-specific meetup happening.

7pm-7:30pm: Guests trickle in. Light background music helps inform the first arrivals that they are indeed at the right place.

7:30pm-7:45pm: Introductions, covering 1. Who you are, 2. One thing that people should talk to you about (e.g. "You should talk to me about Conway's Game of Life"), and 3. One thing that people could come and do with you sometime (e.g. "Come and join me for yoga on Sunday mornings").

7:45pm-9:30pm: Short talks on a variety of topics. At the end of a presentation, instead of tossing it open for questions, everyone comes up to give the speaker a high-five, and then the group immediately enters unstructured discussion for 5-10 minutes. This allows people with pressing questions to go up and ask the speaker, but also allows everyone else to break out to mingle rather than being passive.

Still to come: New York, Austin, and the SF East and South Bay meetup formats.

Don't rely on the system to guarantee you life satisfaction

16 JonahSinick 18 February 2014 05:48AM

A brief essay intended for high school students: any thoughts?

If you go to school, take the classes that people tell you to, do your homework, and engage in the extracurricular activities that your peers do, you'll be setting yourself up for an "okay" life. But you can do better than that.

continue reading »

Rationality & Low-IQ People

16 kokotajlod 02 February 2014 03:11PM

This post is to raise a question about the demographics of rationality: Is rationality something that can appeal to low-IQ people as well?

I don't mean in theory, I mean in practice. From what I've seen, people who are concerned about rationality (in the sense that it has on LW, OvercomingBias, etc.) are overwhelmingly high-IQ.

Meanwhile, HPMOR and other stories in the "rationality genre" appeal to me, and to other people I know. However I wonder: Perhaps part of the reason they appeal to me is that I think of myself as a smart person, and this allows me to identify with the main characters, cheer when they think their way to victory, etc. If I thought of myself as a stupid person, then perhaps I would feel uncomfortable, insecure, and alienated while reading the same stories.

So, I have four questions:

1.) Do we have reason to believe that the kind of rationality promoted on LW, OvercomingBias, CFAR, etc. appeals to a fairly normal distribution of people around the IQ mean? Or should we think, as I suggested, that people with lower IQs are disposed to find the idea of being rational less attractive?

2.) Ditto, except replace "being rational" with "celebrating rationality through stories like HPMOR." Perhaps people think that rationality is a good thing in much the same way that being wealthy is a good thing, but they don't think that it should be celebrated, or at least they don't find such celebrations appealing.

3.) Supposing #1 and #2 have the answers I am suggesting, why? 

4.) Making the same supposition, what are the implications for the movement in general? 

Note: I chose to use IQ in this post instead of a more vague term like "intelligence," but I could easily have done the opposite. I'm happy to do whichever version is less problematic.

The Cold War divided Science

15 Douglas_Knight 05 April 2014 11:10PM

What can we learn about science from the divide during the Cold War?

I have one example in mind: America held that coal and oil were fossil fuels, the stored energy of the sun, while the Soviets held that they were the result of geologic forces applied to primordial methane.

At least one side is thoroughly wrong. This isn't a politically charged topic like sociology, or even biology, but a physical science where people are supposed to agree on the answers. This isn't a matter of research priorities, where one side doesn't care enough to figure things out, but a topic that both sides saw to be of great importance, and where they both claimed to apply their theories. On the other hand, Lysenkoism seems to have resulted from the practical importance of crop breeding.

First of all, this example supports the claim that there really was a divide, that science was disconnected into two poorly communicating camps. It suggests that when the two sides reached the same results on other topics, they did so independently. Even if we cannot learn from this example, it suggests that we may be able to learn from other consequences of dividing the scientific community.

My understanding is that although some Russian language research papers were available in America, they were completely ignored and the scientists failed to even acknowledge that there was a community with divergent opinions. I don't know about the other direction.

Some questions:

  • Are there other topics, ideally in physical science, on which such a substantial disagreement persisted for decades? not necessarily between these two parties?
  • Did the Soviet scientists know that their American counterparts disagreed?
  • Did Warsaw Pact (eg, Polish) scientists generally agree with the Soviets about the origin of coal and oil? Were they aware of the American position? Did other Western countries agree with America? How about other countries, such as China and Japan?
  • What are the current Russian beliefs about coal and oil? I tried running Russian Wikipedia through google translate and it seemed to support the biogenic theory. (right?) Has there been a reversal among Russian scientists? When? Or does Wikipedia represent foreign opinion? If a divide remains, does it follow the Iron Curtain, or some new line?
  • Have I missed some detail that would make me not classify this as an honest disagreement between two scientific establishments?
  • Finally, the original question: what can we learn about the institution of science?

Engineering archaeology

15 NancyLebovitz 20 March 2014 04:38PM

Here's an account by a retired engineer of what happened when his old company wanted to streamline a process in the factory where he used to work.

People only knew how to keep the factory going from one day to the next, but all the documentation was lost-- the factory had been sold a couple of times, and efforts at digitization caused more to get lost. Even the name of the factory had been lost.

Fortunately, engineers keep more documentation than their bosses allow them to. (Trade secrets!) And they don't throw the documentation away just because they've retired.

I've been concerned about infrastructure neglect for a while, and this makes me more concerned. On the other hand, instead of just viewing with alarm, I'd like to view with quantified alarm, and I don't have the foggiest idea how to quantify the risks.

Also, some of the information loss is a result of a search for efficiency. How can you tell when you're leaving something important out?


On Irrational Theory of Identity

15 SilentCal 19 March 2014 12:06AM

Meet Alice. Alice alieves that losing consciousness causes discontinuity of identity.


Alice has a good job. Every payday, she takes her salary and enjoys herself in a reasonable way for her means--maybe going to a restaurant, maybe seeing a movie, normal things. And in the evening, she sits down and does her best to calculate the optimal utilitarian distribution of her remaining paycheck, sending most to the charities she determines most worthy and reserving just enough to keep tomorrow-Alice and her successors fed, clothed and sheltered enough to earn effectively. On the following days, she makes fairly normal tradeoffs between things like hard work and break-taking, maybe a bit on the indulgent side.


Occasionally her friend Bob talks to her about her strange theory of identity. 


"Don't you ever wish you had left yourself more of your paycheck?" he once asked.

"I can't remember any of me ever thinking that," Alice replied. "I guess it'd be nice, but I might as well wish yesterday's Bill Gates had sent me his paycheck."


Another time, Bob posed the question, "Right now, you allocate yourself enough to survive with the (true) justification that that's a good investment of your funds. But what if that ever ceases to be true?"

Alice responded, "When me's have made their allocations, they haven't felt any particular fondness for their successors. I know that's hard to believe from your perspective, but it was years after past me's started this procedure that Hypothetical University published the retrospective optimal self-investment rates for effective altruism. It turned out that Alices' decisions had tracked the optimal rates remarkably well if you disregard as income the extra money the deciding Alices spent on themselves.

"So me's really do make this decision objectively. And I know it sounds chilling to you, but when Alice ceases to be a good investment, that future Alice won't make it. She won't feel it as a grand sacrifice, either. Last week's Alice didn't have to exert willpower when she cut the food budget based on new nutritional evidence."


"Look," Bob said on a third occasion, "your theory of identity makes no sense. You should either ignore identity entirely and become a complete maximizing utilitarian, or else realize the myriad reasons why uninterrupted consciousness is a silly measure of identity."

"I'm not a perfect altruist, and becoming one wouldn't be any easier for me than it would be for you," Alice replied. "And I know the arguments against the uninterrupted-consciousness theory of identity, and they're definitely correct. But I don't alieve a word of it."

"Have you actually tried to internalize them?"

"No. Why should I? The Alice sequence is more effectively altruistic this way. We donate significantly more than HU's published average for people of similar intelligence, conscientiousness, and other relevant traits."

"Hmm," said Bob. "I don't want to make allegations about your motives-"

"You don't have to," Alice interrupted. "The altruism thing is totally a rationalization. My actual motives are the usual bad ones. There's status quo bias, there's the desire not to admit I'm wrong, and there's the fact that I've come to identify with my theory of identity.

"I know the gains to the total Alice-utility would easily overwhelm the costs if I switched to normal identity-theory, but I don't alieve those gains will be mine, so they don't motivate me. If it would be better for the world overall, or even neutral for the world and better for properly-defined-Alice, I would at least try to change my mind. But it would be worse for the world, so why should I bother?"






If you wish to ponder Alice's position with relative objectivity before I link it to something less esoteric, please do so before continuing.








Bob thought a lot about this last conversation. For a long time, he had had no answer when his friend Carrie asked him why he didn't sign up for cryonics. He didn't buy any of the usual counterarguments--when he ran the numbers, even with the most conservative estimates he considered reasonable, a membership was a huge increase in Bob-utility. But the thought of a Bob waking up some time in the future to have another life just didn't motivate him. He believed that future-Bob would be him, that an uploaded Bob would be him, that any computation similar enough to his mind would be him. But evidently he didn't alieve it. And he knew that he was terribly afraid of having to explain to people that he had signed up for cryonics.
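The sort of back-of-the-envelope arithmetic Bob might have run is easy to sketch. Every number below is an assumption invented for illustration, not actual cryonics pricing or a real estimate of revival odds:

```python
# Toy expected-value check of the kind Bob might run.
# All figures are hypothetical assumptions for illustration only.
annual_cost = 600          # membership dues + life-insurance premium, $/year (assumed)
years_paying = 40          # years of paying in (assumed)
p_revival = 0.05           # "most conservative reasonable" chance it all works (assumed)
life_years_gained = 1000   # quality-adjusted life years if revival succeeds (assumed)
value_per_qaly = 50_000    # dollar value placed on one life-year (assumed)

total_cost = annual_cost * years_paying
expected_benefit = p_revival * life_years_gained * value_per_qaly

print(f"lifetime cost: ${total_cost:,}")
print(f"expected benefit: ${expected_benefit:,.0f}")
```

With these made-up inputs, the expected benefit exceeds the cost by two orders of magnitude, and even slashing the revival probability tenfold doesn't change the sign of the comparison; that robustness is what makes Bob's failure to act a matter of alief rather than arithmetic.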

So he had felt guilty for not paying the easily-affordable costs of immortality, knowing deep down that he was wrong, and that social anxiety was probably preventing him from changing his mind. But as he thought about Alice's answer, he thought about his financial habits and realized that a large percentage of the cryonics costs would ultimately come out of his lifetime charitable contributions. This would be a much greater loss to total utility than the gain from Bob's survival and resurrection.

He realized that, like Alice, he was acting suboptimally for his own utility but in such a way as to make the world better overall. Was he wrong for not making an effort to 'correct' himself?


Does Carrie have anything to say about this argument?

Meta: social influence bias and the karma system

15 Snorri 17 February 2014 01:07AM

Given LW’s keen interest in bias, it would seem pertinent to be aware of the biases engendered by the karma system. Note: I used to be strictly opposed to comment scoring mechanisms, but witnessing the general effectiveness in which LWers use karma has largely redeemed the system for me.

In “Social Influence Bias: A Randomized Experiment” by Muchnik et al., random comments on a “social news aggregation Web site” were up-voted immediately after being posted. The likelihood of such rigged comments receiving additional up-votes was quantified in comparison to a control group. The results show that users were significantly biased towards the randomly up-voted posts:

The up-vote treatment significantly increased the probability of up-voting by the first viewer by 32% over the control group ... Uptreated comments were not down-voted significantly more or less frequently than the control group, so users did not tend to correct the upward manipulation. In the absence of a correction, positive herding accumulated over time.

At the end of their five-month testing period, the comments that had artificially received an up-vote had an average rating 25% higher than the control group. Interestingly, the severity of the bias depended largely on the topic of discussion:

We found significant positive herding effects for comment ratings in “politics,” “culture and society,” and “business,” but no detectable herding behavior for comments in “economics,” “IT,” “fun,” and “general news”.

The herding behavior outlined in the paper seems rather intuitive to me. If before I read a post, I see a little green ‘1’ next to it, I’m probably going to read the post in a better light than if I hadn't seen that little green ‘1’ next to it. Similarly, if I see a post that has a negative score, I’ll probably see flaws in it much more readily. One might say that this is the point of the rating system, as it allows the group as a whole to evaluate the content. However, I’m still unsettled by just how easily popular opinion was swayed in the experiment.
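To see how a single seeded up-vote can snowball, here is a deliberately crude simulation of the herding mechanism. All probabilities below are made-up values for illustration; only the qualitative dynamic (a positive score boosts the chance of further up-votes) mirrors the experiment:

```python
import random

def simulate_comment(seed_upvote, n_viewers=100, p_up=0.05, p_down=0.02,
                     herd_boost=0.32, rng=None):
    """Toy herding model loosely inspired by Muchnik et al.

    Each successive viewer up-votes with probability p_up, boosted by
    `herd_boost` while the running score is positive, and down-votes with
    probability p_down. The parameter values are illustrative guesses,
    not estimates from the paper.
    """
    rng = rng or random.Random()
    score = 1 if seed_upvote else 0
    for _ in range(n_viewers):
        p = p_up * (1 + herd_boost) if score > 0 else p_up
        roll = rng.random()
        if roll < p:
            score += 1
        elif roll < p + p_down:
            score -= 1
    return score

rng = random.Random(0)  # fixed seed so the run is reproducible
treated = [simulate_comment(True, rng=rng) for _ in range(2000)]
control = [simulate_comment(False, rng=rng) for _ in range(2000)]
mean = lambda xs: sum(xs) / len(xs)
print(f"treated mean score: {mean(treated):.2f}, control mean score: {mean(control):.2f}")
```

Because nothing in the model pushes treated scores back down, the initial advantage persists, which is the "positive herding accumulated over time" the authors describe.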

This certainly doesn't necessitate that we reprogram the site and eschew the karma system. Rather, understanding the biases inherent in such a system will allow us to use it much more effectively. Discussion of how this bias affects LW in particular would be welcomed. Here are some questions to begin with:

  • Should we worry about this bias at all? Are its effects negligible in the scheme of things?
  • How does the culture of LW contribute to this herding behavior? Is it positive or negative?
  • If there are damages, how can we mitigate them?


In the paper, they mentioned that comments were not sorted by popularity, therefore “mitigating the selection bias.” This of course implies that the bias would be more severe on forums where comments are sorted by popularity, such as this one.

For those interested, another enlightening paper is “Overcoming the J-shaped distribution of product reviews” by Nan Hu et al., which discusses rating biases on websites such as Amazon. User gwern has also recommended a longer 2007 paper by the same authors, on which the one above is based: "Why do Online Product Reviews have a J-shaped Distribution? Overcoming Biases in Online Word-of-Mouth Communication"

I like simplicity, but not THAT much

15 Benja 14 February 2014 07:51PM

Followup to: L-zombies! (L-zombies?)
Reply to: Coscott's Preferences without Existence; Paul Christiano's comment on my l-zombies post

In my previous post, I introduced the idea of an "l-zombie", or logical philosophical zombie: A Turing machine that would simulate a conscious human being if it were run, but that is never run in the real, physical world, so that the experiences that this human would have had, if the Turing machine were run, aren't actually consciously experienced.

One common reply is to deny the possibility of logical philosophical zombies just as one denies the possibility of physical philosophical zombies: to say that every mathematically possible conscious experience is in fact consciously experienced, and that there is no kind of "magical reality fluid" that makes some of these be experienced "more" than others. In other words, we live in the Tegmark Level IV universe, except that, contrary to what Tegmark argues in his paper, there's no objective measure on the collection of all mathematical structures according to which some mathematical structures somehow "exist more" than others (and, although IIRC that's not part of Tegmark's argument, according to which the conscious experiences in some mathematical structures could be "experienced more" than those in other structures). All mathematically possible experiences are experienced, and to the same "degree".

So why is our world so orderly? There's a mathematically possible continuation of the world that you seem to be living in where purple pumpkins are about to start falling from the sky, or where the light we observe coming in from outside our galaxy is suddenly replaced by white noise. Why don't you remember ever seeing anything as obviously disorderly as that?

And the answer to that, of course, is that among all the possible experiences that get experienced in this multiverse, there are orderly ones as well as non-orderly ones, so the fact that you happen to have orderly experiences isn't in conflict with the hypothesis; after all, the orderly experiences have to be experienced as well.

One might be tempted to argue that it's somehow more likely that you will observe an orderly world if everybody who has conscious experiences at all, or if at least most conscious observers, see an orderly world. (The "most observers" version of the argument assumes that there is a measure on the conscious observers, a.k.a. some kind of magical reality fluid.) But this requires the use of anthropic probabilities, and there is simply no (known) system of anthropic probabilities that gives reasonable answers in general. Fortunately, we have an alternative: Wei Dai's updateless decision theory (which was motivated in part exactly by the problem of how to act in this kind of multiverse). The basic idea is simple (though the details do contain devils): We have a prior over what the world looks like; we have some preferences about what we would like the world to look like; and we come up with a plan for what we should do in any circumstance we might find ourselves in that maximizes our expected utility, given our prior.
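As a gloss in symbols (my notation; the devils in the details are omitted): if P is the prior over possible worlds and u the utility function, UDT selects, once and for all, the policy

```latex
\pi^* \;=\; \arg\max_{\pi} \sum_{w} P(w)\, u\bigl(\mathrm{outcome}(w, \pi)\bigr)
```

where a policy \pi maps every circumstance the agent might find itself in to an action, and \mathrm{outcome}(w, \pi) is how world w unfolds if the agent follows \pi there. Note that there is no updating on observations; the same fixed policy is scored across all worlds in the prior.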


In this framework, Coscott and Paul suggest, everything adds up to normality if, instead of saying that some experiences objectively exist more, we happen to care more about some experiences than about others. (That's not a new idea, of course, or the first time this has appeared on LW -- for example, Wei Dai's What are probabilities, anyway? comes to mind.) In particular, suppose we just care more about experiences in mathematically really simple worlds -- or more precisely, places in mathematically simple worlds that are mathematically simple to describe (since there's a simple program that runs all Turing machines, and therefore all mathematically possible human experiences, always assuming that human brains are computable). Then, even though there's a version of you that's about to see purple pumpkins rain from the sky, you act in a way that's best in the world where that doesn't happen, because that world has so much lower K-complexity, and because you therefore care so much more about what happens in that world.
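One standard way to formalize "caring more about simpler worlds" (a sketch of the general move, not a formula taken from Coscott's or Paul's posts) is to relocate the Solomonoff-style weighting from the probabilities into the utility function, discounting each world by its description length:

```latex
U(\pi) \;=\; \sum_{w} 2^{-K(w)}\, u\bigl(\mathrm{outcome}(w, \pi)\bigr)
```

where \pi is the agent's plan and K(w) is the K-complexity of world w. Decision-theoretically this is indistinguishable from putting a complexity-weighted prior in front of an unweighted utility, which is one way to see why everything "adds up to normality" on this view.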

There's something unsettling about that, which I think deserves to be mentioned, even though I do not think it's a good counterargument to this view. This unsettling thing is that on priors, it's very unlikely that the world you experience arises from a really simple mathematical description. (This is a version of a point I also made in my previous post.) Even if the physicists had already figured out the simple Theory of Everything, which is a super-simple cellular automaton that accords really well with experiments, you don't know that this simple cellular automaton, if you ran it, would really produce you. After all, imagine that somebody intervened in Earth's history so that orchids never evolved, but otherwise left the laws of physics the same; there might still be humans, or something like humans, and they would still run experiments and find that they match the predictions of the simple cellular automaton, so they would assume that if you ran that cellular automaton, it would compute them -- except it wouldn't, it would compute us, with orchids and all. Unless, of course, it does compute them, and a special intervention is required to get the orchids.

So you don't know that you live in a simple world. But, goes the obvious reply, you care much more about what happens if you do happen to live in the simple world. On priors, it's probably not true; but it's best, according to your values, if all people like you act as if they live in the simple world (unless they're in a counterfactual mugging type of situation, where they can influence what happens in the simple world even if they're not in the simple world themselves), because if the actual people in the simple world act like that, that gives the highest utility.

You can adapt an argument that I was making in my l-zombies post to this setting: Given these preferences, it's fine for everybody to believe that they're in a simple world, because this will increase the correspondence between map and territory for the people that do live in simple worlds, and that's who you care most about.


I mostly agree with this reasoning. I agree that Tegmark IV without a measure seems like the most obvious and reasonable hypothesis about what the world looks like. I agree that there seems no reason for there to be a "magical reality fluid". I agree, therefore, that on the priors that I'd put into my UDT calculation for how I should act, it's much more likely that true reality is a measureless Tegmark IV than that it has some objective measure according to which some experiences are "experienced less" than others, or not experienced at all. I don't think I understand things well enough to be extremely confident in this, but my odds would certainly be in favor of it.

Moreover, I agree that if this is the case, then my preferences are to care more about the simpler worlds, making things add up to normality; I'd want to act as if purple pumpkins are not about to start falling from the sky, precisely because I care more about the consequences my actions have in more orderly worlds.



Imagine this: Once you finish reading this article, you hear a bell ringing, and then a sonorous voice announces: "You do indeed live in a Tegmark IV multiverse without a measure. You had better deal with it." And then it turns out that it's not just you who's heard that voice: Every single human being on the planet (who didn't sleep through it, isn't deaf etc.) has heard those same words.

On the hypothesis, this is of course about to happen to you, though only in one of those worlds with high K-complexity that you don't care about very much.

So let's consider the following possible plan of action: You could act as if there is some difference between "existence" and "non-existence", or perhaps some graded degree of existence, until you hear those words and confirm that everybody else has heard them as well, or until you've experienced one similarly obviously "disorderly" event. So until that happens, you do things like invest time and energy into trying to figure out what the best way to act is if it turns out that there is some magical reality fluid, and into trying to figure out what a non-confused version of something like a measure on conscious experience could look like, and you act in ways that don't kill you if we happen to not live in a measureless Tegmark IV. But once you've had a disorderly experience, just a single one, you switch over to optimizing for the measureless mathematical multiverse.

If the degree to which you care about worlds really falls off exponentially with their K-complexity (with respect to what you and I would consider a "simple" universal Turing machine), then this would be a silly plan; there is very little to be gained from being right in worlds that have that much higher K-complexity. But when I query my intuitions, it seems like a rather good plan:

  • Yes, I care less about those disorderly worlds. But not as much less as if I valued them by their K-complexity. I seem to be willing to tap into my complex human intuitions to refer to the notion of "single obviously disorderly event", and assign the worlds with a single such event, and otherwise low K-complexity, not that much lower importance than the worlds with actual low K-complexity.
  • And if I imagine that the confused-seeming notions of "really physically exists" and "actually experienced" do have some objective meaning independent of my preferences, then I care much more about the difference between "I get to 'actually experience' a tomorrow" and "I 'really physically' get hit by a car today" than I care about the difference between the world with true low K-complexity and the worlds with a single disorderly event.

In other words, I agree that on the priors I put into my UDT calculation, it's much more likely that we live in measureless Tegmark IV; but my confidence in this isn't extreme, and if we don't, then the difference between "exists" and "doesn't exist" (or "is experienced a lot" and "is experienced only infinitesimally") is very important; much more important than the difference between "simple world" and "simple world plus one disorderly event" according to my preferences if we do live in a Tegmark IV universe. If I act optimally according to the Tegmark IV hypothesis in the latter worlds, that still gives me most of the utility that acting optimally in the truly simple worlds would give me -- or, more precisely, the utility differential isn't nearly as large as if there is something else going on, and I should be doing something about it, and I'm not.

This is the reason why I'm trying to think seriously about things like l-zombies and magical reality fluid. I mean, I don't even think that these are particularly likely to be exactly right even if the measureless Tegmark IV hypothesis is wrong; I expect that there would be some new insight that makes even more sense than Tegmark IV, and makes all the confusion go away. But trying to grapple with the confused intuitions we currently have seems at least a possible way to make progress on this, if it should be the case that there is in fact progress to be made.


Here's one avenue of investigation that seems worthwhile to me, and wouldn't without the above argument. One thing I could imagine finding, that could make the confusion go away, would be that the intuitive notion of "all possible Turing machines" is just wrong, and leads to outright contradictions (e.g., to inconsistencies in Peano Arithmetic, or something similarly convincing). Lots of people have entertained the idea that concepts like the real numbers don't "really" exist, and only the behavior of computable functions is "real"; perhaps not even that is real, and true reality is more restricted? (You can reinterpret many results about real numbers as results about computable functions, so maybe you could reinterpret results about computable functions as results about these hypothetical weaker objects that would actually make mathematical sense.) So it wouldn't be the case after all that there is some Turing machine that computes the conscious experiences you would have if pumpkins started falling from the sky.

Does the above make sense? Probably not. But I'd say that there's a small chance that maybe yes, and that if we understood the right kind of math, it would seem very obvious that not all intuitively possible human experiences are actually mathematically possible (just as obvious as it is today, with hindsight, that there is no Turing machine which takes a program as input and outputs whether this program halts). Moreover, it seems plausible that this could have consequences for how we should act. This, together with my argument above, make me think that this sort of thing is worth investigating -- even if my priors are heavily on the side of expecting that all experiences exist to the same degree, and ordinarily this difference in probabilities would make me think that our time would be better spent on investigating other, more likely hypotheses.


Leaving aside the question of how I should act, though, does all of this mean that I should believe that I live in a universe with l-zombies and magical reality fluid, until such time as I hear that voice speaking to me?

I do feel tempted to try to invoke my argument from the l-zombies post that I prefer the map-territory correspondences of actually existing humans to be correct, and don't care about whether l-zombies have their map match up with the territory. But I'm not sure that I care much more about actually existing humans being correct, if the measureless mathematical multiverse hypothesis is wrong, than I care about humans in simple worlds being correct, if that hypothesis is right. So I think that the right thing to do may be to have a subjective belief that I most likely do live in the measureless Tegmark IV, as long as that's the view that seems by far the least confused -- but continue to spend resources on investigating alternatives, because on priors they don't seem sufficiently unlikely to make up for the potential great importance of getting this right.

A few remarks about mass-downvoting

15 gjm 13 February 2014 05:06PM

To whoever has for the last several days been downvoting ~10 of my old comments per day:

It is possible that your intention is to discourage me from commenting on Less Wrong.

The actual effect is the reverse. My comments still end up positive on average, and I am therefore motivated to post more of them in order to compensate for the steady karma drain you are causing.

If you are mass-downvoting other people, the effect on some of them is probably the same.

To the LW admins, if any are reading:

Look, can we really not do anything about this behaviour? It's childish and stupid, and it makes the karma system less useful (e.g., for comment-sorting), and it gives bad actors a disproportionate influence on Less Wrong. It seems like there are lots of obvious things that would go some way towards helping, many of which have been discussed in past threads about this.

Failing that, can we at least agree that it's bad behaviour and that it would be good in principle to stop it or make it more visible and/or inconvenient?

Failing that, can we at least have an official statement from an LW administrator that mass-downvoting is not considered an undesirable behaviour here? I really hope this isn't the opinion of the LW admins, but as the topic has been discussed from time to time with never any admin response I've been thinking it increasingly likely that it is. If so, let's at least be honest about it.

To anyone else reading this:

If you should happen to notice that a sizeable fraction of my comments are at -1, this is probably why. (Though of course I may just have posted a bunch of silly things. I expect it happens from time to time.)

My apologies for cluttering up Discussion with this. (But not very many apologies; this sort of mass-downvoting seems to me to be one of the more toxic phenomena on Less Wrong, and I retain some small hope that eventually something may be done about it.)

How can I spend money to improve my life?

15 jpaulson 02 February 2014 10:16AM

On ChrisHallquist's post extolling the virtues of money, the top comment is Eliezer pointing out the lack of concrete examples. Can anyone think of any? This is not just hypothetical: if I think your suggestion is good, I will try it (and report back on how it went).

I care about health, improving personal skills (particularly: programming, writing, people skills), gaining respect (particularly at work), and entertainment (these days: primarily books and computer games). If you think I should care about something else, feel free to suggest it.

I am an early-twenties programmer living in San Francisco. In the interest of getting advice useful to more than one person, I'll omit further personal details.

Budget: $50/day

If your idea requires significant ongoing time commitment, that is a major negative.

The effect of effectiveness information on charitable giving

14 Unnamed 15 April 2014 04:43PM

A new working paper by economists Dean Karlan and Daniel Wood, The Effect of Effectiveness: Donor Response to Aid Effectiveness in a Direct Mail Fundraising Experiment.

The Abstract:

We test how donors respond to new information about a charity’s effectiveness. Freedom from Hunger implemented a test of its direct marketing solicitations, varying letters by whether they include a discussion of their program’s impact as measured by scientific research. The base script, used for both treatment and control, included a standard qualitative story about an individual beneficiary. Adding scientific impact information has no effect on whether someone donates, or how much, in the full sample. However, we find that amongst recent prior donors (those we posit more likely to open the mail and thus notice the treatment), large prior donors increase the likelihood of giving in response to information on aid effectiveness, whereas small prior donors decrease their giving. We motivate the analysis and experiment with a theoretical model that highlights two predictions. First, larger gift amounts, holding education and income constant, is a proxy for altruism giving (as it is associated with giving more to fewer charities) versus warm glow giving (giving less to more charities). Second, those motivated by altruism will respond positively to appeals based on evidence, whereas those motivated by warm glow may respond negatively to appeals based on evidence as it turns off the emotional trigger for giving, or highlights uncertainty in aid effectiveness.

In the experimental condition (for one of the two waves of mailings), the donors received a mailing with this information about the charity's effectiveness:

In order to know that our programs work for people like Rita, we look for more than anecdotal evidence. That is why we have coordinated with independent researchers [at Yale University] to conduct scientifically rigorous impact studies of our programs. In Peru they found that women who were offered our Credit with Education program had 16% higher profits in their businesses than those who were not, and they increased profits in bad months by 27%! This is particularly important because it means our program helped women generate more stable incomes throughout the year.

These independent researchers used a randomized evaluation, the methodology routinely used in medicine, to measure the impact of our programs on things like business growth, children's health, investment in education, and women's empowerment.

In the control condition, the mailing instead included this paragraph:

Many people would have met Rita and decided she was too poor to repay a loan. Five hungry children and a small plot of mango trees don’t count as collateral. But Freedom from Hunger knows that women like Rita are ready to end hunger in their own families and in their communities.

My Heartbleed learning experience and alternative to poor quality Heartbleed instructions.

14 aisarka 15 April 2014 08:15AM

Due to the difficulty of finding high-quality Heartbleed instructions, I have discovered that perfectly good, intelligent rationalists either didn't do all that was needed and ended up with a false sense of security, or did things that increased their risk without realizing it and needed to take additional steps.  Part of the problem is that organizations that write for end users do not specialize in computer security, and vice versa, so many of the Heartbleed instructions for end users had issues, ranging from conflicting and confusing information to outright ridiculous hype.

As an IT person and a rationalist, I knew better than to jump to the proposing-solutions phase before researching [1].  Recognizing the need for well-thought-out Heartbleed instructions, I spent 10-15 hours sorting through the chaos to create something more comprehensive.  I'm not a security expert, but as an IT person who has read about computer security out of a desire for professional improvement (and curiosity), and who is familiar with various research issues, cognitive biases, logical fallacies, etc., I am not clueless either.

This is a major event: some sources are calling it one of the worst security problems ever to happen on the Internet [2]; it has been proven to be more than a theoretical risk (four people hacked the keys to the castle out of Cloudflare's challenge in just one day) [3]; it has been badly exploited (900 Canadian social insurance numbers were leaked today) [4]; and there is some evidence that it may have been used for spying for a long time (the EFF found evidence of someone spying on IRC conversations) [5].  In light of all this, I think it's important to share my compilation of Heartbleed instructions, just so that a better list is out there.

More importantly, this disaster is a very rare rationality learning opportunity: reflecting on our behavior and comparing it with what we realize we should have done after becoming more informed may help us see patches of irrationality that could harm us during future disasters.  For that reason, I did some rationality checks on my own behavior by asking myself a set of questions.  I have of course included the questions.


Heartbleed Research Challenges this Post Addresses:

  - There are apparent contradictions between sources about which sites were affected by Heartbleed, which sites have updated for Heartbleed, which sites need a password reset, and whether to change your passwords now or wait until the company has updated for Heartbleed.  For instance, Yahoo said Facebook was not vulnerable. [6] LastPass said Facebook was confirmed vulnerable and recommended a password update. [7]

  - Companies are putting out a lot of "fluffspeek"*, which makes it difficult to figure out which of your accounts have been affected, and which companies have updated their software.

  - Most sources *either* specialize in writing for end-users *or* are credible sources on computer security, not both.

  - Different articles have different sets of Heartbleed instructions.  None of the articles I saw contained every instruction.

  - A lot of what's out there is just ridiculous hype. [8]



I am not a security specialist, nor am I certified in any security-related area.  I am an IT person who has read a fair amount of security literature over the last 15 years, but there *is* a definite quality difference between an IT person who has read security literature and a professional who is dedicated to security.  I can't give you any guarantees (though I'm not sure it's wise to accept guarantees from the specialists either).  Another problem here is time: I wanted to act ASAP.  With hackers on the loose, I do not think it wise to invest the time it would take me to create a Gwern-style masterpiece.  This isn't exactly slapped together, but I am working within time constraints, so it's not perfect.  If you have something important to protect, or have the money to spend, consult a security specialist.


Compilation of Heartbleed Instructions

  Beware fraudulent password reset emails and shiny Heartbleed fixes.

  With all the real password reset emails going around, there are a lot of scam artists out there hoping to sneak in some dupes.  A lot of people get confused.  It doesn't mean you're stupid.  If you clicked a nasty link, or even if you're not sure, call the company's fraud department immediately.  That's why they're there. [9]  Always be careful about anything that seems too good to be true, as the scam artists have also begun to advertise Heartbleed "fixes" as bait.

  If the site hasn't done an update, it's risky to change your password.

  Why: This may increase your risk.  If Heartbleed isn't fixed, any new password you type in could be stolen, and a lot of criminals are probably doing whatever they can to exploit Heartbleed right now since they just found out about it.  "Changing your password before receiving notice about a fixed service may only reveal your new password to an attacker." [10]

  If you use digital password storing, consider whether it is secure.

  Some digital password storing software is way better than others.  I can't recommend one, but be careful which one you choose.  Also, check them for Heartbleed.

  If you already changed your password, and then a site updates or says "change your password" do it again.

  Why change it twice?: If you changed it before the update, you were sending that new password over a connection with a nasty security flaw.  Consider that password "potentially stolen" and make a new one.  "Changing your password before receiving notice about a fixed service may only reveal your new password to an attacker." [10]

  If a company says "no need to change your password" do you really want to believe them?

  There's a perverse incentive for companies to tell you "everything is fine" when in fact it is not fine, because nobody wants to be seen as having bad security on their website.  Also, if someone did steal your password through this bug, it's not traceable to the bug.  Companies could conceivably claim "things are fine" without much accountability.  "Exploitation of this bug leaves no traces of anything abnormal happening to the logs." [11] I do not know whether, in practice, companies respond to similar perverse incentives, or if some unknown thing keeps them in check, but I have observed plenty of companies taking advantage of other perverse incentives.  Health care rescission for instance.  That affected much more important things than data.

  When a site has done a Heartbleed update, *then* change your password.

  That's the time to do it. "Changing your password before receiving notice about a fixed service may only reveal your new password to an attacker." [10]

  Security Questions

  Nothing protected your mother's maiden name or the street you grew up on from Heartbleed any more than your passwords or other data.  A stolen security question can be a much bigger risk than a stolen password, especially if you used the same one on multiple different accounts.  When you change your password, also consider whether you should change your security questions.  Think about changing them to something hard to guess, unique to that account, and remember that you don't have to fill out your security questions with accurate information.  If you filled the questions out in the last two years, there's a risk that they were stolen, too.

  How do I know if a site updated?


  Method One:

    Qualys SSL Labs, an information security provider, created a free SSL Server Test.  Just plug in the domain name and Qualys will generate a report.  Yes, it checks the certificate, too.  (Very important.)

    Qualys Server Test
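    As a rough complement to the Qualys report (my own addition, not part of the original instructions): sites that patched properly should also have reissued their certificates, so you can look at when a site's current certificate was issued.  Here is a hedged Python sketch; a pre-disclosure issue date is only suggestive of a problem, not proof, and a post-disclosure date is no guarantee the key was actually rotated:

```python
import socket
import ssl

# Heartbleed was publicly disclosed on 2014-04-07. A certificate issued
# before then, on a server that was running vulnerable OpenSSL, could have
# had its private key exposed and should have been reissued after patching.
HEARTBLEED_DISCLOSED = ssl.cert_time_to_seconds('Apr  7 00:00:00 2014 GMT')

def cert_issued_after_disclosure(not_before: str) -> bool:
    """not_before: the 'notBefore' string from getpeercert(),
    e.g. 'Apr  8 00:00:00 2014 GMT'."""
    return ssl.cert_time_to_seconds(not_before) > HEARTBLEED_DISCLOSED

def fetch_not_before(host: str, port: int = 443) -> str:
    """Fetch a host's certificate issue date (requires network access)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()['notBefore']

print(cert_issued_after_disclosure('Apr  8 00:00:00 2014 GMT'))  # True
print(cert_issued_after_disclosure('Jan 15 12:00:00 2013 GMT'))  # False
```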


  Method Two:

    CERT, a major security flaw advisory publisher, listed some (not all!) of the sites that have updated.  If you want a list, you should use CERT's list, not other lists. 

    CERT's List

    Why CERT's list?  Hearing "not vulnerable" on some news website's list does not mean that any independent organization verified that the site was fine, nor that an independent organization even has the ability to verify that the site has been safe for the entire last two years.  If anyone can do that job, it would be CERT, though I am not aware of any tests of their abilities in that regard.  Also, there is no fluffspeek*.

  Method Three:

    Search the site itself for the word "Heartbleed" and read the articles that come up.  If the site had to do a Heartbleed update, change your password.  Here's the quick way to search a whole site in Google (do not add "www"): site:yoursite.com Heartbleed

  If an important site hasn't updated yet:

  If you have sensitive data stored there, don't log into that site until it's fixed.  If you want to protect it, call them up and try to change your password by phone or lock the account down.  "Stick to reputable websites and services, as those sites are most likely to have addressed the vulnerability right away." [10]

  Check your routers, mobile phones, and other devices.

  Yes, really. [13] [14]

  If you have even the tiniest website:

  Don't think "There's nothing to steal on my website".  Spammers always want to get into your website.  Hackers make software that exploits bugs and can share or sell that software.  If a hacker shares a tool that exploits Heartbleed and your site is vulnerable, spammers will get the tool and could make a huge mess out of everything.  That can get you blacklisted and disrupt your email, get you removed from Google search results, and disrupt your online advertising ... it can be a mess.

  Get a security expert involved to look for all the places where Heartbleed may have caused a security risk on your site, preferably one who knows about all the different services that your website might be using.  "Services" meaning things like a vendor that you pay so your website can send bulk text messages for two-factor authentication, or a free service that lets users do "social sign on" to log into your site with an external service like Yahoo.  The possibilities for Heartbleed to cause problems on your website, through these kinds of services, are really pretty enormous.  Both paid services and free services could be affected.

  A sysadmin needs to check the server your site is on to figure out if it's got the Heartbleed bug and update it.
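  As a first pass, a sysadmin can check the installed OpenSSL version: the 1.0.1 branch through 1.0.1f is vulnerable, and 1.0.1g is the fixed release.  One caveat: many distributions backported the fix without changing the version number, so a "vulnerable" result from a version check only means "investigate further".  A rough sketch of that check:

```python
def openssl_vulnerable(version: str) -> bool:
    """Rough check of an OpenSSL version string against the Heartbleed range.

    Vulnerable: 1.0.1 through 1.0.1f.  Fixed: 1.0.1g and later.
    Caveat: distros often backport fixes without bumping the version,
    so True here only means "investigate further", not "confirmed bug".
    """
    if not version.startswith("1.0.1"):
        # Stable releases outside the 1.0.1 branch never had the bug.
        return False
    suffix = version[len("1.0.1"):]
    # Bare "1.0.1" through "1.0.1f" are in the vulnerable range.
    return suffix == "" or suffix[0] <= "f"

# e.g. feed in the string reported by `openssl version`
print(openssl_vulnerable("1.0.1f"))  # True: needs patching
print(openssl_vulnerable("1.0.1g"))  # False: fixed release
```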

  Remember to check your various web providers like domain name registration services, web hosting company, etc.

Rationality Learning Opportunity (The Questions)

We won't get many opportunities to think about how we react in a disaster.  For obvious ethical reasons, we can't exactly create disasters in order to test ourselves.  I am taking the opportunity to reflect on my reactions and am sharing my method for doing this.  Here are some questions I asked myself which are designed to encourage reflection.  I admit to having made two mistakes at first: I did not apply rigorous skepticism to each news source right from the very first article I read, and I underestimated the full extent of what it would take to address the issue.  What saved me was noticing my confusion.

  When you first heard about Heartbleed, did you fail to react?  (Normalcy bias)

  When you first learned about the risk, what probability did you assign to being affected by it?  What probability do you assign now?  (Optimism bias)

  Were you surprised to find that someone in your life did not know about Heartbleed, and did you regret not telling them when it had occurred to you to do so?  (Bystander effect)

  What did you think it was going to take to address Heartbleed?  Did you underestimate what it would take to address it competently?  (Dunning-Kruger effect)

  After reading news sources' instructions on Heartbleed, were you surprised to find later that some of them were wrong?

  How much time did you think it would take to address the issue?  Did it take longer?  (Planning fallacy)

  Did you ignore Heartbleed?  (Ostrich effect)


Companies, of course, want to present a respectable face to customers, so most of them are not just coming out and saying "We were affected by Heartbleed.  We have updated.  It's time to change your password now."  Instead, some have been writing fluff like:

  "We see no evidence that data was stolen."

  According to the company that found this bug, Heartbleed doesn't leave a trail in the logs. [15] If someone did steal your password, would there be evidence anyway?  Maybe some companies really were able to rule that out somehow.  Positivity bias, a type of confirmation bias, is an important possibility here.  Maybe, like many humans, these companies simply failed to "look into the dark" [16] and think of alternate explanations for the evidence they're seeing (or not seeing, which can sometimes be evidence [17], but not useful evidence in this case).

  "We didn't bother to tell you whether we updated for Heartbleed, but it's always a good idea to change your password every so often."

  Unless you know each website has updated for Heartbleed, there's a chance that you're going to go out and send your new passwords right through a bunch of websites' Heartbleed security holes as you're changing them.  Now that Heartbleed is big news, every hacker and script kiddie on earth probably knows about it, which means there are probably far more people trying to steal passwords through Heartbleed than before.  Which is the greater risk?  Entering a new password while the site is leaking passwords in a potentially hacker-infested environment, or leaving your potentially stolen password there until the site has updated?  Worse, people who *did not* change their password after the update because they already changed it *before* the update have a false sense of security about the probability that their password was stolen.  Maybe some of these companies updated for Heartbleed before saying that.  Maybe the bug never applied to them at all.  Regardless, I think end users deserve to know that updating their password before the Heartbleed update carries a risk.  Users need to be told whether an update has been applied.  As James Lynn wrote for Forbes, "Forcing customers to guess or test themselves is just negligent." [8]

"Fluffspeek" is a play on "leetspeek", a term for bits of text full of numbers and symbols that is attributed to silly "hackers".  Some PR fluff may be a deliberate attempt to exploit others, similar in some ways to the manipulation techniques popular among black-hat hackers, called social engineering.  Even when it's not deliberate, this kind of garbage is probably about as ugly to most people with half a brain as "I AM AN 31337 HACKER!!!1", so the term still fits.










 8. "Avoiding Heartbleed Hype, What To Do To Stay Safe" (I can't link to this for some reason but you can do a search.)






 14. "A Billion Smartphone Users May Be Affected by the Heartbleed Security Flaw" (I can't link to this for some reason but you can do a search.)



