
That Magical Click

Post author: Eliezer_Yudkowsky | 20 January 2010 04:35PM | 54 points

Followup to: Normal Cryonics

Yesterday I spoke of that cryonics gathering I recently attended, where travel by young cryonicists was fully subsidized, leading to extremely different demographics from conventions of self-funded activists.  34% female, half of those in couples, many couples with kids - THAT HAD BEEN SIGNED UP FOR CRYONICS FROM BIRTH LIKE A GODDAMNED SANE CIVILIZATION WOULD REQUIRE - 25% computer industry, 25% scientists, 15% entertainment industry at a rough estimate, and in most ways seeming (for smart people) pretty damned normal.

Except for one thing.

During one conversation, I said something about there being no magic in our universe.

And an ordinary-seeming woman responded, "But there are still lots of things science doesn't understand, right?"

Sigh.  We all know how this conversation is going to go, right?

So I wearily replied with my usual, "If I'm ignorant about a phenomenon, that is a fact about my state of mind, not a fact about the phenomenon itself; a blank map does not correspond to a blank territory -"

"Oh," she interrupted excitedly, "so the concept of 'magic' isn't even consistent, then!"

Click.

She got it, just like that.

This was someone else's description of how she got involved in cryonics, as best I can remember it, and it was pretty much typical for the younger generation:

"When I was a very young girl, I was watching TV, and I saw something about cryonics, and it made sense to me - I didn't want to die - so I asked my mother about it.  She was very dismissive, but tried to explain what I'd seen; and we talked about some of the other things that can happen to you after you die, like burial or cremation, and it seemed to me like cryonics was better than that.  So my mother laughed and said that if I still felt that way when I was older, she wouldn't object.  Later, when I was older and signing up for cryonics, she objected."

Click.

It's... kinda frustrating, actually.

There are manifold bad objections to cryonics that can be raised and countered, but the core logic really is simple enough that there's nothing implausible about getting it when you're eight years old (eleven years old, in my case).

Freezing damage?  I could go on about modern cryoprotectants and how you can see under a microscope that the tissue is in great shape, and there are experiments underway to see if they can get spontaneous brain activity after vitrifying and devitrifying, and with molecular nanotechnology you could go through the whole vitrified brain atom by atom and do the same sort of information-theoretical tricks that people do to recover hard drive information after "erasure" by any means less extreme than a blowtorch...

But even an eight-year-old can visualize that freezing a sandwich doesn't destroy the sandwich, while cremation does.  It so happens that this naive answer remains true after learning the exact details and defeating objections (a few of which are even worth considering), but that doesn't make it any less obvious to an eight-year-old.  (I actually did understand the concept of molecular nanotech at eleven, but I could be a special case.)

Similarly: yes, really, life is better than death - just because transhumanists have huge arguments with bioconservatives over this issue, doesn't mean the eight-year-old isn't making the right judgment for the right reasons.

Or: even an eight-year-old who's read a couple of science-fiction stories and who's ever cracked a history book can guess - not for the full reasons in full detail, but still for good reasons - that if you wake up in the Future, it's probably going to be a nicer place to live than the Present.

In short - though it is the sort of thing you ought to review as a teenager and again as an adult - from a rationalist standpoint, there is nothing alarming about clicking on cryonics at age eight... any more than I should worry about my first schism with Orthodox Judaism coming at age five, when they told me that I didn't have to understand the prayers in order for them to work so long as I said them in Hebrew.  It really is obvious enough to see as a child, the right thought for the right reasons, no matter how much adult debate surrounds it.

And the frustrating thing was that - judging by this group - most cryonicists are people to whom it was just obvious.  (And who then actually followed through and signed up, which is probably a factor-of-ten or worse filter for Conscientiousness.)  It would have been convenient if I'd discovered some particular key insight that convinced people.  If people had said, "Oh, well, I used to think that cryonics couldn't be plausible if no one else was doing it, but then I read about Asch's conformity experiment and pluralistic ignorance."  Then I could just emphasize that argument, and people would sign up.

But the average experience I heard was more like, "Oh, I saw a movie that involved cryonics, and I went on Google to see if there was anything like that in real life, and found Alcor."

In one sense this shouldn't surprise a Bayesian, because the base rate of people who hear a brief mention of cryonics on the radio and have an opportunity to click will be vastly higher than the base rate of people who are exposed to detailed arguments about cryonics...

Yet the upshot is that - judging from the generation of young cryonicists at that event I attended - cryonics is sustained primarily by the ability of a tiny, tiny fraction of the population to "get it" just from hearing a casual mention on the radio.  Whatever part of one-in-a-hundred-thousand isn't accounted for by the Conscientiousness filter.
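To see the arithmetic behind that, here is a toy sketch in Python. Only the one-in-a-hundred-thousand figure comes from the paragraph above; every other number is an invented assumption.

```python
# Toy model of the base-rate point; all numbers are made up except the
# rough one-in-a-hundred-thousand click-and-follow-through rate above.
casual_exposures = 100_000_000  # people who hear cryonics mentioned in passing
detailed_readers = 100_000      # people who work through detailed arguments
p_click = 1 / 100_000           # click on a casual mention and actually sign up
p_persuaded = 1 / 1_000         # assumed sign-up rate for detailed arguments

print(casual_exposures * p_click)      # ~1000 sign-ups from clicks
print(detailed_readers * p_persuaded)  # ~100 sign-ups from arguments
```

Under these invented rates, detailed arguments are a hundred times more persuasive per exposure, yet casual mentions still produce ten times the sign-ups.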

If I suffered from the sin of underconfidence, I would feel a dull sense of obligation to doubt myself after reaching this conclusion, just like I would feel a dull sense of obligation to doubt that I could be more rational about theology than my parents and teachers at the age of five.  As it is, I have no problem with shrugging and saying "People are crazy, the world is mad."

But it really, really raises the question of what the hell is in that click.

There's this magical click that some people get and some people don't, and I don't understand what's in the click.  There's the consequentialist/utilitarian click, and the intelligence explosion click, and the life-is-good/death-is-bad click, and the cryonics click.  I myself failed to click on one notable occasion, but the topic was probably just as clickable.

(In fact, it took that particular embarrassing failure in my own history - failing to click on metaethics, and seeing in retrospect that the answer was clickable - before I was willing to trust non-click Singularitarians.)

A rationalist faced with an apparently obvious answer, must assign some probability that a non-obvious objection will appear and defeat it.  I do know how to explain the above conclusions at great length, and defeat objections, and I would not be nearly as confident (I hope!) if I had just clicked five seconds ago.  But sometimes the final answer is the same as the initial guess; if you know the full mathematical story of Peano Arithmetic, 2 + 2 still equals 4 and not 5 or 17 or the color green.  And some people very quickly arrive at that same final answer as their best initial guess; they can swiftly guess which answer will end up being the final answer, for what seem even in retrospect like good reasons.  Like becoming an atheist at eleven, then listening to a theist's best arguments later in life, and concluding that your initial guess was right for the right reasons.
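To make the Peano Arithmetic aside concrete: in a proof assistant, the full formal story and the initial guess coincide. A minimal sketch, assuming Lean 4 and its built-in natural numbers:

```lean
-- With natural-number addition defined recursively (Peano-style), both
-- sides of 2 + 2 = 4 unfold to the same numeral, so the proof is just
-- reflexivity: the "obvious" answer is also the final answer.
example : 2 + 2 = 4 := rfl
```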

We can define a "click" as following a very short chain of reasoning, which in the vast majority of other minds is derailed by some detour and proves strongly resistant to re-railing.

What makes it happen?  What goes into that click?

It's a question of life-or-death importance, and I don't know the answer.

That generation of cryonicists seemed so normal apart from that...

What's in that click?

The point of the opening anecdote about the Mind Projection Fallacy (blank map != blank territory) is to show (anecdotal) evidence that there's something like a general click-factor, that someone who clicked on cryonics was able to click on mysteriousness=projectivism as well.  Of course I didn't expect that I could just stand up amid the conference and describe the intelligence explosion and Friendly AI in a couple of sentences and have everyone get it.  That high of a general click factor is extremely rare in my experience, and the people who have it are not otherwise normal.  (Michael Vassar is one example of a "superclicker".)  But it is still true AFAICT that people who click on one problem are more likely than average to click on another.

My best guess is that clickiness has something to do with failure to compartmentalize - missing, or failing to use, the mental gear that lets human beings believe two contradictory things at the same time.  Clicky people would tend to be people who take all of their beliefs at face value.

The Hansonian explanation (not necessarily endorsed by Robin Hanson) would say something about clicky people tending to operate in Near mode.  (Why?)

The naively straightforward view would be that the ordinary-seeming people who came to the cryonics gathering did not have any extra gear that magically enabled them to follow a short chain of obvious inferences, but rather, everyone else had at least one extra insanity gear active at the time they heard about cryonics.

Is that really just it?  Is there no special sanity to add, but only ordinary madness to take away?  Where do superclickers come from - are they just born lacking a whole lot of distractions?

What the hell is in that click?

Comments (392)

Comment author: bshock 20 January 2010 10:27:30PM 54 points [-]

Once upon a time, I had a job where most of what I did involved signing up people for cryonics. I'm guessing that few other people on this site can say they've ever made a salary off that (unless you're reading this, Derek), and so I can speak with some small authority. Over those four excruciating years at Alcor, I spent hundreds of hours discussing the subject with hundreds of people.

Obviously I never came up with a definitive answer as to why some people get it and most don't. But I developed a working map of the conceptual space. Rather than a single "click," I found that there were a series of memetic filters.

The first and largest by far tended to be religious, which is to say, afterlife mythology. If you thought you were going to Heaven, Kolob, another plane of existence, or another body, you wouldn't bother investing the money or emotional effort in cryonics.

Only then came the intellectual barriers, but the boundary could be extremely vague. I think that the vast majority of people didn't have any trouble grasping the basic scientific arguments for cryonics; the actual logic filter always seemed relatively thin to me. Instead, people used their intellect to rationalize against cryonics, either motivated by existing beliefs (from one end) or by resulting anxieties (from the other).

Anxieties relating to cryonics tended to revolve around social situation and/or death. Some people identified so deeply with their current social situation that the idea of losing that situation (family, friends, standing, culture, etc.) was unthinkable. Others were afflicted by a sort of hypothetical survivor guilt; why did they deserve to live, when so many of their loved ones had died? Perhaps the majority were simply repulsed by any thought of death itself; most of them spent their lives trying not to think about the fact that they would die, and found it extremely depressing or disorienting when forced to confront that fact.

I don't think I could categorize the stages of approach to cryonics quite as neatly (and questionably) as the Kübler-Ross stages of dying. Clearly there was nothing inevitable about coming to accept cryonics, and approximately 90-95% of everyone I met never made it past the first filter. Even when people passed all of the memetic filters I've mentioned, they still had a tendency to become mired at the beginning or middle of their cryonics arrangements, floating in some sort of metastable emotional fog (starting cryonics arrangements felt like retreating from death, proceeding with them felt like approaching it).

Oh well, I haven't thought about this subject much since 1999. This is just my off-the-cuff memory of how I used to make a living.

Comment author: MichaelGR 21 January 2010 12:08:56AM *  19 points [-]

Thank you for writing this.

If you ever feel like writing a longer post about your experience in the cryonics world, I'd love to read it and I suspect others would too.

Comment author: BrandonReinhart 21 January 2010 01:22:14AM 11 points [-]

If I might ask: why did you quit?

Comment author: John_Maxwell_IV 21 January 2010 07:34:00AM 3 points [-]

Betcha he got frustrated with how irrational people were. No joke.

Comment author: byrnema 21 January 2010 04:57:18PM *  5 points [-]

I wish I had more of the knowledge that you have so that I could use it to update my models of people -- at the moment, I can't locate a place in my model to accommodate people being so reluctant to sign up for cryonics while believing that it could work.

(a) Could you give some information regarding the setting? Were these people that approached you, or did you approach them? Did you meet in a formal place, like an office in Alcor, or an informal setting, like on their way from one place to another?

(b)

The first and largest by far tended to be religious, which is to say, afterlife mythology. If you thought you were going to Heaven, Kolob, another plane of existence, or another body, you wouldn't bother investing the money or emotional effort in cryonics.

How long would your conversations with these religious people be, on average? It seems they would have already made up their minds. How did their fear of screwing up their afterlife square with the typical belief that people can be resuscitated after 'flat-lining' in a hospital with souls intact?

(c)

Some people identified so deeply with their current social situation that the idea of losing that situation (family, friends, standing, culture, etc.) was unthinkable.

What would you think of the hypothesis that people don't much value life outside their social connections? (A counter-argument is that people have taken boats and sailed to strange and foreign continents throughout history, but maybe they represent a small fraction of personalities.) Were people much more likely to sign up in groups of 3 or more?

(d)

Perhaps the majority were simply repulsed by any thought of death itself; most of them spent their lives trying not to think about the fact that they would die, and found it extremely depressing or disorienting when forced to confront that fact.

This I find least intuitive, because cryonics would be a way to be in denial about death. They could imagine that the probability of successful awakening is as high as they want it to be. Do you think that they could have been repulsed or disoriented by something else like -- just speculating -- a primal fear of being a zombie / being punished for being greedy / the emotional consequences of having unfounded hope in immortality?

If you have an interest in answering any subset of these questions, thanks in advance.

Comment author: whowhowho 27 February 2013 02:33:08PM 1 point [-]

Some people identified so deeply with their current social situation that the idea of losing that situation (family, friends, standing, culture, etc.) was unthinkable.

What's the connotation of that? That they're deplorably irrational, or that ardent Cryonicists are weirdly asocial?

Comment author: TheAncientGeek 05 April 2014 03:14:09PM 1 point [-]

Waking up in a future society where you don't know anyone, your skills are useless, etc., is equivalent to exile, which is generally considered a punishment.

Comment author: RichardKennaway 05 April 2014 05:35:01PM 1 point [-]

Exile is only a punishment because it is worse than staying at home. When the alternative is being dead, most people will take exile, as demonstrated by refugees from war zones.

Comment author: Kutta 20 January 2010 08:55:16PM *  34 points [-]

Is that really just it? Is there no special sanity to add, but only ordinary madness to take away?

I think this is the primary factor. I've got a pretty amusing story about this.

Last week I met a relatively distant relative, a 15-year-old guy who's in a sports-oriented high school. He plays football, has not much scientific, literary or intellectual background, and is quite average and normal in most conceivable ways. Some TV program on Discovery was about "robots", and in a spontaneous 15-minute conversation that unfolded from it, I managed to explain to him the core problems of FAI without him getting stuck at any point of my arguments. I'm fairly sure that he had no previous knowledge of the subject.

First I made a remark in connection with the TV program's poetic question about what happens if robots become able to do most human work; I said that if robots get the low-wage jobs, humans would eventually get paid more on average, and the problem only arises when robots can do everything humans can and somehow end up actually doing all those things.

Then he asked if I think they'll get that smart, and I answered that it's quite possible in this century. I explained recursive self-improvement in two sentences, to illustrate why they could potentially get very, very smart in a short amount of time. I talked about the technology that would probably allow AIs to act upon the world with great efficiency and power. Next, he said something like "that's good, wouldn't AIs be a big help, like, they'll invent new medicine?" At this point I was pretty amused. I assured him that AIs indeed have great potential. I then talked very briefly about the most basic AI topics, providing the usual illustrations like Hollywood AIs, smiley-tiled solar systems and foolish programmers overlooking the complexity of value. I delineated CEV in a simplified "redux" manner, focusing on the idea that we should optimally just extract all relevant information from human brains by scanning them, to make sure nothing we care about is left out. "That would be a huge technical problem, scanning that many brains," he said.

And now:

"But if the AI gets so potent, would not it be a problem anyway, even if it's perfectly friendly, that it can do everything much better than humans, and we'll get bored?"

"Hahh, not at all. If you think that getting all bored and unneeded is bad, then it is a real preference inside your head. It'll be taken into account by the AI, and it will make sure it'll not pamper you excessively."

"Ah, that sounds pretty reasonable".

Now, all of this happened in the course of roughly 15 minutes. No absurdity heuristic, no getting lost, no objections; he just took everything I said at face value, assuming that I'm more knowledgeable on these matters, and I was in general convinced that nothing I explained was particularly hard to grasp. He asked relevant questions and was very interested in what I said.

Some thoughts why this was possible:

  • The guy belongs to a certain social stratum in Hungary, namely those who newly entered the middle class through the free entrepreneurship that became possible after the country switched to capitalism. At first, the socialist regime repressed religion and just about every human right; then it eased up, softened, and became what's known as the "happiest barrack". People became unconcerned with politics (which they could not influence) and religion (which was thought of as a highly personal matter that should not be taken into public), and just focused on their own wealth and well-being. I'm convinced that the guy's parents care zero about any religion, the absence of religion, doctrine, ideology or whatever. They just work to make a living and don't think about lofty matters, leaving their son ideologically perfectly intact. Just like my own parents.

  • Actually, AI is not intrinsically abstract or hard to digest; my interlocutor knew what an AI is, even if from movies, and had probably watched just enough Discovery to have a sketchy picture of future technologies. The mind design space argument is not that hard (he knew about evolution because it's taught in school, and he immediately agreed that AIs can be much smarter than humans, because if we wait a million years, maybe humans can also become much smarter, so it's technically possible), and the smiley-tiled solar system is an entertaining and effective explanation about morality. I think that Eliezer has put extreme amounts of effort into maximizing the chance that his AI ideas will get transmitted even to people who are primed or biased against AI or at risk of motivated skepticism. So far, I've had great success using his parables, analogies and ways of explanation.

  • My perceived status as an "intellectual" made him accept my explanations at face value. He's a football player in a smallish countryside city and I'm a serious college student in the capital (it's good he doesn't know how lousy a student I am). Still, I do not think this was a significant factor. He probably does not talk about AI among football players, but being male, he has some basic interest in futuristic or gadgety subjects.

In the end, it probably all comes down to lacking some specific kinds of craziness. Cryonics seemed normal at that convention Eliezer attended, and I'm sure every idea that is epistemically and morally correct can in principle be a so-called normal thing. Besides this guy, I've also had full success lecturing a 17-year-old metal drummer on AI and SIAI - and he was situated socioeconomically very similarly to the first guy, and he had no previous knowledge either.

Comment author: komponisto 20 January 2010 09:54:03PM 8 points [-]

in Hungary

Surprise level went down from gi-normous to merely moderate at this point.

Comment author: Kevin 22 January 2010 01:30:13PM *  5 points [-]

This is a great post, and I'd be interested in seeing you write out a fuller version of what you said to your relative as a top level post, something like "Friendly AI and the Singularity explained for adolescents."

Also, do you speak English as a second language? If so, I am especially impressed with your writing ability.

On a tangent, am I the only one that doesn't like the usage of boy, girl, or child to describe adolescents? It seems demeaning, because adolescents are not biologically children, they've just been defined to be children by the state. I suppose I'm never going to overturn that usage, but I'd like to know if there is some reason why I shouldn't be bothered by the common usage of the words for children.

Comment author: Kutta 22 January 2010 03:41:40PM *  7 points [-]

Yes, English is a second language for me, and I mostly learned it by reading things on the Internet.

Excuse me for the boy/guy confusion; I did not have any particular intent behind the wording. It was an unconscious application of my native language's tendency to refer to males under 18 with its equivalent of "boy". As I'm mostly a lurker, I have much less writing than reading experience; currently I make dozens of spelling/formulation corrections on longer posts, but some weirdly used words or mistakes are guaranteed to remain in the text.

Comment author: Kevin 22 January 2010 07:55:51PM 2 points [-]

The boy usage is correct in English as well; I just don't like that usage, but I'm out of the mainstream.

Comment author: Cyan 20 January 2010 08:59:24PM *  4 points [-]

I had essentially this conversation with my sister-in-law's boyfriend (Canadian art student in his early twenties) just about four weeks ago. Didn't get to the boredom question, but did talk a bit about cryonics. Took about 25 minutes.

Comment author: pwno 21 January 2010 07:12:23PM 3 points [-]

"Hahh, not at all. If you think that getting all bored and unneeded is bad, then it is a real preference inside your head. It'll be taken into account by the AI, and it will make sure it'll not pamper you excessively."

But wouldn't the knowledge that the AI could potentially do your work be psychologically harmful?

Comment author: denisbider 25 January 2010 04:21:57PM 6 points [-]

When you play an engaging computer game, does it detract from your experience knowing that all the tasks you are performing are only there for your pleasure, and that the developers could have easily just made you click an "I Win" button without requiring you to do anything else?

Comment author: Wei_Dai 25 January 2010 07:54:20PM 2 points [-]

I suspect that status effects might be important here. When we play a video game, we choose to do it voluntarily, and so the developers are providing us a service. But if the universe is controlled by an AI, and we have no choice but to play games that it provides us, then it would feel more like being a pet.

The AI could also try to take that into account, I suppose, but I'm not sure what it could do to alleviate the problem without lying to us.

Comment author: Vladimir_Nesov 26 January 2010 10:01:23PM *  7 points [-]

If you think of FAI as Physical Laws 2.0, this particular worry goes away (for me, at least). Everything you do is real within FAI, and free will works the same way it does in any other deterministic physics: only you determine your decisions, within the system.

Comment author: BrienneStrohl 14 June 2013 12:14:48AM 2 points [-]

There seem to be two ways for the AI thing to click. Some people click and go "Oh yeah, that makes sense," and then if you ask them about it they'll tell you they believe it's a problem, but they won't change their behavior very much otherwise. The other people click and go, "0_0 Wtf am I doing with my life???" and then they move to the Bay Area or New York and join the other people devoting their every resource to preventing paperclip maximizers and the like. Which type were your people, and what do you think causes the difference?

Comment author: pjeby 20 January 2010 06:34:15PM 127 points [-]

My best guess is that clickiness has something to do with failure to compartmentalize - missing, or failing to use, the mental gear that lets human beings believe two contradictory things at the same time. Clicky people would tend to be people who take all of their beliefs at face value.

One of the things that I've noticed about this is that most people do not expect to understand things. For most people, the universe is a mysterious place filled with random events beyond their ability to comprehend or control. Think "guessing the teacher's password", but not just in school or knowledge, but about everything.

Such people have no problem with the idea of magic, because everything is magic to them, even science.

An anecdote: once, when I still worked as a software developer/department manager in a corporation, my boss was congratulating me on a million-dollar project (revenue, not cost) that my team had just turned in precisely on time with no crises.

Well, not congratulating me, exactly. He was saying, "wow, that turned out really well", and I felt oddly uncomfortable. After getting off the phone, I realized a day or so later that he was talking about it like it was luck, like, "wow, what nice weather we had."

So I called him back and had a little chat about it. The idea that the project had succeeded because I designed it that way had not occurred to him, and the idea that I had done it by the way I negotiated the requirements in the first place -- as opposed to heroic efforts during the project -- was quite an eye opener for him.

Fortunately, he (and his boss) were "clicky" enough in other areas (i.e., they didn't believe computers were magic, for example) that I was able to make the math of what I was doing click for them at that "teachable moment".

Unfortunately, most people, in most areas of their lives treat everything as magic. They're not used to being able to understand or control anything but the simplest of things, so it doesn't occur to them to even try. Instead, they just go along with whatever everybody else is thinking or doing.

For such (most) people, reality is social, rather than something you understand/ control.

(Side note: I find myself often trying to find a way to express grasp/control as a pair, because really the two are the same. If you really grasp something, you should be able to control it, at least in principle.)

Comment author: arundelo 20 January 2010 10:49:36PM *  28 points [-]

Such people have no problem with the idea of magic, because everything is magic to them, even science.

Years ago, three other people and I were training for a tech support job. Our trainer was explaining something (the tracert command) but I didn't understand it because his explanation didn't seem to make sense. After asking him more questions about it, I realized from his contradictory answers that he didn't understand it either. The reason I mention this is that my three fellow trainees had no problem with his explanation, one even explicitly saying that she thought it made perfect sense.

Comment author: Eliezer_Yudkowsky 21 January 2010 12:24:24AM 31 points [-]

Huh. I guess that if I tell myself, "Most people simply do not expect reality to make sense, and are trying to do entirely different things when they engage in the social activity of talking about it", then I do feel a little less confused.

Comment author: pjeby 21 January 2010 02:15:03AM 30 points [-]

Most people simply do not expect reality to make sense

More precisely, different people are probably using different definitions of "make sense"... and you might find it easier to make sense of if you had a more detailed understanding of the ways in which people "make sense". (Certainly, it's what helped me become aware of the issue in the first place.)

So, here are some short snippets from the book "Using Your Brain For A Change", wherein the author comments on various cognitive strategies he's observed people using in order to decide whether they "understand" something:

There are several kinds of understanding, and some of them are a lot more useful than others. One kind of understanding allows you to justify things, and gives you reasons for not being able to do anything different....

A second kind of understanding simply allows you to have a good feeling: "Ahhhh." It's sort of like salivating to a bell: it's a conditioned response, and all you get is that good feeling. That's the kind of thing that can lead to saying, "Oh, yes, 'ego' is that one up there on the chart. I've seen that before; yes, I understand." That kind of understanding also doesn't teach you to be able to do anything.

A third kind of understanding allows you to talk about things with important sounding concepts, and sometimes even equations.... Concepts can be useful, but only if they have an experiential basis [i.e. "near" beliefs that "pay rent"], and only if they allow you to do something different.

Obviously, we are talking mostly about "clicking" being something more like this latter category of sense-making, but the author actually did mention how certain kinds of "fuzzy" understanding would actually be more helpful in social interaction:

However, a fuzzy, bright understanding will be good for some things. For example, this is probably someone who would be lots of fun at a party. She'll be a very responsive person, because all she needs to do to feel like she understands what someone says is to fuzz up her [mental] pictures. It doesn't take a lot of information to be able to make a bright, fuzzy movie. She can do that really quickly, and then have a lot of feelings watching that bright movie. Her kind of understanding is the kind I talked about earlier, that doesn't have much to do with the outside world. It helps her feel better, but it won't be much help in coping with actual problems.

Most of the chapter concerned itself with various cognitive strategies of detailed understanding used by a scientist, a pilot, an engineer, and so on, but it also pointed out:

What I want you all to realize is that all of you are in the same position as that ... woman who fuzzes images. No matter how good you think your process of understanding is, there will always be times and places where another process would work much better for you. Earlier someone gave us the process a scientist used -- economical little pictures with diagrams. That will work marvelously well for [understanding] the physical world, but I'll predict that person has difficulties understanding people -- a common problem for scientists. (Man: Yes, that's true.)

Anyway, that chapter was a big clue for me towards "clicking" on the idea that the first two obstacles to be overcome in communicating a new concept are 1) getting people to realize that there's something to "get", and 2) getting them to get that they don't already "get" it. (And both of these can be quite difficult, especially if the other person thinks they have a higher social status than you.)

Comment author: MichaelGR 21 January 2010 03:32:55PM 6 points [-]

Would you recommend that book? ("Using Your Brain For A Change")

Is the rest of it insightful too, or did you quote the only good part?

Comment author: pjeby 22 January 2010 02:33:50AM 11 points [-]

Is the rest of it insightful too, or did you quote the only good part?

There are a lot of other good parts, especially if you care more about practice than theory. However, I find that personally, I can't make use of many of the techniques provided without the assistance of a partner to co-ordinate the exercises. It's too difficult to pay attention to both the steps in the book and what's going on in my head at the same time.

Comment author: sketerpot 21 January 2010 02:28:12AM *  22 points [-]

I'm still confused, but now my eyes are wide with horror, too. I don't dispute what pjeby said; in retrospect it seems terribly obvious. But how can we deal with it? Is there any way to get someone to start expecting reality to make sense?

I have a TA job teaching people how to program, and I watch as people go from desperately trying to solve problems by blindly adapting example code that they don't understand to actually thinking and being able to translate their thoughts into working, understandable programs. I think the key of it is to be thrust into situations that require understanding instead of just guessing the teacher's password -- the search space is too big for brute force. The class is all hands-on, doing toy problems that keep people struggling near the edge of their ability. And it works, somehow! I'm always amazed when they actually, truly learn something. I think this habit of expecting to understand things can be taught in at least one field, albeit painfully.

Is this something that people can learn in general? How? I consider this a hugely important question.

Comment author: John_Maxwell_IV 21 January 2010 07:31:49AM 6 points [-]

I wouldn't be surprised if thinking this way about computer programs transfers fairly well to other fields if people are reminded to think like programmers or something like that. There are certainly a disproportionate number of computer programmers on Less Wrong, right?

Comment author: wedrifid 21 January 2010 07:44:37AM 4 points [-]

There are certainly a disproportionate number of computer programmers on Less Wrong, right?

And those that aren't computer programmers would display a disproportionate amount of aptitude if they tried.

Comment author: John_Maxwell_IV 21 January 2010 09:08:25AM 9 points [-]

Certainly; I think this is a case where there are 3 types of causality going on:

  1. Using Less Wrong makes you a better programmer. (This is pretty weak; for most programmers, there are probably other things that will improve your programming skill a hundred times faster than reading Less Wrong.)
  2. Improving as a programmer makes you more attracted to Less Wrong.
  3. Innate rationality aptitude makes you a better programmer and more attracted to Less Wrong. (The strongest factor.)
Comment author: TheOtherDave 22 November 2010 01:33:32AM 2 points [-]

Is there any way to get someone to start expecting reality to make sense?

If I want to condition someone into applying some framing technique T, I can put them in situations where their naive framing Fn obtains no reward and an alternate framing Fa does, and Fa is a small inferential step away from Fn when using T, and no framings easily arrived at using any other technique are rewarded.

The programming example you give is a good one. There's a particular technique required to get from a naive framing of a problem to a program that solves that problem, and until you get the knack of thinking that way your programs don't work, and writing a working program is far more rewarding than anything else you might do in a programming class.

Something similar happens with puzzle-solving, which is another activity that a lot of soi-disant rationalists emphasize.

But... is any of that the same as getting people to "expect reality to make sense"; is it the same as that "click" the OP is talking about? Is any of it the same as what the LW community refers to as "being rational"?

I'm not sure, actually. The problem is that in all of these cases the technique comes out of an existing scenario with an implicit goal, and we are trying to map that post facto to some other goal (rationality, click, expecting reality to make sense).

The more reliable approach would be to start from an operational definition of our goal (or a subset of our goal, if that's too hard) and artificially construct scenarios whose reward conditions depend on spanning inferential distances that are short using those operations and long otherwise... perhaps as part of a "Methods of Rationality" video game or something like that.

Comment author: Aurini 21 January 2010 03:57:30AM 29 points [-]

During my military radio ops course, I realized that the woman teaching us about different frequencies literally thought that 'higher' frequencies were higher off the ground. Like you, I found her explanations deeply confusing, though I suspect most of the other candidates would have said it made sense. (Despite being false, this theory was good enough to enable radio operations - though presumably not engineering).

Thankfully I already had a decent grounding in EM, otherwise I would have yet more cached garbage to clear - sometimes it's worse than finding the duplicate mp3s in my music library.
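For reference, the relation her course actually turned on: the frequency and wavelength of a radio wave are tied together by the speed of light, and height above the ground appears nowhere in it.

```latex
% Frequency f and wavelength \lambda are related by the speed of light c;
% altitude does not enter into the relation.
c = f\lambda, \qquad c \approx 3 \times 10^{8}\ \mathrm{m/s}
```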

Comment author: bogus 21 January 2010 08:02:52PM *  4 points [-]

Our trainer was explaining something (the tracert command) but I didn't understand it because his explanation didn't seem to make sense.

Could you clarify? To properly understand how traceroute works, one would need to know about the TTL field in the IP header (and how it's normally decremented by routers) and the ICMP Time Exceeded message. But I'm not sure that a tech support drone would be expected to understand any of these.
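For readers who want to see that mechanism concretely, here is a minimal sketch in Python - illustrative only, assuming a Unix-like system and root privileges for the raw ICMP socket:

```python
# Minimal illustrative traceroute (a sketch, not production code).
# Send UDP probes with increasing IP TTL; the router that decrements the
# TTL to zero answers with an ICMP Time Exceeded message, and the source
# address of that reply identifies the hop.
import socket

def traceroute(dest_name, max_hops=30, port=33434, timeout=2.0):
    dest_addr = socket.gethostbyname(dest_name)
    for ttl in range(1, max_hops + 1):
        recv_sock = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                                  socket.IPPROTO_ICMP)  # needs root
        send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        recv_sock.settimeout(timeout)
        send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        send_sock.sendto(b"", (dest_addr, port))
        try:
            _, (hop_addr, _) = recv_sock.recvfrom(512)
        except socket.timeout:
            hop_addr = None
        finally:
            send_sock.close()
            recv_sock.close()
        print(ttl, hop_addr or "*")
        if hop_addr == dest_addr:
            break
```

Note that everything is observed from the source machine, which bears on the round-trip-time confusion discussed in the reply below.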

Comment author: arundelo 22 January 2010 12:13:17AM *  24 points [-]

To properly understand how traceroute works one would need to know about the TTL field

I did learn about this on my own that day, but the original confusion was at a quite different level: I asked whether the times on each line measured the distance between that router and the previous one, or between that router and the source. His answer: "Both." A charitable interpretation of this would be "They measure round trip times between the source and that router, but it's just a matter of arithmetic to use those to estimate round trip times between any two routers in the list" -- but I asked him if this was what he meant and he said no. We went back and forth for a while until he told me to just research it myself.

Edit: I think I remember him saying something like "You're expecting it to be logical, but things aren't always logical".

Comment author: denisbider 25 January 2010 03:43:52PM 29 points [-]

Jesus Christ. "Things aren't always logical." The hallmark of a magic-thinker. Of course everything is always logical. The only case it doesn't seem that way is when one lacks understanding.

Comment author: MichaelGR 20 January 2010 11:53:05PM 1 point [-]

Looks like he was just repeating various teachers' passwords.

Comment author: AndyWood 20 January 2010 07:24:01PM *  24 points [-]

This is overwhelmingly how I perceive most people. This in particular: 'reality is social'.

I have personally traced the difference, in myself, to receiving this book at around the age of three or four. It has illustrations of gadgets and appliances, with cutaway views of their internals. I learned, almost as soon as I was capable of learning, that nothing is a mysterious black box: things that seem magical have internal detail, and there are explanations for how they work. Whether or not I had anything like a pre-existing disposition that made me love and devour the book in the first place, I still consider it to have had a bigger impact on my whole world view than anything else I can remember.

Comment author: VAuroch 31 December 2013 07:35:51PM 1 point [-]

I got Macaulay's The Way Things Work (the original) at a slightly higher age. I suspect a big reason I became a computer scientist was the joy of puzzling through the adder diagrams and understanding why they worked.
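For anyone who never met those diagrams: the circuit is small enough to sketch in code. A hypothetical Python rendering of the one-bit full adder such diagrams build from XOR, AND and OR gates:

```python
# One-bit full adder built from basic gates, the circuit those book
# diagrams depict: two XORs produce the sum bit, AND/OR the carry.
def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    s = a ^ b ^ carry_in                        # sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))  # carry bit
    return s, carry_out

# Chaining full adders bit by bit adds whole numbers:
def add4(x: int, y: int) -> int:
    result, carry = 0, 0
    for i in range(4):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result | (carry << 4)

assert add4(0b0101, 0b0011) == 0b1000  # 5 + 3 == 8
```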

Comment author: FourFire 23 January 2014 09:14:34PM 1 point [-]

This is mine, which I received at around age six. I don't recall how many tens of times I read and reread those pages.

Comment author: Stuart_Armstrong 20 January 2010 07:54:51PM 19 points [-]

This is worth an entire post by itself. Cheers.

Comment author: wedrifid 20 January 2010 11:16:40PM 5 points [-]

Yes, please!

Comment author: gwern 20 January 2010 08:07:11PM 14 points [-]

So I called him back and had a little chat about it. The idea that the project had succeeded because I designed it that way had not occurred to him, and the idea that I had done it by the way I negotiated the requirements in the first place -- as opposed to heroic efforts during the project -- was quite an eye opener for him.

The Inside View says, 'we succeeded because of careful planning of X, Y, and Z, and our own awesomeness.' The Outside View says, 'most large software projects fail, but some succeed anyway.'

Comment author: pjeby 20 January 2010 08:47:01PM 13 points [-]

The Inside View says, 'we succeeded because of careful planning of X, Y, and Z, and our own awesomeness.' The Outside View says, 'most large software projects fail, but some succeed anyway.'

What makes you think it was the only one, or one of a few out of many?

The specific project was only relevant because my bosses prior to that point in time already implicitly understood that there was something my team was doing that got our projects done on time when others under their authority were struggling - but they attributed it to intelligence or skill on my part, rather than our methodology/philosophy.

The newer boss, OTOH, didn't have any direct familiarity with my track record, and so didn't attribute the success to me at all, except that obviously I hadn't screwed it up.

Comment author: thomblake 20 January 2010 07:04:37PM 4 points [-]

Not usually a fan of your thoughts, but these seem right on the money.

Comment author: MrHen 20 January 2010 06:43:51PM 20 points [-]

Mmm... I am a click-hunter. I keep pestering a topic and returning over and over until I feel it click. I can understand something well enough to start accurately predicting results but still refuse to be satisfied until I feel it click. Once it clicks I move on.

You and I may be describing different types of clicks, however. Here is a short list of things I have observed about the clicks in my life.

  • The minor step from not having a subject click to having it click is enormous. It is the single greatest leap in knowledge I will likely experience in a subject matter. I may learn more in one click than with a whole semester of absorbing knowledge from a book.

  • Clicks don't translate well. It is hard to describe the actual path up to and through a click.

  • What causes a subject to click for me will not cause it to click for another. Clicks seem to be very personal experiences, which is probably why it is so hard to translate.

  • Clicks tend to be most noticeable with large amounts of critical study. I assume that day-in-day-out clicks are not terribly noticeable but I suspect that they exist. A simple example I can think of is suddenly discovering a quicker route through town.

  • Clicks do not require large amounts of critical study, however, as I have had clicks drop on me from nowhere, with all of the answers to a particular problem lying around in plain sight.

  • Once a click happens, the extra perspective appears obviously true. Clicks are often accompanied with phrases like, "Oh!" or "Why didn't I see this before?!"

  • Even for complicated subjects, it takes trivial amounts of conversation to learn if the subject has clicked in another person. Once you "get it," other people who get it know you got it.

  • Some people are much better at producing clicks in others.

  • Some people have no idea what a click is and have never felt one. Some of these people are very smart, but I seem to notice that they have a weakness for abstract thought or are more likely to be satisfied with stopping once they have accurate predictors. Perhaps learning why the model ended up being that particular model is extraneous and not needed to predict and so is an unwanted extra step.

  • Mind-dumping helps things click. I find that if I just blah on a page, start over and blah again, and repeat the process, a click will probably happen at some point in the cycle.

  • There are topics that have not clicked for me yet but I suspect they would if I kept pushing them.

  • Perspectives from other people help clicks happen. Listening to someone else struggle to understand the concept helps clicks happen.

Comment author: Sly 20 January 2010 11:33:02PM *  6 points [-]

I found this:

The minor step from not having a subject click to having it click is enormous.

To be very true.

Many times in my classes I have barely grasped what the professor was saying throughout the year, only to have the subject click later when a fellow student explained it to me in a way that grokked. Whenever this happens, I feel like I have learned more in that brief period than in the entire class before then.

Comment author: sketerpot 21 January 2010 02:40:46AM *  7 points [-]

This is actually how I approach difficult textbooks. I read through as much as I can before I just totally collapse in confusion, look up related information on the internet, take a few days off, and then go back through from the beginning. The textbook usually makes vastly more sense then, as all the disjointed pieces come together in a way that's obvious in retrospect.

This is how I was able to read through and understand an algorithms textbook in junior high, even though it terrifies and befuddles people in their third year of college. It's just not that hard if you attack it in multiple passes, because multipass studying is much more likely to get you to the click of understanding.

Comment author: Jonathan_Graehl 20 January 2010 11:12:12PM 3 points [-]

More so than with other descriptors of internal mental state, I wonder whether people saying "click" mean the same thing.

I feel quite satisfied when I change my mind as a result of a new insight, but also a little hesitant to consider the case closed until time passes - I feel apprehensive that another insight+reversal may follow in the consequent mental shifting. Is that a "click"?

Comment author: MrHen 20 January 2010 11:57:07PM 2 points [-]

Maybe, but it doesn't really match my feelings when I get a click. This doesn't mean you or I have better or worse clicks. It could just mean we react to them differently.

I think if there is a difference between your click and mine it is that my clicks tend to be reactions to things generally considered to be factual or true but something I have trouble understanding. Clicks tend not to be brand new discoveries but rather a full, complete understanding of someone else's discovery. The easiest example is from mathematics. A complicated piece of linear algebra is True but I don't fully Get It until it clicks.

Comment author: brian_jaress 21 January 2010 11:06:48AM 34 points [-]

There's this magical click that some people get and some people don't, and I don't understand what's in the click. There's the consequentialist/utilitarian click, and the intelligence explosion click, and the life-is-good/death-is-bad click, and the cryonics click.

I think it's a mistake to put all the opinions you agree with in a special category. Why do some people come quickly to beliefs you agree with? There is no reason, except that sometimes people come quickly to beliefs, and some beliefs happen to match yours.

People who share one belief with you are more likely to share others, so you're anecdotally finding people who agree with you about non-cryonics things at a cryonics conference. Young people might be more likely to change their mind quickly because they're more likely to hear something for the first time.

Comment author: gregconen 22 January 2010 01:11:14AM 22 points [-]

More strongly, is there any reason to believe that people are more likely to "click" to rational beliefs than irrational ones?

As an example, papal infallibility once clicked for me (during childhood religious education), which I think most people here would agree is wrong, even conditioned on the existence of God.

Comment author: PeterS 21 January 2010 07:51:53PM 3 points [-]

People who share one belief with you are more likely to share others

True. In this case, once you get the consequentialist/utilitarian "click", you're more likely to come down with the rest of the clicks - the examples he listed are highly entangled.

Comment author: steven0461 20 January 2010 07:22:00PM *  8 points [-]

There's also the valuable trait where, between being presented with an argument and going "click", one's brain cleanly goes "duhhh", rather than producing something that sounds superficially like reasoning.

Comment author: thomblake 20 January 2010 07:44:39PM 4 points [-]

I greatly value that one. I'm in the (apparently small) group of people who, when presented with a statistics/probability problem, will say, "Clearly the solution involves math. The answer is to consult someone who knows how to solve the problem." rather than come up with the wrong answer that "feels right" or alternatively knowing how to find the right answer.

Comment author: wedrifid 20 January 2010 11:11:02PM 3 points [-]

I'm in the (apparently small) group of people who, when presented with a statistics/probability problem, will say, "Clearly the solution involves math. The answer is to consult someone who knows how to solve the problem."

Does that group also include those for whom 'consult someone who knows' wouldn't occur until 'learn how to do it' was thoroughly ruled out?

Comment author: Corey_Newsome 20 January 2010 07:58:18PM 3 points [-]

Hm, interesting point. I'm not sure I have this trait, because instead of thinking "duhhh" when I hear a well-reasoned and compelling argument, I like to make a few sanity checks and run it past my skepticism meter before allowing the clicking mechanism to engage. I wonder if that's ever produced results; at any rate, I feel like it's my duty to keep good epistemic hygiene, though my skeptical reasoning might be superficial. For this reason it normally takes a few seconds before I allow things to click, which slows conversation a tad. Perhaps I should tentatively accept the premises of hypotheses first and then be skeptical later, when I have time and resources?

Also, I wonder to what extent the desire to be skeptical is more related to the desire not to appear gullible than to a desire to find truth.

Comment author: jimmy 20 January 2010 06:45:42PM *  8 points [-]

I've often described learning in terms of 'clicking'.

It's most memorable to me when thinking about hard problems that I can't solve right away. It feels like something finally puts the last piece of the puzzle in place and for the first time I can 'see' the answer.

When trying to teach people, I've noticed that some people have a very obvious 'click response'- they'll light up at a distinct moment and just get it from then on.

Other people show no sign of this, yet claim to learn. I still haven't figured out what is going on here. The possibilities I can think of are: 1) their learning process involves no clicking; 2) they hide the click to make it sound like they've known it all along, because they'd be embarrassed at how late their click is; 3) they're faking it, and don't really get it.

For me though, learning about cryonics and the intelligence explosion idea didn't seem very 'click-like', since they just seemed obviously true the first time I heard about them, rather than there being a delay that makes the evaporation of confusion more satisfying. I suspect the learning mechanism is actually the same, though.

Comment author: pjeby 20 January 2010 09:15:59PM 12 points [-]

Other people show no sign of this, yet claim to learn. I still haven't figured out what is going on here. The possibilities I can think of are: 1) their learning process involves no clicking; 2) they hide the click to make it sound like they've known it all along, because they'd be embarrassed at how late their click is; 3) they're faking it, and don't really get it.

How about 4) they don't really get it, and just think they do, or 5) they don't realize there's anything to "get" in the first place, because they think knowledge is a mysterious thing that you memorize and regurgitate. I think that's actually the most common case, but the others are perhaps plausible as well.

Comment author: wedrifid 20 January 2010 11:12:35PM 5 points [-]

Your 5) seems the best fit.

Comment author: JackChristopher 21 January 2010 02:14:53AM 2 points [-]

"people have a very obvious 'click response'- they'll light up at a distinct moment and just get it from then on."

Here's the facial expression I've noticed: head tilts upward but off to the side, eyes rolling upward, followed by a quick head nod downward, as if to say "Yes" — it's almost always followed by an apt question.

I do this. But of course someone could fake it. One sign is that they add nothing to the conversation after it. You'll notice that. If you aren't sure, quiz them.

Comment author: Rain 25 January 2010 07:34:58PM *  18 points [-]

Thank you for writing this post. It's one of the topics that has kept me from participating in the discussion here - I click on things very often, as a trained and sustained act of rationality, and often find it difficult to verbalize why I feel I am right and others wrong. But when I feel that I have clicked, then I have very high confidence in my rightness, as determined by observation and many years of evidence that my clicks are, indeed, right.

I use the phrase, "My subconscious is way smarter than I am," to describe this event. My best guess is that my subconscious has built-in pathways to notice logical flaws, lack of evidence, and has already chewed through problems over many years of thought ("creating a path"?), and I have trained myself to follow these "feelings" and form them into conscious words/thoughts/actions. It seems to be related to memory and number of facts in some ways, as the more reading I have done on a topic, the better I'm able to click on related topics. I do not use the word "feeling" lightly - it really does feel like something, and it gives me a sort of built-in filter.

I click on people (small movements, small statements leading to huge gains in understanding, to the point where I can literally say what they're thinking from the slightest gesture), I click on tests (memorization), I click on big topics (X-risks, shut-up-and-multiply), philosophy, etc. Quantum mechanics I have failed to click on at all, and have been avoiding it.

What I've found is that my click decisions, when thought is applied, have dozens of reasons behind them, all unrealized at the time I was able to make the decision. Writing them all out afterward makes for an incredibly powerful argument in favor of my decision, and oftentimes shows that I really did weigh all the positives and negatives, just not in a rigorous 'proof'. Like not showing your work on a math problem, but still being able to look at the numbers and know the result.

One of the things I had to eliminate for my clickiness to become truly powerful was the desire to hold onto current beliefs. Openness to change is essential to letting the click take over your thoughts and lead you in a new direction. People get frustrated when arguing with me on occasion, as I will be a strong proponent of a specific position, then they present to me a single fact that demolishes it, and I will immediately begin arguing a new position using that fact as support. They typically laugh and shake their head, as if I had never supported my previous position and am now just arguing for argument's sake, when in reality, I clicked on the new fact and realized the implications, adjusting my beliefs accordingly, all in an instant.

Another thing I've noticed I do, which helps greatly with click thinking, is absorbing a huge amount of information on a specific topic. When I need to make an important decision, I get books from the library, I hit up Google and click links out to 25+ pages of results, I check for authoritative forums and lurk there, reading, learning, rarely asking questions but picking up as much as I can nonetheless. I did this for WoW, for audio tech, for financial investment, for Singularity and rationality topics... and after I've spent a few months integrating myself with that information center, I'm able to click like crazy on all the important things. Some topics, once I've clicked, I drop and move on; others I continue to practice and read, like first-order philosophy. Thought experiments (roleplaying, etc.) tend to help if the topic is complex enough. I play a character in my head, or in a game, for a month or so; then, when I need to ask a really important question about the topic the character was designed around, I just click to the answer without consciously thinking, because I just know. For clicking on people, this involves observation and spending time around that person, and asking lots of questions about what they're thinking (I've gotten very good at asking without seeming annoying).

In addition to being open to change and gorging on data, I've found it very important to trust yourself. And by this I don't mean 'trust that you're making the right decision', but almost 'trust that other person who used your name at the time but whom you don't even remember or think like any more'. In a sense, trust Rain-2007, even though I am now Rain-2010. I realize this runs counter to Eliezer's standard, "Do not listen to Eliezer-2001 - he was wrong," but that's not how I mean it. Instead, I'm saying that, at the point of information-glut and focus on a topic, I'm far more clickable than at some distant point in the future. I should trust that decision unless I'm willing to go through the same process of gather, read, learn, focus, and decide anew. That past click will be more right than any decision I make removed from that focus. It also means that you should trust your "instincts" - heresy, I know, considering the inherent biases, but a click really does "feel" different from a template-bias response.

One other thing I do, but which I'm not sure contributes to clickability, is avoid deep jargon or too-specific thinking. The click seems related to generalized thought processes rather than specific verbiage. So, for example, rather than reading and learning what Kolmogorov complexity is (as I said, I'm avoiding this stuff), I'd rather do a roleplaying exercise where my character exists at various technology levels, and generalize the universe around them. This step may seem at odds with the information-glut step, but I combine the two - when reading, every link I check often has only one or two sentences, maybe a paragraph or a whole page, that I actually "use", as in try to retain. The rest of it, I consider useless/worthless, and discard as best I can.

Which reminds me, I also ignore / forget information in order to more carefully focus on what I'm thinking about in the present (this is part of why I have to trust my past self so much). I try not to overload myself with knowledge or memories that don't help me make decisions now or in the anticipated future. Some people find this frustrating, as I don't remember pushing them down the stairs when I was 10; but that child was so different from who I am now that I found no point in remembering much of it.

I think the click is a result of years of gathering information and thinking on (potentially general) topics, the ability to rapidly change, the ability to recognize how it feels to click, and to place the trust in that feeling that it deserves. It seems to be a trained skill, starting with (prerequisite of?) a good memory.

The one thing I'm getting out of writing this post is that the 'click' you describe is, in my opinion, not simple or a single effect, but rather a complex interaction of events, abilities, and predispositions. It may not be reproducible or trainable.

Sorry this post is late. Another part of the information-gathering strategy is to let conversations resolve themselves (wait a few days after a post) and read it all at once so points and counterpoints are all neatly together at the same time.

Comment author: Rain 26 January 2010 03:06:30PM *  6 points [-]

Other phrases to describe "click": intuition, grok, cached understanding, pattern recognition.

I especially like the last one, pattern recognition - click feels a lot like seeing a giant web of things, and how it all fits together, or how one piece completes that image - using a sort of mental glyph or a simple phrase to represent complex, sophisticated ideas as a singular, understandable entity.

Comment author: Kevin 21 January 2010 08:20:35AM *  17 points [-]

I had a funny click with my girlfriend earlier this evening. I suggested that she should sign up for cryonics at some point soon, and I was surprised that she was against the idea. In response to her objections, I explained it was vitrification and not freezing, etc. etc. but she wasn't giving me any rational answers, until she said that she really wanted to see the future, but she also wanted to watch the future unfold.

She thought by cryonics that I meant right now, Futurama style. After a much needed clarification she immediately agreed that cryonics was a good idea.

Comment author: MichaelGR 22 January 2010 05:53:47PM *  15 points [-]

So based on her understanding of what you said, she was actually right to object.

I guess the lesson here is that we must learn not to skip steps when explaining unconventional ideas: there is a risk that people will object to things that aren't even part of the proposal, and a further risk that we won't notice that's what is going on. (In your case, you noticed it and corrected the situation, but what if there had been a huge fight and the subject had never been brought up again? That would have been a sad reason not to sign up for cryonics...)

Comment author: Vladimir_Nesov 22 January 2010 05:36:36PM *  6 points [-]

Now this is disturbing: she assumes by default that you are suggesting freezing her alive, "to see the future". Not the kind of "click" we'd be looking for: "everything is possible" is actually worse than absurdity-heuristic-enabled epistemic hygiene.

Comment author: Kevin 22 January 2010 08:18:26PM *  6 points [-]

I think her default understanding was more like "Kevin is really morally depraved and probably not serious anyways".

It was funnier in the real world; I sucked away most of the humor with my written retelling.

Comment author: DavidNelson 21 January 2010 07:03:38AM 6 points [-]

and with molecular nanotechnology you could go through the whole vitrified brain atom by atom and do the same sort of information-theoretical tricks that people do to recover hard drive information after "erasure" by any means less extreme than a blowtorch...

As far as I know, the idea that there are organizations capable of reading overwritten data off of a hard drive is an urban legend. See http://www.nber.org/sys-admin/overwritten-data-gutmann.html

Comment author: Eliezer_Yudkowsky 21 January 2010 08:01:56PM 3 points [-]

I think I saw that paper before, either on here or on Hacker News, and it was replied to by someone who claimed to be from a data-recovery service that could and did use electron microscopes to retrieve the info, albeit very expensively.

EDIT: http://news.ycombinator.com/item?id=511541

Comment author: DavidNelson 21 January 2010 09:27:24PM 10 points [-]

I would be more inclined to take ErrantX seriously if he said what company he works for, so I could do some investigation. You would think that if they regularly did this sort of thing, they wouldn't mind a link. The "expensive" prices he quotes actually seem really low. DriveSavers charges more than $1000 to recover data off of a failed hard drive, and they don't claim to be able to recover overwritten data. Given all of that, I tend to think he is either mistaken (he does say it isn't really his field) or lying.

Comment author: Douglas_Knight 22 January 2010 02:52:30AM 6 points [-]

The "expensive" prices he quotes actually seem really low.

Plus, he says it would take them a month. How could they possibly charge only $1000 for a month of anything, even computer time?

Comment author: Christian_Szegedy 22 January 2010 03:06:52AM *  5 points [-]

I agree.

I remember that c't, an excellent German computer magazine, ran a test around 2005 with once-zeroed hard drives: they sent them to a lot of companies to recover the data, but all of them refused to give a quote, saying that the task was impossible.

Most of these companies manage to recover data from technically defective hard drives after mechanical failures, and it costs several thousand dollars, but none of them were willing to help in the case of zeroed-out drives.

Comment author: Christian_Szegedy 20 January 2010 11:31:02PM 16 points [-]

Interesting. Eliezer took some X years to recognize that even "normal-looking" persons can be quick on the uptake? ;)

My best guess is that clickiness has something to do with failure to compartmentalize - missing, or failing to use, the mental gear that lets human beings believe two contradictory things at the same time. Clicky people would tend to be people who take all of their beliefs at face value.

I guess it has a somewhat deeper explanation than that. I think clickiness happens when two people have managed to build very similar mental models and are ready to manipulate and modify those models incrementally. Once the models are roughly in sync, it takes very little time to communicate, and just slight hints can create the right change in the conversation partner's model, if he is ready to update.

I think a lot of us have been trained hard to stop model-building at certain points. There are definite differences between people in how much they care about the taboos society imposes on them, which can result in mental red lights: "Don't continue building that model! It's dangerous!" This is what I think Eliezer's notion of "compartmentalization" refers to.

A lot of intelligent people have far fewer brakes: they are generally used to model-building and do it uninhibitedly, can maintain several models at once, and have fun doing so.

But in general, most people are lazy model builders: they build their models once and stick to them, or just find rationalizations to cut down on the "mental effort" of generating and incrementally updating models.

However, I don't think that brakes are the only reason people don't click. If I just don't know how to build the right model (lacking the know-how, experience, etc.), I won't click regardless.

For example, I am not a musician, and I find it hard to have conversations with experienced musicians about music. Musicians among themselves can very quickly build models of musical concepts and click with each other. I can try to be maximally open-minded; I still won't manage to click. I simply lack the necessary skills to build the required models.

Comment author: Tyrrell_McAllister 20 January 2010 06:23:26PM *  14 points [-]

What the hell is in that click?

I'm not seeing that there's anything so mysterious here. From your description, to click is to realize an implication of your beliefs so quickly that you aren't conscious of the process of inference as it happens. You add that this inference should be one that most people fail to draw, even if the reasoning is presented to them explicitly.

I expect that, for this to happen, the relevant beliefs must happen to be

  1. cached in a rapidly-accessible part of your mind,

  2. stored in a form such that the conclusion is a very short inferential step beyond them, and

  3. free of any obstructing beliefs.

By an obstructing belief, I don't mean a belief contradicting the other beliefs. I mean a belief that lowers your estimate of the conditional probability of the conclusion that you would otherwise have reached.

When you are trying to induce other people to click, you can do something about (1) and (2) above. You can format the relevant beliefs in the most transparent way possible, and you can use emphasis and repetition to get the beliefs cached.

But if your interlocutors still fail to click, it's probably because (3) didn't happen. That is, it's probably just a special case of the usual reason why people fail to be convinced by an argument, even when they grant the premises. People fail to be convinced because they have other beliefs, which, when taken into account, seem to lower the overall probability of your conclusion. So, typically, a failure to click is no more mysterious than a general failure to be convinced by arguments.
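To put that in rough probability terms (a sketch, with purely illustrative numbers): let $A$ be the argument's premises, $C$ its conclusion, and $B$ an obstructing belief. A listener can grant $A$ and still fail to click whenever something like

\[ P(C \mid A) = 0.9 \qquad \text{but} \qquad P(C \mid A, B) = 0.2, \]

holds for them: conditioning on the extra belief $B$ drags the conclusion's probability back down, with no logical inconsistency anywhere.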

On a more cynical note, I'm pretty sure that the "click" is almost the only decision procedure for the vast majority of people*. When a question arises, one answer will seem to be manifestly the right answer, and the rest will seem obviously wrong. When they change their mind, it will be because another answer abruptly seems to be manifestly the right answer. If no answer clicks for them, they will just chalk the problem up as "mysterious".


*Here I'm using "click" to include inferences that aren't necessarily rare, and which might in fact be very common.

Comment author: MrHen 20 January 2010 07:02:19PM 4 points [-]

I like this comment but do not know if I agree with it or not. The upvote was for making me stop and think long and hard about the subject. The wheels are still spinning and no conclusion is imminent, but thank you for the thoughts. :)

Comment author: MichaelGR 22 January 2010 05:37:49AM *  5 points [-]

I think that this post should be linked prominently here for those who haven't been around on LW/OB for long and who might not follow all the back-links:

http://lesswrong.com/lw/wq/you_only_live_twice/

Comment author: Rlive 20 January 2010 08:11:47PM *  5 points [-]

I think "clicky" people are people who are not emotionally vested in their beliefs.

Many people need their beliefs to be true in order to feel like they are valuable and worthwhile people.

Clicky people simply don't need that (or at least need that to a lesser extent). Instead, clicky people need to be right whether or not that means they were initially wrong.

Comment author: MichaelGR 20 January 2010 08:17:27PM *  4 points [-]

It sounds like it might have something to do with what Carol Dweck describes as the "Growth Mindset", as opposed to the "Fixed Mindset".

Here's something I wrote about it a couple years ago based on a Nigel Holmes graphic (still one of the most popular posts on my blog):

http://michaelgr.com/2007/04/15/fixed-mindset-vs-growth-mindset-which-one-are-you/

Comment author: ShannonVyff 21 January 2010 01:24:23PM 4 points [-]

Hah, "Magic Click" --I see that all the time, people who don't know cryonics is real-or have not met anyone actually signed up. Left and right, every day kids and adults think it is a "cool" idea, they express interest--but they don't go through the steps to become a signed cryonicist. I'm not sure what causes one person to go through all the paperwork and another just thinks they might want to do that some day--from what I've seen, people who sign up for cryonics have had a brush with death and seem more motivated--it could come down to a person's personality and their level of planning things in life too. Thank you for your comment bshock. Some religions could take the stance that you must be signed up for cryonics, because if it works then it is your purpose to do more good for your faith. I could imagine how frustrating it would be to work at Alcor, the paperwork truly needs to be simplified-but cryonics is not yet large enough to be safely accepted, they have to cover every angle possible that could come up. CI is a lot easier to sign up with paperwork, but they don't have a higher success rate--I think that you are right-at first cryonics is fun because it seems like a way to escape death possibly, then finalizing it is acknowledging that you will die.

Comment author: komponisto 20 January 2010 10:36:03PM *  10 points [-]

This post, in addition to being a joy to read, contains one particular awesome insight:

My best guess is that clickiness has something to do with failure to compartmentalize - missing, or failing to use, the mental gear that lets human beings believe two contradictory things at the same time. Clicky people would tend to be people who take all of their beliefs at face value.

Here's some confirmation: I must have at least some clickiness, since I "got" the intelligence explosion/FAI/transhumanism stuff pretty much immediately, despite not having been raised on science fiction.

And, it turns out: I hate, hate, HATE compartmentalization. Just hate it -- in pretty much all its forms. For example, I have always despised the way schools divide up learning into different "classes", which you're not supposed to relate to each other. (It's particularly bad at the middle/high school level, where, if you dare to ask why you shouldn't be able to study both music and drama, or both French and Spanish, they look at you with a puzzled expression, as though such thoughts had occurred to no human being before.) I hate C.P. Snow's goddamned "Two Cultures". I hate the way mathematicians in different areas use the exact same concepts and pretend they don't by employing different notation and terminology. I hate the way music theorists invent separate theoretical universes for different historical periods.

In general, don't get me started on "separate magisteria"....

Eliezer, you're seriously onto something here.

Comment author: komponisto 21 January 2010 02:10:46PM 14 points [-]

I propose the term "clack" to denote the opposite of "click" -- that is, resisting an obviously correct conclusion.

Comment author: wedrifid 21 January 2010 02:32:37PM 8 points [-]

That's more polite than most of the terms I tend to use.

Comment author: loqi 20 January 2010 06:32:57PM 9 points [-]

I've met very few people for whom the concept "simulating consciousness is analogous to simulating arithmetic" is obvious-in-retrospect, even among atheists. A special case of a "generalized anti-zombie" click?

life-is-good/death-is-bad

Widespread failure to understand this most basic principle ever drives me crazy and leaves me feeling physically sick. I'd appreciate efforts to raise the sanity waterline for this reason alone.

Comment author: CronoDAS 21 January 2010 03:41:57AM *  5 points [-]
Comment author: loqi 21 January 2010 10:14:24PM *  3 points [-]

What if life isn't good?

I'm not quite sure how to assign meaning to a normative counterfactual. Asserting "life is bad" is tantamount to declaring war on existence. Humans have massive, sprawling goal complexes, most of which seem to be predicated on existence. It seems extremely implausible that such goals could be consistent with a preference for non-existence. Consciously stroking yourself into a nihilistic fervor says more about the flexibility of your conscious perception than it does about the ultimate "goodness" of life (related Nesov comment).

It's the "most basic principle ever" because:

  • It's implicit in virtually all other normative principles.
  • Most people have no intention nor desire to declare war on everyone else.

But feel free to let me know if those don't apply to you, so I can file you away as "pure evil".

What if other people dying is good for the survivors?

This is a narrower question that requires answering other questions like "which life?" and "how good?". It can't contradict the premise of life being good, it can only attempt to make it more precise.

Comment author: JamesAndrix 21 January 2010 06:58:34AM 2 points [-]

What if life isn't good?

That appears not to be the case. In general, we want to live and want others to live. Where this does not hold, it is generally viewed as the result of something bad, or as a necessary means to prevent something bad.

What if other people dying is good for the survivors?

If that were the case, it would be an example of preventing something bad.

Comment author: JulianMorrison 20 January 2010 11:31:11PM 8 points [-]

I have what I hope is an interesting perspective here - I'm a super-not-clicker. I had to be dragged through most of the sequences, chipping away one dumb idea after another, until I Got It. I recognize this as basically my number one handicap. Introspecting about what causes it, I'll back Eliezer's compartmentalization idea.

For me, input flows into categorized and contextual storage. I can access it, but I have to look (and know to look, which I won't if it's not triggered). This is severe enough to impact my memory; I find I'm relying on almost-stigmergy, to-do-list cues activated by context, and I can literally slip out of doing one thing and into another if my contextual cues are off.

I think this is just my problem, but I wonder if it's an exaggerated form of the way other people can just divert facts into a box and sit on them.

Comment author: Eliezer_Yudkowsky 21 January 2010 08:08:42PM 5 points [-]

I had to be dragged through most of the sequences, chipping away one dumb idea after another, until I Got It.

Wow - you Got It after a lot of hard work? That must put you in the bottom 99.9% of all rationalists! I think you might be suffering from a bit of underconfidence here.

Comment author: Strange7 19 April 2011 03:59:09AM 3 points [-]

It seems odd to refer to a "magical click" leading to understanding the incoherence of the idea of magic.

Say you were playing a real-time strategy game, with limited forces to allocate among possible targets to attack and potential vulnerabilities to defend. You can see all sorts of information about a given target, but the really important stuff either requires resources to discover or is hidden outright. The game's bootlegged and you can't find a copy of the manual, so even for the visible numbers you don't know exactly what they mean.

Poking around, you notice that some targets require a protracted and expensive siege to capture, others seemingly can't be captured at all with the resources and technologies available to you, while still others start flying your flag and paying your taxes after the most perfunctory attempt at capture.

My guess would be that the latter category is neutral/uncontested territory, that which has not been claimed and garrisoned by a rival of similar geopolitical ambition.

Comment author: Friendly-HI 16 June 2011 01:19:20PM 2 points [-]

That explanation via analogy is actually quite good and may very well be true.

If for some reason memes fail to properly fortify themselves when they claim territory inside your brain, they may be very easy to replace by competing memes, which could explain the "clickiness" of some people.

If true, one thing we may expect from (as of yet) non-rationalist people whose minds have that clicking quality, is that they may be unusually susceptible to New Age crap or generally tend to alter their views quickly. It was certainly the case with me when I was young and still lacked the mental tools of rationality.

Also, a slight rebelliousness or disregard towards what other people think may be part of it. If you ever introduce someone to a position that is very unconventional, or even something entirely new that they have never heard of, more often than not they display some deep gut reaction of dismissal and come up with ridiculous on-the-spot rationalizations for why that new position cannot possibly be the case... and I have the impression that one of the most determining factors in what their gut reaction will root for is what other people in their tribe think.

I know Eliezer's post is older, but I wonder if he probed the possibility that this clickiness may be predominantly a feature of people who simply have a general tendency or willingness to be contrarian.

Comment author: Daniel_Burfoot 21 January 2010 02:50:24PM 3 points [-]

that if you wake up in the Future, it's probably going to be a nicer place to live than the Present.

Careful, man. If you bang too hard on this drum, people are going to start thinking "hey, why slog through the boring pre-FAI era? I'll just sign up for cryo, head over to the preservation facility, and down a bottle of Nembutal. Before long I'll be relaxing on a beach with super-intelligent robot sex dolls bringing me martinis!"

Comment author: Eliezer_Yudkowsky 21 January 2010 08:16:01PM 8 points [-]

Suicides automatically get autopsied, so not currently an option.

Otherwise... well, it seems fairly obvious to an expected utility maximizer who believes in the von Neumann/Morgenstern axiom of Continuity, that if being cryonically suspended is better than death, and there exists a spectrum of lives so horrible as to not be worth living, then there must exist some intermediate point of a life exactly horrible enough that it is not worth committing suicide but is worth deliberately suspending yourself if you have the option.
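In symbols, a rough sketch of that argument (assuming lives can be ranked by a von Neumann/Morgenstern utility function $u$; the framing is mine): given

\[ u(\text{death}) < u(\text{suspension}), \]

and given that some lives rank above suspension and others rank below death, Continuity implies some intermediate life $\ell^*$ with

\[ u(\text{death}) < u(\ell^*) < u(\text{suspension}), \]

that is, a life not quite bad enough to make suicide worthwhile, yet bad enough that deliberate suspension improves on it.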

Comment author: komponisto 22 January 2010 01:13:46AM *  3 points [-]

Suicides automatically get autopsied, so not currently an option.

This seems quite unfair to sufferers of mental illness. What if a person signed up for cryonics later becomes depressed, resulting in suicide? (It could happen.)

I guess I shouldn't be surprised at the near-total absence of cryonics-friendly law, but it's still worth remarking upon.

Comment author: pdf23ds 22 January 2010 05:17:59AM 3 points [-]

For that matter, what if a person is depressed (or terminally ill) and wants to commit suicide, but wants to sign up for cryonics too? That's actually my situation, and I e-mailed Alcor about it, but have received no response, to my chagrin.

Another consideration is that legal physician assisted suicide (I believe it's still legal in Oregon) probably makes autopsy less likely. I'll research this a bit and get back.

Also, I totally do not understand the second para of Eliezer's comment. Also, it seems like the obvious reason that people would not commit suicide just to get suspended sooner is that most people's life has utility greater than (future utility if revived)*(probability of getting revived). OTOH, if you're dying of cancer, get yourself up to Oregon.
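Writing that inequality out (my notation, with $p$ for probability and $u$ for utility):

\[ u(\text{continued life now}) > p(\text{revival}) \cdot u(\text{future life if revived}). \]

For most people this holds comfortably; it only flips, making early suspension attractive, when present quality of life collapses, as in the terminal-cancer case.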

Comment author: Christian_Szegedy 22 January 2010 08:12:13AM *  9 points [-]

It is pretty obvious that Alcor must avoid even the slightest suspicion of helping (or even encouraging) you to commit suicide.

This would be an extremely slippery slope that they really have to avoid in order to prevent exposing themselves to a lot of unjustified attacks.

Comment author: AngryParsley 22 January 2010 01:46:46AM *  3 points [-]

Several states allow religious objections to autopsy. The coroner can override it in some cases (infectious disease that endangers the public, murder suspected), but it's better than nothing. The state doesn't care what religion you are, just that you've signed a form stating you object to an autopsy.

ETA: The override process is pretty involved in California. It involves petitioning the superior court to get an order to autopsy.

Comment author: rlpowell 20 January 2010 10:33:55PM 3 points [-]

In some piece of fiction (I think it was Orion's Arm, but the closest I can find is http://www.orionsarm.com/eg-topic/45b3daabb2329 and the reference to "the Herimann-Glauer-Yudkowski relation of inclusive retrospective obviousness") I saw the idea that one could order qualitatively-smarter things on the basis of what you're calling "clicks". Specifically, that if humans are level 1, then the next level above that is the level where if you handed the being the data on which our science is built, all the results would click immediately/be obvious.

I've seen it asserted that humans are essentially "Turing complete" with respect to intelligence; anything that can be understood by any intelligence can be understood by a non-broken human, given enough time and attention. I'm on the fence about that, frankly; there's a lot of stuff that I have real trouble understanding, despite being decently bright by most standards. But if there's such a thing as real quantitative (rather than qualitative) differences in intelligence, it seems to me that "clicks" are at the core of what such a thing would look like from the outside (not what it would be internally; no idea about that, of course).

-Robin

Comment author: RobinZ 20 January 2010 05:06:20PM 3 points [-]

I wonder if it ties in to some kind of confidence in your understanding.* If you don't trust your ability to understand a simple argument, you're really quite likely to overrate the strength of your heuristics relative to your reason.

...which sounds a lot like why I'm suspicious of cryonics, on introspection. I really need to run the numbers and see if I can afford it.

* Oh noes! Have I become one of those people with one idea they go back to for everything?

Comment author: thomblake 20 January 2010 05:20:05PM 6 points [-]

Oh noes! Have I become one of those people with one idea they go back to for everything?

Don't worry, that's practically everybody. Just be aware of it, excuse other people for not getting your big idea, and consider revising your stance in the future.

Comment author: RobinZ 20 January 2010 05:29:47PM 2 points [-]

Thanks - I'll try to think of it that way.

(One thing I was considering as a countermeasure was to imagine it as an analogue to a Short Duration Personal Savior - a ShorDurDivRev (divine revelation), perhaps!)

Comment author: VijayKrishnan 21 January 2010 08:36:28AM *  9 points [-]

I am puzzled by Eliezer's confidence in the rationality of signing up for cryonics, given he thinks it would be characteristic of a "GODDAMNED SANE CIVILIZATION". I am even more puzzled by the commenters' overwhelming agreement with Eliezer. I am personally uncomfortable with cryonics for the three following reasons, and am surprised that no one seems to bring these up.

  1. I can see it being very plausible that somewhere along the line I would be subject to immense suffering, compared to which death would have been a far better option, but that I would potentially be either unable to take my life due to physical constraints or lacking the courage to do so (it takes quite some courage and persistent suffering to be driven to suicide, IMO). I see this as analogous to a case where I am very near death and am faced with the two following options.

(a) Have my life support system turned off and die peacefully.

(b) Keep the life support system going, but subsequently give up all autonomy over my life and body and place it entirely in the hands of others who are likely not even my immediate kin. I could be made to put up with immense suffering, either due to technical glitches (very likely, since this is a very nascent area) or due to willful malevolence. In this case I would very likely choose (a).

  2. Note that in addition to prolonged suffering where I am effectively incapable of pulling the plug on myself, there is also the chance that I would be an oddity as far as future generations are concerned. Perhaps I would be made a circus or museum exhibit to entertain that generation. Our race is highly speciesist, and I would not trust future generations, with their bionic implants and so on, to even consider me to be of the same species and offer me the same rights and moral consideration.

  3. Last but not least is a point I made as a comment in response to Robin Hanson's post. Robin Hanson expressed a preference for a world filled with more people with scarce per-capita resources over a world with fewer people and significantly better living conditions. His point was that this gives many people the opportunity to "be born" who would not otherwise have come into existence, and that this was for some reason a good thing. I suspect that Eliezer too has a similar opinion on this, and this is probably another place where we differ widely.

    I couldn't care less if I weren't born. As the saying goes, I have been dead/not existed for billions of years and haven't suffered the slightest inconvenience. I see cryonics and a successful recovery as no different from dying and being re-born. Thus I assign virtually zero positives to being re-born, while I assign huge negatives to 1 and 2 above.

    We are evolutionarily driven to dislike dying and to try to postpone it for as long as possible. However, I don't think we are particularly hardwired to prefer this form of weird cryonic rebirth over never waking up at all. Given that our general preference not to die has nothing fundamental about it, but is rather a case of us following our evolutionary leanings, what makes it so obvious that cryonic rebirth is a good thing? Some form of longevity research which extends our life to, say, 200 years, without going the cryonic route with all the above risks, especially for the first few generations of cryonic guinea pigs, seems much harder to argue against.

    Unfortunately, all the discussion on this forum, including the writings by Eliezer, seems to draw absolutely no distinction between the two scenarios:

A. Signing up for cryonics now, with all the associated risks/benefits that I just discussed.

B. Some form of payment for experimental longevity research that you need to make upfront when you are 30. If the research succeeds and is tested safe, you can use the drugs for free and live to be 200. If not, you live your regular lifespan and merely forfeit the money that you paid to sponsor the research.

I can readily see myself choosing (B) if the rates were affordable and if the probability of success seemed reasonable to justify that rate. I find it astounding that repeated shallow arguments are made on this blog which address scenario (A) as though it were identical to scenario (B).

Comment author: scotherns 22 January 2010 12:51:10PM 17 points [-]

If you were hit by a car tomorrow, would you be lying there thinking, 'well, I've had a good life, and being dead's not so bad, so I'll call the funeral service' or would you be calling an ambulance?

Ambulances are expensive, and doctors are not guaranteed to be able to fix you, and there is a chance you might be in for some suffering, and you may be out of society for a while until you recover - but you call them anyway. You do this because you know that being alive is better than being dead.

Cryonics is just taking this one step further, and booking your ambulance ahead of time.

Comment author: Eliezer_Yudkowsky 21 January 2010 08:13:21PM 9 points [-]

I suspect that Eliezer too has a similar opinion on this

Nope, ongoing disagreement with Robin. http://lesswrong.com/lw/ws/for_the_people_who_are_still_alive/

Comment author: Technologos 22 January 2010 03:33:20PM 4 points [-]

Could you supply a (rough) probability derivation for your concerns about dystopian futures?

I suspect the reason people aren't bringing those possibilities up is that, through a variety of elements including in particular the standard Less Wrong understanding of FAI derived from the Sequences, LWers have a fairly high conditional probability Pr(Life after cryo will be fun | anybody can and bothers to nanotechnologically reconstruct my brain) along with at least a modest probability of that condition actually occurring.
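A minimal sketch of the kind of rough derivation being asked for, where every number is a placeholder assumption of mine rather than anyone's actual estimate:

```python
# Rough expected-value sketch for the cryonics decision, following the
# decomposition above. Every number is an illustrative placeholder.

p_revival = 0.05            # Pr(somebody can and bothers to reconstruct my brain)
p_fun_given_revival = 0.9   # Pr(life after cryo will be fun | revival)
p_bad_given_revival = 1 - p_fun_given_revival

u_fun = 1000.0              # utility of a long, fun future life
u_bad = -2000.0             # utility of waking into a dystopia
u_dead = 0.0                # baseline: staying dead

# EV(sign up) = Pr(revival) * E[utility | revival]
ev_sign_up = p_revival * (p_fun_given_revival * u_fun
                          + p_bad_given_revival * u_bad)

print(f"EV(sign up) = {ev_sign_up:+.1f} vs EV(decline) = {u_dead:+.1f}")
# With these placeholders, EV(sign up) = +35.0 > 0; the conclusion is only
# as good as the inputs, so the point is the structure of the calculation.
```

The structure, not the output, is the point: the sign of the result is driven almost entirely by Pr(fun | revival) and the ratio of the two utilities, while Pr(revival) only scales it.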

Comment author: JesterMatrix 22 January 2010 03:12:22PM 4 points [-]

Thank God. I've been lurking on this forum for years now, and this is the one post where I have felt like such an outsider here, especially with the very STRONG language Eliezer uses throughout both this post and the other one. It felt as if I was being called not just a bit irrational but stupid for thinking there was a more than negligible chance that upon waking I would be in a physical or mental state in which death was preferable, yet I would be unable to deliver it.

I can see it being very plausible to be awoken in extreme and constant agony, or perhaps in some sort of permanent vegetative state, or in some sort of not-yet-imagined unbreakable, continued, and torturous servitude for 1,000+ years. I just do not see the benefits of simply being alive as outweighing those risks.

Comment author: Morendil 22 January 2010 03:35:16PM 9 points [-]

It is not cryonics which carries this risk, it is the future in general.

Consider: what guarantees that you will not wake up tomorrow morning to a horrible situation, with nothing familiar to cling to ? Nothing; you might be kidnapped during the night and sequestered somewhere by terrorists. That is perhaps a far-out supposition, but no more fanciful than whatever your imagination is currently conjuring about your hypothetical revival from cryonics.

The future can be scary, I'll grant you that. But the future isn't "200 years from now". The future is the next breath you take.

Comment author: jhuffman 25 January 2010 09:08:16PM 2 points [-]

It is not cryonics which carries this risk, it is the future in general.

Not entirely. People who are cryonically preserved are legally deceased. There are possible futures which are only dystopic from the point of view of the frozen penniless refugees of the 21st century.

I think the chances of this are small - most people would recognize that someone revived is as human as anyone else and must be afforded the same respect and civil rights.

Comment author: Morendil 25 January 2010 09:37:06PM 7 points [-]

You don't have to die to become a penniless refugee. All it takes is for the earth to move sideways, back and forth, for a few seconds.

I wasn't going to bring this up, because it's too convenient and I was afraid of sounding ghoulish. But think of the people in Haiti who were among the few with a secure future, one bright afternoon, and who became "penniless refugees" in the space of a few minutes. You don't even have to postulate anything outlandish.

You are wealthy and well-connected now, compared to the rest of the population, and more likely than not to still be wealthy and well-connected tomorrow; the risk of losing these advantages looms large because you feel like you would not be in control while frozen. The same perception takes over when you decide between flying and driving somewhere: it feels safer to drive, to many people.

Yes, there are possible futures where your life is miserable, and the likelihoods do not seem to depend significantly on the manner in which the future becomes the present - live or paused, as it were - or on the length of the pauses.

The likelihoods do strongly depend on what actions we undertake in the present to reduce what we might call "ambient risk": reduce the more extreme inequalities, attend to things like pollution and biodiversity, improve life-enhancing technologies, foster a political climate maximally protective of individual rights, and so on, up to and including global existential risks and the possibility of a Singularity.

Comment author: pdf23ds 22 January 2010 03:52:42PM *  1 point [-]

Eh. At least when you're alive, you can see nasty political things coming - at least from a couple meters off, if not kilometers. Things can change a lot more while you're vitrified in a canister for 75-300 years than they can while you're asleep. I prefer Technologos' reply, plus the point that economic considerations make it likely that reviving someone would be a pretty altruistic act.

Comment author: Eliezer_Yudkowsky 25 January 2010 06:30:14PM 16 points [-]

Most of what you're worried about should be UnFriendly AI or insane transcending uploads; lesser forces probably lack the technology to revive you, and the technology to revive you bleeds swiftly into AGI or uploads.

If you're worried that the average AI which preserves your conscious existence will torture that existence, then you should also worry about scenarios where an extremely fast mind strikes so fast that you don't have the warning required to commit suicide - in fact, any UFAI that cares enough to preserve and torture you, has a motive to deliberately avoid giving such warning. This can happen at any time, including tomorrow; no one knows the space of self-modifying programs well enough to predict when the aggregate of meddling dabblers will hit something that effectively self-improves. Without benefit of hindsight, it could have been Eurisko.

You might expect more warning about uploads, but, given that you're worried enough about negative outcomes to forego cryonic preservation out of fear, it seems clear that you should commit suicide immediately upon learning about the existence of whole-brain emulation or technology that seems like it might enable some party to run WBE in an underground lab.

In short: As usual, arguments against cryonics, if applied evenhandedly, tend to also show that we should commit suicide immediately in the present day.

Morendil put it very well: "The future isn't 200 years from now. The future is the next breath you take."

Comment author: Morendil 20 January 2010 05:10:37PM 6 points [-]
Comment author: RobinZ 20 January 2010 05:25:16PM *  12 points [-]

I wasn't there at the time, but if EY's description is roughly accurate, I suspect the ordinary-seeming woman understood him in the opening anecdote. The specific chain I'm looking at is:

EY: Magic does not exist.
OSW: Science doesn't understand everything?
EY: Ignorance is in mind, not reality.
OSW: Magic is impossible!

I see no way that OSW could deduce this fact about magic unless she compared magic - stuff which you are necessarily ignorant of - to the correct interpretation of EY's point.

Comment author: erniebornheimer 20 January 2010 09:44:36PM 12 points [-]

At the risk of revealing my stupidity...

In my experience, people who don't compartmentalize tend to be cranks.

Because the world appears to contradict itself, most people act as if it does. Evolution has created many, many algorithms and hacks to help us navigate the physical and social worlds, to survive, and to reproduce. Even if we know the world doesn't really contradict itself, most of us don't have good enough meta-judgement about how to resolve the apparent inconsistencies (and don't care).

Most people who try to make all their beliefs fit with all their other beliefs, end up forcing some of the puzzle pieces into wrong-shaped holes. Their favorite part of their mental map of the world is locally consistent, but the farther-out parts are now WAY off, thus the crank-ism.

And that's just the physical world. When we get to human values, some of them REALLY ARE in conflict with others, so not only is it impossible to try to force them all to agree, but we shouldn't try (too hard). Value systems are not axiomatic. Violence to important parts of our value system can have repercussions even worse than violence to parts of our world view.

FWIW, I'm not interested in cryonics. I think it's not possible, but even if it were, I think I would not bother. Introspecting now, I'm not sure I can explain why. But it seems that natural death is a good point to say "enough is enough." In other words, letting what's been given be enough. And I am guessing that something similar will keep most of us uninterested in cryonics forever.

Now that I think of it, I see interest in cryonics as a kind of crankish pastime. It takes the mostly correct idea "life is good, death is bad" to such an extreme that it does violence to other valuable parts of our humanity (sorry, but I can't be more specific).

To try to head off some objections:

  • I would certainly never dream of curtailing anyone else's freedom to be cryo-preserved, and I recognize I might change my mind (I just don't think it's likely, nor worth much thought).
  • Yes, I recognize how wonderful medical science is, but I see a qualitative difference between living longer and living forever.
  • No, I don't think I will change my mind about this as my own death approaches (but I'll probably find out). Nor do I think I would change my mind if/when the death of a loved one becomes a reality.

I offer this comment, not in an attempt to change anyone's mind, but to go a little way to answer the question "Why are some people not interested in cryonics?"

Thanks!

Comment author: JGWeissman 20 January 2010 09:52:17PM 18 points [-]

It takes the mostly correct idea "life is good, death is bad" to such an extreme that it does violence to other valuable parts of our humanity (sorry, but I can't be more specific).

It seems to me that you can't be more specific because there is not anything there to be more specific about.

Comment author: bgrah449 21 January 2010 03:09:19AM *  3 points [-]

What the hell, I'll play devil's advocate.

Right now, we're all going to die eventually, so we can make tradeoffs between life and other values that we still consider to be essential. But when you take away that hard stop, your own life's value suddenly skyrockets: given that you can almost certainly, eventually, erase any negative feelings you have about actions done today, it becomes hard to justify not doing horrible things to save one's own life if one were forced to.

Imagine Omega came to you and said, "Cryonics will work; you will be resurrected and have the choice between a fleshbody and simulation, and I can guarantee you live for 10,000 years after that. However, for reasons I won't divulge, this is contingent upon you killing the next 3 people you see."

Well, shit. Let the death calculus begin.

Comment author: orthonormal 21 January 2010 05:55:39AM 6 points [-]

You make a valid theoretical point, but as a matter of contingent fact, the only consequence I see is that people signed up will strongly avoid risks of having their brains splattered. Less motorcycle riding, less joining the army, etc.

Making people more risk-averse might indeed give them pause at throwing themselves in front of cars to save a kid, but:

  • Snap judgments are made on instinct at a level that doesn't respond to certain factors; you wouldn't be any less likely to react that way if you previously had the conscious knowledge that the kid had leukemia and wouldn't be cryopreserved.

  • In this day and age, risking your life for someone or something else with conscious premeditation does indeed happen even to transhumanists, but extremely rarely. The fringe effect of risk aversion among people signed up for cryonics isn't worth consigning all of their lives to oblivion.

Comment author: rlpowell 20 January 2010 11:30:12PM *  10 points [-]

(Edit: after having written this entire giant thing, I notice you saying that this was just a "why are some people not interested in cryo" comment, whereas I very much am trying to change your mind. I don't like trying to change people's minds without warning (I thought we were having that sort of discussion, but apparently we aren't), so here's a warning.)

But it seems that natural death is a good point to say "enough is enough." In other words, letting what's been given be enough.

You're aware that your life expectancy is about 4 times that of the people who built the pyramids, even the Pharaohs, right? That assertion seems to basically be slapping all of your ancestors in the face: "I don't care that you fought and died for me to have a longer, better life; you needn't have bothered, I'm happy to die whenever." Seriously: if a natural life span is good enough for you, start playing Russian roulette once a year around 20 years old; the odds are about right for early humans.

As a sort-of aside, I honestly don't see a lot of difference between "when I die is fine" and just committing suicide right now. Whatever it is that would stop you from committing suicide should also stop you from wanting to die at any point in the future.

I'm aware this is a minority view, but that doesn't necessarily make it any less sensible; insert historical examples of once-popular-but-wrong views here.

Most people who try to make all their beliefs fit with all their other beliefs, end up forcing some of the puzzle pieces into wrong-shaped holes.

Then they've failed at the actual task, which is to make all of your beliefs fit with reality.

When we get to human values, some of them REALLY ARE in conflict with others,

My values are part of reality. Some of them are more important than others. Some of them contradict each other. Knowing these things is part of what lining my beliefs up with reality means: if my map of reality doesn't include the fact that some of my values contradict, it's a pretty bad map.

You seem to have confused people who are trying to force their beliefs to line up with each other (an easy path to crazy, because you can make any belief line up with any other belief simply by inserting something crazy in the middle; it's all in your head, after all) with people who are trying to force their beliefs to line up with reality. It's a very different process.

Part of reality is that one of my most dominant values, one so dominant that almost no other values touch its power, is the desire to keep existing and to keep the other people I care about existing. I'm aware that this is selfish, and my compromise is that if reviving me will use such resources that other people would starve to death or something, I don't want to be revived (and I believe my cryo documents specify this; or maybe not, it's kind of obvious, isn't it??). I don't have any difficulty lining up this value with the rest of my values; except for pretty landscapes, everything I value has come from other humans.

In some sense, I don't try to line this, or any other value, up with reality; I'm basically a moral skeptic. I have beliefs that are composed of both values ("death is bad") and statements about reality ("cryo has a better chance of saving me from death than cremation") such that the resulting belief ("cryo is good") is subservient to both matching up with reality (although I doubt anyone will come up with evidence that cryo is less likely to keep you alive than cremation) and my values, but having values and conforming my beliefs with reality are totally separate things.

-Robin

Comment author: wnoise 28 January 2010 09:21:25AM 7 points [-]

Careful with life-expectancy figures from earlier eras. There was a great chance of dying as a baby, and a great chance for women to die in childbirth. Exclude the first - that is, just count those who made it to, say, 5 years old - and life expectancy shoots up greatly, though obviously not as high as now.

Comment author: Vladimir_Nesov 21 January 2010 10:51:58AM 5 points [-]

As a sort-of aside, I honestly don't see a lot of difference between "when I die is fine" and just committing suicide right now. Whatever it is that would stop you from committing suicide should also stop you from wanting to die at any point in the future.

This is the Reversal test.

Comment author: Oligopsony 17 August 2010 04:27:09AM 3 points [-]

As a sort-of aside, I honestly don't see a lot of difference between "when I die is fine" and just committing suicide right now. Whatever it is that would stop you from committing suicide should also stop you from wanting to die at any point in the future.

An important reason for not dying at the moment is that it would make the people you most care about very distraught. Dying by suicide would make them even more distraught. Signing up for cryonics would not make them less distraught and would lead to social disapproval. Not committing suicide doesn't require that one place a great deal of intrinsic value in one's own continued existence.

Comment author: rlpowell 01 January 2011 02:53:15AM 2 points [-]

That's a really good point.

I think if the only reason you're staying alive is to stop other people from being sad, you've got a psychological bug WRT valuing yourself for your own sake that you really need to work on, but that is (obviously) a personal value judgment. If that is the only reason, though, you're right, suicide is bad and cryo is as bad or worse.

I imagine that such a person will have a really shitty life whenever people close to them leave or die; sounds really depressing. I can only hope, for their sake, that such a person dies before their significant other(s).

-Robin

Comment author: bgrah449 20 January 2010 10:24:43PM *  15 points [-]

I think it's not possible, but even if it were, I think I would not bother. Introspecting now, I'm not sure I can explain why. But it seems that natural death is a good point to say "enough is enough." In other words, letting what's been given be enough.

-Longer life has never been given; it has always been taken. There is no giver.

-"Enough is enough" is sour grapes - "I probably don't have access to living forever, so it's easier to change my values to be happy with that than to want yet not attain it." But if it were a guarantee, and everyone else was doing it (as they would if it were a guarantee), then this position would be the equivalent to advocating suicide at some ridiculously young age in the current era.

It takes the mostly correct idea "life is good, death is bad" to such an extreme that it does violence to other valuable parts of our humanity (sorry, but I can't be more specific).

I assert that the more extremely the idea "life is good, death is bad" is held, the more benefit is rendered to the other valuable parts of our humanity. I can't be more specific.

Comment author: kans 21 January 2010 02:54:10PM 2 points [-]

I'm not quite convinced of the merits of investing in cryonics at this point, though "enough is enough" does not strike me as a particularly salient argument either.

In terms of weighing the utility to me by some nebulous personal function: cryonics has an opportunity cost in terms of direct expenses, and additionally in terms of my social interactions with other people. Both of these seem to be nominal, though the perhaps $300 or so a year could add quite a bit of utility to my current life, as I live on about $7K per year. Then again, I very well may die today, not having spent any of that potential money.

On the other side, being revived in the distant future could be quite high in personal utility. But I have no reason at all to believe the situation will be agreeable; in other words, permanent death very well could be for the best. I would imagine reviving a person from vitrification would be a costly venture even barring future miracle technology. Revival is not currently possible, and there is no reason to think the current processes are being done in any sort of optimal way. At the very least, the cost of creating the technology to revive people will be expensive. Future tech or not, I see it as likely that revival will come at some cost, with perhaps no choice given to me in the matter. I see this as a likely possibility (at least more likely than a benevolent AI utopia) because science has never fundamentally made people better (more rational?) - so far, at least. It certainly ticks forward and may improve the lives of some people, but those people are all still fundamentally motivated by the same vestigial desires and have the same deficiencies as before. Given our nature, I see the most likely outcome, past the novelty of the first couple of successful attempts, being some quid pro quo.

Succinctly, my projection of the most likely state of the world in which I would be revived is the same as today, though with more advanced technology. Very often the ones to pioneer new technology aren't scrupulous. I very well may prefer nonexistence to an existence of abject suffering, or one where my mind may be used to hurt others, etc. This would be optimizing for the worst-case scenario.

Comment author: Rlive 20 January 2010 10:43:29PM *  9 points [-]

Most people who try to make all their beliefs fit with all their other beliefs, end up forcing some of the puzzle pieces into wrong-shaped holes. Their favorite part of their mental map of the world is locally consistent, but the farther-out parts are now WAY off, thus the crank-ism.

This is not true of all non-compartmentalizers - just the ones you have noticed and remember. Rational non-compartmentalizers simply hold on to that puzzle piece that doesn't fit until they either

  • determine where it goes;

  • determine that it is not from the right puzzle; or

  • reshape it to correctly fit the puzzle.

Comment author: AngryParsley 20 January 2010 10:19:10PM *  9 points [-]

You seem to have two objections to cryonics:

  1. Cryonics won't work.

  2. Life extension is bad.

#1 is better addressed by the giant amount of information already written on the subject.

For #2 I'd like to quote a bit of Down and Out in the Magic Kingdom:

Everyone who had serious philosophical conundra on that subject just, you know, died, a generation before. The Bitchun Society didn't need to convert its detractors, just outlive them.

Even if you don't think life extension technologies are a good thing, it's only a matter of time before almost everyone thinks they are. Whatever part of "humanity" you value more than life will be gone forever.

ETA: Actually, there is an out: if you build FAI or some sort of world government and it enforces 20th century life spans on people. I can't say natural life spans because our lives were much shorter before modern sanitation and medicine.

Comment author: Zack_M_Davis 21 January 2010 11:54:38PM 7 points [-]

Even if you don't think life extension technologies are a good thing, it's only a matter of time before almost everyone thinks they are. Whatever part of "humanity" you value more than life will be gone forever.

Doesn't this argument imply that we should self-modify to become monomaniacal fitness-maximizers, devoting every quantum of effort towards the goal of tiling the universe with copies of ourselves? Hey, if you don't, someone else will! Natural selection marches on; it's only a matter of time.

Comment author: pdf23ds 22 January 2010 08:52:03AM *  4 points [-]

I find the likelihood of someone eventually doing this successfully to be very scary. And more generally, the likelihood of natural selection continuing post-AGI, leading to more Hansonian/Malthusian futures.

Comment author: pdf23ds 21 January 2010 09:00:25AM 6 points [-]

For #2, there's also Nick Bostrom's Fable of the Dragon-Tyrant.

Comment author: Steve_Rayhawk 22 January 2010 03:14:51AM 3 points [-]

In my experience, people who don't compartmentalize tend to be cranks.

[. . .]

Most people who try to make all their beliefs fit with all their other beliefs, end up forcing some of the puzzle pieces into wrong-shaped holes. Their favorite part of their mental map of the world is locally consistent, but the farther-out parts are now WAY off, thus the crank-ism.

And that's just the physical world. When we get to human values, some of them REALLY ARE in conflict with others[. . .]

The post "Reason as memetic immune disorder" was related. I'll quote teasers so that you'll read it:

People who grow up with a religion learn how to cope with its more inconvenient parts by partitioning them off, rationalizing them away, or forgetting about them. Religious communities actually protect their members from religion in one sense - they develop an unspoken consensus on which parts of their religion members can legitimately ignore. New converts sometimes try to actually do what their religion tells them to do.

[. . .]

The reason I bring this up is that intelligent people sometimes do things more stupid than stupid people are capable of. There are a variety of reasons for this; but one has to do with the fact that all cultures have dangerous memes circulating in them, and cultural antibodies to those memes. The trouble is that these antibodies are not logical[. . . .] They are the blind spots that let us live with a dangerous meme without being impelled to action by it.

And my comment there:

If the culture is constrained to hold constant the religion or cultural norms, then the resulting selection will cause the culture to develop blind spots, and also develop an unspoken (because unspeakable) but viciously enforced meta-norm of not seeing the blind spots. But if the culture is constrained to hold opposite meta-norms constant, such as a norm of seeing the blind spots or a norm of actually doing what one's religion or cultural norms tell one to do, then the resulting selection will act against the dangerous memes instead.

Comment author: thomblake 20 January 2010 09:50:58PM 1 point [-]

Most people who try to make all their beliefs fit with all their other beliefs, end up forcing some of the puzzle pieces into wrong-shaped holes. Their favorite part of their mental map of the world is locally consistent, but the farther-out parts are now WAY off, thus the crank-ism.

Going to quote this.

And that's just the physical world. When we get to human values, some of them REALLY ARE in conflict with others, so not only is it impossible to try to force them all to agree, but we shouldn't try (too hard). Value systems are not axiomatic. Violence to important parts of our value system can have repercussions even worse than violence to parts of our world view.

And this.

Comment author: handoflixue 29 July 2011 11:35:27PM 5 points [-]

Oddly, I self identify as both being very good at "clicking", and very able to compartmentalize. I'm used to roleplaying an elf in WOW, a religious person at church, and a rationalist here. It makes it very easy to "click" because I can go "oh, of course, in the world that an elf inhabits, in that world view, this just makes sense", and because I have a lot of practice absorbing very odd ideas (I've worked out agricultural requirements for keeping a pet dragon...)

The big perk is that, say, my religious objections to cryonics don't even come up when I'm playing a rationalist and working it out there. So even if cryonics clashes with a lot of what I believe, I can still evaluate the idea. I don't have any commitment to being right or wrong, because my actual opinion is in a separate box.

Of course, it only works if I'm ensuring that the right mode/self/compartment is handling the right situation. If I'm debating cryonics, I want to be playing as a 12th level Rationalist, not my 3rd level Elven Ranger! :)

Comment author: bgrah449 20 January 2010 05:00:44PM 7 points [-]

That click is cultural. It seems magical because you've acclimated yourself to not encountering shared values with people very often, and so this cryonics gathering was a feast of connections.

Comment author: Zubon 26 January 2010 12:48:44AM 4 points [-]

Was there some particular bright line at which cryonics flipped from "impossible given current technology" to "failure to have universal cryonics is a sign of an insane society"? That is a sign change, not just a change in magnitude.

If we go back 50 or 100 years, we should be at a point where then-present preservation techniques were clearly inadequate. Maybe vitrification was the bright line; I do not pretend that preserving brains is a specialty of mine. I just empathize with those who still doubt that the technology is good enough to fulfill its claims prior to seeing a brain revived. We have a bold history of technological claims that turned out to be not all that - but we promise it will work fine in twenty years.

That seems like a perfectly sane outside view: every (?) previous human preservation technique was found inadequate over a span of a few years or decades, so we assume against the latest one until proven otherwise.

There must still be large areas of the planet where it is sane not to sign up your kids, notably where per capita income is below $300/year.

Comment author: neuromancer92 14 February 2012 06:24:14AM 2 points [-]

Rather than being a sane view, this is a logical fallacy. I don't know of a specific name to give it, but survivorship bias and the anthropic principle are both relevant.

The fallacy is this: for anything a person tries to do, every relevant technology will be inadequate up to the one that succeeds. Inherently, the first success at something ends the need to take new steps towards it, so we will never see a new advance in a domain where past advances have already sufficed.

The weak anthropic principle says that we only observe our universe when it is such that it will permit observers. Similarly, we can assume that if new developments are being made towards an aim, they are being made because past steps were inadequate. We cannot view new advances as having their chances of success biased by past failures since they come into existence only in the case that past attempts have indeed failed.

(I am aware that technologies are improved on even after they achieve their aim, but in these cases new objectives like "faster" or "cheaper" are still unsatisfied, and drive the progress.)
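
The selection effect can be made concrete with a toy simulation (a sketch under invented numbers: each generation of preservation technique is assumed to succeed independently with the same fixed probability):

```python
import random

# A new technique is only developed because all previous ones failed, so the
# history "N-1 past failures" is guaranteed by construction and adds no
# evidence against technique N beyond its prior.
P = 0.2          # assumed per-technique success probability
N = 5            # look at the 5th technique in the sequence
TRIALS = 200_000

hits = total = 0
for _ in range(TRIALS):
    outcomes = [random.random() < P for _ in range(N)]
    if not any(outcomes[:N - 1]):   # condition on: first N-1 techniques failed
        total += 1
        hits += outcomes[N - 1]

print(f"P(Nth succeeds)                     = {P}")
print(f"P(Nth succeeds | N-1 past failures) = {hits / total:.3f}")
# Both come out ~0.2: under independence, past failures are screened off.
```

The catch is the independence assumption: if the past failures share a common cause (brains are simply hard to preserve), the outside view keeps some of its force.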

Comment author: RichardKennaway 14 February 2012 08:52:00AM 1 point [-]

Rather than being a sane view, this is a logical fallacy. I don't know of a specific name to give it, but survivorship bias and the anthropic principle are both relevant.

It's rather like the way that you only ever find something in the last place you look.

Comment author: Peter_de_Blanc 27 January 2010 05:38:56PM 1 point [-]

That seems like a perfectly sane outside view: every (?) previous human preservation technique was found inadequate over a span of a few years or decades, so we assume against the latest one until proven otherwise.

Really?? What's your source?

Comment author: wuwei 22 January 2010 11:58:04PM 2 points [-]

There's the consequentialist/utilitarian click, and the intelligence explosion click, and the life-is-good/death-is-bad click, and the cryonics click.

I can find a number of blog posts from you clearly laying out the arguments in favor of each of those clicks except the consequentialism/utilitarianism one.

What do you mean by "consequentialism" and "utilitarianism" and why do you think they are not just right but obviously right?

Comment author: AngryParsley 23 January 2010 11:05:31AM 2 points [-]

Hmm... you probably want to read Circular Altruism. There are different forms of utilitarian consequentialism. The LW wiki has a blurb with links to other useful posts.

Comment author: LauraABJ 20 January 2010 08:50:23PM 3 points [-]

Interesting. I remember my brother saying, "I want to be frozen when I die, so I can be brought back to life in the future," when he was a child (somewhere between ages 9-14, I would guess). Probably got the idea from a cartoon show. I think the idea lost favor with him when he realized how difficult a proposition reanimating a corpse really was (he never thought about the information-capture aspect of it).

Comment author: nawitus 20 January 2010 05:21:52PM 3 points [-]

Isn't it more sane to donate money to organizations fighting against existential risks rather than spending money on cryonics?

Comment author: AngryParsley 20 January 2010 07:13:28PM *  7 points [-]

Isn't it more sane to donate money to organizations fighting against existential risks rather than spending money on-

Yes. Your argument applies to everything money can be spent on, not just cryonics. But unlike most things you can spend money on, cryonics has the advantage of forcing you to care about the future. It provides an incentive to donate to fighting existential risk.

Comment author: MichaelGR 20 January 2010 05:41:19PM 3 points [-]

Since most people who donate to fight existential risks don't donate everything they have above subsistence level, there's usually enough money to do both (since cryonics via life insurance isn't very expensive, afaik).

Comment author: Matt_Simpson 20 January 2010 06:53:08PM 2 points [-]

But surely you aren't donating so much to, say, fighting existential risks that the marginal utility of the next dollar spent there drops below the marginal utility of a dollar spent on cryonics. Not that I'm suggesting that fighting existential risks necessarily has a higher marginal utility than cryonics. Rather, you probably don't have enough money to change the relative rankings, so you should donate to the cause with the highest marginal utility. Not both.

The exception may be donating enough to make sure YOU are reanimated after you die (I don't know what your utility function looks like), but in that case you aren't really donating.
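
This is the standard corner-solution result: when your budget is too small to move the marginal utilities, the optimum is all-in on one cause. A minimal sketch with invented concave utility curves and funding totals:

```python
import numpy as np

EXISTING_A = 10_000_000.0   # assumed funds cause A already has
EXISTING_B = 2_000_000.0    # assumed funds cause B already has
W_A, W_B = 3.0, 1.0         # assumed weights in my utility function

def utility(to_a, to_b):
    # log utility => diminishing marginal returns for each cause
    return W_A * np.log(EXISTING_A + to_a) + W_B * np.log(EXISTING_B + to_b)

budget = 1_000.0
splits = np.linspace(0.0, budget, 101)   # dollars sent to cause A
best = splits[np.argmax([utility(a, budget - a) for a in splits])]
print(f"Optimal gift to A: ${best:.0f}; to B: ${budget - best:.0f}")
# Marginal utilities: W_A/EXISTING_A = 3e-7 < W_B/EXISTING_B = 5e-7, and a
# $1,000 budget can't flip that ranking, so the optimum is a corner:
# everything goes to B.
```

Matt's exception is the selfish term: if reanimating YOU carries enough weight in your own utility function, cryonics isn't competing in the same marginal-utility ranking at all.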

Comment author: Eliezer_Yudkowsky 20 January 2010 07:06:48PM 15 points [-]

Surely you should be asking about the marginal utility of money spent on eating out before you ask about money spent on cryonics. What is this strange mental accounting where money spent on cryonics is immediately available to be redirected to existential risks, but money spent on burritos or French restaurants or an extra 100sqft in an apartment is not?

Comment author: alyssavance 20 January 2010 07:52:57PM *  10 points [-]

I have a theory about this, actually. How it works is: people get paid at the beginning of the month, and then pay their essential bills - food, rent, electricity, insurance, Internet, etc. Next, people have a certain standard of living that they think they're supposed to have, based somewhat on how much money they make, but much more on what all their friends are spending money on. They then go out and buy stuff like fancy dinners and a house in the suburbs and whatnot, and this spending is not mentally available as something that can be cut back on, because they don't see it as "spending" so much as "things I need to do to maintain my standard of living"; people see it as a much larger burden to write a single check for $2,000 than to spend $7 every day on coffee, because the two come out of different mental pools. Anything left over after that gets put into cryonics, or existential risk, or savings, or investments, etc. That's why you see so many more millionaire plumbers than millionaire attorneys: the attorney has a higher standard of living, and so has less money left over to save.

Comment author: gwern 20 January 2010 08:04:53PM 7 points [-]

"A man does not 'by nature' wish to earn more and more, but simply to live as he is accustomed to live and earn as much is necessary for that purpose... & a people only work because and so long as they are poor."

--Max Weber, Protestant Ethic

That's why you see so many more millionaire plumbers than millionaire attorneys, because the attorney has a higher standard of living, and so has less money left over to save.

We do?

Comment author: RobinZ 20 January 2010 08:06:03PM 3 points [-]

That's why you see so many more millionaire plumbers than millionaire attorneys, because the attorney has a higher standard of living, and so has less money left over to save.

We do?

I was going to comment on that, but I don't see any millionaires at all, so I thought I shouldn't.

Comment author: Douglas_Knight 21 January 2010 06:06:30AM 2 points [-]

I was going to comment on that, but I don't see any millionaires at all, so I thought I shouldn't.

The main point of "The Millionaire Next Door" is that you might not notice millionaires.

Comment author: alyssavance 20 January 2010 08:24:28PM *  2 points [-]
Comment author: gwern 20 January 2010 10:32:28PM *  4 points [-]

It cites statistics, and actually says that there are X millionaire lawyers, and X+Y plumbers? It isn't just giving a lot of anecdotes?

I would be very surprised to hear that, because it implies that one is substantially more likely to become a millionaire by plumbing than by lawyering, since there are ~500,000 plumbers in the US and >1.1 million lawyers.
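
The implied arithmetic, spelled out (occupation counts as given above; the millionaire counts are hypothetical placeholders for whatever the book would have to claim):

```python
plumbers, lawyers = 500_000, 1_100_000   # counts from the comment
millionaire_plumbers = 50_000    # hypothetical
millionaire_lawyers = 45_000     # hypothetical ("more" plumbers in absolute terms)

p_plumber = millionaire_plumbers / plumbers
p_lawyer = millionaire_lawyers / lawyers
print(f"P(millionaire | plumber) = {p_plumber:.1%}")   # 10.0%
print(f"P(millionaire | lawyer)  = {p_lawyer:.1%}")    # ~4.1%
print(f"per-capita ratio         = {p_plumber / p_lawyer:.2f}x")
# With 2.2x as many lawyers, *any* absolute surplus of millionaire plumbers
# implies a per-capita millionaire rate at least 2.2x higher for plumbers.
```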

Comment author: Douglas_Knight 21 January 2010 06:42:55AM 2 points [-]

It cites statistics, and actually says that there are X millionaire lawyers, and X+Y plumbers? It isn't just giving a lot of anecdotes?

According to Wikipedia, it (1) generally cites statistics and (2) says that doctors, lawyers, and accountants save a much lower proportion of their money than other occupations. Google Books says that it doesn't mention plumbers at all.

I would guess that pretty much all lawyers permanently employed at BIGLAW are millionaires and pretty much no other lawyers are; but that's probably enough to beat plumbers. I think the other lawyers have a similar income distribution to plumbers.

Comment author: MichaelVassar 21 January 2010 06:48:32AM 5 points [-]

That seems natural enough to me; it's the net income of the very limited part of you that identifies as "you" because it can sometimes talk and think about abstractions.

Comment author: Eliezer_Yudkowsky 21 January 2010 06:17:01PM 4 points [-]

On the one hand, yes, but on the other hand, I sometimes worry that we're getting a little too cynical around these Hansonian parts.

In any case, cryonics is a one-time expenditure for that part of you. It looms large in the imagination in advance, but afterward the expenditure almost instantly fades into the background of the monthly rent, less salient than burritos.

Comment author: MichaelVassar 22 January 2010 04:18:21AM 6 points [-]

Cynicism is boring. Build a map that matches the territory. That map looks terribly Hansonian but doesn't have its 'cynical' bit set to 'yes'.

Comment author: Douglas_Knight 21 January 2010 07:48:25AM 2 points [-]

The deliberative part of "you" that thinks about cryonics may not be the same part that chooses restaurants, but doesn't it play a role in choosing apartments?

Comment author: MichaelVassar 22 January 2010 04:07:55AM 3 points [-]

Agreed, but the deliberative part may actually think that the larger and better located apartment contributes more to global utility, at least if you are the head of the Singularity Institute and you just spent the last 6 years living with a wife in 200 square feet.

Comment author: Matt_Simpson 21 January 2010 12:58:16AM 2 points [-]

If you have a sufficiently selfish utility function, it may make sense to spend that extra money on French restaurants and the bigger apartment. But otherwise, yes, the lowest-hanging fruit is spending less money on things like going out or new electronic toys.

Comment author: thomblake 20 January 2010 07:09:27PM 1 point [-]

It occurs to me to suggest that donating to both allows you to hedge your bets; one or the other might end up not producing results at all.

Which seems to be a similar impulse to the one that causes people to guess 70% blue and 30% red, though the situation is different enough that it might make sense here.

Comment author: RobertWiblin 29 January 2010 05:09:20PM 2 points [-]

Sorry, this may be a stupid question, but why is it good for people to get cryonically frozen? Obviously if they don't, they won't make it to the future - but other people will be born or duplicated in the future, and the total number of people will be the same.

Why care more about people who live now than future potential people?

Comment author: Alicorn 29 January 2010 05:32:06PM 14 points [-]

Because we exist already, and they don't. Our loss is death; theirs is birth control.

Comment author: RobertWiblin 29 January 2010 05:35:50PM 3 points [-]

Why is it worse to die (and people cryonically frozen don't avoid the pain of death anyway) than to never have been born? Assuming the process of dying isn't painful, they seem the same to me.

Comment author: Alicorn 29 January 2010 05:44:24PM 15 points [-]
  1. Once a person exists, they can form preferences, including a preference not to die. These preferences have real weight. These preferences can also adjust, although not eliminate, the pain of death. If I were to die with a cryonics team standing over me ready to give me a chance at waking up again, I would be more emotionally comfortable than if I were to die on the expectation of ceasing to exist. Someone who does not exist yet does not have such preferences.

  2. People do not all die at the same time. Although an impermanent death is, like a permanent one, also a loss to the living (of time together), it's not the same magnitude of loss. Beyond a certain point, it doesn't matter very much to most people to be able to create new people (not that they wouldn't resent being disallowed).

  3. It's not clear that anyone's birth will really be directly prevented by cryonics. (I mean, except in the sense that all events have some causal impact that influences, among other things, who jumps whose bones when, and therefore who has which children.) A society that would revive cryonics patients probably isn't one that has a population problem such that the cryonics patients make a difference.

Comment author: ciphergoth 29 January 2010 05:25:32PM *  4 points [-]

I care more about myself than future potential people.

More seriously, I value a diversity of minds, and if the future does too they may be glad to have us along.

Comment author: Vladimir_Nesov 29 January 2010 11:47:14PM 4 points [-]

Agree with "myself", disagree with "diversity of minds". If the future needs diversity, it has its random number generators and person templates. Additional argument: death is bad, life-creation is not morally reversible.

Comment author: ciphergoth 30 January 2010 08:30:37AM 3 points [-]

I don't know why I said "more seriously" when it's by far the less defensible argument.

Comment author: sheldon 21 January 2010 01:40:17PM 2 points [-]

if you wake up in the Future, it's probably going to be a nicer place to live than the Present.

How do we know this? How can we possibly think it's possible to know this? I can think of at least three scenarios that seem much more likely than this sunny view that things will just keep progressing while you're dead and when you wake up you'll slip right into a nicer society:

1) We run out of cheap energy and hence cheap food; tensions rise; most of the world turns into what Haiti looks like now.

2) Somebody sets off a nuclear weapon, leading to worldwide retaliation.

3) Humans do keep progressing . . . and evolving, and when you wake up, you'll be in the same position as an ape in today's society. Society is indeed nicer today for us than if we were apes, but it's not necessarily nicer for the actual ape.

Comment author: Blueberry 21 January 2010 05:03:01PM 5 points [-]

It seems unlikely that people would be revived in those scenarios, especially in 1 and 2. As for 3, biological evolution takes a long, long time, and even then it's likely the future humans would provide a decent environment for us if they revive us. Unlike apes, we and future humans will both be capable of communicating and engaging in abstract thought, so I don't think that analogy works.

Comment author: Ryan 23 January 2010 06:15:56AM 6 points [-]

Yep. As far as I can tell a world where people can be and are being revived is almost certainly one I want to live in.

Comment author: Blueberry 24 January 2010 04:07:32AM 10 points [-]

a world where people can be and are being revived is almost certainly one I want to live in.

Exactly, and that's really well stated. By being cryo-preserved, you're self-selecting for worlds where there is a high likelihood that

(a) contracts are honored (your resources are being used to revive you, as you intended),

(b) human lives, even those of very different humans from long ago, are respected (otherwise why go to all the trouble of thawing),

(c) there is advanced neurotechnology sufficient to bring people back, and humanity is still around and has learned to live with it,

and (d) society is rationalist enough not to prohibit cryonics out of fear of zombies or something.

It's not perfect, but it's a good filter.

Comment author: Nick_Tarleton 24 January 2010 04:12:36AM 1 point [-]

Excellent observation!

Comment author: AndyWood 23 January 2010 07:01:34AM *  2 points [-]

This is the single consideration in the cryonics debate that I remain unconvinced of.

It seems very easy for me to imagine lots of futures that others might find worthwhile, that I would find very unpleasant. Off the top of my head: what if society is more regimented? What if one is expected to be very patriotic? What if it is a very collectivist culture? What if I still have to submit to hierarchies of one kind or another, for one reason or another? ... ...

Are there good reasons to believe that human life in the future will be enjoyable to me? Can I do better than beginning with a bottom line that says "The future will be pleasant", and inventing justifications for why that's more likely than not?

Comment author: MichaelGR 21 January 2010 05:13:19PM 4 points [-]

As for 3, biological evolution takes a long, long time, and even then it's likely the future humans would provide a decent environment for us if they revive us. Unlike apes, we and future humans will both be capable of communicating and engaging in abstract thought, so I don't think that analogy works.

Evolution by natural selection is indeed too slow to be a problem, but self-modification via technological means could mean rapid change for humanity.

It might still not be a problem since it's doubtful that a smarter civilization would totally lose the capability to communicate with humans v1.0 (knowing they have a bunch of frozen people around, they'd at least keep a file somewhere about the 21st century, or scan a bunch of brains to learn what they need to know).

And if they could improve themselves, there's a good chance that they'll also be able to improve the revived people so that they can fit into the new society, or at least comfortably accommodate humans 1.0 who don't want to be modified (who knows how a smarter-than-human friendly intelligence with highly advanced technology would deal with that problem? All we can guess is that the solution would probably be pretty effective).

Comment author: Zack_M_Davis 23 January 2010 08:12:31AM 3 points [-]

biological evolution takes a long, long time,

Upload evolution could be very fast (due to clock speedup, fast copying, ability to test and revert mutations, &c.).

Comment author: righteousreason 20 January 2010 11:58:45PM *  2 points [-]

Does anyone know if Blink: The Power of Thinking Without Thinking is a good book?

http://www.amazon.com/Blink-Power-Thinking-Without/dp/0316172324

Amazon.com Review

Blink is about the first two seconds of looking--the decisive glance that knows in an instant. Gladwell, the best-selling author of The Tipping Point, campaigns for snap judgments and mind reading with a gift for translating research into splendid storytelling. Building his case with scenes from a marriage, heart attack triage, speed dating, choking on the golf course, selling cars, and military maneuvers, he persuades readers to think small and focus on the meaning of "thin slices" of behavior. The key is to rely on our "adaptive unconscious"--a 24/7 mental valet--that provides us with instant and sophisticated information to warn of danger, read a stranger, or react to a new idea.

Gladwell includes caveats about leaping to conclusions: marketers can manipulate our first impressions, high arousal moments make us "mind blind," focusing on the wrong cue leaves us vulnerable to "the Warren Harding Effect" (i.e., voting for a handsome but hapless president). In a provocative chapter that exposes the "dark side of blink," he illuminates the failure of rapid cognition in the tragic stakeout and murder of Amadou Diallo in the Bronx. He underlines studies about autism, facial reading and cardio uptick to urge training that enhances high-stakes decision-making. In this brilliant, cage-rattling book, one can only wish for a thicker slice of Gladwell's ideas about what Blink Camp might look like. --Barbara Mackoff

Comment author: MichaelGR 21 January 2010 12:39:53AM *  5 points [-]

I haven't read it, so I can't comment directly on it.

But you should probably know that Gladwell has been criticized a lot for un-scientific methodology and for turning interesting anecdotes and "just-so" stories into generalizations and supposed "laws" (without much evidence).

The most recent example of high profile criticism of Gladwell is probably this review by Steven Pinker: Malcolm Gladwell, Eclectic Detective

I don't know whether this criticism applies to Blink, but if you read it, your BS detector should probably be turned up a notch.

Comment author: knb 21 January 2010 08:55:39AM *  3 points [-]

The Harding hate is sadly predictable. Harding is so abused by people who know nothing about the man. Historians hate him because they have a bias toward hyperactive presidents like TR and FDR.

Rated by the historians in the "worst" category, by contrast, is, you guessed it, Warren G. Harding: a president who successfully promoted economic prosperity, cut taxes, balanced the budget, reduced the national debt, released all of his predecessor's political prisoners, supported anti-lynching legislation, and instituted the most substantial naval arms reduction agreement in world history. Go figure.

Yes, Harding was prone to verbal gaffes, and had a few scandals, but he was basically a solid leader, ahead of his time in many ways, like in civil rights.

Comment author: Alicorn 21 January 2010 12:01:25AM 1 point [-]

I enjoyed Blink. You can read some essays by the author here - if you get a lot out of them, you'll probably react similarly to the book.

Comment author: Bo102010 21 January 2010 03:01:26AM *  1 point [-]

I liked it, though I don't think the promotional material and summaries do justice to the content. The book has many examples of how people who are experts at things can make good snap judgments in their domains of expertise, but it is not about how any normal person can make great decisions without thinking about them.

Also, Malcolm Gladwell could write a cookbook and make it the most entertaining thing you'll read all year.

Comment author: CronoDAS 21 January 2010 04:12:51AM *  3 points [-]

The book has many examples of how people who are experts at things can make good snap judgments in their domains of expertise, but it is not about how any normal person can make great decisions without thinking about them.

Jon Finkel is probably the world's best Magic player. However, he is not good at explaining how to make correct decisions when playing; to him, the right play is simply obvious, and he doesn't even notice all the wrong ones. His skill is almost entirely unconscious.

Comment author: pdf23ds 21 January 2010 10:08:01AM 1 point [-]

Reminds me of Marion Tinsley, the greatest checkers player ever. He lost 7 games out of thousands in his 45-year career of playing for the World Championship, two of them to the program that would eventually go on to solve checkers. (That excludes his early years studying the game.) He was arguably the most dominant master of any game, ever. He, too, couldn't explain his skill.

Comment author: wedrifid 21 January 2010 12:03:32PM 2 points [-]

Do they still have World Championships in checkers now that the game is understood to be a somewhat more complex tic-tac-toe variant?

Comment author: gregconen 23 January 2010 12:20:16AM 3 points [-]

I believe so, though I've heard the first few moves are now randomized, as only perfect play, rather than all board positions, is solved.

Of course, every perfect-information deterministic game is "a somewhat more complex tic-tac-toe variant" from the perspective of sufficient computing power.

Comment author: jhuffman 21 January 2010 05:43:06PM *  2 points [-]

Cryonics does not prevent you from dying. Humans are afraid of dying. Cryonics does not address the problem (fear of death). It instead offers a possible second life.

I'm afraid of dying, because I know that when I am dying I will be very afraid. So I'm afraid of being afraid. Cryonics would offer very little to me right now in terms of alleviating this fear. Sure it might work; but I won't know that it will while I'm dying, and so my fear while dying will not be mitigated.

You might say: hey, wait, jhuff - isn't actually being dead, and not having a chance at a second life, a problem (or at least a missed opportunity)? Well, I don't see how it is once I'm dead. Heck, the expression "I'm dead" doesn't even make sense - there is no consciousness or awareness of being dead - and if I really wanted to be a pedant, I'd argue there is no ego that is dead; only egos that have died.

So anyway until I have died the overwhelming problem with death is my fear of it. After I have died, I don't exist. So certainly no problems there for me (I can't have problems if I don't exist). Cryonics doesn't seem to offer me much utility value.

Comment author: CronoDAS 21 January 2010 07:04:07PM 5 points [-]

Thought:

Is there a significant difference between the process of being suspended and revived, and the process of going to sleep and waking up?

Comment author: jhuffman 21 January 2010 09:45:58PM 5 points [-]

When I go to sleep I expect to wake up.

When I die, even if I had a cryogenic stand-by all ready to go, I would not expect to be revived. So dying would be a lot more emotionally painful than going to sleep.

In the future, if cryonic suspension and revival is an ordinary fact of life (for space travel or whatever), then I think there would not be much difference. The main emotional difference would be that you know you are going to be "away" for a long time. You may know people will miss you, etc. Just like if you were taking a long trip with no communications.

So different from sleep/wake but not different from other ordinary human experiences.

Comment author: denisbider 25 January 2010 03:13:23PM 1 point [-]

I agree; cryonics is failing to "click" with me for largely the same reason - the estimate of my benefiting from cryonics is not 95%, but more like 5%. If the likelihood of my revival and resumption of awareness is only 5%, then it doesn't much alleviate the emotional trauma of death.

Plus, I can imagine the possibility of a harmful revival, where the mind is cloned and resumes awareness, only to become a lab experiment that gets reused tens of thousands of times.

Comment author: Vladimir_Nesov 25 January 2010 03:50:52PM 3 points [-]

I agree; cryonics is failing to "click" with me for largely the same reason - the estimate of my benefiting from cryonics is not 95%, but more like 5%.

Think of it as insurance, in the literal sense. When you buy insurance for your house against fire, there is only something like a 0.2% chance or less that you'll benefit from having bought it (you only benefit if a fire happens), and a 99.8% chance that you'll only lose money by paying for insurance - which is, by the way, not a trivial sum.

The analogy is not intuitively salient at first sight, because "fire" may connote "death", while actually the analogy likens "fire" to successful revival, and death is just a fact of the scenery. A cryonics contract insures you against the risk of successful revival. If it turns out that you can be successfully revived, then you get the payout of an open-ended future.
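
The analogy can be put in expected-value terms (a sketch: only the 0.2% fire figure is from the comment above; every other number is an invented placeholder):

```python
# House insurance: certain small cost, low-probability large payout.
p_fire = 0.002             # chance you ever benefit, from the comment
premiums_total = 40_000    # $ paid over the life of the policy, assumed
payout = 300_000           # $, assumed
ev_insurance = p_fire * payout - premiums_total
print(f"EV(fire insurance) = ${ev_insurance:,.0f}")   # deeply negative

# Cryonics: the same structure, with "fire" replaced by successful revival.
p_revival = 0.05           # assumed, matching the 5% estimate upthread
costs_total = 24_000       # $ over a lifetime of dues + insurance, assumed
value_of_future = 10_000_000   # $-equivalent of an open-ended future, assumed
ev_cryonics = p_revival * value_of_future - costs_total
print(f"EV(cryonics)       = ${ev_cryonics:,.0f}")
```

On these made-up numbers fire insurance is EV-negative (people buy it anyway because the loss would be catastrophic and utility is concave in money), while cryonics needs only a modest revival probability and a large value term to come out positive; the real dispute lives entirely in those two assumptions.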

Comment deleted 25 January 2010 03:22:46PM *  [-]
Comment author: Morendil 25 January 2010 05:19:58PM 1 point [-]

5% is how many times better than 0%?

Comment author: ciphergoth 25 January 2010 05:57:30PM 1 point [-]

But this invites Pascal's Wager/Pascal's Mugging type arguments. It's not enough to argue that it's more than zero - it has to be enough to be worth the investment.

Comment author: Furcas 25 January 2010 06:45:01PM *  4 points [-]

The real flaw in Pascal's Wager isn't that the probability of getting the desired payoff is extremely low; it's that the probability of getting the payoff by holding any one belief from a set of different beliefs is the same. For example, the probability of being rewarded for being an atheist by a God who loves epistemic rationalism is at least as big as the probability of being rewarded by Yahweh for being a Christian.

The probability of cryonics getting us the payoff, however, is a lot bigger than the probability that not signing up for cryonics will get us the payoff, so it's not a Pascal's Wager type argument to point out that cryonics is worth it even if the probability of it working is very small.
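
Furcas's distinction, in expected-value form (every probability below is invented purely to show the structure of the argument):

```python
PAYOFF = 10**6   # utility of the huge payoff, the same in both problems

# Pascal's Wager: each belief reaches the payoff with a (symmetrically) tiny
# probability, so the wager gives no reason to prefer one belief over another.
p_payoff_if_christian = 1e-9
p_payoff_if_atheist = 1e-9   # e.g., a rationalist-rewarding god
edge_wager = (p_payoff_if_christian - p_payoff_if_atheist) * PAYOFF
print(f"Wager edge:    {edge_wager:+.6f}")    # ~0: the wager decides nothing

# Cryonics: the probability of revival is small, but the probability of
# revival *without* signing up is vastly smaller still.
p_revival_if_signed_up = 0.05
p_revival_if_not = 1e-6      # some future resurrection-tech longshot
edge_cryonics = (p_revival_if_signed_up - p_revival_if_not) * PAYOFF
print(f"Cryonics edge: {edge_cryonics:+,.0f}")   # large: the action matters
```

What drives the decision is the difference in payoff probability between the available actions, not the absolute size of either probability.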

Comment author: Cyan 21 January 2010 05:51:55PM *  7 points [-]

So anyway until I have died the overwhelming problem with death is my fear of it.

For me, the overwhelming problem with death is that I don't get to exist anymore.

If you're going to be afraid of dying whether or not you've signed up for cryonics, then your decision not to sign up cannot depend on your anticipation of being afraid, as that is invariant across the two scenarios.

Comment author: jhuffman 21 January 2010 06:02:42PM *  1 point [-]

For me, the overwhelming problem with death is that I don't get to exist anymore.

I don't really understand that statement. Your problem - here and now - is that after you die you don't exist anymore?

I can't tell you what your problems or fears are but is it possible the real problem here and now is that you are afraid of not existing after your death?

edit to follow up on this remark:

If you're going to be afraid of dying whether or not you've signed up for cryonics, then your decision not to sign up cannot depend on your anticipation of being afraid, as that is invariant across the two scenarios.

So then Cryonics is just a solution looking for a problem. I don't have a problem it can solve.

Comment author: JGWeissman 21 January 2010 06:57:32PM 8 points [-]

Fear of not existing after death is not just some silly uncomfortable emotion to be calmed. Rather it reveals a disconnect between one's preference and expectations about the actual state of reality.

The real problem is not existing after death. Fear is a way of representing that.

Comment author: jhuffman 21 January 2010 09:35:50PM *  2 points [-]

Fear of not existing after death is not just some silly uncomfortable emotion to be calmed. Rather it reveals a disconnect between one's preference and expectations about the actual state of reality.

I never said it was silly, I hope it didn't come across that way. And I am not at all suggesting that we shouldn't prefer life, and shouldn't take all reasonable steps to continue living as long as living is worthwhile.

Comment author: Cyan 21 January 2010 06:31:30PM *  5 points [-]

I don't really understand that statement. Your problem - here and now - is that after you die you don't exist anymore?

I value my continued existence; I'm surprised that this is at all confusing.

Is penicillin also a solution looking for a problem? How about looking both ways before you cross the street? Do you really place no value on the longer life you would have the possibility of living if you signed up? If so, why does the same consideration not also extend to the common death-preventing steps (ETA: limit that to sudden death, the kind where you experience no opportunity to feel fear) you presumably currently take?

Comment author: jhuffman 21 January 2010 06:55:01PM 2 points [-]

Is penicillin also a solution looking for a problem? How about looking both ways before you cross the street?

Of course not: penicillin prevents death. So does looking both ways before I cross the street. Cryonics does not prevent death.

Do you really place no value on the longer life you would have the possibility of living if you signed up?

Well, I could try to calculate the utility based on my guess at the odds of it working, but I estimate that the cost of the time invested in doing that would exceed the marginal utility I'd find when finished. So I'm not going to look into it, for the same reason that I don't read all the email messages I get from potential business partners in Nigeria. Surely there must be a chance that one of those is real, but I consider that chance to be so vanishingly small that it's -EV for me to read the emails.

Comment author: thomblake 21 January 2010 08:13:00PM 4 points [-]

Funny that your expected value from posting this comment was higher than from researching cryonics.

Comment author: cousin_it 21 January 2010 02:44:15PM 1 point [-]

With an unforgivable naivete, a childish stupidity, we all still think history is leading us towards good, that some happiness awaits us ahead; but it will bring us to blood and fire, such blood and such fire as we haven't seen yet.

-- Nikolai Strakhov, 1870s

Comment author: PatSwanson 27 December 2012 09:27:34PM 2 points [-]

Compared to now, he was wrong.

Why should we think he will be wrong about our future when he was wrong about his own?

Comment author: xamdam 01 September 2010 04:03:22PM 1 point [-]

when they told me that I didn't have to understand the prayers in order for them to work so long as I said them in Hebrew

I am not defending OJ in general, but your objection, while philosophically valid, was misplaced. It's ok, you were only 5 :)

Talmudic law is, let's say, a "ritual law", in which performance of certain acts is "fulfilled" in a purely legalistic sense. There is a long-standing argument about whether mitzvot (commandments) require intentional performance to be fulfilled; e.g., if you "hear the shofar" by accident, you do not have to hear it again. Given that prayer is a ritual act, the same kind of technical question arises: whether understanding the words is required for fulfilling the commandment to pray three times a day. This is all completely orthogonal to philosophy. There is even a Talmudic expression, "vile person with the Torah's permission", for someone who observes all the laws and is vile nonetheless. Your objection was philosophical, while the law is purely technical and internally consistent that way.

Comment author: ialdabaoth 31 December 2013 07:48:42PM *  0 points [-]

Anecdotally, when I was a child I was a super-clicker. (I often wonder what child-Ialdabaoth would have accomplished, if not for sub-100-IQ parents, a fearfully fundamentalist Christian upbringing, and a clichéd bad experience with the public school system.)

As an adult, I find that it is much, much harder for me to just "click" on things - and it is invariably due to a panic reaction when presented with information that might cause me to lose status among an imagined group of violent authoritarians.

It would be interesting to see how many more "clicks" can occur among people given muscle relaxants and anti-anxiety drugs before being given a particular logos-based "clickable" pitch, vs. an ethos-based "control" pitch.