Followup to: Normal Cryonics

Yesterday I spoke of that cryonics gathering I recently attended, where travel by young cryonicists was fully subsidized, leading to extremely different demographics from conventions of self-funded activists.  34% female, half of those in couples, many couples with kids - THAT HAD BEEN SIGNED UP FOR CRYONICS FROM BIRTH LIKE A GODDAMNED SANE CIVILIZATION WOULD REQUIRE - 25% computer industry, 25% scientists, 15% entertainment industry at a rough estimate, and in most ways seeming (for smart people) pretty damned normal.

Except for one thing.

During one conversation, I said something about there being no magic in our universe.

And an ordinary-seeming woman responded, "But there are still lots of things science doesn't understand, right?"

Sigh.  We all know how this conversation is going to go, right?

So I wearily replied with my usual, "If I'm ignorant about a phenomenon, that is a fact about my state of mind, not a fact about the phenomenon itself; a blank map does not correspond to a blank territory -"

"Oh," she interrupted excitedly, "so the concept of 'magic' isn't even consistent, then!"

Click.

She got it, just like that.

This was someone else's description of how she got involved in cryonics, as best I can remember it, and it was pretty much typical for the younger generation:

"When I was a very young girl, I was watching TV, and I saw something about cryonics, and it made sense to me - I didn't want to die - so I asked my mother about it.  She was very dismissive, but tried to explain what I'd seen; and we talked about some of the other things that can happen to you after you die, like burial or cremation, and it seemed to me like cryonics was better than that.  So my mother laughed and said that if I still felt that way when I was older, she wouldn't object.  Later, when I was older and signing up for cryonics, she objected."

Click.

It's... kinda frustrating, actually.

There are manifold bad objections to cryonics that can be raised and countered, but the core logic really is simple enough that there's nothing implausible about getting it when you're eight years old (eleven years old, in my case).

Freezing damage?  I could go on about modern cryoprotectants and how you can see under a microscope that the tissue is in great shape, and there are experiments underway to see if they can get spontaneous brain activity after vitrifying and devitrifying, and with molecular nanotechnology you could go through the whole vitrified brain atom by atom and do the same sort of information-theoretical tricks that people do to recover hard drive information after "erasure" by any means less extreme than a blowtorch...

But even an eight-year-old can visualize that freezing a sandwich doesn't destroy the sandwich, while cremation does.  It so happens that this naive answer remains true after learning the exact details and defeating objections (a few of which are even worth considering), but that doesn't make it any less obvious to an eight-year-old.  (I actually did understand the concept of molecular nanotech at eleven, but I could be a special case.)

Similarly: yes, really, life is better than death - just because transhumanists have huge arguments with bioconservatives over this issue, doesn't mean the eight-year-old isn't making the right judgment for the right reasons.

Or: even an eight-year-old who's read a couple of science-fiction stories and who's ever cracked a history book can guess - not for the full reasons in full detail, but still for good reasons - that if you wake up in the Future, it's probably going to be a nicer place to live than the Present.

In short - though it is the sort of thing you ought to review as a teenager and again as an adult - from a rationalist standpoint, there is nothing alarming about clicking on cryonics at age eight... any more than I should worry about my first schism with Orthodox Judaism coming at age five, when they told me that I didn't have to understand the prayers in order for them to work so long as I said them in Hebrew.  It really is obvious enough to see as a child, the right thought for the right reasons, no matter how much adult debate surrounds it.

And the frustrating thing was that - judging by this group - most cryonicists are people to whom it was just obvious.  (And who then actually followed through and signed up, which is probably a factor-of-ten or worse filter for Conscientiousness.)  It would have been convenient if I'd discovered some particular key insight that convinced people.  If people had said, "Oh, well, I used to think that cryonics couldn't be plausible if no one else was doing it, but then I read about Asch's conformity experiment and pluralistic ignorance."  Then I could just emphasize that argument, and people would sign up.

But the average experience I heard was more like, "Oh, I saw a movie that involved cryonics, and I went on Google to see if there was anything like that in real life, and found Alcor."

In one sense this shouldn't surprise a Bayesian, because the base rate of people who hear a brief mention of cryonics on the radio and have an opportunity to click will be vastly higher than the base rate of people who are exposed to detailed arguments about cryonics...

Yet the upshot is that - judging from the generation of young cryonicists at that event I attended - cryonics is sustained primarily by the ability of a tiny, tiny fraction of the population to "get it" just from hearing a casual mention on the radio.  Whatever part of one-in-a-hundred-thousand isn't accounted for by the Conscientiousness filter.

If I suffered from the sin of underconfidence, I would feel a dull sense of obligation to doubt myself after reaching this conclusion, just like I would feel a dull sense of obligation to doubt that I could be more rational about theology than my parents and teachers at the age of five.  As it is, I have no problem with shrugging and saying "People are crazy, the world is mad."

But it really, really raises the question of what the hell is in that click.

There's this magical click that some people get and some people don't, and I don't understand what's in the click.  There's the consequentialist/utilitarian click, and the intelligence explosion click, and the life-is-good/death-is-bad click, and the cryonics click.  I myself failed to click on one notable occasion, but the topic was probably just as clickable.

(In fact, it took that particular embarrassing failure in my own history - failing to click on metaethics, and seeing in retrospect that the answer was clickable - before I was willing to trust non-click Singularitarians.)

A rationalist faced with an apparently obvious answer must assign some probability that a non-obvious objection will appear and defeat it.  I do know how to explain the above conclusions at great length, and defeat objections, and I would not be nearly as confident (I hope!) if I had just clicked five seconds ago.  But sometimes the final answer is the same as the initial guess; if you know the full mathematical story of Peano Arithmetic, 2 + 2 still equals 4 and not 5 or 17 or the color green.  And some people very quickly arrive at that same final answer as their best initial guess; they can swiftly guess which answer will end up being the final answer, for what seem even in retrospect like good reasons.  Like becoming an atheist at eleven, then listening to a theist's best arguments later in life, and concluding that your initial guess was right for the right reasons.
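(As a concrete aside, the "full mathematical story" here really is short enough to write down.  A minimal sketch in Lean - the names like PNat and add are ad hoc for illustration, not the standard library - in which 2 + 2 = 4 falls out by pure computation:)

```lean
-- Peano naturals built from zero and successor, with addition
-- defined by recursion; 2 + 2 = 4 holds by definitional unfolding.
inductive PNat where
  | zero : PNat
  | succ : PNat → PNat

open PNat

def add : PNat → PNat → PNat
  | m, zero   => m
  | m, succ n => succ (add m n)

def two  : PNat := succ (succ zero)
def four : PNat := succ (succ (succ (succ zero)))

-- The final answer is the same as the initial guess:
theorem two_add_two : add two two = four := rfl
```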

We can define a "click" as following a very short chain of reasoning, which in the vast majority of other minds is derailed by some detour and proves strongly resistant to re-railing.

What makes it happen?  What goes into that click?

It's a question of life-or-death importance, and I don't know the answer.

That generation of cryonicists seemed so normal apart from that...

What's in that click?

The point of the opening anecdote about the Mind Projection Fallacy (blank map != blank territory) is to show (anecdotal) evidence that there's something like a general click-factor, that someone who clicked on cryonics was able to click on mysteriousness=projectivism as well.  Of course I didn't expect that I could just stand up amid the conference and describe the intelligence explosion and Friendly AI in a couple of sentences and have everyone get it.  That high of a general click factor is extremely rare in my experience, and the people who have it are not otherwise normal.  (Michael Vassar is one example of a "superclicker".)  But it is still true AFAICT that people who click on one problem are more likely than average to click on another.

My best guess is that clickiness has something to do with failure to compartmentalize - missing, or failing to use, the mental gear that lets human beings believe two contradictory things at the same time.  Clicky people would tend to be people who take all of their beliefs at face value.

The Hansonian explanation (not necessarily endorsed by Robin Hanson) would say something about clicky people tending to operate in Near mode.  (Why?)

The naively straightforward view would be that the ordinary-seeming people who came to the cryonics gathering did not have any extra gear that magically enabled them to follow a short chain of obvious inferences, but rather, everyone else had at least one extra insanity gear active at the time they heard about cryonics.

Is that really just it?  Is there no special sanity to add, but only ordinary madness to take away?  Where do superclickers come from - are they just born lacking a whole lot of distractions?

What the hell is in that click?

416 comments

My best guess is that clickiness has something to do with failure to compartmentalize - missing, or failing to use, the mental gear that lets human beings believe two contradictory things at the same time. Clicky people would tend to be people who take all of their beliefs at face value.

One of the things that I've noticed about this is that most people do not expect to understand things. For most people, the universe is a mysterious place filled with random events beyond their ability to comprehend or control. Think "guessing the teacher's password", not just in school or about knowledge, but about everything.

Such people have no problem with the idea of magic, because everything is magic to them, even science.

An anecdote: once, when I still worked as a software developer/department manager in a corporation, my boss was congratulating me on a million-dollar project (revenue, not cost) that my team had just turned in precisely on time with no crises.

Well, not congratulating me, exactly. He was saying, "wow, that turned out really well", and I felt oddly uncomfortable. After getting off the phone, I realized a day or so later that he was talking about it like i... (read more)

Such people have no problem with the idea of magic, because everything is magic to them, even science.

Years ago I and three other people were training for a tech support job. Our trainer was explaining something (the tracert command) but I didn't understand it because his explanation didn't seem to make sense. After asking him more questions about it, I realized from his contradictory answers that he didn't understand it either. The reason I mention this is that my three fellow trainees had no problem with his explanation, one even explicitly saying that she thought it made perfect sense.

Huh. I guess that if I tell myself, "Most people simply do not expect reality to make sense, and are trying to do entirely different things when they engage in the social activity of talking about it", then I do feel a little less confused.

Most people simply do not expect reality to make sense

More precisely, different people are probably using different definitions of "make sense"... and you might find it easier to make sense of if you had a more detailed understanding of the ways in which people "make sense". (Certainly, it's what helped me become aware of the issue in the first place.)

So, here are some short snippets from the book "Using Your Brain For A Change", wherein the author comments on various cognitive strategies he's observed people using in order to decide whether they "understand" something:

There are several kinds of understanding, and some of them are a lot more useful than others. One kind of understanding allows you to justify things, and gives you reasons for not being able to do anything different....

A second kind of understanding simply allows you to have a good feeling: "Ahhhh." It's sort of like salivating to a bell: it's a conditioned response, and all you get is that good feeling. That's the kind of thing that can lead to saying, "Oh, yes, 'ego' is that one up there on the chart. I've seen that before; yes, I understand."

... (read more)
7MichaelGR14y
Would you recommend that book? ("Using Your Brain For A Change") Is the rest of it insightful too, or did you quote the only good part?

Is the rest of it insightful too, or did you quote the only good part?

There are a lot of other good parts, especially if you care more about practice than theory. However, I find that personally, I can't make use of many of the techniques provided without the assistance of a partner to co-ordinate the exercises. It's too difficult to pay attention to both the steps in the book and what's going on in my head at the same time.

I'm still confused, but now my eyes are wide with horror, too. I don't dispute what pjeby said; in retrospect it seems terribly obvious. But how can we deal with it? Is there any way to get someone to start expecting reality to make sense?

I have a TA job teaching people how to program, and I watch as people go from desperately trying to solve problems by blindly adapting example code that they don't understand to actually thinking and being able to translate their thoughts into working, understandable programs. I think the key of it is to be thrust into situations that require understanding instead of just guessing the teacher's password -- the search space is too big for brute force. The class is all hands-on, doing toy problems that keep people struggling near the edge of their ability. And it works, somehow! I'm always amazed when they actually, truly learn something. I think this habit of expecting to understand things can be taught in at least one field, albeit painfully.

Is this something that people can learn in general? How? I consider this a hugely important question.

I wouldn't be surprised if thinking this way about computer programs transfers fairly well to other fields if people are reminded to think like programmers or something like that. There are certainly a disproportionate number of computer programmers on Less Wrong, right?

8wedrifid14y
And those that aren't computer programmers would display a disproportionate amount of aptitude if they tried.

Certainly; I think this is a case where there are 3 types of causality going on:

  1. Using Less Wrong makes you a better programmer. (This is pretty weak; for most programmers, there are probably other things that will improve your programming skill a hundred times faster than reading Less Wrong.)
  2. Improving as a programmer makes you more attracted to Less Wrong.
  3. Innate rationality aptitude makes you a better programmer and more attracted to Less Wrong. (The strongest factor.)
2[anonymous]9y
I am planning an article about how to use LW ideas for debugging. However, there is a meta-idea behind a lot of LW ideas that I have not yet seen really written down, and I wonder what the right term for it would be. It is roughly that in order to figure out what could cause an effect, you need to look not only at stuff, but primarily at the differences between stuff. So if a bug appears in situation 1 and not in 2, don't look at all aspects of situation 1, just the aspects that differ from situation 2. Does this have a name? It sounds very basic, but I was not really doing this before, because I had the mentality that to really solve a problem I need to understand all parts of a "machine", not just the malfunctioning ones.
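A minimal sketch of that heuristic in code (the situations and field names are hypothetical, purely to illustrate): instead of studying every aspect of the failing situation, diff it against the working one and investigate only what differs.

```python
# Two hypothetical environment snapshots: one where the bug appears,
# one where it doesn't.
working = {"os": "linux", "locale": "en_US", "driver": "v2.1", "cache": "on"}
broken  = {"os": "linux", "locale": "hu_HU", "driver": "v2.1", "cache": "off"}

# Keep only the aspects that differ between the two situations;
# everything shared by both is (provisionally) ruled out as a cause.
suspects = {key: (working[key], broken[key])
            for key in working if working[key] != broken[key]}

print(suspects)  # {'locale': ('en_US', 'hu_HU'), 'cache': ('on', 'off')}
```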
1[anonymous]9y
Does it not follow from the Pareto principle?
1[anonymous]9y
I don't think it really does... or even that it is necessarily true. The kind of issues I find tend to have a smoother distribution. It really depends on the categorization. Is user error one category, or one per module, or one per function, or?...
4[anonymous]9y
I think not, but it may matter what your native language is. As mine is not English and programming languages are generally based on it, when I was 12 and exploring Turbo Pascal, I simply thought of it as Precise-English, while what my language tutor taught me was Sloppy-English. (I still don't really understand on a gut level why they tend to compare programming with math! Math is Numberish and Greekletterish to me, while programming is Precise-English! Apparently I really, really care what symbols look like, for some reason.) Anyway, if it is your own native language, it may be far more confusing why a precise version of it exists and can be executed by a computer, and maybe overcoming that challenge helps you.

I think I would dislike it if programming languages were based on my language, because it would always confuse me that when they call a class a class, I think of a school class. For example, when I was learning assembler, I got really confused by the term accumulator. I thought those belonged in cars - we call batteries accumulators here. I must have been at least 16 years old when I finally understood that words can have multiple meanings and all of them can be correct usage, but even now at 37 I don't exactly like it. It is untidy... but if I had to deal with that sort of challenge, that could potentially make me a better problem solver.
2TheOtherDave13y
If I want to condition someone into applying some framing technique T, I can put them in situations where their naive framing Fn obtains no reward and an alternate framing Fa does, and Fa is a small inferential step away from Fn when using T, and no framings easily arrived at using any other technique are rewarded.

The programming example you give is a good one. There's a particular technique required to get from a naive framing of a problem to a program that solves that problem, and until you get the knack of thinking that way your programs don't work, and writing a working program is far more rewarding than anything else you might do in a programming class. Something similar happens with puzzle-solving, which is another activity that a lot of soi-disant rationalists emphasize.

But... is any of that the same as getting people to "expect reality to make sense"; is it the same as that "click" the OP is talking about? Is any of it the same as what the LW community refers to as "being rational"? I'm not sure, actually. The problem is that in all of these cases the technique comes out of an existing scenario with an implicit goal, and we are trying to map that post facto to some other goal (rationality, click, expecting reality to make sense).

The more reliable approach would be to start from an operational definition of our goal (or a subset of our goal, if that's too hard) and artificially construct scenarios whose reward conditions depend on spanning inferential distances that are short using those operations and long otherwise... perhaps as part of a "Methods of Rationality" video game or something like that.
4ialdabaoth10y
This is a testable hypothesis. To test it, see how deeply wise you appear when explaining to people who seem crazy that everything actually does have an underlying explanation, and then giving a quick and salient example.

During my military radio ops course, I realized that the woman teaching us about different frequencies literally thought that 'higher' frequencies were higher off the ground. Like you, I found her explanations deeply confusing, though I suspect most of the other candidates would have said it made sense. (Despite being false, this theory was good enough to enable radio operations - though presumably not engineering).

Thankfully I already had a decent grounding in EM, otherwise I would have yet more cached garbage to clear - sometimes it's worse than finding the duplicate mp3s in my music library.

4bogus14y
Could you clarify? To properly understand how traceroute works one would need to know about the TTL field in the IP header (and how it's normally decremented by routers) and the ICMP TTL Exceeded message. But I'm not sure that a tech support drone would be expected to understand any of these.

To properly understand how traceroute works one would need to know about the TTL field

I did learn about this on my own that day, but the original confusion was at a quite different level: I asked whether the times on each line measured the distance between that router and the previous one, or between that router and the source. His answer: "Both." A charitable interpretation of this would be "They measure round trip times between the source and that router, but it's just a matter of arithmetic to use those to estimate round trip times between any two routers in the list" -- but I asked him if this was what he meant and he said no. We went back and forth for a while until he told me to just research it myself.

Edit: I think I remember him saying something like "You're expecting it to be logical, but things aren't always logical".
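For reference, a minimal sketch of the mechanism bogus describes, assuming the classic UDP-probe implementation (the port is the conventional traceroute default, and the raw ICMP socket needs root privileges). It also shows why each printed time is a round trip between the source and that hop, not between consecutive routers:

```python
import socket
import time

def traceroute(dest: str, max_hops: int = 30, port: int = 33434) -> None:
    """Send UDP probes with increasing TTL; the router at each hop
    decrements TTL to zero and replies with ICMP Time Exceeded,
    revealing its address."""
    dest_addr = socket.gethostbyname(dest)
    for ttl in range(1, max_hops + 1):
        recv_sock = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                                  socket.getprotobyname("icmp"))
        send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        recv_sock.settimeout(2.0)
        recv_sock.bind(("", port))
        start = time.monotonic()
        send_sock.sendto(b"", (dest_addr, port))
        try:
            _, addr = recv_sock.recvfrom(512)
            # The measured time is always source <-> this hop,
            # not the distance between adjacent routers.
            rtt_ms = (time.monotonic() - start) * 1000
            print(f"{ttl:2d}  {addr[0]:15s}  {rtt_ms:.1f} ms")
            if addr[0] == dest_addr:
                break
        except socket.timeout:
            print(f"{ttl:2d}  *")
        finally:
            send_sock.close()
            recv_sock.close()

# traceroute("example.com")  # run as root for the raw ICMP socket
```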

Jesus Christ. "Things aren't always logical." The hallmark of a magic-thinker. Of course everything is always logical. The only case it doesn't seem that way is when one lacks understanding.

1MichaelGR14y
Looks like he was just repeating various teacher's passwords.

This is overwhelmingly how I perceive most people. This in particular: 'reality is social'.

I have personally traced the difference, in myself, to receiving this book at around the age of three or four. It has illustrations of gadgets and appliances, with cut-out views of their internals. I learned almost as soon as I was capable of learning, that nothing is a mysterious black box, things that seem magical have internal detail, and there are explanations for how they work. Whether or not I had anything like a pre-existing disposition that made me love and devour the book in the first place, I still consider it to have had a bigger impact on my whole world view than anything else I can remember.

5VAuroch10y
I got Macaulay's The Way Things Work (the original) at a slightly higher age. I suspect a big reason I became a computer scientist was the joy of puzzling through the adder diagrams and understanding why they worked.
1Nisan10y
I traced those adder diagrams as a child as well, and it surely was a formative experience.
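(For anyone who never pored over those diagrams: a rough sketch in code of the standard gate construction they depict - XORs for the sum bit, ANDs and an OR for the carry. The function names here are mine, not Macaulay's.)

```python
# A one-bit full adder built from logic gates.
def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    s = a ^ b ^ carry_in                         # sum bit
    carry_out = (a & b) | (carry_in & (a ^ b))   # carry bit
    return s, carry_out

# Chain full adders to add two 4-bit numbers, bit by bit,
# just as the book's diagrams chain them on the page.
def add_4bit(x: int, y: int) -> int:
    carry, result = 0, 0
    for i in range(4):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result | (carry << 4)

assert add_4bit(6, 7) == 13
```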
3FourFire10y
This is mine, which I received at around age six. I don't recall how many tens of times I read and reread those pages.

This is worth an entire post by itself. Cheers.

7wedrifid14y
Yes, please!

So I called him back and had a little chat about it. The idea that the project had succeeded because I designed it that way had not occurred to him, and the idea that I had done it by the way I negotiated the requirements in the first place -- as opposed to heroic efforts during the project -- was quite an eye opener for him.

The Inside View says, 'we succeeded because of careful planning of X, Y, and Z, and our own awesomeness.' The Outside View says, 'most large software projects fail, but some succeed anyway.'

The Inside View says, 'we succeeded because of careful planning of X, Y, and Z, and our own awesomeness.' The Outside View says, 'most large software projects fail, but some succeed anyway.'

What makes you think it was the only one, or one of a few out of many?

The specific project was only relevant because my bosses prior to that point in time already implicitly understood that there was something my team was doing that got our projects done on time when others under their authority were struggling - but they attributed it to intelligence or skill on my part, rather than our methodology/philosophy.

The newer boss, OTOH, didn't have any direct familiarity with my track record, and so didn't attribute the success to me at all, except that obviously I hadn't screwed it up.

8Bruno Mailly2y
Reminiscent of Coding Horror's "Separating Programming Sheep from Non-Programming Goats":

Ask programming students what a trivial code snippet of an unknown language does.

* Some form a consistent model. Right or wrong, these can learn programming.
* Others imagine a different working every time they encounter the same instruction. These will fail no matter what.

I suspect they treat it as a discussion, where repeating a question means a new answer is wanted.
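The test items were reportedly of roughly this flavor (rendered here in Python; the study used a Java-like notation, and only the assignment semantics matter):

```python
# The kind of question asked of students who had never programmed:
a = 10
b = 20
a = b
print(a, b)  # -> 20 20
# Students who applied *some* consistent rule -- even a wrong one,
# like "a and b swap values" -- were the ones predicted to succeed;
# students whose answer changed each time they saw "a = b" were not.
```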
6thomblake14y
Not usually a fan of your thoughts, but these seem right on the money.
3[anonymous]9y
Sorry, old comment, but: There are cases where understanding something may lower your status, or at least seem to, and this may play a role.

About 10-20 years ago, it was computers: understanding them made you a geek, with everything that comes with it. So people would very proudly say "I am not a techie, I just use it but I don't understand it!" - meaning roughly that their social status was higher than that of techies. Of course they did not mean to have higher social status than Bill Gates, just the local IT department. There is something similar going on with young men being really proud of not knowing how to cook, as this brag suggests either affording to eat out or being attractive enough to always find girlfriends who like to cook.

The point is, ignorance can be a luxury and thus a pretty big status signal; affording not to understand certain things can be like that. On a parallel Earth, I could imagine the richest kids even claiming they cannot read, because it would be a huge "I don't need to work to survive!" message. And this can easily be internalized: "I don't want to look like the kind of person who needs this knowledge" -> "I don't understand"
2Gunnar_Zarncke10y
thus the concept of gods.
2gwern13y
Relevant quote: http://lesswrong.com/lw/26y/rationality_quotes_may_2010/1y6j?c=1
2Sniffnoy14y
Well, anything mathematical would be an exception to that, at the least.
5pjeby14y
If you really grasp something mathematical, you ought to be able to apply it -- at least in principle.
2Sniffnoy14y
OK but that's not really what "control" normally means, is it? "Manipulate" might be a better word here.
3RobinZ14y
"Manipulate" would also extend the thinking-as-holding metaphor of "grasp". (I have to admit that I was confused by "control" as well.)

Once upon a time, I had a job where most of what I did involved signing up people for cryonics. I'm guessing that few other people on this site can say they've ever made a salary off that (unless you're reading this, Derek), and so I can speak with some small authority. Over those four excruciating years at Alcor, I spent hundreds of hours discussing the subject with hundreds of people.

Obviously I never came up with a definitive answer as to why some people get it and most don't. But I developed a working map of the conceptual space. Rather than a single "click," I found that there were a series of memetic filters.

The first and largest by far tended to be religious, which is to say, afterlife mythology. If you thought you were going to Heaven, Kolob, another plane of existence, or another body, you wouldn't bother investing the money or emotional effort in cryonics.

Only then came the intellectual barriers, but the boundary could be extremely vague. I think that the vast majority of people didn't have any trouble grasping the basic scientific arguments for cryonics; the actual logic filter always seemed relatively thin to me. Instead, people used their intellect to ra... (read more)

Thank you for writing this.

If you ever feel like writing a longer post about your experience in the cryonics world, I'd love to read it and I suspect others would too.

-5Karlo11y

If I might ask: why did you quit?

3John_Maxwell14y
Betcha he got frustrated with how irrational people were. No joke.
5byrnema14y
I wish I had more of the knowledge that you have so that I could use it to update my models of people -- at the moment, I can't locate a place in my model to accommodate people being so reluctant to sign up for cryonics while believing that it could work.

(a) Could you give some information regarding the setting? Were these people that approached you, or did you approach them? Did you meet in a formal place, like an office in Alcor, or an informal setting, like on their way from one place to another?

(b) How long would your conversations with these religious people be, on average? It seems they would have already made up their minds. How did their fear of screwing up their afterlife square with the typical belief that people can be resuscitated after 'flat-lining' in a hospital with souls intact?

(c) What would you think of the hypothesis that people don't much value life outside their social connections? (A counter-argument is that people have taken boats and sailed to strange and foreign continents throughout history, but maybe they represent a small fraction of personalities.) Were people much more likely to sign up in groups of 3 or more?

(d) This I find least intuitive, because cryonics would be a way to be in denial about death. They could imagine that the probability of successful awakening is as high as they want it to be. Do you think that they could have been repulsed or disoriented by something else like -- just speculating -- a primal fear of being a zombie / being punished for being greedy / the emotional consequences of having unfounded hope in immortality?

If you have an interest in answering any subset of these questions, thanks in advance.
0Capla9y
If 90-95% made it past filter one, what are your estimates for the other filters?
0whowhowho11y
What's the connotation of that? That they're deplorably irrational, or that ardent Cryonicists are weirdly asocial?
5TheAncientGeek10y
Waking up in a future society, where you don't know anyone, your skills are useless, etc., is equivalent to exile, which is generally considered a punishment.
1Richard_Kennaway10y
Exile is only a punishment because it is worse than staying at home. When the alternative is being dead, most people will take exile, as demonstrated by refugees from war zones.
2TheAncientGeek10y
Who stay with their families and compatriots.
0Richard_Kennaway10y
If they can. And with enough signed up, the same may be true of those taking the long sleep.
0TheAncientGeek10y
But there is nothing they can do to exert any control over that.

Is that really just it? Is there no special sanity to add, but only ordinary madness to take away?

I think this is the primary factor. I've got a pretty amusing story about this.

Last week I met a relatively distant relative, a 15-year-old guy who's in a sports-oriented high school. He plays football, has little scientific, literary or intellectual background, and is quite average and normal in most conceivable ways. Some TV program on Discovery was about "robots", and in a spontaneous 15-minute conversation that unfolded shortly afterward, I managed to explain to him the core problems of FAI, without him getting stuck at any point of my arguments. I'm fairly sure that he had no previous knowledge of the subject.

First I made a remark in connection to the TV program's poetic question about what if robots will be able to get most human work done; I said that if robots get the low wage jobs, humans would eventually get paid more on average, and the problem is only there when robots can do everything humans can and somehow end up actually doing all those things.

Then he asked if I think they'll get that smart, and I answered that it's quite possible in this century. I explained rec... (read more)

in Hungary

Surprise level went down from gi-normous to merely moderate at this point.

7Kevin14y
This is a great post, and I'd be interested in seeing you write out a fuller version of what you said to your relative as a top level post, something like "Friendly AI and the Singularity explained for adolescents."

Also, do you speak English as a second language? If so, I am especially impressed with your writing ability.

On a tangent, am I the only one that doesn't like the usage of boy, girl, or child to describe adolescents? It seems demeaning, because adolescents are not biologically children, they've just been defined to be children by the state. I suppose I'm never going to overturn that usage, but I'd like to know if there is some reason why I shouldn't be bothered by the common usage of the words for children.
7Kutta14y
Yes, English is a second language for me and I mostly learned it via reading things on the Internet. Excuse me for the boy/guy confusion; I did not have any particular intent behind the wording. It was an unconscious application of my native language's tendency to refer to <18-year-old males with the "boy" equivalent word. As I'm mostly a lurker I have much less writing than reading experience; currently I usually make dozens of spelling/formulation corrections on longer posts, but some weirdly used words or mistakes are guaranteed to remain in the text.
2Kevin14y
The boy usage is correct in English as well; I just don't like that usage, but I'm out of the mainstream.
1AdeleneDawner14y
You're not. I find it demeaning and more than a little confusing.
2pdf23ds14y
"Child" is probably never OK for people older than 12-13, but "girl", "guy", and occasionally "boy" are usually used by teens, and often by 20-somethings to describe themselves or each other. ("Boy" usually by females, used with a sexual connotation.)
3AdeleneDawner14y
I'm aware of it, and am actually still getting into the habit of referring to women about my age or younger as women rather than girls. I still trip over it when other people use the words that way, though - I automatically think of 8-year-olds if it's not very clear who's being referred to.
3pdf23ds14y
Right. "Girl" really has at least two distinct senses, one for children and one for peers/juniors of many ages. "Guy" isn't used in the first sense, and the second sense of "boy" is more restricted. The first sense of "boy"/"girl" is the most salient one, and thus the default absent further context. I don't think the first sense needs to poison the second one. But its use in the parent comment this discussion wasn't all that innocent. (I've been attacked before, by a rather extreme feminist, for using it innocently.)
5pwno14y
But wouldn't the knowledge that the AI could potentially do your work be psychologically harmful?
9denisbider14y
When you play an engaging computer game, does it detract from your experience knowing that all the tasks you are performing are only there for your pleasure, and that the developers could have easily just made you click an "I Win" button without requiring you to do anything else?
2Wei Dai14y
I suspect that status effects might be important here. When we play a video game, we choose to do it voluntarily, and so the developers are providing us a service. But if the universe is controlled by an AI, and we have no choice but to play games that it provides us, then it would feel more like being a pet. The AI could also try to take that into account, I suppose, but I'm not sure what it could do to alleviate the problem without lying to us.
7Vladimir_Nesov14y
If you think of FAI as Physical Laws 2.0, this particular worry goes away (for me, at least). Everything you do is real within FAI, and free will works the same way it does in any other deterministic physics: only you determine your decisions, within the system.
1Wei Dai14y
It's not quite the same, because when the FAI decided what Physical Laws 2.0 ought to be, it must have made a prediction of what my decisions would be under the laws that it considered. So when I make my decisions, I'm really making decisions for two agents: the real me, and the one in FAI's prediction process. For example, if Physical Laws 2.0 appears to allow me to murder someone, it must be that the FAI predicted that I wouldn't murder anyone, and if I did decide to murder someone, the likely logical consequence of that decision is that the FAI would have picked a different set of Physical Laws 2.0. It seems to me that free will works rather differently... sort of like you're in a Newcomb's Problem that never ends.
0Vladimir_Nesov14y
It just means that you were mistaken and PL2.0 doesn't actually allow you to murder. It's physically (rather, magically, since laws are no longer simple) impossible. This event has been prohibited.
0JGWeissman14y
I would expect that an FAI would not force us to play games, but would make games available for us to choose to play.
2Wei Dai14y
It's not that an FAI would force us to play games, but rather there's nothing else to do. All the real problems would have been solved already.
6JamesAndrix14y
That's not necessarily true. We might still have to build a sturdy bridge to cross a river, it's just that nobody dies if we mess up. Likewise, if one's mind is too advanced for bridge building to not be boring, then there will be other more complex organizations we would want, which the FAI is under no obligation to hand us. I think we can have a huge set of real problems to solve, even after FAI solves all the needed ones.
3Wei Dai14y
How is bridge-building not a game when the FAI could just flick a switch and transport you across the river in any number of ways that are much more efficient? When you're building a bridge in that situation, you're not solving the problem of crossing a river, you're just using up resources in order to not be bored.
3CronoDAS14y
Because it refuses to do so? If you're 16 and your parents refuse to buy something for you (that they could afford without too much trouble) and instead make you go out and earn the money to buy it yourself, was solving the problem of how to get the money "just a game"?
1denisbider14y
Yes, if the parents will always be there to take care of you.
0JamesAndrix14y
We can wirehead children now. We want them to be more than that.
-2denisbider14y
The only reason we want that is that civilization would collapse without anyone to bear it. If FAI bears it, there is no pressure on anyone.
3JamesAndrix14y
What does it mean for FAI to bear civilization? It can give us bridges, but if I'm going to spend time with you, you'd better be socialized. A life of obedient catgirls would harm your ability to deal with real humans (or posthumans). And ignoring that, I don't think that we want to be more than we are just in order to get stuff done. Both of these are things we do to achieve complex values. Some of the things we want are things which can't be handed to us, and some of those are things which we can't achieve if everything which can be handed to us is handed to us.
0denisbider14y
The companions FAI creates for you don't have to be obedient, nor catgirls. Instead, they can be companions that far exceed the value you can get from socializing with fellow humans or posthumans. Once there is FAI, the best companion for anyone is FAI. The only reason you want "complex values" is because your environment has inculcated in you that you want them. The reason your environment has inculcated this in you is because such inculcation is necessary in order to have people who will uphold civilization. Once there is FAI, such inculcation is no longer necessary, and is in fact counter-productive.
2JamesAndrix14y
How rude can I be to my FAI companion before it starts crying in the corner? How rude will I become if it doesn't? Why didn't it just build the bridge the first time I asked? Then I wouldn't have to yell. Does she mind that I call her 'it'? Proper companions don't always give you what you want. Also, even though FAI could create perfectly balanced agents, and even if creating said agents wasn't in itself morally reprehensible, I think there is value in interacting with other 'real' humans.

Edit: Ok, this is a big deal: The fact that a value I have is something evolution gave me is not a reason to abandon that value. Pleasure is also something I want because evolution made me want it. Right now, I want those complex values, and I'm not going to press a button to self-modify to stop wanting them.
-2denisbider14y
I don't see why creating perfectly balanced agents would be morally reprehensible - nor why, given such agents, there would be value in interacting with other humans - necessarily less suited to each other's progress than the agents would be. It may well be considered morally reprehensible to communicate with other humans, because it may undermine and slow down the personal development that each human would otherwise benefit from in the company of custom-tailored companions, designed perfectly for one's individual progress. It may well be morally better for the FAI to make you think that you're communicating with a 'real' human, when in fact you are communicating with an agent specifically designed to provide you with that learning experience.
0JamesAndrix14y
If these agents are people in a morally significant way, then their needs must be taken into account. FAI can't just create slave beings. It's very difficult for me at this point to say whether it's possible for the FAI to create a being that perfectly meets some human needs, and in turn has all its own needs met just as perfectly. Every new person it creates just adds more complexity to the moral balance. It might be doable, but it might not, and it's a lot more work-thought-energy to do it that way. If they are not people, if they are some kind of puppet zombie robot, then we will have billions of humans falling in love with puppet zombie robots. Because that is their only option. And having puppet zombie robot children. Maybe that's what FAI will conclude is best, but I doubt it.
0denisbider14y
I actually think that all our current ways of thinking, feeling and going about life would be so antiquated, post-FAI, as a horse buggy on an interstate highway. Once an AI can reforge us into more exalted creatures than we currently are, I'm not sure why anyone would want to continue living (falling in love? having children?) the old fashioned way. It would be as antiquated as the lifestyle of the Amish.
3thomblake14y
Some people want to be Amish. It seems like your statement could just as well be "I'm not sure why anyone would want to be Amish" and I'm not sure that communicates anything useful.
0denisbider14y
On the one hand, as long as there are sufficient resources for some people to engage in Amish-like living while not depriving everyone else, that could be okay. On the other hand, if the AI determines that a different way of being is much preferable to insistence on human traditions, then it has its infinite intelligence at its disposal to convince people to go along for the ride. If the AI is barred from both modifying people and using its intelligence to convince them, then still, at one point, resources become scarce, and for the benefit of everyone, the resource consumption of the refuseniks has to be optimized. I can envision a (to them) seamless transition where they continue living an Amish-like lifestyle in a simulation.
1JamesAndrix14y
What would we want to be exalted for? So we can more completely appreciate our boredom? It doesn't make sense to me that we'd get some arbitrary jump in mindpower, and then start an optimized advancement. (we might get some immediate patches, but there will be reasons for them.) Why not pump us all the way to multi-galaxy-brains? Then the growth issues are moot. Either way, if we're abandoning our complex evolved values, then we don't need to be very complex beings at all. If we don't, then I don't expect that even our posthuman values will be satisfied by puppet zombie companions.
-1denisbider14y
Is there some reason to believe our current degree of complexity is optimal? Why would we want to be reforged as something that suffers boredom, when we can be reforged as something that never experiences a negative feeling at all? Or experiences them just for variety, if that is what one would prefer? If complexity is such a plus, then why stop at what we are now? Why not make ourselves more complex? Right now we chase after air, water, food, shelter, love, social status, why not make things more fun by making us all desire paperclips, too? That would be more complex. Everything we already do now, but now with paperclips! Sounds fun? :)
6thomblake14y
Possibly relevant: I already desire paperclips.
4JamesAndrix14y
I don't, at all. Also, you're conflating our complexity with the complexity of our values. I think that our growth will best start from a point relatively close to where we are now in terms of intelligence. We should grow into jupiter brains, but that should be by learning. I'm not clear on what it is you want to be reforged as, or why. By what measure is post-FAI-Dennis better than now-Dennis? By what measure is it still 'Dennis', and why were those features retained?

The complexity of human value is not good for its being complex. Rather, these are the things we value; there happen to be a lot of them, and they are complexly interrelated. Chopping away at huge chunks of them and focusing on pleasure is probably a bad thing, which we would not want. It may be the case that the FAI will extrapolate much more complex values, or much simpler values, but our current values must be the starting point, and our current values are complex.
1Vladimir_Nesov14y
This is an extreme statement about everyone's preference, not even your own preference or your own belief about your own preference. One shouldn't jump that far.
1Vladimir_Nesov14y
It can't actually do that, because it's not what its preference tells it to do. The same way you can't jump out of the window given you are not suicidal.
1Wei Dai14y
By that reasoning, World of Warcraft is not a game because the admins can't make me level 80 on day 1, because that's not what their preferences tell them to do... Or am I missing your point?
0Vladimir_Nesov14y
I'm attacking a specific argument that "FAI could just flick a switch". Whether it moves your conclusion about the described situation being a game depends on how genuine your argument for it being a game was and on how much you accept my counter-argument.
0Paul Crowley14y
Could one of you précis the disagreement in a little more detail and with background? When you and Wei Dai disagree, I'd really like to understand the discussion better, but the discussion it sprang out of doesn't seem all that enlightening - thanks!
0Wei Dai14y
I originally said that post-FAI, we'd have no real problems to solve, so everything we do would be like playing games, and we'd take a status hit because of that. Nesov allegedly found a way to recast the situation so that we can avoid taking the status hit, but I remain unconvinced. I admit this is one of our more trivial discussions. :)
1Vladimir_Nesov14y
I originally didn't bother to do so explicitly, only wrote this reply that seems to have not been understood, but in light of Eliezer's post about flow of the argument, I'll recast the structure I see in the last few comments:

Wei: Bridge-building is a game, because FAI could just flick a switch. (Y leads to X having property S; Y = "could flick a switch", X = "FAI's world", S = "is a game")

Vlad: No it couldn't; its preference (for us having to make an effort) makes it impossible for that to happen. (Y doesn't hold for X)

Wei: But there are games where players don't get free charity as well. (Z have property S without needing Y)

Vlad: I'm merely saying that Y doesn't hold, so if Y held any weight in the argument that "Y leads to X having property S", then having established not-Y, I've weakened the support for X having property S, and at least refuted the particular argument for X having property S, even if I haven't convincingly argued that X doesn't have property S overall.
0Wei Dai14y
When I wrote "Bridge-building is a game, because FAI could just flick a switch" the intended meaning of "could" was "could if it wanted to". When I cited WoW later, I was trying to point out that your interpretation of "could" as "could given its actual preferences" can't be what I intended because it would rule out WoW as a game. I guess I failed to get my point across, and then thought the argument was too inconsequential to continue. But now that you're using it as an example, I want to clear up what happened.
1Kevin14y
Is this a disagreement that is more about the meaning of words than anything else? I think you and Nesov are disagreeing about the meanings of game and real problems or maybe problems. Both of you defining those terms would help.
0Paul Crowley14y
In the short term, I think you are correct. However, in the long term, I'm hoping that the FAI will find a non-disastrous way for us to become superintelligent ourselves, and therefore again be able to participate in solving real problems.
1JamesAndrix14y
When I build a bridge in a game, I get an in-game reward. I don't get easier transport to anywhere. If I neglect to build the bridge or play the game at all, I still get to use all the bridges otherwise available to me. 'Real' bridges are at the top level of reality available to me. Even the simulation hypothesis does not make these bridges a game.

Why do I want to cross the bridge? To not be bored, to find my love, or to meet some other human value. The AI could do that for me too, and cut out the need for transport. If we follow that logic even a short way, it becomes obvious that we don't want the AI doing certain things for us. If there is danger of us being harmed because the FAI could help but won't, it need merely help a little more, getting closer to those things we want to do ourselves. If we're in danger of being harmed by our own laziness, it need only back off. (It might do this at the level of the entire species, for all time, so individuals might be bored or angry or not cross rivers as soon as they would like, but it might optimize for everybody moment to moment.)

If there are things we couldn't stand to have a machine do, and couldn't stand for it to not help us with, I think those would be incoherent volitions.
0denisbider14y
One way I imagine that would work for me is if the AI explained with sufficient persuasion that there simply isn't anything more meaningful for me to do than to play games. If there actually is something more meaningful for people to do, then the AI should probably let people do that.
3Vladimir_Nesov14y
An AI could persuade you to become a kangaroo -- this is a broken criterion for decision-making.
4Jack14y
I am skeptical that rationality and exponentially greater-than-human intelligence actually confers this power.
7Nick_Tarleton14y
It doesn't matter if it does or not; the fact that you can conceive of situations where persuadability would fail as a criterion immediately means it fails.
5pdf23ds14y
Well, that was the big controversy over the AI Box experiments, so no need to rehash all that here.
1Jack14y
This is a category error. Meaningfulness is in your mind and in intersubjective constructions, not in the objective world. There is no fact of the matter for the AI to explain to you.
0[anonymous]14y
o shit
5Cyan14y
I had essentially this conversation with my sister-in-law's boyfriend (Canadian art student in his early twenties) just about four weeks ago. Didn't get to the boredom question, but did talk a bit about cryonics. Took about 25 minutes.
3LoganStrohl11y
There seem to be two ways for the AI thing to click. Some people click and go "Oh yeah, that makes sense," and then if you ask them about it they'll tell you they believe it's a problem, but they won't change their behavior very much otherwise. The other people click and go, "0_0 Wtf am I doing with my life???" and then they move to the Bay Area or New York and join the other people devoting their every resource to preventing paperclip maximizers and the like. Which type were your people, and what do you think causes the difference?
[anonymous]14y380

There's this magical click that some people get and some people don't, and I don't understand what's in the click. There's the consequentialist/utilitarian click, and the intelligence explosion click, and the life-is-good/death-is-bad click, and the cryonics click.

I think it's a mistake to put all the opinions you agree with in a special category. Why do some people come quickly to beliefs you agree with? There is no reason, except that sometimes people come quickly to beliefs, and some beliefs happen to match yours.

People who share one belief with you are more likely to share others, so you're anecdotally finding people who agree with you about non-cryonics things at a cryonics conference. Young people might be more likely to change their mind quickly because they're more likely to hear something for the first time.

More strongly, is there any reason to believe that people are more likely to "click" to rational beliefs than irrational ones?

As an example, papal infallibility once clicked for me (during childhood religious education), which I think most people here would agree is wrong, even conditioned on the existence of God.

4PeterS14y
True. In this case, once you get the consequentialist/utilitarian "click", you're more likely to come down with the rest of the clicks - the examples he listed are highly entangled.
0Capla9y
This is a great insight.

Thank you for writing this post. It's one of the topics that has kept me from participating in the discussion here - I click on things very often, as a trained and sustained act of rationality, and often find it difficult to verbalize why I feel I am right and others wrong. But when I feel that I have clicked, then I have very high confidence in my rightness, as determined by observation and many years of evidence that my clicks are, indeed, right.

I use the phrase, "My subconscious is way smarter than I am," to describe this event. My best guess is that my subconscious has built-in pathways to notice logical flaws, lack of evidence, and has already chewed through problems over many years of thought ("creating a path"?), and I have trained myself to follow these "feelings" and form them into conscious words/thoughts/actions. It seems to be related to memory and number of facts in some ways, as the more reading I have done on a topic, the better I'm able to click on related topics. I do not use the word "feeling" lightly - it really does feel like something, and it gives me a sort of built-in filter.

I click on people (small movements, small sta... (read more)

7Rain14y
Other phrases to describe "click": intuition, grok, cached understanding, pattern recognition. I especially like the last one, pattern recognition - click feels a lot like seeing a giant web of things, and how it all fits together, or how one piece completes that image - using a sort of mental glyph or a simple phrase to represent complex, sophisticated ideas as a singular, understandable entity.
1[anonymous]13y
Good summary.
1AndyWood14y
That image also implies that one won't 'click' until one has acquired all the necessary pieces in the web, which is how I would answer the question "What does it take to get someone to get it?"
1[anonymous]13y
As applies specifically to cryonics, I remember my sole rejection was a lack of affordability, and it turned out I was wrong about that. I think the first time I heard about the possibility I was 9 years old, and it clicked for me then -- but until just a few weeks ago, I was laboring under the misconception that you basically have to be rich to get it.

That click can be painful when it generates conflicts between your utility function and the circumstances of your life. I wonder if that might not lead some people to reject cryonics who'd otherwise be amenable, on the basis of false information -- to want it is painful or feels like a desire unfulfillable, so they seek to eliminate the dissonance by finding reasons it's not truly possible/desirable. Certainly this doesn't seem to explain the typical rejection I've encountered very well, but I can think of a number of geeky/intellectual types (until recently, m'self included) whose main issue seemed to be that it didn't seem within reach.

(And on that note, nervous about hearing back on life insurance quotes -- I'm poor, so if for some reason my health issues render me uninsurable, it's gonna be pretty painful. Facing that uncertainty very nearly stopped me from applying anytime soon even after I'd found out it might in principle be possible!)
1[anonymous]13y
I seem to experience a lot of this too. The best interpretation I have is that my mind forms association-chains well, and my hobbies and interests have led me to develop some really large, well-indexed networks of information about certain parts of the real world -- so attempting to extend those, or make use of them in reasoning analogies, or just spot meaningful related patterns is exceptionally effortless. It's got some fairly obvious downsides, at least in terms of being able to articulate the "click" to people after the fact. I'm actually kind of terrible at arguing a point because of this -- on a subconscious level, it's not my strongest method of reasoning, and being on the autistic spectrum has given me a weird perceptual relationship to language.
1Rain14y
As part of the filtering process in information gathering, I also apply a simple test before going very far: "how related is this to what I want to know?" If it's not related, generally ignore or skim. If it is related, mine it for important info (the nuggets / truly useful sentences I mention above). I also leave myself constantly open to new information on a topic I'm "actively" researching. No thought or action on it in days, then somebody makes a comment, and I think, "that fits in with everything else, and is added to my mental store on the focus topic."
0Rain14y
Note also that the most common adjective used to describe me is "weird" (except by those who are sensitive to status effects, in which case it's "smart"). I have no idea if clicks as I feel them are typical among more normal people.

I had a funny click with my girlfriend earlier this evening. I suggested that she should sign up for cryonics at some point soon, and I was surprised that she was against the idea. In response to her objections, I explained it was vitrification and not freezing, etc. etc. but she wasn't giving me any rational answers, until she said that she really wanted to see the future, but she also wanted to watch the future unfold.

She thought that by cryonics I meant right now, Futurama style. After a much-needed clarification she immediately agreed that cryonics was a good idea.

So based on her understanding of what you said, she was actually right to object.

I guess the lesson here is that we must learn not to skip steps when explaining unconventional ideas, because there is a risk that people will be opposed to things that aren't even part of the proposal, and a further risk that we won't notice that's what is going on. (In your case, you noticed it and corrected the situation, but what if there had been a huge fight and the subject had never been brought up again? That would have been a sad reason not to sign up for cryonics...)

8Vladimir_Nesov14y
Now this is disturbing: she assumes by default that you are suggesting freezing her alive, "to see the future". Not the kind of "click" we'd be looking for: "everything is possible" is actually worse than absurdity-heuristic-enabled epistemic hygiene.

I think her default understanding was more like "Kevin is really morally depraved and probably not serious anyways".

It was funnier in the real world; I sucked away most of the humor with my written retelling.

Mmm... I am a click-hunter. I keep pestering a topic and returning over and over until I feel it click. I can understand something well enough to start accurately predicting results but still refuse to be satisfied until I feel it click. Once it clicks I move on.

You and I may be describing different types of clicks, however. Here is a short list of things I have observed about the clicks in my life.

  • The step from not having a subject click to having it click is minor, but its effect is enormous. It is the single greatest leap in knowledge I will likely experience in a subject matter. I may learn more in one click than with a whole semester of absorbing knowledge from a book.

  • Clicks don't translate well. It is hard to describe the actual path up to and through a click.

  • What causes a subject to click for me will not cause it to click for another. Clicks seem to be very personal experiences, which is probably why they are so hard to translate.

  • Clicks tend to be most noticeable with large amounts of critical study. I assume that day-in-day-out clicks are not terribly noticeable but I suspect that they exist. A simple example I can think of is suddenly discovering a quicker route through town.

  • C... (read more)
8Sly14y
I found this to be very true. Many times in my classes I have barely grasped what the professor was saying throughout the year, only to have the subject click at a later time when a fellow student explained it to me in a way that grokked. Whenever this happens, I feel like I have learned more in that brief period than in the entire class before then.
8sketerpot14y
This is actually how I approach difficult textbooks. I read through as much as I can before I just totally collapse in confusion, look up related information on the internet, take a few days off, and then go back through from the beginning. The textbook usually makes vastly more sense then, as all the disjointed pieces come together in a way that's obvious in retrospect. This is how I was able to read through and understand an algorithms textbook in junior high, even though it terrifies and befuddles people in their third year of college. It's just not that hard if you attack it in multiple passes, because multipass studying is much more likely to get you to the click of understanding.
1Paul Crowley14y
Glad to know I'm not the only one who does this!
1Vladimir_Nesov14y
I found that this approach sorta-works, but results in much more shallow reading of the material than if you studied the prerequisites first (at least in math, an algorithms textbook might be an exception).
1sketerpot14y
I already had the prerequisites for learning about algorithms. It's just that the topic itself was hard to fully grasp. I mean, on the first reading I'm sure I could have written a hash table or a mergesort, but it wasn't until I read it again that I got the depth of understanding that lets me optimize hash tables for special applications, or quickly understand timsort. The multipass approach was how I got past a shallow reading of the material.
4Jonathan_Graehl14y
More so than with other descriptors of internal mental state, I wonder whether the people saying "click" all mean the same thing. I feel quite satisfied when I change my mind as a result of a new insight, but also a little hesitant to consider the case closed until time passes - I feel apprehensive that another insight+reversal may follow in the consequent mental shifting. Is that a "click"?
3MrHen14y
Maybe, but it doesn't really match my feelings when I get a click. This doesn't mean you or I have better or worse clicks. It could just mean we react to them differently. I think if there is a difference between your click and mine it is that my clicks tend to be reactions to things generally considered to be factual or true but that I have trouble understanding. Clicks tend not to be brand new discoveries but rather a full, complete understanding of someone else's discovery. The easiest example is from mathematics. A complicated piece of linear algebra is True but I don't fully Get It until it clicks.
0John_Maxwell14y
OK, I'm not sure I experience clicks the way you do. Thinking for a bit, I realize I integrated a fairly decent-sized insight in a fairly short amount of time reading this blog post; does the same happen to you?
4MrHen14y
Erm, not really, no. I learned enough about a subject to regurgitate what someone else has said and can probably start making inferences from the subject material I know, but no click. But this isn't a field I know anything about. I have no way of knowing if anything he wrote is likely or unlikely to be true. It sounds good, but that isn't enough for a click. The best example of a click I can think of is linear algebra. As soon as I finally got my mind wrapped around 3D matrices and could "visualize" them, the whole subject clicked, and 4D matrices, 5D, 2x1, rotations, and pretty much everything else were a cakewalk. Comparing that experience to reading this blog, I know almost nothing more now than when I started. I know no more facts; I know only a few theories; I know a few places to look if the subject interests me later. That being said, the author of the post likely had a click one day when thinking about sexual reproduction. The end result of that click is the post that he wrote. But don't forget that "click" is a fuzzy word. Even if we came up with a clear definition, another word would slip into its place, because this is not a universal experience. It seems to be common enough to get a word but different enough that we are willing to debate what it means for hours. :)
0CronoDAS14y
I've actually heard that theory before.

I propose the term "clack" to denote the opposite of "click" -- that is, resisting an obviously correct conclusion.

That's more polite than most of the terms I tend to use.

Interesting. Eliezer took some X years to recognize that even "normal looking" persons can be quick on the uptake? ;)

My best guess is that clickiness has something to do with failure to compartmentalize - missing, or failing to use, the mental gear that lets human beings believe two contradictory things at the same time. Clicky people would tend to be people who take all of their beliefs at face value.

I guess it has a somewhat deeper explanation than that. I think clickiness happens when two people have managed to build very similar mental models and they are ready to manipulate and modify those models incrementally. Once the models are roughly in sync, it takes very little time to communicate, and just slight hints can create the right change in the conversation partner's model, if he is ready to update.

I think a lot of us have been trained hard to stop model building at certain points. There is definitely a personal difference between people in how much they care about the taboos society imposes on them, which can result in mental red lights: "Don't continue building that model! It's dangerous!" This is what I think Eliezer's notion of "compartmentalization" refe... (read more)

What the hell is in that click?

I'm not seeing that there's anything so mysterious here. From your description, to click is to realize an implication of your beliefs so quickly that you aren't conscious of the process of inference as it happens. You add that this inference should be one that most people fail to draw, even if the reasoning is presented to them explicitly.

I expect that, for this to happen, the relevant beliefs must happen to be

  1. cached in a rapidly-accessible part of your mind,

  2. stored in a form such that the conclusion is a very short inferential step beyond them, and

  3. free of any obstructing beliefs.

By an obstructing belief, I don't mean a belief contradicting the other beliefs. I mean a belief that lowers your estimate of the conditional probability of the conclusion that you would otherwise have reached.
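In symbols (a minimal sketch; the notation is mine): write \(E\) for the relevant cached beliefs, \(C\) for the conclusion, and \(B\) for a candidate obstructing belief. Then \(B\) obstructs \(C\) just when

\[
P(C \mid E, B) < P(C \mid E),
\]

even though \(B\) need not contradict anything in \(E\); it merely drags down the conditional probability of the conclusion.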

When you are trying to induce other people to click, you can do something about (1) and (2) above. You can format the relevant beliefs in the most transparent way possible, and you can use emphasis and repetition to get the beliefs cached.

But if your interlocutors still fail to click, it's probably because (3) didn't happen. That is, it's probably just ... (read more)

4MrHen14y
I like this comment but do not know if I agree with it or not. The upvote was for making me stop and think long and hard about the subject. The wheels are still spinning and no conclusion is imminent, but thank you for the thoughts. :)

This post, in addition to being a joy to read, contains one particular awesome insight:

My best guess is that clickiness has something to do with failure to compartmentalize - missing, or failing to use, the mental gear that lets human beings believe two contradictory things at the same time. Clicky people would tend to be people who take all of their beliefs at face value.

Here's some confirmation: I must have at least some clickiness, since I "got" the intelligence explosion/FAI/transhumanism stuff pretty much immediately, despite not having been raised on science fiction.

And, it turns out: I hate, hate, HATE compartmentalization. Just hate it -- in pretty much all its forms. For example, I have always despised the way schools divide up learning into different "classes", which you're not supposed to relate to each other. (It's particularly bad at the middle/high school level, where, if you dare to ask why you shouldn't be able to study both music and drama, or both French and Spanish, they look at you with a puzzled expression, as though such thoughts had occurred to no human being before.) I hate C.P. Snow's goddamned "Two Cultures". I hate the ... (read more)

At the risk of revealing my stupidity...

In my experience, people who don't compartmentalize tend to be cranks.

Because the world appears to contradict itself, most people act as if it does. Evolution has created many, many algorithms and hacks to help us navigate the physical and social worlds, to survive, and to reproduce. Even if we know the world doesn't really contradict itself, most of us don't have good enough meta-judgement about how to resolve the apparent inconsistencies (and don't care).

Most people who try to make all their beliefs fit with all their other beliefs, end up forcing some of the puzzle pieces into wrong-shaped holes. Their favorite part of their mental map of the world is locally consistent, but the farther-out parts are now WAY off, thus the crank-ism.

And that's just the physical world. When we get to human values, some of them REALLY ARE in conflict with others, so not only is it impossible to try to force them all to agree, but we shouldn't try (too hard). Value systems are not axiomatic. Violence to important parts of our value system can have repercussions even worse than violence to parts of our world view.

FWIW, I'm not interested in cryonics. I... (read more)

It takes the mostly correct idea "life is good, death is bad" to such an extreme that it does violence to other valuable parts of our humanity (sorry, but I can't be more specific).

It seems to me that you can't be more specific because there is not anything there to be more specific about.

4bgrah44914y
What the hell, I'll play devil's advocate. Right now, we're all going to die eventually, so we can make tradeoffs between life and other values that we still consider to be essential. But when you take away that hard stop, your own life's value suddenly skyrockets - given that you can almost certainly, eventually, erase any negative feelings you have about actions done today, it becomes hard to justify not doing horrible things to save your own life if you were forced to. Imagine Omega came to you and said, "Cryonics will work; you will be resurrected and have the choice between a fleshbody and simulation, and I can guarantee you live for 10,000 years after that. However, for reasons I won't divulge, this is contingent upon you killing the next 3 people you see." Well, shit. Let the death calculus begin.
8orthonormal14y
You make a valid theoretical point, but as a matter of contingent fact, the only consequence I see is that people signed up will strongly avoid risks of having their brains splattered. Less motorcycle riding, less joining the army, etc. Making people more risk-averse might indeed give them pause at throwing themselves in front of cars to save a kid, but:

  • Snap judgments are made on instinct at a level that doesn't respond to certain factors; you wouldn't be any less likely to react that way if you previously had the conscious knowledge that the kid had leukemia and wouldn't be cryopreserved.

  • In this day and age, risking your life for someone or something else with conscious premeditation does indeed happen even to transhumanists, but extremely rarely. The fringe effect of risk aversion among people signed up for cryonics isn't worth consigning all of their lives to oblivion.
1Bindbreaker14y
I don't worry about this for the same reason that Eliezer doesn't worry about waking up with a blue tentacle for his arm.
3bgrah44914y
Thanks for that generous spirit. But fine: You see a woman being dragged into an alley by a man with a gun.

Scenario A) You have terminal brain cancer and you have 3 months to live. You read that morning that scientists have learned several new complications arising from freezing a brain.

Scenario B) Your cryonics arrangements papers went through last night. You read that morning that scientists have successfully simulated a dog's brain in hardware after the dog has been cryogenically frozen for a year.

Now what?
1CronoDAS14y
Obviously, you dial 911 on your cell phone. (Or whatever the appropriate emergency number is in your area.)
3bgrah44914y
The generous spirit overfloweth. You don't have a cell phone. Or it's broken.
-2CronoDAS14y
Well, it's not like I have much of a chance of saving the woman. He has a gun, and I don't. Whether the woman gets shot is entirely up to the man with the gun. If I try to interfere (and I haven't contacted the police yet), I think that I'm as likely to make things worse as I am to help. For example, the man with the gun might panic if it seems like he's losing control of the situation. I'm also physically weaker than most men, so the chances of my managing to overpower him with my bare hands are pretty small. So, either way, I probably won't try to be Batman.
7bgrah44914y
This strikes me as purposefully obtuse. Does cryonics increase the present value of future expected life? I think it does. Does that increase affect decisions where we risk our life? I think it does; do you agree?
5CronoDAS14y
Yes, I basically agree; I was mostly nitpicking the specific scenario instead of addressing the issue. If I modify the scenario a bit and say that the assailant has a knife instead of a gun (and my phone's batteries are dead), then things are different. If he has a knife, intervening is still dangerous, but it's much easier to save the woman - all I need to do is put some distance between the two so that the woman can run away. I might very well be seriously injured or killed in the process, but I can at least count on saving the woman from whatever the assailant had in store for her. (This is probably the least convenient possible world that you wanted.) So, yes, I'd be much more likely to play hero against a knife-wielding assailant if I had brain cancer than if I were healthy and had heard about a major cryonics breakthrough.
0Bindbreaker14y
This seems unusual. You are much more likely to be injured against a knife than you are against a gun. I am moderately confident that I can take a handgun away from someone before they shoot me, given sufficiently close conditions; I am much less confident in my ability to deal with a knife.
5AngryParsley14y
From http://www.ncjrs.gov/txtfiles/fireviol.txt: Injury rates were higher in robberies committed with knives, but people are probably less likely to fight back or otherwise provoke a robber with a gun.
5CronoDAS14y
That makes the knife scenario an even better dilemma than the gun scenario! The reason I'm more likely to intervene against a knife is that it's easier to protect the woman from a knife than from a gun. Against a knife, all she needs is some time to start running, but if a gun is involved, I need to actually subdue the assailant, which I can't. After all, he is bigger and stronger than me, and even has a weapon that can do serious damage. If all he has is a knife, though, all I need to do is buy enough time; even if I end up dead, the woman will probably get away.
2Cyan14y
He was just responding to the specific scenario you posited. The fact that you had the broader issue of the effect of cryonics on the value of life at the forefront of your mind does not mean that his failure to comment on it is evidence of purposeful obtuseness.
4bgrah44914y
Commenting in this thread, on this post, and it's unrecognizable to someone that the effect of cryonics on the value of life is what's being discussed? I'm not buying it.
2Cyan14y
I don't find it contrary to expectation that someone might get caught up in the discussion of the concrete scenario presented to them and ignore the more abstract issue prompting the scenario. Furthermore, the Recent Comments page makes it easy for people to jump into the middle of a conversation without necessarily reading upthread (e.g., Vladimir Nesov today).
0[anonymous]14y
There was an apology edited into that.
2nazgulnarsil14y
If you live in the sort of neighborhood where women get dragged into alleys, not having a gun seems pretty negligent.

I think it's not possible, but even if it were, I think I would not bother. Introspecting now, I'm not sure I can explain why. But natural death seems like a good point to say "enough is enough." In other words, letting what's been given be enough.

-Longer life has never been given; it has always been taken. There is no giver.

-"Enough is enough" is sour grapes - "I probably don't have access to living forever, so it's easier to change my values to be happy with that than to want yet not attain it." But if it were a guarantee, and everyone else was doing it (as they would if it were a guarantee), then this position would be the equivalent to advocating suicide at some ridiculously young age in the current era.

It takes the mostly correct idea "life is good, death is bad" to such an extreme that it does violence to other valuable parts of our humanity (sorry, but I can't be more specific).

I assert that the more extremely the idea "life is good, death is bad" is held, the more benefit is rendered to other valuable parts of our humanity. I can't be more specific.

2kans14y
I'm not quite convinced of the merits of investing in cryonics at this point, though "enough is enough" does not strike me as a particularly salient argument either.

In terms of weighing the utility to me based on some nebulous personal function: Cryonics has an opportunity cost in terms of direct expenses, and additionally in terms of my social interactions with other people. Both of these seem to be nominal, though the perhaps $300 or so a year could add quite a bit of utility to my current life, as I live on about $7K per year. And I very well may die today, not having spent any of that potential money.

On the other side, being revived in the distant future could be quite high in terms of personal utility. Though I have no reason at all to believe the situation will be agreeable; in other words, permanent death very well could be for the best. I would imagine reviving a person from vitrification would be a costly venture even barring future miracle technology. Revival is not currently possible, and there is no reason to think the current processes are being done in any sort of optimal way. At the very least, the cost of creating the tech to revive people will be expensive. Future tech or not, I see it as likely that revival will come at some cost, with perhaps no choice given to me in the matter.

I see this as a likely possibility (at least more likely than a benevolent AI utopia) because science has never fundamentally made people better (more rational?) - so far at least; it certainly ticks forward and may improve the lives of some people, but they are all still fundamentally motivated by the same vestigial desires and have the same deficiencies as before. Given our nature, I see the most likely outcome, past the novelty of the first couple of successful attempts, being some quid pro quo. Succinctly, my projection of the most likely state of the world in which I would be revived is the same as today, though with more advanced technology. Very often the ones

(Edit: after having written this entire giant thing, I notice you saying that this was just a "why are some people not interested in cryo" comment, whereas I very much am trying to change your mind. I don't like trying to change people's minds without warning (I thought we were having that sort of discussion, but apparently we aren't), so here's warning.)

But it seems that natural death seems like a good point to say "enough is enough." In other words, letting what's been given be enough.

You're aware that your life expectancy is about 4 times that of the people who built the pyramids, even the Pharaohs, right? That assertion seems to basically be slapping all of your ancestors in the face: "I don't care that you fought and died for me to have a longer, better life; you needn't have bothered, I'm happy to die whenever." Seriously: if a natural life span is good enough for you, start playing Russian roulette once a year around 20 years old; the odds are about right for early humans.

As a sort-of aside, I honestly don't see a lot of difference between "when I die is fine" and just committing suicide right now. Whatever it is that would stop... (read more)

Careful with life-expectancy figures from earlier eras. There was a great chance of dying as a baby, and a great chance for women to die in childbirth. Excluding the first -- that is, just counting those that made it to, say, 5 years old -- life expectancy shoots up greatly, though obviously not as high as now.

6Oligopsony14y
An important reason for not dying at the moment is that it would make the people you most care about very distraught. Dying by suicide would make them even more distraught. Signing up for cryonics would not make them less distraught and would lead to social disapproval. Not committing suicide doesn't require that one place a great deal of intrinsic value in one's own continued existence.
3rlpowell13y
That's a really good point. I think if the only reason you're staying alive is to stop other people from being sad, you've got a psychological bug WRT valuing yourself for your own sake that you really need to work on, but that is (obviously) a personal value judgment. If that is the only reason, though, you're right, suicide is bad and cryo is as bad or worse. I imagine that such a person will have a really shitty life whenever people close to them leave or die; sounds really depressing. I can only hope, for their sake, that such a person dies before their significant other(s). -Robin
6Vladimir_Nesov14y
This is the Reversal test.
0Paul Crowley14y
When it comes to our values, there is no "reality", but we can hope to adjust them to be coherent and consistent under reflection. I think your paragraph "As a sort-of aside" is an example of exactly that kind of moral thinking.
0DanielVarga14y
This statement is simply not true in this form. My survival instincts prevent me from committing suicide, but they don't tell me anything about cryonics. On another thread, VijayKrishnan explained this quite clearly. One can try to construct a low-complexity formalized approximation to our survival instincts. ("This is how you would feel about it if you were smarter.") I have two issues with this. First, these will not actually be instincts (unless we rewire our brain to make them so). Second, I'm not sure that such a formalization will logically imply cryonics. Here is a sort of counterexample: on a more abstract level, the important thing about "having a clone in the future", aka survival, is that you have the means to influence the future. So in a contrived thought experiment you may objectively prefer choosing "heroic, legendary death that inspires billions" to "long, dull existence", as the former influences the future more. And this formalization/reinterpretation of survival is, of course, in line with what writers and poets like to tell us.

My survival instincts prevent me from committing suicide, but they don't tell me anything about cryonics.

Well, your instincts evolved primarily to handle direct, immediate threats to your life. You could say the same thing about smoking cigarettes (or any other health risk): "My survival instincts prevent me from committing suicide, but they don't tell me anything about whether to smoke or not."

But your instincts respond to your beliefs about the world. If you know the health risks of smoking, you can use that to trigger your survival instincts, perhaps with the emotional aid of photos or testimony from those with lung cancer. The same is true for cryonics: once you know enough, not signing up for cryonics is another thing that shortens your life, a "slow suicide".

You seem to have two objections to cryonics:

  1. Cryonics won't work.

  2. Life extension is bad.

#1 is better addressed by the giant amount of information already written on the subject.

For #2 I'd like to quote a bit of Down and Out in the Magic Kingdom:

Everyone who had serious philosophical conundra on that subject just, you know, died, a generation before. The Bitchun Society didn't need to convert its detractors, just outlive them.

Even if you don't think life extension technologies are a good thing, it's only a matter of time before almost everyone thinks they are. Whatever part of "humanity" you value more than life will be gone forever.

ETA: Actually, there is an out: if you build FAI or some sort of world government and it enforces 20th century life spans on people. I can't say natural life spans because our lives were much shorter before modern sanitation and medicine.

9Zack_M_Davis14y
Doesn't this argument imply that we should self-modify to become monomaniacal fitness-maximizers, devoting every quantum of effort towards the goal of tiling the universe with copies of ourselves? Hey, if you don't, someone else will! Natural selection marches on; it's only a matter of time.
6pdf23ds14y
I find the likelihood of someone eventually doing this successfully to be very scary. And more generally, the likelihood of natural selection continuing post-AGI, leading to more Hansonian/Malthusian futures.
7pdf23ds14y
For #2, there's also Nick Bostrom's Fable of the Dragon-Tyrant.
9Rlive14y
This is not true of all non-compartmentalizers - just the ones you have noticed and remember. Rational non-compartmentalizers simply hold on to that puzzle piece that doesn't fit until they either

  • determine where it goes;
  • determine that it is not from the right puzzle; or
  • reshape it to correctly fit the puzzle.
4Steve_Rayhawk14y
The post "Reason as memetic immune disorder" was related. I'll quote teasers so that you'll read it: And my comment there:
2thomblake14y
Going to quote this. And this.
0[anonymous]14y
This is the most sane thing I've read on Less Wrong lately. It is probably true that you value some things differently than other people here do, which causes you to be less interested in cryonics. However, for a site that says individuals get to choose their values, people here can simultaneously be very presumptuous about what those values are.
-3lessdazed13y
I was surprised by this word choice. No amount of medicine can make people immune to damage as if in a video game with cheats enabled.
0lessdazed13y
OK, this isn't the first time I miscommunicated...today. I was trying to be extra careful with my language and refrain from attacking, because the opinion expressed is a minority one around here. What I was trying to point out is that the difference between getting patched up for a longer span than the 50-100 years we get now and getting patched up for 200-1000 or so is smaller than the difference between living 200-1000 years and living forever. "Forever" is obviously ridiculous because it violates laws of physics and probability. I was trying to do something less combative than accuse the commenter of either consciously misrepresenting the concept, subconsciously being too irrational to understand it, or letting bias twist his words into falsity. My point is excellent, however poor at expressing it I am. Living forever is not what's usually under discussion. "Living indefinitely" would be more accurate. It really goes to the core of what erniebornheimer said. One reason his comment is so good is that it preempted and squarely responded to anticipated objections. My point is that I think the response to this one relies on a fallacy of equivocation to seem persuasive.

There's also the valuable trait where, between being presented with an argument and going "click", one's brain cleanly goes "duhhh", rather than producing something that sounds superficially like reasoning.

6thomblake14y
I greatly value that one. I'm in the (apparently small) group of people who, when presented with a statistics/probability problem, will say, "Clearly the solution involves math. The answer is to consult someone who knows how to solve the problem," rather than coming up with the wrong answer that "feels right" or, alternatively, knowing how to find the right answer.
3wedrifid14y
Does that group also include those for whom 'consult someone who knows' wouldn't occur until 'learn how to do it' was thoroughly ruled out?
5Corey_Newsome14y
Hm, interesting point. I'm not sure I have this trait, because instead of thinking "duhhh" when I hear a well-reasoned and compelling argument, I like to make a few sanity checks and run it past my skepticism meter before allowing the clicking mechanism to engage. I wonder if that's ever produced results; at any rate, I feel like it's my duty to keep good epistemic hygiene, though my skeptical reasoning might be superficial. For this reason it normally takes a few seconds before I allow things to click, which slows conversation a tad. Perhaps I should tentatively accept the premises of hypotheses first and then be skeptical later, when I have time and resources? Also, I wonder to what extent the desire to be skeptical is more related to the desire not to appear gullible than to a desire to find truth.
1isacki14y
That's very interesting to read - I have the same trait, and surely it must be fairly widespread and not particular to us. Essentially a trait to subject highly favoured, especially very trivial, hypotheses to burdensome checking, for the sake of intellectual integrity or "epistemic hygiene", which you intriguingly coin. Maybe this trait is called OCD.

For example, in the post above: it is referenced that the woman suggests magic exists because science does not know everything, it is replied that lack of knowledge does not imply non-existence, and the woman is said to have "clicked" by concluding that magic is inconsistent. While this final conclusion sounds very reasonable, for completeness I still felt the need to question: "inconsistent with what?"

First let's lay out the reasonable premise: the unspoken implied principle here is that magic is defined to be what is unknown. So we can test the consistency of this principle. If one person does not know something, and another person does, then regarding this something, magic would have to exist for one but not exist for the other, respectively. Therefore there is a logical inconsistency in this principle (unless we accept solipsism, but then we would have difficulty talking about real "other people", wouldn't we?).

However, I do suppose that if only one person existed, there would be no other person to create a logical inconsistency, and in a strict sense magic would be consistent. It would merely constantly change based on your epistemological state, and you would probably need Occam's Razor to dispense with it.
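To spell the inconsistency out in symbols (a minimal sketch; the formalization, not the argument, is mine), start from the one-place definition:

\[
\text{Magic}(x) \iff \neg \text{Known}(x).
\]

But knowledge is agent-relative, so the honest definition is two-place:

\[
\text{Magic}(x, p) \iff \neg \text{Knows}(p, x).
\]

Now take a phenomenon \(x\) and two people \(A\) and \(B\) with \(\text{Knows}(A, x)\) and \(\neg \text{Knows}(B, x)\); then \(\text{Magic}(x, B) \wedge \neg \text{Magic}(x, A)\), so "\(x\) is magic" cannot be a property of \(x\) alone, which is exactly the inconsistency above.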
1Corey_Newsome14y
Epistemic Hygiene was a term coined by Steve Rayhawk and Anna Salamon. No credit for me. :)
0steven046114y
No, it's way older.
0isacki14y
Although being even stricter, Occam's Razor is a heuristic and not a (dis)proof.

I am puzzled by Eliezer's confidence in the rationality of signing up for cryonics, given he thinks it would be characteristic of a "GODDAMNED SANE CIVILIZATION". I am even more puzzled by the commenters' overwhelming agreement with Eliezer. I am personally uncomfortable with cryonics for the two following reasons, and am surprised that no one seems to bring these up.

  1. I can see it being very plausible that somewhere along the line I would be subject to immense suffering, compared to which death would be a far better option, but that I would be either unable to take my own life due to physical constraints or would lack the courage to do so (it takes quite some courage and persistent suffering to be driven to suicide, IMO). I see this as analogous to a case where I am very near death and am faced with the two following options.

(a) Have my life support system turned off and die peacefully.

(b) Keep the life support system going but subsequently give up all autonomy over my life and body and place it entirely in the hands of others who are likely not even my immediate kin. I could be made to put up with immense suffering either due to technical glitches which are very l... (read more)

If you were hit by a car tomorrow, would you be lying there thinking, 'well, I've had a good life, and being dead's not so bad, so I'll call the funeral service' or would you be calling an ambulance?

Ambulances are expensive, and doctors are not guaranteed to be able to fix you, and there is a chance you might be in for some suffering, and you may be out of society for a while until you recover - but you call them anyway. You do this because you know that being alive is better than being dead.

Cryonics is just taking this one step further: booking your ambulance ahead of time.

I suspect that Eliezer too has a similar opinion on this.

Nope, ongoing disagreement with Robin. http://lesswrong.com/lw/ws/for_the_people_who_are_still_alive/

5Technologos14y
Could you supply a (rough) probability derivation for your concerns about dystopian futures? I suspect the reason people aren't bringing those possibilities up is that, through a variety of elements including in particular the standard Less Wrong understanding of FAI derived from the Sequences, LWers have a fairly high conditional probability Pr(Life after cryo will be fun | anybody can and bothers to nanotechnologically reconstruct my brain) along with at least a modest probability of that condition actually occurring.
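Spelled out (a rough sketch; the decomposition and the \(V\) terms are mine, placeholders rather than anyone's actual estimates), the claim is that

\[
\Pr(\text{fun future}) = \Pr(\text{revival}) \cdot \Pr(\text{fun} \mid \text{revival}),
\]

and signing up looks positive roughly when

\[
\Pr(\text{revival}) \left[ \Pr(\text{fun} \mid \text{revival}) \, V_{\text{fun}} + \Pr(\text{dystopia} \mid \text{revival}) \, V_{\text{dystopia}} \right] > C,
\]

where \(C\) is the cost of membership and life insurance and \(V_{\text{dystopia}}\) is negative. A dystopia objection therefore has to argue either that \(\Pr(\text{dystopia} \mid \text{revival})\) is large or that \(V_{\text{dystopia}}\) is of enormous magnitude.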
4JesterMatrix14y
Thank God. I've been lurking on this forum for years now, and this is the post that has made me feel like such an outsider here, especially with the very STRONG language Eliezer uses throughout both this post and the other one. It felt as if I was being called not just a bit irrational but stupid for thinking there was a more than negligible chance that upon waking I would be in a physical or mental state in which death was preferable, yet I would be unable to deliver it. I can see it being very plausible to be awoken in extreme and constant agony, or perhaps in some sort of permanent vegetative state, or in some sort of not-yet-imagined unbreakable, continued, and torturous servitude for 1,000+ years. I just do not see the benefits of simply being alive as outweighing those risks.

It is not cryonics which carries this risk, it is the future in general.

Consider: what guarantees that you will not wake up tomorrow morning to a horrible situation, with nothing familiar to cling to? Nothing; you might be kidnapped during the night and sequestered somewhere by terrorists. That is perhaps a far-out supposition, but no more fanciful than whatever your imagination is currently conjuring about your hypothetical revival from cryonics.

The future can be scary, I'll grant you that. But the future isn't "200 years from now". The future is the next breath you take.

3jhuffman14y
Not entirely. People who are cryonically preserved are legally deceased. There are possible futures which are only dystopic from the point of view of the frozen penniless refugees of the 21st century. I think the chances of this are small - most people would recognize that someone revived is as human as anyone else and must be afforded the same respect and civil rights.

You don't have to die to become a penniless refugee. All it takes is for the earth to move sideways, back and forth, for a few seconds.

I wasn't going to bring this up, because it's too convenient and I was afraid of sounding ghoulish. But think of the people in Haiti who were among the few with a secure future, one bright afternoon, and who became "penniless refugees" in the space of a few minutes. You don't even have to postulate anything outlandish.

You are wealthy and well-connected now, compared to the rest of the population, and more likely than not to still be wealthy and well-connected tomorrow; the risk of losing these advantages looms large because you feel like you would not be in control while frozen. The same perception takes over when you decide between flying and driving somewhere: it feels safer to drive, to many people.

Yes, there are possible futures where your life is miserable, and the likelihoods do not seem to depend significantly on the manner in which the future becomes the present - live or paused, as it were - or on the length of the pauses.

The likelihoods do strongly depend on what actions we undertake in the present to reduce what we might call... (read more)

1pdf23ds14y
Eh. At least when you're alive, you can see nasty political things coming - at least from a couple meters off, if not kilometers. Things can change a lot more while you're vitrified in a canister for 75-300 years than they can while you're asleep. I prefer Technologos' reply, plus the point that economic considerations make it likely that reviving someone would be a pretty altruistic act.

Most of what you're worried about should be UnFriendly AI or insane transcending uploads; lesser forces probably lack the technology to revive you, and the technology to revive you bleeds swiftly into AGI or uploads.

If you're worried that the average AI which preserves your conscious existence will torture that existence, then you should also worry about scenarios where an extremely fast mind strikes so fast that you don't have the warning required to commit suicide - in fact, any UFAI that cares enough to preserve and torture you, has a motive to deliberately avoid giving such warning. This can happen at any time, including tomorrow; no one knows the space of self-modifying programs well enough to predict when the aggregate of meddling dabblers will hit something that effectively self-improves. Without benefit of hindsight, it could have been Eurisko.

You might expect more warning about uploads, but, given that you're worried enough about negative outcomes to forego cryonic preservation out of fear, it seems clear that you should commit suicide immediately upon learning about the existence of whole-brain emulation or technology that seems like it might enable some party to run WB... (read more)

and with molecular nanotechnology you could go through the whole vitrified brain atom by atom and do the same sort of information-theoretical tricks that people do to recover hard drive information after "erasure" by any means less extreme than a blowtorch...

As far as I know, the idea that there are organizations capable of reading overwritten data off of a hard drive is an urban legend. See http://www.nber.org/sys-admin/overwritten-data-gutmann.html

3Eliezer Yudkowsky14y
I think I saw that paper before, either on here or on Hacker News, and it was replied to by someone who claimed to be from a data-recovery service that could and did use electron microscopes to retrieve the info, albeit very expensively. EDIT: http://news.ycombinator.com/item?id=511541

I would be more inclined to take ErrantX seriously if he said what company he works for, so I could do some investigation. You would think that if they regularly do this sort of thing, they wouldn't mind a link. The "expensive" prices he quotes actually seem really low. DriveSavers charges more than $1000 to recover data off of a failed hard drive, and they don't claim to be able to recover overwritten data. Given all of that, I tend to think he is either mistaken (he does say it isn't really his field) or lying.

6Christian_Szegedy14y
I agree. I remember that c't, an excellent German computer magazine, around 2005 ran a test with once-zeroed hard drives: they sent them to a lot of companies to recover the data, but all of them refused to give a quote, saying that the task was impossible. Most of these companies manage to recover data from technically defective hard drives after mechanical failures, and it costs several thousand dollars, but none of them were ready to help in the case of zeroed-out drives.
6Douglas_Knight14y
Plus, he says it would take them a month. How could they possibly charge only $1000 for a month of anything, even computer time?

I have what I hope is an interesting perspective here - I'm a super-not-clicker. I had to be dragged through most of the sequences, chipping away one dumb idea after another, until I Got It. I recognize this as basically my number one handicap. Introspecting about what causes it, I'll back Eliezer's compartmentalization idea.

For me, input flows into categorized and contextual storage. I can access it, but I have to look (and know to look, which I won't if it's not triggered). This is severe enough to impact my memory; I find I'm relying on almost-stigmergy, to-do-list cues activated by context, and I can literally slip out of doing one thing and into another if my contextual cues are off.

I think this is just my problem, but I wonder if it's an exaggerated form of the way other people can just divert facts into a box and sit on them.

8Eliezer Yudkowsky14y
Wow - you Got It after a lot of hard work? That must put you in the bottom 99.9% of all rationalists! I think you might be suffering from a bit of underconfidence here.
1LauralH11y
You mean most people don't read the Sequences and go "Yeah, that's exactly right!" Hmm.
1JulianMorrison14y
More like a lot of small insights. Clicks - but with all the data "in cache", as loaded up by you and neatly lined up to connect. I don't think I'm "the worstest rationalist ever", just that I have a major problem clicking spontaneously from raw data. This seems to me to be the key skill of insight - drawing together a lot of local understandings into a global explanatory pattern. If I could crack this, I think my functional IQ would go up quite a bit.

I've often described learning in terms of 'clicking'.

It's most memorable to me when thinking about hard problems that I can't solve right away. It feels like something finally puts the last piece of the puzzle in place and for the first time I can 'see' the answer.

When trying to teach people, I've noticed that some people have a very obvious 'click response'- they'll light up at a distinct moment and just get it from then on.

Other people show no sign of this, yet claim to learn. I still haven't figured out what is going on here. The possibilities I can think of are:

  1) Their learning process involves no clicking.
  2) They hide the click to make it sound like they've known it all along, because they'd be embarrassed at how late their click is.
  3) They're faking it, and don't really get it.

For me though, learning about cryonics and the intelligence explosion idea didn't seem very 'click like' since it just seemed obviously true the first time I heard about it, rather than there being a delay that makes the evaporation of confusion more satisfying. I suspect the learning mechanism is actually the same though.

Other people show no sign of this, yet claim to learn. I still haven't figured out what is going on here. The possibilities I can think of are:

  1) Their learning process involves no clicking.
  2) They hide the click to make it sound like they've known it all along, because they'd be embarrassed at how late their click is.
  3) They're faking it, and don't really get it.

How about 4) they don't really get it, and just think they do, or 5) they don't realize there's anything to "get" in the first place, because they think knowledge is a mysterious thing that you memorize and regurgitate. I think that's actually the most common case, but the others are perhaps plausible as well.

6wedrifid14y
Your 5) seems the best fit.
2JackChristopher14y
Here's the facial expression I've noticed: head tilts upward but off to the side, eyes rolling upward, followed by a quick head nod downward, as if to say "Yes." It's almost always followed with an apt question. I do this. But of course someone could fake it. One sign is that they add nothing to the conversation after it. You'll notice that. If you aren't sure, quiz them.

I've met very few people for whom the concept "simulating consciousness is analogous to simulating arithmetic" is obvious-in-retrospect, even among atheists. A special case of a "generalized anti-zombie" click?

life-is-good/death-is-bad

Widespread failure to understand this most basic principle ever drives me crazy and leaves me feeling physically sick. I'd appreciate efforts to raise the sanity waterline for this reason alone.

5CronoDAS14y
What if life isn't good? What if other people dying is good for the survivors?
4loqi14y
I'm not quite sure how to assign meaning to a normative counterfactual. Asserting "life is bad" is tantamount to declaring war on existence. Humans have massive, sprawling goal complexes, most of which seem to be predicated on existence. It seems extremely implausible that such goals could be consistent with a preference for non-existence. Consciously stroking yourself into a nihilistic fervor says more about the flexibility of your conscious perception than it does about the ultimate "goodness" of life (related Nesov comment). It's the "most basic principle ever" because:

  • It's implicit in virtually all other normative principles.
  • Most people have no intention nor desire to declare war on everyone else.

But feel free to let me know if those don't apply to you, so I can file you away as "pure evil". This is a narrower question that requires answering other questions like "which life?" and "how good?". It can't contradict the premise of life being good, it can only attempt to make it more precise.
2JamesAndrix14y
That appears not to be the case. In general, we want to live and want others to live. Where this does not hold, it is generally viewed as the result of something bad, or as a necessary means to prevent something bad. If that were the case, it would be an example of preventing something bad.

Oddly, I self identify as both being very good at "clicking", and very able to compartmentalize. I'm used to roleplaying an elf in WOW, a religious person at church, and a rationalist here. It makes it very easy to "click" because I can go "oh, of course, in the world that an elf inhabits, in that world view, this just makes sense", and because I have a lot of practice absorbing very odd ideas (I've worked out agricultural requirements for keeping a pet dragon...)

The big perk is that, say, my religious objections to cryonics d... (read more)

I wasn't there at the time, but if EY's description is roughly accurate, I suspect the ordinary-seeming woman understood him in the opening anecdote. The specific chain I'm looking at is:

EY: Magic does not exist.
OSW: Science doesn't understand everything?
EY: Ignorance is in mind, not reality.
OSW: Magic is impossible!

I see no way that OSW could deduce this fact about magic unless she compared magic - stuff which you are necessarily ignorant of - to the correct interpretation of EY's point.

That click is cultural. It seems magical because you've acclimated yourself to not encountering shared values with people very often, and so this cryonics gathering was a feast of connections.

I think "clicky" people are people who are not emotionally vested in their beliefs.

Many people need their beliefs to be true in order to feel like they are valuable and worthwhile people.

Clicky people simply don't need that (or at least need it to a lesser extent). Instead, clicky people need to end up right, even when that means admitting they were initially wrong.

4MichaelGR14y
It sounds like it might have something to do with what Carol Dweck describes as the "Growth Mindset", as opposed to the "Fixed Mindset". Here's something I wrote about it a couple years ago based on a Nigel Holmes graphic (still one of the most popular posts on my blog): http://michaelgr.com/2007/04/15/fixed-mindset-vs-growth-mindset-which-one-are-you/

Was there some particular bright line at which cryonics flipped from "impossible given current technology" to "failure to have universal cryonics is a sign of an insane society"? That is a sign change, not just a change in magnitude.

If we go back 50 or 100 years, we should be at a point where then-present preservation techniques were clearly inadequate. Maybe vitrification was the bright line, I do not pretend that preserving brains is a specialty of mine. I just empathize with those who still doubt that the technology is good enough... (read more)

3neuromancer9212y
Rather than being a sane view, this is a logical fallacy. I don't know of a specific name to give it, but survivorship bias and the anthropic principle are both relevant. The fallacy is this: for anything a person tries to do, every relevant technology will be inadequate up to the one that succeeds. Inherently, the first success at something will end the need to make new steps towards it, so we will never see a new advance where past advances have been sufficient for an end. The weak anthropic principle says that we only observe our universe when it is such that it will permit observers. Similarly, we can assume that if new developments are being made towards an aim, they are being made because past steps were inadequate. We cannot view new advances as having their chances of success biased by past failures since they come into existence only in the case that past attempts have indeed failed. (I am aware that technologies are improved on even after they achieve their aim, but in these cases new objectives like "faster" or "cheaper" are still unsatisfied, and drive the progress.)
2Richard_Kennaway12y
It's rather like the way that you only ever find something in the last place you look.
1Peter_de_Blanc14y
Really?? What's your source?

I think that this post should be linked prominently here for those who haven't been around on LW/OB for long and who might not follow all the back-links:

http://lesswrong.com/lw/wq/you_only_live_twice/

Hah, "Magic Click" --I see that all the time, people who don't know cryonics is real-or have not met anyone actually signed up. Left and right, every day kids and adults think it is a "cool" idea, they express interest--but they don't go through the steps to become a signed cryonicist. I'm not sure what causes one person to go through all the paperwork and another just thinks they might want to do that some day--from what I've seen, people who sign up for cryonics have had a brush with death and seem more motivated--it could come down t... (read more)

Does anyone know if Blink: The Power of Thinking Without Thinking is a good book?

http://www.amazon.com/Blink-Power-Thinking-Without/dp/0316172324

Amazon.com Review

Blink is about the first two seconds of looking--the decisive glance that knows in an instant. Gladwell, the best-selling author of The Tipping Point, campaigns for snap judgments and mind reading with a gift for translating research into splendid storytelling. Building his case with scenes from a marriage, heart attack triage, speed dating, choking on the golf course, selling cars, and military m... (read more)

6MichaelGR14y
I haven't read it, so I can't comment directly on it. But you should probably know that Gladwell has been criticized a lot for un-scientific methodology and for turning interesting anecdotes and "just-so" stories into generalizations and supposed "laws" (without much evidence). The most recent example of high profile criticism of Gladwell is probably this review by Steven Pinker: Malcolm Gladwell, Eclectic Detective I don't know if this criticism applies to Blink, though, but if you read it, your BS detector should probably be turned up a notch.
0[anonymous]14y
Sounds like someone I know! jk
4knb14y
The Harding hate is sadly predictable. Harding is so abused by people who know nothing about the man. Historians hate him because they have a bias toward hyperactive presidents like TR and FDR. Yes, Harding was prone to verbal gaffes, and had a few scandals, but he was basically a solid leader, ahead of his time in many ways, such as in civil rights.
0CronoDAS14y
Edit: Okay, you've given good reasons.
4knb14y
Yes, and Wilson is always in the top 10, and he suspended habeas corpus and took political prisoners (mostly socialists and feminists). If you look at the list, you can see that historians tend to favor the politicians that took big dramatic actions, started wars, led imperially, etc. Theodore Roosevelt is also always near the top, and he basically advocated empire building and racist immigration policies. Historians are just awful drama queens mostly.
4CronoDAS14y
There's a pretty good argument to make for Lincoln as our worst President, too. He's the only President under which we had a civil war!
-3righteousreason14y
Is this really relevant ...
1knb14y
Hey, we're trying to get less wrong here.... :3
2righteousreason14y
Well, in that case, can you explain that emoticon (:3)? I have yet to hear any explanation that makes sense :)
4knb14y
Sure. The cat face emoticon is a reference to an anime trope. When a character is being deliberately mischievous, or slightly bad in some way, they're often shown with the "cat face". (If you want to see an example, go to the Banned Wiki and search "cat smile"; I daren't link there.) It was adopted as an emoticon since the "mouth" of the cat face is essentially a sideways 3. In the West it is usually used to indicate that one is joking lightheartedly, using a bad pun, or alternately, to indicate that one isn't really trying to troll.
-1Kaj_Sotala14y
Interesting. Most of the people I've seen using it (myself included) are using it as a kind of a variant of <3, the heart smiley. (There's a slight difference in meaning between that and the heart, but one that's too subtle for me to put my finger on right now.)
0knb14y
Hmmm, maybe the meaning is splintering as it becomes more common. I suspect it originated in anime/manga message boards, as that is where I first saw it. The TV tropes page mainly seems to describe my usage.
-1MrHen14y
:3 is more "kitty likes you! aww!" or "teehee" and <3 is more "I send my love/kisses" or "wish I was there". At least, that is how I see it.
-1Kaj_Sotala14y
Thank you, that fits my intuition. Though :3 can also be a more tender, delicate version of the "love message" in <3. <3 has a more powerful tone.
3Alicorn14y
I enjoyed Blink. You can read some essays by the author here - if you get a lot out of them, you'll probably react similarly to the book.
2Bo10201014y
I liked it, though I think the promotional material and summaries of it don't do justice to the content. The book has many examples of how experts can make good snap judgments in their domains of expertise, but it is not about how any normal person can make great decisions without thinking about them. Also, Malcolm Gladwell could write a cookbook and make it the most entertaining thing you'll read all year.
4CronoDAS14y
Jon Finkel is probably the world's best Magic player. However, he is not good at explaining how to make correct decisions when playing; to him, the right play is simply obvious, and he doesn't even notice all the wrong ones. His skill is almost entirely unconscious.
2pdf23ds14y
Reminds me of Marion Tinsley, the greatest checkers player ever. He lost 7 games out of thousands in his 45 year career of playing for the World Championship, two of them to the program that would eventually go on to solve checkers. (That excludes his early years studying the game.) He was arguably the most dominant master of any game, ever. He, too, couldn't explain his skill.
3wedrifid14y
Do they still have World Championships in checkers now the game is understood to be a somewhat more complex tic-tac-toe variant?
4gregconen14y
I believe so, though I've heard the first few moves are now randomized, as only perfect play, rather than all board positions, is solved. Of course, every perfect-information deterministic game is "a somewhat more complex tic-tac-toe variant" from the perspective of sufficient computing power.
1pdf23ds14y
Yeah, sure. And I have a program that gives constant time random access to all primes less than 3^^^^3 from the perspective of sufficient computing power.
1wedrifid14y
Ahh, good idea. No, only the ones that are a tie.
0wedrifid14y
I've got an audio copy and have listened to it several times. It's definitely worth a look. I enjoyed it more than The Tipping Point, but I did read Blink first.

Interesting. I remember my brother saying, "I want to be frozen when I die, so I can be brought back to life in the future," when he was a child (somewhere between ages 9 and 14, I would guess). Probably got the idea from a cartoon show. I think the idea lost favor with him when he realized how difficult a proposition reanimating a corpse really was (he never thought about the information-capture aspect of it).

Isn't it saner to donate money to organizations fighting existential risks rather than spending it on cryonics?

9AngryParsley14y
Yes. Your argument applies to everything money can be spent on, not just cryonics. But unlike most things you can spend money on, cryonics has the advantage of forcing you to care about the future. It provides an incentive to donate to fighting existential risk.
-1RHollerith14y
It also provides a personal incentive to hurry the intelligence explosion along so that it occurs before the death of the people signed up for cryonics. [ADDED: I concede that what I just said does not make sense; I went to delete it a few minutes after I submitted it but people had already replied. Please do not reply to this.] In other words, it provides a disincentive to pursue a strategy that discourages or suppresses existentially risky research (on, e.g., AGI) so that less risky research represents a larger share of the total. In other words, it recruits the people most able to understand and respond effectively to existential risks to spend (collectively) many millions of dollars in a way that gives them a personal disincentive to pursue what I consider a very worthwhile strategy for addressing the existential risks posed by certain lines of scientific research. Most people who have expressed an opinion seem to believe that there is no stopping, or significantly slowing down, lines of research that (like AGI) can be continued with just a PC and access to the open scientific literature. But I tend to think such research can be stopped or slowed down a great deal if effective people put as much effort into explaining why it is bad as Eliezer and his followers are putting into convincing people to sign up for cryonics. According to my models, convincing people to sign up for cryonics at the current time does nothing to reduce existential risk. The opposite, in fact.
9Eliezer Yudkowsky14y
As far as I can tell, your argument supports the reverse of your conclusion: People signed up for cryonics have less incentive to do fast, risky things. This line of reasoning is sufficiently strange that I call motivated cognition on this one.
1RHollerith14y
Once there are a few thousand people working on existential risks, the marginal expected utility of recruiting another worker goes down. People start working at cross-purposes because of not knowing enough about each other's plans. Rather than increasing the number of e-risk workers as fast as possible, the recruiting strategy that minimizes e-risks is to figure out what personal qualities make for the best e-risk workers and differentially recruit people with those qualities. And the most decisive personal quality I know about has to do with the "motivational structure" of the prospective worker: what natural human desires and pleasures (and perhaps natural human fears) motivate him or her? Two natural human motivations that cause many of the people currently working on e-risks, or currently watching the public discourse about e-risks, to persist in these activities are self-interest and altruism. A lot of recruiting consists of written and oral communications. It is fairly straightforward to tailor the communications in such a way that they are of strong interest to, e.g., people interested in being altruistic, while being boring to people motivated by, e.g., their own personal survival. It gets harder the more the reader knows about e-risks and about the singularity, but at present most very bright people do not know much about these topics. Consequently, communications that inform people about the singularity should be tailored to be interesting to those with the right motivations. Since not enough people of sufficiently high prestige advocate cryonics to make an argument for cryonics by authority persuasive, the only effective way to persuade people to sign up is with an argument on the scientific merits, which entails explaining to them about the singularity. I.e., communication whose purpose is to get people to sign up for cryonics is necessarily also communication that informs people about the singularity - and this might be its more important effect ev...
1Jonathan_Graehl14y
We agree that getting people to sign up for cryonics increases their hope for post-singularity existence and thus their likelihood to support singularity-directed research, notwithstanding it doesn't require a singularity to revive a frozen near-defunct body or brain. Whether that's good or bad depends on your view of whether widespread efforts intending to reach a good singularity are likely to go disastrously wrong. Clearly, in case of widespread popularization of the goal, an enlightened FAI research program needs to spend effort on PR in order to steal funds from more sloppy aspirants. Considering all that, I expect widespread interest and funding for AI research to give only a change in the date, not the quality, of any singularity.
0RHollerith14y
Well, what's my personal motivation, then, if I am engaging in motivated cognition? But I do concede that my comment has a big problem here: "provides a personal incentive to hurry the intelligence explosion along so that it occurs before the death of the people signed up for cryonics" - and I would have deleted my comment had you not replied already. Give me a few minutes to try to reconstruct the thinking that led to my conclusion. One part is that getting people who are living now to hope to live a very long time disincentivizes them from considering strategies in which the singularity happens after they die. But there was another part, ISTR.
3AngryParsley14y
Based on your premises, don't you mean the opposite of everything you just said? If people are frozen we can take as much time as we need. If they age and die then we have an incentive to work faster. (Although if you do the math, the current world population is insignificant compared to the potential future of humanity, so cautiousness should win out either way.) My comment pointed out how cryonics creates a personal selfish reason to care about the future. I'd like for people to base their decisions on altruism, but the fact is that we're only human.
2JGWeissman14y
What?! If I do something to increase my chances of being revived given a positive Singularity after my death, then I should be more willing to pursue a strategy that increases the chances of an eventual Singularity being positive at the expense of the chances of a fast Singularity occurring before my death. Cryonics, by increasing the time we can wait, reduces the pain of delay.
3MichaelGR14y
Since most people who donate to fight existential risks don't donate everything they have above subsistence level, there's usually enough money to do both (since cryonics via life insurance isn't very expensive, afaik).
3Matt_Simpson14y
But surely you wouldn't be donating enough to, say, fighting existential risks for the marginal utility of the next dollar spent there to drop below the marginal utility of a dollar spent on cryonics. Not that I'm suggesting that fighting existential risks necessarily has higher marginal utility than cryonics. Rather, you probably don't have enough money to change the relative rankings, so you should donate to the cause with the highest marginal utility - not both. The exception may be donating enough to make sure YOU are reanimated after you die (I don't know what your utility function looks like), but in that case you aren't really donating.
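Formally, the point is a corner solution (a sketch in my notation, not the commenter's): with a budget $m$ split between causes $A$ and $B$,

```latex
\max_{a + b \,\le\, m} \; U_A(a) + U_B(b)
\quad\Longrightarrow\quad
a^* = m \;\text{ whenever }\; U_A'(x) > U_B'(y) \;\;\forall\, x, y \le m
```

If the budget is too small to drive the two marginal utilities into equality, the optimum puts everything on the single cause with the higher marginal utility.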

Surely you should be asking about the marginal utility of money spent on eating out before you ask about money spent on cryonics. What is this strange mental accounting where money spent on cryonics is immediately available to be redirected to existential risks, but money spent on burritos or French restaurants or an extra 100sqft in an apartment is not?

I have a theory about this, actually. How it works is: people get paid at the beginning of the month, and then pay their essential bills - food, rent, electricity, insurance, Internet, etc. What happens next is, people have a certain standard of living that they think they're supposed to have, based somewhat on how much money they make, but much more on what all their friends are spending money on. They then go out and buy stuff like fancy dinners and a house in the suburbs and whatnot, and this spending is not mentally available as something that can be cut back on, because they don't see it as "spending" so much as "things I need to do to maintain my standard of living". People see it as a much larger burden to write a single check for $2,000 than to spend $7 every day on coffee, because the two come out of different mental pools. Anything left over after that gets put into cryonics, or existential risk, or savings, or investments, etc. That's why you see so many more millionaire plumbers than millionaire attorneys: the attorney has a higher standard of living, and so has less money left over to save.
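The check-versus-coffee comparison is easy to make concrete (a trivial sketch using the comment's own figures; 365 coffee days assumed):

```python
daily_coffee = 7                    # dollars per day, from the comment
annual_coffee = daily_coffee * 365  # 2555 dollars per year
single_check = 2000                 # the one-time payment, from the comment

# The drip spending is the larger sum, yet the lump payment feels bigger.
print(annual_coffee, annual_coffee > single_check)  # 2555 True
```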

8gwern14y
--Max Weber, Protestant Ethic

We do?
3RobinZ14y
I was going to comment on that, but I don't see any millionaires at all, so I thought I shouldn't.
2Douglas_Knight14y
The main point of "the Millionaire Next Door" is that you might not notice millionaires.
2alyssavance14y
See The Millionaire Next Door, http://www.amazon.com/Millionaire-Next-Door-Thomas-Stanley/dp/0671015206 .
4gwern14y
It cites statistics, and actually says that there are X millionaire lawyers, and X+Y plumbers? It isn't just giving a lot of anecdotes? I would be very surprised to hear that, because it implies that one is substantially more likely to become a millionaire by plumbing than by lawyering, since there are ~500,000 plumbers in the US and >1.1million lawyers.
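To see why that would be a strong claim, run the per-capita rates (the workforce counts are from the comment; the millionaire counts below are hypothetical, purely to show the shape of the arithmetic):

```python
plumbers, lawyers = 500_000, 1_100_000  # counts from the comment
millionaire_plumbers = 60_000           # hypothetical "X + Y"
millionaire_lawyers = 50_000            # hypothetical "X"

rate_plumbers = millionaire_plumbers / plumbers  # 0.12
rate_lawyers = millionaire_lawyers / lawyers     # ~0.045

# Any absolute surplus of plumber millionaires implies a per-capita rate
# more than twice as high, since there are fewer than half as many plumbers.
print(rate_plumbers / rate_lawyers)  # ~2.6
```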
4Douglas_Knight14y
According to wikipedia it (1) generally cites statistics and (2) says that doctors, lawyers, and accountants save a much lower proportion of money than other occupations. google books says that it doesn't mention plumbers at all. I would guess that pretty much all lawyers permanently employed at BIGLAW are millionaires and pretty much no other lawyers are; but that's probably enough to beat plumbers. I think the other lawyers have a similar income distribution to plumbers.
7MichaelVassar14y
That seems natural enough to me, it's the net income of the very limited part of you that identifies as "you" because it can sometimes talk and think about abstractions.
4Eliezer Yudkowsky14y
On the one hand, yes, but on the other hand, I sometimes worry that we're getting a little too cynical around these Hansonian parts. In any case, cryonics is a one-time expenditure for that part of you. It looms large in the imagination in advance, but afterward the expenditure almost instantly fades into the background of the monthly rent, less salient than burritos.
8MichaelVassar14y
Cynicism is boring. Build a map that matches the territory. That map looks terribly Hansonian but doesn't have its 'cynical' bit set to 'yes'.
3Douglas_Knight14y
The deliberative part of "you" that thinks about cryonics may not be the same part that chooses restaurants, but doesn't it play a role in choosing apartments?
3MichaelVassar14y
Agreed, but the deliberative part may actually think that the larger and better located apartment contributes more to global utility, at least if you are the head of the Singularity Institute and you just spent the last 6 years living with a wife in 200 square feet.
2Matt_Simpson14y
If you have a sufficiently selfish utility function, it may make sense to spend that extra money on French restaurants and the bigger apartment. But otherwise, yes, the lowest-hanging fruit is spending less money on things like going out or new electronic toys.
1thomblake14y
It occurs to me to suggest that donating to both allows you to hedge your bets; one or the other might end up not producing results at all. Which seems to be a similar impulse to the one that causes people to guess 70% blue and 30% red, though the situation is different enough that it might make sense here.

Anecdotally, when I was a child I was a super-clicker. (I often wonder what child-Ialdabaoth would have accomplished, if not for sub-100-IQ parents, a fearfully fundamentalist Christian upbringing, and a clichéd-bad experience with the public school system.)

As an adult, I find that it is much, much harder for me to just "click" on things - and it is invariably due to a panic reaction when presented with information that might cause me to lose status among an imagined group of violent authoritarians.

It would be interesting to see how many more "...

In some piece of fiction (I think it was Orion's Arm, but the closest I can find is http://www.orionsarm.com/eg-topic/45b3daabb2329 and the reference to "the Herimann-Glauer-Yudkowski relation of inclusive retrospective obviousness") I saw the idea that one could order qualitatively-smarter things on the basis of what you're calling "clicks". Specifically, that if humans are level 1, then the next level above that is the level where if you handed the being the data on which our science is built, all the results would click immediately/...

It seems odd to refer to a "magical click" leading to understanding the incoherence of the idea of magic.

Say you were playing a real-time strategy game, with limited forces to allocate among possible targets to attack and potential vulnerabilities to defend. You can see all sorts of information about a given target, but the really important stuff either requires resources to discover or is hidden outright. The game's bootlegged and you can't find a copy of the manual, so even for the visible numbers you don't know exactly what they mean.

Poking ar...

4Friendly-HI13y
That explanation via analogy is actually quite good and may very well be true. If for some reason memes fail to properly fortify themselves when they claim territory inside your brain, they may be very easy to replace with competing memes, which could explain the "clickiness" of some people. If true, one thing we might expect from (as yet) non-rationalist people whose minds have that clicking quality is that they may be unusually susceptible to New Age crap, or generally tend to alter their views quickly. That was certainly the case with me when I was young and still lacked the mental tools of rationality. Also, a slight rebelliousness or disregard toward what other people think may be part of it. If you ever introduce someone to a position that is very unconventional, or even something entirely new that they have never heard of, more often than not they display some deep gut-reaction feeling of dismissal and come up with ridiculous on-the-spot rationalizations for why that new position cannot possibly be the case... and I have the impression that one of the most determining factors in what their gut reaction will root for is heavily connected to what other people in their tribe think. I know Eliezer's post is older, but I wonder if he probed the possibility that this clickiness may be predominantly a feature of people who simply have a general tendency or willingness to be contrarian.
0wintermute9212y
I think the suggestion that clickiness leads to acceptance of all ideas is flawed. On a practical level, people who click on a number of topics tend to hold few or no inaccurate beliefs (bolstering the unconquered-territory theory), but significantly, they also tend to adopt only good beliefs. Some ideas click, while others which seem just as subject to first-blush judgement (e.g. "People should only live out their natural lifespan.") are rejected near-instantly. On the metaphorical level, I think the game example holds up as long as we assume that the territories we are discussing are desirable - that is to say, everything we are looking to conquer is good, and some parts are easier to get than others. Those regions which are bad (e.g. full of monsters, corresponding to some irrational belief like homeopathy) are discarded even before the "preclaimed/free" evaluation is made. Given that, it seems to me that only some things click, but we've already divided the space of possible ideas to remove those which should be avoided altogether. Here, however, the capture model seems to fall down a bit - while sometimes we see that an idea is probably good but hard to hold, and then painstakingly reason our way there, there are some places which are incorrectly declared downright undesirable.
0neuromancer9212y
This suggestion is certainly an interesting one - that clicks happen in places where pre-existing ideas are weak, and "clicky" people have fewer strongly-entrenched concepts. I think the explanation goes somewhat beyond this however, based on a personal observation that "clicks" seem to preferentially arise for ideas which are, to the best of our understanding, "right". I know people with very low thresholds of belief, and clicky people, and it seems to me that the correlation between the two is negative if it exists. Credulous people can't click onto an idea because it doesn't seem more right to them than any other - every point is neutral, so new ideas are simply accepted. Clicky people, by contrast, can click in the positive or negative. Just as intelligence explosion can make "intrinsic" sense to someone, counterarguments to it are likely to throw a mental flag even before they find a clear source for the objection. The click seems to go beyond acceptance to rapid understanding and evaluation.

that if you wake up in the Future, it's probably going to be a nicer place to live than the Present.

Careful, man. If you bang too hard on this drum, people are going to start thinking "hey, why slog through the boring pre-FAI era? I'll just sign up for cryo, head over to the preservation facility, and down a bottle of Nembutal. Before long I'll be relaxing on a beach with super-intelligent robot sex dolls bringing me martinis!"

9Eliezer Yudkowsky14y
Suicides automatically get autopsied, so not currently an option. Otherwise... well, it seems fairly obvious to an expected utility maximizer who believes in the von Neumann/Morgenstern axiom of Continuity, that if being cryonically suspended is better than death, and there exists a spectrum of lives so horrible as to not be worth living, then there must exist some intermediate point of a life exactly horrible enough that it is not worth committing suicide but is worth deliberately suspending yourself if you have the option.
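One way to spell out the continuity step (a sketch in my notation, not Eliezer's; assume a real-valued utility $u$ and lives $\ell_q$ whose quality $q$ varies continuously):

```latex
% Assume: u(suspension) > u(death), and lives l_q with u(l_q) continuous in q,
% ranging from u(l_q) < u(death) (not worth living) up to u(l_q) > u(suspension).
% Then by the intermediate value theorem there is some q* with
u(\text{death}) \;<\; u(\ell_{q^*}) \;<\; u(\text{suspension})
% i.e. a life bad enough that suspension beats living it,
% but not so bad that suicide does.
```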
4komponisto14y
This seems quite unfair to sufferers of mental illness. What if a person signed up for cryonics later becomes depressed, resulting in suicide? (It could happen.) I guess I shouldn't be surprised at the near-total absence of cryonics-friendly law, but it's still worth remarking upon.
4pdf23ds14y
For that matter, what if a person is depressed (or terminally ill) and wants to commit suicide, but wants to sign up for cryonics too? That's actually my situation, and I e-mailed Alcor about it, but have received no response, to my chagrin. Another consideration is that legal physician-assisted suicide (I believe it's still legal in Oregon) probably makes autopsy less likely. I'll research this a bit and get back. Also, I totally do not understand the second para of Eliezer's comment. Also, it seems like the obvious reason that people would not commit suicide just to get suspended sooner is that most people's lives have utility greater than (future utility if revived)*(probability of getting revived). OTOH, if you're dying of cancer, get yourself up to Oregon.

It is pretty obvious that Alcor must avoid even the slightest suspicion of helping (or even encouraging) you to commit suicide.

This would be an extremely slippery slope that they really have to avoid, so as not to expose themselves to a lot of unjustified attacks.

0pdf23ds14y
Well, yes, I assumed that was the motivation. On the other hand, Thomas Donaldson. They actually went to court with him against California to support his "suicide". (They ended up losing. The court said it was a matter for the legislature.) And what I'm asking only amounts to figuring out the best way to avoid autopsy. EDIT: Actually, Alcor probably wasn't involved directly in the case. I forget where I read that they were; I probably didn't read it. But anyway, the overall publicity from the case was positive for Alcor.
4AngryParsley14y
Several states allow religious objections to autopsy. The coroner can override it in some cases (infectious disease that endangers the public, murder suspected), but it's better than nothing. The state doesn't care what religion you are, just that you've signed a form stating you object to an autopsy. ETA: The override process is pretty involved in California. It involves petitioning the superior court to get an order to autopsy.
2komponisto14y
I was assuming from what Eliezer said that suicides were also in the "override" category. If not, that's good news.

With an unforgivable naivete, a childish stupidity, we all still think history is leading us towards good, that some happiness awaits us ahead; but it will bring us to blood and fire, such blood and such fire as we haven't seen yet.

-- Nikolai Strakhov, 1870s

3PatSwanson11y
Compared to now, he was wrong. Why should we think he will be right about our future when he was wrong about his own?
0[anonymous]11y
IMO he was right about Russia's future at least.
2denisbider14y
Well, he was right. After all, WWI and WWII then lay just ahead. And our current, relatively peaceful period may, in fact, be building up to a period yet worse than that. Though, of course, we hope it's not.

I wonder if it ties in to some kind of confidence in your understanding.* If you don't trust your ability to understand a simple argument, you're really quite likely to overrate the strength of your heuristics relative to your reason.

...which sounds a lot like why I'm suspicious of cryonics, on introspection. I really need to run the numbers and see if I can afford it.

* Oh noes! Have I become one of those people with one idea they go back to for everything?

9thomblake14y
Don't worry, that's practically everybody. Just be aware of it, excuse other people for not getting your big idea, and consider revising your stance in the future.
3RobinZ14y
Thanks - I'll try to think of it that way. (One thing I was considering as a countermeasure was to imagine it as an analogue to a Short Duration Personal Savior - a ShorDurDivRev (divine revelation), perhaps!)

It would have been convenient if I'd discovered some particular key insight that convinced people. If people had said, "Oh, well, I used to think that cryonics couldn't be plausible if no one else was doing it, but then I read about Asch's conformity experiment and pluralistic ignorance." Then I could just emphasize that argument, and people would sign up.

But the average experience I heard was more like, "Oh, I saw a movie that involved cryonics, and I went on Google to see if there was anything like that in real life, and found Alcor." ...

when they told me that I didn't have to understand the prayers in order for them to work so long as I said them in Hebrew

I am not defending OJ in general, but your objection, while philosophically valid, was misplaced. It's ok, you were only 5 :)

Talmudic law is, let's say, a "ritual law" where performance of certain acts is "fulfilled" in a purely legalistic sense. There is a long-standing argument over whether mitzvot (commandments) require intentional performance to be fulfilled. E.g., if you "hear the shofar" by accident, you do ...

Sorry, this may be a stupid question, but why is it good for people to get cryonically frozen? Obviously if they don't, they won't make it to the future - but other people will be born or duplicated in the future, and the total number of people will be the same.

Why care more about people who live now than future potential people?

Because we exist already, and they don't. Our loss is death; theirs is birth control.

4RobertWiblin14y
Why is it worse to die (and people cryonically frozen don't avoid the pain of death anyway) than to never have been born? Assuming the process of dying isn't painful, they seem the same to me.
  1. Once a person exists, they can form preferences, including a preference not to die. These preferences have real weight. These preferences can also adjust, although not eliminate, the pain of death. If I were to die with a cryonics team standing over me ready to give me a chance at waking up again, I would be more emotionally comfortable than if I were to die on the expectation of ceasing to exist. Someone who does not exist yet does not have such preferences.

  2. People do not all die at the same time. Although an impermanent death is, like a permanent one, also a loss to the living (of time together), it's not the same magnitude of loss. Beyond a certain point, it doesn't matter very much to most people to be able to create new people (not that they wouldn't resent being disallowed).

  3. It's not clear that anyone's birth will really be directly prevented by cryonics. (I mean, except in the sense that all events have some causal impact that influences, among other things, who jumps whose bones when, and therefore who has which children.) A society that would revive cryonics patients probably isn't one that has a population problem such that the cryonics patients make a difference.

6Paul Crowley14y
I care more about myself than future potential people. More seriously, I value a diversity of minds, and if the future does too they may be glad to have us along.
4Vladimir_Nesov14y
Agree with "myself", disagree with "diversity of minds". If the future needs diversity, it has its random number generators and person templates. Additional argument: death is bad, life-creation is not morally reversible.
4Paul Crowley14y
I don't know why I said "more seriously" when it's by far the less defensible argument.
2RobertWiblin14y
I can understand valuing oneself more than others (simple selfishness is unsurprising), but I think Eliezer is saying cryonics is a positive good, not just that it benefits some people at the equal expense of others. If uploading pans out, as is probably required for people to be woken up, the future will be able to make minds as diverse as it likes.
2Paul Crowley14y
I don't think so; when people say "shouldn't you argue that people give the money to SIAI", he says "why does this come out of our saving the world budget, and not your curry budget?" I think this is a very weak point and the far future will probably be able to make whatever kinds of minds they like, but we could have scanning/WBE long before we know enough about minds to diversify them.
0wintermute9212y
In addition to the very valid counterpoints listed here, I think it's worth noting the false dichotomy of the question. If the initial assumption is that population is capped, that hasn't been borne out yet, and assuming we eventually leave Earth in a sustainable-habitats manner, it doesn't ever have to hold true. If population-capping isn't the basis for your statement, then I don't see anything suggesting that the total number of people will be the same with and without cryonics. We are not choosing between ourselves and future potential people - at the moment, we are simply choosing between possible-ourselves and definitely-not-ourselves existing in the future.

There's the consequentialist/utilitarian click, and the intelligence explosion click, and the life-is-good/death-is-bad click, and the cryonics click.

I can find a number of blog posts from you clearly laying out the arguments in favor of each of those clicks except the consequentialism/utilitarianism one.

What do you mean by "consequentialism" and "utilitarianism" and why do you think they are not just right but obviously right?

3AngryParsley14y
Hmm... you probably want to read Circular Altruism. There are different forms of utilitarian consequentialism. The LW wiki has a blurb with links to other useful posts.
1Document14y
It's linked from the wiki page, but just to shorten the distance: The "Intuitions" Behind "Utilitarianism" was what came to mind for me.

Oooh, but people can be wrong in so many ways. It's not a single extra crazy circuit. We've got redundancies: in most people, perhaps the 'main' circuits are never quite laid down right, but the redundant parts take over. This is so common that people don't agree on what the main circuits are; in Japan, dyslexia is more common than, err, what is neurotypical in the USA.

Some people overthink it, some underthink it. Underthink it, and you think, "Bah, Walt Disney is wacky to freeze his head!" and never get past that. Overthink it, and you may never actually sign up, because you leech out all the emotional impetus (a thought process that is more adaptive for getting rid of bad memories).

2Blueberry14y
What? Are you saying most people in Japan are dyslexic?
0spriteless14y
Most have the same difficulties with letters that have multiple pronunciations that dyslexics have, and the standard method of teaching reading is phonetic rather than memorization-based. It could as easily be cultural as genetic; English is a strange nest of exceptions.
0Blueberry14y
This has nothing to do with dyslexia. Japanese uses a syllabary character system to write most words, so of course everything is phonetic. All human languages are strange nests of exceptions, and any time people learn a new language, they struggle to understand its complications, such as letters with multiple pronunciations. That's not dyslexia, it's getting confused by an unfamiliar language, and Americans who try to learn foreign languages make similar mistakes in those languages.
1denisbider14y
Yes, but a cognitive distinction doesn't have to be genetic in order to exist. Whether you choose to call it dyslexia or not, or whether the difference is genetic or due to a different learning background, what spriteless is trying to expose is that the mind functions differently. Circuits that are primary in one population are auxiliary in another, and vice versa.
0Blueberry14y
While it may well be true that different kinds of minds function differently, there's no reason to think that speaking different languages makes you function differently. A native English speaker learning Japanese will make much the same kind of mistakes that a native Japanese speaker learning English will, and pretty much the same circuits will be "primary" and "auxiliary" in both. This contrasts with neurodiversity, and disabilities like dyslexia, where some circuits may be impaired or differently wired.

Cryonics does not prevent you from dying. Humans are afraid of dying. Cryonics does not address the problem (fear of death). It instead offers a possible second life.

I'm afraid of dying, because I know that when I am dying I will be very afraid. So I'm afraid of being afraid. Cryonics would offer very little to me right now in terms of alleviating this fear. Sure it might work; but I won't know that it will while I'm dying, and so my fear while dying will not be mitigated.

You might say: hey, wait, jhuffman - isn't actually being dead, and not having a chance at a ...

8Cyan14y
For me, the overwhelming problem with death is that I don't get to exist anymore. If you're going to be afraid of dying whether or not you've signed up for cryonics, then your decision not to sign up cannot depend on your anticipation of being afraid, as that is invariant across the two scenarios.
1jhuffman14y
I don't really understand that statement. Your problem - here and now - is that after you die you don't exist anymore? I can't tell you what your problems or fears are, but is it possible the real problem, here and now, is that you are afraid of not existing after your death? Edit, to follow up this remark: So then cryonics is just a solution looking for a problem. I don't have a problem it can solve.

Fear of not existing after death is not just some silly uncomfortable emotion to be calmed. Rather it reveals a disconnect between one's preference and expectations about the actual state of reality.

The real problem is not existing after death. Fear is a way of representing that.

2jhuffman14y
I never said it was silly, I hope it didn't come across that way. And I am not at all suggesting that we shouldn't prefer life, and shouldn't take all reasonable steps to continue living as long as living is worthwhile.
6Cyan14y
I value my continued existence; I'm surprised that this is at all confusing. Is penicillin also a solution looking for a problem? How about looking both ways before you cross the street? Do you really place no value on the longer life you would have the possibility of living if you signed up? If so, why does the same consideration not also extend to the common death-preventing steps (ETA: limit that to sudden death, the kind where you experience no opportunity to feel fear) you presumably currently take?
3jhuffman14y
Of course not - penicillin prevents death. So does looking both ways before I cross the street. Cryonics does not prevent death. Well, I could try to calculate the utility based on my guess at the odds of it working, but I estimate that the cost of the time invested in doing that would exceed the marginal utility I'd find when finished. So I'm not going to look into it, for the same reason that I don't read all the email messages I get from potential business partners in Nigeria. Surely there must be a chance that one of those is real, but I consider that chance so vanishingly small that it's -EV for me to read the emails.
6thomblake14y
Funny that your expected value from posting this comment was higher than researching cryonics.
0jhuffman14y
Funny, I had the same thought. But actually I got value from the responses I've gotten in this thread, even if you haven't.
2Cyan14y
Fair enough; I can't deny that your conclusion follows from the premise "Cryonics has so small a probability of succeeding that it doesn't even justify looking into the topic." I will note that this is a shift from "Cryonics is a solution in search of a problem" to "Cryonics is not a solution." ETA: No, I take that last remark back. But my original comment about your true objection being about something other than your fear of death was correct.
2jhuffman14y
Really, my original point was and still is that cryonics doesn't prevent dying or death. My particular problem with death is a fear of dying. I truly have gotten over the fact that one day I will not exist. So I guess I was projecting this onto others, and probably that isn't valid. Yes, there is another question, which I didn't originally speak to, about the analysis of the marginal utility of a possibly prolonged life, but that isn't really something that interests me. If we became much more confident in cryonics - say, after some major technological breakthroughs - I would update, based on the fact that knowing cryonics is likely to work would be a comfort to me as I was dying. So I'd go sign up, but not just because I want the prolonged life: because I'd now view dying more like "going to sleep", and the fear of it would be significantly reduced. So even if I were never revived, I would have gotten value from cryonics if I considered it viable at the time I was dying.
3Cyan14y
Okay, fair enough. But asserting that cryonics won't work without detailed prior knowledge of its infeasibility and without even being willing to investigate it puts you in a terrible epistemic position. You still haven't argued for your ostensible point. (I wrote another reply, but then deleted it as it was premised on a falsehood.)
2jhuffman14y
My ostensible point again: cryonics doesn't prevent dying. I really need to present an argument for this? Or I need to present an argument for my point that I'm only afraid of dying, and not of being dead? Well, here it is: I can die. I can't be dead - because at that point there is no I. So while right now I can fear the void, it won't be a problem once I am dead. Note that the insertion of cryonics does not change any of these facts. I'll still be afraid of dying, I'll still die, I will no longer exist. Whether I'm in a frozen can or my ashes are scattered in the ocean, there will be an identical amount of neural computation. So I won't exist and I won't have any problems, either way.
4thomblake14y
I'm pretty sure your terminology is causing a lot of needless confusion here. I think people are reading "cryonics doesn't prevent dying" as "cryonics does not prevent death", which is the usual way of speaking. If someone says, "Sam's dying; do something!" they don't so much want you to stop Sam from feeling like he's dying, but rather they want you to make it so that Sam does not die. However, you seem to be talking about death in the following, and people's replies might be better directed towards this:
4thomblake14y
Yes. I think the standard counterargument is linked to on the wiki; 'death' is a moving target, and it seems like "information-theoretic death" is a good candidate for what "death" will mean when the technology settles out.
0jhuffman14y
But the dying process does not change. The philosophical or even clinical definition of "dead" has no bearing on the emotional experience of dying.
5AngryParsley14y
I take it you have a do not resuscitate medical tag then? You wouldn't want some EMTs to restart your heart after you had the "emotional experience of dying."
0jhuffman14y
I've never said I wouldn't want to be revived before I expire. I've only said I wouldn't expect to be and so it would be of no comfort to me. Probably, it would be pretty terrible both dying and being revived. Afterward, I'd be glad I was revived. I can see where you are headed with this about the value of preferences now for things happening later.
0thomblake14y
I see. I think you're being unclear, though I'm not sure it's your fault. I'll reply to your earlier post.
-1Eliezer Yudkowsky14y
(Original poster thinks of himself as a persistent billiard ball of identity, when neural processing stops, the billiard ball winks out of existence. This winking-out is death. If anyone wants to explain the ontological falsity of the billiard-balls theory to the original poster at less length than working all the way up to here, they can go ahead and try.)
3jhuffman14y
Uhm, no. I would subscribe to more of an information view of identity. In other words, if my information state encoded in my brain could be uploaded to a computer and executed in a mind simulator it would have my identity as much as the meat guy writing this right now. Actually I have no idea what identity is or how many of me there are; I'm the guy who Can't Get Over Dust Theory. I don't have doubts about cryonics at that level. If the technology works, the technology works and I would wake up as me as I ever was. That isn't where I'm "going wrong". I do think that identity is lost when information is irretrievably lost. And I think that has likely happened or will happen to everyone being suspended right now.
1jhuffman14y
I don't know why you'd assume I've done no research on it or have no knowledge of its feasibility. What I'm unwilling to do at this point is try to estimate the marginal utility of the proposition in any serious or sophisticated way. And I don't know where I said it won't work. It would be ridiculous to say that. Given our continued existence and advancement as a civilization, it will almost certainly work someday for someone. At present we have substantial technical challenges related to preserving people in a state such that they can be revived. I also know there is debate on what may be possible in the even further future in terms of repairing brain cells that were destroyed through apoptosis or necrosis. I also know there are a number of risks or barriers to revival even after the technology challenges are resolved, and these risks (particularly the economic and political risks) increase the longer it takes the revival and medical technology to catch up. No one can predict the odds that any given person now would ever be revived, but there are many reasons to be pessimistic about those chances.
6Cyan14y
It's because we're using certain words in different ways, and according to my usage of them, what you said somewhat weakly implied that you hadn't. You did say "Cryonics does not prevent you from dying." If cryonics works, then I don't consider the life events that follow resuscitation to be a second life that occurred after death, as opposed to a single life with a long inanimate period somewhere in the middle - to me, that just looks like a distinction without a difference. This is an example of the ongoing semantic clash. Anyway, it now seems to me that you've practiced some form of Dark Side Epistemology on yourself, in that the fact that after you're dead you have no preferences seems to be critical to your reasoning. I'm all for removing time inconsistency of preferences, but I think that's going a bit far. It seems that CronoDAS had a far better grasp than me of what you were actually claiming; the linked query is far more apposite than I originally appreciated, and I'd be very interested in your reply to his question. I'll even accentuate it: does the fact that if you were in a stable coma* you would have no preferences excuse your doctor from rescuing you from that coma if they can? * Let us stipulate that you have no awareness of your state.
1jhuffman14y
I think it is more than just semantics. Unless you are very confident in a cryonic revival, the emotional experience of dying is not much changed by that distant prospect. Gosh, I hope it's not fatal. So you really think you will have preferences after you cease to exist? Or just that it's not important that you won't?
4thomblake14y
I'm pretty sure it's hard to fix the temporal context of that utterance. However, this might clear things up: I currently have preferences regarding things that haven't happened yet. Even things that will happen after I die! It shouldn't be too hard to imagine: "I want my great-grandchildren to grow up in a world without violence." or some such sentiment. Whether I will have preferences after I die is a different, interesting question, but not particularly relevant to decisions I have to make now; for those, I use my current preferences.
7jhuffman14y
Ok I guess I'm caught up now. So if my current preference is that I am revived at some point after "dying" (which I've acknowledged it is), then I should act on it now and sign up for Cryonics, since that is my only chance of that happening later. The fact that it provides no cure for the "dying experience" doesn't detract from its possibility to fulfill my current preferences. Got it.
0thomblake14y
Apparently I fail at convincing people not to sign up.
1Cyan14y
I hope so too. (I know you're being sarcastic; I'm being genuine. You can Google it if you don't know what the term I used means.) The fact that after I'm dead I have no preferences is not important to my decision-making. ETA: And, uh, not to be pushy or anything, but the coma scenario...?
0jhuffman14y
Well, I did reply to CronoDAS directly, but I'll try this scenario too (I think it is a lot different): * Let us stipulate that you have no awareness of your state. I think this is more similar to the resuscitation question; yes, I have a preference to be resuscitated. Doctors have a responsibility to their patients to treat them in spite of their present lack of consciousness. This shares with the resuscitation example the fact that I really have to take no action for these preferences to be carried out.
1Cyan14y
I think that in the end, regardless of everything else we've discussed, your argument against choosing cryonics is founded on your judgment that it has negligible chance of success. So what is the evidence and reasoning that led you to that conclusion?
2jhuffman14y
Yes, I've acknowledged that much earlier. If I became much more confident in cryonics I'd be more likely to sign up, as I think it would then be a comfort in the dying process to know that maybe this is "just going to sleep for a while". I've also acknowledged I prefer resuscitation, so even though "I'm out" for a period, yes, I could see a value in cryonics now if I expected it to work. Well, we know that pretty serious damage happens to the brain, beginning minutes after heart failure and getting worse and worse due to autolysis as the hours pass before vitrification. We also know that vitrification does cause some damage; not the devastation that actual freezing would be, but damage nonetheless. Cryonics advocates wave what I call the "future nano-wand" at all of these problems. While it does not in theory seem infeasible that future resuscitation could avoid the destruction caused by ischemia, and that our nano-friends can repair any chemical contamination caused by vitrification itself, it is very, very doubtful to me that there is any information left in a brain that has suffered several hours of autolysis. As a young man, I'd expect my death to be unexpected (e.g. due to accident or violence) and that it would be hours before I could be vitrified. When this changes (e.g. I get older or find I'm doomed by a life-threatening disease), maybe I'd be more interested in cryonics. Even assuming we can vitrify people in a state that preserves information, I have some doubts about revival actually ever happening. First, we don't really know what is going to be possible and when with nanotechnology. Some people want to say revival is 50 years out; I've got no reason - no successes in nanotechnology to date - to make me think it's not 200 years out. I've got serious concerns about our economic and political stability over the next fifty years. There is no doubt there are going to be some serious changes in the world. I think there may be problems beyond their con...
3Cyan14y
In light of the recent post on logical rudeness, I feel I ought to complete this thread with a final reply. It is this: if your first post had consisted of this analysis instead of your remarks on fearing death, I would not have begun a conversation; therefore, having reached this point, I'm happy to stop here. On the substance of the parent, I'll say I don't agree on a couple of points, these points make all the difference, and I don't expect a return on the investment of arguing them.
3jhuffman14y
Well, I thought some about this whole thread after reading that article. In my defense, I did acknowledge a number of errors or fallacies: the most important being that I do have preferences now about things that happen even when I am unconscious, and that the same applies even when I am dead. The second is your point, which is that underscoring my entire argument is a basic belief that it "just won't work"; and that's really all my argument amounts to. For me, I take it as a given that cryonics won't work, just as I take atheism as a given. So when I'm presented with someone who is in favor of cryonics, I don't really take their preference for it at face value. I project (incorrectly) that they are buying into cryonics to get a hedge against "fear of dying", and so my only point is that it doesn't really help much with that... if you believe what I believe. It's a pretty stupid argument, really; while there was some learning value in this thread for me, the OP is pretty fail.
1Cyan14y
Upvoted.
0[anonymous]14y
To make sense of your original post, one must know that you presupposed that cryonics doesn't prevent death. You don't state it there, and you haven't actually argued the point yet. But given that cryonics doesn't prevent death, everything else you've written seems valid to me.
0[anonymous]14y
Well, it kinda depends on how you define death. For instance: Not all the patients who are cryopreserved are actually dead as the term is usually understood. They're just in stasis. (Though unavoidably some will die, as the first aid teams can't get to everyone that fast.)
7CronoDAS14y
Thought: Is there a significant difference between the process of being suspended and revived, and the process of going to sleep and waking up?
7jhuffman14y
When I go to sleep I expect to wake up. When I die, even if I had a cryogenic stand-by all ready to go, I would not expect to be revived. So dying would be a lot more emotionally painful than going to sleep. In the future, if cryonic suspension and revival is an ordinary fact of life (for space travel or whatever) then I think there would be not much difference. The main emotional difference would be that you know you are going to be "away" for a long time. You may know people will miss you etc. Just like if you were taking a long trip with no communications. So different from sleep/wake but not different from other ordinary human experiences.
1Kevin14y
When I die, I plan on being on enough morphine that it doesn't cause me emotional pain to embrace my own probability estimates, which should be biased a lot higher in favor of resurrection by all of that morphine or kratom.
1denisbider14y
I agree; cryonics is failing to "click" with me for largely the same reason - the estimate of my benefiting from cryonics is not 95%, but more like 5%. If the likelihood of my revival and resumption of awareness is only 5%, then it doesn't much alleviate the emotional trauma of death. Plus, I can imagine the possibility of a harmful revival, where the mind is cloned and resumes awareness, only to become a lab experiment that gets reused tens of thousands of times.
4Vladimir_Nesov14y
Think of it as insurance, in the literal sense. When you buy, e.g., insurance for your house against fire, there is only something like a 0.2% chance or less that you'll benefit from the fact that you've bought insurance (you only benefit if fire happens), and a 99.8% chance that you'll only lose money by paying for insurance, which is by the way not a trivial sum. The analogy is not intuitively very salient on first sight, because "fire" may connote "death", while actually the analogy likens "fire" to successful revival, and death is just a fact of the scenery. A cryonics contract insures you against the "risk" of successful revival. If it turns out that you can be successfully revived, then you get the premium of an open-ended future.
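The analogy is easy to put in expected-value terms (a sketch; the 0.2% is the comment's figure, while the premium and payout are invented):

```python
p_fire = 0.002   # chance the rare branch happens (from the comment)
payout = 300_000 # value to you in that branch (invented)
premium = 1_000  # what you pay up front (invented)

# Raw expected value of holding the policy:
ev = p_fire * payout - premium
print(ev)  # -400: negative in pure EV, yet buying can still be rational
           # when the rare branch is the one where the stakes are enormous

# The mapping: premium -> cryonics dues; the rare "fire" branch ->
# successful revival; payout -> an open-ended future. In the common
# branch (no revival), you merely lose the premium.
```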
0denisbider14y
It also "insures" against the risk of a horrific revival. Plus, in order for the insurance to work, the premium would be very high right now for me to pay: http://lesswrong.com/lw/1mh/that_magical_click/1iam
2Roko14y
"resumption of awareness" - I think that this is a common intuition that people have - that their awareness is a continuous stream that is interrupted only by death - but I think it is nonsense. If you look at cryonics from a MW QM/subjective probability point of view, the "subjective probability" of revival is 100%, but that's only because branches where you don't survive don't contribute to your subjective probability from this point of view. If you take the utilitarian point of view, then 5% * millions or billions of years of fabulous life looks pretty good...
1jhuffman14y
Well if I'm banking on MW QM don't I already enjoy subjective quantum immortality, regardless of cryonics?
4Roko14y
Immortality is not much fun if you are in perpetual pain. If you condition on your own survival without cryo, there is an increased chance that most of the survival probability mass comes from scenarios where you are kept alive unpleasantly. Also, you might place some stock in "subjective" survival, and some in total survival measure.
2denisbider14y
Interesting arguments. Thank you.
0Roko14y
no problem!
0jhuffman14y
Too bad about all those people who lived before cryo. I guess they all have at least a few world-paths where they are in... hell? I think that, just as Schrodinger's Cat was originally posited as a thought experiment to show that there is something wrong with the Copenhagen interpretation of wavefunction collapse, Quantum Immortality was originally posited as a thought experiment to suggest that there is something intuitively wrong with the many-worlds interpretation of QM. I know MW is very popular here, but personally I don't find any interpretation of QM to be meaningful. The only thing we know is that the standard model makes accurate predictions. But that is another debate.
1Morendil14y
5% is how many times better than 0%?
1Paul Crowley14y
But this invites Pascal's Wager/Pascal's Mugging type arguments. It's not enough to argue that it's more than zero - it has to be enough to be worth the investment.
4Furcas14y
The real flaw in Pascal's Wager isn't that the probability of getting the desired payoff is extremely low; it's that the probability of getting the payoff is the same no matter which belief from the set of possible beliefs you hold. For example, the probability of being rewarded for being an atheist by a God who loves epistemic rationalism is at least as big as the probability of being rewarded by Yahweh for being a Christian. The probability of cryonics getting us the payoff, however, is a lot bigger than the probability that not signing up for cryonics will get us the payoff, so it's not a Pascal's Wager type argument to point out that cryonics is worth it even if the probability of it working is very small.
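The asymmetry being pointed at here can be written down directly. A sketch with made-up numbers, where the option names and every figure are hypothetical:

```python
def ev(p_payoff, payoff, cost):
    # Expected value of choosing an option: chance of the payoff times
    # its size, minus what the option costs you up front.
    return p_payoff * payoff - cost

# Pascal's Wager: the payoff probability barely depends on which option
# you pick, so the enormous payoff contributes equally to every option
# and cancels out of the comparison between them.
wager = {
    "worship_yahweh":    ev(p_payoff=1e-9, payoff=1e12, cost=1),  # 999.0
    "epistemic_atheism": ev(p_payoff=1e-9, payoff=1e12, cost=0),  # 1000.0
}

# Cryonics: the action itself moves the payoff probability by orders of
# magnitude, so the comparison between options is not a wash.
cryo = {
    "sign_up": ev(p_payoff=0.05, payoff=1e7, cost=1e5),  # 400000.0
    "decline": ev(p_payoff=1e-6, payoff=1e7, cost=0),    # 10.0
}
```

Because the payoff probability is the same across the wager's options, the huge payoff drops out of that decision; in the cryonics comparison it does not.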
1jhuffman14y
Rather than pick a particular paranoid scenario, I'd just suggest you further reduce your +EV by some percentage to account for revival into a future life you do not want to be living in. If you are lucky you'll have the chance to stop before the nanobots repair that particular "defect" in your mind.
0denisbider14y
See VijayKrishnan's comment for a much better writeup of the point I wanted to get across: http://lesswrong.com/lw/1mh/that_magical_click/1hl8 The issue is that my positive expected value for cryo starts out marginal. Once I account for the possible horrific outcomes, I'm not sure the resulting balance is positive at all. And if it is, I'm not sure it's worth the drastic changes I would have to make right now for cryo to be viable. For cryo to make sense for me right now, I'd have to move from the Caribbean to somewhere nearer to Alcor, where they can get my body in case I die. I'd then also have to move my company and start paying corporate and personal income taxes... The current +EV from cryo seems minor enough not to warrant these changes, but I would sign up if it didn't involve making a major compromise, and especially if I knew the risk of a horrific outcome were nonexistent.
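The accounting described here is easy to make concrete. A minimal sketch with invented utilities, showing how a marginal positive expectation can flip sign once a low-probability horrific branch enters the sum:

```python
def ev(branches, cost):
    # branches: (probability, utility) pairs; cost is paid regardless.
    return sum(p * u for p, u in branches) - cost

# Marginal but positive without the horrific branch.
baseline = ev([(0.05, 1_000_000)], cost=40_000)                      # 10000.0

# A 1% chance of a strongly negative revival flips the balance.
adjusted = ev([(0.05, 1_000_000), (0.01, -2_000_000)], cost=40_000)  # -10000.0
```

Whether the real sign flips depends on probabilities and utilities nobody in the thread claims to know.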

if you wake up in the Future, it's probably going to be a nicer place to live than the Present.

How do we know this? How can we possibly think it's possible to know this? I can think of at least three scenarios that seem much more likely than this sunny view that things will just keep progressing while you're dead and when you wake up you'll slip right into a nicer society:

1) We run out of cheap energy and hence cheap food; tensions rise; most of the world turns into what Haiti looks like now.

2) Somebody sets off a nuclear weapon, leading to worldwide r... (read more)

6Blueberry14y
It seems unlikely that people would be revived in those scenarios, especially 1 and 2. As for 3, biological evolution takes a long, long time, and even then it's likely the future humans would provide a decent environment for us if they revive us. Unlike apes, we and future humans will both be capable of communicating and engaging in abstract thought, so I don't think that analogy works.
6Ryan14y
Yep. As far as I can tell a world where people can be and are being revived is almost certainly one I want to live in.

a world where people can be and are being revived is almost certainly one I want to live in.

Exactly, and that's really well stated. By being cryo-preserved, you're self-selecting for worlds where there is a high likelihood that

(a) contracts are honored (your resources are being used to revive you, as you intended),

(b) human lives, even those of very different humans from long ago, are respected (otherwise why go to all the trouble of thawing you out),

(c) there is advanced neurotechnology sufficient to bring people back, and humanity is still around and has learned to live with it,

and (d) society is rationalist enough not to prohibit cryonics out of fear of zombies or something.

It's not perfect, but it's a good filter.

1Nick_Tarleton14y
Excellent observation!
2AndyWood14y
This is the single consideration in the cryonics debate that I remain unconvinced of. It seems very easy for me to imagine lots of futures that others might find worthwhile but that I would find very unpleasant. Off the top of my head: what if society is more regimented? What if one is expected to be very patriotic? What if it is a very collectivist culture? What if I still have to submit to hierarchies of one kind or another, for one reason or another? ... Are there good reasons to believe that human life in the future will be enjoyable to me? Can I do better than beginning with a bottom line that says "The future will be pleasant", and inventing justifications for why that's more likely than not?
4MichaelGR14y
Evolution by natural selection is indeed too slow to be a problem, but self-modification via technological means could mean rapid change for humanity. It might still not be a problem, since it's doubtful that a smarter civilization would totally lose the capability to communicate with humans v1.0 (knowing they have a bunch of frozen people around, they'd at least keep a file somewhere about the 21st century, or scan a bunch of brains to learn what they need to know). And if they could improve themselves, there's a good chance they'll also be able to improve the revived people so that they fit into the new society, or at least comfortably accommodate the humans 1.0 who don't want to be modified. (Who knows how a smarter-than-human friendly intelligence with highly advanced technology would deal with that problem? All we can guess is that the solution would probably be pretty effective.)
3Zack_M_Davis14y
Upload evolution could be very fast (due to clock speedup, fast copying, ability to test and revert mutations, &c.).
1Paul Crowley14y
Things are by and large much better for animals in captivity than for wild animals. I suspect this extends to apes, though others may have better domain-specific knowledge.
2mattnewport14y
Better how? Their lives are easier and longer in many cases, but I think you can make a plausible case that they are not very happy.
2Paul Crowley14y
I'm not sure wild animals are all that happy either though!
0[anonymous]14y
(1) and (2) are fairly likely, but what isn't is that someone will bother to revive us frozen folk if civilization is doing that badly. (3) is actually the best possible scenario next to a FAI having been designed. It's not a problem if we can be made into super-humans too!
Is that really just it?  Is there no special sanity to add, but only ordinary madness to take away?  Where do superclickers come from - are they just born lacking a whole lot of distractions?
What the hell is in that click?

Noesis.

https://en.wikipedia.org/wiki/Nous

1MakerOfErrors5y
Although, Noesis is just the original Greek-philosophy name for That Magic Click, not an explanation in and of itself - at least, no more than "dark matter" or "phlogiston" are. However, if anyone has figured out what actually is in that magic click, Noesis is the magic search term to find that gem of knowledge in the vast ocean of information. It's a Schelling point for people to discuss possible answers, so if anyone has found an answer, they or someone learning it from them would introduce it to the discussions using that term. If those discussions are the sort that are especially interested in truth as an end unto itself, rather than as a useful tool for winning arguments, then I'd expect the answer to spread broadly and float to the top of things like the Nous Wikipedia article.

Is this "click" you mention epiphany)?

10[anonymous]14y

Whether something seems "reasonable" or "implausible" can depend on how one's brain happens to be wired, perhaps due to a stroke, mental illness, or even just genetics. As your former blog post about [anosognosics](http://lesswrong.com/lw/12s/the_strangest_thing_an_ai_could_tell_you/) shows, the human brain can come to some silly conclusions with the input it's given. How do you know if what "clicks" for you matches reality or is due to a faulty circuit? Personally I require evidence, and I'll sign up for cryonics when the first dead mouse is brought back to life after dying and being frozen. People b... (read more)

I model clicks as moments where my subconscious mind notices a shortcut, and forces my conscious mind to take it.

Depending on a variety of factors (how tired I am, or whether I am in danger, for example), my subconscious mind may be able to force more or fewer shortcuts onto my conscious mind.

The "clicks" are not always ones that, after careful analysis, I follow up on. They have, however, come in very useful in high pressure/low time availability situations.

0MrHen14y
"Force" is an interesting choice of words. What is another way to say what you mean?
1aausch14y
Pushes my conscious mind to action, bypassing my ability to directly control my actions.
0MrHen14y
Thanks.
-3bgrah44914y
"My" is an interesting choice of words. What is another way to say what you mean?
0RobinZ14y
I believe a philosopher would suggest that aausch does not identify with that part of their intellect.
0[anonymous]14y
Wait, why is this an interesting choice of words?
-2Blueberry14y
"Is" is an interesting choice of words. What is another way to say what you mean?

I suspect you need to travel some (most?) of the inferential distance to becoming a rationalist (one way or another) before you can start clicking on ideas and concepts you're hearing for the first time.

Maybe you could devise a click-test and give it to different groups to see what kinds of people click more often?

6MrHen14y
If my "clicks" and EY's "clicks" are the same thing, this isn't true. My studies in Math and Computer Science were full of clicks, and the people around me would click at different points in the more complicated Math classes. Some of these people were certainly not rationalists. They were very smart, but certainly not rationalists.
3Lightwave14y
I'm not sure what you mean by "clicks" in Math classes. It looks like you're using "click" for "understand" or "gain insight"? Whether and when you click in complicated Math classes depends on how you manage to grasp math concepts and follow and connect them logically (or something of this sort). Whereas EY defines "click" as "a very short chain of reasoning", which in the minds of most people gets derailed. What I'm suggesting is that it gets derailed by other preconceptions that interfere with the short reasoning chain.
3MrHen14y
Sort of, but on a whole different scale than I use for the words "understand" or "gain insight." So much so that I would never switch one word out for the other. For me, "click" is to "understand" as "fly" is to "jump." You could say that flight is a form of jumping, but all the details are different and they have drastically different results. Yeah, that isn't how I am using click at all.
6Eliezer Yudkowsky14y
At age eight? Even I wasn't much of a rationalist until nine or so.
5MichaelVassar14y
I wonder if we should just use the word Bayesian and drop "Rationalist". The latter has an entrenched meaning as the opposite of "empiricist". We can also use words like Skeptic, Scientist, Popperian, and the like in their traditional meanings.
4Eliezer Yudkowsky14y
But no one can be a Bayesian except in the statistical-method-advocacy sense of the term.
3komponisto14y
I think the traditional "rationalist/empiricist" dichotomy is most likely a confusion. I don't mind at all if we end up helping to displace this terminology by spreading our sense of "rationalist".
3Lightwave14y
What I had in mind is that people will click more often if they've already covered some of the inferential distance and are in a mindset in which, when they first encounter cryonics/AI/whatever, it appears obviously/intuitively possible. Which is why you have 25% computer industry people and 25% scientists (i.e. it's obviously not a random sample of people). Scientists are more likely than most people to be atheists, believe in the possibility of AI, etc., and also more likely to click when they first hear about cryonics on the radio. As you've said, the chain of reasoning followed by a click is very short. But it's only short for those people who don't have other (longer) chains of reasoning and beliefs that seem to contradict the original statement. In order to connect the short chain, you have to dissolve the long one first. It seems to me that people need to have already accepted the naturalistic/scientific worldview to a certain extent in order to click immediately on cryonics. Now, I'm not sure how much of this applies to children, but I don't see why kids can't have a similar mindset (albeit based on simpler reasoning chains), i.e. already accepting most of the prerequisites for cryonics.
0bogdanb14y
That's weird. Do you actually remember your thoughts from that age?
3Eliezer Yudkowsky14y
I remember writing absolutely unthinkably awful science fiction, and reading Jerry Pournelle's A Step Farther Out.
0bogdanb14y
Hmm. Things like that I remember, at least in the sense that I have flashes of memory of reading a few books, or discussing a film with somebody, or some things I liked to draw. (Writing never quite attracted me, but I doodled all the time.) However, I have almost no memory of my mental state. All my memories are almost like flashes of third-person-view scenes of my life, which tempts me to believe they're “re-constructed views” rather than memories; otherwise I'd expect them to be first-person. (Also, the flashes of memory are not associated with moments in time. I can sometimes reconstruct when a memory is from by comparing what I see in the flashes with things whose age I can track down, but otherwise I have no mental “when” for what happened. All this of course applies only to memories older than a few years.)
2wedrifid14y
Or perhaps not travel far enough away.

Is this "click" you mention epiphany?

Looks like clickability is an important quality in startup founders.


2FutureQ14y
While death could occur at any moment in today's world, waiting to sign up for cryonics until it's been proven to work is like waiting until the lottery numbers have been drawn and confirmed as winners before buying one's ticket. Oops, sorry - those numbers were for the previous game; this ticket now applies to the next game. There is no NEXT GAME in this life, period. That is, unless you click that signing up now at the very least gives you better odds than all those who died before there was such a chance, or than those who still refuse to click that cryonics IS the only rational choice to date. I clicked that cryonics was for me the very instant the thought met my mind. I've come to the conclusion that it's not mass madness, it's mass stupidity, that keeps most of the rest of the world that could afford it from getting it. They likely won't get it until it's an option at the local hospital - when tech is advancing so fast that someone could literally be only a week away from a cure known to be in the pipeline, but without suspension for that week, they are toast.
1sfb__114y
Speaking of Nobel prize winners and cryonics: http://www.youtube.com/watch?v=g7Lzr3cwaPs A comedy sketch show's take on cryonics, with reanimated people from the 1940s bought up by a TV company when the cryonics company folded, and stuck in a "Big Brother" house. Just the kind of miserable future that looms disproportionately large in the mind relative to how likely it is to actually happen.