Followup to: Normal Cryonics
Yesterday I spoke of that cryonics gathering I recently attended, where travel by young cryonicists was fully subsidized, leading to extremely different demographics from conventions of self-funded activists. 34% female, half of those in couples, many couples with kids - THAT HAD BEEN SIGNED UP FOR CRYONICS FROM BIRTH LIKE A GODDAMNED SANE CIVILIZATION WOULD REQUIRE - 25% computer industry, 25% scientists, 15% entertainment industry at a rough estimate, and in most ways seeming (for smart people) pretty damned normal.
Except for one thing.
During one conversation, I said something about there being no magic in our universe.
And an ordinary-seeming woman responded, "But there are still lots of things science doesn't understand, right?"
Sigh. We all know how this conversation is going to go, right?
So I wearily replied with my usual, "If I'm ignorant about a phenomenon, that is a fact about my state of mind, not a fact about the phenomenon itself; a blank map does not correspond to a blank territory -"
"Oh," she interrupted excitedly, "so the concept of 'magic' isn't even consistent, then!"
Click.
She got it, just like that.
This was someone else's description of how she got involved in cryonics, as best I can remember it, and it was pretty much typical for the younger generation:
"When I was a very young girl, I was watching TV, and I saw something about cryonics, and it made sense to me - I didn't want to die - so I asked my mother about it. She was very dismissive, but tried to explain what I'd seen; and we talked about some of the other things that can happen to you after you die, like burial or cremation, and it seemed to me like cryonics was better than that. So my mother laughed and said that if I still felt that way when I was older, she wouldn't object. Later, when I was older and signing up for cryonics, she objected."
Click.
It's... kinda frustrating, actually.
There are manifold bad objections to cryonics that can be raised and countered, but the core logic really is simple enough that there's nothing implausible about getting it when you're eight years old (eleven years old, in my case).
Freezing damage? I could go on about modern cryoprotectants and how you can see under a microscope that the tissue is in great shape, and there are experiments underway to see if they can get spontaneous brain activity after vitrifying and devitrifying, and with molecular nanotechnology you could go through the whole vitrified brain atom by atom and do the same sort of information-theoretical tricks that people do to recover hard drive information after "erasure" by any means less extreme than a blowtorch...
But even an eight-year-old can visualize that freezing a sandwich doesn't destroy the sandwich, while cremation does. It so happens that this naive answer remains true after learning the exact details and defeating objections (a few of which are even worth considering), but that doesn't make it any less obvious to an eight-year-old. (I actually did understand the concept of molecular nanotech at eleven, but I could be a special case.)
Similarly: yes, really, life is better than death - just because transhumanists have huge arguments with bioconservatives over this issue, doesn't mean the eight-year-old isn't making the right judgment for the right reasons.
Or: even an eight-year-old who's read a couple of science-fiction stories and who's ever cracked a history book can guess - not for the full reasons in full detail, but still for good reasons - that if you wake up in the Future, it's probably going to be a nicer place to live than the Present.
In short - though it is the sort of thing you ought to review as a teenager and again as an adult - from a rationalist standpoint, there is nothing alarming about clicking on cryonics at age eight... any more than I should worry about my first schism with Orthodox Judaism coming at age five, when they told me that I didn't have to understand the prayers in order for them to work so long as I said them in Hebrew. It really is obvious enough to see as a child, the right thought for the right reasons, no matter how much adult debate surrounds it.
And the frustrating thing was that - judging by this group - most cryonicists are people to whom it was just obvious. (And who then actually followed through and signed up, which is probably a factor-of-ten or worse filter for Conscientiousness.) It would have been convenient if I'd discovered some particular key insight that convinced people. If people had said, "Oh, well, I used to think that cryonics couldn't be plausible if no one else was doing it, but then I read about Asch's conformity experiment and pluralistic ignorance," then I could just emphasize that argument, and people would sign up.
But the average experience I heard was more like, "Oh, I saw a movie that involved cryonics, and I went on Google to see if there was anything like that in real life, and found Alcor."
In one sense this shouldn't surprise a Bayesian, because the base rate of people who hear a brief mention of cryonics on the radio and have an opportunity to click will be vastly higher than the base rate of people who are exposed to detailed arguments about cryonics...
Yet the upshot is that - judging from the generation of young cryonicists at that event I attended - cryonics is sustained primarily by the ability of a tiny, tiny fraction of the population to "get it" just from hearing a casual mention on the radio. Whatever part of one-in-a-hundred-thousand isn't accounted for by the Conscientiousness filter.
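To make the base-rate point concrete, here is a back-of-envelope sketch. The exposure counts and click probabilities below are purely illustrative assumptions, not figures from the post:

```python
# Purely illustrative, made-up numbers for the base-rate argument above.
casual_hearers   = 10_000_000  # people who catch a passing mention of cryonics
detailed_readers = 10_000      # people who sit through detailed arguments for it

p_click_casual   = 1e-5   # assumed tiny chance of "clicking" off a casual mention
p_click_detailed = 1e-3   # assume detailed arguments work 100x better per person

signups_from_casual   = casual_hearers * p_click_casual      # ~100
signups_from_detailed = detailed_readers * p_click_detailed  # ~10

print(signups_from_casual, signups_from_detailed)
```

Even if you grant detailed arguments a hundredfold advantage per listener, casual exposure dominates simply because so many more people get it - which is the sense in which a Bayesian shouldn't be surprised.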
If I suffered from the sin of underconfidence, I would feel a dull sense of obligation to doubt myself after reaching this conclusion, just like I would feel a dull sense of obligation to doubt that I could be more rational about theology than my parents and teachers at the age of five. As it is, I have no problem with shrugging and saying "People are crazy, the world is mad."
But it really, really raises the question of what the hell is in that click.
There's this magical click that some people get and some people don't, and I don't understand what's in the click. There's the consequentialist/utilitarian click, and the intelligence explosion click, and the life-is-good/death-is-bad click, and the cryonics click. I myself failed to click on one notable occasion, but the topic was probably just as clickable.
(In fact, it took that particular embarrassing failure in my own history - failing to click on metaethics, and seeing in retrospect that the answer was clickable - before I was willing to trust non-click Singularitarians.)
A rationalist faced with an apparently obvious answer must assign some probability that a non-obvious objection will appear and defeat it. I do know how to explain the above conclusions at great length, and defeat objections, and I would not be nearly as confident (I hope!) if I had just clicked five seconds ago. But sometimes the final answer is the same as the initial guess; if you know the full mathematical story of Peano Arithmetic, 2 + 2 still equals 4 and not 5 or 17 or the color green. And some people very quickly arrive at that same final answer as their best initial guess; they can swiftly guess which answer will end up being the final answer, for what seem even in retrospect like good reasons. Like becoming an atheist at eleven, then listening to a theist's best arguments later in life, and concluding that your initial guess was right for the right reasons.
We can define a "click" as following a very short chain of reasoning, which in the vast majority of other minds is derailed by some detour and proves strongly resistant to re-railing.
What makes it happen? What goes into that click?
It's a question of life-or-death importance, and I don't know the answer.
That generation of cryonicists seemed so normal apart from that...
What's in that click?
The point of the opening anecdote about the Mind Projection Fallacy (blank map != blank territory) is to show (anecdotal) evidence that there's something like a general click-factor, that someone who clicked on cryonics was able to click on mysteriousness=projectivism as well. Of course I didn't expect that I could just stand up amid the conference and describe the intelligence explosion and Friendly AI in a couple of sentences and have everyone get it. That high of a general click factor is extremely rare in my experience, and the people who have it are not otherwise normal. (Michael Vassar is one example of a "superclicker".) But it is still true AFAICT that people who click on one problem are more likely than average to click on another.
My best guess is that clickiness has something to do with failure to compartmentalize - missing, or failing to use, the mental gear that lets human beings believe two contradictory things at the same time. Clicky people would tend to be people who take all of their beliefs at face value.
The Hansonian explanation (not necessarily endorsed by Robin Hanson) would say something about clicky people tending to operate in Near mode. (Why?)
The naively straightforward view would be that the ordinary-seeming people who came to the cryonics gathering did not have any extra gear that magically enabled them to follow a short chain of obvious inferences, but rather, everyone else had at least one extra insanity gear active at the time they heard about cryonics.
Is that really just it? Is there no special sanity to add, but only ordinary madness to take away? Where do superclickers come from - are they just born lacking a whole lot of distractions?
What the hell is in that click?