Followup to: Normal Cryonics
Yesterday I spoke of that cryonics gathering I recently attended, where travel by young cryonicists was fully subsidized, leading to extremely different demographics from conventions of self-funded activists. 34% female, half of those in couples, many couples with kids - THAT HAD BEEN SIGNED UP FOR CRYONICS FROM BIRTH LIKE A GODDAMNED SANE CIVILIZATION WOULD REQUIRE - 25% computer industry, 25% scientists, 15% entertainment industry at a rough estimate, and in most ways seeming (for smart people) pretty damned normal.
Except for one thing.
During one conversation, I said something about there being no magic in our universe.
And an ordinary-seeming woman responded, "But there are still lots of things science doesn't understand, right?"
Sigh. We all know how this conversation is going to go, right?
So I wearily replied with my usual, "If I'm ignorant about a phenomenon, that is a fact about my state of mind, not a fact about the phenomenon itself; a blank map does not correspond to a blank territory -"
"Oh," she interrupted excitedly, "so the concept of 'magic' isn't even consistent, then!"
Click.
She got it, just like that.
This was someone else's description of how she got involved in cryonics, as best I can remember it, and it was pretty much typical for the younger generation:
"When I was a very young girl, I was watching TV, and I saw something about cryonics, and it made sense to me - I didn't want to die - so I asked my mother about it. She was very dismissive, but tried to explain what I'd seen; and we talked about some of the other things that can happen to you after you die, like burial or cremation, and it seemed to me like cryonics was better than that. So my mother laughed and said that if I still felt that way when I was older, she wouldn't object. Later, when I was older and signing up for cryonics, she objected."
Click.
It's... kinda frustrating, actually.
There are manifold bad objections to cryonics that can be raised and countered, but the core logic really is simple enough that there's nothing implausible about getting it when you're eight years old (eleven years old, in my case).
Freezing damage? I could go on about modern cryoprotectants and how you can see under a microscope that the tissue is in great shape, and there are experiments underway to see if they can get spontaneous brain activity after vitrifying and devitrifying, and with molecular nanotechnology you could go through the whole vitrified brain atom by atom and do the same sort of information-theoretical tricks that people do to recover hard drive information after "erasure" by any means less extreme than a blowtorch...
But even an eight-year-old can visualize that freezing a sandwich doesn't destroy the sandwich, while cremation does. It so happens that this naive answer remains true after learning the exact details and defeating objections (a few of which are even worth considering), but that doesn't make it any less obvious to an eight-year-old. (I actually did understand the concept of molecular nanotech at eleven, but I could be a special case.)
Similarly: yes, really, life is better than death - just because transhumanists have huge arguments with bioconservatives over this issue, doesn't mean the eight-year-old isn't making the right judgment for the right reasons.
Or: even an eight-year-old who's read a couple of science-fiction stories and who's ever cracked a history book can guess - not for the full reasons in full detail, but still for good reasons - that if you wake up in the Future, it's probably going to be a nicer place to live than the Present.
In short - though it is the sort of thing you ought to review as a teenager and again as an adult - from a rationalist standpoint, there is nothing alarming about clicking on cryonics at age eight... any more than I should worry about my first schism with Orthodox Judaism coming at age five, when they told me that I didn't have to understand the prayers in order for them to work so long as I said them in Hebrew. It really is obvious enough to see as a child, the right thought for the right reasons, no matter how much adult debate surrounds it.
And the frustrating thing was that - judging by this group - most cryonicists are people to whom it was just obvious. (And who then actually followed through and signed up, which is probably a factor-of-ten or worse filter for Conscientiousness.) It would have been convenient if I'd discovered some particular key insight that convinced people. If people had said, "Oh, well, I used to think that cryonics couldn't be plausible if no one else was doing it, but then I read about Asch's conformity experiment and pluralistic ignorance." Then I could just emphasize that argument, and people would sign up.
But the average experience I heard was more like, "Oh, I saw a movie that involved cryonics, and I went on Google to see if there was anything like that in real life, and found Alcor."
In one sense this shouldn't surprise a Bayesian, because the base rate of people who hear a brief mention of cryonics on the radio and have an opportunity to click will be vastly higher than the base rate of people who are exposed to detailed arguments about cryonics...
Yet the upshot is that - judging from the generation of young cryonicists at that event I attended - cryonics is sustained primarily by the ability of a tiny, tiny fraction of the population to "get it" just from hearing a casual mention on the radio. Whatever part of one-in-a-hundred-thousand isn't accounted for by the Conscientiousness filter.
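To make the base-rate point concrete, here is a toy calculation. The one-in-a-hundred-thousand click rate is the figure from the paragraph above; the audience sizes and the persuasion rate for detailed arguments are numbers I've invented purely for illustration, not anything measured.

```python
# Toy base-rate sketch (illustrative numbers only): even if casual listeners
# click at a tiny rate, casual exposure can still supply more sign-ups than
# detailed argument, simply because far more people get the casual exposure.

casual_hearers = 100_000_000     # hypothetical: people who catch a passing mention
casual_click_rate = 1e-5         # the post's rough one-in-a-hundred-thousand figure

argued_readers = 200_000         # hypothetical: people who read detailed arguments
argued_convert_rate = 2e-3       # hypothetical: fraction persuaded by those arguments

from_casual = casual_hearers * casual_click_rate       # 1,000 sign-ups
from_argument = argued_readers * argued_convert_rate   # 400 sign-ups

print(f"Sign-ups from casual mentions:    {from_casual:.0f}")
print(f"Sign-ups from detailed arguments: {from_argument:.0f}")
# With these made-up numbers, the clickers outnumber the argued-into converts
# even though their per-person conversion rate is 200 times lower.
```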
If I suffered from the sin of underconfidence, I would feel a dull sense of obligation to doubt myself after reaching this conclusion, just like I would feel a dull sense of obligation to doubt that I could be more rational about theology than my parents and teachers at the age of five. As it is, I have no problem with shrugging and saying "People are crazy, the world is mad."
But it really, really raises the question of what the hell is in that click.
There's this magical click that some people get and some people don't, and I don't understand what's in the click. There's the consequentialist/utilitarian click, and the intelligence explosion click, and the life-is-good/death-is-bad click, and the cryonics click. I myself failed to click on one notable occasion, but the topic was probably just as clickable.
(In fact, it took that particular embarrassing failure in my own history - failing to click on metaethics, and seeing in retrospect that the answer was clickable - before I was willing to trust non-click Singularitarians.)
A rationalist faced with an apparently obvious answer, must assign some probability that a non-obvious objection will appear and defeat it. I do know how to explain the above conclusions at great length, and defeat objections, and I would not be nearly as confident (I hope!) if I had just clicked five seconds ago. But sometimes the final answer is the same as the initial guess; if you know the full mathematical story of Peano Arithmetic, 2 + 2 still equals 4 and not 5 or 17 or the color green. And some people very quickly arrive at that same final answer as their best initial guess; they can swiftly guess which answer will end up being the final answer, for what seem even in retrospect like good reasons. Like becoming an atheist at eleven, then listening to a theist's best arguments later in life, and concluding that your initial guess was right for the right reasons.
We can define a "click" as following a very short chain of reasoning, which in the vast majority of other minds is derailed by some detour and proves strongly resistant to re-railing.
What makes it happen? What goes into that click?
It's a question of life-or-death importance, and I don't know the answer.
That generation of cryonicists seemed so normal apart from that...
What's in that click?
The point of the opening anecdote about the Mind Projection Fallacy (blank map != blank territory) is to show (anecdotal) evidence that there's something like a general click-factor, that someone who clicked on cryonics was able to click on mysteriousness=projectivism as well. Of course I didn't expect that I could just stand up amid the conference and describe the intelligence explosion and Friendly AI in a couple of sentences and have everyone get it. That high of a general click factor is extremely rare in my experience, and the people who have it are not otherwise normal. (Michael Vassar is one example of a "superclicker".) But it is still true AFAICT that people who click on one problem are more likely than average to click on another.
My best guess is that clickiness has something to do with failure to compartmentalize - missing, or failing to use, the mental gear that lets human beings believe two contradictory things at the same time. Clicky people would tend to be people who take all of their beliefs at face value.
The Hansonian explanation (not necessarily endorsed by Robin Hanson) would say something about clicky people tending to operate in Near mode. (Why?)
The naively straightforward view would be that the ordinary-seeming people who came to the cryonics gathering did not have any extra gear that magically enabled them to follow a short chain of obvious inferences, but rather, everyone else had at least one extra insanity gear active at the time they heard about cryonics.
Is that really just it? Is there no special sanity to add, but only ordinary madness to take away? Where do superclickers come from - are they just born lacking a whole lot of distractions?
What the hell is in that click?
Thank you for writing this post. It's one of the topics that has kept me from participating in the discussion here - I click on things very often, as a trained and sustained act of rationality, and often find it difficult to verbalize why I feel I am right and others wrong. But when I feel that I have clicked, then I have very high confidence in my rightness, as determined by observation and many years of evidence that my clicks are, indeed, right.
I use the phrase, "My subconscious is way smarter than I am," to describe this event. My best guess is that my subconscious has built-in pathways for noticing logical flaws and lack of evidence, and has already chewed through problems over many years of thought ("creating a path"?), and I have trained myself to follow these "feelings" and form them into conscious words/thoughts/actions. It seems to be related to memory and number of facts in some ways, as the more reading I have done on a topic, the better I'm able to click on related topics. I do not use the word "feeling" lightly - it really does feel like something, and it gives me a sort of built-in filter.
I click on people (small movements, small statements leading to huge understanding gains, to the point where I can literally say what they're thinking from the slightest gesture), I click on tests (memorization), I click on big topics (X-risks, shut-up-and-multiply), philosophy, etc. On quantum mechanics, I have failed to click at all, and have been avoiding it.
What I've found is that my click decisions, when thought is applied, have dozens of reasons behind them, all unrealized at the time I was able to make the decision. Writing them all out afterward makes for an incredibly powerful argument in favor of my decision, and oftentimes shows that I really did weigh all the positives and negatives, just not in a rigorous 'proof'. Like not showing your work on a math problem, but still being able to look at the numbers and know the result.
One of the things I had to eliminate for my clickiness to become truly powerful was the desire to hold onto current beliefs. Openness to change is essential to letting the click take over your thoughts and lead you in a new direction. People get frustrated when arguing with me on occasion, as I will be a strong proponent of a specific position, then they present to me a single fact that demolishes it, and I will immediately begin arguing a new position using that fact as support. They typically laugh and shake their head, as if I had never supported my previous position and am now just arguing for argument's sake, when in reality, I clicked on the new fact and realized the implications, adjusting my beliefs accordingly, all in an instant.
Another thing I've noticed I do, which helps greatly with click thinking, is absorbing a huge amount of information on a specific topic. When I need to make an important decision, I get books from the library, I hit up Google and click links out to 25+ pages of results, and I check for authoritative forums and lurk there, reading, learning, rarely asking questions but picking up as much as I can nonetheless. I did this for WoW, for audio tech, for financial investment, for Singularity and rationality topics... and after I've spent a few months integrating myself into that information center, I'm able to click like crazy on all the important things. Some topics, once I've clicked, I drop and move on; others I continue to practice and read, like first-order philosophy. Thought experiments (roleplaying, etc.) tend to help if the topic is complex enough. I play a character in my head, or in a game, for a month or so; then, when I need to ask a really important question about the topic the character was designed around, I just click to the answer without consciously thinking, because I just know. For clicking on people, this involves observation and spending time around that person, and asking lots of questions about what they're thinking (I've gotten very good at asking without seeming annoying).
In addition to being open to change and gorging on data, I've found it very important to trust yourself. And by this I don't mean 'trust that you're making the right decision', but something closer to 'trust that other person who used your name at the time, even though you don't remember them or think like them any more'. In a sense, trust Rain-2007, even though I am now Rain-2010. I realize this will be counter to Eliezer's standard, "Do not listen to Eliezer-2001 - he was wrong," but that's not how I mean it. Instead, I'm saying that, at the point of information-glut and focus on a topic, I'm far more clickable than at some distant point in the future. I should trust that decision unless I'm willing to go through the same process of gather, read, learn, focus, and decide anew. That past click will be more right than any decision I make removed from that focus. It also means that you should trust your "instincts" - heresy, I know, considering the inherent biases, but a click really does "feel" different from a template-bias response.
One other thing I do, though I'm not sure it contributes to clickability, is avoid deep jargon or too-specific thinking. The click seems related to generalized thought processes rather than specific verbiage. So, for example, rather than reading and learning what Kolmogorov complexity is (as I said, I'm avoiding this stuff), I'd rather do a roleplaying exercise where my character exists at various technology levels, and generalize the universe around them. This step may seem at odds with the information-glut step, but I combine the two - when reading, every link I check often has only one or two sentences, maybe a paragraph or a whole page, that I actually "use", as in try to retain. The rest of it, I consider useless/worthless, and discard as best I can.
Which reminds me, I also ignore/forget information in order to more carefully focus on what I'm thinking about in the present (this is part of why I have to trust my past self so much). I try not to overload myself with knowledge or memories that don't help me make decisions now or in the anticipated future. Some people find this frustrating, as I don't remember that I pushed them down the stairs when I was 10, but that person in my childhood was so different from who I am now that I found no point in remembering much of it.
I think the click is a result of years of gathering information and thinking on (potentially general) topics, the ability to rapidly change, the ability to recognize how it feels to click, and to place the trust in that feeling that it deserves. It seems to be a trained skill, starting with (prerequisite of?) a good memory.
The one thing I'm getting out of writing this post is that the 'click' you describe is, in my opinion, not simple or a single effect, but rather a complex interaction of events, abilities, and predispositions. It may not be reproducible or trainable.
Sorry this post is late. Another part of the information-gathering strategy is to let conversations resolve themselves (wait a few days after a post) and read it all at once so points and counterpoints are all neatly together at the same time.
As applies specifically to cryonics, I remember my sole objection was affordability, and it turned out I was wrong about that. I think the first time I heard about the possibility I was 9 years old, and it clicked for me then -- but until just a few weeks ago, I was laboring under the misconception that you basically have to be rich to get it. That click can be painful when it generates conflicts between your utility function and the circumstances of your life.
I wonder if that might not lead some people to reject cryonics who'd otherwise be amenable.