My best guess is that clickiness has something to do with failure to compartmentalize - missing, or failing to use, the mental gear that lets human beings believe two contradictory things at the same time. Clicky people would tend to be people who take all of their beliefs at face value.
One of the things that I've noticed about this is that most people do not expect to understand things. For most people, the universe is a mysterious place filled with random events beyond their ability to comprehend or control. Think "guessing the teacher's password", but not just in school or knowledge, but about everything.
Such people have no problem with the idea of magic, because everything is magic to them, even science.
An anecdote: once, when I still worked as a software developer/department manager in a corporation, my boss was congratulating me on a million-dollar project (revenue, not cost) that my team had just turned in precisely on time with no crises.
Well, not congratulating me, exactly. He was saying, "wow, that turned out really well", and I felt oddly uncomfortable. A day or so after getting off the phone, I realized that he was talking about it like i...
Such people have no problem with the idea of magic, because everything is magic to them, even science.
Years ago, three other people and I were training for a tech support job. Our trainer was explaining something (the tracert command) but I didn't understand it, because his explanation didn't seem to make sense. After asking him more questions about it, I realized from his contradictory answers that he didn't understand it either. The reason I mention this is that my three fellow trainees had no problem with his explanation, one even explicitly saying that she thought it made perfect sense.
Huh. I guess that if I tell myself, "Most people simply do not expect reality to make sense, and are trying to do entirely different things when they engage in the social activity of talking about it", then I do feel a little less confused.
Most people simply do not expect reality to make sense
More precisely, different people are probably using different definitions of "make sense"... and you might find it easier to make sense of if you had a more detailed understanding of the ways in which people "make sense". (Certainly, it's what helped me become aware of the issue in the first place.)
So, here are some short snippets from the book "Using Your Brain For A Change", wherein the author comments on various cognitive strategies he's observed people using in order to decide whether they "understand" something:
...There are several kinds of understanding, and some of them are a lot more useful than others. One kind of understanding allows you to justify things, and gives you reasons for not being able to do anything different....
A second kind of understanding simply allows you to have a good feeling: "Ahhhh." It's sort of like salivating to a bell: it's a conditioned response, and all you get is that good feeling. That's the kind of thing that can lead to saying, "Oh, yes, 'ego' is that one up there on the chart. I've seen that before; yes, I understand."
Is the rest of it insightful too, or did you quote the only good part?
There are a lot of other good parts, especially if you care more about practice than theory. However, I find that personally, I can't make use of many of the techniques provided without the assistance of a partner to co-ordinate the exercises. It's too difficult to pay attention to both the steps in the book and what's going on in my head at the same time.
I'm still confused, but now my eyes are wide with horror, too. I don't dispute what pjeby said; in retrospect it seems terribly obvious. But how can we deal with it? Is there any way to get someone to start expecting reality to make sense?
I have a TA job teaching people how to program, and I watch as people go from desperately trying to solve problems by blindly adapting example code that they don't understand to actually thinking and being able to translate their thoughts into working, understandable programs. I think the key of it is to be thrust into situations that require understanding instead of just guessing the teacher's password -- the search space is too big for brute force. The class is all hands-on, doing toy problems that keep people struggling near the edge of their ability. And it works, somehow! I'm always amazed when they actually, truly learn something. I think this habit of expecting to understand things can be taught in at least one field, albeit painfully.
Is this something that people can learn in general? How? I consider this a hugely important question.
I wouldn't be surprised if thinking this way about computer programs transfers fairly well to other fields if people are reminded to think like programmers or something like that. There are certainly a disproportionate number of computer programmers on Less Wrong, right?
Certainly; I think this is a case where there are 3 types of causality going on:
During my military radio ops course, I realized that the woman teaching us about different frequencies literally thought that 'higher' frequencies were higher off the ground. Like you, I found her explanations deeply confusing, though I suspect most of the other candidates would have said it made sense. (Despite being false, this theory was good enough to enable radio operations - though presumably not engineering).
Thankfully I already had a decent grounding in EM; otherwise I would have had yet more cached garbage to clear - sometimes it's worse than finding the duplicate mp3s in my music library.
To properly understand how traceroute works, one would need to know about the TTL field.
I did learn about this on my own that day, but the original confusion was at a quite different level: I asked whether the times on each line measured the distance between that router and the previous one, or between that router and the source. His answer: "Both." A charitable interpretation of this would be "They measure round trip times between the source and that router, but it's just a matter of arithmetic to use those to estimate round trip times between any two routers in the list" -- but I asked him if this was what he meant and he said no. We went back and forth for a while until he told me to just research it myself.
Edit: I think I remember him saying something like "You're expecting it to be logical, but things aren't always logical".
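For the record, here's roughly what I eventually learned, as a minimal Python simulation of the TTL mechanism (the addresses and latencies are invented for illustration; real traceroute sends actual UDP or ICMP probes over the network):

```
# A minimal simulation of how traceroute uses the IP TTL field.
# Real traceroute sends probes with TTL = 1, 2, 3, ...; the router
# that decrements a probe's TTL to zero replies with an ICMP "time
# exceeded" message, revealing its address.

path = [("192.168.1.1", 1.0),    # (router address, one-way link latency, ms)
        ("10.0.0.1", 8.0),
        ("203.0.113.7", 20.0)]

def probe(ttl):
    """Return (address, rtt_ms) for the router where this TTL expires."""
    address = path[ttl - 1][0]
    # The probe crosses every link up to that router, and the reply
    # crosses them all coming back: so each printed time is the round
    # trip between the SOURCE and that router, cumulative -- not the
    # distance between consecutive routers.
    rtt = 2 * sum(latency for _, latency in path[:ttl])
    return address, rtt

for ttl in range(1, len(path) + 1):
    address, rtt = probe(ttl)
    print(f"{ttl}  {address}  {rtt:.1f} ms")
```

So the correct answer to my original question was "between that router and the source" - and subtracting adjacent lines gives only a rough estimate of router-to-router times, since each probe experiences its own routing and queueing delays.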
Jesus Christ. "Things aren't always logical." The hallmark of a magic-thinker. Of course everything is always logical. The only time it doesn't seem that way is when one lacks understanding.
This is overwhelmingly how I perceive most people. This in particular: 'reality is social'.
I have personally traced the difference, in myself, to receiving this book at around the age of three or four. It has illustrations of gadgets and appliances, with cut-out views of their internals. I learned almost as soon as I was capable of learning, that nothing is a mysterious black box, things that seem magical have internal detail, and there are explanations for how they work. Whether or not I had anything like a pre-existing disposition that made me love and devour the book in the first place, I still consider it to have had a bigger impact on my whole world view than anything else I can remember.
So I called him back and had a little chat about it. The idea that the project had succeeded because I designed it that way had not occurred to him, and the idea that I had done it by the way I negotiated the requirements in the first place -- as opposed to heroic efforts during the project -- was quite an eye opener for him.
The Inside View says, 'we succeeded because of careful planning of X, Y, and Z, and our own awesomeness.' The Outside View says, 'most large software projects fail, but some succeed anyway.'
What makes you think it was the only one, or one of a few out of many?
The specific project was only relevant because my bosses prior to that point already implicitly understood that there was something my team was doing that got our projects done on time while others under their authority were struggling - but they attributed it to intelligence or skill on my part, rather than to our methodology/philosophy.
The newer boss, OTOH, didn't have any direct familiarity with my track record, and so didn't attribute the success to me at all, except that obviously I hadn't screwed it up.
Once upon a time, I had a job where most of what I did involved signing up people for cryonics. I'm guessing that few other people on this site can say they've ever made a salary off that (unless you're reading this, Derek), and so I can speak with some small authority. Over those four excruciating years at Alcor, I spent hundreds of hours discussing the subject with hundreds of people.
Obviously I never came up with a definitive answer as to why some people get it and most don't. But I developed a working map of the conceptual space. Rather than a single "click," I found that there was a series of memetic filters.
The first and largest by far tended to be religious, which is to say, afterlife mythology. If you thought you were going to Heaven, Kolob, another plane of existence, or another body, you wouldn't bother investing the money or emotional effort in cryonics.
Only then came the intellectual barriers, but the boundary could be extremely vague. I think that the vast majority of people didn't have any trouble grasping the basic scientific arguments for cryonics; the actual logic filter always seemed relatively thin to me. Instead, people used their intellect to ra...
Thank you for writing this.
If you ever feel like writing a longer post about your experience in the cryonics world, I'd love to read it and I suspect others would too.
Is that really just it? Is there no special sanity to add, but only ordinary madness to take away?
I think this is the primary factor. I've got a pretty amusing story about this.
Last week I met a relatively distant relative, a 15-year-old guy who's in a sports-oriented high school. He plays football, has not much scientific, literary or intellectual background, and is quite average and normal in most conceivable ways. Some TV program on Discovery was about "robots", and in the spontaneous 15-minute conversation that unfolded, I managed to explain to him the core problems of FAI without him getting stuck at any point in my arguments. I'm fairly sure that he had no previous knowledge of the subject.
First I made a remark in connection with the TV program's poetic question about what happens if robots become able to do most human work; I said that if robots get the low-wage jobs, humans would eventually get paid more on average, and the problem only arises when robots can do everything humans can and somehow end up actually doing all those things.
Then he asked if I think they'll get that smart, and I answered that it's quite possible in this century. I explained rec...
There's this magical click that some people get and some people don't, and I don't understand what's in the click. There's the consequentialist/utilitarian click, and the intelligence explosion click, and the life-is-good/death-is-bad click, and the cryonics click.
I think it's a mistake to put all the opinions you agree with in a special category. Why do some people come quickly to beliefs you agree with? There is no reason, except that sometimes people come quickly to beliefs, and some beliefs happen to match yours.
People who share one belief with you are more likely to share others, so you're anecdotally finding people who agree with you about non-cryonics things at a cryonics conference. Young people might be more likely to change their mind quickly because they're more likely to hear something for the first time.
More strongly, is there any reason to believe that people are more likely to "click" to rational beliefs than irrational ones?
As an example, papal infallibility once clicked for me (during childhood religious education), which I think most people here would agree is wrong, even conditioned on the existence of God.
Thank you for writing this post. It's one of the topics that has kept me from participating in the discussion here - I click on things very often, as a trained and sustained act of rationality, and often find it difficult to verbalize why I feel I am right and others wrong. But when I feel that I have clicked, then I have very high confidence in my rightness, as determined by observation and many years of evidence that my clicks are, indeed, right.
I use the phrase, "My subconscious is way smarter than I am," to describe this event. My best guess is that my subconscious has built-in pathways to notice logical flaws, lack of evidence, and has already chewed through problems over many years of thought ("creating a path"?), and I have trained myself to follow these "feelings" and form them into conscious words/thoughts/actions. It seems to be related to memory and number of facts in some ways, as the more reading I have done on a topic, the better I'm able to click on related topics. I do not use the word "feeling" lightly - it really does feel like something, and it gives me a sort of built-in filter.
I click on people (small movements, small sta...
I had a funny click with my girlfriend earlier this evening. I suggested that she should sign up for cryonics at some point soon, and I was surprised that she was against the idea. In response to her objections, I explained that it was vitrification and not freezing, etc., but she wasn't giving me any rational answers - until she said that she really wanted to see the future, but also wanted to watch it unfold.
She thought that by cryonics I meant right now, Futurama-style. After a much-needed clarification she immediately agreed that cryonics was a good idea.
So based on her understanding of what you said, she was actually right to object.
I guess the lesson here is that we must learn not to skip steps when explaining unconventional ideas, because there is a risk that people will be opposed to things that aren't even part of the proposal, and a further risk that we won't notice that's what is going on. (In your case, you noticed it and corrected the situation, but what if there had been a huge fight and the subject had never been brought up again? That would have been a sad reason not to sign up for cryonics...)
I think her default understanding was more like "Kevin is really morally depraved and probably not serious anyways".
It was funnier in the real world; I sucked away most of the humor with my written retelling.
Mmm... I am a click-hunter. I keep pestering a topic and returning over and over until I feel it click. I can understand something well enough to start accurately predicting results but still refuse to be satisfied until I feel it click. Once it clicks I move on.
You and I may be describing different types of clicks, however. Here is a short list of things I have observed about the clicks in my life.
The step from not having a subject click to having it click is enormous. It is the single greatest leap in knowledge I will likely experience in a subject matter. I may learn more in one click than in a whole semester of absorbing knowledge from a book.
Clicks don't translate well. It is hard to describe the actual path up to and through a click.
What causes a subject to click for me will not cause it to click for another. Clicks seem to be very personal experiences, which is probably why it is so hard to translate.
Clicks tend to be most noticeable with large amounts of critical study. I assume that day-in-day-out clicks are not terribly noticeable but I suspect that they exist. A simple example I can think of is suddenly discovering a quicker route through town.
Interesting. Eliezer took some X years to recognize that even "normal looking" persons can be quick on the uptake? ;)
My best guess is that clickiness has something to do with failure to compartmentalize - missing, or failing to use, the mental gear that lets human beings believe two contradictory things at the same time. Clicky people would tend to be people who take all of their beliefs at face value.
I guess it has a somewhat deeper explanation than that. I think clickiness happens when two people have managed to build very similar mental models and are ready to manipulate and modify those models incrementally. Once the models are roughly in sync, it takes very little time to communicate, and slight hints can create the right change in the conversation partner's model, if he is ready to update.
I think a lot of us have been trained hard to stop model building at certain points. There is definitely a difference between people in how much they care about the taboos society imposes on them, which can result in mental red lights: "Don't continue building that model! It's dangerous!" This is what I think Eliezer's notion of "compartmentalization" refe...
What the hell is in that click?
I'm not seeing that there's anything so mysterious here. From your description, to click is to realize an implication of your beliefs so quickly that you aren't conscious of the process of inference as it happens. You add that this inference should be one that most people fail to draw, even if the reasoning is presented to them explicitly.
I expect that, for this to happen, the relevant beliefs must happen to be
1. cached in a rapidly-accessible part of your mind,
2. stored in a form such that the conclusion is a very short inferential step beyond them, and
3. free of any obstructing beliefs.
By an obstructing belief, I don't mean a belief contradicting the other beliefs. I mean a belief that lowers your estimate of the conditional probability of the conclusion that you would otherwise have reached.
When you are trying to induce other people to click, you can do something about (1) and (2) above. You can format the relevant beliefs in the most transparent way possible, and you can use emphasis and repetition to get the beliefs cached.
But if your interlocutors still fail to click, it's probably because (3) didn't happen. That is, it's probably just ...
This post, in addition to being a joy to read, contains one particular awesome insight:
My best guess is that clickiness has something to do with failure to compartmentalize - missing, or failing to use, the mental gear that lets human beings believe two contradictory things at the same time. Clicky people would tend to be people who take all of their beliefs at face value.
Here's some confirmation: I must have at least some clickiness, since I "got" the intelligence explosion/FAI/transhumanism stuff pretty much immediately, despite not having been raised on science fiction.
And, it turns out: I hate, hate, HATE compartmentalization. Just hate it -- in pretty much all its forms. For example, I have always despised the way schools divide up learning into different "classes", which you're not supposed to relate to each other. (It's particularly bad at the middle/high school level, where, if you dare to ask why you shouldn't be able to study both music and drama, or both French and Spanish, they look at you with a puzzled expression, as though such thoughts had occurred to no human being before.) I hate C.P. Snow's goddamned "Two Cultures". I hate the ...
At the risk of revealing my stupidity...
In my experience, people who don't compartmentalize tend to be cranks.
Because the world appears to contradict itself, most people act as if it does. Evolution has created many, many algorithms and hacks to help us navigate the physical and social worlds, to survive, and to reproduce. Even if we know the world doesn't really contradict itself, most of us don't have good enough meta-judgement about how to resolve the apparent inconsistencies (and don't care).
Most people who try to make all their beliefs fit with all their other beliefs, end up forcing some of the puzzle pieces into wrong-shaped holes. Their favorite part of their mental map of the world is locally consistent, but the farther-out parts are now WAY off, thus the crank-ism.
And that's just the physical world. When we get to human values, some of them REALLY ARE in conflict with others, so not only is it impossible to try to force them all to agree, but we shouldn't try (too hard). Value systems are not axiomatic. Violence to important parts of our value system can have repercussions even worse than violence to parts of our world view.
FWIW, I'm not interested in cryonics. I...
It takes the mostly correct idea "life is good, death is bad" to such an extreme that it does violence to other valuable parts of our humanity (sorry, but I can't be more specific).
It seems to me that you can't be more specific because there is not anything there to be more specific about.
I think it's not possible, but even if it were, I think I would not bother. Introspecting now, I'm not sure I can explain why. But natural death seems like a good point to say "enough is enough." In other words, letting what's been given be enough.
-Longer life has never been given; it has always been taken. There is no giver.
-"Enough is enough" is sour grapes - "I probably don't have access to living forever, so it's easier to change my values to be happy with that than to want yet not attain it." But if it were a guarantee, and everyone else was doing it (as they would if it were a guarantee), then this position would be the equivalent to advocating suicide at some ridiculously young age in the current era.
It takes the mostly correct idea "life is good, death is bad" to such an extreme that it does violence to other valuable parts of our humanity (sorry, but I can't be more specific).
I assert that the more extremely the idea "life is good, death is bad" is held, the more benefit other valuable parts of our humanity are rendered. I can't be more specific.
(Edit: after having written this entire giant thing, I notice you saying that this was just a "why are some people not interested in cryo" comment, whereas I very much am trying to change your mind. I don't like trying to change people's minds without warning (I thought we were having that sort of discussion, but apparently we aren't), so here's warning.)
But natural death seems like a good point to say "enough is enough." In other words, letting what's been given be enough.
You're aware that your life expectancy is about 4 times that of the people who built the pyramids, even the Pharaohs, right? That assertion seems to basically be slapping all of your ancestors in the face: "I don't care that you fought and died for me to have a longer, better life; you needn't have bothered, I'm happy to die whenever." Seriously: if a natural life span is good enough for you, start playing Russian roulette once a year around 20 years old; the odds are about right for early humans.
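Spelling out the arithmetic of that annual game (the one-in-six chance is just the premise of the roulette proposal, not a measured mortality rate):

```
# A 1/6 chance of death each year, starting at age 20.
p = 1 / 6
extra_years = 1 / p                    # mean of the geometric distribution: 6
chance_of_reaching_30 = (1 - p) ** 10  # survive ten annual rounds

print(20 + extra_years)                # expected age at death: 26
print(f"{chance_of_reaching_30:.1%}")  # about 16%
```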
As a sort-of aside, I honestly don't see a lot of difference between "when I die is fine" and just committing suicide right now. Whatever it is that would stop...
Careful with life-expectancy figures from earlier eras. There was a great chance of dying as a baby, and a great chance for women to die in childbirth. Excluding the first - that is, counting only those who made it to, say, 5 years old - life expectancy shoots up considerably, though obviously not as high as now.
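A toy example of that skew, with invented numbers:

```
# Suppose 30% of a population dies in infancy around age 1, and the
# rest live to 55. The infant deaths drag the at-birth average far
# below what anyone who survived childhood could actually expect.
at_birth = 0.3 * 1 + 0.7 * 55   # life expectancy at birth: ~38.8
at_five = 55                    # life expectancy for those who reach 5

print(at_birth, at_five)
```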
My survival instincts prevent me from committing suicide, but they don't tell me anything about cryonics.
Well, your instincts evolved primarily to handle direct, immediate threats to your life. You could say the same thing about smoking cigarettes (or any other health risk): "My survival instincts prevent me from committing suicide, but they don't tell me anything about whether to smoke or not."
But your instincts respond to your beliefs about the world. If you know the health risks of smoking, you can use that to trigger your survival instincts, perhaps with the emotional aid of photos or testimony from those with lung cancer. The same is true for cryonics: once you know enough, not signing up for cryonics is another thing that shortens your life, a "slow suicide".
You seem to have two objections to cryonics:
1. Cryonics won't work.
2. Life extension is bad.
#1 is better addressed by the giant amount of information already written on the subject.
For #2 I'd like to quote a bit of Down and Out in the Magic Kingdom:
Everyone who had serious philosophical conundra on that subject just, you know, died, a generation before. The Bitchun Society didn't need to convert its detractors, just outlive them.
Even if you don't think life extension technologies are a good thing, it's only a matter of time before almost everyone thinks they are. Whatever part of "humanity" you value more than life will be gone forever.
ETA: Actually, there is an out: if you build FAI or some sort of world government and it enforces 20th century life spans on people. I can't say natural life spans because our lives were much shorter before modern sanitation and medicine.
There's also the valuable trait where, between being presented with an argument and going "click", one's brain cleanly goes "duhhh", rather than producing something that sounds superficially like reasoning.
I am puzzled by Eliezer's confidence in the rationality of signing up for cryonics, given that he thinks it would be characteristic of a "GODDAMNED SANE CIVILIZATION". I am even more puzzled by the commenters' overwhelming agreement with Eliezer. I am personally uncomfortable with cryonics for the following two reasons, and am surprised that no one seems to bring these up.
(a) Have my life support system turned off and die peacefully.
(b) Keep the life support system going but subsequently give up all autonomy over my life and body and place it entirely in the hands of others who are likely not even my immediate kin. I could be made to put up with immense suffering either due to technical glitches which are very l...
If you were hit by a car tomorrow, would you be lying there thinking, 'well, I've had a good life, and being dead's not so bad, so I'll call the funeral service' or would you be calling an ambulance?
Ambulances are expensive, and doctors are not guaranteed to be able to fix you, and there is chance you might be in for some suffering, and you may be out of society for a while until you recover - but you call them anyway. You do this because you know that being alive is better than being dead.
Cryonics is just taking this one step further, and booking your ambulance ahead of time.
I suspect that Eliezer too has a similar opinion on this.
Nope, ongoing disagreement with Robin. http://lesswrong.com/lw/ws/for_the_people_who_are_still_alive/
It is not cryonics which carries this risk, it is the future in general.
Consider: what guarantees that you will not wake up tomorrow morning to a horrible situation, with nothing familiar to cling to ? Nothing; you might be kidnapped during the night and sequestered somewhere by terrorists. That is perhaps a far-out supposition, but no more fanciful than whatever your imagination is currently conjuring about your hypothetical revival from cryonics.
The future can be scary, I'll grant you that. But the future isn't "200 years from now". The future is the next breath you take.
You don't have to die to become a penniless refugee. All it takes is for the earth to move sideways, back and forth, for a few seconds.
I wasn't going to bring this up, because it's too convenient and I was afraid of sounding ghoulish. But think of the people in Haiti who were among the few with a secure future, one bright afternoon, and who became "penniless refugees" in the space of a few minutes. You don't even have to postulate anything outlandish.
You are wealthy and well-connected now, compared to the rest of the population, and more likely than not to still be wealthy and well-connected tomorrow; the risk of losing these advantages looms large because you feel like you would not be in control while frozen. The same perception takes over when you decide between flying and driving somewhere: it feels safer to drive, to many people.
Yes, there are possible futures where your life is miserable, and the likelihoods do not seem to depend significantly on the manner in which the future becomes the present - live or paused, as it were - or on the length of the pauses.
The likelihoods do strongly depend on what actions we undertake in the present to reduce what we might call "...
Most of what you're worried about should be UnFriendly AI or insane transcending uploads; lesser forces probably lack the technology to revive you, and the technology to revive you bleeds swiftly into AGI or uploads.
If you're worried that the average AI which preserves your conscious existence will torture that existence, then you should also worry about scenarios where an extremely fast mind strikes so fast that you don't have the warning required to commit suicide - in fact, any UFAI that cares enough to preserve and torture you, has a motive to deliberately avoid giving such warning. This can happen at any time, including tomorrow; no one knows the space of self-modifying programs well enough to predict when the aggregate of meddling dabblers will hit something that effectively self-improves. Without benefit of hindsight, it could have been Eurisko.
You might expect more warning about uploads, but, given that you're worried enough about negative outcomes to forego cryonic preservation out of fear, it seems clear that you should commit suicide immediately upon learning about the existence of whole-brain emulation or technology that seems like it might enable some party to run WB...
and with molecular nanotechnology you could go through the whole vitrified brain atom by atom and do the same sort of information-theoretical tricks that people do to recover hard drive information after "erasure" by any means less extreme than a blowtorch...
As far as I know, the idea that there are organizations capable of reading overwritten data off of a hard drive is an urban legend. See http://www.nber.org/sys-admin/overwritten-data-gutmann.html
I would be more inclined to take ErrantX seriously if he said what company he works for, so I could do some investigation. You would think that if they regularly do this sort of thing, they wouldn't mind a link. The "expensive" prices he quotes actually seem really low. DriveSavers charges more than $1000 to recover data off of a failed hard drive, and they don't claim to be able to recover overwritten data. Given all of that, I tend to think he is either mistaken (he does say it isn't really his field), or is lying.
I have what I hope is an interesting perspective here - I'm a super-not-clicker. I had to be dragged through most of the sequences, chipping away one dumb idea after another, until I Got It. I recognize this as basically my number one handicap. Introspecting about what causes it, I'll back Eliezer's compartmentalization idea.
For me, input flows into categorized and contextual storage. I can access it, but I have to look (and know to look, which I won't if it's not triggered). This is severe enough to impact my memory; I find I'm relying on almost-stigmergy, to-do-list cues activated by context, and I can literally slip out of doing one thing and into another if my contextual cues are off.
I think this is just my problem, but I wonder if it's an exaggerated form of the way other people can just divert facts into a box and sit on them.
I've often described learning in terms of 'clicking'.
It's most memorable to me when thinking about hard problems that I can't solve right away. It feels like something finally puts the last piece of the puzzle in place and for the first time I can 'see' the answer.
When trying to teach people, I've noticed that some people have a very obvious 'click response'- they'll light up at a distinct moment and just get it from then on.
Other people show no sign of this, yet claim to learn. I still haven't figured out what is going on here. The possibilities I can think of are: 1) their learning process involves no clicking; 2) they hide the click to make it sound like they've known it all along, because they'd be embarrassed at how late their click is; 3) they're faking it, and don't really get it.
For me though, learning about cryonics and the intelligence explosion idea didn't seem very 'click-like', since it just seemed obviously true the first time I heard about it, rather than there being a delay that makes the evaporation of confusion more satisfying. I suspect the learning mechanism is actually the same though.
Other people show no sign of this, yet claim to learn. I still haven't figured out what is going on here. The possibilities I can think of are: 1) their learning process involves no clicking; 2) they hide the click to make it sound like they've known it all along, because they'd be embarrassed at how late their click is; 3) they're faking it, and don't really get it.
How about 4) they don't really get it, and just think they do, or 5) they don't realize there's anything to "get" in the first place, because they think knowledge is a mysterious thing that you memorize and regurgitate. I think that's actually the most common case, but the others are perhaps plausible as well.
I've met very few people for whom the concept "simulating consciousness is analogous to simulating arithmetic" is obvious-in-retrospect, even among atheists. A special case of a "generalized anti-zombie" click?
life-is-good/death-is-bad
Widespread failure to understand this most basic principle ever drives me crazy and leaves me feeling physically sick. I'd appreciate efforts to raise the sanity waterline for this reason alone.
Oddly, I self identify as both being very good at "clicking", and very able to compartmentalize. I'm used to roleplaying an elf in WOW, a religious person at church, and a rationalist here. It makes it very easy to "click" because I can go "oh, of course, in the world that an elf inhabits, in that world view, this just makes sense", and because I have a lot of practice absorbing very odd ideas (I've worked out agricultural requirements for keeping a pet dragon...)
The big perk is that, say, my religious objections to cryonics d...
I wasn't there at the time, but if EY's description is roughly accurate, I suspect the ordinary-seeming woman understood him in the opening anecdote. The specific chain I'm looking at is:
EY: Magic does not exist.
OSW: Science doesn't understand everything?
EY: Ignorance is in mind, not reality.
OSW: Magic is impossible!
I see no way that OSW could deduce this fact about magic unless she compared magic - stuff which you are necessarily ignorant of - to the correct interpretation of EY's point.
That click is cultural. It seems magical because you've acclimated yourself to not encountering shared values with people very often, and so this cryonics gathering was a feast of connections.
I think "clicky" people are people who are not emotionally vested in their beliefs.
Many people need their beliefs to be true in order to feel like they are valuable and worthwhile people.
Clicky people simply don't need that (or at least need that to a lesser extent). Instead, clicky people need to be right whether or not that means they were initially wrong.
Was there some particular bright line at which cryonics flipped from "impossible given current technology" to "failure to have universal cryonics is a sign of an insane society"? That is a sign change, not just a change in magnitude.
If we go back 50 or 100 years, we should be at a point where then-present preservation techniques were clearly inadequate. Maybe vitrification was the bright line, I do not pretend that preserving brains is a specialty of mine. I just empathize with those who still doubt that the technology is good enough...
I think that this post should be linked prominently here for those who haven't been around on LW/OB for long and who might not follow all the back-links:
Hah, "Magic Click" --I see that all the time, people who don't know cryonics is real-or have not met anyone actually signed up. Left and right, every day kids and adults think it is a "cool" idea, they express interest--but they don't go through the steps to become a signed cryonicist. I'm not sure what causes one person to go through all the paperwork and another just thinks they might want to do that some day--from what I've seen, people who sign up for cryonics have had a brush with death and seem more motivated--it could come down t...
Does anyone know if Blink: The Power of Thinking Without Thinking is a good book?
http://www.amazon.com/Blink-Power-Thinking-Without/dp/0316172324
Amazon.com Review
Blink is about the first two seconds of looking--the decisive glance that knows in an instant. Gladwell, the best-selling author of The Tipping Point, campaigns for snap judgments and mind reading with a gift for translating research into splendid storytelling. Building his case with scenes from a marriage, heart attack triage, speed dating, choking on the golf course, selling cars, and military m...
Interesting. I remember my brother saying, "I want to be frozen when I die, so I can be brought back to life in the future," when he was child (somewhere between ages 9-14, I would guess). Probably got the idea from a cartoon show. I think the idea lost favor with him when he realized how difficult a proposition reanimating a corpse really was (he never thought about the information capture aspect of it.)
Isn't it more sane to donate money to organizations fighting against existential risks rather than spending money on cryonics?
Surely you should be asking about the marginal utility of money spent on eating out before you ask about money spent on cryonics. What is this strange mental accounting where money spent on cryonics is immediately available to be redirected to existential risks, but money spent on burritos or French restaurants or an extra 100sqft in an apartment is not?
I have a theory about this, actually. How it works is: people get paid at the beginning of the month, and then pay their essential bills, food, rent, electricity, insurance, Internet, etc. What happens next is, people have a certain standard of living that they think they're supposed to have, based somewhat on how much money they make, but much more on what all their friends are spending money on. They then go out and buy stuff like fancy dinners and a house in the suburbs and what not, and this spending is not mentally available as something that can be cut back on, because they don't see it as "spending", so much as "things I need to do to maintain my standard of living"; people see it as a much larger burden to write a single check for $2,000 than to spend $7 every day on coffee, because they come out of different mental pools. Anything left over after that gets put into cryonics, or existential risk, or savings, or investments, etc. That's why you see so many more millionaire plumbers than millionaire attorneys, because the attorney has a higher standard of living, and so has less money left over to save.
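For what it's worth, the two mental pools in that example compare like this over a year:

```
# Annualizing the figures above: the daily coffee habit costs more
# per year than the single $2,000 check, yet feels like the smaller
# burden because it never appears as one number.
coffee_per_year = 7 * 365   # $2,555
print(coffee_per_year)
```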
Anecdotally, when I was a child I was a super-clicker. (I often wonder what child-Ialdabaoth would have accomplished, if not for sub-100-IQ parents, a fearfully fundamentalist Christian upbringing, and a clichéd-bad experience with the public school system.)
As an adult, I find that it is much, much harder for me to just "click" on things - and it is invariably due to a panic reaction when presented with information that might cause me to lose status among an imagined group of violent authoritarians.
It would be interesting to see how many more "...
In some piece of fiction (I think it was Orion's Arm, but the closest I can find is http://www.orionsarm.com/eg-topic/45b3daabb2329 and the reference to "the Herimann-Glauer-Yudkowski relation of inclusive retrospective obviousness") I saw the idea that one could order qualitatively-smarter things on the basis of what you're calling "clicks". Specifically, that if humans are level 1, then the next level above that is the level where if you handed the being the data on which our science is built, all the results would click immediately/...
It seems odd to refer to a "magical click" leading to understanding the incoherence of the idea of magic.
Say you were playing a real-time strategy game, with limited forces to allocate among possible targets to attack and potential vulnerabilities to defend. You can see all sorts of information about a given target, but the really important stuff either requires resources to discover or is hidden outright. The game's bootlegged and you can't find a copy of the manual, so even for the visible numbers you don't know exactly what they mean.
Poking ar...
that if you wake up in the Future, it's probably going to be a nicer place to live than the Present.
Careful, man. If you bang too hard on this drum, people are going to start thinking "hey, why slog through the boring pre-FAI era? I'll just sign up for cryo, head over to the preservation facility, and down a bottle of Nembutal. Before long I'll be relaxing on a beach with super-intelligent robot sex dolls bringing me martinis!"
It is pretty obvious that Alcor must avoid even the slightest suspicion of helping (or even encouraging) you to commit suicide.
This would be an extremely slippery slope that they really have to avoid in order to prevent exposing themselves to a lot of unjustified attacks.
With an unforgivable naivete, a childish stupidity, we all still think history is leading us towards good, that some happiness awaits us ahead; but it will bring us to blood and fire, such blood and such fire as we haven't seen yet.
-- Nikolai Strakhov, 1870s
I wonder if it ties in to some kind of confidence in your understanding.* If you don't trust your ability to understand a simple argument, you're really quite likely to overrate the strength of your heuristics relative to your reason.
...which sounds a lot like why I'm suspicious of cryonics, on introspection. I really need to run the numbers and see if I can afford it.
* Oh noes! Have I become one of those people with one idea they go back to for everything?
...It would have been convenient if I'd discovered some particular key insight that convinced people. If people had said, "Oh, well, I used to think that cryonics couldn't be plausible if no one else was doing it, but then I read about Asch's conformity experiment and pluralistic ignorance." Then I could just emphasize that argument, and people would sign up.
But the average experience I heard was more like, "Oh, I saw a movie that involved cryonics, and I went on Google to see if there was anything like that in real life, and found Alcor."
when they told me that I didn't have to understand the prayers in order for them to work so long as I said them in Hebrew
I am not defending OJ in general, but your objection, while philosophically valid, was misplaced. It's ok, you were only 5 :)
Talmudic law is, let's say, a "ritual law" where performance of certain acts is "fulfilled" in a pure legalistic sense. There is a long-standing argument over whether mitzvot (commandments) require intentional performance to be fulfilled. E.g. if you "hear the shofar" by accident, you do ...
Sorry, this may be a stupid question, but why is it good for people to get cryonically frozen? Obviously if they don't they won't make it to the future - but other people will be born or duplicated in the future and the total number of people will be the same.
Why care more about people who live now than future potential people?
Once a person exists, they can form preferences, including a preference not to die. These preferences have real weight. These preferences can also adjust, although not eliminate, the pain of death. If I were to die with a cryonics team standing over me ready to give me a chance at waking up again, I would be more emotionally comfortable than if I were to die on the expectation of ceasing to exist. Someone who does not exist yet does not have such preferences.
People do not all die at the same time. Although an impermanent death is, like a permanent one, also a loss to the living (of time together), it's not the same magnitude of loss. Beyond a certain point, it doesn't matter very much to most people to be able to create new people (not that they wouldn't resent being disallowed).
It's not clear that anyone's birth will really be directly prevented by cryonics. (I mean, except in the sense that all events have some causal impact that influences, among other things, who jumps whose bones when, and therefore who has which children.) A society that would revive cryonics patients probably isn't one that has a population problem such that the cryonics patients make a difference.
There's the consequentialist/utilitarian click, and the intelligence explosion click, and the life-is-good/death-is-bad click, and the cryonics click.
I can find a number of blog posts from you clearly laying out the arguments in favor of each of those clicks except the consequentialism/utilitarianism one.
What do you mean by "consequentialism" and "utilitarianism" and why do you think they are not just right but obviously right?
Oooh, but people can be wrong in so many ways. It's not a single extra crazy circuit. We've got redundancies: in most people, perhaps the 'main' circuits are never quite laid down right, but the redundant parts take over. This is so common that people don't agree on what the main circuits are; in Japan, dyslexia is more common than, err, what is neurotypical in the USA.
Some people over think it, some under think it. Under think it, and you think, "Bah, Walt Disney is wacky to freeze his head!" and never get past that. Overthink it and you may never actually sign up because you leech out all the emotional impetus (this thought process is more adaptive for getting rid of bad memories).
Cryonics does not prevent you from dying. Humans are afraid of dying. Cryonics does not address the problem (fear of death). It instead offers a possible second life.
I'm afraid of dying, because I know that when I am dying I will be very afraid. So I'm afraid of being afraid. Cryonics would offer very little to me right now in terms of alleviating this fear. Sure it might work; but I won't know that it will while I'm dying, and so my fear while dying will not be mitigated.
You might say hey wait jhuff- isn't actually being dead, and not have a chance at a ...
Fear of not existing after death is not just some silly uncomfortable emotion to be calmed. Rather it reveals a disconnect between one's preference and expectations about the actual state of reality.
The real problem is not existing after death. Fear is a way of representing that.
if you wake up in the Future, it's probably going to be a nicer place to live than the Present.
How do we know this? How can we possibly think it's possible to know this? I can think of at least three scenarios that seem much more likely than this sunny view that things will just keep progressing while you're dead and when you wake up you'll slip right into a nicer society:
1) We run out of cheap energy and hence cheap food; tensions rise; most of the world turns into what Haiti looks like now.
2) Somebody sets off a nuclear weapon, leading to worldwide r...
a world where people can be and are being revived is almost certainly one I want to live in.
Exactly, and that's really well stated. By being cryo-preserved, you're self-selecting for worlds where there is a high likelihood that
(a) contracts are honored (your resources are being used to revive you, as you intended),
(b) human lives, even very different humans from long ago, are respected (otherwise why go to all the trouble of thawing you out),
(c) there is advanced neurotechnology sufficient to bring people back, and humanity is still around and has learned to live with it,
and (d) society is rationalist enough not to prohibit cryonics out of fear of zombies or something.
It's not perfect, but it's a good filter.
Is that really just it? Is there no special sanity to add, but only ordinary madness to take away? Where do superclickers come from - are they just born lacking a whole lot of distractions?
What the hell is in that click?
Noesis.
I model clicks as moments where my subconscious mind notices a shortcut, and forces my conscious mind to take it.
Depending on a variety of factors (how tired I am, or whether I am in danger, for example), my subconscious mind may be able to force more or fewer shortcuts onto my conscious.
The "clicks" are not always ones that, after careful analysis, I follow up on. They have, however, come in very useful in high pressure/low time availability situations.
I suspect you need to travel some (most?) of the inferential distance to becoming a rationalist (one way or another) before you can start clicking on ideas and concepts you're hearing for the first time.
Maybe you could devise a click-test and give it to different groups to see what kinds of people click more often?
Whether something seems "reasonable" or "implausible" can depend on how one's brain happens to be wired, perhaps due to a stroke, mental illness, or even just genetics. As your former blog post about anosognosia (http://lesswrong.com/lw/12s/the_strangest_thing_an_ai_could_tell_you/) shows, the human brain can come to some silly conclusions with the input it's given. How do you know if what "clicks" for you matches reality or is due to a faulty circuit?
Personally I require evidence and I'll sign up for cryonics when the first dead mouse is brought back to life after dying and being frozen. People b...
Followup to: Normal Cryonics
Yesterday I spoke of that cryonics gathering I recently attended, where travel by young cryonicists was fully subsidized, leading to extremely different demographics from conventions of self-funded activists. 34% female, half of those in couples, many couples with kids - THAT HAD BEEN SIGNED UP FOR CRYONICS FROM BIRTH LIKE A GODDAMNED SANE CIVILIZATION WOULD REQUIRE - 25% computer industry, 25% scientists, 15% entertainment industry at a rough estimate, and in most ways seeming (for smart people) pretty damned normal.
Except for one thing.
During one conversation, I said something about there being no magic in our universe.
And an ordinary-seeming woman responded, "But there are still lots of things science doesn't understand, right?"
Sigh. We all know how this conversation is going to go, right?
So I wearily replied with my usual, "If I'm ignorant about a phenomenon, that is a fact about my state of mind, not a fact about the phenomenon itself; a blank map does not correspond to a blank territory -"
"Oh," she interrupted excitedly, "so the concept of 'magic' isn't even consistent, then!"
Click.
She got it, just like that.
This was someone else's description of how she got involved in cryonics, as best I can remember it, and it was pretty much typical for the younger generation:
"When I was a very young girl, I was watching TV, and I saw something about cryonics, and it made sense to me - I didn't want to die - so I asked my mother about it. She was very dismissive, but tried to explain what I'd seen; and we talked about some of the other things that can happen to you after you die, like burial or cremation, and it seemed to me like cryonics was better than that. So my mother laughed and said that if I still felt that way when I was older, she wouldn't object. Later, when I was older and signing up for cryonics, she objected."
Click.
It's... kinda frustrating, actually.
There are manifold bad objections to cryonics that can be raised and countered, but the core logic really is simple enough that there's nothing implausible about getting it when you're eight years old (eleven years old, in my case).
Freezing damage? I could go on about modern cryoprotectants and how you can see under a microscope that the tissue is in great shape, and there are experiments underway to see if they can get spontaneous brain activity after vitrifying and devitrifying, and with molecular nanotechnology you could go through the whole vitrified brain atom by atom and do the same sort of information-theoretical tricks that people do to recover hard drive information after "erasure" by any means less extreme than a blowtorch...
But even an eight-year-old can visualize that freezing a sandwich doesn't destroy the sandwich, while cremation does. It so happens that this naive answer remains true after learning the exact details and defeating objections (a few of which are even worth considering), but that doesn't make it any less obvious to an eight-year-old. (I actually did understand the concept of molecular nanotech at eleven, but I could be a special case.)
Similarly: yes, really, life is better than death - just because transhumanists have huge arguments with bioconservatives over this issue, doesn't mean the eight-year-old isn't making the right judgment for the right reasons.
Or: even an eight-year-old who's read a couple of science-fiction stories and who's ever cracked a history book can guess - not for the full reasons in full detail, but still for good reasons - that if you wake up in the Future, it's probably going to be a nicer place to live than the Present.
In short - though it is the sort of thing you ought to review as a teenager and again as an adult - from a rationalist standpoint, there is nothing alarming about clicking on cryonics at age eight... any more than I should worry about my first schism with Orthodox Judaism coming at age five, when they told me that I didn't have to understand the prayers in order for them to work so long as I said them in Hebrew. It really is obvious enough to see as a child, the right thought for the right reasons, no matter how much adult debate surrounds it.
And the frustrating thing was that - judging by this group - most cryonicists are people to whom it was just obvious. (And who then actually followed through and signed up, which is probably a factor-of-ten or worse filter for Conscientiousness.) It would have been convenient if I'd discovered some particular key insight that convinced people. If people had said, "Oh, well, I used to think that cryonics couldn't be plausible if no one else was doing it, but then I read about Asch's conformity experiment and pluralistic ignorance." Then I could just emphasize that argument, and people would sign up.
But the average experience I heard was more like, "Oh, I saw a movie that involved cryonics, and I went on Google to see if there was anything like that in real life, and found Alcor."
In one sense this shouldn't surprise a Bayesian, because the base rate of people who hear a brief mention of cryonics on the radio and have an opportunity to click, will be vastly higher than the base rate of people who are exposed to detailed arguments about cryonics...
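To make that base-rate point concrete, here is a toy calculation; every number in it is invented purely for illustration:

```
# Casual mentions can dominate recruitment even with a far lower
# per-exposure conversion rate, because the exposure counts differ
# by orders of magnitude.
casual_exposures = 10_000_000     # hear a brief mention on the radio
detailed_exposures = 10_000       # read the detailed arguments
p_click = 1e-5                    # click from a casual mention
p_convinced = 1e-3                # persuaded by the detailed case

print(casual_exposures * p_click)        # 100 sign-ups
print(detailed_exposures * p_convinced)  # 10 sign-ups
```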
Yet the upshot is that - judging from the generation of young cryonicists at that event I attended - cryonics is sustained primarily by the ability of a tiny, tiny fraction of the population to "get it" just from hearing a casual mention on the radio. Whatever part of one-in-a-hundred-thousand isn't accounted for by the Conscientiousness filter.
If I suffered from the sin of underconfidence, I would feel a dull sense of obligation to doubt myself after reaching this conclusion, just like I would feel a dull sense of obligation to doubt that I could be more rational about theology than my parents and teachers at the age of five. As it is, I have no problem with shrugging and saying "People are crazy, the world is mad."
But it really, really raises the question of what the hell is in that click.
There's this magical click that some people get and some people don't, and I don't understand what's in the click. There's the consequentialist/utilitarian click, and the intelligence explosion click, and the life-is-good/death-is-bad click, and the cryonics click. I myself failed to click on one notable occasion, but the topic was probably just as clickable.
(In fact, it took that particular embarrassing failure in my own history - failing to click on metaethics, and seeing in retrospect that the answer was clickable - before I was willing to trust non-click Singularitarians.)
A rationalist faced with an apparently obvious answer, must assign some probability that a non-obvious objection will appear and defeat it. I do know how to explain the above conclusions at great length, and defeat objections, and I would not be nearly as confident (I hope!) if I had just clicked five seconds ago. But sometimes the final answer is the same as the initial guess; if you know the full mathematical story of Peano Arithmetic, 2 + 2 still equals 4 and not 5 or 17 or the color green. And some people very quickly arrive at that same final answer as their best initial guess; they can swiftly guess which answer will end up being the final answer, for what seem even in retrospect like good reasons. Like becoming an atheist at eleven, then listening to a theist's best arguments later in life, and concluding that your initial guess was right for the right reasons.
We can define a "click" as following a very short chain of reasoning, which in the vast majority of other minds is derailed by some detour and proves strongly resistant to re-railing.
What makes it happen? What goes into that click?
It's a question of life-or-death importance, and I don't know the answer.
That generation of cryonicists seemed so normal apart from that...
What's in that click?
The point of the opening anecdote about the Mind Projection Fallacy (blank map != blank territory) is to show (anecdotal) evidence that there's something like a general click-factor, that someone who clicked on cryonics was able to click on mysteriousness=projectivism as well. Of course I didn't expect that I could just stand up amid the conference and describe the intelligence explosion and Friendly AI in a couple of sentences and have everyone get it. That high of a general click factor is extremely rare in my experience, and the people who have it are not otherwise normal. (Michael Vassar is one example of a "superclicker".) But it is still true AFAICT that people who click on one problem are more likely than average to click on another.
My best guess is that clickiness has something to do with failure to compartmentalize - missing, or failing to use, the mental gear that lets human beings believe two contradictory things at the same time. Clicky people would tend to be people who take all of their beliefs at face value.
The Hansonian explanation (not necessarily endorsed by Robin Hanson) would say something about clicky people tending to operate in Near mode. (Why?)
The naively straightforward view would be that the ordinary-seeming people who came to the cryonics gathering did not have any extra gear that magically enabled them to follow a short chain of obvious inferences, but rather, everyone else had at least one extra insanity gear active at the time they heard about cryonics.
Is that really just it? Is there no special sanity to add, but only ordinary madness to take away? Where do superclickers come from - are they just born lacking a whole lot of distractions?
What the hell is in that click?