Reply to: A Tale Of Two Tradeoffs
I'm not comfortable with compliments of the direct, personal sort, the "Oh, you're such a nice person!" type stuff that nice people are able to say with a straight face. Even if it would make people like me more - even if it's socially expected - I have trouble bringing myself to do it. So, when I say that I read Robin Hanson's "Tale of Two Tradeoffs", and then realized I would spend the rest of my mortal existence typing thought processes as "Near" or "Far", I hope this statement is received as a due substitute for any gushing compliments that a normal person would give at this point.
Among other things, this clears up a major puzzle that's been lingering in the back of my mind for a while now. Growing up as a rationalist, I was always telling myself to "Visualize!" or "Reason by simulation, not by analogy!" or "Use causal models, not similarity groups!" And those who ignored this principle seemed easy prey to blind enthusiasms, wherein one says that A is good because it is like B, which is also good - and so on.
But later, I learned about the Outside View versus the Inside View, and that people asking "What rough class does this project fit into, and how long did projects like this take last time?" were much more accurate and much less optimistic than people who tried to visualize the when, where, and how of their projects. And this didn't seem to fit very well with my injunction to "Visualize!"
So now I think I understand what this principle was actually doing - it was keeping me in Near-side mode and away from Far-side thinking. And it's not that Near-side mode works so well in any absolute sense, but that Far-side mode is so much more easily pushed around by ideology and wishful thinking, and so casual in accepting its conclusions (devoting less computing power before halting).
An example of this might be the balance between offensive and defensive nanotechnology, where I started out by - basically - just liking nanotechnology, until I got involved in a discussion about the particulars of nanowarfare and noticed that people were postulating crazy things to make defense win. Which made me realize and say, "Look, the balance between offense and defense has been tilted toward offense ever since the invention of nuclear weapons, and military nanotech could use nuclear weapons, and I don't see how you're going to build a molecular barricade against that."
Are the particulars of that discussion likely to be, well, correct? Maybe not. But so long as I wasn't thinking of any particulars, my brain had free rein to just... import whatever affective valence the word "nanotechnology" had, and use that as a snap judgment of everything.
You can still be biased about particulars, of course. You can insist that nanotech couldn't possibly be radiation-hardened enough to manipulate U-235, which someone tried as a response (FYI: this is extremely silly). But in my case, at least, something about thinking in particulars...
...just snapped me out of the trance, somehow.
When you're thinking using very abstract categories - rough classes low on computing power - about things distant from you, then you're also - if Robin's hypothesis is correct - more subject to ideological bias. Together this implies you can cherry-pick those very loose categories to put X together with whatever "similar" Y is ideologically convenient, as in the old saw that "atheism is a religion" (and not playing tennis is a sport).
But the most frustrating part of all is the casualness of it - the way that ideologically convenient Far thinking is just thrown together out of whatever ingredients come to hand. The ten-second dismissal of cryonics, without any attempt to visualize how much information is preserved by vitrification and could be retrieved by a molecular-level scan. Cryonics just gets casually, perceptually classified as "not scientifically verified" and tossed out the window. Or "what if you wake up in Dystopia?" and tossed out the window. Far thinking is casual - that's the most frustrating aspect of trying to argue with it.
This seems like an argument for writing fiction with lots of concrete details if you want people to take a subject seriously and think about it in a less biased way. That's not a conclusion I would have reached from my previous views.
Maybe cryonics advocates really should focus on writing fiction stories that turn on the gory details of cryonics, or viscerally depict the regret of someone who didn't persuade their mother to sign up. (Or offering prizes to professionals who do the same; writing fiction is hard, writing SF is harder.)
But I'm worried that, for whatever reason, reading concrete fiction is a special case that doesn't work to get people to do Near-side thinking.
Or maybe only some people are prompted into Near-side thinking by fiction, and only they can actually be helped by reading science fiction.
Maybe there are people who encounter big concrete detailed fictions and process them in a Near way - the sort of people who notice plot holes. And others who just "take it all in stride", casually, so that however much concrete fictional "information" they encounter, they only process it using casual "Far" thinking. I wonder whether this difference has more to do with upbringing or with genetics. Either way, it may lie at the core of the partial yet statistically outstanding correlation between careful futurists and science fiction fans.
I expect I shall be thinking about this for a while.
I should probably blog about it, but here's my opinion about cryonics:
What are the chances that signing up for cryonics will work? I estimate they're really, really tiny - a 1%-or-less kind of chance. Even if cryonics works someday, I might die in the wrong way, like in a car accident or from a cancer metastasis that makes me lose too much information; or be frozen in the wrong way; or not stay frozen long enough, due to hardware failure, an economic crash, or whatever; or the future might decide not to unfreeze me, or to modify me too much upon unfreezing; and so on. If anything goes wrong, it's a failure - and things almost always go wrong on the first try of any new technology.
What's the benefit if it works? It could be very high - infinite youth in a utopian society - but I guess it's most likely to be moderate to high, like a few extra decades of life for someone vaguely like me.
What's the cost? I did a quick check and it seemed very high.
Even the most naively calculated expected utility doesn't match the price; with reasonable levels of time discounting and risk aversion, it's a horrible proposition. It becomes too much of a Pascal's Wager if you think a small chance of a very high win makes the cost and risk irrelevant.
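To make that "naive expected utility" arithmetic concrete, here's a minimal sketch in Python. Every number in it - the success probability, the payoff, the dollar value of a year, the cost, the discount rate, the waiting time - is a hypothetical placeholder I picked for illustration, not a figure from this discussion:

```python
# A naive expected-value sketch of the cryonics bet.
# All numbers are hypothetical placeholders, not actual quotes or estimates.

p_success = 0.01           # "1% or less kind of chance"
benefit_years = 30         # "a few extra decades of life"
value_per_year = 50_000    # assumed dollar value of one healthy year
cost = 100_000             # assumed all-in price of signing up
discount_rate = 0.03       # annual time discounting
years_until_payoff = 100   # assumed wait before revival

# Discount the payoff back to present value, then weight by the odds of success.
discount = 1 / (1 + discount_rate) ** years_until_payoff
expected_value = p_success * benefit_years * value_per_year * discount

print(f"Discounted expected value: ${expected_value:,.0f}")
print(f"Cost:                      ${cost:,.0f}")
print("Positive bet?", expected_value > cost)
```

Under these made-up numbers, the discounted expected value comes out to a few hundred dollars against a six-figure cost, and adding risk aversion would only widen the gap. Different inputs could flip the conclusion, which is exactly where the disagreement lies.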
SENS sounds like a much more likely way to achieve very long healthy lifespans. Cryonics depends on the success of SENS anyway; it's just a bet that SENS will most likely arrive too late, weighed against the chance of cryonics failing.
There are alternative ways to increase your healthy lifespan with high expected return, low risk, and low cost - not smoking and avoiding obesity are the most obvious ones in modern Western societies. Unless you've done all of these, taking a high-cost, high-risk chance like cryonics seems not much different from going to church every Sunday hoping the afterlife really exists.
I wonder what makes you and Robin like cryonics so much. You most likely have a much higher estimate of its chances. You might also have a higher estimate of its utility if it works. Or you might have a lower estimate of its price - perhaps you have too much money and no idea what to do with it ;-)
The chances are tiny, but a tiny chance is preferable to no chance at all.
The benefit if it works is that you wake up as yourself, immortal in eutopia. Anything less I count as failure.