Tetronian comments on The Irrationality Game - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Oh wow, that was poorly phrased. What I meant was really closer to "cryonics will not maximize expected utility." I will rephrase that.
(I really need more sleep...)
But that could just be a preference... perhaps specify for whom?
I'd interpret the who to mean 'Less Wrong commenters', since that's the reference class we're generally working with here.
That was the reference class I was referring to, but it really doesn't matter much in this case--after all, who wouldn't want to live through a positive Singularity?
True, but a positive Singularity doesn't necessarily raise the cryonic dead. I'd bet against it, for one. (Figuring out whether I agree or disagree with you is making me think pretty hard right now. At least my post is working for me! I probably agree, though almost assuredly for different reasons than yours.)
My reasons for disagreement are as follows:
1. I am not sure that current cryonics technology is sufficient to prevent information-theoretic death.
2. I am skeptical of the idea of a "hard takeoff" for a seed AI.
3. I am pessimistic about existential risk.
4. I do not believe that a good enough seed AI will be produced for at least a few more decades.
5. I do not believe in any version of the Singularity except Eliezer's (i.e. Moore's Law will not swoop in to save the day).
6. Even an FAI might not wake the "cryonic dead" (I like that term, I think I'll steal it, haha).
7. Cryonically preserved bodies may be destroyed before we have the ability to revive them.
...and a few more minor reasons I can't remember at the moment.
I'm curious, what are yours?
My thoughts have changed somewhat since writing this post, but that's the general idea. It would be personally irrational for me to sign up for cryonics at the moment. I'm not sure if this extends to most LW people; I'd have to think about it more.
But even your list of low probabilities might be totally outweighed by the Pascalian counterargument: FAI is a lot of utility if it works. Why don't you think so?
By the way, I think it's really cool to see another RWer here! LW's a different kind of fun than RW, but it's a neat place.
I remember that post--it got me to think about cryonics a lot more. I agree with most of your arguments, particularly bullet point #3.
I do struggle with Pascal's Mugging--it seems to me, intuitively, that Pascal's Mugging can't be true (that is, in the original scenario, Pascal should not give up his money), but I can't find a reason for this to be so. It seems like his probability that the mugger will give him a return on his investment should scale with the amount of money the mugger offers him, but I don't see a reason why this is always the case. So, while I can't defuse Pascal's Mugging, I am skeptical about its conclusion.
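To make that intuition concrete, here is a rough sketch in Python with made-up numbers: if the probability of a payout stays fixed while the offer grows, the expected value of paying grows without bound, whereas a probability that shrinks in proportion to the offer keeps it bounded.

```python
# Toy expected-value comparison for Pascal's Mugging.
# All numbers here are illustrative assumptions, not claims about real probabilities.

def expected_gain(prob_payout, offer, cost=10.0):
    """Expected gain from paying the mugger: prob_payout * offer - cost."""
    return prob_payout * offer - cost

offers = [10**3, 10**6, 10**9, 10**12]

# Case 1: a tiny probability that ignores how large the offer is.
fixed_prob = 1e-6
for n in offers:
    print(f"fixed prob:   offer={n:.0e}  expected gain={expected_gain(fixed_prob, n):.3g}")
# The expected gain grows without bound, so a large enough offer always "wins".

# Case 2: a probability that shrinks in proportion to the offer (1/offer).
for n in offers:
    print(f"scaling prob: offer={n:.0e}  expected gain={expected_gain(1.0 / n, n):.3g}")
# Now the expected gain is bounded (1 - cost here), so bigger offers stop being
# more compelling.
```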
I had no idea you were on RW! Can you send me a message sometime? LW is indeed a very different kind of fun, and I enjoy both.
There is a reason to expect that it will scale in general.
To see why, first note that the most watertight formulation of the problem uses lives as its currency (this avoids issues like utility failing to scale linearly with money in the limit of large quantities). So, suppose the mugger offers to save N lives or create N people who will have happy lives (or threatens to kill N people on failure to hand over the wallet, if the target is a shortsighted utilitarian who doesn't have a policy of no deals with terrorists), for some suitably large N that on the face of it seems to outweigh the small probability. So we are postulating the existence of N people who will be affected by this transaction, of whom I, the target of the mugging, am one.
Suppose N = e.g. a trillion. Intuitively, how plausible is it that I just happen to be the one guy who gets to make a decision that will affect a trillion lives? More formally, we can say that, given the absence of any prior reason I should be in such an unusual position, the prior probability of this is 1 in N, which does scale with N to match the increase in claimed utility.
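Written out as a quick sketch (with u standing for the utility of one life, a symbol introduced just for this note):

```latex
% A sketch of the scaling argument above.
% u = utility of one (happy) life, N = number of lives the mugger names.
\[
  \Pr(\text{the offer is genuine}) \le \frac{1}{N},
  \qquad
  \mathrm{E}[\text{utility of complying}] \le \frac{1}{N} \cdot N u = u .
\]
```

The bound no longer depends on N, so quoting a bigger number stops giving the mugger any extra leverage.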
Granted, the original formulation did offer the extension of a single life instead of the creation of separate lives. I consider it reasonable to regard this as a technical detail; is there any fundamental reason why a thousand centuries of extra life can't be regarded as equivalent to a thousand century-long lives chained together in sequence?
BTW, what does RW refer to?
I'm not sure we can write this off as a technical detail, because we are formulating our prior based on it. What if the mugger offers an amount of money that is equivalent, in terms of utility, to creating N happy lives (assuming he knows your utility function)? If your reasoning is correct, the prior probability for that offer would have to be the same as your prior for the mugger creating N happy lives; but since totally different mechanisms would be involved, this may not be true. That seems like a problem to me, because we want to be able to defuse Pascal's Mugging in the general case.
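To give a feel for how different the money version is, here is a toy computation that assumes, purely for illustration, a logarithmic utility of money and an arbitrary utility of 50 per life (neither figure comes from this thread):

```python
import math

# Toy illustration of the money formulation discussed above.
# Assumptions (not from the thread): utility of money is logarithmic,
# and one happy life is worth u_life = 50 units of utility.
u_life = 50.0

def dollars_matching(n_lives):
    """Dollar amount M with log(1 + M) = n_lives * u_life, i.e. the money
    offer that would be 'equivalent in utility' to n_lives happy lives."""
    return math.exp(n_lives * u_life) - 1.0

for n in (1, 2, 10):
    print(f"{n} lives  ->  ~{dollars_matching(n):.3g} dollars")
# Even a handful of lives already corresponds to an astronomically large sum,
# so whatever prior we assign to "the mugger actually has that much money"
# involves very different considerations than the 1/N argument about lives.
```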
RW = RationalWiki
Ah, Pascal's Mugging is easy, decision-theoretically speaking: cultivate the disposition of not negotiating with terrorists. That way they have no incentive to try to terrorize you -- you won't give them what they want no matter what -- and you don't incentivize even more terrorists to show up and demand even bigger sums.
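As a toy payoff comparison (not a formalization, and the numbers are arbitrary), here is why the disposition removes the incentive:

```python
# Toy comparison of the mugger's incentives under two target policies.
# Numbers are arbitrary assumptions for illustration only.

MUGGING_EFFORT = 1.0   # what it costs the mugger to make the attempt
WALLET = 100.0         # what the mugger gains if the target pays

def mugger_payoff(target_pays: bool) -> float:
    """Mugger's net gain from attempting the mugging against this target."""
    return (WALLET if target_pays else 0.0) - MUGGING_EFFORT

print("Target's policy: always refuse ->", mugger_payoff(target_pays=False))  # -1.0
print("Target's policy: pay up        ->", mugger_payoff(target_pays=True))   # 99.0

# Against a known 'always refuse' policy the attempt is a pure loss, so a
# mugger who can predict the policy has no reason to try; against a 'pay up'
# policy the attempt is profitable, which invites more muggers.
```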
But other kinds of Pascalian reasoning are valid, as in the case of cryonics. I don't give Pascal's mugger any money, but I do acknowledge that for cryonics you need to actually do the calculation: no decision-theoretic disposition is there to invalidate the argument.
I'm almost never there anymore... I know this is a dick thing to say, but it's not a great intellectual environment for really learning, and I can get better entertainment elsewhere (like Reddit) if I want to. It was a cool place though; Trent actually introduced me to Bayes with his essay on it, and I learned some traditional rationality there. But where RW was a cool community of fun, like-minded people, I now have a lot of intellectual and awesome friends IRL at the Singularity Institute, so it's been effectively replaced.
I understand this idea--in fact, I just learned it today reading the comments section of this post. I would like to see it formalized in UDT so I can better grasp it, but I think I understand how it works verbally.
This is what I was afraid of: we can't do anything about Pascal's Mugging with respect to purely epistemic questions. (I'm still not entirely sure why, though--what prevents us from treating cryonics just like we would treat the mugger?)
Ha, Trent's essay was what introduced me to Bayes as well! And unless I remember incorrectly, RW introduced me to LW because someone linked to it somewhere on a talk page. I know what you mean, though--LW and RW have very different methods of evaluating ideas, and I'm suspicious of the heuristics RW uses sometimes. (I am sometimes suspicious here too, but I realize I am way out of my depth, so I'm not quick to judge.) RW tends to use labels a bit too much--if an idea sounds like pseudoscience, then they automatically believe it is. Or, if they can find a "reliable" source claiming that someone is a fraud, then they assume he/she is.