Will_Newsome comments on The Irrationality Game - Less Wrong

38 Post author: Will_Newsome 03 October 2010 02:43AM


Comments (910)


Comment author: Will_Newsome 03 October 2010 03:48:24AM *  1 point

I take it you mean cryonics won't lead to successful revival? Interestingly, I think many cryonicists would be more confident of that than you are, but hold that the low probability of a very high payoff justifies the expense. A figure of 65% is thus somewhat odd here; I'd expect most people to be at around 90-99.9%. Disagreed, because I am significantly more doubtful of the general chance of cryonic revival: 95%.

Comment author: [deleted] 03 October 2010 03:50:48AM 0 points

Oh wow, that was poorly phrased. What I meant was really closer to "cryonics will not maximize expected utility." I will rephrase that.

(I really need more sleep...)

Comment author: Sniffnoy 03 October 2010 03:53:28AM 0 points

But that could just be a preference... perhaps add a statement of for whom?

Comment author: Will_Newsome 03 October 2010 03:54:49AM 1 point

I'd interpret the 'whom' to mean 'Less Wrong commenters', since that's the reference class we're generally working with here.

Comment author: [deleted] 03 October 2010 03:57:17AM 1 point

That was the reference class I was referring to, but it really doesn't matter much in this case--after all, who wouldn't want to live through a positive Singularity?

Comment author: Will_Newsome 03 October 2010 03:59:20AM 1 point

True, but a positive Singularity doesn't necessarily raise the cryonic dead. I'd bet against it, for one. (Figuring out whether I agree or disagree with you is making me think pretty hard right now. At least my post is working for me! I probably agree, though almost assuredly for different reasons than yours.)

Comment author: [deleted] 03 October 2010 04:05:12AM *  3 points

My reasons for disagreement are as follows:

1. I am not sure that current cryonics technology is sufficient to prevent information-theoretic death.
2. I am skeptical of the idea of a "hard takeoff" for a seed AI.
3. I am pessimistic about existential risk.
4. I do not believe that a good enough seed AI will be produced for at least a few more decades.
5. I do not believe in any version of the Singularity except Eliezer's (i.e., Moore's Law will not swoop in to save the day).
6. Even an FAI might not wake the "cryonic dead" (I like that term, I think I'll steal it, haha).
7. Cryonically preserved bodies may be destroyed before we have the ability to revive them.

...and a few more minor reasons I can't remember at the moment.

I'm curious, what are yours?

Comment author: Will_Newsome 03 October 2010 04:12:20AM *  0 points

My thoughts have changed somewhat since writing this post, but that's the general idea. It would be personally irrational for me to sign up for cryonics at the moment. I'm not sure if this extends to most LW people; I'd have to think about it more.

But even your list of low probabilities might be totally outweighed by the Pascalian counterargument: a working FAI represents an enormous amount of utility. Why don't you think so?

By the way, I think it's really cool to see another RWer here! LW's a different kind of fun than RW, but it's a neat place.

Comment author: [deleted] 03 October 2010 04:20:04AM *  0 points

I remember that post--it got me to think about cryonics a lot more. I agree with most of your arguments, particularly bullet point #3.

I do struggle with Pascal's Mugging--it seems to me, intuitively, that Pascal's Mugging can't be true (that is, in the original scenario, Pascal should not give up his money), but I can't find a reason for this to be so. It seems like his probability that the mugger will give him a return on his investment should scale with the amount of money the mugger offers him, but I don't see a reason why this is always the case. So, while I can't defuse Pascal's Mugging, I am skeptical about its conclusion.

I had no idea you were on RW! Can you send me a message sometime? LW is indeed a very different kind of fun, and I enjoy both.

Comment author: rwallace 03 October 2010 07:03:04PM 0 points

There is a reason to expect that it will scale in general.

To see why, first note that the most watertight formulation of the problem uses lives as its currency (this avoids issues like utility failing to scale linearly with money in the limit of large quantities). So suppose the mugger offers to save N lives or create N people who will have happy lives (or threatens to kill N people if the wallet isn't handed over, assuming the target is a shortsighted utilitarian without a no-deals-with-terrorists policy), for some suitably large N that on the face of it seems to outweigh the small probability. We are thus postulating the existence of N people who will be affected by this transaction, of whom I, the target of the mugging, am one.

Suppose N is, say, a trillion. Intuitively, how plausible is it that I just happen to be the one person who gets to make a decision affecting a trillion lives? More formally, absent any prior reason why I should be in such an unusual position, the prior probability of this is 1 in N, which shrinks exactly as fast as the claimed utility grows.
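The cancellation in the argument above can be sketched numerically. Here `p_background` is a hypothetical stand-in for all evidence other than the size of the mugger's claim; the point is only that the claimed payoff N and the 1-in-N prior cancel, so the expected value of paying does not grow with N:

```python
# Sketch of the 1-in-N prior argument (hypothetical numbers).

def expected_lives_saved(n_claimed, p_background=1e-6):
    """Expected payoff (in lives) of paying the mugger."""
    p_decisive = p_background / n_claimed  # prior of being the 1-in-N decider
    return p_decisive * n_claimed          # payoff N cancels the 1/N prior

# The claimed stakes grow a million-fold; the expected value does not move.
for n in (10**6, 10**9, 10**12):
    print(n, expected_lives_saved(n))
```

Whatever value `p_background` takes, the loop prints the same expected payoff for every N, which is the sense in which the prior "scales to match" the claimed utility.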

Granted, the original formulation did offer the extension of a single life instead of the creation of separate lives. I consider it reasonable to regard this as a technical detail; is there any fundamental reason why a thousand centuries of extra life can't be regarded as equivalent to a thousand century-long lives chained together in sequence?

BTW, what does RW refer to?

Comment author: Will_Newsome 03 October 2010 04:30:40AM 0 points

> I do struggle with Pascal's Mugging--it seems to me, intuitively, that Pascal's Mugging can't be true (that is, in the original scenario, Pascal should not give up his money), but I can't find a reason for this to be so. It seems like his probability that the mugger will give him a return on his investment should scale with the amount of money the mugger offers him, but I don't see a reason why this is always the case. So, while I can't defuse Pascal's Mugging, I am skeptical about its conclusion.

Ah, Pascal's mugging is easy, decision-theoretically speaking: cultivate the disposition of not negotiating with terrorists. That way they have no incentive to try to terrorize you -- you won't give them what they want no matter what -- and you don't incentivize even more muggers to show up and demand even bigger sums.

But other kinds of Pascalian reasoning are valid, as in the case of cryonics. I don't give Pascal's mugger any money, but I do acknowledge that for cryonics you need to actually do the calculation: there is no decision-theoretic disposition to invalidate the argument.
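A minimal sketch of what "actually doing the calculation" looks like. Every number here is a hypothetical placeholder, not anyone's real estimate; the structure, not the values, is the point:

```python
# Toy expected-utility calculation for cryonics (all inputs hypothetical).

def cryonics_expected_value(p_revival, utility_if_revived, cost):
    """Expected utility of signing up, in arbitrary utility units."""
    return p_revival * utility_if_revived - cost

# Hypothetical inputs: a small chance of revival, a large but finite
# payoff, and a modest cost, all in the same arbitrary units.
ev = cryonics_expected_value(p_revival=0.05,
                             utility_if_revived=10_000,
                             cost=300)
print(ev)  # 200.0 under these made-up numbers
```

Whether the sign comes out positive depends entirely on the inputs, which is why the disagreement above reduces to disagreement over the component probabilities rather than over the form of the calculation.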

> I had no idea you were on RW! Can you send me a message sometime? LW is indeed a very different kind of fun, and I enjoy both.

I'm almost never there anymore... I know this is a dick thing to say, but it's not a great intellectual environment for really learning, and I can get better entertainment elsewhere (like Reddit) if I want to. It was a cool place though; Trent actually introduced me to Bayes with his essay on it, and I learned some traditional rationality there. But where RW was a cool community of fun, like-minded people, I now have a lot of intellectual and awesome friends IRL at the Singularity Institute, so it's been effectively replaced.