Tetronian comments on The Irrationality Game - Less Wrong

38 Post author: Will_Newsome 03 October 2010 02:43AM


Comment author: [deleted] 03 October 2010 03:57:17AM 1 point [-]

That was the reference class I was referring to, but it really doesn't matter much in this case--after all, who wouldn't want to live through a positive Singularity?

Comment author: Will_Newsome 03 October 2010 03:59:20AM 1 point [-]

True, but a positive Singularity doesn't necessarily raise the cryonic dead. I'd bet against it, for one. (Figuring out whether I agree or disagree with you is making me think pretty hard right now. At least my post is working for me! I probably agree, though almost assuredly for different reasons than yours.)

Comment author: [deleted] 03 October 2010 04:05:12AM *  3 points [-]

My reasons for disagreement are as follows:

(1) I am not sure that current cryonics technology is sufficient to prevent information-theoretic death.
(2) I am skeptical of the idea of "hard takeoff" for a seed AI.
(3) I am pessimistic about existential risk.
(4) I do not believe that a good enough seed AI will be produced for at least a few more decades.
(5) I do not believe in any version of the Singularity except Eliezer's (i.e., Moore's Law will not swoop in to save the day).
(6) Even an FAI might not wake the "cryonic dead" (I like that term, I think I'll steal it, haha).
(7) Cryonically preserved bodies may be destroyed before we have the ability to revive them.

...and a few more minor reasons I can't remember at the moment.

I'm curious, what are yours?

Comment author: Will_Newsome 03 October 2010 04:12:20AM *  0 points [-]

My thoughts have changed somewhat since writing this post, but that's the general idea. It would be personally irrational for me to sign up for cryonics at the moment. I'm not sure if this extends to most LW people; I'd have to think about it more.

But even your list of low probabilities might be totally outweighed by the Pascalian counterargument: FAI is a lot of utility if it works. Why don't you think so?

By the way, I think it's really cool to see another RWer here! LW's a different kind of fun than RW, but it's a neat place.

Comment author: [deleted] 03 October 2010 04:20:04AM *  0 points [-]

I remember that post--it got me to think about cryonics a lot more. I agree with most of your arguments, particularly bullet point #3.

I do struggle with Pascal's Mugging--it seems to me, intuitively, that Pascal's Mugging can't be true (that is, in the original scenario, Pascal should not give up his money), but I can't find a reason for this to be so. It seems like his probability that the mugger will give him a return on his investment should scale with the amount of money the mugger offers him, but I don't see a reason why this is always the case. So, while I can't defuse Pascal's Mugging, I am skeptical about its conclusion.

I had no idea you were on RW! Can you send me a message sometime? LW is indeed a very different kind of fun, and I enjoy both.

Comment author: rwallace 03 October 2010 07:03:04PM 0 points [-]

There is a reason to expect that it will scale in general.

To see why, first note that the most watertight formulation of the problem uses lives as its currency (this avoids issues like utility failing to scale linearly with money in the limit of large quantities). So, suppose the mugger offers to save N lives or create N people who will have happy lives (or threatens to kill N people on failure to hand over the wallet, if the target is a shortsighted utilitarian who doesn't have a policy of no deals with terrorists), for some suitably large N that on the face of it seems to outweigh the small probability. So we are postulating the existence of N people who will be affected by this transaction, of whom I, the target of the mugging, am one.

Suppose N is, say, a trillion. Intuitively, how plausible is it that I just happen to be the one guy who gets to make a decision that will affect a trillion lives? More formally, given the absence of any prior reason why I should be in such an unusual position, the prior probability of this is 1 in N, which scales with N to match the increase in claimed utility.
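The scaling argument above can be put in numbers. This is just an illustrative sketch, assuming the anthropic prior is exactly 1/N (the function name and the sample values of N are mine, not from the thread):

```python
# Illustration of the anthropic scaling argument: if the prior
# probability of being the one person whose decision affects N lives
# is 1/N, the expected payoff of the mugger's offer stays bounded
# no matter how large N grows.

def expected_lives_saved(n_lives: int) -> float:
    prior = 1.0 / n_lives    # chance the mugger's claim is true
    return prior * n_lives   # claimed payoff is N lives

for n in (10**6, 10**9, 10**12):
    print(f"N = {n:>13}: expected utility = {expected_lives_saved(n)}")
```

Every line prints an expected utility of 1.0: the claimed utility and the improbability cancel exactly, so inflating N buys the mugger nothing.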

Granted, the original formulation did offer the extension of a single life instead of the creation of separate lives. I consider it reasonable to regard this as a technical detail; is there any fundamental reason why a thousand centuries of extra life can't be regarded as equivalent to a thousand century-long lives chained together in sequence?

BTW, what does RW refer to?

Comment author: [deleted] 03 October 2010 07:22:53PM 0 points [-]

Granted, the original formulation did offer the extension of a single life instead of the creation of separate lives. I consider it reasonable to regard this as a technical detail; is there any fundamental reason why a thousand centuries of extra life can't be regarded as equivalent to a thousand century-long lives chained together in sequence?

I'm not sure if we can write this off as a technical detail because we are formulating our prior based on it. What if we assume that we are talking about money and the mugger offers to give us an amount of money that is equivalent in terms of utility to creating N happy lives (assuming he knows your utility function)? If your reasoning is correct, then the prior probability for that would have to be the same as your prior for the mugger creating N happy lives, but since totally different mechanisms would be involved in doing so, this may not be true. That, to me, seems like a problem because we want to be able to defuse Pascal's Mugging in any general case.

BTW, what does RW refer to?

RW = RationalWiki

Comment author: rwallace 03 October 2010 08:44:27PM 0 points [-]

Well, there is no necessary reason why all claimed mechanisms must be equally probable. The mugger could say "I'll heal the sick with my psychic powers" or "when I get to the bank on Monday, I'll donate $$$ to medical research"; even if the potential utilities were the same and both probabilities were small, we would not consider the probabilities equal.

Also, the utility of money doesn't scale indefinitely; if nothing else, it levels off once the amount starts being comparable to all the money in the world, so adding more just creates additional inflation.

Nonetheless, since the purpose of money is to positively affect lives, we can indeed use similar reasoning to say the improbability of receiving a large amount of money scales linearly with the amount. Note that this reasoning would correctly dismiss get-rich-quick schemes like pyramids and lotteries, even if we were ignorant of the mechanics involved.

Comment author: [deleted] 03 October 2010 08:53:29PM 0 points [-]

Well, there is no necessary reason why all claimed mechanisms must be equally probable.

That's why I don't think we can defuse Pascal's Mugging, since we can potentially imagine a mechanism for which our probability that the mugger is honest doesn't scale with the amount of utility the mugger promises to give. That would imply that there is no fully general solution to Bostrom's formulation of Pascal's Mugging. And that worries me greatly.

However:

Nonetheless, since the purpose of money is to positively affect lives, we can indeed use similar reasoning to say the improbability of receiving a large amount of money scales linearly with the amount. Note that this reasoning would correctly dismiss get-rich-quick schemes like pyramids and lotteries, even if we were ignorant of the mechanics involved.

This gives me a little bit of hope, since we might be able to use it as a heuristic when dealing with situations like these. That's not as good as a proof, but it's not bad.

Also:

The mugger could say "I'll heal the sick with my psychic powers" or "when I get to the bank on Monday, I'll donate $$$ to medical research"

Only on LessWrong does that sentence make sense and not sound funny :)

Comment author: Will_Newsome 03 October 2010 04:30:40AM 0 points [-]

I do struggle with Pascal's Mugging--it seems to me, intuitively, that Pascal's Mugging can't be true (that is, in the original scenario, Pascal should not give up his money), but I can't find a reason for this to be so. It seems like his probability that the mugger will give him a return on his investment should scale with the amount of money the mugger offers him, but I don't see a reason why this is always the case. So, while I can't defuse Pascal's Mugging, I am skeptical about its conclusion.

Ah, Pascal's mugging is easy, decision theoretically speaking: cultivate the disposition of not negotiating with terrorists. That way they have no incentive to try to terrorize you -- you won't give them what they want no matter what -- and you don't incentivize even more terrorists to show up and demand even bigger sums.
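The disposition argument above can be sketched as a toy incentive model. The function name, payoff, and threat cost here are my own illustrative assumptions, not anything stated in the thread:

```python
# Toy model of the "no deals with terrorists" disposition: a mugger
# only issues a threat when the expected payment exceeds the (small)
# cost of making the threat. A victim known in advance to never pay
# removes that incentive entirely.

def mugger_threatens(victim_pays: bool, payment: float = 10.0,
                     threat_cost: float = 1.0) -> bool:
    expected_gain = payment if victim_pays else 0.0
    return expected_gain > threat_cost

print(mugger_threatens(victim_pays=True))   # True: threatening pays off
print(mugger_threatens(victim_pays=False))  # False: no incentive to threaten
```

The point is that the disposition works by changing the mugger's calculation before any threat is made, not by winning the exchange afterward.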

But other kinds of Pascalian reasoning are valid, like in the case of cryonics. I don't give Pascal's mugger any money, but I do acknowledge that in the case of cryonics, you need to actually do the calculation: no decision theoretic disposition is there to invalidate the argument.

I had no idea you were on RW! Can you send me a message sometime? LW is indeed a very different kind of fun, and I enjoy both.

I'm almost never there anymore... I know this is a dick thing to say, but it's not a great intellectual environment for really learning, and I can get better entertainment elsewhere (like Reddit) if I want to. It was a cool place though; Trent actually introduced me to Bayes with his essay on it, and I learned some traditional rationality there. But where RW was a cool community of fun, like-minded people, I now have a lot of intellectual and awesome friends IRL at the Singularity Institute, so it's been effectively replaced.

Comment author: [deleted] 03 October 2010 04:40:36AM 0 points [-]

Ah, Pascal's mugging is easy, decision theoretically speaking: cultivate the disposition of not negotiating with terrorists.

I understand this idea--in fact, I just learned it today reading the comments section of this post. I would like to see it formalized in UDT so I can better grasp it, but I think I understand how it works verbally.

But other kinds of Pascalian reasoning are valid, like in the case of cryonics. I don't give Pascal's mugger any money, but I do acknowledge that in the case of cryonics, you need to actually do the calculation: no decision theoretic disposition is there to invalidate the argument.

This is what I was afraid of: we can't do anything about Pascal's Mugging with respect to purely epistemic questions. (I'm still not entirely sure why, though--what prevents us from treating cryonics just like we would treat the mugger?)

I'm almost never there anymore... I know this is a dick thing to say, but it's not a great intellectual environment for really learning, and I can get better entertainment elsewhere (like Reddit) if I want to. It was a cool place though; Trent actually introduced me to Bayes with his essay on it, and I learned some traditional rationality there. But where RW was a cool community of fun, like-minded people, I now have a lot of intellectual and awesome friends IRL at the Singularity Institute, so it's been effectively replaced.

Ha, Trent's essay was what introduced me to Bayes as well! And unless I remember incorrectly, RW introduced me to LW because someone linked to it somewhere on a talk page. I know what you mean, though--LW and RW have very different methods of evaluating ideas, and I'm suspicious of the heuristics RW uses sometimes. (I am sometimes suspicious here too, but I realize I am way out of my depth so I'm not quick to judge.) RW tends to use labels a bit too much--if an idea sounds like pseudoscience, then they automatically believe it is. Or, if they can find a "reliable" source claiming that someone is a fraud, then they assume he/she is.

Comment author: Will_Newsome 03 October 2010 05:03:53AM *  0 points [-]

I understand this idea--in fact, I just learned it today reading the comments section of this post. I would like to see it formalized in UDT so I can better grasp it, but I think I understand how it works verbally.

Eliezer finally published TDT a few days ago, I think it's up at the singinst.org site by now. Perhaps we should announce it in a top level post... I think we will.

This is what I was afraid of: we can't do anything about Pascal's Mugging with respect to purely epistemic questions. (I'm still not entirely sure why, though--what prevents us from treating cryonics just like we would treat the mugger?)

Cryonics isn't an agent we have to deal with. Pascal's Mugger we can deal with because both options lead to negative expected utility, and so we find ways to avoid the choice entirely by appealing to the motivations of the agent to not waste resources. But in the case of cryonics no one has a gun to our head, and there's no one to argue with: either cryonics works, or it doesn't. We just have to figure it out.

The invalidity of paying Pascal's mugger doesn't have anything to do with the infinity in the calculation; that gets sidestepped entirely by refusing to engage in negative sum actions of any kind, improbable or not, large or small.

And unless I remember incorrectly, RW introduced me to LW because someone linked to it somewhere on a talk page.

Might it have been here? That's where I was first introduced to LW and Eliezer.

(I am sometimes suspicious here too, but I realize I am way out of my depth so I'm not quick to judge.)

Any ideas/heuristics you're suspicious of specifically? If there was a Less Wrong and an SIAI belief dichotomy I'd definitely fall in the SIAI belief category, but generally I agree with Less Wrong. It's not exactly a fair dichotomy though; LW is a fun online social site whereas SIAI folk are paid to be professionally rational.

Comment author: wedrifid 03 October 2010 05:31:13AM 1 point [-]

that gets sidestepped entirely by refusing to engage in negative sum actions of any kind, negative sum or not, large or small.

The second 'negative sum' seems redundant...

Comment author: Will_Newsome 03 October 2010 05:34:03AM 1 point [-]

Are you claiming that 100% of negative sum interactions are negative sum?! 1 is not a probability! ...just kidding. I meant 'improbable or not'.

Comment author: [deleted] 03 October 2010 05:22:46AM 0 points [-]

If there was a Less Wrong and an SIAI belief dichotomy I'd definitely fall in the SIAI belief category, but generally I agree with Less Wrong.

I guess I'm not familiar enough with the positions of LW and SIAI--where do they differ?

Comment author: [deleted] 03 October 2010 05:15:47AM 0 points [-]

Eliezer finally published TDT a few days ago, I think it's up at the singinst.org site by now.

Excellent, that'll be a fun read.

Cryonics isn't an agent we have to deal with. Pascal's Mugger we can deal with because both options lead to negative expected utility, and so we find ways to avoid the choice entirely by appealing to the motivations of the agent to not waste resources. But in the case of cryonics no one has a gun to our head, and there's no one to argue with: either cryonics works, or it doesn't. We just have to figure it out. The invalidity of paying Pascal's mugger doesn't have anything to do with the infinity in the calculation; that gets sidestepped entirely by refusing to engage in negative sum actions of any kind, negative sum or not, large or small.

I'm still not sure if I follow this--I'll have to do some more reading on it. I still don't see how the two situations are different--for example, if I was talking to someone selling cryonics, wouldn't that be qualitatively the same as Pascal's Mugging? I'm not sure.

Might it have been here? That's where I was first introduced to LW and Eliezer.

Unfortunately no, it was here. I didn't look at that article until recently.

Any ideas/heuristics you're suspicious of specifically?

That opens a whole new can of worms that it's far too late at night for me to address, but I'm thinking of writing a post on this soon, perhaps tomorrow.

Comment author: Will_Newsome 03 October 2010 05:26:48AM *  1 point [-]

I still don't see how the two situations are different--for example, if I was talking to someone selling cryonics, wouldn't that be qualitatively the same as Pascal's Mugging?

Nah, the cryonics agent isn't trying to mug you! (Er, hopefully.) He's just giving you two options and letting you calculate.

In the case of Pascal's Mugging, both choices lead to negative expected utility as defined by the problem. Hence you look for a third option, and in this case, you find one: ignore all blackmailers; tell them to go ahead and torture all those people, you don't care. Unless they find joy in torturing people (then you're screwed) they have no incentive to actually use up the resources to go through with it. So they leave you alone, 'cuz you won't budge.

Cryonics is a lot simpler in its nature, but a lot harder to calculate. You have two options, and the options are given to you by reality, not an agent you can outwit. (Throwing in a cryonics agent doesn't change anything.) When you have to choose between the binary cryonics versus no cryonics, it's just a matter of seeing which decision is better (or less bad). It could be that both are bad, like in the Pascal's mugger scenario, but in this case you're just screwed: reality likes to make you suffer, and you have to take the best possible world. Telling reality that it can go ahead and give you tons of disutility doesn't take away its incentive to give you tons of disutility. There's no way out of the problem.

That opens a whole new can of worms that it's far too late at night for me to address, but I'm thinking of writing a post on this soon, perhaps tomorrow.

Cool! Be careful not to generalize too much, though: there might be bad general trends, but no one likes to be yelled at for things they didn't do. Try to frame it as humbly as possible, maybe. Sounding unsure of your position when arguing against LW norms gets you disproportionately large amounts of karma. Game the system!