Comment author: Vladimir_Nesov 29 July 2012 08:04:40PM *  2 points [-]

Having the property of one-boxing given an empty box (within the hypothetical where you are presented with an empty box) is a prerequisite for winning the million and for never seeing an empty box in the first place. If, when you see an empty box, you fight the hypothetical and two-box on the grounds that something must have broken down, then you don't have that property, won't win the million, and would see an empty box.

When you've seen an empty box, the expected utility calculation should take the alternative (a non-empty box) into account, in which case it too would recommend taking one box (this is the same "updateless" consideration as in the Counterfactual Mugging and the Non-Anthropic Problem).

Comment author: APMason 29 July 2012 08:20:33PM 0 points [-]

Yes, I think we're in agreement - although I'm not sure that it's precisely the same consideration as the Counterfactual Mugging. Updating on the outcome of the coin flip in the CM doesn't tell you what your own decision is going to be - the coin's outcome was independent of your decision. Whereas if you see an empty box and update on that, it tells you your decision is going to be "two-box": if you ask yourself "What happens if I one-box?", you don't get an expected utility calculation, you get a logical contradiction, which is why you can't update on seeing the empty box. That doesn't seem to me to be the same problem structure as the Counterfactual Mugging.

Comment author: Vladimir_Nesov 29 July 2012 01:29:37PM *  4 points [-]

It's like one-boxing in Newcomb's problem where the boxes are made of glass and you can't help but see that the first box is empty.

One boxing given a transparent empty box is the correct decision... (Not sure what the intended analogy is.)

Comment author: APMason 29 July 2012 07:40:50PM *  1 point [-]

Isn't one-boxing given a transparent empty box a violation of the premise of the thought experiment? If you one-box, the box isn't empty by hypothesis (unless Omega is something less than a perfectly reliable predictor, in which case you can one-box with an empty box only if Omega mispredicted your decision).

EDIT: Although I do agree it's the right decision, if you ever were, impossibly, to find yourself in such a situation - otherwise Omega's making your decision for you.

Comment author: Viliam_Bur 19 July 2012 09:25:05PM 2 points [-]

when in the last couple of thousand years, if Jews had wanted to stone apostates to death, would they have been able to do it? The diasporan condition doesn't really allow it.

You sure about this? I don't know much about this topic, but I remember reading somewhere that 200 or more years ago Jews were often allowed to punish their own people within the diaspora. They couldn't stone a Christian/Muslim from the majority population, but they could stone (or otherwise kill, or otherwise severely punish) one of their own -- unless the given sinner had already converted to Christianity/Islam and left their community. So converting to the majority religion could be safe, but converting to atheism or to some heresy within Judaism would not.

Comment author: APMason 19 July 2012 09:32:06PM 2 points [-]

You sure about this?

Nope, not sure at all.

Comment author: Raw_Power 19 July 2012 01:23:27PM 12 points [-]

Speaking in long-term terms: what is the mechanism by which societies secularize themselves, and are there ways to trigger it? For instance, the Jews too have a very explicit, canonical policy of stoning proselytizing apostates to death. When did they stop doing that, and why?

Comment author: APMason 19 July 2012 02:22:40PM 9 points [-]

I don't think that question's going to give you the information you want - when in the last couple of thousand years, if Jews had wanted to stone apostates to death, would they have been able to do it? The diasporan condition doesn't really allow it. I think Christianity really is the canonical example of the withering away of religiosity - and that happened through a succession of internal revolutions ("In Praise of Folly", Lutheranism, the English Reformation, etc.), which themselves happened for a variety of reasons, not all pure or based in rationality (Henry VIII's split with Rome, for example), but which had the effect of demystifying the church and thereby shrinking the domain of its influence. I think. Although it's hard to interpret the Enlightenment as a movement internal to Christianity, so this only gets you so far, I suppose.

Comment author: Grognor 03 July 2012 02:14:49PM *  16 points [-]

There are many problems here.

At the end of paragraph 2, and after the other examples, you say

This exactly mirrors the Prisoner's Dilemma.

But it doesn't, as you yourself point out later in the post: because of reputational effects the payoff ordering isn't the Prisoner's Dilemma's D-C > C-C > D-D > C-D, but rather C-C > D-C > C-D, and that is not a prisoner's dilemma. "Prisoner's dilemma" is a very specific term, and you are inflating it.
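To pin down the ordering: a game is a Prisoner's Dilemma only if the row player's payoffs satisfy D-C > C-C > D-D > C-D. A minimal sketch (payoff numbers are illustrative, not from the post):

```python
def is_prisoners_dilemma(dc, cc, dd, cd):
    """True iff the row player's payoffs have the PD ordering D-C > C-C > D-D > C-D."""
    return dc > cc > dd > cd

print(is_prisoners_dilemma(5, 3, 1, 0))  # True: the canonical ordering
# With reputational effects, mutual cooperation pays best (C-C > D-C),
# so the ordering breaks and the game is no longer a Prisoner's Dilemma:
print(is_prisoners_dilemma(3, 5, 1, 0))  # False
```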

evolution is also strongly motivated [...] evolution will certainly take note.

I doubt that quite strongly!

The evolutionarily dominant strategy is commonly called “Tit-for-tat” - basically, cooperate if and only if you expect your opponent to do so.

That is not tit-for-tat! Tit-for-tat is: start by cooperating, then parrot the opponent's previous move. It does not do what it "expects" the opponent to do. Furthermore, if you categorically expect your opponent to cooperate, you should defect (just as you should if you expect him to defect). You only cooperate if you expect your opponent to cooperate if he expects you to cooperate, ad nauseam.
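For concreteness, a minimal sketch of tit-for-tat (function name and move encoding are my own); note that it reacts only to observed moves and never predicts anything:

```python
def tit_for_tat(opponent_history):
    """Cooperate on the first round; thereafter copy the opponent's previous move."""
    if not opponent_history:
        return "C"                  # start by cooperating
    return opponent_history[-1]     # parrot the opponent's last move

assert tit_for_tat([]) == "C"
assert tit_for_tat(["C", "D"]) == "D"   # retaliates once per observed defection
```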

This so-called "superrationality” appears even more [...]

That is not superrationality! Superrationality achieves cooperation by reasoning that you and your opponent will get the same result for the same reasons, so you should cooperate in order to logically bind your result to C-C (since C-C and D-D are the only two options). What is with all this misuse of terminology? You write as if the agents in the examples of this game are using causal decision theory (which defects all the time no matter what) and then bring up elements that cannot possibly be implemented in causal decision theory, and it grinds my gears!
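A sketch of that reasoning, with illustrative payoff numbers:

```python
# Illustrative payoffs to one player for the two diagonal outcomes:
CC, DD = 3, 1

# Between identical reasoners the mixed outcomes (C-D, D-C) are ruled out:
# whatever I decide, my opponent decides for the same reasons.
my_choice = "C" if CC > DD else "D"
print(my_choice)  # "C": cooperating logically binds the joint outcome to C-C
```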

And if two people with these sorts of emotional hangups play the Prisoner's Dilemma together, they'll end up cooperating on all hundred crimes, getting out of jail in a mere century and leaving rational utility maximizers to sit back and wonder how they did it.

This is in direct violation of one of the themes of Less Wrong. If "rational expected utility maximizers" are doing worse than people with "irrational emotional hangups", then you're using the wrong definition of "rational". You do this throughout the post, and it's especially jarring because you are, or were, one of the best writers on this website.

playing as a "rational economic agent" gets you a bad result

9_9

[...] anger makes us irrational. But this is the good kind of irrationality [...]

"The good kind of irrationality" is like "the good kind of bad thing". An oxymoron, by definition.

[...] if we're playing an Ultimatum Game against a human, and that human precommits to rejecting any offer less than 50-50, we're much more likely to believe her than if we were playing against a rational utility-maximizing agent

Bullshit. A rational agent is going to do what works. We know this because we stipulated that it was rational. If you mean to say a "stupid number-crunching robot that misses obvious details like how to play ultimatum games", then sure, it might do as you describe. But don't call it "rational".

It is distasteful and a little bit contradictory to the spirit of rationality to believe it should lose out so badly to simple emotion, and the problem might be correctable.

You think?

Downvoted.

Comment author: APMason 03 July 2012 02:48:06PM 9 points [-]

I agree with pretty much everything you've said here, except:

You only cooperate if you expect your opponent to cooperate if he expects you to cooperate, ad nauseam.

You don't actually need to continue this chain: if you're playing against any opponent which cooperates iff you cooperate, then you want to cooperate - even if the opponent would also cooperate against someone who cooperated no matter what. So your statement is also true without the "ad nauseam" (provided the opponent would defect if you defected).
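A sketch of why the chain terminates (payoffs illustrative): against an opponent stipulated to mirror your actual move, one level of conditioning already settles the comparison.

```python
CC, DD = 3, 1  # illustrative payoffs for mutual cooperation / mutual defection

def my_payoff(my_move):
    opponent_move = my_move  # "cooperates iff you cooperate", by stipulation
    return {("C", "C"): CC, ("D", "D"): DD}[(my_move, opponent_move)]

assert my_payoff("C") > my_payoff("D")  # one level of conditioning settles it
```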

Comment author: MarkusRamikin 01 July 2012 06:52:51AM *  8 points [-]

What sort of examples can you bring up of custom marital contracts that would make people scream in horror? My guess is that people would generally feel queasy about allowing legal enforcement of what looks like slavish or abusive relationships. I think this would be a genuine cause for concern, not because I don't think that people should be able to enter whatever relationships please them in principle, but because in practice I'm concerned about people being coerced into signing contracts harmful to themselves. Not sure where I'd draw the line exactly; this is probably a Hard Problem.

I simply want more freedom to do things in ways that suit me and the other person as long as it doesn't harm anyone else. There may be gotchas and necessary qualifications once you get into the details, but the basic idea I think is hardly outrageous; surely there is at least room to move from the current stale state of affairs in that direction.

So I guess I don't believe the statement I quoted earlier entirely without qualification. Still, I like it because it recognises the fact that the current situation with marriage is ridiculous and it doesn't, in principle, have to be that way. That recognition, as opposed to taking existing absurdities for granted without even thinking about them like most people do, is what I was referring to as a rare dose of sanity:

"Yes," Harry said. "It's what you do to bad teachers. You fire them. Then you hire a better teacher instead. You don't have unions or tenure here, right?"

Fred and George were frowning in much the same way that hunter-gatherer tribal elders might frown if you tried to tell them about calculus.

"I don't know," said Fred after a while. "I never thought about that."

"Me neither," said George.

"Yeah," said Harry, "I get that a lot.

Your second paragraph serves... I'm not sure what purpose. To tell me that the idea is politically unfeasible? I know that.

Comment author: APMason 01 July 2012 01:50:49PM 1 point [-]

What sort of examples can you bring up of custom marital contracts that would make people scream in horror? My guess is that people would generally feel queasy about allowing legal enforcement of what looks like slavish or abusive relationships. I think this would be a genuine cause for concern, not because I don't think that people should be able to enter whatever relationships please them in principle, but because in practice I'm concerned about people being coerced into signing contracts harmful to themselves. Not sure where I'd draw the line exactly; this is probably a Hard Problem.

Remember that "enforcing contracts" could mean two things. It could mean that the government steps in and makes the parties do what they said they would - it keeps whipping them until they follow through. It could also mean punishing the parties for damage done on the other end when they breach the contract. For example, in a world in which prostitution is legal, X proposes to pay Y for sex. Y accepts. X hands over the money. Y refuses to have sex with X. The horrific version of this is the government comes in and "enforces" the contract... by holding down Y and, well, yeah. The alternative is the government comes in, sees that Y has taken money from X by fraud, and punishes Y the same way it would punish any other thief. The second option is, I think, both more intuitive and less massively disturbing.

Comment author: Lukas_Gloor 27 June 2012 02:18:18PM 0 points [-]

That's been done in this paper, section VI, "The Asymptotic Gambit".

Comment author: APMason 27 June 2012 02:29:13PM *  0 points [-]

Thank you. I had expected the bottom to drop out of it somehow.

EDIT: Although come to think of it, I'm not sure the objections presented in that paper are so deadly after all if you take TDT-like considerations into account (i.e. there would not be a difference between "kill 1 person, prevent 1000 mutilations" + "kill 1 person, prevent 1000 mutilations" and "kill 2 people, prevent 2000 mutilations"). Will have to think on it some more.
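To illustrate the kind of objection at stake (the saturating functional form and constants here are my own illustration, not the paper's): with an asymptotic u, two acts evaluated separately need not sum to the same total as the combined act, and that is the gap a TDT-like treatment would have to close.

```python
import math

def u(n, asymptote=1.0, scale=500.0):
    """Illustrative saturating (dis)utility of n occurrences of some harm."""
    return asymptote * (1 - math.exp(-n / scale))

two_separate_acts = 2 * u(1000)  # "prevent 1000 mutilations", counted twice
one_combined_act = u(2000)       # "prevent 2000 mutilations", counted once
print(two_separate_acts, one_combined_act)  # ~1.73 vs ~0.98: not equal
```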

Comment author: TheOtherDave 27 June 2012 01:25:59PM *  0 points [-]

Agreed that all of these sorts of arguments ultimately rest on different intuitions about morality, which sometimes conflict, or seem to conflict.

Agreed that value needn't add linearly, and indeed my intuition is that it probably doesn't.

It seems clear to me that if I negatively value something happening, I also negatively value it happening more. That is, for any X I don't want to have happen, it seems I would rather have X happen once than have X happen twice. I can't imagine an X where I don't want X to happen and yet would prefer to have X happen twice rather than once. (Barring silly examples like "the power switch for the torture device gets flipped".)

Comment author: APMason 27 June 2012 01:41:58PM 0 points [-]

Can anyone explain what goes wrong if you say something like, "The utility of my terminal values increases asymptotically, and u(Torture) approaches a much higher asymptote than u(Dust speck)" (or indeed whether it goes wrong at all)?
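A sketch of what such a proposal might look like (functional form and constants are my own illustration): each terminal value saturates toward its own bound, with u(Torture)'s bound far above u(Dust speck)'s, so no number of specks ever outweighs one torture.

```python
import math

def u(n, asymptote, scale):
    """Illustrative disutility of n occurrences, saturating toward `asymptote`."""
    return asymptote * (1 - math.exp(-n / scale))

SPECK_BOUND, TORTURE_BOUND = 1.0, 1e9  # torture's asymptote is vastly higher

specks = u(10**100, SPECK_BOUND, scale=1e6)  # stays below 1.0 for any n
torture = u(1, TORTURE_BOUND, scale=1.0)     # about 6.3e8
print(specks < torture)  # True: no number of specks can dominate one torture
```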

Comment author: Lukas_Gloor 27 June 2012 11:57:18AM *  -1 points [-]

I should qualify my statement. I was talking only about the common varieties of utilitarianism, and I may well have omitted consistent variants that are unpopular or weird (e.g. something like negative average preference-utilitarianism). Basically my point was that "hybrid" views like prior-existence (or "critical level" negative utilitarianism) run into contradictions. Most forms of average utilitarianism aren't contradictory, but they imply an obvious absurdity: a world with one being in maximum suffering would be [edit:] worse than a world with a billion beings in suffering that's just slightly less awful.

Comment author: APMason 27 June 2012 01:07:58PM 1 point [-]

That last sentence didn't make sense to me when I first looked at this. Think you must mean "worse", not "better".

Comment author: loup-vaillant 25 June 2012 07:16:55AM *  3 points [-]

Because it's a fair test

No, not even by Eliezer's standard, because TDT is not given the same problem as the other decision theories.

As stated in the comments below, everyone but TDT has the information "I'm not in the simulation" (or, more precisely, "I'm not in any of the simulations of the infinite regress implied by Omega's formulation"). The reason TDT does not have this extra piece of information comes from the fact that it is TDT, not from any decision it may make.

Comment author: APMason 25 June 2012 03:40:53PM *  0 points [-]

This variation of the problem was invented in the follow-up post (I think it was called "Sneaky strategies for TDT" or something like that):

Omega tells you that earlier he flipped a coin. If the coin came down heads, he simulated a CDT agent facing this problem. If the coin came down tails, he simulated a TDT agent facing this problem. In either case, if the simulated agent one-boxed, there is $1000000 in Box B; if it two-boxed, Box B is empty. In this case TDT still one-boxes (a 50% chance of $1000000 dominates a 100% chance of $1000), and CDT still two-boxes (because that's what CDT does). So even though both agents have an equal chance of being simulated, CDT outperforms TDT (average payoffs of 501000 vs. 500000): CDT takes advantage of TDT's prudence, and TDT suffers for CDT's lack of it. Notice also that TDT cannot do better by behaving like CDT (both would then get payoffs of 1000). This shows that the class of problems we're concerned with is not so much "fair" vs. "unfair", but more like "those problems on which the best I can do is not necessarily the best anyone can do". We can call it "fairness" if we want, but it's not like Omega is discriminating against TDT in this case.
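The averages can be checked directly (a quick sketch; the dollar amounts follow the problem statement, and the coin is fair):

```python
# Box A always holds $1000; Box B holds $1000000 iff the *simulated* agent one-boxed.
# Heads: a CDT agent was simulated -> it two-boxes -> Box B is empty.
# Tails: a TDT agent was simulated -> it one-boxes -> Box B is full.

def payoff(choice, box_b):
    return box_b if choice == "one-box" else 1000 + box_b

tdt_avg = (payoff("one-box", 0) + payoff("one-box", 1_000_000)) / 2  # 500000.0
cdt_avg = (payoff("two-box", 0) + payoff("two-box", 1_000_000)) / 2  # 501000.0
print(cdt_avg, tdt_avg)  # CDT free-rides on the prudence of the simulated TDT
```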
