
Globally better means locally worse

2 Tyrrell_McAllister 22 March 2017 07:25PM

Home appliances, such as washing machines, are apparently much less durable now than they were decades ago.

Perhaps this is a kind of mirror image of "cost disease". In many sectors (education, medicine), we pay much more now for a product that is no better than what we got decades ago at a far lower cost, even accounting for inflation. It takes more money to buy the same level of quality. Scott Alexander (Yvain) argues that the cause of cost disease is a mystery. There are several plausible accounts, but they don't cover all the cases in a satisfying way. (See the link for more on the mystery of cost disease.)

Now, what if the mysterious cause of cost disease were to set to work in a sector where price can't go up, for whatever reason? Then you would expect quality to take a nosedive. If price per unit quality goes up, but total price can't go up, then quality must go down. So maybe the mystery of crappy appliances is just cost disease in another guise.

In the spirit of inadequate accounts of cost disease, I offer this inadequate account of crappy appliances:

As things get better globally, they get worse locally.

Global goodness provides a buffer against local badness. This makes greater local badness tolerable. That is, the cheapest tolerable thing gets worse. Thus, worse and worse things dominate locally as things get better globally.

This principle applies in at least two ways to washing machines:

Greater global wealth: Consumers have more money, so they can afford to replace washing machines more frequently. Thus, manufacturers can sell machines that require frequent replacement.

Manufacturers couldn't get away with this if people were poorer and could buy only one machine every few decades. If you're poor, you prioritize durability more. In the aggregate, the market will reward durability more. But a rich market accepts less durability.

Better materials science: Globally, materials science has improved. Hence, at the local level, manufacturers can get away with making worse materials.

Rich people might tolerate a washer that lasts 3 years, give or take. But even they don't want a washer that breaks in one month. If you build washers, you need to be sure that nearly every single one lasts a full month, at least. But, with poor materials science, you have to overshoot by a lot to be sure of that. Maybe you have to aim for a mean duration of decades to guarantee that the minimum duration doesn't fall below one month. On the other hand, with better materials science, you can get the distribution of duration to cluster tightly around 3 years. You still have very few washers lasting only one month, but the vast majority of your washers are far less durable than they used to be.
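To make the overshoot argument concrete, here is a minimal numeric sketch. The normal lifetime distributions and every parameter in it (the means, the spreads, the one-month target) are illustrative assumptions, not real appliance data.

```python
# A sketch of the overshoot argument: both processes keep the one-month
# failure rate tiny, but the tighter process does it with a far lower mean.
from statistics import NormalDist

ONE_MONTH = 1 / 12  # in years

# Poor materials science: lifetimes vary widely, so guaranteeing almost
# no one-month failures forces the mean out to decades.
old_process = NormalDist(mu=20, sigma=6)

# Better materials science: lifetimes cluster tightly, so a ~3-year mean
# achieves the same one-month guarantee.
new_process = NormalDist(mu=3, sigma=0.9)

for name, d in [("mean 20 years", old_process), ("mean 3 years", new_process)]:
    print(f"{name}: P(fails within a month) = {d.cdf(ONE_MONTH):.4%}")
# Both print well under 0.1%, yet typical durability drops from ~20 years to ~3.
```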

Afterthought

Maybe this is just Nassim Taleb's notion of antifragility. I haven't read the book, but I gather that the idea is that individuals grow stronger in environments that contain more stressors (within limits). Conversely, if you take away the stressors (i.e., make the environment globally better), then you get more fragile individuals (i.e., things are locally worse).

Making equilibrium CDT into FDT in one+ easy step

5 Stuart_Armstrong 21 March 2017 02:42PM

In this post, I'll argue that Joyce's equilibrium CDT (eCDT) can be made into FDT (functional decision theory) with the addition of an intermediate step - a step that should have no causal consequences. This would show that eCDT is unstable under causally irrelevant changes, and is in fact a partial version of FDT.

Joyce's principle is:

Full Information. You should act on your time-t utility assessments only if those assessments are based on beliefs that incorporate all the evidence that is both freely available to you at t and relevant to the question about what your acts are likely to cause.

When confronted by a problem with a predictor (such as Death in Damascus or the Newcomb problem), this allows eCDT to recursively update its probabilities of the predictor's behaviour, based on its own estimates of its own actions, until this process reaches equilibrium. This allows it to behave like FDT/UDT/TDT on some (but not all) problems. I'll argue that you can modify the setup to make eCDT into a full FDT.

 

Death in Damascus

In this problem, Death has predicted whether the agent will stay in Damascus (S) tomorrow, or flee to Aleppo (F). And Death has promised to be in the same city as the agent (D or A), to kill them. Having made its prediction, Death then travels to that city to wait for the agent. Death is known to be a perfect predictor, and the agent values survival at $1000, while fleeing costs $1.

Then eCDT recommends fleeing to Aleppo with probability 999/2000. To check this, let x be the probability of fleeing to Aleppo (F), and y the probability of Death being there (A). The expected utility is then

  1000(x(1-y) + (1-x)y) - x        (1)

Differentiating this with respect to x gives 999-2000y, which is zero for y=999/2000. Since Death is a perfect predictor, y=x and eCDT's expected utility is 499.5.

The true expected utility, however, is -999/2000, since Death will get the agent anyway, and the only cost is the trip to Aleppo.
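As a sanity check, here is a small Python sketch (mine, not from the post) that recovers these numbers; the averaged best-response loop is an illustrative stand-in for eCDT's recursive updating, not Joyce's exact procedure.

```python
# Equation (1): x = P(flee to Aleppo), y = P(Death in Aleppo).
def U(x, y):
    return 1000 * (x * (1 - y) + (1 - x) * y) - x

# dU/dx = 999 - 2000*y, so the agent strictly prefers fleeing iff y < 999/2000.
def best_response(y):
    return 1.0 if 999 - 2000 * y > 0 else 0.0

# Recursively update beliefs as a running average of best responses
# (an assumed stand-in for the equilibrium-finding process).
y = 0.5
for t in range(1, 100_001):
    y += (best_response(y) - y) / t

print(y)        # ~0.4995 = 999/2000: the equilibrium fleeing probability
print(U(y, y))  # ~499.5: eCDT's (mistaken) expected utility
print(-y)       # ~-0.4995: the true expected utility, since Death always
                # finds the agent and the only cost is the trip to Aleppo
```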

 

Delegating randomness

The eCDT decision process seems rather peculiar. It seems to allow updating of the value of y dependent on the value of x - hence allowing acausal factors to be considered - but only in a narrow way. Specifically, it requires that the probabilities of F and A be equal, but that those two events remain independent. And it then differentiates utility with respect to the probability of F only, leaving that of A fixed. So, in a sense, x correlates with y, but small changes in x don't correlate with small changes in y.

That's somewhat unsatisfactory, so consider the problem now with an extra step. The eCDT agent no longer considers whether to stay or flee; instead, it outputs X, a value between 0 and 1. There is a uniform random process Z, also valued between 0 and 1. If Z<X, then the agent flees to Aleppo; if not, it stays in Damascus.

This seems identical to the original setup, for the agent. Instead of outputting a decision as to whether to flee or stay, it outputs the probability of fleeing. This has moved the randomness in the agent's decision from inside the agent to outside it, but this shouldn't make any causal difference, because the agent knows the distribution of Z.
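In code, the extra step is just the following (a trivial sketch; the names are mine):

```python
import random

def act(X: float) -> str:
    """The agent outputs X in [0,1]; an external uniform draw Z does the rest."""
    Z = random.random()  # the random process Z, outside the agent
    return "Aleppo" if Z < X else "Damascus"

print(act(999 / 2000))  # flees with probability 999/2000
```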

Death remains a perfect predictor, which means that it can predict X and Z, and will move to Aleppo if and only if Z<X.

Now let the eCDT agent consider outputting X=x for some x. In that case, it updates its opinion of Death's behaviour, expecting that Death will be in Aleppo if and only if Z<x. Then it can calculate the expected utility of setting X=x, which is simply 0 (Death will always find the agent) minus x (the expected cost of fleeing to Aleppo), hence -x. Among the "pure" strategies, X=0 is clearly the best.

Now let's consider mixed strategies, where the eCDT agent can consider a distribution PX over values of X (this is a sort of second-order randomness, since X and Z already give randomness over the decision to move to Aleppo). If the agent is to remain consistent with the previous version, it must model Death as sampling from PX, independently of the agent. The probability of fleeing is just the expectation of PX; but the higher the variance of PX, the harder it is for Death to predict where the agent will go. The best option is as before: PX sets X=0 with probability 1001/2000, and X=1 with probability 999/2000.

But is this a fair way of estimating mixed strategies?

 

Average Death in Aleppo

Consider a weaker form of Death, Average Death. Average Death cannot predict X, but can predict PX, and will use that to determine its location, sampling independently from it. Then, from eCDT's perspective, the mixed-strategy behaviour described above is the correct way of dealing with Average Death.

But that means that the agent above is incapable of distinguishing between Death and Average Death. Joyce argues strongly for considering all the relevant information, and the distinction between Death and Average Death is relevant. Thus it seems that, when considering mixed strategies, the eCDT agent must instead look at the pure strategies, compute their value (-x in this case), and then look at the distribution over them.

One might object that this is no longer causal, but the whole equilibrium approach undermines the strictly causal aspect anyway. It feels daft to be allowed to update on Average Death predicting PX, but not on Death predicting X. Especially since moving from PX to X is simply some random process Z' that samples from the distribution PX. So Death is allowed to predict PX (which depends on the agent's reasoning) but not Z'. It's worse than that, in fact: Death can predict PX and Z', and the agent can know this, but the agent isn't allowed to make use of this knowledge.

Given all that, it seems that in this situation, the eCDT agent must be able to compute the mixed strategies correctly and realise (like FDT) that staying in Damascus (X=0 with certainty) is the right decision.

 

Let's recurse again, like we did last summer

This deals with Death, but not with Average Death. Ironically, the "X=0 with probability 1001/2000..." solution is not the correct solution for Average Death. To get that, we need to take equation (1), set x=y first, and then differentiate with respect to x. This gives x=1999/4000, so setting "X=0 with probability 2001/4000 and X=1 with probability 1999/4000" is actually the FDT solution for Average Death.
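Here is a short symbolic check of both orders of operations (my sketch, using sympy; the post presumably does the algebra by hand):

```python
import sympy as sp

x, y = sp.symbols("x y")
U = 1000 * (x * (1 - y) + (1 - x) * y) - x   # equation (1)

# eCDT order: differentiate with respect to x first, then impose y = x.
print(sp.solve(sp.diff(U, x), y))             # [999/2000]

# Average-Death FDT order: set x = y first, then differentiate.
print(sp.solve(sp.diff(U.subs(y, x), x), x))  # [1999/4000]
```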

And we can make the eCDT agent reach that. Simply recurse to the next level, and have the agent choose PX directly, via a distribution PPX over possible PX.

But these towers of recursion are clunky and unnecessary. It's simpler to state that eCDT is unstable under recursion, and that it's a partial version of FDT.

[Link] Building Safe A.I. - A Tutorial for Encrypted Deep Learning

1 korin43 21 March 2017 03:17PM

[Stub] Newcomb problem as a prisoners' dilemma/anti-coordination game

2 Stuart_Armstrong 21 March 2017 10:34AM

You should always cooperate with an identical copy of yourself in the prisoner's dilemma. This is obvious, because you and the copy will reach the same decision.

That justification implicitly assumes that you and your copy are (somewhat) antagonistic: that you have opposite aims. But the conclusion doesn't require that at all. Suppose that you and your copy were instead trying to ensure that one of you got maximal reward (it doesn't matter which). Then you should still jointly cooperate, because (C,C) is possible, while (C,D) and (D,C) are not (I'm ignoring randomising strategies for the moment).

Now look at the Newcomb problem. Your decision enters twice: once when you decide how many boxes to take, and once when Omega is simulating or estimating you to decide how much money to put in box B. You would dearly like your two "copies" (one of which may just be an estimate) to be out of sync - for the estimate to 1-box while the real you two-boxes. But without any way of distinguishing between the two, you're stuck with taking the same action - (1-box,1-box). Or, seeing it another way, (C,C).

This also makes the Newcomb problem into an anti-coordination game, where you and your copy/estimate try to pick different options. But, since this is not possible, you have to stick to the diagonal. This is why the Newcomb problem can be seen both as an anti-coordination game and a prisoners' dilemma - the differences only occur in the off-diagonal terms that can't be reached.
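To make the two framings concrete, here is a small sketch. The dollar amounts are the standard Newcomb payoffs; treating your action and the prediction as the two "players" is my illustrative reading of the above:

```python
# Payoffs to the real you, indexed by (your action, Omega's prediction).
payoff = {
    ("1-box", "1-box"): 1_000_000,  # box B full
    ("2-box", "2-box"): 1_000,      # box B empty
    ("2-box", "1-box"): 1_001_000,  # off-diagonal: the anti-coordination dream
    ("1-box", "2-box"): 0,          # off-diagonal: the worst case
}

# Since the prediction matches the action, only the diagonal is reachable.
reachable = {k: v for k, v in payoff.items() if k[0] == k[1]}
print(max(reachable, key=reachable.get))  # ('1-box', '1-box'), i.e. (C,C)
```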

[Link] Chuckling a Bit at Microsoft and the PCFG Formalism

5 Daniel_Burfoot 20 March 2017 07:37PM

[Error]: Statistical Death in Damascus

3 Stuart_Armstrong 20 March 2017 07:17PM

Note: This post is in error, I've put up a corrected version of it here. I'm leaving the text in place, as historical record. The source of the error is that I set Pa(S)=Pe(D) and then differentiated with respect to Pa(S), while I should have differentiated first and then set the two values to be the same.

Nate Soares and Ben Levinstein have a new paper out on "Functional Decision Theory", the most recent development of UDT and TDT.

It's good. Go read it.

This post is about further analysing the "Death in Damascus" problem, and showing that Joyce's "equilibrium" version of CDT (causal decision theory) - eCDT, for short - is in a certain sense intermediate between CDT and FDT. eCDT can deal with a certain class of predictors, which I'll call distribution predictors.

 

Death in Damascus

In the original Death in Damascus problem, Death is a perfect predictor. It finds you in Damascus, and says that it has already planned its trip for tomorrow - and it'll be in the same place you will be.

You value surviving at $1000, and can flee to Aleppo for $1.

Classical CDT will put some prior P over Death being in Damascus (D) or Aleppo (A) tomorrow. And then, if P(A)>999/2000, you should stay (S) in Damascus, while if P(A)<999/2000, you should flee (F) to Aleppo.

FDT estimates that Death will be wherever you will, and thus there's no point in F, as that will just cost you $1 for no reason.

But it's interesting what eCDT produces. This decision theory requires that Pe (the equilibrium probability of A and D) be consistent with the action distribution that eCDT computes. Let Pa(S) be the action probability of S. Since Death knows what you will do, Pa(S)=Pe(D).

The expected utility is 1000·Pa(S)Pe(A) + 1000·Pa(F)Pe(D) - Pa(F). At equilibrium, this is 2000·Pe(A)(1-Pe(A)) - Pe(A). And that quantity is maximised when Pe(A)=1999/4000 (and thus the probability of you fleeing is also 1999/4000).
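A quick numeric check of that maximisation (my sketch, not part of the original post):

```python
# Maximise 2000*p*(1-p) - p over a grid that contains 1999/4000 exactly.
best_p = max((i / 4000 for i in range(4001)),
             key=lambda p: 2000 * p * (1 - p) - p)
print(best_p)  # 0.49975 = 1999/4000
```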

This is still the wrong decision, as paying the extra $1 is pointless, even if the agent doesn't pay it with certainty.

So far, nothing interesting: both CDT and eCDT fail. But consider the next example, on which eCDT does not fail.

 

Statistical Death in Damascus

Let's assume now that Death has an assistant, Statistical Death, who is not a perfect predictor, but is a perfect distribution predictor. It can predict the distribution of your actions, but not your actual decision. Essentially, you have access to a source of true randomness that it cannot predict.

It informs you that its probability over whether to be in Damascus or Aleppo will follow exactly the same distribution as yours.

Classical CDT follows the same reasoning as before. So does eCDT: Pa(S)=Pe(D) still holds, since Statistical Death follows the same distribution as you do.

But what about FDT? Well, note that FDT will reach the same conclusion as eCDT. This is because 1000·Pa(S)Pe(A) + 1000·Pa(F)Pe(D) - Pa(F) is the correct expected utility, the Pa(S)=Pe(D) assumption is correct for Statistical Death, and (S,F) is independent of (A,D) once the action probabilities have been fixed.

So on the Statistical Death problem, eCDT and FDT say the same thing.

 

Factored joint distribution versus full joint distributions

What's happening is that there is a joint distribution over (S,F) (your actions) and (D,A) (Death's actions). FDT is capable of reasoning over all types of joint distributions, and fully assessing how its choice of Pa acausally affects Death's choice of Pe.

But eCDT is only capable of reasoning over ones where the joint distribution factors into a distribution over (S,F) times a distribution over (D,A). Within the confines of that limitation, it is capable of (acausally) changing Pe via its choice of Pa.

Death in Damascus does not factor into two distributions, so eCDT fails on it. Statistical Death in Damascus does so factor, so eCDT succeeds on it. Thus eCDT seems best conceived of as a version of FDT that is strangely limited in terms of which joint distributions it's allowed to consider.
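As a sketch of that factorisation condition (with illustrative probabilities, not the equilibrium values from above):

```python
# p[i][j] = joint probability of (your action i, Death's location j),
# with index 0 = Damascus/stay and 1 = Aleppo/flee.

# Death in Damascus: perfectly correlated, does NOT factor into a product.
joint_death = [[0.5, 0.0],
               [0.0, 0.5]]

# Statistical Death: same marginals but independent, so it factors.
joint_statistical = [[0.25, 0.25],
                     [0.25, 0.25]]

def factors(p, tol=1e-12):
    """True iff p is the outer product of its two marginals."""
    row = [sum(r) for r in p]        # your marginal
    col = [sum(c) for c in zip(*p)]  # Death's marginal
    return all(abs(p[i][j] - row[i] * col[j]) < tol
               for i in range(2) for j in range(2))

print(factors(joint_death))        # False: outside eCDT's reach
print(factors(joint_statistical))  # True: eCDT can handle it
```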

Open thread, Mar. 20 - Mar. 26, 2017

2 MrMind 20 March 2017 08:01AM

Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should start on Monday, and end on Sunday.

4. Unflag the two options "Notify me of new top level comments on this article" and "

[Link] The D-Squared Digest One Minute MBA – Avoiding Projects Pursued By Morons 101

1 Benquo 19 March 2017 06:48PM

[Link] 20000 UC Berkeley lectures that were deleted from Youtube

3 username2 19 March 2017 11:40AM

[Link] LW mentioned in influential 2016 Milo article on the Alt-Right

6 James_Miller 18 March 2017 07:30PM
