
Comment author: Caspar42 16 January 2018 02:20:22PM 0 points

The issue with this example (and many similar ones) is that to decide between interventions on a variable X from the outside, EDT needs an additional node representing that outside intervention, whereas Pearl-CDT can simply do(X) without the need for an additional variable. If you do add these variables, then conditioning on that variable is the same as intervening on the thing that the variable intervenes on. (Cf. section 3.2.2 "Interventions as variables" in Pearl's Causality.)
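To illustrate this with a sketch (the graph and all numbers below are invented for illustration): in a toy model with a confounder U of X and S, ordinary conditioning on X differs from intervening on X, but once an explicit intervention node I is added as a parent of X, plain conditioning on I = "set X = x" reproduces p(s | do(x)):

```python
# Toy model: U -> X, U -> S, X -> S, plus an explicit intervention
# node I as an extra parent of X (cf. section 3.2.2 of Causality).
# All probabilities below are made up for illustration.

p_u = {0: 0.5, 1: 0.5}                      # confounder
p_x1_given_u = {0: 0.2, 1: 0.8}             # p(X=1 | u) when I = "idle"
p_s1_given_xu = {(0, 0): 0.1, (0, 1): 0.4,
                 (1, 0): 0.5, (1, 1): 0.9}  # p(S=1 | x, u)

def p_x(x, u, i):
    """Mechanism of X: natural when I is 'idle', forced otherwise."""
    if i == "idle":
        return p_x1_given_u[u] if x == 1 else 1 - p_x1_given_u[u]
    return 1.0 if x == i[1] else 0.0        # i == ("set", value)

def joint(u, x, s, i):
    p_s = p_s1_given_xu[(x, u)]
    return p_u[u] * p_x(x, u, i) * (p_s if s == 1 else 1 - p_s)

def p_s1_given_x(xval):
    """Ordinary conditioning in the unintervened model."""
    num = sum(joint(u, xval, 1, "idle") for u in (0, 1))
    den = sum(joint(u, xval, s, "idle") for u in (0, 1) for s in (0, 1))
    return num / den

def p_s1_do_x(xval):
    """Truncated-product formula: delete X's mechanism, fix X."""
    return sum(p_u[u] * p_s1_given_xu[(xval, u)] for u in (0, 1))

def p_s1_given_i(i):
    """Plain conditioning on the intervention node I."""
    num = sum(joint(u, x, 1, i) for u in (0, 1) for x in (0, 1))
    den = sum(joint(u, x, s, i) for u in (0, 1) for x in (0, 1) for s in (0, 1))
    return num / den

print(p_s1_given_x(1))           # ≈ 0.82 (confounded observation)
print(p_s1_do_x(1))              # ≈ 0.70 (intervention)
print(p_s1_given_i(("set", 1)))  # ≈ 0.70 (same as the intervention)
```

Conditioning on the intervention node matches do(X) exactly, while conditioning on X itself does not, because X still has the incoming arrow from U.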

Comment author: Caspar42 17 December 2017 08:49:34AM 0 points

I wrote a summary of Hanson's The Age of Em, in which I focus on the bits of information that may be policy-relevant for effective altruists. For instance, I summarize what Hanson says about em values and also have a section about AI safety.

Comment author: Caspar42 11 November 2017 08:56:34AM *  0 points

Great post, obviously.

You argue that signaling often leads to a distribution of intellectual positions following this pattern: in favor of X with simple arguments / in favor of Y with complex arguments / in favor of something like X with simple arguments

I think it’s worth noting that the pattern of positions often looks different. For example, there is: in favor of X with simple arguments / in favor of Y with complex arguments / in favor of something like X with surprising and even more sophisticated and hard-to-understand arguments

In fact, I think many of your examples follow the latter pattern. For example, the market efficiency arguments in favor of libertarianism seem harder-to-understand and more sophisticated than most arguments for liberalism. Maybe it fits your pattern better if libertarianism is justified purely on the basis of expert opinion.

Similarly, the justification for the “meta-contrarian” position in "don't care about Africa / give aid to Africa / don't give aid to Africa" is more sophisticated than the reasons for the contrarian or naive positions.

But as has been pointed out, along with the gigantic cost, death does have a few small benefits. It lowers overpopulation, it allows the new generation to develop free from interference by their elders, it provides motivation to get things done quickly.

I’m not sure whether overpopulation is a good example. I think in many circles that point would signal naivety, and people would respond with something deep-sounding about how life is sacred. (The same is true for “it’s good if old people die because that saves money and allows the government to build more schools”.) Here, too, I would argue that your pattern doesn’t quite describe the set of commonly held positions, as it omits the naive pro-death position.

Comment author: IlyaShpitser 06 October 2017 03:48:37PM *  1 point

I agree that in situations where A only has outgoing arrows, p(s | do(a)) = p(s | a), but this class of situations is not the "Newcomb-like" situations. In particular, classical smoking lesion has a confounder with an incoming arrow into a.

Maybe we just disagree on what "Newcomb-like" means? To me what makes a situation "Newcomb-like" is your decision algorithm influencing the world through something other than your decision (as happens in the Newcomb problem via Omega's prediction). In smoking lesion, this does not happen, your decision algorithm only influences the world via your action, so it's not "Newcomb-like" to me.

Comment author: Caspar42 06 October 2017 04:00:55PM 0 points

I agree that in situations where A only has outgoing arrows, p(s | do(a)) = p(s | a), but this class of situations is not the "Newcomb-like" situations.

What I meant to say is that the situations where A only has outgoing arrows are all not Newcomb-like.

Maybe we just disagree on what "Newcomb-like" means? To me what makes a situation "Newcomb-like" is your decision algorithm influencing the world through something other than your decision (as happens in the Newcomb problem via Omega's prediction). In smoking lesion, this does not happen, your decision algorithm only influences the world via your action, so it's not "Newcomb-like" to me.

Ah, okay. Yes, in that case, it seems to be only a terminological dispute. As I say in the post, I would define Newcomb-like-ness via a disagreement between EDT and CDT which can mean either that they disagree about what the right decision is, or, more naturally, that their probabilities diverge. (In the latter case, the statement you commented on is true by definition and in the former case it is false for the reason I mentioned in my first reply.) So, I would view the Smoking lesion as a Newcomb-like problem (ignoring the tickle defense).

Comment author: MakoYass 08 September 2017 08:32:07AM *  2 points

Aye, I've been meaning to read your paper for a few months now. (Edit: Hah. It dawns on me it's been a little less than a month since it was published? It's been a busy less-than-month for me I guess.)

I should probably say where we're at right now... I came up with an outline of a very reductive proof that there isn't enough expected anthropic measure in higher universes to make adhering to Life's Pact profitable (coupled with a realization that patternist continuity of existence isn't meaningful to living things if it's accompanied by a drastic reduction in anthropic measure). Having discovered this proof outline makes compat uninteresting enough to me that writing it down has not thus far seemed worthwhile. Christian is mostly unmoved by what I've told him of it, but I'm not sure whether that's just because his attention is elsewhere right now. I'll try to expound it for you, if you want it.

Comment author: Caspar42 06 October 2017 02:59:01PM 0 points

Yes, the paper is relatively recent, but in May I published a talk on the same topic. I also asked on LW whether someone would be interested in giving feedback a month or so before actually publishing the paper.

Do you think your proof/argument is also relevant for my multiverse-wide superrationality proposal?

Comment author: IlyaShpitser 23 September 2017 12:44:47PM *  0 points

I guess:

(a) p(s | do(a)) is in general not equal to p(s | a). The entire point of causal inference is characterizing that difference.

(b) I looked at section 3.2.2, and did not see anything there supporting the claim.

(c) We have known since the '90s that p(s | do(a)) and p(s | a) disagree on classical decision theory problems, the standard smoking lesion being one, and in general on any problem where you shouldn't "manage the news."

So I got super confused and stopped reading.

As cousin_it said somewhere at some point (and I say in my youtube talk), the confusing part of Newcomb is representing the situation correctly, and that is something you can solve by playing with graphs, essentially.

Comment author: Caspar42 06 October 2017 02:53:44PM 0 points

So, the class of situations in which p(s | do(a)) = p(s | a) that I was alluding to is the one in which A has only outgoing arrows (or all the values of A’s predecessors are known). (I guess this could be generalized to: p(s | do(a)) = p(s | a) if A d-separates its predecessors from S?) (Presumably this stuff follows from Rule 2 of Theorem 3.4.1 in Causality.)

All problems in which you intervene in an isolated system from the outside are of this kind and so EDT and CDT make the same recommendations for intervening in a system from the outside. (That’s similar to the point that Pearl makes in section 3.2.2 of Causality: You can model the do-interventions by adding action nodes without predecessors and conditioning on these action nodes.)

The Smoking lesion is an example of a Newcomb-like problem where A has an inbound arrow that leads p(s | do(a)) and p(s | a) to differ. (That said, I think the smoking lesion does not actually work as a Newcomb-like problem, see e.g. chapter 4 of Arif Ahmed’s Evidence, Decision and Causality.)
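As a concrete sketch of that divergence (all numbers are invented, and in this simple version smoking has no causal effect on cancer at all, so the do-probability is just the base rate):

```python
# Smoking lesion as a toy Bayes net: Lesion -> Smoking and
# Lesion -> Cancer, with no arrow from Smoking to Cancer.
# All numbers are made up for illustration.

p_lesion = 0.1
p_smoke_given_lesion = {True: 0.9, False: 0.2}
p_cancer_given_lesion = {True: 0.8, False: 0.05}

def p_l(l):
    return p_lesion if l else 1 - p_lesion

def p_cancer_given_smoke():
    """p(cancer | smoking): inflated by confounding through the lesion."""
    num = sum(p_l(l) * p_smoke_given_lesion[l] * p_cancer_given_lesion[l]
              for l in (True, False))
    den = sum(p_l(l) * p_smoke_given_lesion[l] for l in (True, False))
    return num / den

def p_cancer_do_smoke():
    """p(cancer | do(smoking)): back-door adjustment over the lesion.
    Smoking has no causal effect here, so this equals the base rate."""
    return sum(p_l(l) * p_cancer_given_lesion[l] for l in (True, False))

print(p_cancer_given_smoke())  # ≈ 0.30
print(p_cancer_do_smoke())     # ≈ 0.125
```

The incoming arrow into the smoking variable is exactly what makes the two quantities come apart here.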

Similarly, you could model Newcomb’s problem by introducing a logical node as a predecessor of your decision and the result of the prediction. (If you locate “yourself” in the logical node and the logical node does not have any predecessors, then CDT and EDT agree again.)

Of course, in the real world, all problems are in theory Newcomb-like because there are always some incoming arrows into your decision. But in practice, most problems are nearly non-Newcomb-like because, although there may be an unblocked path from my action to the value of my utility function, that path is usually too long/complicated to be useful. E.g., if I raise my hand now, that would mean that the state of the world 1 year ago was such that I raise my hand now. And the world state 1 year ago causes how much utility I have. But unless I’m in Arif Ahmed’s “Betting on the Past”, I don’t know which class of world states 1 year ago (the ones that lead to me raising my hand or the ones that cause me not to raise my hand) causes me to have more utility. So, EDT couldn’t try to exploit that way of changing the past.

Comment author: Dagon 22 September 2017 01:02:11AM 1 point

Actually, it would be interesting to break down the list of reasons people might have for two-boxing, even if we haven't polled for reasons, only decisions. From https://en.wikipedia.org/wiki/Newcomb%27s_paradox, the outcomes are:

  • a: Omega predicts two-box, player two-boxes, payout $1,000
  • b: Omega predicts two-box, player one-boxes, payout $0
  • c: Omega predicts one-box, player two-boxes, payout $1,001,000
  • d: Omega predicts one-box, player one-boxes, payout $1,000,000

I claim that one-boxers do not believe b and c are possible because Omega is cheating or a perfect predictor (same thing), and reason that d > a. And further I think that two-boxers believe that all 4 are possible (b and c being "tricking Omega") and reason that c > d and a > b, so two-boxing dominates one-boxing.
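Both lines of reasoning can be made explicit in a few lines (a sketch: the 0.99 predictor accuracy is an illustrative assumption, and the payoffs are the ones from the outcome list above):

```python
# Expected utilities in Newcomb's problem, using the payoffs above.
# EDT conditions on the action: with an accurate predictor, the
# matching prediction is likely given whatever action is taken.
def edt_value(action, accuracy=0.99):  # accuracy: illustrative assumption
    if action == "one-box":
        return accuracy * 1_000_000 + (1 - accuracy) * 0
    return accuracy * 1_000 + (1 - accuracy) * 1_001_000  # two-box

# CDT holds the (causally upstream) prediction fixed, comparing
# actions at a fixed probability q that one-boxing was predicted.
def cdt_value(action, q):
    if action == "one-box":
        return q * 1_000_000 + (1 - q) * 0
    return q * 1_001_000 + (1 - q) * 1_000  # two-box

print(edt_value("one-box"))  # ≈ 990000
print(edt_value("two-box"))  # ≈ 11000
# For every fixed q, two-boxing beats one-boxing by exactly $1000,
# which is the dominance argument:
for q in (0.0, 0.5, 1.0):
    print(cdt_value("two-box", q) - cdt_value("one-box", q))
```

The two computations formalize the disagreement: the EDT comparison favors one-boxing for any reasonably high accuracy, while the CDT comparison favors two-boxing for every fixed prediction.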

Aside from "lizard man", what are the other reasons that lead to two-boxing?

Comment author: Caspar42 28 September 2017 07:15:02AM 0 points

I claim that one-boxers do not believe b and c are possible because Omega is cheating or a perfect predictor (same thing)

Note that Omega isn't necessarily a perfect predictor. Most one-boxers would also one-box if Omega were merely a near-perfect predictor.

Aside from "lizard man", what are the other reasons that lead to two-boxing?

I think I could pass an intellectual Turing test (the main arguments in either direction aren't very sophisticated), but maybe it's easiest to just read, e.g., p. 151ff. of James Joyce's The Foundations of Causal Decision Theory and note how Joyce understands the problem in pretty much the same way that a one-boxer would.

In particular, Joyce agrees that causal decision theorists would want to self-modify to become one-boxers. (I have heard many two-boxers admit to this.) This doesn't make sense if they don't believe in Omega's prediction abilities.

Comment author: Wei_Dai 23 September 2017 10:02:37PM 2 points

It seems to me that the original UDT already incorporated this type of approach to solving naturalized induction. See here and here for previous discussions. Also, UDT, as originally described, was intended as a variant of EDT (where the "action" in EDT is interpreted as "this source code implements this policy (input/output map)"). MIRI people seem to mostly prefer a causal variant of UDT, but my position has always been that the evidential variant is simpler, so let's go with that until there's conclusive evidence that the evidential variant is not good enough.

LZEDT seems to be more complex than UDT but it's not clear to me that it solves any additional problems. If it's supposed to have advantages over UDT, can you explain what those are?

Comment author: Caspar42 28 September 2017 06:58:23AM *  0 points

I hadn’t seen these particular discussions, although I was aware of the fact that UDT and other logical decision theories avoid building phenomenological bridges in this way. I also knew that others (e.g., the MIRI people) were aware of this.

I didn't know you preferred a purely evidential variant of UDT. Thanks for the clarification!

As for the differences between LZEDT and UDT:

  • My understanding was that there is no full formal specification of UDT. The counterfactuals seem to be given by some unspecified mathematical intuition module. LZEDT, on the other hand, seems easy to specify formally (assuming a solution to naturalized induction). (That said, if UDT is just the updateless-evidentialist flavor of logical decision theory, it should be easy to specify as well. I haven’t seen people characterize UDT in this way, but perhaps this is because MIRI’s conception of UDT differs from yours?)
  • LZEDT isn’t logically updateless.
  • LZEDT doesn’t do explicit optimization of policies. (Explicit policy optimization is the difference between UDT1.1 and UDT1.0, right?)

(Based on a comment you made on an earlier post of mine, it seems that UDT and LZEDT reason similarly about medical Newcomb problems.)

Anyway, my reason for writing this isn’t so much that LZEDT differs from other decision theories. (As I say in the post, I actually think LZEDT is equivalent to the most natural evidentialist logical decision theory — which has been considered by MIRI at least.) Instead, it’s that I have a different motivation for proposing it. My understanding is that the LWers’ search for new decision theories was not driven by the BPB issue (although some of the motivations you listed in 2012 are related to it). Instead it seems that people abandoned EDT — the most obvious approach — mainly for reasons that I don’t endorse. E.g., the TDT paper seems to give medical Newcomb problems as the main argument against EDT. It may well be that looking beyond EDT to avoid naturalized induction/BPB leads to the same decision theories as these other motivations.

Comment author: entirelyuseless 22 September 2017 11:32:13AM 2 points

If statements about whether an algorithm exists are not objectively true or false, there is also no objectively correct decision theory, since the existence of agents is not objective in the first place. Of course you might even agree with this but consider it not to be an objection, since you can just say that decision theory is something we want to do, not something objective.

Comment author: Caspar42 25 September 2017 11:55:56PM *  1 point

Yes, I share the impression that the BPB problem implies some amount of decision-theory relativism. That said, one could argue that decision theories cannot be objectively correct anyway. In most areas, statements can only be justified relative to some foundation. Probability assignments are correct relative to a prior, the truth of theorems depends on axioms, and whether you should take some action depends on your goals (or meta-goals). Priors, axioms, and goals themselves, on the other hand, cannot be justified (unless you have some meta-priors, meta-axioms, etc., but I think the chain has to end at some point; see https://en.wikipedia.org/wiki/Regress_argument). Perhaps decision theories are similar to priors, axioms, and terminal values?

Comment author: Brian_Tomasik 23 September 2017 05:11:16AM 1 point

However, if you believe that the agent in world 2 is not an instantiation of you, then naturalized induction concludes that world 2 isn't actual and so pressing the button is safe.

By "isn't actual" do you just mean that the agent isn't in world 2? World 2 might still exist, though?

Comment author: Caspar42 23 September 2017 06:06:41AM 1 point

No, I actually mean that world 2 doesn't exist. In this experiment, the agent believes that either world 1 or world 2 is actual and that they cannot be actual at the same time. So, if the agent thinks that it is in world 1, world 2 doesn't exist.
