Psy-Kosh comments on Decision theory: Why Pearl helps reduce “could” and “would”, but still leaves us with at least three alternatives - Less Wrong

29 points · 06 September 2009 06:10AM


Comment author: 09 September 2009 03:17:54PM 0 points [-]

Even if rare, the decision theory used should at least be able to THINK ABOUT THE IDEA of a hardware error or such. Even if it dismisses it as not worth considering, it should at least have some means of describing the situation. I.e., I am capable of at least considering the possibility that I have brain damage or whatever. Our decision theory should be capable of no less.

Sorry if I'm unclear here, but my focus isn't so much on Omega as trying to get a version of TDT that can at least represent that sort of situation.

You seem to more or less have it right, except I'd place the choice more at the selector, or at the "node that represents the specific abstract algorithm that actually gets used".

As per TDT, choose as if you get to decide what the output of the abstract algorithm should be. The catch is that here there's a bit of uncertainty as to which abstract algorithm is being computed. So if, due to a cosmic ray striking and causing a bitflip at a certain point in the computation, you end up actually computing algorithm 1B while Omega models you as being algorithm 1A, then that'd potentially be a weakening of the dependence. (Again, just using the Newcomb problem simply as a way of talking about this.)
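To make the "weakened dependence" point concrete, here is a small Monte Carlo sketch. The fault rate, trial count, and algorithm labels are illustrative assumptions of mine, not figures from the discussion: both Omega's prediction and your actual output derive from the same abstract algorithm, but a rare hardware fault can make you compute a different one, so the agreement is strong but no longer perfect.

```python
import random

random.seed(0)

# Illustrative assumptions (not from the discussion):
FAULT_RATE = 0.01   # chance a bitflip turns algorithm 1A into 1B mid-computation
TRIALS = 100_000

agree = 0
for _ in range(TRIALS):
    intended = "1A"                # the abstract algorithm you attempt to run
    prediction = intended          # Omega models the intended algorithm
    # A cosmic-ray bitflip occasionally makes you compute a different algorithm:
    actual = "1B" if random.random() < FAULT_RATE else intended
    agree += (prediction == actual)

print(agree / TRIALS)  # ~0.99: strong, but no longer perfect, dependence
```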

Comment author: 09 September 2009 03:37:12PM 0 points [-]

You seem to more or less have it right. Except I'd place the choice more at the selector or at the "node that represents the specific abstract algorithm that actually gets used".

Okay, so there'd be another node between "algorithm selector" and "your choice of box"; that would still be an uninterrupted path (chain) and so doesn't affect the result.

The problem, then, is that if you take the agent's choice as being at "algorithm selector", or any descendant through "your choice of box", you've d-separated "your choice of box" from "Omega's prediction", meaning that Omega's prediction is conditionally independent of "your choice of box", given the agent's choice. (Be careful to distinguish "your choice of box" from where we're saying the agent is making a choice.)

But then, we know that's not true, and it would reduce your model to the "innards CSA" that AnnaSalamon gave above. (The parent of "your choice of box" has no parents.)

So I don't think that's an accurate representation of the situation, or consistent with TDT. Therefore the agent's choice must be occurring at the "innards node" in your graph.

(Note: this marks the first time I've drawn a causal Bayesian network and used the concept of d-separation to approach a new problem. w00t! And yes, this would be easier if I uploaded pictures as I went.)
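The d-separation claim above can be checked mechanically. Below is a minimal sketch using the standard moral-graph criterion (restrict to ancestors, marry co-parents, drop directions, delete the conditioning set, test reachability). Since the actual diagram isn't shown, the node names are my own stand-ins for the graph under discussion:

```python
def d_separated(parents, xs, ys, zs):
    """True if every path between xs and ys is blocked given zs.
    parents: dict mapping each node to a list of its parents (a DAG)."""
    # 1. Ancestral subgraph of all nodes involved.
    relevant, frontier = set(), set(xs) | set(ys) | set(zs)
    while frontier:
        n = frontier.pop()
        if n not in relevant:
            relevant.add(n)
            frontier |= set(parents.get(n, []))
    # 2. Moralize: undirected parent-child edges, plus edges between co-parents.
    adj = {n: set() for n in relevant}
    for child in relevant:
        ps = [p for p in parents.get(child, []) if p in relevant]
        for p in ps:
            adj[child].add(p); adj[p].add(child)
        for i in range(len(ps)):
            for j in range(i + 1, len(ps)):
                adj[ps[i]].add(ps[j]); adj[ps[j]].add(ps[i])
    # 3. Remove the conditioning set, then check reachability from xs to ys.
    blocked, seen = set(zs), set()
    stack = [x for x in xs if x not in blocked]
    while stack:
        n = stack.pop()
        if n in seen or n in blocked:
            continue
        seen.add(n)
        stack.extend(adj[n] - seen)
    return not (seen & set(ys))

# My reading of the pre-surgery graph (node names are illustrative):
graph = {
    "innards": [],
    "algorithm_space": [],
    "algorithm_selector": ["innards", "algorithm_space"],
    "omegas_prediction": ["algorithm_space", "innards"],
    "box_contents": ["omegas_prediction"],
    "your_choice_of_box": ["algorithm_selector"],
}

# Before surgery: choice and box contents share ancestors, so they're d-connected.
print(d_separated(graph, {"your_choice_of_box"}, {"box_contents"}, set()))  # False

# Surgery at "algorithm selector" cuts its parent links...
surgered = dict(graph, algorithm_selector=[])
# ...which d-separates the box choice from Omega's side, as argued above.
print(d_separated(surgered, {"your_choice_of_box"}, {"box_contents"}, set()))  # True
```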

Comment author: 09 September 2009 04:25:18PM 0 points [-]

Okay, so there'd be another node between "algorithm selector" and "your choice of box";

Not sure where you're getting that extra node from. The agent's choice is the output of the abstract algorithm they actually end up computing as a result of all the physical processes that occur.

Abstract algorithm space feeds into both your algorithm selector node and the algorithm selector node in Omega's model of you. That's where the dependence comes from.

So given logical uncertainty about the output of the algorithm, wouldn't they be d-connected? They'd be d-separated if the choice was already known... but if it was, there'd be nothing left to choose, right? No uncertainties to be dependent on each other in the first place.

Actually, maybe I ought to draw a diagram of what I have in mind and upload it to imgur or whatever.

Comment author: 10 September 2009 04:24:05AM 0 points [-]

Alright, after thinking about your points some more, and refining the graph, here's my best attempt to generate one that includes your concerns: Link.

Per AnnaSalamon's convention, the agent's would-node-surgery is in a square box, with the rest elliptical and the payoff octagonal. Some nodes are included for clarity that would normally be left out. Dotted lines indicate edges that are cut for surgery when fixing the "would" node. One link I wasn't sure about has a "?", but it's not that important.

Important points: The cutting of parents for the agent's decision preserves d-connection between box choice and box content. Omega observes innards and the attempted selection of algorithm, but retains uncertainty as to how the actual algorithm plays out. Innards contribute to hardware failures in accurately implementing the algorithm (as do [unshown] exogenous factors).
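The structural claim here can be sketched without the (unavailable) diagram. Since the surgered node remains a parent of both the physical box choice and Omega's side of the graph, and nothing on the fork paths is conditioned on, the two outcomes stay d-connected: they retain common ancestors even after the cut. The node names below are my own guesses at the described graph, not the author's exact labels:

```python
def ancestors(parents, node):
    """All strict ancestors of `node` in a DAG given as child -> [parents]."""
    out, stack = set(), list(parents.get(node, []))
    while stack:
        n = stack.pop()
        if n not in out:
            out.add(n)
            stack.extend(parents.get(n, []))
    return out

# My reading of the refined graph described above (names illustrative):
graph = {
    "abstract_algorithm_space": [],
    "innards": [],
    "algorithm_output": ["abstract_algorithm_space", "innards"],
    "hardware_fault": ["innards"],
    "box_choice": ["algorithm_output", "hardware_fault"],
    "omegas_prediction": ["algorithm_output", "innards"],
    "box_content": ["omegas_prediction"],
}

# Surgery at the agent's decision: cut the dotted parent edges of the
# abstract algorithm's output node.
surgered = dict(graph, algorithm_output=[])

# The surgered node is still a shared parent (and innards a shared ancestor,
# via Omega's observation and via hardware faults), so box choice and box
# content remain d-connected after the cut.
common = ancestors(surgered, "box_choice") & ancestors(surgered, "box_content")
print(sorted(common))  # ['algorithm_output', 'innards']
```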

Comment author: 10 September 2009 04:31:27AM 0 points [-]

Just placing this here now as sort of a promise to follow up: I'm running on insufficient sleep, so I can only do "easy stuff" at the moment. :) I certainly plan on following up on our conversation in more detail once I get a good night's sleep.

Comment author: 10 September 2009 05:03:09AM 0 points [-]

Comment author: 10 September 2009 09:08:10PM 0 points [-]

Having looked at your diagram now, that's not quite what I have in mind. For instance, "what I attempt to implement" is kind of an "innards" issue rather than deserving a separate box in this context.

Actually, I realized that what I want to do is kind of weird, sort of amounting to doing surgery on a node while being uncertain as to what node you're doing the surgery on. (Or, alternately, being uncertain about certain details of the causal structure). I'm going to have to come up with some other notation to represent this.

Before we continue... do you have any objection to me making a top-level post for this (drawing out an attempt to diagram what I have in mind, and so on)? Frankly, even if my solution is complete nonsense, I really do think this problem needs to be dealt with as a larger issue.

I've begun working on the diagram, though I'm still thinking out the exact way to draw it. I'll probably have to use a crude hack of simply showing lots of surgery points and basically saying "do surgery at each of these one at a time, weighing the outcome by the probability that that's the one you're actually effectively operating on". (This will (hopefully) make more sense in the larger post.)
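The "crude hack" above has a direct computational reading: evaluate each action under surgery at each candidate node, then weight by the probability that that candidate is the one really being set. Everything below is a toy sketch with made-up numbers; the payoffs assume Omega's prediction tracks the algorithm's output (one-boxing) regardless of where the surgery actually lands:

```python
def weighted_surgery_value(action, candidates, evaluate):
    """candidates: [(node, weight), ...] with relative probability weights.
    Do the surgery at each candidate node in turn and weight the result."""
    total = sum(w for _, w in candidates)
    return sum(w * evaluate(node, action) for node, w in candidates) / total

# Hypothetical payoffs: if the surgery really sets the abstract algorithm's
# output, Omega's prediction moves with it; if (due to a hardware fault) you
# are really setting only the physical output, the prediction stays fixed.
def evaluate(node, action):
    payoffs = {
        ("algorithm_output", "one-box"): 1_000_000,
        ("algorithm_output", "two-box"): 1_000,
        ("physical_output", "one-box"): 1_000_000,   # prediction already fixed
        ("physical_output", "two-box"): 1_001_000,
    }
    return payoffs[(node, action)]

# 99:1 odds that the surgery lands where you think it does (illustrative).
candidates = [("algorithm_output", 99), ("physical_output", 1)]
for action in ("one-box", "two-box"):
    print(action, weighted_surgery_value(action, candidates, evaluate))
# one-box 1000000.0
# two-box 11000.0
```

With these (made-up) weights, one-boxing still dominates: the small chance that the surgery misses the abstract algorithm isn't enough to make the two-box payoff worthwhile.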

Comment author: 10 September 2009 09:28:11PM *  0 points [-]

Having looked at your diagram now, that's not quite what I have in mind. For instance, "what I attempt to implement" is kinda an "innards" issue rather than deserving a separate box in this context.

Grr! That was my first suggestion!

Actually, I realized that what I want to do is kind of weird, sort of amounting to doing surgery on a node while being uncertain as to what node you're doing the surgery on. (Or, alternately, being uncertain about certain details of the causal structure). I'm going to have to come up with some other notation to represent this. ... I'll probably have to use a crude hack of simply showing lots of surgery points and basically saying "do surgery at each of these one at a time, weighing the outcome by the probability that that's the one you're actually effectively operating on"

Not that weird, actually. I think you can do that by building a probabilistic twin network. See the good Pearl summary, slide 26. Instead of using it for a counterfactual, surgically set a different node in each subnetwork, and also the probabilities coming from the common parent (U in slide 26) to represent the probability of each subnetwork being the right one. Then use all terminal nodes across both subnetworks as the outcome set for calculating probability.

Though I guess that amounts to what you were planning anyway. Another way might be to use multiple dependent exogenous variables that capture the effect of cutting one edge when you thought you were cutting another.
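The twin-network construction described above can be sketched structurally: duplicate the endogenous nodes, share the exogenous ones, and in each copy cut a different node's parents, wiring the cut node to a selector U whose distribution carries the probability that that copy is the "right" one. The node names and the exact U-wiring are my own illustration of the proposal, not slide 26 itself:

```python
def twin_network(parents, exogenous, surgeries):
    """Build one graph holding one copy of the endogenous nodes per suffix in
    `surgeries` ({suffix: node_to_cut}). Exogenous nodes are shared across
    copies; in each copy the chosen node's ordinary parents are replaced by
    the selector U, whose distribution weights the copies."""
    twin = {"U": []}
    twin.update({u: [] for u in exogenous})
    for suffix, cut_node in surgeries.items():
        for child, ps in parents.items():
            if child in exogenous:
                continue
            name = child + suffix
            if child == cut_node:
                twin[name] = ["U"]   # surgically set; weighted by U
            else:
                twin[name] = [p if p in exogenous else p + suffix for p in ps]
    return twin

# A small base graph in the spirit of this thread (names illustrative):
base = {
    "innards": [],
    "algorithm_output": ["innards"],
    "box_choice": ["algorithm_output"],
    "box_content": ["algorithm_output"],
}
twin = twin_network(
    base,
    exogenous={"innards"},
    surgeries={"_A": "algorithm_output", "_B": "box_choice"},
)
for node in sorted(twin):
    print(node, "<-", twin[node])
```

All terminal nodes across both copies (here `box_content_A` and `box_content_B`, plus the un-surgered choice nodes) then form the outcome set, exactly as the comment suggests.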

Before we continue... do you have any objection to me making a top level posting for this

No problem, just make sure to link this discussion.

Comment author: 10 September 2009 10:04:57PM 0 points [-]

Grr! That was my first suggestion!

And I said that was more or less right, didn't I? I.e., "what I attempt to implement" ~= "innards", which points to "selector"/"output", which selects what actually gets used.

Looking through the second link (i.e., the slides) now.

Comment author: 09 September 2009 05:15:43PM *  0 points [-]

Okay, I think there are some terminological issues to sort out here, resulting from our divergence from AnnaSalamon's original terminology.

The discussion I thought we were having corresponds to the CSA's calculation of "woulds". And when you calculate a would, you surgically set the output of the node, which means cutting the links to its parents.

Is this where we are? Are you saying the "would" should be calculated from surgery on the "algorithm selector" node (which points to "choice of box")? Because in that case, the links to "algorithm selector" from "algorithm space" and "innards" are cut, which d-separates them. (ETA: to clarify: d-separates "box choice" from Omega and its descendants.)

OTOH, even if you follow my suggestion and do surgery on "innards", the connection between "box choice" and "omega's prediction" is only a weak link -- algorithm space is huge.

Perhaps you also want an arrow from "algorithm selector" to "Omega's prediction" (you don't need a separate node for "Omega's model of your selector" because it chains). Then, the possible difference between the box choice and Omega's prediction emerges from the independent error term pointing to box choice (which accounts for cosmic rays, hardware errors, etc.). There is a separate (implicit) "error parent" for the "Omega's prediction" node, which accounts for shortcomings of Omega's model.

This preserves d-connection (between box choice and box content) after a surgery on "algorithm selector". Is that what you're aiming for?

(Causal Bayes nets are kinda fun!)