Comment author: 28 April 2014 04:25:25AM *  11 points [-]

I was considering the Shangri-La diet, but now I'm nervous.

Comment author: 20 May 2014 05:16:29PM 3 points [-]

According to information his family graciously posted to his blog, the cause of death was occlusive coronary artery disease with cardiomegaly.

http://blog.sethroberts.net/

Comment author: 26 March 2013 08:17:57PM *  13 points [-]

Wow, this is great work--congratulations! If it pans out, it bridges a really fundamental gap.

I'm still digesting the idea, and perhaps I'm jumping the gun here, but I'm trying to envision a UDT (or TDT) agent using the sense of subjective probability you define. It seems to me that an agent can get into trouble even if its subjective probability meets the coherence criterion. If that's right, some additional criterion would have to be required. (Maybe that's what you already intend? Or maybe the following is just muddled.)

Let's try invoking a coherent P in the case of a simple decision problem for a UDT agent. First, define G <--> P("G") < 0.1. Then consider the 5&10 problem:

• If the agent chooses A, payoff is 10 if ~G, 0 if G.

• If the agent chooses B, payoff is 5.

And suppose the agent can prove the foregoing. Then unless I'm mistaken, there's a coherent P with the following assignments:

P(G) = 0.1

P(Agent()=A) = 0

P(Agent()=B) = 1

P(G | Agent()=B) = P(G) = 0.1

And P assigns 1 to each of the following:

P("Agent()=A") < epsilon

P("Agent()=B") > 1-epsilon

P("G & Agent()=B") / P("Agent()=B") = 0.1 +- epsilon

P("G & Agent()=A") / P("Agent()=A") > 0.5

The last inequality is consistent with the agent indeed choosing B, because the postulated conditional probability of G makes the expected payoff given A less than the payoff given B.
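The payoff comparison claimed here is easy to check numerically. This is just a sketch of the arithmetic, using a hypothetical value 0.6 for the conditional probability P(G | Agent()=A), since the assignment above only requires it to exceed 0.5:

```python
# Hypothetical numbers consistent with the assignments above.
p_g_given_A = 0.6   # any value > 0.5 works
p_g_given_B = 0.1

# Choosing A pays 10 if ~G, 0 if G; choosing B pays 5 regardless.
eu_A = 10 * (1 - p_g_given_A) + 0 * p_g_given_A
eu_B = 5

assert eu_A < eu_B  # so this P rationalizes the agent's choice of B
```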

Is that P actually incoherent for reasons I'm overlooking? If not, then we'd need something beyond coherence to tell us which P a UDT agent should use, correct?

(edit: formatting)

Comment author: 09 April 2013 09:00:04PM 1 point [-]

It occurs to me that my references above to "coherence" should be replaced by "coherence & P(T)=1 & reflective consistency". That is, there exists (if I understand correctly) a P that has all three properties, and that assigns the probabilities listed above. Therefore, those three properties would not suffice to characterize a suitable P for a UDT agent. (Not that anyone has claimed otherwise.)

Comment author: 11 September 2011 03:01:51PM 0 points [-]

Thanks!

Comment author: 11 September 2011 08:34:25PM 14 points [-]

If John's physician prescribed a burdensome treatment because of a test whose false-positive rate is 99.9999%, John needs a lawyer rather than a statistician. :)

Comment author: 27 May 2011 01:17:23PM 4 points [-]

In April 2010 Gary Drescher proposed the "Agent simulates predictor" problem (ASP), which shows how agents with lots of computational power sometimes fare worse than agents with limited resources.

Just to give due credit: Wei Dai and others had already discussed Prisoner's Dilemma scenarios that exhibit a similar problem, which I then distilled into the ASP problem.

Comment author: 12 January 2011 01:12:16AM 0 points [-]

Interesting. This would seem to return it to the class of decision-determined problems, and for an illuminating reason - the algorithm is only run with one set of information - just as in Newcomb's problem the algorithm has only one set of information regardless of the contents of the boxes.

This way of thinking makes Vladimir's position more intuitive. To put words in his mouth, instead of being not decision determined, the "unfixed" version is merely two-decision determined, and then left undefined for half the bloody problem.

Comment author: 12 January 2011 02:30:39PM 0 points [-]

and for an illuminating reason - the algorithm is only run with one set of information

That's not essential, though (see the dual-simulation variant in Good and Real).

Comment author: 06 January 2011 11:33:31AM *  1 point [-]

By "tit for tat" I am referring to the notable strategy in the iterated prisoner's dilemma. Agents using this strategy will keep cooperating as long as the other person cooperates, but if the other person defects then they will defect too. It's an excellent strategy by many measures, beating out more complicated strategies, and we probably have something like it built into our heads.
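The strategy described above is simple enough to state as a one-line function; this is an illustrative sketch (the "C"/"D" encoding of moves is my own):

```python
def tit_for_tat(opponent_history):
    """Cooperate on the first round; thereafter mirror the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

# Against an opponent who always defects, it cooperates once and then retaliates:
moves = [tit_for_tat([]), tit_for_tat(["D"]), tit_for_tat(["D", "D"])]
```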

By analogy, a "tit for tat" strategy in Newcomb's problem with transparent boxes would be to one-box if the Predictor "cooperates," and two-box if the Predictor "defects."

But what does the Predictor see when it looks into the future of an agent with this strategy? Either way it chooses, it will have chosen correctly, so the Predictor needs some other, non-decision-determined criterion to decide.
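The self-confirming nature of both predictions can be made explicit with a toy sketch (a hypothetical encoding of the agent's policy, not anything from the original post):

```python
def agent(box_full):
    # "Tit for tat" in transparent Newcomb: one-box iff the
    # Predictor "cooperated" by filling the box.
    return "one-box" if box_full else "two-box"

# Whichever way the Predictor predicts, the prediction confirms itself,
# so correctness alone cannot tell the Predictor which way to fill the box:
assert agent(True) == "one-box"    # predict one-boxing -> fill box -> agent one-boxes
assert agent(False) == "two-box"   # predict two-boxing -> empty box -> agent two-boxes
```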

Alternately you could think of it as making the decision-type of the agent undefined (at the time the Predictor is filling the boxes), thus making it impossible for the problem to have any well-defined decision-determined statement.

Comment author: 11 January 2011 09:38:52PM *  4 points [-]

Just to clarify, I think your analysis here doesn't apply to the transparent-boxes version that I presented in Good and Real. There, the predictor's task is not necessarily to predict what the agent does for real, but rather to predict what the agent would do in the event that the agent sees $1M in the box. (That is, the predictor simulates what--according to physics--the agent's configuration would do, if presented with the $1M environment; or equivalently, what the agent's 'source code' returns if called with the $1M argument.)

If the agent would one-box if $1M is in the box, but the predictor leaves the box empty, then the predictor has not predicted correctly, even if the agent (correctly) two-boxes upon seeing the empty box.
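The distinction can be sketched in code. Here the predictor's correctness criterion is applied to the $1M branch only, as described above; the agent's policy is a hypothetical stand-in:

```python
def agent(sees_million):
    # Hypothetical agent: would one-box upon seeing $1M, two-box otherwise.
    return "one-box" if sees_million else "two-box"

# Per the Good and Real setup, the predictor's task is to predict
# agent(True) -- the $1M branch -- not the agent's actual behavior.
prediction_says_one_box = (agent(True) == "one-box")
box_filled = False  # suppose the predictor nonetheless left the box empty

# Then the predictor has mispredicted, even though the agent
# (correctly) two-boxes upon seeing the empty box:
assert prediction_says_one_box and not box_filled
assert agent(False) == "two-box"
```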

Comment author: 14 November 2010 10:55:44PM *  1 point [-]

1) 2TDT-1CDT.

How is this not resolved? (My comment and the following Eliezer's comment; I didn't re-read the rest of the discussion.)

2) "Agent simulates predictor"

This basically says that the predictor is a rock: it doesn't depend on the agent's decision, which makes the agent lose because the problem statement stipulates (outside the predictor's own decision process) that it must be a two-boxing rock rather than a one-boxing rock.

3) Same as (2). We stipulate the weak player to be a $9 rock. Nothing to be surprised about.

4) "A/B/~CON"

Requires ability to reason under logical uncertainty, comparing theories of consequences and not just specific possible utilities following from specific possible actions. Under any reasonable axioms for valuation of sets of consequences, action B wins.

5) The general case of agents playing a non-zero-sum game against each other, knowing each other's source code.

Without good understanding of reasoning under logical uncertainty, this one remains out.

Comment author: 18 November 2010 05:54:18PM *  1 point [-]

2) "Agent simulates predictor"

This basically says that the predictor is a rock: it doesn't depend on the agent's decision,

True, it doesn't "depend" on the agent's decision in the specific sense of "dependency" defined by currently-formulated UDT. The question (as with any proposed DT) is whether that's in fact the right sense of "dependency" (between action and utility) to use for making decisions. Maybe it is, but the fact that UDT itself says so is insufficient reason to agree.

[EDIT: fixed typo]

Comment author: 28 February 2010 07:25:46PM *  2 points [-]

If there are multiple translations, then either the translations are all mathematically equivalent, in the sense that they agree on the output for every combination of inputs, or the problem is underspecified. (This seems like it ought to be the definition for the word underspecified. It's also worth noting that all game-theory problems are underspecified in this sense, since they contain an opponent you know little about.)

Now, if two world programs were mathematically equivalent but a decision theory gave them different answers, then that would be a serious problem with the decision theory. And this does, in fact, happen with some decision theories; in particular, it happens to theories that work by trying to decompose the world program into parts, when those parts are related in a way that the decision theory doesn't know how to handle. If you treat the world-program as an opaque object, though, then all mathematically equivalent formulations of it should give the same answer.
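As a toy illustration of "mathematically equivalent" in this sense (these are invented examples, not the P/P1 programs from the thread): two syntactically different world programs that agree on the output for every input should, if treated as opaque objects, receive the same answer from any sane decision theory.

```python
def world_v1(action):
    return 10 if action == "A" else 5

def world_v2(action):
    # Syntactically different decomposition, mathematically equivalent.
    return 5 + (5 if action == "A" else 0)

# Equivalent on every input, hence interchangeable as opaque objects:
assert all(world_v1(a) == world_v2(a) for a in ("A", "B", "C"))
```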

Comment author: 28 February 2010 08:40:19PM *  1 point [-]

I assume (please correct me if I'm mistaken) that you're referring to the payout-value as the output of the world program. In that case, a P-style program and a P1-style program can certainly give different outputs for some hypothetical outputs of S (for the given inputs). However, both programs' payout-outputs will be the same for whatever turns out to be the actual output of S (for the given inputs).

P and P1 have the same causal structure. And they have the same output with regard to (whatever is) the actual output of S (for the given inputs). But P and P1 differ counterfactually as to what the payout-output would be if the output of S (for the given inputs) were different than whatever it actually is.

So I guess you could say that what's unspecified are the counterfactual consequences of a hypothetical decision, given the (fully specified) physical structure of the scenario. But figuring out the counterfactual consequences of a decision is the main thing that the decision theory itself is supposed to do for us; that's what the whole Newcomb/Prisoner controversy boils down to. So I think it's the solution that's underspecified here, not the problem itself. We need a theory that takes the physical structure of the scenario as input, and generates counterfactual consequences (of hypothetical decisions) as outputs.
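The agree-on-the-actual, differ-on-the-counterfactual situation can be sketched with toy stand-ins (these are not the actual P/P1 programs; the payoff values and the "actual" output of S are hypothetical):

```python
ACTUAL = "one-box"  # whatever S's actual output turns out to be (hypothetical)

def payout_P(s_output):
    return 1_000_000 if s_output == "one-box" else 1_000

def payout_P1(s_output):
    # Agrees with payout_P on the actual output of S,
    # but differs on counterfactual outputs.
    return 1_000_000 if s_output == ACTUAL else 0

assert payout_P(ACTUAL) == payout_P1(ACTUAL)        # same for the actual output
assert payout_P("two-box") != payout_P1("two-box")  # differ counterfactually
```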

PS: To make P and P1 fully comparable, drop the "E*1e9" terms in P, so that both programs model the conventional transparent-boxes problem without an extraneous pi-preference payout.

Comment author: 28 February 2010 04:54:18PM 0 points [-]

It seems to me that the world-program is part of the problem description, not the analysis. It's equally tricky whether it's given in English or in a computer program; Wei Dai just translated it faithfully, preserving the strange properties it had to begin with.

Comment author: 28 February 2010 06:22:20PM *  1 point [-]

My concern is that there may be several world-programs that correspond faithfully to a given problem description, but that correspond to different analyses, yielding different decision prescriptions, as illustrated by the P1 example above. (Upon further consideration, I should probably modify P1 to include "S(<relevant-inputs>)=S1(<relevant-inputs>)" as an additional input to S and to Omega_Predict, duly reflecting that aspect of the problem description.)
