Briggs (2010) may be of interest to LWers. Opening:

It is a platitude among decision theorists that agents should choose their actions so as to maximize expected value. But exactly how to define expected value is contentious. Evidential decision theory (henceforth EDT), causal decision theory (henceforth CDT), and a theory proposed by Ralph Wedgwood that I will call benchmark theory (BT) all advise agents to maximize different types of expected value. Consequently, their verdicts sometimes conflict. In certain famous cases of conflict — medical Newcomb problems — CDT and BT seem to get things right, while EDT seems to get things wrong. In other cases of conflict, including some recent examples suggested by Egan 2007, EDT and BT seem to get things right, while CDT seems to get things wrong. In still other cases, EDT and CDT seem to get things right, while BT gets things wrong.

It’s no accident, I claim, that all three decision theories are subject to counterexamples. Decision rules can be reinterpreted as voting rules, where the voters are the agent’s possible future selves. The problematic examples have the structure of voting paradoxes. Just as voting paradoxes show that no voting rule can do everything we want, decision-theoretic paradoxes show that no decision rule can do everything we want. Luckily, the so-called “tickle defense” establishes that EDT, CDT, and BT will do everything we want in a wide range of situations. Most decision situations, I argue, are analogues of voting situations in which the voters unanimously adopt the same set of preferences. In such situations, all plausible voting rules and all plausible decision rules agree.
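To give a feel for the voting analogy, here is a minimal sketch of my own (not from the paper): the classic Condorcet profile, read as three possible future selves ranking acts A, B, C, under which pairwise majority voting has no stable winner.

```python
from itertools import combinations

# Three hypothetical "future selves", each ranking acts A, B, C
# from best to worst. This is the classic Condorcet profile.
selves = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True iff a majority of selves rank x above y."""
    votes = sum(1 for ranking in selves if ranking.index(x) < ranking.index(y))
    return votes > len(selves) / 2

for x, y in combinations("ABC", 2):
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")

# A beats B, B beats C, yet C beats A. Pairwise majority cycles,
# so no act is the stable group choice -- the structure Briggs
# claims the decision-theoretic counterexamples share.
```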

 


In the paper:

I suggest, as a useful mantra, Jeffrey’s (1983, 16) slogan “Choose for the person you expect to become once you have chosen.”

Whoa, no. That's a bad mantra. Wireheading, quantum immortality, doing meth - these are bad things.

The idea in the paper is that you should decide by letting your "future selves" "vote," justified by the mantra above. And so their entire result is that in cases where your "future selves" who use different decision theories have different preference orderings, Arrow's theorem applies.

This not only requires you to throw away the cardinal information, as gwern says; it also only works if you make your decisions according to the mantra above! A reductio ad absurdum of this position is that it can't even distinguish CDT from evil-CDT, where you always choose the locally worst option. Their result is really just the statement that "some decision theories have different preference orderings," and it should not be taken as a test of whether or not some decision theories are better than others.
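To make the "throwing away cardinal information" point concrete, here's a toy illustration (numbers invented): once expected values are collapsed into ordinal ballots, two theories that disagree wildly in strength of preference become indistinguishable voters.

```python
# Invented expected values: two theories that disagree cardinally
# but induce the same ordinal ranking of acts.
theory_1 = {"A": 100.0, "B": 1.0, "C": 0.5}
theory_2 = {"A": 3.0, "B": 2.0, "C": 1.0}

def ballot(expected_values):
    """Collapse cardinal expected values into an ordinal ballot."""
    return sorted(expected_values, key=expected_values.get, reverse=True)

# Both theories cast the identical ballot ['A', 'B', 'C'], so a
# voting-style aggregation can't see that theory_1 favours A far
# more strongly than theory_2 does.
print(ballot(theory_1) == ballot(theory_2))  # True
```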

Whoa, no. That's a bad mantra. Wireheading, quantum immortality, doing meth - these are bad things.

Briggs is here primarily considering cases where your preferences don't change as a result of your decision (but where your credences might). If we're interested in criticising the argument precisely as stated, then perhaps this is a reasonable criticism, but it's not an interesting criticism of Briggs' view, which concerns how we reason in cases where our decision gives us new information about the state of the world (i.e. about changing credences, not changing utilities).

This not only requires you to throw away the cardinal information as gwern says

Again, it is not clear that this is an interesting criticism. The result doesn't rely on cardinal values, but it does apply to agents with cardinal values. This makes it a stronger result (rather than an uninteresting one that doesn't apply to actual theories). The result relies only on the ordinal rankings of outcomes, yet it causes problems even for theories that utilise cardinal values (like the decision theories under discussion). Gwern notes that "a lot of voting paradoxes are resolved with additional information. (For example, Arrow's paradox is resolved with cardinal information.)" This isn't true of Briggs' argument - it can't simply be resolved by having cardinal preferences.
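For contrast, here is roughly what "resolving Arrow's paradox with cardinal information" looks like (invented utilities): score voting picks a winner from the very profile on which pairwise majority cycles. The point above is that no analogous move rescues the theories Briggs targets, since her result needs only the ordinal rankings they induce.

```python
# The same cyclic voters as in the earlier sketch, now given
# invented cardinal utilities consistent with their rankings.
utilities = [
    {"A": 1.0, "B": 0.9, "C": 0.0},
    {"B": 1.0, "C": 0.8, "A": 0.0},
    {"C": 1.0, "A": 0.4, "B": 0.0},
]

# Score (range) voting: sum each act's utility across voters.
totals = {act: sum(u[act] for u in utilities) for act in "ABC"}
print(totals)                       # {'A': 1.4, 'B': 1.9, 'C': 1.8}
print(max(totals, key=totals.get))  # 'B' -- a definite winner
# Pairwise majority still cycles on this profile, but the cardinal
# information breaks the cycle.
```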

A reductio ad absurdum of this position would be that it can't even distinguish CDT from evil-CDT, which is where you always choose the locally worst option. Their result is really just the statement "some decision theories have different preference orderings,"

Again, not clear that this is an interesting criticism. Briggs isn't trying to develop necessary and sufficient criteria for theory adequacy, so it's no surprise that her paper doesn't determine which of CDT and evil-CDT one should follow. She's just introducing two necessary criteria for theory adequacy and presenting a proof that no theory can meet both. So both CDT and evil-CDT fail to be entirely adequate theories; that's all she is trying to establish. Of course, we also want a tool that can tell us that CDT is a more adequate theory than evil-CDT, but that's not the tool Briggs is discussing here, so it seems unreasonable to criticise her on the grounds that she fails to achieve some aim that's tangential to her purpose.

This isn't true with Briggs' argument - it can't simply be resolved by having cardinal preferences.

Yup, I missed that a year ago.

evil-CDT

I'm not sure where I was going with that either.

Briggs is here primarily considering cases where your preferences don't change as a result of your decision (but where your credences might).

True. Though on the other hand, the smoking lesion problem (and variants) is pretty much the credence-changing equivalent of doing meth :P I still think the requirements are akin to "let's find a decision theory that does meth but never has anything bad happen to it."
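To spell out the analogy (all numbers invented): in a smoking-lesion-style problem, EDT treats the act as evidence about the lesion and forgoes the free utility of smoking, while CDT holds the lesion probability fixed and takes it.

```python
# Invented numbers for a smoking-lesion-style problem. A lesion
# causes both cancer and a taste for smoking; smoking itself is
# harmless but evidentially correlated with the lesion.
P_LESION = 0.5                 # unconditional chance of the lesion
P_LESION_IF_SMOKE = 0.8        # P(lesion | you smoke)
P_LESION_IF_ABSTAIN = 0.2      # P(lesion | you abstain)
U_SMOKE, U_CANCER = 10, -100   # utility of smoking; disutility of cancer

def ev_edt(smoke):
    """EDT: treat the act as evidence about the lesion."""
    p = P_LESION_IF_SMOKE if smoke else P_LESION_IF_ABSTAIN
    return (U_SMOKE if smoke else 0) + p * U_CANCER

def ev_cdt(smoke):
    """CDT: smoking doesn't cause the lesion, so hold P(lesion) fixed."""
    return (U_SMOKE if smoke else 0) + P_LESION * U_CANCER

print("EDT:", ev_edt(True), ev_edt(False))  # -70.0 vs -20.0 -> abstain
print("CDT:", ev_cdt(True), ev_cdt(False))  # -40.0 vs -50.0 -> smoke
```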

Haven't read this one either, but if anyone does, I would be curious how far the analogy carries over: a lot of voting paradoxes are resolved with additional information. (For example, Arrow's paradox is resolved with cardinal information.) If this is true, then whatever problem the paper identifies may be resolved with asymptotically more information; that an algorithm gives bad results with too little information is not very interesting.

I'm not convinced that Briggs' argument succeeds, but I take it that the argument is meant to apply as long as the theory ranks decisions ordinally (rather than applying only to theories that use ordinal rankings alone and not to those that utilise more information). See my response to Manfred for a few more minor details.

The first thing I did after reading the abstract was search it for "TDT" and "UDT", found neither, searched for a few related terms that should have turned up if the author were familiar with either of them, and didn't find any. Then I checked the bibliography (it wasn't up to date with recent work). Then I skimmed for anything that looked like a computer program (there weren't any). Then I skimmed a few of the example problems and the discussion at the end. Then I stopped.

This is just another attempt to use English to wrestle with problems that become trivial when you restate them in a programming language. And I think we have more than enough of those already.
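For instance, here is roughly the kind of restatement I have in mind (a sketch only: deterministic agents, a predictor that works by simulation): Newcomb's problem as a world program, so that any decision procedure's payoff is directly computable.

```python
def newcomb_payoff(agent):
    """Newcomb's problem as a world program: a perfect predictor
    fills the opaque box iff it predicts the agent will one-box."""
    prediction = agent()  # "predict" by simulating the (deterministic) agent
    opaque_box = 1_000_000 if prediction == "one-box" else 0
    choice = agent()
    return opaque_box if choice == "one-box" else opaque_box + 1_000

print(newcomb_payoff(lambda: "one-box"))  # 1000000
print(newcomb_payoff(lambda: "two-box"))  # 1000
```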

This is just another attempt to use English to wrestle with problems that become trivial when you restate them in a programming language. And I think we have more than enough of those already.

Disagree. From my perspective at least, the central idea, of treating different decision theories as voting by potential future selves, seems novel and interesting. Moreover, while they don't state things in terms of programming languages, they are sufficiently precise in much of their discussion that I don't think they run into many problems arising from language. Some of their remarks may, however, be better stated in a more algorithmic form.

how to deine

Typo. Apparently not in the original.

For a moment I thought "deine" was some kind of new high-concept philosophical jargon.

I call it to deine a phrase, to deine a phrase.

You know filtering in functional programming? I do that with levels of organization. I start by assuming that a certain formalism applies across all levels of organization, then as I read further I filter away the levels where it doesn't apply, and by the time I'm done I've gained information about how the formalism applies or doesn't apply across all scales. I think this takes a while to learn, but I think it's really valuable. /shrugs

E.g. it is hard to find scales where you can't usefully use economics, or ecology, or complex systems, etc.