JGWeissman comments on Another attempt to explain UDT - Less Wrong

Post author: cousin_it 14 November 2010 04:52PM


Comment author: Vaniver 14 November 2010 06:24:06PM 3 points

> If you are going to make this sort of claim, which the people you are trying to convince clearly disagree with, you should automatically include at least one example.

Ah, I thought the mentions of the sunk cost fallacy and the casino were sufficient as examples.

If I'm at a casino in front of a blackjack table, I first decide whether or not to sit down; then, if I do, how much to bet; then I see my cards; then I choose my response. I don't see how UDT adds value when it comes to making any of those decisions, and it seems detrimental when making the last one (I don't need to be thinking about what I drew in other universes).

For the problems where it does add value (paradoxes where you need to not betray people because cooperating has a higher payoff than betraying them), it seems like an overly complex solution to a simple problem: care about your reputation. Essentially, it sounds to me a lot like "Odin made physics": a rationalization that adds complexity without adding value.

> UDT does not propagate ignorance. Instead of using evidence to build knowledge of a single universe, it uses that evidence to identify what effects a decision has, possibly in multiple universes.

What's the difference between this and "thinking ahead"? The only difference I see is that it also suggests you "think behind", which puts you at risk of the sunk cost fallacy. In a few edge cases that's beneficial: the Omega paradoxes are designed to reward sunk-cost thinking. But in real life, that sort of thinking is fallacious. If someone offers to sell you a lottery ticket, and you know that ticket is not a winner, you should not buy it in the hope that they would have offered you the same choice if the ticket had been a winner.
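(A minimal sketch of the expected-value arithmetic behind the lottery example; the $1 price and $1,000 prize are assumed for illustration, since the comment gives no numbers.)

```python
# Expected value of buying a lottery ticket you already know is a loser.
# The $1 price and $1,000 prize are illustrative assumptions, not from the thread.
TICKET_PRICE = 1.0
PRIZE = 1000.0

def buy_ev(p_win: float) -> float:
    """Expected value of buying, given the probability the ticket wins."""
    return p_win * PRIZE - TICKET_PRICE

# After updating on the evidence "this ticket is not a winner":
print(buy_ev(p_win=0.0))  # -1.0: buying is a pure loss

# The counterfactual "they would have offered the same ticket had it won"
# changes nothing: the offer carries no information about winning, so p_win
# stays 0 and the expected value of buying stays negative.
```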

Comment author: JGWeissman 14 November 2010 06:54:15PM 1 point

An example in this case would actually describe a situation where an agent has to make a decision based on specified available information, along with an analysis of what decision UDT and whatever decision theory you would like to compare it to would each make, and of what happens to agents that make those decisions.

> Essentially, it sounds to me a lot like "Odin made physics": a rationalization that adds complexity without adding value.

It is more like relativity: it accurately describes things that go fast, and it agrees with Newtonian physics about things that go slow, like the ones we are used to.

> sunk cost fallacy

The sunk cost fallacy is caring more about making a previous investment pay off than about getting the best payoff from your current decision. Where is the previous investment in counterfactual mugging?
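(To make the payoff structure concrete, here is a minimal sketch of counterfactual mugging using the standard stakes from the literature: a fair coin, a $100 demand on heads, and a $10,000 reward on tails. The thread doesn't state the numbers, so treat them as illustrative.)

```python
# Counterfactual mugging: Omega flips a fair coin. On heads it asks you for
# $100; on tails it pays $10,000 iff it predicts you would have paid on heads.
# Stakes are the standard illustrative ones, not taken from this thread.
P_HEADS = 0.5

ev_committed_payer = P_HEADS * (-100) + (1 - P_HEADS) * 10_000  # 4950.0
ev_refuser = 0.0                                                # never pays, never rewarded

print(ev_committed_payer, ev_refuser)
# No term in either expression refers to a prior investment: the $100 is a
# cost of the current decision, not a sunk cost being recouped.
```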

Comment author: Vaniver 14 November 2010 09:56:49PM 0 points

I don't have a proper response for you, but this came from thinking about your comments, and you may be interested in it.

At the moment, I can't wrap my head around what it actually means to do math with UDT. If it's truly updateless, then it's worthless because a decision theory that ignores evidence is terrible. If it updates in a bizarre fashion, I'm not sure how that's different from updating normally. It seems like UDT is designed specifically to do well on these sorts of problems, but I think that's a horrible criterion (as explained in the linked post), and I don't see it behaving differently from simple second-order game theory. It's different from first-order game theory, but that's not its competitor.
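(As for what "doing the math" with UDT can look like: a toy policy-search sketch follows, my own illustrative formulation rather than anything from the thread. The idea is to score each whole observation-to-action policy against the prior over worlds and commit to the best one, instead of conditioning on the observation first. The world model reuses the illustrative counterfactual-mugging payoffs above.)

```python
# Toy sketch of UDT as a search over whole policies (maps from observation
# to action), scored against the prior over worlds. Payoffs are the
# illustrative counterfactual-mugging ones, not from the thread.
from itertools import product

PRIOR = {"heads": 0.5, "tails": 0.5}
ACTIONS = ("pay", "refuse")

def utility(world: str, policy: dict) -> float:
    act_if_asked = policy["asked"]  # the agent is only asked on heads
    if world == "heads":
        return -100.0 if act_if_asked == "pay" else 0.0
    # On tails, Omega rewards agents whose policy pays when asked.
    return 10_000.0 if act_if_asked == "pay" else 0.0

policies = [dict(zip(("asked", "not_asked"), acts))
            for acts in product(ACTIONS, repeat=2)]

# UDT: pick the policy with the highest prior-expected utility, before
# conditioning on any particular observation.
best = max(policies,
           key=lambda pol: sum(p * utility(w, pol) for w, p in PRIOR.items()))
print(best)  # {'asked': 'pay', 'not_asked': 'pay'}; the 'not_asked' slot is irrelevant here

# An agent that updates first conditions on "asked", keeps only the heads
# world, and then prefers "refuse" (0) to "pay" (-100); the two approaches
# diverge exactly where the observation is entangled with a prediction.
```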