cousin_it comments on Explanations for Less Wrong articles that you didn't understand - Less Wrong

18 Post author: Kaj_Sotala 31 March 2014 11:19AM


Comment author: cousin_it 02 April 2014 03:22:42PM 1 point

I haven't seen any good attempts. If someone else were asking, I'd refer them to you, but since it's you who's asking, I'll just say that I don't know :-)

Comment author: IlyaShpitser 02 April 2014 03:25:51PM 1 point

I have heard a claim that UDT is a kind of "sane precomputed EDT" (?). Why are "you" (they?) basing UDT on EDT? Is this because you are using the level of abstraction where causality somehow goes away, like it goes away if you look at the universal wave function (???). Maybe I just don't understand UDT? Can you explain UDT? :)


I am trying very, very hard to be charitable to the EDT camp, because I am sure there are very smart people in that camp. (Savage? Although I think he was aware of confounding issues and tried to rule them out before licensing an action. The trouble is that you cannot do it with conditional independence alone; that way lie dragons.) This is why I keep asking about EDT.

Comment author: cousin_it 02 April 2014 09:03:36PM 2 points

I'll try to explain UDT by dividing it into "simple UDT" and "general UDT". These are some terms I just came up with, and I'll link to my own posts as examples, so please don't take my comment as some kind of official position.

"Simple UDT" assumes that you have a set of possible histories of a decision problem, and you know the locations of all instances of yourself within these histories. It's basically a reformulation of a certain kind of single-player game that is already well known in the game theory literature. For more details, see this post. If you try to work through the problems listed in that post, there's a good chance that the very first one (Absent-Minded Driver) will give you a feeling of how "simple UDT" works. I think it's the complete and correct solution to the kind of problems where it's applicable, and doesn't need much more research.
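To make the Absent-Minded Driver case concrete, here is a minimal sketch of the "simple UDT" computation, using the standard payoffs from Piccione and Rubinstein's version of the problem (exit at the first intersection = 0, exit at the second = 4, drive past both = 1); the function names are just illustrative:

```python
# Absent-Minded Driver, treated as a single-player game: pick ONE policy
# for all instances of yourself, maximizing expected utility over whole
# histories. The driver cannot tell the two intersections apart, so the
# policy is a single probability p of continuing (not exiting).

def expected_utility(p):
    # P(exit at first) = 1-p  -> payoff 0
    # P(exit at second) = p*(1-p) -> payoff 4
    # P(drive past both) = p*p -> payoff 1
    return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1

# Grid-search over policies; the analytic optimum is p = 2/3, EU = 4/3.
best_p = max((i / 1000 for i in range(1001)), key=expected_utility)
print(round(best_p, 3), round(expected_utility(best_p), 3))
```

Note that nothing in this computation asks "what is my probability of being at the first intersection right now?"; the policy is evaluated only on whole histories, which is the point.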

"General UDT" assumes that the decision problem is given to you in some form that doesn't explicitly point out all instances of yourself, e.g. an initial state of a huge cellular automaton, or a huge computer program that computes a universe, or even a prior over all possible universes. The idea is to reduce the problem to "simple UDT" by searching for instances of yourself within the decision problem, using various mathematical techniques. See this post and this post for examples. Unlike "simple UDT", "general UDT" has many unsolved problems. Most of these problems deal with logical uncertainty and bounded reasoning, like the problem described in this post.

Does that help?

ETA: I notice that the description of "simple UDT" is pretty underwhelming. If you simplify it to "we should model the entire decision problem as a single-player game and play the best strategy in that game", you might say it's trivial and wonder what all the fuss is about. Maybe it's easier to understand by comparing it to other approaches. If you ask someone who doesn't know UDT to solve Absent-Minded Driver or Psy-Kosh's problem, they might get confused by things like "my subjective probability of being at such-and-such node", which are part of standard Bayesian rationality (Savage's theorem) but excluded from "simple UDT" by design. Or if you give them Counterfactual Mugging, they might get confused by Bayesian updating, which is also excluded from UDT by design.
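The Counterfactual Mugging comparison can be sketched the same way. This assumes the usual formulation (Omega flips a fair coin; on tails it asks you for $100; on heads it pays you $10000, but only if it predicts you would have paid on tails; the exact numbers vary by telling), and the function name is just illustrative:

```python
# Counterfactual Mugging, scored the "simple UDT" way: evaluate whole
# policies across BOTH coin branches, instead of updating on which branch
# you find yourself in. An agent who updates on seeing tails sees only
# the -100 and refuses; the updateless policy of paying scores higher.

def policy_value(pays_on_tails):
    heads_payoff = 10000 if pays_on_tails else 0  # Omega predicts the policy
    tails_payoff = -100 if pays_on_tails else 0
    return 0.5 * heads_payoff + 0.5 * tails_payoff

print(policy_value(True), policy_value(False))
```

Here paying has expected value 4950 versus 0 for refusing, even though on the tails branch paying looks like a pure loss after updating.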

Comment author: IlyaShpitser 04 April 2014 04:14:59PM 0 points

Thinking about this.