
Dmytry comments on Suggestions for naming a class of decision theories - Less Wrong Discussion

5 Post author: orthonormal 17 March 2012 05:22PM




Comment author: Dmytry 17 March 2012 09:48:57PM 0 points [-]

I wonder what sort of decision theory would generate a bunch of decision theories, evaluate their relative performance on the problems identified as, well, problematic, and come up with the one that performs best. You could probably even formalize this with a bounded form of Solomonoff induction - iterate over the decision theories, pick the best - except that you really need to formalize what a decision theory is.
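A minimal sketch of that meta-level search, under the assumption that a "decision theory" can be represented as a plain function from a problem to a choice and that "performance" is just total payoff over a fixed problem suite (all names and the toy problem data here are hypothetical, not a real formalization):

```python
# Meta-search over candidate decision theories: score each candidate on a
# suite of problem cases and keep the best performer.

def greedy(problem):
    # Pick the option with the highest naive payoff estimate.
    return max(problem["options"], key=problem["naive_payoff"])

def contrarian(problem):
    # Deliberately pick the option with the lowest naive payoff estimate.
    return min(problem["options"], key=problem["naive_payoff"])

candidates = [greedy, contrarian]

# Each problem supplies its options, a naive payoff estimate visible to the
# decision theory, and the true payoff used for scoring (made-up numbers).
problems = [
    {"options": ["one-box", "two-box"],
     "naive_payoff": {"one-box": 1_000_000, "two-box": 1_000}.get,
     "true_payoff": {"one-box": 990_000, "two-box": 11_000}.get},
]

def score(dt):
    # Total payoff a decision theory earns across the whole suite.
    return sum(p["true_payoff"](dt(p)) for p in problems)

best_theory = max(candidates, key=score)
```

The hard part the comment points at is exactly what this sketch dodges: enumerating *all* decision theories (rather than a hand-picked list) requires a formal definition of what counts as one.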

Comment author: [deleted] 17 March 2012 11:37:35PM *  0 points [-]
Comment author: Dmytry 17 March 2012 11:50:48PM *  0 points [-]

Still not quite it.

BTW, an observation: if I want to maximize the distance at which a thrown stone lands, assuming constant initial speed and zero launch height, I work out the algebra - I have an unknown x, the launch angle, and I have laws of physics that express distance as a function of x, and I find the best x. In Newcomb's problem, I have x = my choice; I have been given the rules of the world, whereby the payoff formula includes x itself; I calculate the best x, which is one-boxing (not surprisingly). The smoking lesion also works fine. Once you stop invoking your built-in decision theory on confusing cases, things are plain and clear.
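Both cases can be worked the same way: express the payoff as a function of x, then solve for the best x. A sketch, assuming the standard ideal-projectile range formula for the stone and a predictor accuracy of p = 0.99 for Newcomb (the speed, accuracy, and dollar amounts are illustrative assumptions):

```python
import math

# Stone throw: distance as a function of launch angle x (radians), for
# constant initial speed v and zero launch height: d(x) = v^2 sin(2x) / g.
def throw_distance(x, v=10.0, g=9.81):
    return v * v * math.sin(2 * x) / g

# Find the best x by scanning candidate angles in [0, pi/2].
best_angle = max((i * math.pi / 2000 for i in range(1001)), key=throw_distance)

# Newcomb: the payoff formula includes the choice x itself, because the
# predictor's accuracy p ties the box contents to what x turns out to be.
def newcomb_payoff(x, p=0.99):
    if x == "one-box":
        return p * 1_000_000          # full box iff predicted one-boxing
    else:
        return 1_000 + (1 - p) * 1_000_000  # small box plus a rare miss

best_choice = max(["one-box", "two-box"], key=newcomb_payoff)
```

The stone's best angle comes out at 45 degrees, and one-boxing wins for any p above about 0.5005 - in both cases the answer falls out of ordinary maximization once the payoff is written as a function of x.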

At this point, how well you perform depends on what sort of axiom system you are using to solve for x, and by Gödel's theorem, there will be some problem that is going to get ya, i.e. cause failure.

Comment author: endoself 18 March 2012 12:19:23AM 1 point [-]

At this point, how well you perform depends on what sort of axiom system you are using to solve for x, and by Gödel's theorem, there will be some problem that is going to get ya, i.e. cause failure.

This doesn't seem like something that needs to be solvable. You can use diagonalization to defeat any decision theory: just award some utility iff the agent chooses the option not recommended by that decision theory. A different decision theory can choose the other option, but that decision theory has acausal influence over the right answer that prevents it from winning.
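The diagonalization trick can be made concrete. A toy sketch, where a "decision theory" is just a function from a problem to an option (that representation, and the two-option setup, are assumptions for illustration):

```python
# Diagonalization against a fixed decision theory: build a problem that
# rewards exactly the option the given theory does NOT recommend.

def diagonal_problem(dt):
    def payoff(option):
        # Ask the target theory what it would choose on this very problem,
        # then pay out only for the other option.
        recommended = dt({"options": ["A", "B"], "payoff": payoff})
        return 1 if option != recommended else 0
    return {"options": ["A", "B"], "payoff": payoff}

def always_a(problem):
    return "A"

problem = diagonal_problem(always_a)

# The targeted theory earns nothing on its own diagonal problem,
# while any theory that picks the other option earns the reward.
assert problem["payoff"](always_a(problem)) == 0
assert problem["payoff"]("B") == 1
```

This is the point of the comment: the construction works against *any* formal decision theory you hand it, so no single theory can win on every problem.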

Comment author: Dmytry 18 March 2012 03:19:12AM *  0 points [-]

Yep. Just wanted to mention that every theory where you can do diagonalization, i.e. every formal one, can be defeated.

My point is that one could just make the choice be x, express the payoff in terms of x, then solve for the x that gives maximum payoff using the methods of algebra, instead of trying to redefine algebra in some stupid sense of iterating over values of x until finding an equality (then omg it fails at x=x), and trying to reinvent already existing reasoning (in the form of theorem proving).