
Dmytry comments on Suggestions for naming a class of decision theories - Less Wrong Discussion

5 Post author: orthonormal 17 March 2012 05:22PM



Comment author: Dmytry 18 March 2012 04:38:17PM *  0 points [-]

Okay, I have a question: what would you call the decision theory where x is the action, I write down the payoff f(x), as in applied mathematics, and then solve for the x that maximizes f(x) using ordinary school algebra? I use that a lot when writing AIs for computer games (when I want to find the direction in which to shoot, for instance, or want to turn while minimizing the third derivative).

Then I don't need any recursion whatsoever if I have conditionals on x inside the payoffs (as in Newcomb's problem). I don't run some update cycle to solve x = x; I know it doesn't fix x, and I don't recurse if I find x = 1 + x/2.
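A minimal sketch of what the commenter seems to mean (the function name and the coefficients are illustrative, not from the original): a self-referential condition like x = 1 + x/2 is just a linear equation, so it can be solved in closed form rather than by iterating an update cycle until it converges.

```python
# Hypothetical sketch: treat a self-referential condition of the form
# x = a*x + b as an ordinary equation and solve it algebraically,
# instead of recursing or running a fixed-point update loop.

def solve_linear_fixed_point(a, b):
    """Solve x = a*x + b in closed form: x - a*x = b, so x = b / (1 - a)."""
    if a == 1:
        raise ValueError("x = x + b has no unique solution")
    return b / (1 - a)

# The example from the comment, x = 1 + x/2: a = 0.5, b = 1.
x = solve_linear_fixed_point(0.5, 1)
print(x)  # 2.0
```

The case a = 1 is exactly the degenerate x = x (or x = x + b) situation the comment mentions: algebra tells you directly that it fails to pin down x, with no recursion needed.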

By the way, an observation on Newcomb's problem: it seems to me that one-boxing people write the payoff function as

f(1) = box1(1)

f(2) = box2(2) + box1(2)

box1(1) = 1000000

box1(2) = 0

box2(1) = 1000

box2(2) = 1000

and one-box, while other people write it as:

f(1) = box1

f(2) = box2 + box1

box1 >= 0

box2 >= 0

and ignore the fact about prediction (the boxes as a function of x) altogether, because they trust a world model in which this is forbidden more than they trust the problem statement, which is a silly thing to do when you're solving a hypothetical anyway. Or maybe it's because they listen harder to 'the contents of the boxes are fixed' than to 'prediction'. In any case, one-boxing vs. two-boxing now looks to me like a trivial case of people disagreeing about how to transform a verbal sentence into a model. Given that there are as many versions of English as there are people speaking it, that's not very interesting. One can postulate that both boxes are transparent to make an even more nonsensical version.

Comment author: orthonormal 19 March 2012 02:41:43PM *  1 point [-]

Basically, all of the decision theories are just deducing payoffs and calculating argmax, but there's a subtle complication with regard to the deduction of payoffs. I'm almost done with the post that explains it.

Comment author: Dmytry 19 March 2012 04:54:40PM *  0 points [-]

Well, you guys, instead of using x for the choice and doing algebra to handle x on both sides of the equations, start going meta and considering yourselves inside simulators, which, albeit intellectually stimulating, is unnecessary and makes it hard to think straight.

If I needed to calculate the ideal orientation of a gun, assuming the enemy can predict that orientation perfectly, I'd just use x for the orientation and solve for both the ballistics and the enemy's evasive action.
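A toy one-dimensional version of that gun example (the evasion rule, target position, and payoff are all invented for illustration): since the enemy's evasive response is a known function of the aim x, it can be substituted into the payoff, and the composed function maximized directly, with no simulation of "the enemy simulating us".

```python
# Hypothetical toy: aim x, enemy evades as a known function of x.

def enemy_position(x):
    # Assumed evasion rule: the enemy shifts half a unit past the aim point.
    return x + 0.5

def payoff(x):
    # Assumed payoff: negative squared distance between where the shot
    # lands (x plus the evasion offset folded in) and a target at 3.0.
    miss = enemy_position(x) - 3.0
    return -miss * miss

# Coarse scan over candidate aims in [0, 5]; algebra gives x = 2.5 exactly.
best = max((i / 100 for i in range(501)), key=payoff)
print(best)  # 2.5
```

The point of the toy is the structure: the prediction (evasion) appears inside the payoff as a function of x, so ordinary maximization handles it, just as the comment claims.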

Also, Newcomb's problem now sounds to me like a simple case of alternative English-to-math conversions when processing the problem statement, not even a case of calculating anything differently. There's the prediction, but there's also the box contents being constant; you can't put both into the math. You can in English, but human languages are screwy and we all know it.

Comment author: orthonormal 24 March 2012 04:41:27PM 0 points [-]

I finished the post that explains the problem with the decision theory you proposed: calculating payoffs in the most direct way risks spurious counterfactuals. (I hope you don't mind that I called it "naive decision theory", since you yourself said it seemed like the obvious, straightforward thing to do.)