cousin_it comments on indexical uncertainty and the Axiom of Independence - Less Wrong

Post author: Wei_Dai 07 June 2009 09:18AM




Comment author: cousin_it 08 June 2009 08:11:35AM 0 points

I don't get Counterfactual Mugging at all. Dissolve the problem thus: exactly which observer-moment do we, as problem-solvers, get to optimize mathematically? Best algorithm we can encode before learning the toss result: precommit to be "trustworthy". Best algorithm we can encode after learning the toss result: keep the $100 and afterwards modify ourselves to be "trustworthy" - iff we expect similar encounters with Omega-like entities in the future with high enough expected utility. It's pretty obvious that more information about the world allows us to encode a better algorithm. Is there anything more to it?
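The two vantage points can be sketched numerically, assuming a fair coin and the $10000 / $100 stakes quoted later in this thread (function names are illustrative, not from the comment):

```python
# A sketch of the two optimization vantage points in Counterfactual Mugging,
# assuming a fair coin and $10000 / $100 stakes. "pays_when_asked" encodes
# whether the agent's algorithm is "trustworthy".

def expected_value_before_toss(pays_when_asked: bool) -> float:
    """Expected value of an algorithm fixed before learning the toss result."""
    reward_if_heads = 10000 if pays_when_asked else 0  # Omega rewards a payer
    cost_if_tails = -100 if pays_when_asked else 0     # a payer hands over $100
    return 0.5 * reward_if_heads + 0.5 * cost_if_tails

def value_after_tails(pays: bool) -> int:
    """Value of an algorithm chosen after learning the coin came up tails."""
    return -100 if pays else 0

# Before the toss, committing to pay is best: 4950 vs 0.
# After learning "tails", keeping the $100 is best in that branch: 0 vs -100.
```

This is only an illustration of the comment's distinction, not a resolution of it: the two maximizations are taken at different information states, so they pick different actions.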

Comment author: Vladimir_Nesov 08 June 2009 08:24:31AM 1 point

What is an observer-moment (more technically, as used here)? What does it mean to be "trustworthy"? (To be a cooperator? To fool Omega into believing you are a cooperator?)

As for keeping the $100: you are not Omega's only source of information, you can't really modify yourself like that (being only human), and the problem specifies that you don't expect other encounters of this sort.

Whatever algorithm you can encode after learning the toss result, you can also encode before learning it, by wrapping it in a conditional clause to be executed if the toss result matches the appropriate possibility.
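This wrapping move can be sketched as follows; the function names and return strings are illustrative placeholders, not from the thread:

```python
# Sketch: any algorithm adoptable after seeing the toss result can be
# encoded before the toss as one branch of a single conditional program.
# Names and strings here are illustrative placeholders.

def algorithm_after_heads() -> str:
    return "collect the reward if Omega offers it"

def algorithm_after_tails() -> str:
    return "decide whether to hand over the $100"

def conditional_program(toss: str) -> str:
    """Encoded before the toss, yet reproducing either post-info algorithm."""
    if toss == "heads":
        return algorithm_after_heads()
    return algorithm_after_tails()
```

The point is only structural: conditioning on the observation lets a pre-toss program subsume any post-toss program.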

More than that, whatever you do after encountering the new info can be considered the execution of that conditional algorithm, already running in your mind, even if no deliberate effort was made to choose it. By establishing an explicit conditional algorithm you are only optimizing the algorithm already in place, using that same algorithm, so this could be done after learning the info as well as before (well, not quite, but it's unclear how significant the loss of reflective consistency is when reconsidered under reflection).

Comment author: cousin_it 08 June 2009 09:57:27AM 3 points

Here's a precise definition of "observer-moment", "trustworthiness" and everything else you might want defined. But I will ask you for a favor in return...

Mathematical formulation 1: Please enter a program that prints "0" or "1". If it prints "1" you lose $100, otherwise nothing happens.

Mathematical formulation 2: Please enter a program that prints "0" or "1". If it prints "1" you gain $10000 or lose $100 with equal probability, otherwise nothing happens.

Philosophical formulation by Vladimir Nesov, Eliezer Yudkowsky et al: we ought to find some program that optimizes the variables in case 1 and case 2 simultaneously. It must, must, must exist! For grand reasons related to philosophy and AI!

Now the favor request: Vladimir, could you please go out of character just this once? Give me a mathematical formulation in the spirit of 1 and 2 that would show me that your and Eliezer's theories have any nontrivial application whatsoever.

Comment author: Vladimir_Nesov 08 June 2009 09:55:13PM 0 points

Vladimir, it's a work in progress; if I could state everything clearly, I would have written it up. It also seems that what has already been written informally here and there on this subject is sufficient to communicate the idea, at least as a problem statement.