# cousin_it comments on indexical uncertainty and the Axiom of Independence - Less Wrong

5 07 June 2009 09:18AM

Comment author: 08 June 2009 09:57:27AM *  2 points [-]

Here's a precise definition of "observer-moment", "trustworthiness" and everything else you might care to want defined. But I will ask you for a favor in return...

Mathematical formulation 1: Please enter a program that prints "0" or "1". If it prints "1" you lose $100, otherwise nothing happens.

Mathematical formulation 2: Please enter a program that prints "0" or "1". If it prints "1" you gain $10000 or lose $100 with equal probability, otherwise nothing happens.
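A minimal sketch (my own illustration, not part of the original comment) of why the two formulations pull in opposite directions: computing each fixed output's expected payoff shows that formulation 1 favors printing "0" while formulation 2 favors printing "1", so no single fixed program optimizes both.

```python
# Expected payoff of a fixed program (one that always prints "0"
# or always prints "1") under the two formulations above.

def payoff_1(output: str) -> float:
    """Formulation 1: printing "1" loses $100, otherwise nothing."""
    return -100.0 if output == "1" else 0.0

def payoff_2(output: str) -> float:
    """Formulation 2: printing "1" gains $10000 or loses $100 with
    equal probability; we take the expected value."""
    return 0.5 * 10000 + 0.5 * (-100) if output == "1" else 0.0

# The optimal fixed output differs between the two formulations.
best_1 = max(("0", "1"), key=payoff_1)  # "0": avoid the sure loss
best_2 = max(("0", "1"), key=payoff_2)  # "1": expected gain of $4950
print(best_1, best_2)
```

This makes the rhetorical point concrete: a program optimal for case 1 is suboptimal for case 2 and vice versa, which is why the request for a simultaneous optimizer is a challenge rather than a routine exercise.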

Philosophical formulation by Vladimir Nesov, Eliezer Yudkowsky et al: we ought to find some program that optimizes the variables in case 1 and case 2 simultaneously. It must, must, must exist! For grand reasons related to philosophy and AI!

Now the favor request: Vladimir, could you please go out of character just this once? Give me a mathematical formulation in the spirit of 1 and 2 that would show me that your and Eliezer's theories have any nontrivial application whatsoever.

Comment author: 08 June 2009 09:55:13PM *  0 points [-]

Vladimir, it's a work in progress; if I could state everything clearly, I would have written it up. It also seems that what has already been written informally here and there on this subject is sufficient to communicate the idea, at least as a problem statement.