
DanArmak comments on A model of UDT with a halting oracle

Post author: cousin_it | 18 December 2011 02:18PM | 41 points




Comment author: cousin_it | 18 December 2011 03:38:03PM | 4 points

It's easy to write a universe program U that would make A cry. The post only proves that A won't end up crying for one specific U, and outlines an argument for why it won't cry in some other, similar problems.
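For concreteness, here's a minimal Python sketch of the kind of agent/universe pair the post considers. This is my illustrative paraphrase, not the post's exact notation: the provability oracle is stubbed out (it's uncomputable, so the sketch can't actually run the search), and ACTIONS, UTILITIES, and the payoffs in U are made-up values.

    # Sketch of the proof-searching agent; the oracle is assumed, not implemented.
    ACTIONS = [1, 2]
    UTILITIES = [0, 5, 10]

    def provable(statement):
        """Stub for the halting/provability oracle the model assumes."""
        raise NotImplementedError  # uncomputable; supplied by hypothesis

    def U():
        # One specific "fair" universe: the payout depends only on the
        # action A returns.
        return 5 if A() == 1 else 10

    def A():
        # Search for provable statements "A()==a implies U()==u" and take
        # the action with the highest provable utility.
        best_action, best_utility = None, float("-inf")
        for a in ACTIONS:
            for u in UTILITIES:
                if u > best_utility and provable(f"A()=={a} implies U()=={u}"):
                    best_action, best_utility = a, u
        return best_action

For a specific fair U like this one, the post proves a result of roughly this shape: the proof search succeeds and A takes the better action. The point above is that nothing forces other universe programs to be so cooperative.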

The class of universes where A doesn't end up crying is supposed to roughly correspond to the informal class of "fair" decision problems, where the agent's action is the only thing that determines the resulting utility. But I have no strong argument that it's a good formalization of that class.
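To illustrate the intended informal class (again my reading, not a formal definition from the post): a "fair" universe is one whose payout factors through the agent's returned action alone.

    # A "fair" universe factory: utility is a function of A's output only.
    # `payoff` maps each action to a utility; the name is illustrative.
    def make_fair_universe(payoff):
        def U():
            return payoff[A()]  # depends on A only through the value A returns
        return U

    # The U in the previous sketch is make_fair_universe({1: 5, 2: 10}).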

One example of an "unfair" decision problem would be a universe that rewards you for having a specific algorithm, rather than for returning a specific value. Such "unfair" problems can be constructed to punish any given decision theory, so they're probably not a good test case.
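A hedged sketch of such a universe, using inspect.getsource as a stand-in for the universe having access to the agent's source code (the payoffs and the "punish proof searchers" rule are arbitrary choices for illustration):

    import inspect

    def unfair_U():
        A()  # the agent's chosen action is computed, then ignored
        # Reward agents whose source contains no proof search; the
        # agent A sketched above gets 0 no matter what it returns.
        return 10 if "provable" not in inspect.getsource(A) else 0

The same construction works against any fixed decision theory: compare the agent's source against that theory's and punish a match.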

Comment author: DanArmak | 18 December 2011 04:17:13PM | 1 point

Thanks for explaining.