Vaniver comments on Timelessness as a Conservative Extension of Causal Decision Theory - Less Wrong

[deleted] | 28 May 2014 02:57PM | 15 points


Comment author: Vaniver 02 June 2014 10:56:14PM 1 point

> On further thought, I would like to see someone explain exactly why I should give Omega $100.

Personally, I think all of the work is being done by Omega's super-trustworthiness, so I don't think it's a reasonable scenario to optimize for. In the real world, making a 'rational precommitment' based on information you don't possess falls squarely into the reference class of 'scams.'

(Note that I am explicitly avoiding the question of what the right thing to do is; I don't think my decision theory is currently well-equipped to handle this problem, and I'm okay with that.)

Comment author: [deleted] 03 June 2014 02:09:57PM 2 points

Ok, I've been talking it over with Benjamin Fox some more, and I don't think Omega's trustworthiness is the issue here. The issue is basically to come up with some decision-theoretic notion of "virtue": "I should take action X because, timelessly speaking, a history in which I always respond to choice Y with action X nets me more money/utility/happiness than any other." The idea is that taking action X or not doing so in any one particular instance can change which history we're enacting, while normal decision theories reason only over the scope of a single choice-instance, with little regard for potential futures about which we don't have specific information encoded in our causal graph.
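To make the contrast concrete, here is a minimal sketch of policy-level versus act-level scoring in the counterfactual mugging. The $10,000 reward on the heads branch is the figure usually quoted for this thought experiment, not something stated in this thread, so treat it as an assumption:

```python
# Counterfactual mugging: Omega flips a fair coin.
# Heads: Omega gives you 10_000, but only if you are the kind of agent
#        who would hand over 100 on tails.
# Tails: Omega asks you for 100.

def policy_value(pays_on_tails: bool, reward: float = 10_000, cost: float = 100) -> float:
    """Score a whole policy ("history") across both coin branches at once."""
    heads_branch = reward if pays_on_tails else 0.0  # Omega rewards the paying policy
    tails_branch = -cost if pays_on_tails else 0.0   # the payment actually handed over
    return 0.5 * heads_branch + 0.5 * tails_branch

def act_value_on_tails(pays: bool, cost: float = 100) -> float:
    """Score a single choice-instance: a per-act theory standing on the
    tails branch sees only the 100 it is about to lose."""
    return -cost if pays else 0.0

print(policy_value(True), policy_value(False))              # 4950.0 0.0 -> paying wins
print(act_value_on_tails(True), act_value_on_tails(False))  # -100 0    -> refusing wins
```

The point of the sketch is only that the two scoring rules disagree: evaluated over the whole history, the paying policy dominates; evaluated within the single tails instance, refusing dominates.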

Comment author: Vaniver 04 June 2014 12:01:13AM 2 points

> The idea is that taking action X or not doing so in any one particular instance can change which history we're enacting, while normal decision theories reason only over the scope of a single choice-instance, with little regard for potential futures about which we don't have specific information encoded in our causal graph.

It seems to me that the impact of being virtuous on one's potential futures is enough to justify being virtuous, and one does not need to take into account the impact of being virtuous on alternative presents one might have faced instead. (Basically, instead of trusting that Omega would have given you something in an alternate world, you are trusting that human society is perceptive enough to notice and reward enough of your virtues to justify having them.)

Comment author: [deleted] 04 June 2014 01:23:31PM 2 points

Yes, we agree. "I will get rewarded for this behavior in the future at a rate that justifies my sacrifice in the present" is a reason to "self-sacrifice" in the present. The question is how to build a decision theory that can encode this kind of knowledge without requiring actual prescience (that is, without needing to predict the specific place and time at which the agent will be rewarded).
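One candidate encoding, sketched below: treat "I will be rewarded at some unknown place and time" as a discounted expected reward stream, so the agent needs only distributional knowledge of the future, never a specific prediction. All parameter names and numbers here are illustrative assumptions, not anything from the thread:

```python
def worth_sacrificing(cost_now: float, p_noticed: float,
                      reward_per_period: float, discount: float) -> bool:
    """Decide from distributional knowledge only: society notices virtue with
    probability p_noticed in any given period and pays reward_per_period when
    it does, but we never predict *which* period.  Expected discounted return
    of a stream starting next period: p * r * d / (1 - d)."""
    expected_return = p_noticed * reward_per_period * discount / (1.0 - discount)
    return expected_return > cost_now

# Illustrative numbers only: a 100-unit sacrifice now, a 30% chance of being
# noticed each period, 20 units of reputational payoff, 5% per-period discounting.
print(worth_sacrificing(100, 0.3, 20, 0.95))  # True: 0.3 * 20 * 0.95 / 0.05 = 114 > 100
```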

Comment author: Jiro 03 June 2014 09:15:45PM 1 point

Even using that notion of virtue, giving Omega the $100 only benefits you if Omega is trustworthy. So Omega's trustworthiness can still be a deciding factor.
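This can be made quantitative: if you only assign probability t to Omega's story being true, there is a threshold below which paying stops being worthwhile. A sketch, again assuming the hypothetical $10,000 counterfactual reward:

```python
def pay_is_worthwhile(trust: float, reward: float = 10_000, cost: float = 100) -> bool:
    """Pay iff the trust-weighted policy value beats refusing (value 0).
    If Omega's story is false (probability 1 - trust), paying just loses the 100."""
    ev_pay = trust * (0.5 * reward - 0.5 * cost) + (1.0 - trust) * (-cost)
    return ev_pay > 0.0

# The answer flips as trust falls below cost / (0.5*reward + 0.5*cost) ~= 0.0198.
for t in (1.0, 0.1, 0.02, 0.01):
    print(t, pay_is_worthwhile(t))  # True, True, True, False
```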

Comment author: [deleted] 04 June 2014 01:19:55PM 1 point

Omega's trustworthiness mostly just means we can assign a degenerate probability of 1.0 to all information we receive from Omega.
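As a toy illustration of what a degenerate probability buys you, conditioning on a report whose likelihood is 0/1 collapses the posterior onto whatever Omega says; the coin framing here is my own illustration:

```python
# Prior over which way Omega's coin landed: [P(heads), P(tails)].
prior = [0.5, 0.5]

# A perfectly trustworthy Omega reports "tails": the likelihood is degenerate,
# P(report | heads) = 0.0, P(report | tails) = 1.0.
likelihood = [0.0, 1.0]

unnormalized = [p * l for p, l in zip(prior, likelihood)]
posterior = [u / sum(unnormalized) for u in unnormalized]
print(posterior)  # [0.0, 1.0] -- Omega's report is simply adopted as fact
```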