Lumifer comments on Conceptual Analysis and Moral Theory - Less Wrong

60 Post author: lukeprog 16 May 2011 06:28AM




Comment author: Lumifer 18 November 2014 10:23:21PM 0 points

That seems to me to expand Newcomb's Problem greatly -- in particular, into the area where you know you'll meet Omega and can prepare by modifying your internal state. I don't want to argue definitions, but my understanding of Newcomb's Problem is much narrower. To quote Wikipedia,

By the time the game begins, and the player is called upon to choose which boxes to take, the prediction has already been made, and the contents of box B have already been determined.

and that's clearly not the situation of Joe and Kate.

Comment author: dxu 19 November 2014 02:23:30AM 2 points

Perhaps, but it is my understanding that an agent who is programmed to avoid reflective inconsistency would find the two situations equivalent. Is there something I'm missing here?

Comment author: Lumifer 19 November 2014 02:41:09AM -2 points

I don't know what "an agent who is programmed to avoid reflective inconsistency" would do. I am not one and I think no human is.

Comment author: dxu 19 November 2014 02:59:02AM 2 points

Reflective consistency isn't that hard to grasp, though, even for a human. All it's really saying is that a normatively rational agent should consider the questions "What should I do in this situation?" and "What would I want to pre-commit to do in this situation?" equivalent. If that's the case, then there is no qualitative difference between Newcomb's Problem and the situation regarding Joe and Kate, at least to a perfectly rational agent. I do agree with you that humans are not perfectly rational. However, don't you agree that we should still try to be as rational as possible, given our hardware? If so, we should strive to fit our own behavior to the normative standard, and unless I'm misunderstanding something, that means avoiding reflective inconsistency.
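To make the pre-commitment point concrete, here is a minimal sketch (mine, not from the thread) of the standard Newcomb payoffs, under the usual assumption that Omega's prediction always matches the agent's actual disposition:

```python
def newcomb_payoff(disposition: str) -> int:
    """Payoff for an agent whose disposition Omega predicts perfectly.

    Box A (transparent) always holds $1,000.
    Box B (opaque) holds $1,000,000 iff Omega predicted one-boxing.
    """
    box_a = 1_000
    # Perfect predictor: the prediction simply equals the disposition.
    box_b = 1_000_000 if disposition == "one-box" else 0
    if disposition == "one-box":
        return box_b            # take only box B
    return box_a + box_b        # take both boxes

print(newcomb_payoff("one-box"))   # 1000000
print(newcomb_payoff("two-box"))   # 1000
```

The policy you would want to pre-commit to (one-boxing) dominates here, even though at choosing time the contents of box B are already fixed; that asymmetry is exactly what the two questions above disagree about for a reflectively inconsistent agent.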

Comment author: Lumifer 19 November 2014 03:01:32AM 0 points

All it's really saying is that a normatively rational agent should consider the questions "What should I do in this situation?" and "What would I want to pre-commit to do in this situation?" equivalent.

I don't consider them equivalent.

Comment author: dxu 19 November 2014 03:06:08AM 1 point

Fair enough. I'm not exactly qualified to talk about this sort of thing, but I'd still be interested to hear why you think the answers to these two questions ought to be different. (There's no guarantee I'll reply, though!)