dxu comments on Conceptual Analysis and Moral Theory - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Perhaps, but it is my understanding that an agent who is programmed to avoid reflective inconsistency would find the two situations equivalent. Is there something I'm missing here?
I don't know what "an agent who is programmed to avoid reflective inconsistency" would do. I am not one and I think no human is.
Reflective inconsistency isn't that hard to grasp, though, even for a human. All it's really saying is that a normatively rational agent should consider the questions "What should I do in this situation?" and "What would I want to pre-commit to do in this situation?" equivalent. If that's the case, then there is no qualitative difference between Newcomb's Problem and the situation regarding Joe and Kate, at least to a perfectly rational agent. I do agree with you that humans are not perfectly rational. However, don't you agree that we should still try to be as rational as possible, given our hardware? If so, we should strive to fit our own behavior to the normative standard, and unless I'm misunderstanding something, that means avoiding reflective inconsistency.
I don't consider them equivalent.
Fair enough. I'm not exactly qualified to talk about this sort of thing, but I'd still be interested to hear why you think the answers to those two questions ought to differ. (There's no guarantee I'll reply, though!)