Conceptual Analysis and Moral Theory - Less Wrong
That may be the disjunction. Current anglophone philosophy is basically the construction of an abstract system of thought, valued for internal rigor and elegance but largely an intellectual exercise. Ancient Greek philosophies were eudaimonic: instrumental constructions designed to promote happiness. Their schools of thought, literal schools where one could go, were social communities oriented around that goal. The Sequences are much more similar to the latter ('rationalists win' + meetups), although probably better phrased as utilitarian rather than eudaimonic. Yudkowsky and Sartre are basically not even playing the same game.
I'm delighted to hear that Clippy and Newcomb's box are real-world, happiness-promoting issues!
Clippy is pretty speculative, but analogies to Newcomb's problem come up in real-world decision-making all the time. It's a dramatization of a certain class of problem that arises when agents with models of each other's probable behavior (read: people who know each other) have to make decisions, much like how the Prisoner's Dilemma is a dramatization of a certain type of coordination problem. It doesn't have to literally involve near-omniscient aliens handing out money in opaque boxes.
Does it? It seems to me that once Omega stops being omniscient and becomes, basically, your peer in the universe, there is no argument not to two-box in Newcomb's problem.
Seems to me like you only transformed one side of the equation, so to speak. Real-life Newcomb-like problems don't involve Omega, but they also don't (mainly) involve highly contrived, thought-experiment-style choices on which we aren't prepared to model each other.
That seems to me to expand Newcomb's Problem greatly, in particular into the area where you know you'll meet Omega and can prepare by modifying your internal state. I don't want to argue definitions, but my understanding of Newcomb's Problem is much narrower. To quote Wikipedia,
and that's clearly not the situation of Joe and Kate.
Perhaps, but it is my understanding that an agent who is programmed to avoid reflective inconsistency would find the two situations equivalent. Is there something I'm missing here?
I don't know what "an agent who is programmed to avoid reflective inconsistency" would do. I am not one and I think no human is.
Reflective consistency isn't that hard to grasp, though, even for a human. All it's really saying is that a normatively rational agent should consider the questions "What should I do in this situation?" and "What would I want to pre-commit to do in this situation?" equivalent. If that's the case, then there is no qualitative difference between Newcomb's Problem and the situation regarding Joe and Kate, at least to a perfectly rational agent. I do agree with you that humans are not perfectly rational. However, don't you agree that we should still try to be as rational as possible, given our hardware? If so, we should strive to fit our own behavior to the normative standard, and unless I'm misunderstanding something, that means avoiding reflective inconsistency.
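To make the equivalence concrete, here's a minimal sketch (Python; the function names and the standard $1,000,000 / $1,000 payoffs are my own illustration, not anything canonical) of the two ways of answering "What should I do?" in Newcomb's Problem:

```python
# Standard Newcomb payoffs: $1,000,000 in the opaque box iff the predictor
# expects one-boxing, plus a visible $1,000 in the transparent box.
def payoff(predicted_policy: str, action_taken: str) -> int:
    opaque = 1_000_000 if predicted_policy == "one-box" else 0
    transparent = 1_000 if action_taken == "two-box" else 0
    return opaque + transparent

def in_the_moment_choice(already_predicted: str) -> str:
    # "What should I do, given that the boxes are already filled?"
    # Whatever was predicted, two-boxing adds $1,000, so this agent two-boxes.
    return max(["one-box", "two-box"],
               key=lambda action: payoff(already_predicted, action))

def precommitment_choice() -> str:
    # "What would I want to pre-commit to, knowing the predictor reacts to
    # my policy?"  Here the prediction tracks the policy being evaluated.
    return max(["one-box", "two-box"],
               key=lambda policy: payoff(policy, policy))

print(in_the_moment_choice("one-box"))   # two-box
print(in_the_moment_choice("two-box"))   # two-box
print(precommitment_choice())            # one-box
```

A reflectively consistent agent refuses to let those two answers come apart: it acts on the pre-commitment answer even when deciding in the moment.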
I don't consider them equivalent.
What, on your view, is the argument for not two-boxing with an omniscient Omega?
How does that argument change with a non-omniscient but skilled predictor?
If Omega is omniscient, the two actions (one-boxing and two-boxing) each have a certain outcome with probability 1, so you just pick the better outcome. If Omega is just a skilled predictor, there is no certain outcome, so you two-box.
Unless you like money and can multiply, in which case you one-box and end up (almost but not quite certainly) richer.
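Doing the multiplication, with the standard $1,000,000 / $1,000 payoffs assumed and the predictor's accuracy left as a parameter (a quick sketch, not anything canonical):

```python
# Expected value of each action against a predictor with accuracy p:
# the opaque box holds $1,000,000 iff one-boxing was predicted, and the
# transparent box always holds $1,000.
def expected_value(action: str, p: float) -> float:
    if action == "one-box":
        # With probability p the predictor saw it coming and filled the box.
        return p * 1_000_000
    # Two-boxing: with probability p the opaque box is empty; with
    # probability 1 - p you also walk away with the million.
    return 1_000 + (1 - p) * 1_000_000

for p in (0.99, 0.9, 0.6):
    print(p, expected_value("one-box", p), expected_value("two-box", p))
# p = 0.99:  990,000 vs  11,000
# p = 0.9 :  900,000 vs 101,000
# p = 0.6 :  600,000 vs 401,000
```

One-boxing comes out ahead in expectation for any accuracy above roughly 50.05%, which is what "almost but not quite certainly richer" is pointing at.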
You are facing a modified version of Newcomb's Problem, which is identical to standard Newcomb except that Omega now has 99% predictive accuracy instead of ~100%. Do you one-box or two-box?
Think of the situation in the last round of an iterated Prisoner's Dilemma with a known number of rounds. Because of the variety of agents you might be dealing with, the payoffs there aren't strictly Newcomb-like, but they're closely related; there's a large class of opposing strategies (assuming reasonably bright agents with some level of insight into your behavior, e.g. if you are a software agent and your opponent has access to your source code) which will cooperate if they model you as likely to cooperate (but, perhaps, don't model you as a CooperateBot) and defect otherwise. If you know you're dealing with an agent like that, then defection can be thought of as analogous to two-boxing in Newcomb's Problem.
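Here's a toy version of that last-round situation (Python; the policy names and the opponent's inspection rule are my own illustration, under the assumption that the opponent's model of you is accurate):

```python
# Standard Prisoner's Dilemma payoffs, T > R > P > S.
T, R, P, S = 5, 3, 1, 0

def my_payoff(my_move: str, their_move: str) -> int:
    return {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}[(my_move, their_move)]

def opponent_move(my_policy: str) -> str:
    # The opponent inspects my policy (think: reads my source code) and
    # cooperates iff it models me as a conditional cooperator (not a
    # CooperateBot it could safely exploit, and not a defector).
    return "C" if my_policy == "cooperate-if-cooperated-with" else "D"

# Defecting here plays the role of two-boxing: it looks dominant if you hold
# the opponent's move fixed, but the opponent's move depends on your policy.
for policy, my_move in [("cooperate-if-cooperated-with", "C"),
                        ("always-defect", "D"),
                        ("always-cooperate", "C")]:
    print(policy, my_payoff(my_move, opponent_move(policy)))
# cooperate-if-cooperated-with -> 3 (mutual cooperation)
# always-defect                -> 1 (mutual defection)
# always-cooperate             -> 0 (exploited)
```

If you know the opponent can and will model you that accurately, committing to conditional cooperation does better than defecting, just as one-boxing does better against a good predictor.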