timtyler comments on Desirable Dispositions and Rational Actions - Less Wrong
At the risk of appearing stupid, I have to ask: exactly what is a "useful treatment of Newcomb-like problems" used for?
So far, the only effect that all the Omega-talk has had on me is to make me honestly suspect that you guys must be into some kind of mind-over-matter quantum woo.
Seriously, Omega is not just counterfactual; he is impossible. Why do you guys keep asking us to believe so many impossible things before breakfast? Jaynes says not to include impossible propositions among the conditions in a conditional probability. Bad things happen if you do. Impossible things need to have zero-probability priors. Omega just has no business hanging around with honest Bayesians.
When I read that you all are searching for improved decision theories that "solve" the one-shot prisoner's dilemma and the one-shot Parfit's hitchhiker, I just cringe. Surely you shouldn't change the standard, well-established, and correct decision theories. If you don't like the standard solutions, you should instead revise the problems from unrealistic one-shots to more realistic repeated games, or perhaps even more realistic games with observers - observers who may play games with you in the future.
In every case I have seen so far where Eliezer has denigrated the standard game solution because it fails to win, he has been analyzing a game involving a physically and philosophically impossible fictional situation.
Let me ask the question this way: What evidence do you have that the standard solution to the one-shot PD can be improved upon without creating losses elsewhere? My impression is that you are being driven by wishful thinking and misguided intuition.
This Omega is not impossible.
It says: "Omega has been correct on each of 100 observed occasions so far".
Not particularly hard - if you pick on decision theorists who had previously publicly expressed an opinion on the subject.
Ah! So I need to assign priors to three hypotheses: (1) Omega is a magician (i.e. an illusionist); (2) Omega has bribed people to lie about his past success; (3) he is what he claims to be.
So I assign a prior of zero probability to hypothesis #3, and cheerfully one-box using everyday decision theory.
First: http://lesswrong.com/lw/mp/0_and_1_are_not_probabilities/
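For what it's worth, here is a toy sketch of why that link matters, using the three hypotheses above. The per-observation likelihoods (0.9 / 0.95 / 0.99) are made-up numbers of mine, not anything from the problem; the point is only that a hypothesis given prior probability zero stays at exactly zero no matter how many of Omega's 100 successes you condition on.

    # Toy Bayesian update over the three hypotheses above.
    # Likelihoods are illustrative assumptions: per observed correct prediction,
    # P(correct | magician) = 0.90, P(correct | bribery) = 0.95,
    # P(correct | genuine predictor) = 0.99.
    priors = {"magician": 0.5, "bribery": 0.5, "genuine": 0.0}  # zero prior on #3
    likelihood = {"magician": 0.90, "bribery": 0.95, "genuine": 0.99}

    posterior = dict(priors)
    for _ in range(100):  # 100 observed correct predictions
        unnorm = {h: posterior[h] * likelihood[h] for h in posterior}
        total = sum(unnorm.values())
        posterior = {h: p / total for h, p in unnorm.items()}

    print(posterior)  # "genuine" is still exactly 0.0 after all 100 updates

No finite amount of evidence can rescue a hypothesis once it has been assigned probability zero - which is the point of the post linked above.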
You don't seem to be entering into the spirit of the problem. After reading the premise, you are "supposed" to reach the conclusion that there's a good chance Omega can predict your actions in this domain pretty well, from what he knows about you.
If you think that's not a practical possibility, then I recommend that you imagine yourself as a deterministic robot - where such a scenario becomes more believable - and then try the problem again.
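For concreteness, here is the usual expected-value arithmetic for the problem, with the standard $1,000 / $1,000,000 payoffs assumed and p standing for your credence that Omega has correctly predicted your choice. This is a rough sketch of the evidential reading of the problem, not a verdict on which decision theory is right:

    # Newcomb payoffs (standard assumption): the opaque box contains $1,000,000
    # iff Omega predicted one-boxing; the transparent box always contains $1,000.
    def expected_values(p):
        """p = credence that Omega's prediction matches your actual choice."""
        one_box = p * 1_000_000                 # opaque box filled iff he predicted one-boxing
        two_box = (1 - p) * 1_000_000 + 1_000   # opaque box filled only if he erred
        return one_box, two_box

    for p in (0.5, 0.6, 0.9, 0.99):
        ob, tb = expected_values(p)
        print(f"p={p}: one-box ${ob:,.0f} vs two-box ${tb:,.0f}")
    # One-boxing comes out ahead once p exceeds roughly 0.5005.

On this way of running the numbers, even a fairly unreliable predictor is enough to favour one-boxing; the two-boxing argument refuses to condition the box contents on your own choice, which is exactly where the disagreement lies.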
If I imagine myself as a deterministic robot, who knows that he is a deterministic robot, I am no longer able to maintain the illusion that I care about this problem.
Do you think you aren't a deterministic robot? Or that you are, but you don't know it?
It is a quantum universe. So I would say that I am a stochastic robot. And Omega cannot predict my future actions.
...then you need to imagine that you made the robot, that it is meeting Omega on your behalf - and that it then gives you all its winnings.
I like this version! Now the answer seems quite obvious.
In this case, I would design the robot to be a one-boxer. And I would harbour the secret hope that a stray cosmic ray will cause the robot to pick both boxes anyway.
Yes - but you would still give its skull a lead-lining - and make use of redundancy to produce reliability...
Agreed.