cousin_it comments on Desirable Dispositions and Rational Actions - Less Wrong
At the risk of appearing stupid, I have to ask: exactly what is a "useful treatment of Newcomb-like problems" used for?
So far, the only effect that all the Omega-talk has had on me is to make me honestly suspect that you guys must be into some kind of mind-over-matter quantum woo.
Seriously, Omega is not just counterfactual, he is impossible. Why do you guys keep asking us to believe so many impossible things before breakfast? Jaynes says not to include impossible propositions among the conditions in a conditional probability. Bad things happen if you do. Impossible things need to have zero-probability priors. Omega just has no business hanging around with honest Bayesians.
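To spell out the Jaynes point in one line of arithmetic (a minimal sketch of my own, not a quote from Jaynes): the ratio definition of conditional probability divides by P(B), so an impossible condition leaves the quantity undefined rather than merely small.

```python
# Conditional probability via the ratio definition: P(A|B) = P(A and B) / P(B).
# When B is impossible, P(B) = 0 and the ratio is undefined -- the formal
# analogue of the warning against conditioning on impossible propositions.

def conditional(p_a_and_b: float, p_b: float) -> float:
    """Return P(A|B); undefined when P(B) == 0."""
    if p_b == 0.0:
        raise ZeroDivisionError("cannot condition on a zero-probability proposition")
    return p_a_and_b / p_b

print(conditional(0.2, 0.5))  # 0.4 -- an ordinary, possible condition

try:
    conditional(0.0, 0.0)     # conditioning on an impossible proposition
except ZeroDivisionError as e:
    print(e)
```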
When I read that you all are searching for improved decision theories that "solve" the one-shot prisoner's dilemma and the one-shot Parfit's Hitchhiker, I just cringe. Surely you shouldn't change the standard, well-established, and correct decision theories. If you don't like the standard solutions, you should instead revise the problems from unrealistic one-shots to more realistic repeated games or perhaps even more realistic games with observers - observers who may play games with you in the future.
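As a concrete illustration of the one-shot versus repeated contrast (toy payoffs T=5, R=3, P=1, S=0; the strategies and figures are my own, not from the post): defection dominates a single round, but against a tit-for-tat opponent over many rounds the unconditional defector falls behind.

```python
# Toy iterated prisoner's dilemma with standard payoffs: T=5, R=3, P=1, S=0.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play(strategy_a, strategy_b, rounds):
    """Run an iterated PD; each strategy sees only the opponent's last move."""
    total_a = total_b = 0
    prev_a = prev_b = None
    for _ in range(rounds):
        move_a, move_b = strategy_a(prev_b), strategy_b(prev_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        total_a, total_b = total_a + pay_a, total_b + pay_b
        prev_a, prev_b = move_a, move_b
    return total_a, total_b

always_defect = lambda prev: 'D'
tit_for_tat   = lambda prev: 'C' if prev is None else prev

print(play(always_defect, tit_for_tat, 1))    # (5, 0): in one shot, defection wins
print(play(always_defect, tit_for_tat, 100))  # (104, 99): retaliation erases the gain
print(play(tit_for_tat, tit_for_tat, 100))    # (300, 300): mutual cooperation beats both
```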
In every case I have seen so far where Eliezer has denigrated the standard game solution because it fails to win, he has been analyzing a game involving a physically and philosophically impossible fictional situation.
Let me ask the question this way: What evidence do you have that the standard solution to the one-shot PD can be improved upon without creating losses elsewhere? My impression is that you are being driven by wishful thinking and misguided intuition.
Here's another way of looking at the situation that may or may not be helpful. Suppose I ask you, right here and now, what you'd do in the hypothetical future Parfit's Hitchhiker scenario if your opponent was a regular human with Internet access. You have several options:
1. Answer truthfully that you'd pay $100, thus proving that you don't subscribe to CDT or EDT. (This is the alternative I would choose.)
2. Answer that you'd refuse to pay. Now you've created evidence on the Internet, and if/when you face the scenario in real life, the driver will Google your name, check the comments on LW and leave you in the desert to die. (Assume the least convenient possible world where you can't change or delete your answer once it's posted.)
3. Answer that you'd pay up, but secretly plan to refuse. This means you'd be lying to us here in the comments - surely not a very nice thing to do. But if you subscribe to CDT with respect to utterances as well as actions, this is the alternative you're forced to choose. (Which may or may not make you uneasy about CDT.)
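Rough numbers may make the comparison vivid (the dollar figures are invented for illustration, and option 3's payoff assumes the driver cannot detect the lie): once the driver conditions the rescue on what you said in public, the publicly stated refusal is the only option that actually leaves you in the desert.

```python
# Illustrative Parfit's Hitchhiker payoffs (figures invented for this sketch).
VALUE_OF_RESCUE = 1_000_000   # staying alive in the desert
FARE = 100                    # what the driver asks for once you reach town

def outcome(public_answer: str, actually_pays: bool) -> int:
    """Value to the hitchhiker, given the Googlable answer and the real behaviour."""
    rescued = (public_answer == "will pay")    # the driver checks your comments first
    if not rescued:
        return 0                               # left in the desert
    return VALUE_OF_RESCUE - (FARE if actually_pays else 0)

print(outcome("will pay", True))       # 999900  -- option 1: say it and mean it
print(outcome("will refuse", False))   # 0       -- option 2: honest refusal, no rescue
print(outcome("will pay", False))      # 1000000 -- option 3: the lie, if it goes undetected
```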
What makes me uneasy is the assumption that I wouldn't want to pay $100 to somebody who rescued me from the desert. Given that, lying to people whom I don't really know should be a piece of cake!
I would of course choose option #1, adding that, due to an affliction giving me a trembling hand, I tend to get stranded in the desert and the like a lot and hence that I would appreciate it if he would spread the story of my honesty among other drivers. I might also promise to keep secret the fact of his own credulity in this case, should he ask me to. :)
I understand quite well that the best and simplest way to appear honest is to actually be honest. And also that, as a practical matter, you never really know who might observe your selfish actions and how that might hurt you in the future. But these prudential considerations can already be incorporated into received decision theory (which, incidentally, I don't think matches up with either CDT or EDT - at least as those acronyms seem to be understood here). We don't seem to need TDT and UDT to somehow glue them into the foundations.
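For instance, a back-of-the-envelope version of that incorporation (all figures invented): once some probability of being observed and a future reputation cost enter the expected-value calculation, stiffing the driver already looks like a losing move under perfectly ordinary decision theory.

```python
# Expected value of refusing to pay the $100, with reputational risk priced in.
# All figures are invented for illustration.
FARE = 100                    # saved by stiffing the driver
P_OBSERVED = 0.05             # chance the refusal becomes known to future counterparties
REPUTATION_COST = 5_000       # expected future loss if it does (no future rescues, lost deals)

ev_refuse = FARE - P_OBSERVED * REPUTATION_COST   # 100 - 250 = -150
ev_pay = 0                                        # pay the fare, keep the reputation

print(ev_refuse, ev_pay)  # -150.0 0: refusal already loses once observers are possible
```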
Hmmm. Is EY perhaps worried that an AI might need even stronger inducements toward honesty? Maybe it would, but I don't see how you solve the problem by endowing the AI with a flawed decision theory.