Wei_Dai comments on Desirable Dispositions and Rational Actions - Less Wrong

Post author: RichardChappell 17 August 2010 03:20AM


Comment author: Perplexed 17 August 2010 06:42:22AM 2 points

Any useful treatment of Newcomblike problems will specify explicitly or implicitly how Omega will handle (quantum) randomness.

At the risk of appearing stupid, I have to ask: exactly what is a "useful treatment of Newcomb-like problems" used for?

So far, the only effect that all the Omega-talk has had on me is to make me honestly suspect that you guys must be into some kind of mind-over-matter quantum woo.

Seriously, Omega is not just counterfactual; he is impossible. Why do you guys keep asking us to believe so many impossible things before breakfast? Jaynes says not to include impossible propositions among the conditions in a conditional probability. Bad things happen if you do. Impossible things need to have zero-probability priors. Omega just has no business hanging around with honest Bayesians.
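To make the "bad things" concrete (this is just the textbook identity, not a quotation from Jaynes): conditional probability divides by the probability of the condition,

    P(A \mid B) = \frac{P(A \wedge B)}{P(B)},

which is undefined when P(B) = 0. Conditioning on an impossible proposition is a division by zero.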

When I read that you all are searching for improved decision theories that "solve" the one-shot prisoner's dilemma and the one-shot Parfit's hitchhiker, I just cringe. Surely you shouldn't change the standard, well-established, and correct decision theories. If you don't like the standard solutions, you should instead revise the problems: replace the unrealistic one-shots with more realistic repeated games, or, more realistic still, with games played in front of observers who may play games with you in the future.
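To illustrate how repetition changes the accounting, here is a minimal sketch (the payoff numbers and the tit-for-tat opponent are illustrative assumptions, not anything from the thread). In the one-shot game defection strictly dominates (5 > 3 and 1 > 0), which is the standard solution; in the repeated game against a reciprocating opponent, cooperation earns more in total.

    PAYOFFS = {  # (my move, their move) -> my payoff
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def repeated_score(my_strategy, rounds=10):
        """Total payoff against a tit-for-tat opponent."""
        their_move, total = "C", 0          # tit-for-tat opens by cooperating
        for _ in range(rounds):
            my_move = my_strategy(their_move)
            total += PAYOFFS[(my_move, their_move)]
            their_move = my_move            # tit-for-tat copies my last move
        return total

    always_defect = lambda last: "D"
    always_cooperate = lambda last: "C"

    print(repeated_score(always_defect))     # 14: one exploitation, then mutual defection
    print(repeated_score(always_cooperate))  # 30: mutual cooperation every round

Over ten rounds the cooperator outscores the defector 30 to 14, which is the sense in which the "standard solution" changes once the game is repeated, without touching the underlying decision theory.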

In every case I have seen so far where Eliezer has denigrated the standard game-theoretic solution because it fails to win, he has been analyzing a game involving a physically and philosophically impossible fictional situation.

Let me ask the question this way: What evidence do you have that the standard solution to the one-shot PD can be improved upon without creating losses elsewhere? My impression is that you are being driven by wishful thinking and misguided intuition.

Comment author: thomblake 17 August 2010 03:36:13PM 2 points

Impossible things need to have zero-probability priors.

0 and 1 are not probabilities. I certainly don't assign a prior of 0 to Omega's existence; he's not defined in a contradictory fashion, and even if he were, I'd harbor the tiniest bit of doubt that I'm wrong about how contradictions work.
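One way to cash out the first claim, in the log-odds framing Eliezer uses for it: mapping a probability p to its log-odds sends 0 and 1 infinitely far away,

    \log \frac{p}{1-p} \to -\infty \ \text{as}\ p \to 0^{+}, \qquad \log \frac{p}{1-p} \to +\infty \ \text{as}\ p \to 1^{-},

so no finite weight of evidence ever reaches them.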

Comment author: Perplexed 17 August 2010 06:41:43PM 1 point

I am using sloppy language here, perhaps. But to illustrate my usage, I claim that the probability that 2+2=4 is 1. And that p(2+2=5)=0.

Comment author: thomblake 17 August 2010 06:45:54PM 3 points

If you were a Bayesian and assigned 0 probability to 2+2=5, you'd be in unrecoverable epistemic trouble if you turned out to be wrong about that. See "How to Convince Me That 2 + 2 = 3".
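To spell out the trouble (standard Bayes, nothing beyond what the thread already assumes): a prior of 0 is absorbing,

    P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)} = 0 \quad \text{whenever } P(H) = 0 \text{ and } P(E) > 0,

so no evidence E, however striking, can ever lift the posterior above 0.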

Comment author: Perplexed 19 August 2010 02:04:08AM 1 point

EY to the contrary, I remain smug in my evaluation p(2+2=5)=0. Of all the evidence that Eliezer offered, the only piece to convince me was the one which demonstrated that I was confused about the meaning of the digit 5. Yes, by Cromwell's rule, I think it possible I might be mistaken about how to count. "1, 2, 3, 5, 6, 4, 7", I recite to myself. "Yes, I had been wrong about that. Thanks for correcting me."

I might then write down p(Eliezer Yupkowski is the guru of Less Wrong)=0.999999999. Once again, I would be mistaken. It is "Yudkowski", not "Yupkowski". But in neither case am I in unrecoverable epistemic trouble. Those were typos. Correcting them is a simple search-and-replace, not a Bayesian updating. Or so I understand.

Comment author: WrongBot 19 August 2010 02:21:38AM 3 points

I might then write down p(Eliezer Yupkowski is the guru of Less Wrong)=0.999999999. Once again, I would be mistaken. It is "Yudkowski", not "Yupkowski". But in neither case am I in unrecoverable epistemic trouble. Those were typos. Correcting them is a simple search-and-replace, not a Bayesian updating. Or so I understand.

It's Yudkowsky. Might want to update your general confidence evaluations.