In response to New Improved Lottery
Comment author: Douglas_Knight2 15 April 2007 02:46:01AM 1 point

Eliezer, did you mean to evoke stock markets with "You could feed it to a display on people's cellphones"?

Surely financial markets are well-calibrated for events that happen once a month. An option on such an event happening tomorrow should then be priced about right. Some claim that there is a systematic bias in options against rare events, so that on a long shot you do better than even.
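The pricing claim above can be sketched numerically. This is a hypothetical illustration, not anything from the comment: assume a binary option paying $1 if the event happens tomorrow, an event base rate of about once a month, and independent days.

```python
# Illustrative sketch of the calibration claim: an event occurring about
# once a month has a daily probability near 1/30, so a well-calibrated
# binary option paying $1 on "event happens tomorrow" should trade near
# that base rate. (The payoff and base rate here are assumptions.)
events_per_month = 1
days_per_month = 30

# Daily probability under a simple independent-days assumption.
p_daily = events_per_month / days_per_month

fair_price = p_daily * 1.00  # binary option paying $1
print(f"fair price of the one-day binary option: ${fair_price:.3f}")
```

A systematic bias "against rare events" in the sense of the comment would show up as the market price of such an option sitting persistently below this fair value.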

Comment author: Douglas_Knight2 11 April 2007 11:49:28PM 0 points

the social dilemma is that neither writing grant proposals, nor showing up at your office desk, is inherently an evil deed.

One answer is that grant-writing is an evil deed. I don't incline to that belief, or to the more plausible one that offering grants is an evil deed, but I think both are worth mentioning.

Promotion based on hours at the office, or working at a company that promotes that way, does seem to me like an evil deed, but human bias means that practically all companies have this effect to some extent.

Comment author: Douglas_Knight2 04 April 2007 01:52:38AM 0 points

the argument about how noise traders survive

Surely the argument you give--that false beliefs can lead to extra risk, increasing expected returns while decreasing expected utility--is older than the noise trader literature?

Comment author: Douglas_Knight2 16 March 2007 05:42:40PM 1 point

Perhaps this is because of the larger power distance between the security people and the protected.

How do you measure this distance? The FDA has a monopoly, too. Here's another theory: drug companies are a third player. Moreover, they are concentrated interests, so they affect the public-choice outcome. (Airlines play a similar role in security theater, but their interests are more diffuse. Also, getting rid of airline security is a public good, while getting a drug approved helps one drug company relative to the others.)

That's not to say I disagree with Anders's psychology, but I discount it because I find it harder to judge than public choice arguments.

Comment author: Douglas_Knight2 24 January 2007 12:45:26AM 3 points

GPS? You can do better than that! I believe special relativity because it's implied by Maxwell's equations, which I have experienced. Normal human speeds are enough to detect contraction, if you detect it by comparing electric and magnetic forces.
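The size of the effect at human speeds can be sketched numerically. The walking speed below is an illustrative number of my choosing, not from the comment; the point is that the fractional contraction is astronomically small, yet magnetism makes it observable because the huge electrostatic forces of the positive and negative charges in a wire cancel almost exactly, leaving only the relativistic residue.

```python
# Illustrative sketch: for v << c, the fractional length contraction is
# 1 - 1/gamma ≈ v**2 / (2 * c**2). At walking speed this is on the
# order of 1e-17 — invisible directly, but detectable as magnetism
# because near-perfect cancellation of electrostatic forces exposes
# the tiny relativistic correction.
c = 299_792_458.0  # speed of light in m/s (exact by definition)
v_walk = 1.5       # roughly walking speed in m/s (illustrative)

contraction = v_walk**2 / (2 * c**2)
print(f"fractional contraction at {v_walk} m/s: {contraction:.2e}")
```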

Comment author: Douglas_Knight2 22 January 2007 02:36:54AM 7 points

Apparently it is very hard to teach and test regarding the underlying reasons.

Does "apparently" (in general) mean you aren't using additional sources of information? In this case, are you concluding that it's difficult simply from the fact that it isn't done? That only seems to me like evidence that it's not worth it. Unfortunately, the value driving the system is getting published, not advancing science.

In response to The Modesty Argument
Comment author: Douglas_Knight2 11 December 2006 04:17:42AM 1 point

It is certainly true that one should not superficially try to replicate Aumann's theorem, but should instead try to replicate the process of the Bayesians, namely, to model the other agent and see how the other agent could disagree. Surely this is how we disagree with creationists and customer service agents. Even if they are far from Bayesian, we can extract information from their behavior, to the extent that we can model them.

But modeling is also what RH was advocating for the philosophers. Inwagen accepts Lewis as a peer, perhaps a superior. Moreover, he accepts him as rationally integrating Inwagen's arguments. This is exactly where Aumann's argument applies. In fact, Inwagen does model himself and Lewis and claims (I've only read the quoted excerpt) that their disagreement must be due to incommunicable insights. Although Aumann's framework of modeling the world seems incompatible with the idea of incommunicable insight, I think it is right to worry about symmetry. Possibly this leads us into difficult anthropic territory. But EY is right that we should not respond by simply changing our opinions; rather, we should try to describe this incommunicable insight and see how it has infected our beliefs.

Anthropic arguments are difficult, but I do not think they are relevant in any of the examples, except maybe the initial superintelligence. In that situation, I would argue in a way that may be your argument about dreaming: if something has a false belief about having a detailed model of the world, there's not much it can do. You might as well say it is dreaming. (I'm not talking about accuracy, but precision and moreover persistence.)

And you seem to say that if it is dreaming it doesn't count. When you claim that my Bayesian score goes up if I insist that I'm awake whenever I feel I'm awake, you seem to be asserting that my assertions in my dreams don't count. This seems to be a claim about persistence of identity. Of course, my actions in dreams seem to have less import than my actions when awake, so I should care less about dream error; but I should not discount it entirely.
