
Comment author: Gray_Area 31 January 2009 04:39:27PM 4 points

For what it's worth, I find plenty to disagree with Eliezer about, on points of both style and substance, but on death I think he has it exactly right. Death is a really bad thing, and while humans have diverse psychological adaptations for dealing with death, the burden of proof seems to be on people who do NOT want to make the really bad thing go away in the most expedient way possible.

In response to That Alien Message
Comment author: Gray_Area 23 May 2008 07:50:00AM 0 points

This is an amusing empirical test for zombiehood -- do you agree with Daniel Dennett?

Comment author: Gray_Area 20 May 2008 06:54:19PM 4 points

"The idea that Bayesian decision theory being descriptive of the scientific process is very beautifully detailed in classics like Pearl's book, Causality, in a way that a blog or magazine article cannot so easily convey."

I wish people would stop bringing up this book to support arbitrary points, the way people used to bring up the Bible. There's barely any mention of decision theory in Causality, let alone an argument that Bayesian decision theory is descriptive of the scientific process as a whole (although Pearl clearly does talk about modeling decisions as interventions).
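For reference, the device Pearl uses to model a decision as an intervention is the do-operator: setting X to x removes the mechanism that normally determines X and leaves the rest of the causal factorization intact, which gives the truncated-factorization formula

    P(v \mid \mathrm{do}(X = x)) \;=\; \prod_{i \,:\, V_i \neq X} P(v_i \mid pa_i)\,\Big|_{X = x}

where the V_i are the variables of the causal model and pa_i their parents in the DAG. This is a statement about modeling actions, not a full decision theory.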

Comment author: Gray_Area 20 May 2008 01:31:45AM 0 points

"Would you care to try to apply that theory to Einstein's invention of General Relativity? PAC-learning theorems only work relative to a fixed model class about which we have no other information."

PAC-learning stuff is, if anything, far easier than general scientific induction. So should the latter require more samples, or fewer?

Comment author: Gray_Area 19 May 2008 10:26:55PM 3 points

"Eliezer is almost certainly wrong about what a hyper-rational AI could determine from a limited set of observations."

Eliezer is being silly. People invented computational learning theory, which, among other things, gives the minimum number of samples needed to achieve a given error rate.
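To make the sample-complexity point concrete, here is a minimal Python sketch of the standard PAC bound for a finite hypothesis class in the realizable setting; the function name and example numbers are illustrative, not from the thread:

    import math

    def pac_sample_bound(hypothesis_count, epsilon, delta):
        """Samples sufficient so that, with probability at least 1 - delta,
        any hypothesis consistent with the data has true error at most
        epsilon (finite class, realizable case):
            m >= (ln|H| + ln(1/delta)) / epsilon
        """
        return math.ceil((math.log(hypothesis_count) + math.log(1.0 / delta)) / epsilon)

    # e.g. |H| = 2**20 hypotheses, 1% error, 99% confidence: about 1,847 samples.
    print(pac_sample_bound(2 ** 20, 0.01, 0.01))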

Comment author: Gray_Area 14 May 2008 07:28:50AM 2 points

Eliezer, why are you concerned with untestable questions?

In response to Joint Configurations
Comment author: Gray_Area 12 April 2008 08:25:36PM 2 points

Richard: Cox's theorem is an example of a particular kind of result in math, where you have some particular object in mind to represent something, you come up with very plausible, very general axioms that you want the representation to satisfy, and then you prove the object is unique in satisfying them. There are analogous results for entropy in information theory. The problem with these results is that they are almost always constructed with hindsight, so a lot of the time an axiom sneaks in that only SEEMS plausible in hindsight. For instance, Cox's theorem assumes that plausibility is a real number. Why should it be a real number?
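For readers who haven't seen the theorem, a compressed sketch of the assumptions at issue, following Jaynes' presentation (the exact axiom set varies by author): plausibilities are real numbers, the plausibility of a conjunction is a function of the plausibilities of its parts, and the plausibility of a negation is a function of the plausibility being negated:

    \text{(i)}\;\; \mathrm{pl}(A \mid C) \in \mathbb{R}
    \qquad
    \text{(ii)}\;\; \mathrm{pl}(A \wedge B \mid C) = F\big(\mathrm{pl}(A \mid C),\ \mathrm{pl}(B \mid A \wedge C)\big)
    \qquad
    \text{(iii)}\;\; \mathrm{pl}(\neg A \mid C) = S\big(\mathrm{pl}(A \mid C)\big)

The conclusion is that any such system is a monotone rescaling of ordinary probability; axiom (i) is exactly the step being questioned here.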

In response to Joint Configurations
Comment author: Gray_Area 11 April 2008 08:06:54AM 0 points

"The probability of two events equals the probability of the first event plus the probability of the second event."

That holds only for mutually exclusive events.
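In symbols, the general addition rule, with the quoted claim holding only in the disjoint special case:

    P(A \cup B) = P(A) + P(B) - P(A \cap B),
    \qquad
    P(A \cup B) = P(A) + P(B) \ \text{ if } A \cap B = \varnothing.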

It is interesting that you insist that beliefs ought to be represented by classical probability. Given that we can construct multiple kinds of probability theory, on what grounds should we prefer one over the others to represent what 'belief' ought to be?

In response to Trust in Bayes
Comment author: Gray_Area 30 January 2008 09:52:15AM 0 points

"the real reason for the paradox is that it is completely impossible to pick a random integer from all integers using a uniform distribution: if you pick a random integer, on average lower integers must have a greater probability of being picked"

Isn't there a simple algorithm which samples uniformly from a list without knowing its length? Keywords: 'reservoir sampling.'
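A minimal sketch of the algorithm alluded to (Algorithm R, selecting one item uniformly from a stream of unknown length). Note that this doesn't rescue a uniform distribution over all integers, since the stream must still terminate:

    import random

    def reservoir_sample(stream):
        """Return one element chosen uniformly at random from an
        iterable of unknown length, in a single pass."""
        chosen = None
        for i, item in enumerate(stream, start=1):
            # Keep the i-th item with probability 1/i; by induction,
            # every item seen so far is retained with probability 1/i.
            if random.randrange(i) == 0:
                chosen = item
        return chosen

    # e.g. a uniform draw from a generator whose length we never ask for:
    print(reservoir_sample(x * x for x in range(10 ** 6)))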

In response to The Allais Paradox
Comment author: Gray_Area 19 January 2008 12:50:08PM 13 points

People don't maximize expectations. Expectation-maximizing organisms -- if they ever existed -- died out long before rigid spines made of vertebrae came on the scene. The reason is simple: expectation maximization is not robust (outliers in the environment can cause large behavioral changes). This is as true now as it was before evolution invented intelligence and introspection.

If people's behavior doesn't agree with the axiom system, the fault may not be with them; perhaps they know something the mathematician doesn't.

Finally, the 'money pump' argument fails because it changes the rules of the game. The original question was, I assume, about playing the game _once_, whereas you would presumably iterate the money pump until the pennies turn into millions. The problem is that if you asked people to make the original choices a million times, they would, correctly, maximize expectations. When you are talking about a million tries, expectations are the appropriate framework; when you are talking about one try, they are not.
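A quick simulation illustrates the one-try/million-tries distinction; the payoffs below echo the gamble from the post (a certain $24,000 versus a 33/34 chance of $27,000), but treat the exact numbers as illustrative:

    import random

    CERTAIN = 24_000                         # the sure payoff
    RISKY_PAYOFF, RISKY_P = 27_000, 33 / 34  # higher expected value, ~26,206

    def play_risky():
        # One draw of the risky gamble: pays 27,000 with prob 33/34, else 0.
        return RISKY_PAYOFF if random.random() < RISKY_P else 0

    # Over a million plays the average converges to the expectation, so the
    # risky gamble reliably beats the sure thing in aggregate; in a single
    # play it leaves you with nothing 1 time in 34.
    n = 1_000_000
    print(sum(play_risky() for _ in range(n)) / n, "per play vs", CERTAIN)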
