Liron comments on Rationality quotes: April 2010 - Less Wrong

Post author: wnoise 01 April 2010 08:41PM


Comment author: Liron 02 April 2010 04:27:39AM 2 points [-]

Bayesians don't believe they lucked into their priors. They have a reflectively consistent causal explanation for their priors.

Comment author: Unknowns 02 April 2010 05:19:23AM 1 point [-]

Even if their explanation were correct, they would still have lucked into them. Others have different priors, and no doubt different causes for their priors. So those Bayesians would still have been lucky to have the causes that produce correct priors rather than incorrect ones.

Comment author: Eliezer_Yudkowsky 02 April 2010 05:43:01PM 3 points [-]

But that still doesn't need to be luck. I got my priors offa evolution and they are capable of noticing when something works or doesn't work a hundred times in a row. True, if I had a different prior, I wouldn't care about that either. But even so, that I have this prior is not a question of luck.
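Eliezer's "a hundred times in a row" is doing real quantitative work here: the likelihood ratio from that many consistent observations swamps any prior that gives the hypothesis nonzero odds. A minimal sketch, with all the numbers being illustrative assumptions rather than anything from the thread:

```python
# Hypothesis H: the procedure works 99% of the time.
# Alternative: it succeeds only at chance level, 50%.
p_h, p_alt = 0.99, 0.5

# Likelihood ratio after 100 consecutive successes.
lr = (p_h / p_alt) ** 100

# Even a deeply skeptical prior, in odds form, is overwhelmed.
prior_odds = 1e-6
posterior_odds = prior_odds * lr

print(posterior_odds > 1e20)  # True: the evidence dominates the prior
```

The point is that any prior capable of "noticing" at all will end up in roughly the same place after enough consistent data.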

Comment author: Yvain 03 April 2010 10:30:13PM 8 points [-]

It is luck in a sense - every way that your opinion differs from someone else, you believe that factors outside of your control (your intelligence, your education, et cetera) have blessed you in such a way that your mind has done better than that poor person's.

It's just that it's not a problem. Lottery winners got richer than everyone else by luck, but that doesn't mean they're deluded in believing that they're rich. But someone who had only weak evidence ze won the lottery should be very skeptical. The real point of this quote is that being much less wrong than average is an improbable state, and you need correspondingly strong evidence to support the possibility. I think many of the people on this site probably do have some of that evidence (things like higher than average IQ scores would be decent signs of higher than normal probability of being right) but it's still something worth worrying about.
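Yvain's lottery point is a direct consequence of Bayes' theorem: when the prior is low enough, weak evidence leaves the posterior low, and only very strong evidence can raise it. A minimal sketch, with illustrative numbers assumed here (not taken from the thread):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem: P(H | E) = P(E | H) P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Prior probability of holding a winning ticket: 1 in 10 million.
prior = 1e-7

# Weak evidence: a check that is right 99% of the time.
weak = posterior(prior, 0.99, 0.01)

# Strong evidence: a check wrong only one time in a billion.
strong = posterior(prior, 1.0, 1e-9)

print(weak)    # ~1e-5: almost certainly still a losing ticket
print(strong)  # ~0.99: now credible
```

This is the "evidence proportional to the amount of purported luck" requirement made numerical.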

Comment author: Eliezer_Yudkowsky 04 April 2010 01:24:22AM 6 points [-]

I think I agree with that: There's nothing necessarily delusive about believing you got lucky, but it should generally require (at least) an amount of evidence proportional to the amount of purported luck.

Comment author: cousin_it 03 April 2010 11:50:51AM 0 points [-]

Then it would make sense to use some evolutionary thingy instead of Bayesianism as your basic theory of "correct behavior", as Shalizi has half-jokingly suggested.

Comment author: Vladimir_Nesov 02 April 2010 07:09:29PM *  -2 points [-]

Priors can't be correct or incorrect.

(Clarified in detail in this comment.)

Comment author: PhilGoetz 02 April 2010 11:07:45PM *  2 points [-]

Sounds mysterious to me. Priors are not claims about the world?

Comment author: Vladimir_Nesov 02 April 2010 11:11:58PM *  1 point [-]

Not quite. They are the way you process claims about the world. A claim has to come in the context of a method for its evaluation, but a prior can only be evaluated by comparing it to itself...

Comment author: Vladimir_Nesov 03 April 2010 09:35:44AM 2 points [-]

This downvoting should be accompanied by discussion. I've answered the objections that were voiced, but naturally I can't refute an incredulous stare.

Comment author: Nick_Tarleton 03 April 2010 10:14:11PM *  0 points [-]

The normal way of understanding priors is that they are or can be expressed as joint probability distributions, which can be more or less well-calibrated. You're skipping over a lot of inferential steps.

Comment author: Vladimir_Nesov 03 April 2010 10:19:33PM *  0 points [-]

Right. We could talk of the quality of an approximation to a fixed object that is defined as the target of a pursuit, even though we can't choose that fixed object in the process, and thus there is no sense in having preferences about its properties.

Comment author: Nick_Tarleton 03 April 2010 10:28:43PM 1 point [-]

I can't tell what you're talking about.

Comment author: Vladimir_Nesov 04 April 2010 08:05:03AM *  0 points [-]

Say you are trying to figure out what the mass of an electron is. As you develop your experimental techniques, there will be better or worse approximate answers along the way. It makes sense to characterize the approximations to the mass you seek to measure as more or less accurate, and to characterize someone else's wild guesses about this value as correct or not correct at all.

On the other hand, it doesn't make sense to similarly characterize the actual mass of an electron. The actual mass of an electron can't be correct or incorrect, can't be more or less well-calibrated -- talking this way would indicate a conceptual confusion.

When I talked about prior or preference in the above comments, I meant the actual facts, not particular approximations to those facts: the concepts that we might want to approximate, not the approximations themselves. Characterizing these facts as correct or incorrect doesn't make sense for similar reasons.

Furthermore, since they are fixed elements of the ideal decision-making algorithm, it doesn't make sense to ascribe preference to them (more or less useful, more or less preferable). This is a bit more subtle than with the example of the mass of an electron, since in that case we had a factual estimation process, while with decision-making we also have a moral estimation process. With factual estimation, the fact that we are approximating isn't itself an approximation, and so can't be more or less accurate. With moral estimation, we are approximating the true value of a decision (event), and the actual value of a decision (event) can't be too high or too low.

Comment author: RobinZ 04 April 2010 02:14:55PM 1 point [-]

I follow you up until you conclude that priors cannot be correct or incorrect. An agent with more accurate priors will converge toward the actual answer more quickly - I'll grant that's not a binary distinction, but it's a useful one.
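RobinZ's convergence claim can be illustrated with a conjugate Beta-Bernoulli model: two agents estimate a coin's bias from the same data, one starting from a prior concentrated near the truth and one from a prior concentrated far from it. All numbers here are assumed for illustration:

```python
# True bias of the coin (known only to us, not to the agents).
true_p = 0.7
flips = [1] * 7 + [0] * 3  # ten flips matching the true rate

def mean_after(alpha, beta, data):
    """Posterior mean of a Beta(alpha, beta) prior after Bernoulli data."""
    alpha += sum(data)
    beta += len(data) - sum(data)
    return alpha / (alpha + beta)

accurate = mean_after(7, 3, flips)    # prior mean 0.7, near the truth
inaccurate = mean_after(1, 9, flips)  # prior mean 0.1, far from it

# The agent with the more accurate prior is closer after the same evidence.
print(abs(accurate - true_p) < abs(inaccurate - true_p))  # True
```

Both agents converge eventually; the accurate prior simply gets there with less data, which is the non-binary but useful distinction RobinZ is drawing.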

Comment author: Vladimir_Nesov 04 April 2010 02:36:42PM 0 points [-]

If you are an agent with a "less accurate prior", then you won't be able to recognize a "more accurate prior" as a better one. You are trying to look at the situation from the outside, but that's not possible when we are discussing your own decision-making algorithms.

Comment author: wnoise 02 April 2010 07:34:40PM 0 points [-]

They can be more or less useful, though.

Comment author: Vladimir_Nesov 02 April 2010 08:35:51PM *  0 points [-]

According to what criterion? You'd end up comparing a prior to the prior you hold, with the "best" prior for you just being the same as yours. Likewise with preference. Clearly not the concept Unknowns was assuming -- you don't need luck to satisfy a tautology.

Comment author: wnoise 02 April 2010 09:02:49PM 0 points [-]

Of being better at predicting what happens, of course.

Comment author: Vladimir_Nesov 02 April 2010 09:04:20PM 2 points [-]

You can't judge based on info you don't have. Based on what you do have, you can do no better than your current prior.

Comment author: PhilGoetz 05 April 2010 03:29:53PM *  1 point [-]

But you can go and get info, and then judge, and say, "That prior that I held was wrong."

You're speaking as if all truth were relative. I don't know if you mean this, but your comments in this thread imply that there is no such thing as truth.

You've recently had other discussions about values and ethics, and the argument you're making here parallels your position in that argument. You may be trying to keep your beliefs about values, and about truths in general, in syntactic conformance. But rationally I hope you agree they're different.

Comment author: Vladimir_Nesov 05 April 2010 04:56:33PM *  1 point [-]

But you can go and get info, and then judge, and say, "That prior that I held was wrong."

It is only wrong not to update.

Comment author: wnoise 05 April 2010 06:14:09PM 1 point [-]

And, of course, the priors must be updated in the correct way.

Nonetheless, it is greatly preferable to have a prior that led to decisions that gave high utility, rather than one that led to decisions that gave low utility. Of course this can't be measured beforehand. But the whole point of updating is to get better priors, in this exact sense, for the next round of decisions and updates.
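The "better priors for the next round" mechanism is exactly conjugate updating: each round's posterior becomes the next round's prior. A minimal sketch with assumed observation counts:

```python
def update(alpha, beta, heads, tails):
    """Beta-Bernoulli conjugate update: the posterior becomes the next prior."""
    return alpha + heads, beta + tails

# Start from a nearly ignorant Beta(1, 1) prior over the coin's bias.
alpha, beta = 1, 1
rounds = [(8, 2), (7, 3), (9, 1)]  # observed (heads, tails) per round

for heads, tails in rounds:
    alpha, beta = update(alpha, beta, heads, tails)
    # Decisions in the next round are made under Beta(alpha, beta) as the prior.

print(alpha / (alpha + beta))  # posterior mean ≈ 0.78
```

Each round's decisions are made under a prior informed by all earlier rounds, which is the sense in which updating "gets better priors" over time.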

Comment author: wnoise 02 April 2010 09:11:30PM 0 points [-]

I am in violent agreement.