
Eliezer_Yudkowsky comments on You only need faith in two things - Less Wrong Discussion

22 points · Post author: Eliezer_Yudkowsky · 10 March 2013 11:45PM




Comment author: Eliezer_Yudkowsky · 12 March 2013 05:55:06PM · 2 points

> But even one epistemic error is enough to cause an arbitrarily large loss in utility.

This is always true.
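One way to see why this holds in general (a sketch added for illustration, not part of the original exchange; the bet, the event A, and the quantities p, q, c, N are all mine): the stakes of a decision can be scaled independently of the size of the probability error, so a single fixed miscalibration can be made to cost as much expected utility as you like.

```latex
% Illustrative sketch (my addition): one fixed credence error can cost
% arbitrarily much expected utility, because stakes scale independently
% of the size of the error.  Suppose the agent assigns credence p to an
% event A whose true probability is q > p, and is offered a bet that
% costs c and pays N if A occurs.
\[
  \underbrace{pN - c}_{\text{agent's expected value}}
  \qquad\text{vs.}\qquad
  \underbrace{qN - c}_{\text{true expected value}}
\]
% Set the price at c = (p+q)N/2.  The agent declines the bet (pN - c < 0)
% and so forgoes a true expected gain of
\[
  qN - c \;=\; \frac{(q-p)\,N}{2},
\]
% which grows without bound as N increases, for any fixed error q - p > 0.
```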

> Since you can't update away this belief until it's too late, it does seem important to have "reasonable" priors, rather than just assigning a non-superexponentially-tiny probability to "induction works".

I'd say more that, besides your one reasonable prior, you also need to avoid making various sorts of specifically harmful mistakes; but this only becomes true when instrumental welfare, as well as epistemic welfare, is taken into account. :)

Comment author: Wei_Dai · 15 April 2013 08:25:39AM · 1 point

Do you think it's useful to consider "epistemic welfare" independently of "instrumental welfare"? To me it seems that approach has led to a number of problems in the past:

  1. Solomonoff Induction was historically justified in a way similar to your post: you should use the universal prior because, whatever the "right" prior is, if it is computable then substituting the universal prior will cost you only a bounded number of epistemic errors (see the sketch after this list). I think this sort of argument is more impressive/persuasive than it should be (at least for some people, including myself when I first came across it), and it makes them erroneously think that the problem of finding "the right prior" or "a reasonable prior" is already solved, or doesn't need to be solved.
  2. Thinking that anthropic reasoning / indexical uncertainty is clearly an epistemic problem and hence ought to be solved within epistemology (rather than decision theory), leading, for example, to dozens of papers arguing over the right way to do Bayesian updating in the Sleeping Beauty problem (a simulation of the underlying ambiguity is sketched below).
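For reference, here is the dominance bound that item 1 alludes to, as I understand it from the standard Solomonoff induction literature; the statement and the constant below are a rough sketch added for illustration, not something from this thread.

```latex
% Dominance of the universal prior (standard result, stated roughly;
% not taken from this thread).  For every computable prior (semimeasure)
% mu over binary sequences, the universal prior M satisfies
\[
  M(x) \;\ge\; 2^{-K(\mu)}\,\mu(x) \quad\text{for all finite strings } x,
\]
% where K(mu) is the length of the shortest program computing mu.
% It follows that the total expected prediction error from using M in
% place of the true mu is bounded by a constant depending only on mu:
\[
  \sum_{t=1}^{\infty}
    \mathbb{E}_{\mu}\!\Bigl[\bigl(M(x_t{=}1 \mid x_{<t})
      - \mu(x_t{=}1 \mid x_{<t})\bigr)^{2}\Bigr]
  \;\le\; \frac{\ln 2}{2}\,K(\mu).
\]
% "Only a bounded number of epistemic errors" -- but the bound says
% nothing about which predictions go wrong, or how costly they are
% in utility terms.
```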
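And here is a minimal simulation (also my addition, with my own variable names and setup) of the ambiguity that keeps the Sleeping Beauty literature going: whether Beauty's credence in heads should track the per-experiment frequency or the per-awakening frequency.

```python
import random

# Sleeping Beauty, simulated (illustrative sketch, not from the thread).
# Heads -> Beauty is woken once; tails -> she is woken twice.
RUNS = 100_000
heads_runs = 0
heads_awakenings = 0
total_awakenings = 0

for _ in range(RUNS):
    heads = random.random() < 0.5
    awakenings = 1 if heads else 2
    heads_runs += heads
    heads_awakenings += awakenings if heads else 0
    total_awakenings += awakenings

print(heads_runs / RUNS)                    # ~0.5: frequency of heads per experiment
print(heads_awakenings / total_awakenings)  # ~1/3: frequency of heads per awakening
```

Both numbers are correct answers to different questions; the dispute is over which question Beauty's credence is supposed to answer.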