
Wei_Dai comments on You only need faith in two things - Less Wrong Discussion

Post author: Eliezer_Yudkowsky, 10 March 2013 11:45PM




Comment author: Wei_Dai, 15 April 2013 08:25:39AM, 1 point

Do you think it's useful to consider "epistemic welfare" independently of "instrumental welfare"? To me it seems that this approach has led to a number of problems in the past.

  1. Solomonoff Induction was historically justified in a way similar to your post: you should use the universal prior, because whatever the "right" prior is, if it's computable then substituting the universal prior will cost you only a bounded number of epistemic errors. I think this sort of argument is more impressive/persuasive than it should be (at least for some people, including myself when I first came across it), and it can make people erroneously think that the problem of finding "the right prior" or "a reasonable prior" has already been solved, or doesn't need to be solved.
  2. Thinking that anthropic reasoning / indexical uncertainty is clearly an epistemic problem, and hence ought to be solved within epistemology (rather than decision theory), has led, for example, to dozens of papers arguing over the right way to do Bayesian updating in the Sleeping Beauty problem.
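The dominance argument behind point 1 can be sketched numerically. This is a hedged toy illustration, not Solomonoff Induction itself: the "universal" prior here is just a uniform mixture over a handful of biased-coin hypotheses (the real universal prior mixes over all computable hypotheses), but the same bound holds — the mixture's log-loss exceeds the true model's by at most minus the log of the weight the mixture assigns to the true model, no matter how much data arrives.

```python
import math

# Toy hypothesis class: coins with these biases, mixed uniformly.
biases = [0.1, 0.3, 0.5, 0.7, 0.9]
weights = [1 / len(biases)] * len(biases)  # stand-in "universal" prior

def seq_prob(bias, seq):
    """Probability that a coin with this bias emits the bit sequence."""
    p = 1.0
    for b in seq:
        p *= bias if b == 1 else 1 - bias
    return p

def mixture_prob(seq):
    """Probability the mixture assigns to the sequence."""
    return sum(w * seq_prob(h, seq) for w, h in zip(weights, biases))

true_bias = 0.7
seq = [1, 1, 0, 1, 1, 1, 0, 1]  # data from the true coin

# Regret in log-loss relative to the true model, and its a-priori bound:
# mixture_prob(seq) >= w_true * seq_prob(true_bias, seq), hence
# regret <= -log(w_true) for every sequence length.
regret = -math.log(mixture_prob(seq)) + math.log(seq_prob(true_bias, seq))
bound = -math.log(weights[biases.index(true_bias)])
```

The bound says only that the errors are limited; it says nothing about which mixture weights are "right", which is the gap the argument papers over.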
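For point 2, a small Monte Carlo sketch (my own illustration, not from the comment) shows why Sleeping Beauty resists a purely epistemic answer: scoring the coin per experiment recovers the "halfer" answer of 1/2, while scoring it per awakening recovers the "thirder" answer of 1/3 — the dispute is over which bets count, not over the frequencies themselves.

```python
import random

random.seed(0)
trials = 100_000
heads_experiments = 0
heads_awakenings = 0
total_awakenings = 0

for _ in range(trials):
    heads = random.random() < 0.5
    # Heads: Beauty is woken once; tails: woken on Monday and Tuesday.
    awakenings = 1 if heads else 2
    total_awakenings += awakenings
    if heads:
        heads_experiments += 1
        heads_awakenings += 1  # the single heads awakening

per_experiment = heads_experiments / trials          # approx 1/2
per_awakening = heads_awakenings / total_awakenings  # approx 1/3
```

Both frequencies are objective facts about the same process; which one the "credence on awakening" should track depends on what the credence is for, which is the decision-theoretic framing.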