mattnewport comments on Is Rationality Maximization of Expected Value? - Less Wrong

-23 Post author: AnlamK 22 September 2010 11:16PM




Comment author: mattnewport 24 September 2010 05:53:53PM 0 points

Do you think the chain of reasoning is infinite?

Not infinite, but for humans all priors (or at least their non-strict-Bayesian equivalents) ultimately derive either from sensory input over the individual's lifetime or from millions of years of evolution baking 'hard-coded' priors into the human brain.

When dealing with any particular question, you essentially draw a somewhat arbitrary line: you lump millions of years of accumulated sensory input and evolutionary 'learning' together with a lifetime of actual learning, assign a single real number to it, and call it a 'prior'. But this is just a way of making the calculation tractable.
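A minimal sketch of the point (my own illustration, not from the comment): however the prior was arrived at, Bayes' rule only ever sees it as a single scalar. The function and numbers below are hypothetical.

```python
def bayes_update(prior: float, p_evidence_given_h: float,
                 p_evidence_given_not_h: float) -> float:
    """Return P(H | E) given a scalar prior P(H) and the two likelihoods.

    Everything upstream of `prior` -- a lifetime of experience plus any
    evolutionary 'hard-coding' -- is already collapsed into one number.
    """
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1.0 - prior)
    return numerator / denominator

# A prior of 0.5 summarizing everything accumulated so far, updated on
# evidence that is three times as likely if the hypothesis is true:
posterior = bayes_update(prior=0.5,
                         p_evidence_given_h=0.75,
                         p_evidence_given_not_h=0.25)
print(posterior)  # 0.75
```

Wherever you draw the line, the calculation is the same; only the number fed in as `prior` changes.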