Vladimir_Nesov comments on Model Uncertainty, Pascalian Reasoning and Utilitarianism - Less Wrong

Post author: multifoliaterose 14 June 2011 03:19AM


Comment author: Vladimir_Nesov 15 June 2011 02:24:48AM * 2 points

I don't feel there is a need for that. You just present these things as tools, not fundamental ideas, while also discussing why they are not fundamental and why figuring out the fundamental ideas is important. The relevant lesson is along the lines of Fake Utility Functions (the post has "utility function" in it, but it doesn't seem to need to), applied more broadly to epistemology.

Comment author: Wei_Dai 15 June 2011 04:56:08AM 3 points

> You just present these things as tools, not fundamental ideas, also discussing why they are not fundamental and why figuring out fundamental ideas is important.

Thinking of Bayesianism as fundamental is what made some people (at least Eliezer and me) think that fundamental ideas exist and are important. (Does that mean we ought to rethink whether fundamental ideas exist and are important?) From Eliezer's My Bayesian Enlightenment:

> The first time I heard of "Bayesianism", I marked it off as obvious; I didn't go much further in than Bayes's rule itself. At that time I still thought of probability theory as a tool rather than a law. I didn't think there were mathematical laws of intelligence (my best and worst mistake). Like nearly all AGI wannabes, Eliezer2001 thought in terms of techniques, methods, algorithms, building up a toolbox full of cool things he could do; he searched for tools, not understanding. Bayes's Rule was a really neat tool, applicable in a surprising number of cases.

(Besides, even if your suggestion is feasible, somebody would have to rewrite a great deal of Eliezer's material to not present Bayesianism as fundamental.)

Comment author: Vladimir_Nesov 16 June 2011 09:18:14PM 1 point

The ideas of Bayesian credence levels and maximum entropy priors are important epistemic tools that in particular allow you to understand that those kludgy AI tools won't get you what you want.

> (Besides, even if your suggestion is feasible, somebody would have to rewrite a great deal of Eliezer's material to not present Bayesianism as fundamental.)

(It doesn't matter for the normative judgment, but I guess that's why you wrote this in parentheses.)

I don't think Eliezer misused the idea in the sequences, as the Bayesian way of thinking is a very important tool that must be mastered to understand many important arguments. And I guess at this point we are arguing about the sense of "fundamental".