cousin_it comments on Metaphilosophical Mysteries - Less Wrong

35 Post author: Wei_Dai 27 July 2010 12:55AM




Comment author: Vladimir_Nesov 27 July 2010 11:29:29AM 4 points

Consider a universal prior based on an arbitrary logical language L, and a device that can decide the truth value of any sentence in that language. Such a device has no finite description in L (according to Tarski's undefinability theorem), so the universal prior based on L would assign it zero probability.

What do you mean by "decide the truth value"? Most statements are neither valid nor unsatisfiable, so there is no truth value for them. We are not assuming any models here; we are just assigning plausibility to statements, i.e. to elements of the language's Lindenbaum algebra.
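To make the point concrete, here is a toy sketch of my own (not from either commenter): in a propositional language, only valid and unsatisfiable sentences have a language-level truth value; a brute-force check over all truth assignments shows that an ordinary sentence like "p and q" is contingent, true in some models and false in others.

```python
from itertools import product

def classify(sentence, variables):
    """Classify a sentence (a Python boolean expression over the given
    variable names) by evaluating it in every model, i.e. every truth
    assignment to the variables."""
    values = []
    for assignment in product([False, True], repeat=len(variables)):
        model = dict(zip(variables, assignment))
        values.append(eval(sentence, {}, model))
    if all(values):
        return "valid"          # true in every model
    if not any(values):
        return "unsatisfiable"  # false in every model
    return "contingent"         # no language-level truth value

print(classify("p or not p", ["p"]))    # valid
print(classify("p and not p", ["p"]))   # unsatisfiable
print(classify("p and q", ["p", "q"]))  # contingent
```

A hypothetical "device that decides truth values" would have nothing well-defined to say about the contingent case, which is the typical one.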

Such a device has no finite description in L (according to Tarski's undefinability theorem), so the universal prior based on L would assign it zero probability.

Whatever model you have in mind, it will be classified on one side of each statement of the language. We are assigning plausibility to statements, and hence to classes of structures, not to individual structures (which are like individual points of a continuous distribution).
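The steps above can be illustrated with a minimal sketch of my own (an assumption for illustration, not Nesov's actual formalism): give each sentence of a propositional language the fraction of models in which it holds. Plausibility then attaches to the class of structures a statement carves out, and any single fully-specified model gets vanishingly small weight as the language grows.

```python
from itertools import product

def plausibility(sentence, variables):
    """Toy plausibility: the fraction of all models (truth assignments
    to the variables) that satisfy the sentence."""
    models = [dict(zip(variables, a))
              for a in product([False, True], repeat=len(variables))]
    satisfying = sum(eval(sentence, {}, m) for m in models)
    return satisfying / len(models)

vs = ["p", "q", "r"]
print(plausibility("p", vs))        # 0.5  -- half of all models
print(plausibility("p and q", vs))  # 0.25

# An individual model corresponds to the conjunction pinning down every
# variable; its plausibility is 2**-n, shrinking as the language grows:
print(plausibility("p and q and r", vs))  # 0.125
```

In the limit of a richer language this mirrors the point about individual points of a continuous distribution: single structures carry measure zero, while statements still carry nonzero plausibility.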

Comment author: cousin_it 27 July 2010 01:57:25PM 9 points

Vladimir, ever since I joined this site I've been hearing many interesting not-quite-formal ideas from you, and as my understanding grows I can parse more and more of what you say. But you always seem to move on to the next idea before finishing the last one. I think you should spend way more effort on transforming your ideas into actual theorems with proofs and publishing them online. Sharing "intuitions" only gets us so far.

I have much less trouble reading math papers from unfamiliar fields than reading your informal arguments, because your arguments rely on unstated background assumptions much more than you seem to realize. Properly preparing your results for publication, even if they aren't actually published in a peer-reviewed venue, should fix this problem.

Comment author: Vladimir_Nesov 27 July 2010 02:19:08PM 3 points

I discuss things here because it's fun (and sometimes I learn useful lessons from expressing them here, in addition to my private notes), not because I consider it an effective means of communication. The not-quite-formal ideas are, most of the time, genuinely not-quite-formal, rather than informally communicated formal ideas (often because I don't understand the relevant math, a failure I'm working on). The dropped ideas are those I either found useless, meaningless, or wrong, or those that never came up in discussion after some point.

Communicating informal ideas is too difficult, specifically because they assume tons of unstated background, background that you not only have to state but also convince people of. This is work for both the writer and the reader. In addition, these informal ideas are not particularly valuable, which, together with the difficulty of communication, makes the whole endeavor a waste of effort.

(At least on LW, common background gives a chance for some remarks to be understood, without that background having to be delivered explicitly.)

The plan is for all these hunches to eventually come together in a framework for decision theory that should be transparently mathematical, and thus allow efficient communication with little hidden background.