IlyaShpitser comments on Logics for Mind-Building Should Have Computational Meaning - Less Wrong

21 [deleted] 25 September 2014 09:17PM




Comment author: IlyaShpitser 30 September 2014 08:18:16AM 3 points [-]

Hi Eli,

A lot of effort in AI went into combining the advantages of logic and probability theory for representing things. Languages that admit uncertainty and are strictly more powerful than propositional logic are practically a cottage industry now. There is Brian Milch's BLOG, Pedro Domingos's Markov logic networks, etc. etc.
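For concreteness, here is a toy sketch (mine, not from the thread) of the Markov logic idea: weighted first-order rules define an unnormalized distribution over possible worlds, with P(world) proportional to exp(weight × number of satisfied ground formulas). The domain, the single rule, and the weight 1.5 below are all invented for illustration:

```python
import itertools
import math

people = ["A", "B"]
friends = {("A", "B")}   # evidence: a fixed Friends relation
W = 1.5                  # weight on the rule below (arbitrary choice)

def n_satisfied(smokes):
    # groundings of: Friends(x, y) & Smokes(x) -> Smokes(y)
    # an implication is violated only when antecedent holds and consequent fails
    return sum(1 for (x, y) in friends
               if not (smokes[x] and not smokes[y]))

def world_weight(smokes):
    # unnormalized probability of a possible world
    return math.exp(W * n_satisfied(smokes))

# enumerate every truth assignment to Smokes, then condition on Smokes(A)=True
worlds = [dict(zip(people, vals))
          for vals in itertools.product([True, False], repeat=len(people))]
consistent = [w for w in worlds if w["A"]]
Z = sum(world_weight(w) for w in consistent)
p_b = sum(world_weight(w) for w in consistent if w["B"]) / Z
print(p_b)  # sigmoid(1.5) ~= 0.818
```

With one friendship pair and one rule, the conditional probability collapses to the logistic function of the weight, which is why a harder rule (larger W) pushes the query closer to certainty.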

Have you read Joe Halpern's paper on semantics:

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.5699

Comment author: [deleted] 30 September 2014 06:10:22PM *  2 points [-]

A lot of effort in AI went into combining the advantages of logic and probability theory for representing things. Languages that admit uncertainty and are strictly more powerful than propositional logic are practically a cottage industry now. There is Brian Milch's BLOG, Pedro Domingos's Markov logic networks, etc. etc.

They culminate in the present-day field of probabilistic programming, which is exactly what the lab I'm about to go visit in a few short hours studies. It is the approach to this problem that I think makes sense: treat the search for a program as a search for a proof of a proposition, make programs represent distributions over proofs rather than single proofs, and then probabilize everything so that the various forms of statistical inference correspond to updating those distributions over proofs. The result is statistically learned, logically rich knowledge about arbitrary constructions. Curry-Howard + probability = fully general probabilistic models.
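The "distribution over executions" idea can be sketched in plain Python (my own minimal example, not from the comment): a generative program defines a distribution over its execution traces, and inference by rejection sampling keeps only the traces consistent with an observation. Here the model, the observation, and the query are all invented for illustration:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def model():
    # a probabilistic "program": each call is one execution trace
    a = random.random() < 0.5
    b = random.random() < 0.5
    return a, b

# inference by rejection sampling: discard traces inconsistent with
# the observation (a or b), then query P(a) among the survivors
samples = []
while len(samples) < 100_000:
    a, b = model()
    if a or b:  # condition on the observation
        samples.append(a)

estimate = sum(samples) / len(samples)
print(estimate)  # close to the exact answer 2/3
```

Real probabilistic programming systems replace brute-force rejection with smarter inference (MCMC, variational methods), but the semantics is the same: conditioning a distribution over program executions.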

So, why does anyone still consider "logical probability" an open problem, given that all of these exist? I am frustratingly close to raising my belief in the sentence, "Academia solved what LW (and much of the rest of the AI community) still treats as open problems decades ago, but in such thick language that nobody quite realized it."

I mean, Hutter published a 52-page paper on probability values for sentences in first-order logic just last year, and I generally consider him professional-level competent.

Have you read Joe Halpern's paper on semantics:

Not yet. I'm looking it over now.