eli_sennesh comments on Harper’s Fishing Nets: a review of Plato’s Camera by Paul Churchland - Less Wrong

14 [deleted] 02 July 2015 02:19AM


Comment author: [deleted] 02 July 2015 10:50:48PM, 2 points

Denotational equivalence is undecidable for any class of languages strictly stronger than deterministic pushdown automata; even for DPDAs themselves, where it is decidable, the decision procedure is far from trivial.
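As a contrast case, for weaker classes the decision really is mechanical: language equivalence of two DFAs can be decided by a breadth-first search of their product automaton, looking for a reachable state pair that disagrees on acceptance. A minimal Python sketch, with an illustrative DFA encoding (this is not from any particular library):

```python
from collections import deque

def dfa_equivalent(dfa_a, dfa_b, alphabet):
    """Decide language equivalence of two complete DFAs by BFS over
    the product automaton: the languages differ iff some reachable
    state pair disagrees on acceptance.

    Each DFA is encoded as (start_state, transitions, accepting_set),
    with transitions[(state, symbol)] -> state.
    """
    start_a, delta_a, accept_a = dfa_a
    start_b, delta_b, accept_b = dfa_b
    seen = {(start_a, start_b)}
    queue = deque([(start_a, start_b)])
    while queue:
        qa, qb = queue.popleft()
        if (qa in accept_a) != (qb in accept_b):
            return False  # some string reaching this pair distinguishes them
        for sym in alphabet:
            pair = (delta_a[(qa, sym)], delta_b[(qb, sym)])
            if pair not in seen:
                seen.add(pair)
                queue.append(pair)
    return True  # no reachable disagreement: same language
```

For example, a two-state DFA accepting strings with an even number of 1s is reported equivalent to a redundant three-state DFA for the same language, and inequivalent to the odd-count automaton.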

This doesn't mean we can't obtain evidence that two languages or automata are equivalent via some mechanism other than a decision algorithm, of course. Nor does it mean we can't assign a probability of equality in an entirely sensible way. In fact, in probabilistic programming, probabilistic extensional equality of random variables is trivial to model; the catch is that there you are dealing with zero-free-parameter thunks rather than arbitrarily parameterized lambdas.
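A minimal sketch of why this is trivial for thunks, reading "probabilistic extensional equality" as the probability that paired draws agree, estimated by Monte Carlo (`prob_equal` is a hypothetical helper, not an existing library function):

```python
import random

def prob_equal(thunk_a, thunk_b, n_samples=100_000):
    """Monte Carlo estimate of P(A == B) for two zero-free-parameter
    stochastic thunks: draw paired independent samples and count how
    often they agree. (Illustrative helper, not a library API.)"""
    hits = sum(thunk_a() == thunk_b() for _ in range(n_samples))
    return hits / n_samples

random.seed(0)  # make the estimate reproducible
die = lambda: random.randint(1, 6)
estimate = prob_equal(die, die)  # two independent fair dice agree w.p. 1/6
```

Because the thunks take no parameters, equality is a single scalar probability; for parameterized lambdas the analogous question quantifies over all inputs, which is where decidability breaks down.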

So we can't really decide the denotational equivalence of lambda expressions (or recurrent neural networks), but I think that, from a cognitive perspective, decision algorithms are only useful for obtaining a 100% likelihood of equivalence. That's powerful when you can get it, but you should also be able to obtain non-100% likelihoods in other ways.

The various forms of probabilistic static analysis can probably handle that problem.

Comment author: johnswentworth 03 July 2015 12:13:10AM, 2 points

So, you're thinking that human abstraction ability derives from probable morphisms rather than certain morphisms over weaker classes? That makes a lot of sense.

On the other hand, from what I've seen in CS classes, humans do not seem very good at recognizing equivalences even between pushdown automata beyond a few simple cases. A human equipped with pencil and lots of paper can do a good job, but that's an awful lot more powerful than just a human.

Comment author: [deleted] 04 July 2015 10:14:52PM, 2 points

> A human equipped with pencil and lots of paper can do a good job, but that's an awful lot more powerful than just a human.

A professional scientist falls more under "human equipped with pencil and lots of paper". Nobody said second-level learning was easy.

> So, you're thinking that human abstraction ability derives from probable morphisms rather than certain morphisms over weaker classes?

Maybe? I think what happens first is that we identify a probable morphism, and then we adjust the two theories at hand to eliminate the morphism's uncertainty by pushing it "upwards" (into the imprecision of the higher-level model) and "downwards" (into the parameter dimensionality of the lower-level model).