Cyan comments on Open Thread June 2010, Part 4 - Less Wrong

Post author: Will_Newsome 19 June 2010 04:34AM


Comment author: Cyan 02 July 2010 03:10:47PM 1 point

I'd like a formal description of what it should be

Solomonoff induction, mebbe?

Comment author: cousin_it 02 July 2010 04:15:05PM 1 point

Wei Dai thought up a counterexample to that :-)

Comment author: steven0461 02 July 2010 07:41:40PM * 1 point

Gelman/Shalizi don't seem to be arguing from the possibility that physics is noncomputable; they seem to think their argument (against Bayes as induction) works even under ordinary circumstances.

Comment author: magfrump 05 July 2010 06:07:47PM 0 points

It seems to me that Wei Dai's argument is flawed (and I may be overly arrogant in saying this; I haven't even had breakfast this morning).

He says the probability of knowing the answer to an uncomputable problem would be evaluated at 0 to begin with, but I don't fundamentally see why "measure-zero hypothesis" should be equivalent to "impossible." For example, the hypothesis "they're making it up as they go along," with probability 2^(-S) based on the size of the set, shrinks at a certain rate as evidence arrives. That means that given any finite amount of inference, the AI should be able to distinguish between two possibilities: they are very good at computing or guessing, vs. all humans have been wrong about mathematics forever. Unless new evidence comes in to support one over the other, "humans have been wrong forever" should keep a consistent probability mass, which will grow in comparison to the other hypothesis, "they are making it up."

Nobody seems to have proposed this (although I may have missed it while skimming some of the replies), and it seems like a relatively simple thing (to me) to adjust the AI's prior distribution to give "impossible" things low but nonzero probability.
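A toy sketch of that idea (all numbers invented for illustration): suppose each round the source answers a halting question, a mere guesser is right with probability 0.5, and a true oracle is always right. Give the "oracle" hypothesis a tiny but nonzero prior and update on a streak of correct answers.

```python
# Toy sketch of giving an "impossible" hypothesis a tiny nonzero prior.
# Assumed setup: a guesser answers correctly with probability 0.5,
# a genuine halting oracle with probability 1.0.
def posterior_oracle(prior_oracle, n_correct):
    p_oracle = prior_oracle * 1.0 ** n_correct        # P(data | oracle)
    p_guess = (1 - prior_oracle) * 0.5 ** n_correct   # P(data | guessing)
    return p_oracle / (p_oracle + p_guess)

print(posterior_oracle(1e-9, 50))  # evidence swamps the tiny prior; close to 1
print(posterior_oracle(0.0, 50))   # prior of exactly 0 stays 0 forever
```

With any nonzero prior, the evidence eventually dominates; with a prior of exactly zero, no streak of successes can ever move the posterior, which is the contrast being drawn between "they are very good at guessing" and "they really have the oracle."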

Comment author: cousin_it 05 July 2010 06:32:40PM *  0 points [-]

Wei Dai's argument was specifically against the Solomonoff prior, which assigns probability 0 to the existence of halting problem oracles. If you have an idea how to formulate another universal prior that would give such "impossible" things positive probability, but still sum to 1.0 over all hypotheses, then by all means let's hear it.

Comment author: magfrump 06 July 2010 06:15:16AM 0 points [-]

Yeah, it is certainly a good argument against that. But the title of the thread is "is induction unformalizable?", and that is the point I'm unconvinced of.

If I were to formalize some kind of prior, I would probably use a lot of epsilons (since zero is not a probability), including an epsilon for "things I haven't thought up yet." On the other hand, I'm not really an expert on any of these things, so I imagine Wei Dai would be able to poke holes in anything I came up with anyway.
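A minimal sketch of the epsilon idea, with an assumed epsilon value: reserve a fixed sliver of prior mass for "things I haven't thought up yet" and rescale the enumerated hypotheses so everything still sums to 1 (this satisfies the summing-to-1.0 constraint, though not by itself the updating problem).

```python
# Sketch: reserve epsilon prior mass for a "none of the above" catch-all
# and renormalize the enumerated hypotheses. EPSILON is an assumed value.
EPSILON = 1e-6

def prior_with_nota(weights, epsilon=EPSILON):
    total = sum(weights)
    scaled = [(1 - epsilon) * w / total for w in weights]
    return scaled, epsilon  # per-hypothesis mass, catch-all mass

scaled, nota = prior_with_nota([4, 2, 1, 1])
print(sum(scaled) + nota)  # sums to 1 (up to float rounding)
```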

Comment author: cousin_it 06 July 2010 08:52:14AM 1 point

There's no general way to have a "none of the above" hypothesis as part of your prior, because it doesn't make any specific prediction and thus you can't update its likelihood as data comes in. See the discussion with Cyan and others about NOTA somewhere around here.
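The objection in toy form (all numbers invented): Bayes' rule needs P(data | H) for every hypothesis, and a catch-all supplies none, so its posterior is driven entirely by whatever likelihood you stipulate for it.

```python
# Updating requires a likelihood for every hypothesis; a "none of the
# above" hypothesis predicts nothing, so any likelihood we assign it
# is arbitrary, and the posterior inherits that arbitrariness.
def update(priors, likelihoods):
    joint = [p * l for p, l in zip(priors, likelihoods)]
    z = sum(joint)
    return [j / z for j in joint]

priors = [0.5, 0.499, 0.001]               # H1, H2, NOTA
print(update(priors, [0.8, 0.1, 0.5])[2])  # stipulating NOTA likelihood 0.5
print(update(priors, [0.8, 0.1, 0.9])[2])  # stipulating 0.9: same data, different posterior
```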

Comment author: magfrump 06 July 2010 08:18:57PM 0 points

Well then I guess I would hypothesize that solving the problem of a universal prior is equivalent to solving the problem of NOTA. I don't really know enough to get technical here. If your point is that it's not a good idea to model humans as Bayesians, I agree. If your point is that it's impossible, I'm unconvinced. Maybe after I finish reading Jaynes I'll have a better idea of the formalisms involved.