FAWS comments on Hedging our Bets: The Case for Pursuing Whole Brain Emulation to Safeguard Humanity's Future - Less Wrong

Post author: inklesspen 01 March 2010 02:32AM


Comments (244)


Comment author: FAWS 03 March 2010 06:26:40PM 0 points

Do you mean "avoiding being overwhelmed by the magnitude of the problem as a whole and making steady progress in small steps" or "substituting wishful thinking for thinking about the problem", or something else?

Comment author: RichardKennaway 03 March 2010 06:32:47PM 0 points

Using wishful thinking to avoid the magnitude of the problem.

Comment author: timtyler 03 March 2010 08:27:40PM 0 points

This is Solomonoff induction:

"Solomonoff’s model of induction rapidly learns to make optimal predictions for any computable sequence, including probabilistic ones. It neatly brings together the philosophical principles of Occam’s razor, Epicurus’ principle of multiple explanations, Bayes theorem and Turing’s model of universal computation into a theoretical sequence predictor with astonishingly powerful properties."

It is hard to describe the claim that Solomonoff induction bears on machine intelligence as "wishful thinking". Prediction is useful and important - and this is basically how you do it.
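The mixture idea behind Solomonoff induction can be made concrete with a toy, computable stand-in: weight each hypothesis by 2^(-length) (Solomonoff's length-based prior), discard hypotheses inconsistent with the observations, and predict by weighted vote. The sketch below uses an artificial hypothesis class of repeating bit patterns - a minimal illustration of the prior-weighted mixture, not the real (incomputable) Solomonoff predictor:

```python
from fractions import Fraction

def patterns(max_len):
    """Enumerate every bit pattern of length 1..max_len.

    Each pattern is a toy 'program' that outputs the pattern repeated
    forever. Its prior weight 2^(-len) mimics Solomonoff's length prior:
    shorter programs get more weight. This finite class is a computable
    stand-in for (not an implementation of) the Solomonoff mixture.
    """
    for n in range(1, max_len + 1):
        for k in range(2 ** n):
            yield tuple((k >> i) & 1 for i in range(n))

def predict_next(observed, max_len=4):
    """Posterior probability that the next bit is 1, under the toy mixture."""
    weight_one = total = Fraction(0)
    for pat in patterns(max_len):
        # discard hypotheses that fail to reproduce the observed prefix
        if any(observed[i] != pat[i % len(pat)] for i in range(len(observed))):
            continue
        prior = Fraction(1, 2 ** len(pat))
        total += prior
        if pat[len(observed) % len(pat)] == 1:
            weight_one += prior
    return weight_one / total if total else None

print(predict_next([0, 1, 0]))  # 5/8: the short pattern (0, 1) dominates
```

After seeing 0, 1, 0 the surviving hypotheses disagree about the next bit, but the shortest consistent pattern carries the most prior weight, so the mixture leans toward continuing the alternation - Occam's razor doing the predictive work.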

Comment author: RichardKennaway 03 March 2010 10:42:50PM 1 point

But:

"Indeed the problem of sequence prediction could well be considered solved, if it were not for the fact that Solomonoff’s theoretical model is incomputable."

and:

"Could there exist elegant computable prediction algorithms that are in some sense universal? Unfortunately this is impossible, as pointed out by Dawid."

and:

"We then prove that some sequences, however, can only be predicted by very complex predictors. This implies that very general prediction algorithms, in particular those that can learn to predict all sequences up to a given Kolmogorov complex[ity], must themselves be complex. This puts an end to our hope of there being an extremely general and yet relatively simple prediction algorithm. We then use this fact to prove that although very powerful prediction algorithms exist, they cannot be mathematically discovered due to Gödel incompleteness. Given how fundamental prediction is to intelligence, this result implies that beyond a moderate level of complexity the development of powerful artificial intelligence algorithms can only be an experimental science."

While Solomonoff induction is mathematically interesting, the paper itself seems to reject your assessment of it.

Comment author: timtyler 03 March 2010 10:51:04PM -1 points

Not at all! I have no quarrel whatsoever with any of that (except some minor quibbles about the distinction between "math" and "science").

I suspect you are not properly weighing the term "elegant" in the second quotation.

The paper is actually arguing that sufficiently comprehensive universal prediction algorithms are necessarily large and complex. Just so.