timtyler comments on A taxonomy of Oracle AIs - Less Wrong

13 Post author: lukeprog 08 March 2012 11:14PM


Comment author: timtyler 09 March 2012 05:53:03PM 1 point

Very accurate and general Predictors may be based on Solomonoff's theory of universal induction. Very powerful Predictors are unsafe in a rather surprising way: when given sufficient data about the real world, they exhibit goal-seeking behavior, i.e. they calculate a distribution over future data in a way that brings about certain real-world states. This is surprising, since a Predictor is theoretically just a very large and expensive application of Bayes' law, not even performing a search over its possible outputs.

I am not yet convinced by this argument. Consider a computable approximation to Solomonoff induction, such as Levin search. Why would it "want" its predictions to be right any more than it "wants" them to be wrong? Superficially, such systems treat correct and incorrect predictions symmetrically.
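The symmetry point can be made concrete with a toy Bayesian sequence predictor. This is only an illustrative sketch, not Levin search itself: the hypothesis class (short periodic bit patterns with a 2^-length prior) and all names are my own stand-ins. Note that the update is pure conditioning; nothing in the code scores or steers future outcomes.

```python
from fractions import Fraction
from itertools import product

def hypotheses(max_len=4):
    """All bit patterns up to max_len; each generates an infinite periodic
    sequence by repetition.  The 2**-len prior weight is a toy stand-in
    for the length penalty in Solomonoff/Levin-style induction."""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits), Fraction(1, 2 ** n)

def predict_next(data, max_len=4):
    """Posterior-weighted probability that the next bit is '1'.

    The symmetry: a hypothesis is kept iff it reproduces the observed data
    exactly (likelihood 0 or 1).  The update contains no term that favours
    any particular future state -- it is just Bayes' rule, with no search
    over the predictor's own outputs."""
    mass_1 = total = Fraction(0)
    for pattern, prior in hypotheses(max_len):
        generated = pattern * (len(data) // len(pattern) + 2)
        if generated.startswith(data):       # likelihood is 1, else 0
            total += prior
            if generated[len(data)] == "1":
                mass_1 += prior
    return mass_1 / total if total else Fraction(1, 2)

p = predict_next("010101")
print(p, "predict:", "1" if p > Fraction(1, 2) else "0")
```

Correct and incorrect hypotheses are treated identically up to the data match: being wrong simply zeroes a hypothesis's posterior weight, and there is no channel by which the predictor could prefer that the world confirm it.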

Comment author: timtyler 12 March 2012 08:55:46PM 1 point

The original argument appears to have attracted no defenders or supporters. Perhaps that is because it is not very strong.