
jsteinhardt comments on Q&A with Michael Littman on risks from AI

Post author: XiXiDu 19 December 2011 09:51AM 15 points




Comment author: jsteinhardt 29 December 2011 12:24:18AM 2 points

I don't understand. I thought the point of Solomonoff induction is that it's within an additive constant of optimal, where the constant depends on the Kolmogorov complexity of the (computable) process generating the sequence being predicted.
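
(To be concrete, the bound I have in mind is the standard one, in my own rough notation: if \mu is the computable measure actually generating the sequence and M is the Solomonoff prior, the total expected excess log-loss is bounded by a constant depending on K(\mu):

    \sum_{t=1}^{\infty} \mathbb{E}_{\mu}\!\left[ \ln \frac{\mu(x_t \mid x_{<t})}{M(x_t \mid x_{<t})} \right] \le K(\mu) \ln 2

so M's predictions converge to \mu's, and the total lifetime penalty for not knowing the true program is at most K(\mu) \ln 2.)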

Comment author: timtyler 29 December 2011 03:27:07PM 0 points

Are you thinking of applying Solomonoff induction to the whole universe?!?

If so, that would be a very strange thing to try and do.

Normally you apply Solomonoff induction to some kind of sensory input stream (or a preprocessed abstraction from that stream).
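
To illustrate in miniature what "applying it to an input stream" means, here is a toy sketch of my own (the hypothesis class, the weighting, and every name below are made up for illustration; actual Solomonoff induction mixes over all programs and is incomputable):

    # Toy stand-in for Solomonoff induction over a binary input stream.
    # "Programs" are all deterministic binary Markov rules of small order,
    # each weighted by 2**-(rough description length); rules contradicted
    # by the observed stream drop out, and the survivors vote on the next bit.
    from itertools import product

    def markov_predictors(order):
        """Enumerate every deterministic binary Markov rule of this order."""
        contexts = list(product([0, 1], repeat=order))
        for outputs in product([0, 1], repeat=len(contexts)):
            yield dict(zip(contexts, outputs))

    def mixture_prediction(stream):
        """Prior-weighted probability that the next bit of `stream` is 1."""
        score = {0: 0.0, 1: 0.0}
        for order in (1, 2, 3):
            weight = 2.0 ** -(order + 2 ** order)  # crude 2**-(length) prior
            for rule in markov_predictors(order):
                # Deterministic rules have likelihood 0 or 1 on the data seen so far.
                consistent = all(
                    rule[tuple(stream[i - order:i])] == stream[i]
                    for i in range(order, len(stream))
                )
                if consistent and len(stream) >= order:
                    score[rule[tuple(stream[-order:])]] += weight
        total = score[0] + score[1]
        return score[1] / total if total else 0.5

    print(mixture_prediction([0, 1, 0, 1, 0, 1, 0]))  # 1.0: all survivors continue the alternation

Weighting each hypothesis by roughly 2^-(description length) and renormalizing over the survivors is the whole trick; the additive-constant optimality you mention falls out because the true generator holds some fixed share of that prior.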

Comment author: jsteinhardt 29 December 2011 04:03:06PM 0 points

Sure, but an AGI will presumably eventually observe a large portion of the universe (or at least our light cone), so the K-complexity of its input stream is on par with the K-complexity of the universe, right?
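
(The step I'm leaning on, spelled out in my own rough notation: a program that computes the universe, plus a pointer picking out the agent's input channel within it, computes the input stream, so

    K(\text{input stream}) \le K(\text{universe}) + K(\text{observer's location}) + O(1)

with the caveat that the pointer term need not be small.)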

Comment author: timtyler 29 December 2011 04:38:12PM 0 points

It seems doubtful. In multiverse models, the visible universe is peanuts. Also, the whole universe might be much larger than the visible universe will ever get before the universal heat death.

This is all far-future stuff. Why should we worry about it? Aren't there more pressing issues?