
dlthomas comments on Q&A with Michael Littman on risks from AI - Less Wrong Discussion

Post author: XiXiDu, 19 December 2011 09:51AM


Comment author: dlthomas, 29 December 2011 12:24:33AM

The idea that Solomonoff induction has something to do with the Kolmogorov complexity of the universe seems very strange to me.

Wouldn't it put an upper bound on the complexity of any given piece, as you can describe it with "the universe, plus the location of what I care about"?

Edited to add: Ah, yes, but "the location of what I care about" has potentially a huge amount of complexity to it.

Comment author: timtyler, 29 December 2011 03:31:26PM

Wouldn't it put an upper bound on the complexity of any given piece, as you can describe it with "the universe, plus the location of what I care about"?

As you say, if the multiverse happens to have a small description, the address of an object in the multiverse can still get quite large...

...but yes, things we see might well have a maximum complexity - associated with the size and complexity of the universe.
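The bound being discussed can be sketched numerically. This is my own illustration, not anything from the thread: if the universe has a program of K bits, any object in it can be described by that program plus an address picking the object out, so K(object) ≤ K(universe) + address bits + O(1). The numbers below (a 1000-bit "theory of everything", 2^250 addressable locations) are made up purely to show that the address term can dominate a short universe program.

```python
import math

def address_bits(num_locations):
    """Bits needed to single out one location among num_locations."""
    return math.ceil(math.log2(num_locations))

# Hypothetical numbers: a compact universe program, a vast number of places.
universe_program_bits = 1000
num_locations = 2 ** 250

# Upper bound on the complexity of any one object, up to an O(1) constant:
# describe the universe, then say where to look.
bound = universe_program_bits + address_bits(num_locations)
print(bound)  # 1250 -- the address alone costs 250 bits here
```

The address grows only logarithmically in the number of locations, but for objects specified at fine grain (exact particle configurations, say) the effective "address" can itself carry a huge amount of complexity, which is dlthomas's edited-in point above.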

When dealing with practical approximations to Solomonoff induction this is "angels and pinheads" material, though. We neither know nor care about such things.

Comment author: dlthomas, 29 December 2011 04:48:51PM

Fair enough.