
timtyler comments on Polarized gamma rays and manifest infinity - Less Wrong Discussion

Post author: rwallace 30 July 2011 06:56AM


Comments (50)


Comment author: timtyler 30 July 2011 08:36:12AM 0 points

It is worth noting that Solomonoff induction would do otherwise. SI is built on the assumption that the universe is computable, so it assigns a halting oracle a prior probability of zero — and therefore a posterior probability of zero after any finite amount of evidence.

Whether a box is a halting oracle isn't something you can determine or test. Solomonoff induction assigns probabilities only to observable outcomes, not to untestable propositions.
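For concreteness, here is a toy sketch of the prior being discussed (purely illustrative: the hypothesis names and bit-lengths below are made up, and the real construction sums over all programs on a universal machine). Each computable hypothesis is a program weighted by 2^-(description length); a true halting oracle corresponds to no program, so it never appears in the hypothesis space at all.

```python
# Toy sketch of a Solomonoff-style prior over COMPUTABLE hypotheses.
# Hypothetical hypothesis space: (name, description length in bits).
# These entries are invented for illustration only.
hypotheses = [
    ("always_halts", 10),      # e.g. "answer True for every input"
    ("halts_iff_even", 14),    # some simple computable rule
    ("big_lookup_table", 40),  # huge but finite table -- still a program
]

# Weight each program by 2^-length, then normalize.
total = sum(2.0 ** -bits for _, bits in hypotheses)
prior = {name: (2.0 ** -bits) / total for name, bits in hypotheses}

# A genuine halting oracle is not a program, so it is simply absent
# from the hypothesis space: its prior weight is zero.
oracle_prior = 0.0
```

Note that the big-but-finite lookup table is a perfectly legal hypothesis here; it just pays a heavy complexity penalty relative to short computable rules.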

Comment author: rwallace 30 July 2011 08:05:16PM 0 points

Well, we can observe what answer it gives for the next case we run it on, and the next, and the next. So the question remains: given that the box has passed every case we were able to test, do we expect it to continue giving the right answer on future cases?

Comment author: timtyler 30 July 2011 08:07:36PM 1 point

Right - and the answers Solomonoff induction would give for such questions look pretty reasonable to me.

Comment author: rwallace 30 July 2011 08:28:25PM 1 point

Remaining forever certain that the box can't really be a halting oracle, and that its successes thus far have been essentially luck, no matter how many successes accumulate? If so, you're the first human I've seen express that view. Or do you have a different interpretation of how to apply Solomonoff induction to this case?

Comment author: CronoDAS 30 July 2011 11:22:35PM 2 points

For any finite subset of Turing machines, there exists a program that acts as a halting oracle on that subset. For example, the alien box might be a Giant Look-Up Table that has the right answer for every Turing machine up to some really, really big size. (Would we be able to tell the difference between a true halting oracle and one that has an upper bound on the size of Turing machine it can analyze accurately?)
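A minimal sketch of the look-up-table point (everything here is a toy stand-in: the "machines" are iterated functions over a deliberately finite state space, so their halting problem is decidable by cycle detection — which is exactly why a real GLUT for arbitrary Turing machines could not be built this way):

```python
# Toy "halting oracle" as a finite look-up table.
# Machine i starts at value i and repeatedly applies step();
# it "halts" when the value reaches 0.

def step(x):
    # Hypothetical dynamics, chosen so a few machines halt and most loop.
    if x % 2 == 0:
        return x // 2
    return (3 * x + 1) % 100   # wrap to keep the state space finite

def halts(start):
    # Decidable here only because the state space is finite:
    # revisiting a state proves the machine loops forever.
    seen = set()
    x = start
    while x != 0:
        if x in seen:
            return False
        seen.add(x)
        x = step(x)
    return True

TABLE_BOUND = 50
# The "Giant Look-Up Table": precomputed answers for machines 0..49.
TABLE = {i: halts(i) for i in range(TABLE_BOUND)}

def oracle(i):
    # Indistinguishable from a true oracle -- but only inside the table.
    if i in TABLE:
        return TABLE[i]
    raise ValueError("machine outside the table: no answer")
```

Inside the table's bound the box answers instantly and correctly every time; only a query past the bound reveals the difference, which is the parenthetical question above.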

Comment author: timtyler 31 July 2011 10:25:44AM 1 point

Luck?!? A system that can apparently quickly and reliably tell if TMs halt would not be relying on luck.