endoself comments on A potential problem with using Solomonoff induction as a prior - Less Wrong

Post author: JoshuaZ, 07 April 2011 07:27PM (13 points)




Comment author: endoself 09 April 2011 12:47:41AM 3 points

When we do enough testing, K could get arbitrarily large and, therefore, arbitrarily complex. So, unless you, like Solomonoff induction, essentially assign infinite complexity to anything uncomputable, there exists a K such that "halts within K steps" is more complex than "never halts".
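A minimal sketch of the counting argument behind this (the function and the binary encoding are illustrative assumptions, not from the comment): just naming K takes about log2(K) bits, so the description length of "halts within K steps" grows without bound as K does, while "never halts" has some fixed description length.

```python
def bits_to_name(K: int) -> int:
    """Bits needed to write K in binary: a rough lower bound (up to an
    additive constant) on the description length of 'halts within K steps'."""
    return K.bit_length()

# As testing pushes K upward, the bound grows without limit, eventually
# exceeding any fixed description length for "never halts":
for K in (10, 10**6, 10**100):
    print(K, "->", bits_to_name(K), "bits")
```

So for a prior that penalizes description length, "halts within K" must eventually become less probable than "never halts" for large enough K.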

Comment author: timtyler 09 April 2011 12:21:42PM 0 points

At this stage, an uncomputable universe is an extraordinary hypothesis, one that would require extraordinary evidence to support. A finite string of numbers seems like poor evidence of uncomputability; it is much easier to believe that the string is a computable approximation. So I am inclined to side with Solomonoff induction here.

The uncomputable is always infinite. Our observations are always finite. I don't see how making such a leap could be justified.

Comment author: ciphergoth 11 April 2011 07:30:13PM 3 points

So suppose a simple setup you can do at home with some liquid nitrogen, a laser, and a kumquat appeared to allow direct hypercomputation (it could instantaneously determine whether a particular Turing machine halted), and you were able to test this against all sorts of extraordinary examples. Would you still never conclude that it was a hypercomputation engine, instead building ever-more-complex computable models of what it was doing?

Comment author: Cyan 11 April 2011 08:09:33PM 1 point

I would never conclude that, because the proposition that I'm in a simulation and the DLotM are messing with me is about as plausible on the evidence.

Comment author: timtyler 15 April 2011 02:04:46PM 0 points

A hotline to a computable halt-finding machine seems much more plausible than something uncomputable, yes. We have no idea how something uncomputable could possibly work. You should not give weight to the uncomputable while computable approximations exist.

Comment author: ciphergoth 15 April 2011 03:19:24PM 1 point

So how many cycles of

  • hypothesize a computable halt-tester
  • figure out from that an example where it makes the wrong call
  • test it, find that the experiment makes the correct call
  • hypothesize a new computable halt-tester that makes the correct call on this example

would you have to go through before concluding the universe was not computable?

I have to confess that I don't really see what's so special about the bottom of the computability hierarchy that you would be so certain that's where the universe is.
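The second step of that cycle is the classic diagonal construction. A hedged sketch, where `halt_tester` and the other names are hypothetical, purely to illustrate why any total, computable halt-tester must make a wrong call somewhere:

```python
def diagonal_counterexample(halt_tester):
    """Given any purported total, computable halt-tester (a function from
    programs to True/False), build a program it must misjudge."""
    def spite():
        # Ask the tester about this very program, then do the opposite:
        if halt_tester(spite):
            while True:      # tester says "halts": loop forever
                pass
        # tester says "never halts": halt immediately

    return spite

# If the tester predicts non-halting, the constructed program halts at once,
# contradicting the prediction:
prog = diagonal_counterexample(lambda p: False)
prog()  # returns immediately
```

Whatever computable tester is hypothesized in step one, this construction supplies the counterexample needed for step two.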

Comment author: endoself 09 April 2011 03:49:28PM 2 points

I'm not talking about "this stage"; I'm talking about what would happen in principle, after we make enough observations.

The uncomputable is always infinite. Our observations are always finite. I don't see how making such a leap could be justified.

Are you saying that you assign literally 0 probability to an uncomputable universe? How could you possibly be so sure? Remember that there are infinitely many possible computable universes, so there must be computable universes with probability less than epsilon, for every epsilon > 0.
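That last pigeonhole point can be made concrete. A toy sketch, assuming (purely for illustration) a prior that gives weight 2^-k to a hypothesis of description length k bits:

```python
def prior(k: int) -> float:
    """Illustrative prior weight for a hypothesis of description length k bits."""
    return 2.0 ** -k

# Since the weights must sum to at most 1, for any epsilon > 0 only
# finitely many hypotheses can have probability >= epsilon:
epsilon = 1e-6
heavy = [k for k in range(1, 1000) if prior(k) >= epsilon]
print(heavy)  # only a finite initial segment of lengths clears the bar
```

So even a believer in a fully computable world must already assign below-epsilon probability to all but finitely many computable universes; the question is only whether uncomputable ones get exactly zero.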