Wei_Dai comments on A Request for Open Problems - Less Wrong

Post author: MrHen 08 May 2009 01:33PM | 25 points




Comment author: Wei_Dai 09 May 2009 01:25:45AM 2 points

I suspect the answer is no, at least not the kind of formal languages that have been suggested so far. The problem is this: as soon as you define a formal language, I can say "the lexicographically first object which can't be described in less than a million bits in your language". Given the uniqueness of this object, why should it be a priori as unlikely as a random million-bit string?
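The diagonal object described above can be made concrete in a toy setting. The sketch below (an illustration, not part of the original thread) assumes a deliberately simple description language that is *total*, so the search always terminates; the point is that the code finding the object is itself a short description of it, which is exactly the Berry-paradox tension.

```python
from itertools import product

def all_bitstrings(max_len):
    """Yield every bit string of length 0 .. max_len-1 (i.e. under max_len bits)."""
    for length in range(max_len):
        for bits in product("01", repeat=length):
            yield "".join(bits)

def berry_object(n, interpret):
    """Lexicographically (shortlex) first bit string not described by any
    description of fewer than n bits, under a total interpreter."""
    described = {interpret(d) for d in all_bitstrings(n)}
    length = 0
    while True:
        for bits in product("01", repeat=length):
            candidate = "".join(bits)
            if candidate not in described:
                return candidate
        length += 1

# Toy total description language (an assumption of this sketch):
# each description simply denotes itself, i.e. the identity interpreter.
first_undescribed = berry_object(3, lambda d: d)
# Descriptions under 3 bits are "", "0", "1", "00", "01", "10", "11",
# so the first string they all miss is "000".
```

With a universal machine in place of the toy interpreter this search no longer terminates, which is the issue the next comment raises.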

Comment author: Peter_de_Blanc 10 May 2009 04:22:33AM 4 points

Note that in general, this sort of specification is uncomputable. To find the lexicographically first object which can't be described in less than a million bits, we would have to make a list of all objects which can be described in less than a million bits (and return the first thing that's not on the list). But we don't know if our UTM will halt for any given string with less than a million bits, so we can never know if our list is complete.

Comment author: Wei_Dai 10 May 2009 10:10:03AM 3 points

Yes, and that's sort of intentional. I was trying to come up with a mathematical model of an agent that can deal with uncomputable physics. The physics of our universe seems likely to be computable, but there is no a priori reason to assume that it must be. We may eventually discover a law of physics that's not computable, or find out that we are in a simulation running inside a larger universe that has uncomputable physics. Agents using UTM-based priors can't deal with these scenarios.

So I tried to find a "better", i.e., more expressive, language for describing objects, but then realized that any fixed formal language has a similar problem. Here's my current idea for solving this: make the language extensible instead of fixed. That is, define a base language, and a procedure for extending the language. Then, when the agent encounters some object that can't be described concisely using his current language, he recursively extends it until a short description is possible. What the extension procedure should be is still unclear.
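A minimal sketch of the extensible-language idea, under a deliberately crude model (an assumption of this sketch, not the thread's): a "language" is just a list of nameable primitives, a description is an index into that list, and extension appends one primitive. The real open question, what the extension procedure should be, is untouched by this toy.

```python
import math

def code_length(language, obj):
    """Bits needed to name obj as a primitive of `language` (a list of
    objects), or None if the language has no primitive for it yet."""
    if obj not in language:
        return None
    return max(1, math.ceil(math.log2(len(language))))

def extend(language, obj):
    """The extension step, as a toy: when an object has no concise
    description, grow the language by one primitive so that it does."""
    return language + [obj]

base = ["0", "1"]
novel = "some object with no short description"
assert code_length(base, novel) is None        # base language can't name it
extended = extend(base, novel)
assert code_length(extended, novel) == 2       # now nameable in 2 bits
```

Of course, a language that may be extended by fiat reintroduces the original problem one level up: the agent's *extension procedure* is itself a fixed formal object, which is the regress the surrounding discussion circles.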

Comment author: Toby_Ord 10 May 2009 02:42:49PM 3 points

I agree that there are very interesting questions here. We have quite natural ways of describing uncomputable functions very far up the arithmetical hierarchy, and it seems that they can be described in some kind of recursive language even if the things they describe are not recursive (using recursive in the recursion-theory sense both times). Turing tried something like this in Systems of Logic Based on Ordinals (Turing, 1939), but that was with formal logic and systems where you repeatedly add the Gödel sentence of a system into the system as an axiom, repeating this into the transfinite. A similar thing could be done describing levels of computability transfinitely far up the arithmetical hierarchy, using recursively represented ordinals to index them. However, then people like you and me will want to use certain well-defined but non-recursive ordinals to do the indexing, and it seems to degenerate in the standard kind of way, just a lot further up the hierarchy than before.

Comment author: Wei_Dai 10 May 2009 03:48:24PM 1 point

And then there are objects that are completely outside the arithmetical hierarchy, and we probably shouldn't assign zero prior to those either. Things like large cardinals, perhaps.

Another mystery is, why did evolution create minds capable of thinking about these issues, given that agents equipped with a fixed UTM-based prior would have done perfectly fine in our place, at least up to now?

Comment author: Vladimir_Nesov 10 May 2009 11:03:44AM 1 point

We may eventually discover a law of physics that's not computable, or find out that we are in a simulation running inside a larger universe that has uncomputable physics. Agents using UTM-based priors can't deal with these scenarios.

Then how can you deal with these scenarios? Did the idiot God make you better equipped for this task, Oh uncomputable ape-brain?

Comment author: Wei_Dai 10 May 2009 02:50:16PM 1 point

The idea of agents using UTM-based priors is a human invention, and therefore subject to human error. I'm not claiming to have an uncomputable brain, just that I've found such an error.

For a specific example of how human beings might deal with such scenarios, compared to agents using UTM-based priors, see "is induction unformalizable?".

Comment author: Vladimir_Nesov 10 May 2009 03:25:01PM 1 point

A model of the environment deals in observations and behaviors, not statements about "uncomputability" and such. No observation should be left out or declared impossible. If you, as a human, decide to trust in something you label "halting oracle", that's your decision, and it's a decision you'd want any trusted AI to carry through as well.

I suspect that the roots of this confusion are something not unlike the mind projection fallacy, with magical properties attributed to models, but I'm not competent to discuss domain-specific aspects of this question.

Comment author: ciphergoth 10 May 2009 02:22:30PM 1 point

We aren't really Bayesian reasoning machines at all, and it isn't really accurate to speak of us having a prior. We choose a prior in order to use Bayesian reasoning to analyze a situation, and we seek to bend our natural reasoning to a Bayesian template in order to improve its accuracy, but we cannot wholly succeed in doing so. So the problem you raise should worry someone building AGI, but it's not realistic to imagine a human agent becoming so Bayesian that they swallow the Solomonoff prior whole and are literally unable to contemplate super-Turing universes.

I don't think it's unreasonable, therefore, to adopt the Solomonoff prior as a useful model to aid reasoning and discussion, and rely on our human ability to make and adopt a new, super-Turing model if some more general prior would have favoured it.

Comment author: timtyler 10 May 2009 09:03:27AM 1 point

Not a serious problem - just consider objects which can't be described in less than a million bits after a million timesteps instead.
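The resource-bounded variant is indeed computable, since a step bound replaces the halting question. A sketch, with a toy step-bounded interpreter standing in for a real machine (the `run_for(p, t)` convention, returning `None` on timeout, is an assumption of this sketch):

```python
from itertools import product

def bitstrings_under(n):
    """All bit strings of fewer than n bits, shortest first."""
    for length in range(n):
        for bits in product("01", repeat=length):
            yield "".join(bits)

def bounded_berry(n, t, run_for):
    """First (shortlex) bit string that no description under n bits produces
    within t steps. `run_for(p, t)` returns p's output if it halts within
    t steps, else None -- so the whole search is computable."""
    described = {out for p in bitstrings_under(n)
                 if (out := run_for(p, t)) is not None}
    length = 0
    while True:
        for bits in product("01", repeat=length):
            s = "".join(bits)
            if s not in described:
                return s
        length += 1

# Toy interpreter: a description outputs itself, but needs one step per bit,
# so long descriptions miss the deadline t.
run_for = lambda p, t: p if len(p) <= t else None
# With n = 4 and t = 1, only "", "0", and "1" finish in time,
# so the answer is "00".
```

The trade-off is that the resulting object is only "hard to describe *quickly*": allow more timesteps and the diagonal moves, which is roughly the shape of the disagreement in the replies below.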

Comment author: Peter_de_Blanc 10 May 2009 01:00:21PM 0 points

If a problem is so serious that you have to change your prior, then I'd call that a serious problem.

Comment author: timtyler 10 May 2009 06:59:47PM 1 point

Did you understand why the problem is not serious? Your comment was nit-picking the example. Other similar examples make exactly the same point - but are not vulnerable to such nit-picking.

Comment author: timtyler 09 May 2009 08:16:17AM 1 point

It doesn't sound too serious - if all the candidate languages have the same problem.

It is hard to deny that some languages are better than others, and therefore that there is a set of "best" ones. The intent of my first question was more along the lines of: is one formulation of Occam's razor optimal for all the different situations in which Occam's razor is used, or should we be using different descriptive languages under different circumstances?