Vladimir_Nesov comments on A Request for Open Problems - Less Wrong
Yes, and that's sort of intentional. I was trying to come up with a mathematical model of an agent that can deal with uncomputable physics. The physics of our universe seems likely to be computable, but there is no a priori reason to assume that it must be. We may eventually discover a law of physics that's not computable, or find out that we are in a simulation running inside a larger universe that has uncomputable physics. Agents using UTM-based priors can't deal with these scenarios.
So I tried to find a "better", i.e., more expressive, language for describing objects, but then realized that any fixed formal language has a similar problem. Here's my current idea for solving this: make the language extensible instead of fixed. That is, define a base language, and a procedure for extending the language. Then, when the agent encounters some object that can't be described concisely in its current language, it recursively extends the language until a short description becomes possible. What the extension procedure should be is still unclear.
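The extension loop described above can be sketched as a toy program. This is only an illustration under simplifying assumptions, not the commenter's actual proposal: descriptions here are just named prefixes of strings, and the extension rule (promote the first half of the object to a new primitive) is a stand-in for whatever the real extension procedure would be. All names are hypothetical.

```python
# Toy sketch of an extensible description language (illustrative only).
# An "object" is a string; the language is a dict of named primitives.
# Describing an object costs either a literal encoding, or the name of
# a matching primitive plus a literal tail.

def description_length(obj, primitives):
    """Length of the shortest available description of obj."""
    best = len(obj) + 1  # literal encoding: 1 marker + the raw string
    for name, value in primitives.items():
        if obj.startswith(value):
            # encode as: primitive name + literal remainder
            best = min(best, len(name) + (len(obj) - len(value)))
    return best

def describe(obj, primitives, budget):
    """Recursively extend the language until obj fits in budget.

    The extension rule here is deliberately crude: promote the first
    half of obj to a new named primitive and try again.
    """
    lang = dict(primitives)
    while description_length(obj, lang) > budget:
        half = obj[: len(obj) // 2]
        if not half or half in lang.values():
            break  # no further useful extension available
        lang[f"p{len(lang)}"] = half
    return lang, description_length(obj, lang)
```

For example, a highly repetitive 40-character string has no short description in the empty base language, but after one extension step it can be named cheaply; a short string needs no extension at all. The open question the comment raises is exactly the part this toy hard-codes: what the extension rule should be in general.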
Then how can you deal with these scenarios? Did the idiot God make you better equipped for this task, Oh uncomputable ape-brain?
The idea of agents using UTM-based priors is a human invention, and therefore subject to human error. I'm not claiming to have an uncomputable brain, just that I've found such an error.
For a specific example of how human beings might deal with such scenarios, compared to agents using UTM-based priors, see "is induction unformalizable?".
The model of the environment deals in observations and behaviors, not statements about "uncomputability" and such. No observation should be left out or declared impossible. If you, as a human, decide to trust in something you label "halting oracle", that's your decision, and it's a decision you'd want any trusted AI to carry through as well.
I suspect that the roots of this confusion are something not unlike the mind projection fallacy, with magical properties attributed to models, but I'm not competent to discuss the domain-specific aspects of this question.