komponisto comments on What's a "natural number"? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (17)
Thanks! Your comment prompts me to reformulate my original question this way: given a formal system, how can the AI determine that it talks about "the" natural numbers? For example, we can add to PA an axiom that rules out its standard model but leaves many nonstandard ones. The simplest example would be to add an axiom asserting the inconsistency of PA: the resulting theory will (counterintuitively) be just as consistent as PA itself, but quite weird. It will have many interesting provable theorems that are nevertheless common-sensically "false", e.g. "PA proves 1+1=3". Can the AI recognize such situations and say "no way, this formal system doesn't seem to describe my regular integers"?
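To spell out why this example works (standard reasoning via Gödel's second incompleteness theorem; this derivation is my addition, not part of the original comment):

```latex
% If PA is consistent, G\"odel's second incompleteness theorem says
% PA cannot prove its own consistency, so adding the negation of
% Con(PA) cannot introduce a contradiction:
\mathrm{PA} \text{ consistent} \;\Longrightarrow\; \mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA})
\;\Longrightarrow\; T := \mathrm{PA} + \neg\mathrm{Con}(\mathrm{PA}) \text{ is consistent.}
% Yet T asserts that PA proves a contradiction, and PA verifies the
% formalized ex falso principle, so T proves that PA proves anything:
T \vdash \mathrm{Prov}_{\mathrm{PA}}(\ulcorner 0 = 1 \urcorner)
\;\Longrightarrow\; T \vdash \mathrm{Prov}_{\mathrm{PA}}(\ulcorner 1 + 1 = 3 \urcorner).
% In any model of T, the number coding the alleged PA-proof of 1+1=3
% is nonstandard, which is why the theorem is common-sensically false.
```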
About the consistency of ZFC: it's certainly a neat idea to conclude that an arithmetical statement is "probably true" if you can't find a disproof for a long time. Unfortunately, if we have an arithmetical statement that we can so far neither prove nor disprove, your idea would have us believe both that it is true and that its negation is true. That doesn't look like correct Bayesian reasoning to me!
It need not -- asking whether a formal system "describes my regular integers" is a disguised query for whether it satisfies some set of properties that happen to be useful. All the AI needs to be able to do is evaluate how effectively different models describe whatever it's trying to use them to describe.
I don't see why not. It's not that we would believe the statement and its negation are both true; rather, we would believe that the statement is true with probability x and false with probability 1-x, as usual.
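The coherence claim can be illustrated with a toy Bayesian model (my own sketch, not from the thread; the disproof-search likelihood model and the parameters `p` and `t` are assumptions made up for illustration):

```python
# Toy model: S is an arithmetical statement. A t-step search for a
# disproof of S fails with probability 1 if S is true (a sound system
# has no disproof of a truth to find), and (1 - p)**t if S is false
# (each step independently finds the disproof with probability p).

def posterior_true(prior, t, p=0.01):
    """P(S | no disproof of S found in t steps), by Bayes' rule."""
    like_true = 1.0              # true S: failure to disprove is certain
    like_false = (1 - p) ** t    # false S: its disproof was just missed
    return prior * like_true / (prior * like_true + (1 - prior) * like_false)

def posterior_both(prior, t, p=0.01):
    """P(S | no disproof of S AND no disproof of not-S in t steps)."""
    like_if_true = 1.0 * (1 - p) ** t    # not-S's disproof exists but was missed
    like_if_false = (1 - p) ** t * 1.0   # symmetric case
    return (prior * like_if_true /
            (prior * like_if_true + (1 - prior) * like_if_false))

x = posterior_true(0.5, 200)   # failing to disprove S alone raises P(S)
y = posterior_both(0.5, 200)   # failing to disprove both: evidence cancels
```

In this toy model, surviving disproof attempts raises the probability of S alone, but when both S and its negation survive, the symmetric likelihoods cancel and we are left with P(S) = x and P(not-S) = 1 - x, exactly as the comment says: no incoherence.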
What are these properties?
komponisto, did you leave my question unanswered because you don't know the answer, or because you thought the question stupid and decided to bail out? If you can dissolve my confusion, please do.
Sorry! I didn't have an answer immediately, but thought I might come up with one after a day or two. Unfortunately, by that time, I had forgotten about the question!
Anyway, the way I'd approach it is to ask what is wrong, from our point of view, with a given nonstandard theory.
Actually, I just thought of something while writing this comment. Take your example of adding a "PA is inconsistent" axiom to PA. Yes, we could add such an axiom, but why bother? What use do we get from this new system that we didn't already get from PA? If the answer is "nothing", then we can invoke a simplicity criterion. On the other hand, if there is some situation where this system is actually convenient, then there is indeed nothing "wrong" with it, and we wouldn't want an AI to think that there was.
(Edit: I'll try to make sure I reply more quickly next time.)