cousin_it comments on What's a "natural number"? - Less Wrong

Post author: cousin_it 07 October 2010 01:34PM


Comment author: cousin_it 08 October 2010 08:09:02AM (2 points)

Thanks! Your comment prompts me to reformulate my original question this way: given a formal system, how can the AI determine whether it talks about "the" natural numbers? For example, we can add to PA some axiom that rules out its standard model, but leaves many nonstandard ones. The simplest example would be to add the inconsistency of PA - the resulting theory will (counterintuitively) be just as consistent as PA, but quite weird. It will have many interesting provable theorems that are nevertheless common-sensically "false", e.g. "PA proves 1+1=3". Can the AI recognize such situations and say "no way, this formal system doesn't seem to describe my regular integers"?
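The consistency claim in the paragraph above is a standard consequence of Gödel's second incompleteness theorem; for reference, the two facts being used are (my gloss, not part of the original comment):

```latex
% If PA is consistent, it cannot prove Con(PA) (Goedel II), so adding the
% negation of Con(PA) cannot introduce a contradiction:
\mathrm{Con}(\mathrm{PA}) \;\Longrightarrow\;
  \mathrm{Con}\bigl(\mathrm{PA} + \neg\mathrm{Con}(\mathrm{PA})\bigr)
% Moreover, PA formalizes ex falso: from a coded proof of a contradiction one
% can construct a coded proof of anything, so for any sentence, in particular:
\mathrm{PA} \vdash \neg\mathrm{Con}(\mathrm{PA}) \rightarrow
  \mathrm{Prov}_{\mathrm{PA}}(\ulcorner 1+1=3 \urcorner)
% Hence PA + ~Con(PA) proves "PA proves 1+1=3" while also proving 1+1 != 3.
```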

About the consistency of ZFC: it's certainly a neat idea to conclude an arithmetical statement is "probably true" if you can't find a disproof for a long time. Unfortunately, if we have an arithmetical statement that we can neither prove nor disprove so far, your idea would have us believe that it's true and its negation is also true. That doesn't look like correct Bayesian reasoning to me!
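The objection above can be made concrete with a toy calculation (the numbers are illustrative, not from the comment): the naive rule scores a statement and its negation independently, so for a so-far-undecided statement the two credences sum to more than 1.

```python
# Toy sketch of the incoherence: "believe a statement if no disproof has been
# found for a long time" treats phi and not-phi symmetrically.

def naive_credence(found_disproof):
    # illustrative numbers, not from the source
    return 0.1 if found_disproof else 0.9

# For a statement that is independent so far, neither search finds a disproof:
p_phi = naive_credence(found_disproof=False)
p_not_phi = naive_credence(found_disproof=False)
print(p_phi + p_not_phi)  # 1.8 > 1: not a coherent probability assignment
```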

Comment author: komponisto 08 October 2010 10:21:29PM (3 points)

Can the AI recognize such situations and say "no way, this formal system doesn't seem to describe my regular integers"?

It need not -- asking whether a formal system "describes my regular integers" is a disguised query for whether it satisfies some set of properties that happen to be useful. All the AI needs to be able to do is evaluate how effectively different models describe whatever it's trying to use them to describe.

Unfortunately, if we have an arithmetical statement that we can neither prove nor disprove so far, your idea would have us believe that it's true and its negation is also true. That doesn't look like correct Bayesian reasoning to me!

I don't see why not. It's not that we would believe the statement and its negation are both true; rather, we would believe that the statement is true with probability x and false with probability 1-x, as usual.
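This reply can likewise be sketched numerically (the priors and likelihoods below are hypothetical, chosen only for illustration): a Bayesian treats "no disproof of phi found" and "no disproof of not-phi found" as evidence bearing on a single credence x, so the two credences sum to 1 by construction.

```python
# Hedged sketch: both pieces of search evidence update one number x = P(phi),
# and P(not-phi) is simply 1 - x, so no incoherence can arise.

def update(prior, p_evidence_if_true, p_evidence_if_false):
    # standard Bayes rule for a binary hypothesis
    num = prior * p_evidence_if_true
    return num / (num + (1 - prior) * p_evidence_if_false)

x = 0.5  # prior that the statement is true
# "no disproof of phi found" is more likely if phi is true (made-up numbers):
x = update(x, 0.99, 0.6)
# "no disproof of not-phi found" is more likely if phi is false, pushing x down:
x = update(x, 0.6, 0.99)
print(x, 1 - x)  # with symmetric evidence the updates cancel; sum is always 1
```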

Comment author: cousin_it 09 October 2010 05:31:36PM (2 points)

asking whether a formal system "describes my regular integers" is a disguised query for whether it satisfies some set of properties that happen to be useful

What are these properties?

Comment author: cousin_it 12 October 2010 02:01:16PM (0 points)

komponisto, did you leave my question unanswered because you don't know the answer, or because you thought the question stupid and decided to bail out? If you can dissolve my confusion, please do.

Comment author: komponisto 12 October 2010 04:42:30PM (1 point)

Sorry! I didn't have an answer immediately, but thought I might come up with one after a day or two. Unfortunately, by that time, I had forgotten about the question!

Anyway, the way I'd approach it is to ask what is wrong, from our point of view, with a given nonstandard theory.

Actually, I just thought of something while writing this comment. Take your example of adding a "PA is inconsistent" axiom to PA. Yes, we could add such an axiom, but why bother? What use do we get from this new system that we didn't already get from PA? If the answer is "nothing", then we can invoke a simplicity criterion. On the other hand, if there is some situation where this system is actually convenient, then there is indeed nothing "wrong" with it, and we wouldn't want an AI to think that there was.
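The criterion proposed above might be rendered as a toy dominance check (my own illustration, not komponisto's code; the axiom counts and "useful_for" sets are made-up stand-ins): discard any theory whose usefulness is fully covered by a strictly simpler one.

```python
# Toy simplicity criterion: keep a theory only if no strictly simpler theory
# is at least as useful for everything we want it for.

theories = {
    "PA":            {"axiom_count": 7, "useful_for": {"arithmetic"}},
    "PA + ~Con(PA)": {"axiom_count": 8, "useful_for": {"arithmetic"}},
}

def preferred(theories):
    keep = []
    for name, t in theories.items():
        # t is dominated if some strictly simpler theory covers its uses
        dominated = any(
            o["useful_for"] >= t["useful_for"]
            and o["axiom_count"] < t["axiom_count"]
            for other, o in theories.items()
            if other != name
        )
        if not dominated:
            keep.append(name)
    return keep

print(preferred(theories))  # ['PA']: here the extra axiom buys nothing
```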

(Edit: I'll try to make sure I reply more quickly next time.)

Comment author: Vladimir_M 08 October 2010 06:24:40PM (1 point)

cousin_it:

For example, we can add to PA some axiom that rules out its standard model, but leaves many nonstandard ones. The simplest example would be to add the inconsistency of PA - the resulting theory will (counterintuitively) be just as consistent as PA, but quite weird. [...] Can the AI recognize such situations and say "no way, this formal system doesn't seem to describe my regular integers"?

This seems to be a matter of some relatively straightforward human heuristics. A theory that is somehow directly "talking about itself" via its axioms looks weird and suspicious. Not to mention what this axiom would look like if you actually wrote it down alongside the standard PA axioms in the same format!

Note that when these heuristics don't apply, humans end up confused and in disagreement about what should be considered as "our regular" stuff when faced with various independent statements that look like potential axioms. (As with Euclid's fifth postulate, the continuum hypothesis, the axiom of choice, etc.)

It will have many interesting provable theorems that are nevertheless common-sensically "false", e.g. "PA proves 1+1=3".

More specifically, it will prove: "There exists a natural number that, when decoded according to your Goedelization scheme, yields a proof in PA that 1+1=3." However, it won't prove anything about that number that would actually allow you to write it down and see what that proof is. ("Writing down" a natural number essentially means expressing a theorem about a specific relation it has with another number that you use as the number system base.) This suggests another straightforward heuristic: if the theory asserts the existence of objects with interesting properties about which it refuses to say anything more that would enable us to study them, it's not "our regular" kind of thing.
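The heuristic above can be illustrated with a toy bounded search (a stand-in checker, not a real PA proof verifier; the encoding n = a + 10b + 100c is my own simplification): the theory asserts that some number codes a proof of "1+1=3", yet every standard number we can actually examine fails to, so the asserted witness must be nonstandard.

```python
# Toy illustration: decode each candidate number as a claimed "proof" of an
# addition fact and check it. No standard number codes a proof of 1+1=3.

def decodes_to_proof_of(n, claim):
    # toy decoding: n = a + 10*b + 100*c is read as the claim "a + b = c";
    # the "proof" is valid iff the arithmetic actually checks out
    a = n % 10
    b = (n // 10) % 10
    c = n // 100
    return (a, b, c) == claim and a + b == c

def search_for_proof(claim, bound):
    # return the first standard number decoding to a valid proof, if any
    return next((n for n in range(bound) if decodes_to_proof_of(n, claim)), None)

print(search_for_proof((1, 1, 2), 10_000))  # 211 encodes a valid "1+1=2"
print(search_for_proof((1, 1, 3), 10_000))  # None: no standard witness exists
```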

Comment author: pengvado 09 October 2010 02:38:35PM (1 point)

If the theory asserts the existence of objects with interesting properties about which it refuses to say anything more that would enable us to study them, it's not "our regular" kind of thing.

Does this heuristic say that Chaitin's constant isn't a regular real number?

Comment author: Vladimir_M 09 October 2010 04:56:29PM (1 point)

Good point. I should have said "bizarre properties" or something to that effect. Which of course leads us to the problem of what exactly qualifies as such. (There's certainly lots of seemingly bizarre stuff easily derivable in ordinary real analysis.) So perhaps I should discard the second part of my above comment.

But I think my main point still stands: when there is no obvious way to see what choice of axioms (pun unintended) is "normal" using some heuristics like these, humans are also unable to agree what theory is the "normal" one. Differentiating between "normal" theories and "pathological" ones like PA + ~Con(PA) is ultimately a matter of some such heuristics, not some very deep insight. That's my two cents, in any case.