cousin_it comments on What's a "natural number"? - Less Wrong

8 Post author: cousin_it 07 October 2010 01:34PM




Comment author: komponisto 08 October 2010 10:21:29PM 3 points

> Can the AI recognize such situations and say "no way, this formal system doesn't seem to describe my regular integers"?

It need not -- asking whether a formal system "describes my regular integers" is a disguised query for whether it satisfies some set of properties that happen to be useful. All the AI needs to be able to do is evaluate how effectively different models describe whatever it's trying to use them to describe.

> Unfortunately, if we have an arithmetical statement that we can neither prove nor disprove so far, your idea would have us believe that it's true and that its negation is also true. That doesn't look like correct Bayesian reasoning to me!

I don't see why not. It's not that we would believe the statement and its negation are both true; rather, we would believe that the statement is true with probability x and false with probability 1-x, as usual.
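To make the point concrete, here is a minimal sketch of the credence assignment being described (the value of x is a hypothetical placeholder, not anything computed from the statement itself): we hold a single probability for the undecidable statement S, and the complementary probability for its negation, so we never assert both as true.

```python
# Hedged sketch: credences over an undecidable arithmetic statement S.
# x is a made-up placeholder credence, not derived from S.
x = 0.3

p_S = x          # degree of belief that S is true
p_not_S = 1 - x  # degree of belief that S is false

# Coherent Bayesian credences sum to 1; we never claim P(S) = P(not S) = 1.
assert abs(p_S + p_not_S - 1.0) < 1e-12
```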

Comment author: cousin_it 12 October 2010 02:01:16PM 0 points

komponisto, did you leave my question unanswered because you don't know the answer, or because you thought the question stupid and decided to bail out? If you can dissolve my confusion, please do.

Comment author: komponisto 12 October 2010 04:42:30PM 1 point

Sorry! I didn't have an answer immediately, but thought I might come up with one after a day or two. Unfortunately, by that time, I had forgotten about the question!

Anyway, the way I'd approach it is to ask what is wrong, from our point of view, with a given nonstandard theory.

Actually, I just thought of something while writing this comment. Take your example of adding a "PA is inconsistent" axiom to PA. Yes, we could add such an axiom, but why bother? What use do we get from this new system that we didn't already get from PA? If the answer is "nothing", then we can invoke a simplicity criterion. On the other hand, if there is some situation where this system is actually convenient, then there is indeed nothing "wrong" with it, and we wouldn't want an AI to think that there was.
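The trade-off in the paragraph above can be sketched as a toy score (every number here is an invented placeholder, not a real measure of either theory): if the extended system buys us no new uses over PA, its extra axiom is pure complexity cost and the simplicity criterion favors PA.

```python
# Toy illustration of the simplicity criterion: prefer the theory with
# the better usefulness-to-complexity trade-off. All numbers are
# hypothetical placeholders.
def preference_score(usefulness, complexity):
    # More usefulness helps; every extra axiom adds complexity cost.
    return usefulness - complexity

# By assumption here, PA + "PA is inconsistent" gives us nothing PA
# doesn't, but carries one more axiom.
pa = preference_score(usefulness=10, complexity=5)
pa_plus_inconsistency = preference_score(usefulness=10, complexity=6)

print(pa > pa_plus_inconsistency)  # prints True: the simpler theory wins
```

If the extended system were genuinely convenient somewhere, its usefulness term would rise, and the comparison could go the other way, which matches the comment's caveat.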

(Edit: I'll try to make sure I reply more quickly next time.)