Building on the very bad Gödel anti-AI argument (computers are formal systems and can't prove their own Gödel sentence, hence no AI), it occurred to me that you could make a strong case that humans could never recognise a human Gödel sentence. The argument goes like this:
- Humans have a meta-proof that all Gödel sentences are true.
- If humans could recognise a human Gödel sentence G as being a Gödel sentence, we would therefore prove it was true.
- This contradicts the definition of G, which humans should never be able to prove.
- Hence humans could never recognise that G was a human Gödel sentence.
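The steps above can be sketched semi-formally. This is only a sketch: Prov(·) here is an informal stand-in for whatever "human provability" turns out to mean, and the soundness assumption is doing the work.

```latex
% Fixed point (diagonal lemma): G asserts its own unprovability.
G \;\leftrightarrow\; \neg \mathrm{Prov}(\ulcorner G \urcorner)

% Suppose we recognised G as a human Gödel sentence. Combined with the
% meta-proof that all Gödel sentences are true, that recognition would
% itself constitute a proof of G:
\mathrm{Prov}(\ulcorner G \urcorner)

% But by the fixed point, G is true iff it is unprovable, so proving G
% gives us (by soundness) \neg \mathrm{Prov}(\ulcorner G \urcorner)
% as well -- a contradiction. Hence, granting consistency, we can never
% recognise G as our own Gödel sentence.
```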
Now, the more usual way of dealing with human Gödel sentences is to say that humans are inconsistent, but that the inconsistency doesn't blow up our reasoning system because we use something akin to relevance logic.
But, if we do assume humans are consistent (or can become consistent), then it does seem we will never knowingly encounter our own Gödel sentences. As to where this G could hide such that we could never find it: my guess would be somewhere among the larger ordinals, up where our understanding starts to get flaky.
Everyone seems to be taking the phrase "human Gödel sentence" (and, for that matter, "the Gödel sentence of a Turing machine") as if it's widely understood, so perhaps it's a piece of jargon I'm not familiar with. I know what the Gödel sentence of a computably enumerable theory is, which is the usual formulation. And I know how to get from a computably enumerable theory to the Turing machine which outputs the statements of that theory. But not every Turing machine is of this form, so I don't know what it means to talk about the Gödel sentence of an arbitrary Turing machine. For instance, what is the Gödel sentence of a universal Turing machine?
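For reference, the usual formulation being appealed to goes roughly as follows; it only applies to machines that enumerate a theory, which is exactly the gap being pointed out:

```latex
% For a computably enumerable theory T (extending enough arithmetic),
% there is a provability predicate Prov_T(x), and the diagonal lemma
% yields a sentence G_T with
T \vdash G_T \;\leftrightarrow\; \neg \mathrm{Prov}_T(\ulcorner G_T \urcorner)

% If a Turing machine M enumerates the theorems of T, "the Gödel
% sentence of M" can be read as G_T. But an arbitrary machine -- a
% universal machine, say -- need not enumerate any theory at all, so
% the phrase has no standard meaning in that case.
```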
Some posters seem to be taking the human Gödel sentence to mean something like the Gödel sentence of the collection of things that person will ever believe, but the collection of things a person will ever believe has absolutely no need to be consistent, since people can (and should!) sometimes change their minds.
(This is primarily an issue with the original anti-AI argument; I don't know how defenders of that argument clarify their definitions.)
Entirely agree.