Building on the very bad Gödel anti-AI argument (computers are formal systems and so can't prove their own Gödel sentences, hence no AI), it occurred to me that you could make a strong case that humans could never recognise a human Gödel sentence. The argument goes like this:
- Humans have a meta-proof that all Gödel sentences are true.
- If humans could recognise a human Gödel sentence G as being a Gödel sentence, we would therefore prove it was true.
- This contradicts the definition of G, which, by construction, humans can never prove.
- Hence humans could never recognise that G was a human Gödel sentence.
Now, the more usual way of dealing with human Gödel sentences is to say that humans are inconsistent, but that the inconsistency doesn't blow up our reasoning system because we use something akin to relevance logic.
But, if we do assume humans are consistent (or can become consistent), then it does seem we will never knowingly encounter our own Gödel sentences. As to where this G could hide so that we could never find it: my guess would be somewhere in the larger ordinals, up where our understanding starts to get flaky.
> There is nothing like the Gödel theorem inside a finite world, in which we operate/live.
This assumes the universe is finite. But aside from that, the claim has two serious problems:
First, finiteness doesn't save you from undecidability.
Second, if in fact the world is finite we get even worse situations. Let for example f(n) be your favorite fast-growing computable function, say f(n)=A(n,n) where A is the Ackermann function. Consider a question like "does f(10)+2 have an even number of distinct prime factors?" It is likely then that this question is...
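To make the growth rate concrete, here is a minimal sketch of the two-argument Ackermann function A(m, n) used above (the standard recursive definition; the choice of Python is just for illustration):

```python
def ackermann(m, n):
    """Standard two-argument Ackermann function A(m, n)."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# Only tiny inputs are feasible: A(3, 3) = 61, but A(4, 2) already has
# 19,729 decimal digits, and f(10) = A(10, 10) is far beyond any
# physically realizable computation.
print(ackermann(3, 3))  # 61
```

That is the point of the example: f(10) is a perfectly well-defined finite number, yet no computation inside a finite world could ever answer a simple arithmetic question about it.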