If we can model the standard natural numbers, then it seems we're fine: a number that Gödel-encodes a proof actually corresponds to a proof, and we don't need to worry further about models.
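To make "a number Gödel-encoding a proof" concrete, here is a toy sketch of one standard numbering scheme (prime-power exponents); the symbol codes are arbitrary illustrations, not Gödel's actual assignment:

```python
def primes():
    """Yield primes 2, 3, 5, ... by trial division (fine for toy sizes)."""
    n = 2
    while True:
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def godel_encode(symbols):
    """Encode a sequence of positive symbol codes s_1, ..., s_k
    as the single number 2**s_1 * 3**s_2 * 5**s_3 * ... ."""
    n, gen = 1, primes()
    for s in symbols:
        n *= next(gen) ** s
    return n

def godel_decode(n):
    """Recover the symbol codes by reading off the prime exponents."""
    out, gen = [], primes()
    while n > 1:
        p, e = next(gen), 0
        while n % p == 0:
            n //= p
            e += 1
        out.append(e)
    return out

# A formula is a sequence of symbols, and a proof is a sequence of
# formulas, so proofs too can be coded as single (standard) numbers.
seq = [3, 1, 4, 1, 5]
assert godel_decode(godel_encode(seq)) == seq
```

The encoding is only faithful for *standard* numbers: in a nonstandard model there are "numbers" whose exponent sequences don't correspond to any finite string of symbols at all.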
If we can't pick out the standard natural numbers, we can't say that any Gödel sentence is true, even for very simple formal systems; all we can say is that it is unprovable from within that system.
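For concreteness (standard background, not specific to this post): for a sufficiently strong, consistent, recursively axiomatised theory $T$, the diagonal lemma yields a sentence $G_T$ with

```latex
T \vdash G_T \leftrightarrow \neg \mathrm{Prov}_T(\ulcorner G_T \urcorner)
```

If $T$ is consistent, no standard number codes a $T$-proof of $G_T$, so $G_T$ is true in the standard model $\mathbb{N}$. But a nonstandard model of $T$ can contain a nonstandard "proof code" for $G_T$, making $G_T$ false there. This is exactly why "$G_T$ is true" only makes sense once the standard model has been picked out.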
If my brain is a Turing machine, doesn't it pretty much follow that I can't pick out the standard model? How would I do that?
Building on the very bad Gödel anti-AI argument (computers are formal systems and so can't prove their own Gödel sentences, hence no AI), it occurred to me that you could make a strong case that humans could never recognise a human Gödel sentence. The argument goes like this:
Now, the more usual way of dealing with human Gödel sentences is to say that humans are inconsistent, but that the inconsistency doesn't blow up our reasoning system because we use something akin to relevance logic.
But, if we do assume humans are consistent (or can become consistent), then it does seem we will never knowingly encounter our own Gödel sentences. As to where this G could hide so that we could never find it? My guess would be somewhere among the larger ordinals, up where our understanding starts to get flaky.