The Open Thread from the beginning of the month has more than 500 comments – new Open Thread comments may be made here.
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.
I think that Eliezer's post, Complexity and Intelligence, is really germane to your query.
Here's a thought experiment, just for fun:
Let's say, for simplicity's sake, that your mind (and environment) is currently being run on some Turing machine T, which had initial state S. What if you considered the sentence G, a Gödel-encoded statement saying "if you run T on S, the resulting computation will never contain an instance of humpolec rationally concluding that G is a theorem"? (Of course, specifying that predicate would be a beastly problem, but in principle it's a finite mathematical specification.)
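A sentence with this self-referential shape is guaranteed to exist by the diagonal lemma. As a rough sketch, writing Concl(x) for the hypothetical predicate "at some step of running T on S, humpolec rationally concludes that the sentence with Gödel number x is a theorem":

```latex
% Hypothetical predicate, assumed for this sketch:
%   Concl(x) := "at some step of running T on S, humpolec rationally
%                concludes that the sentence with Goedel number x is a theorem."
% The diagonal lemma then yields a sentence G with
\[
  G \;\leftrightarrow\; \neg\,\mathrm{Concl}\bigl(\ulcorner G \urcorner\bigr)
\]
```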
You would therefore be unable to rationally conclude that G is a theorem: if you ever did, the run of T on S would contain exactly such an instance, which would make G false and your conclusion unsound. And precisely because you never do, G is a true, finitely specifiable mathematical statement.
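A sketch of that argument, assuming your rational conclusions about theoremhood are sound (i.e. Concl(⌜G⌝) implies G):

```latex
\begin{align*}
  & G \leftrightarrow \neg\,\mathrm{Concl}(\ulcorner G \urcorner)
      && \text{(construction of } G \text{)}\\
  & \mathrm{Concl}(\ulcorner G \urcorner) \rightarrow G
      && \text{(assumed soundness of your conclusions)}\\
  & \mathrm{Concl}(\ulcorner G \urcorner) \rightarrow \neg\,\mathrm{Concl}(\ulcorner G \urcorner)
      && \text{(combining the two lines above)}\\
  & \neg\,\mathrm{Concl}(\ulcorner G \urcorner)
      && \text{(so the conclusion is never rationally reached)}\\
  & G
      && \text{(and hence } G \text{ is true)}
\end{align*}
```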
It's up to you, of course, which bullets you choose to bite in response to this.