anonymous1 comments on Second-Order Logic: The Controversy - Less Wrong
One of the participants in this dialogue seems too concerned with pinning down models uniquely, and also seems too convinced he knows what model he's in. Suppose we live in a simulation run by superbeings who have access to oracles that can tell them when Turing machines are attempting to find contradictions in PA. Whenever they detect that something in the simulation is attempting to find contradictions in PA, that part of the simulation mysteriously stops working after, say, the billionth or trillionth step. Then running such Turing machines can't tell us whether we live in a universe where PA is consistent or not.
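To make the bound concrete, here is a minimal Python sketch of what any physical consistency test looks like from inside such a simulation. `is_proof_of_contradiction` is a hypothetical stand-in for a real PA proof checker (here a stub that always answers no), and `STEP_BUDGET` plays the role of the superbeings' cutoff; a run that exhausts its budget tells you nothing about what an unbounded search would find.

```python
from itertools import count, product

def is_proof_of_contradiction(candidate: str) -> bool:
    """Hypothetical stand-in for a real proof checker: return True iff
    `candidate` encodes a valid PA derivation of a contradiction such
    as 0 = 1. A real checker would parse and verify a formal
    derivation; this stub always says no."""
    return False

ALPHABET = "01"      # toy encoding alphabet for candidate proofs
STEP_BUDGET = 10**6  # the cutoff imposed from outside the simulation

def search_for_contradiction(budget: int = STEP_BUDGET) -> str:
    """Enumerate candidate proofs in length order, checking each one,
    but give up after `budget` candidates. A bounded run can only ever
    report 'none found so far' -- which is exactly why it cannot
    distinguish a consistent PA from one whose shortest proof of a
    contradiction lies beyond the budget."""
    steps = 0
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            if steps >= budget:
                return f"no contradiction found within {budget} steps"
            steps += 1
            if is_proof_of_contradiction("".join(chars)):
                return "PA is inconsistent!"

print(search_for_contradiction())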
I also wish both participants in the dialogue would take ultrafinitism more seriously. It is not as wacky as it sounds, and it seems like a good idea to be conservative about such things when designing AI.
Edit: The link includes an ultrafinitist fable that might be useful, or at least amusing.
I like your proposal, but why not just standard finitism? What is your objection to primitive recursive arithmetic?
As a side note, I personally consider a theory "safe" if it fits with my intuitions and can be proved consistent using primitive recursive arithmetic and transfinite induction up to an ordinal I can picture. So I think of Peano arithmetic as safe, since I can picture epsilon-zero.
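To spell out what "picturing epsilon-zero" buys you here (a standard fact, not from the comment itself): epsilon-zero is the least fixed point of ordinal exponentiation base omega, and Gentzen's consistency proof is exactly primitive recursive arithmetic plus transfinite induction up to that ordinal:

```latex
\varepsilon_0 = \sup\{\omega,\ \omega^{\omega},\ \omega^{\omega^{\omega}},\ \dots\}
  \quad\text{(the least ordinal $\alpha$ with $\omega^{\alpha} = \alpha$)},
\qquad
\mathrm{PRA} + \mathrm{TI}(\varepsilon_0) \vdash \mathrm{Con}(\mathrm{PA}).
```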
This isn't my objection personally, but a sufficiently committed ultrafinitist rejects the principle of induction.
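Concretely, the schema being rejected is first-order induction, one instance for each formula $\varphi$:

```latex
\bigl(\varphi(0) \land \forall n\,(\varphi(n) \rightarrow \varphi(n+1))\bigr) \rightarrow \forall n\,\varphi(n)
```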
I think it's more that an ultrafinitist claims not to know that successor is a total function - you could still induct for as long as succession lasts. Though this is me guessing, not something I've read.
Either that, or they claim that certain numbers, such as 3^^^3, cannot be reached by any number of iterations of the successor function starting from 0. That claim is disprovable in principle, but without induction or the cut rule the disproof is too long for any computer in the universe to carry out within a human lifespan, or even the Earth's.
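To convey the scale, here is a small Python sketch (my own illustration, not anything from the thread) of Knuth's up-arrow notation, in which 3^^^3 is 3↑↑↑3: a power tower of 3s of height 7,625,597,484,987, far beyond anything the recursion below could ever evaluate.

```python
def up_arrow(a: int, n: int, b: int) -> int:
    """Knuth's up-arrow: a ↑^n b. One arrow (n=1) is exponentiation,
    and each extra arrow iterates the previous operation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1  # standard base case: a ↑^n 0 = 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3^3 = 27
print(up_arrow(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
# up_arrow(3, 3, 3) is 3^^^3: a tower of 3s of height 7625597484987.
# No physically realizable computer could evaluate it, which is the
# point about proof length made above.
```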