We've discussed Edward Nelson's beliefs and work before. Now he claims to have a proof of a contradiction in Peano Arithmetic, which, if correct, would not be specific to PA but would carry over to much weaker systems. His approach is to construct a system Q_0^* which looks almost, but not quite, like a fragment of PA, and then to show that PA proves both this system's consistency and its inconsistency. I'm skeptical of the proof but haven't had time to look at it in detail. There seem to be two possible weak points in his approach.
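Schematically, the claimed structure is: PA ⊢ Con(Q_0^*) and PA ⊢ ¬Con(Q_0^*), which together would make PA itself inconsistent; and since (on Nelson's account) only a weak fragment of PA is needed to carry out both halves, the same contradiction would already arise in much weaker systems.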

First, he may be misapplying the Hilbert-Ackermann theorem; when that theorem applies is a highly technical and subtle question, and I don't know enough to comment on it in detail. Second, in trying to show by finitary methods that there is a contradiction in Q_0^*, he may have proven something closer to Q_0^* being omega-inconsistent. Right now, I'm extremely skeptical of this result.

If anyone were going to find an actual contradiction in PA or ZFC, it would probably be Nelson. There is some clearly interesting material here, such as using a formalization of the surprise examination/unexpected hanging paradox to get a new proof of Gödel's Second Incompleteness Theorem. The exact conditions under which this version of Gödel's theorem applies may differ from the conditions under which the standard theorem can be proven.

[anonymous]12y220

Nelson has withdrawn his claim. Link.

5gwern12y
Is that a withdrawal of the entire approach or just one part of it?

On the FOM list, he writes:

Terence Tao, at http://golem.ph.utexas.edu/category/2011/09/ and independently Daniel Tausk (private communication) have found an irreparable error in my outline. (...)

(...) I withdraw my claim.

The consistency of P remains an open problem.

4gwern12y
Thanks, that's very useful for http://predictionbook.com/predictions/3508
3[anonymous]12y
I am interested in how this event has influenced people's views of the consistency of arithmetic. For those who had a viewpoint on it prior to this: what would you have estimated before and after, and with what confidence? I will not state my own view for fear of damaging the results.
0JoshuaZ12y
It seems important in this context to distinguish between the consistency of different systems. Overall, my consistency estimate for PA has, if anything, gone up: if Edward Nelson tries really hard and fails to find a contradiction, that's more evidence that there isn't one. How much should it go up by? I'm not sure.

On the other hand, this may well push my estimate for the consistency of ZFC down. Although in this specific case Nelson's approach didn't work, it opens up new avenues for demonstrating inconsistency, and part of me could see this sort of thing being applied more successfully to ZFC (although that also seems unlikely). (Also note that there have been attempts by a much larger set of mathematicians in the last few years to find an inconsistency in ZF, and those have essentially failed.)

One thing this has also done is make me very aware, in a visceral fashion, that number theory and logic are not the same thing, even when one is talking about subsystems of PA. I already knew that, but perhaps hadn't fully emotionally processed it until I saw Nelson's outline and the subsequent discussions and had a lot of trouble following the details. The main impact of this is to suggest that when making logic-related claims (especially about consistency and independence/undecidability) I should probably not rate my own expertise as highly as I do. As a working mathematician, I almost certainly have more relevant expertise than a random individual, but I've probably been overestimating how much my expertise matters.

In both the case of ZF/ZFC and the case of PA, reducing my confidence means increasing the chance that they are inconsistent. But I'm not at all sure how much this should matter, so maybe it should leave everything alone. Overall I'd say around .995 consistency for PA and .99 consistency for ZF. Not much change from the old values.

More here, including a comment by Terence Tao.

[anonymous]13y130

Specifically, Tao's comment:

I have read through the outline. Even though it is too sketchy to count as a full proof, I think I can reconstruct enough of the argument to figure out where the error in reasoning is going to be. Basically, in order for Chaitin's theorem (10) to hold, the Kolmogorov complexity of the consistent theory T has to be less than l. But when one arithmetises (10) at a given rank and level on page 5, the complexity of the associated theory will depend on the complexity of that rank and level; because there are going to be more than 2^l ranks and levels involved in the iterative argument, at some point the complexity must exceed l, at which point Chaitin's theorem cannot be arithmetised for this value of l.

(One can try to outrun this issue by arithmetising using the full strength of Q_0^*, rather than a restricted version of this language in which the rank and level are bounded; but then one would need the consistency of Q_0^* to be provable inside Q_0^*, which is not possible by the second incompleteness theorem.)

I suppose it is possible that this obstruction could be evaded by a suitably clever trick, but personally I think that the FTL neutrino confirmation will arrive first.

He's such a glorious mathematician. <3

He gave a more detailed comment on the n-Category Café.
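(A gloss on the counting step, in case it helps: there are fewer than 2^l binary strings of length less than l, so fewer than 2^l objects can have Kolmogorov complexity below l. If the iteration runs through more than 2^l distinct ranks and levels, the associated theories cannot all satisfy K(T) < l, which is exactly what Chaitin's theorem (10) requires.)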

0[anonymous]13y
This confuses me. Second incompleteness applies to extensions of PA. I'll bet Terry Tao knows this too. So what does this remark mean?

FTL neutrinos and now a proof of inconsistency in Peano Arithmetic? What next?

Years ago, I devised a little proof of the inconsistency of Newtonian mechanics.

http://critticall.com/alog/Antinomy_inside_mechanics.pdf

Can you spot the error?

6Richard_Kennaway13y
Vg vf pbeerpg gung ng g=0, rnpu cnegvpyr unf n yrsgjneqf irybpvgl. Ohg vg vf abg pbeerpg gung gur prager bs tenivgl ng g=0 unf n yrsgjneqf irybpvgl. Guvf vf orpnhfr gur flfgrz orunirf qvfpbagvahbhfyl ng g=0, naq fb gur irybpvgl bs gur prager bs tenivgl pnaabg or qrgrezvarq ol gnxvat n jrvtugrq nirentr bs gur vavgvny irybpvgvrf bs gur pbzcbaragf. Va zber qrgnvy: Yrg P(g) or gur cbfvgvba bs gur prager bs tenivgl ng gvzr g. Gur irybpvgl bs gur prager bs tenivgl ng g=0 vf gur yvzvg nf g -> 0 bs (P(g)-P(0))/g. Ng nal gvzr g, P(g) vf gur fhz bire a bs Z(a)C(a,g), jurer Z(a) vf gur znff bs gur agu cbvag naq C(a,g) vgf cbfvgvba. Gur nethzrag eryvrf ba pbzchgvat qP/qg nf gur fhz bire a bs Z(a)qC(a,g)/qg. Guvf fjncf gur beqre bs gjb yvzvgvat cebprffrf, fhzzngvba naq qvssreragvngvba, juvpu vf inyvq sbe jryy-orunirq fhzf, ohg abg sbe guvf bar. For intuition, imagine that there were just the first three particles in the sequence, and that the number 10 is replaced by something much larger. The two leftmost particles will rapidly oscillate about one another, while drifting slowly towards the rightmost. Now imagine the first four particles are present. The two leftmost will oscillate madly about each other, while that system as a whole oscillates rapidly about the third from the left (I'm conveniently ignoring what happens at the instants when particles meet and pass through each other), and the ensemble of the three leftmost drifts towards the rightmost. Repeat.
2AlexMennen13y
It's acceleration that matters, not velocity (the initial velocity of all points is zero, or at least that's how I thought of it). However, your argument does generalize nicely to acceleration, and could possibly be the correct resolution.
0Richard_Kennaway13y
Yes, stupid mistake of mine. Replace velocity by acceleration and position by velocity and the same argument works. The fallacy is in the vagrepunatr bs yvzvgf.
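To see the divergence numerically, here is a minimal sketch (mine, with assumptions: masses (1/2)^n at positions (1/10)^n and G = 1, matching the sums quoted later in this thread):

    # Net gravitational acceleration on each particle in a finite truncation
    # of the system: mass (1/2)^n at position (1/10)^n, G = 1 (assumed setup).
    def accelerations(N):
        xs = [10.0 ** -n for n in range(N)]  # positions
        ms = [2.0 ** -n for n in range(N)]   # masses
        accs = []
        for n in range(N):
            a = 0.0
            for m in range(N):
                if m != n:
                    d = xs[m] - xs[n]
                    a += ms[m] * (1.0 if d > 0 else -1.0) / d ** 2
            accs.append(a)
        return ms, accs

    for N in (3, 5, 7):
        ms, accs = accelerations(N)
        net_force = sum(m * a for m, a in zip(ms, accs))
        print(N, ["%.3g" % a for a in accs], "net: %.3g" % net_force)

For every finite N the net internal force is zero (up to rounding): all particles but the innermost accelerate left, and the innermost one balances the books with an enormous rightward acceleration that grows without bound as N increases. So the termwise limit ("every particle accelerates left") and the limit of the sums ("the center of mass stays put") disagree, which is the interchange-of-limits failure described above.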
3Alejandro112y
Similar, for those who enjoyed discussing this problem: Did you know that Newtonian mechanics is indeterministic?
2Sniffnoy12y
These equations don't make sense dimensionally. Are there supposed to be constants of proportionality that aren't being mentioned? Are they using the convention c=1? Well, I doubt it's relevant (scaling things shouldn't change the result), but...

Edit: Also, perhaps I just don't know enough differential equations, but it's not obvious to me that a curve such as he describes exists. I expect it does; it's easy enough to write down a differential equation for the height, which will give you a curve that makes sense when r > 0, but it's not obvious to me that everything still works when we allow r = 0.
0Alejandro112y
That is my guess. The simplest way IMO would be to replace the g in eq. 1 by a constant c with units of distance^(-1/2). The differential equation becomes r'' = g c r^(1/2), which works dimensionally. The nontrivial solution (eq. 4) is correct with an added (c g)^2 in front. I'm not sure what you mean here. What could be wrong in principle with a curve h = c r^(3/2) describing the shape of a dome, even at r = 0?
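One can check that guess mechanically; here's a small SymPy sketch (assuming the corrected nontrivial solution really is r(t) = (g c)^2 t^4 / 144, as suggested above):

    # Verify that r(t) = (g*c)**2 * t**4 / 144 solves r'' = g*c*sqrt(r),
    # the dimensionally corrected dome equation (g, c > 0 assumed).
    import sympy as sp

    t, g, c = sp.symbols('t g c', positive=True)
    r = (g * c) ** 2 * t ** 4 / 144
    residual = sp.diff(r, t, 2) - g * c * sp.sqrt(r)
    print(sp.simplify(residual))  # prints 0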
1Sniffnoy12y
Notice "r" here does not mean the usual radial distance but rather the radial distance along the curve itself. I don't see any obvious barrier to such a thing and I expect it exists, but that it actually does isn't obvious to me; the resulting differential equation seems to have a problem at h=0, and not knowing much about differential equations I have no idea if it's removable or not (since getting an explicit solution is not so easy).
1Alejandro112y
You piqued my curiosity, so I sat down to play a little with the equation. If R is the usual radial coordinate, I got:

    dh/dR = c y^(1/3) / (1 - c^2 y^(2/3))^(1/2),   with y = 3h/(2c),

using the definition of c in my previous comment. (I got this by switching from dh/dr to dh/dR with the relation between sin and tan, and replacing r by r(h). Feel free to check my math; I might have made a mistake.) This was easily integrated by Mathematica, giving a result that is too long to write here, but it has no particular problem at h = 0, other than dR/dh being infinite there. That is expected; it just means dh/dR = 0, so the peak of the dome is a smooth maximum.
0lessdazed13y
Vf vg gung gur vavgvny irybpvgl bs gur znggre vf mreb, gur vavgvny naq riraghny nppryrengvba bs gur pragre bs tenivgl vf mreb, ohg gur pragre bs tenivgl vgfrys ortvaf jvgu n yrsgjneq irybpvgl?
0ArisKatsaris13y
This is a good one, but I won't try doing the math to figure out the specific error with it (other than the obvious problem of (rot13) gur arrq gb pnyphyngr gur rssrpg bs nyy gur znffrf, abg whfg gur vzzrqvngryl ba gur evtug naq gur vzzrqvngryl ba gur yrsg.) Here's what my intuition is telling me: Gung jung jvyy npghnyyl unccra vf gung gur znffrf jvyy pbairetr fbzrjurer va gur pragre, jvgu fbzr bs gur znffrf nsgre gur zvqqyr cbvag zbivat evtugjneqf vafgrnq. V vzntvar gung V'q unir gb hfr vagrtengvba bire gur ahzore bs vasvavgr znffrf ba gur yrsg, gb frr gung gur zngu sbepr gur jubyr guvat gb onynape.
0rwallace13y
An elegant puzzle. Gur fbyhgvba vf rnfvrfg gb frr vs jr pbafvqre n svavgr ahzore bs vgrengvbaf bs gur cebprqher bs fcyvggvat bss n fznyyre znff. Va rnpu vgrengvba, gur yrsgzbfg znff onynaprf gur obbxf ol haqretbvat irel encvq nppryrengvba gb gur evtug. Va gur yvzvg nf gur ahzore bs vgrengvbaf tbrf gb vasvavgl, jr unir na vasvavgrfvzny znff onynapvat gur obbxf jvgu vasvavgryl ynetr nppryrengvba.
2[anonymous]13y
I think the solution I came up with is, in spirit, the same as this one. Pbafvqre gur nzbhag bs nppryrengvba rnpu cnegvpyr haqretbrf. Gur snegure lbh tb gb gur yrsg, gur terngre gur nppryrengvbaf naq gur fznyyre gur qvfgnaprf, naq, guhf, gur yrff gvzr vg gnxrf orsber n pbyyvfvba unccraf. (V pbhyq or zvfgnxra gurer.) Sbe rirel cbfvgvir nzbhag bs gvzr, pbyyvfvbaf unccra orsber gung nzbhag bs gvzr. Gurersber, gur orunivbe bs gur flfgrz nsgre nal nzbhag bs gvzr vf haqrsvarq.
1Tyrrell_McAllister13y
Jung vs jr fhccbfr gung gur obqvrf pna cnff guebhtu rnpu bgure? Ubj qbrf gur flfgrz orunir nf gvzr cebterffrf?
0pengvado13y
Vg'f rnfl rabhtu gb rkgraq arjgbavna zrpunavpf gb unaqyr gjb-cnegvpyr pbyyvfvbaf. Whfg cvpx bar bs "rynfgvp pbyyvfvba" be "cnff guebhtu rnpu bgure", naq nccyl gur pbafreingvba ynjf naq flzzrgevrf. Ohg V'z abg njner bs nal cebcbfnyf gb unaqyr guerr-be-zber-cnegvpyr pbyyvfvbaf: va gung pnfr, pbafreingvba bs raretl naq zbzraghz vfa'g rabhtu gb qrgrezvar gur bhgchg fgngr, lbh'q arrq fbzr npghny qlanzvpf gung ner qrsvarq ba gur vafgnag bs pbyyvfvba.
0Douglas_Knight13y
Do you have a reference for how to extend Newtonian mechanics to collisions or passing-through of point particles subject to gravity? How about people's complaints about 3-body collisions? Bringing in energy and momentum doesn't sound helpful to me, because they are both infinite at the point of collision.

My understanding is that both of the choices have a unique analytic extension to the complex plane away from the point of collision. Most limiting approaches will agree with this one. In particular, Richard Kennaway's approach (to non-collision) is to perturb the particles into another dimension, so that the particles don't collide. The limit of small perturbation is the passing-through model.

For elastic collisions, I would take the limit of small-radius collisions. I think this is fine for two-body collisions. In dimension one, I can see a couple of ways to do three-body collisions, including this one. The other, once you can do two-body collisions, is to perturb one of the bodies, to get a bunch of two-body collisions; in the limit of small perturbation, you get a three-body collision. But if you extend this to higher dimensions, it results in the third body passing through the collision (which is a bad sign for my claim that most limiting approaches agree). When I started writing, I thought the small-radius approach had the same problem, but I'm not sure anymore.
0rwallace13y
Yes, I think so too.
0Oscar_Cunningham13y
Gur prager bs znff vf ng gur bevtva naq fgnlf gurer?
0[anonymous]13y
Clever! It took me a couple of minutes to get it: Gur yrsgzbfg cbvag (gur 1 xt znff) jvyy zbir gb gur evtug, xrrcvat gur pragre bs tenivgl jurer vg vf. Fb vg'f abg gehr gung "jr pna cebir guvf sbe rirel cbvag.
0ArisKatsaris13y
You are misreading which one's the rightmost mass. The rightmost mass is the one with 1kg.
0[anonymous]13y
Whoops, you're right. I assumed +1/10 meant +1/10th from +1. I had the whole thing backwards.
-1AlexMennen13y
Gurer vf ab reebe. Gur cevapvcyr gung gur pragre bs znff pnaabg nppryrengr pbzrf sebz gur snpg gung, sbe nal cbvag znff, gur sbepr vg rkregf ba bgure znffrf vf rknpgyl bccbfvgr gb gur sbepr bgure znffrf rkreg ba vg. Guvf qbrf abg nccyl gb vasvavgr pbyyrpgvbaf bs znffrf sbe gur fbzr ernfba gung na vasvavgr frevrf bs barf qbrf abg nqq gb mreb, rira gubhtu vg vf gur qvssrerapr orgjrra gjb vqragvpny frevrf, rnpu gur fhz bs nyy cbfvgvir vagrtref.
0Douglas_Knight13y
Vs lbh guvax gurer vf n fcrpvsvp qviretrag frevrf, anzr vg.
0AlexMennen13y
Gur fhz bs gur gbgny sbeprf ba nyy cnegvpyrf.

    sum(n=0 to inf) {
        [sum(m=0 to n-1)   G*(1/2)^n*(1/2)^m / ((1/10)^m - (1/10)^n)^2]
      - [sum(m=n+1 to inf) G*(1/2)^n*(1/2)^m / ((1/10)^n - (1/10)^m)^2]
    }

vf yrff guna 0 sbe rirel grez va gur bhgre fhz, rira gubhtu lbh pna pnapry bhg nyy gur grezf ol ernpuvat vagb gur vaare fhzf.
1wedrifid13y
To put code blocks in markdown, put four spaces before the lines you want to be code. I assume that is what you intended?
0AlexMennen13y
Edited, although I'm not sure it's all that much more readable now.
0wedrifid13y
I was thinking of including a line break or two. ;)
0AlexMennen13y
Good idea.
0Douglas_Knight13y
bx, gung vf n ceboyrzngvp erneenatrzrag. Ohg juvpu fvqr vf pbeerpg? Lbh znqr n pynvz nobhg guvf va lbhe bevtvany pbzzrag, ohg V guvax lbh tbg vg onpxjneqf.
0AlexMennen13y
My previous attempt at a reply didn't make much sense. And this is deep enough into the thread that I won't bother with rot13. Let's try again: The description given by Thomas accurately models how each individual particle accelerates (towards the origin). It thus also accurately models how the center of mass accelerates in the non-limiting case.
0Douglas_Knight13y
I think Richard Kennaway's answer is correct and it seems to me to be opposite to your original answer. The non-limiting case is not interesting. The question is whether there is a coherent extension of Newtonian mechanics to infinitely many particles. It does cover this case and the center of mass still works.
0AlexMennen13y
I think this paradox shows that the answer is no. I can't figure out what this sentence was intended to mean.
2Douglas_Knight13y
Richard Kennaway's answer
0AlexMennen13y
V guvax gung va gur aba-yvzvgvat pnfr, gur neenatrzrag fubjvat rirelguvat qevsgvat gbjneqf gur bevtva vf pbeerpg naq gur neenatrzrag fubjvat gur pragre bs znff fgnlvat fgvyy vf va reebe. Bs pbhefr, va gur yvzvg nf gur ahzore bs cbvag znffrf vapyhqrq tbrf gb vasvavgl, vg'f gur bgure jnl nebhaq.
-8Vaniver13y
[anonymous]12y80

So by now Nelson's outline has been challenged by the formidable Terry Tao, and Nelson (himself formidable!) has responded to this challenge and isn't budging. Link.

The FTL thread has attracted many confident predictions about the ultimate outcome. But this one hasn't. Is this because people find the subject less interesting? Or because they are less confident?

For what it's worth, here's the timeline of my thoughts/beliefs, in silly internal-monologue form. Maybe the numbers shouldn't be taken too seriously, and I'm not trying to bait anyone into bett... (read more)

4JoshuaZ12y
I've put up a relevant prediction on PredictionBook. The main reason I've given only a 75% chance is that it may take a lot of time to study the matter in detail. If you are putting 10% on the chance that Nelson's proof will turn out to be correct, then 10% seems way too high, for essentially the reasons that benelliott touched on below. My own estimate of this being correct is around 0.5%. (That's sort of a meta-estimate which tries to take into account my own patterns of over- and underconfidence for different types of claims.) I'd be willing to bet $100 to $1, which would be well within both of our stated estimates.
0[anonymous]12y
Interesting. Has your prior for "a contradiction will be found in the next hundred years" moved since Nelson's announcement? My 5/40/5/10 numbers are (were) my estimation of the chance that a contradiction will be found in my lifetime. I'm far less certain that Nelson will have it sewed up in a few months, as he's claiming, but it's still quite likely he's made a big breakthrough.
0JoshuaZ12y
Oh. I see. I thought your numbers were for Nelson's proof working. Hmm, given that, they seem more reasonable but still too high. I'd need to think more carefully about how likely I'd estimate things.
0benelliott12y
Depending on the exact conditions, I might be prepared to take that.
4benelliott12y
The issue is mainly that I don't think I'm qualified to determine the outcome of the debate. Nelson and Tao are both vastly better at maths than me, so I don't have much to go on, and I suspect others are in a similar predicament. I've been trying to understand their conversation, and if I understand correctly, Tao is right and Nelson has made a subtle and quite understandable error; but I estimate that P(my understanding is correct) is not very large, so this doesn't help much, and even if I am right there could easily be something I'm missing.

I would also assign this theorem quite a low prior. Compare it to P != NP: when a claimed proof of that comes out, mathematicians are usually highly sceptical (and usually right), even if the author is a serious mathematician like Deolalikar rather than a crank. Another example that springs to mind is Cauchy's failed proof of Fermat's Last Theorem (even though that was eventually proven, I still think a low prior was justified in both cases). This, if correct, would be a vastly bigger result than either of those. I don't think it would be an exaggeration to call it the single most important theorem in the history of mathematics if correct, so I think it deserves a much lower prior than they do. Even more so since, in the case of P != NP, most mathematicians at least think it's true even if they are sceptical of most proofs, whereas in this case most mathematicians would probably be happy to bet against it (that counts for something, even if you disagree with them).

I don't have much money to lose at this point in my life, but I'd be happy to bet $50 to $1 that this is wrong.
5[anonymous]12y
I think there's a salient difference between this and P = NP or other famous open problems. P = NP is something that thousands of people are working on and have worked on over decades, while "PA is inconsistent" is a much lonelier affair. A standard reply is that every time a mathematician proves an interesting theorem without encountering a contradiction in PA, he has given evidence for the consistency of PA. For various reasons I don't see it that way. Same question as for JoshuaZ: has your prior for "a contradiction in PA will be found within a hundred years" moved since Nelson's announcement?
3benelliott12y
Yes; obviously P(respectable mathematician claims a contradiction | a contradiction exists) > P(respectable mathematician claims a contradiction | no contradiction exists), so it has definitely moved my estimate. Like yours, it also moved back down when Tao responded, back up a bit when Nelson responded to him, and back down a bit more when Tao responded in turn and I finally managed a coherent guess at what they were talking about.

I'm not sure this is an important difference. I think scepticism about P != NP proofs might well be just as valid even if far fewer people were working on it. If anything it would be more valid: lots of failed proofs give you lots of chances to learn from the mistakes of others, as well as to avoid routes which others in the field have shown not to work. Furthermore, the fact that huge numbers of mathematicians work on P vs NP but have never claimed a proof suggests a selection effect in favour of those who do claim proofs, which is absent in the case of inconsistency.

Also, not wanting to be unfair to Nelson, but the fact that he's working alone on a task most mathematicians consider a waste of time may suggest a substantial ideological axe to grind (what I have heard of him supports this theory), and sadly it is easier to come up with a fallacious proof of something when you want it to be true.

I'm not sure this line of debate is a productive one; the issue will be resolved one way or the other by actual mathematicians doing actual maths, not by you and me debating priors (to put it another way, whatever the answer ends up being, this conversation will have been wasted time in retrospect).
2[anonymous]12y
Can you roughly quantify it? Are we talking from million-to-one to million-to-one-point-five, or from million-to-one to hundred-to-one? Sorry if I gave you a bad impression: I am not trying to start a debate in any adversarial sense; I am just curious. Of that there's no doubt, but it speaks well of Nelson that he's apparently resisted the temptation toward self-deception for decades, openly working on this problem the whole time.
3benelliott12y
The announcement came as a surprise, so the update wasn't negligible. I probably wouldn't have gone as low as million-to-one before, but I might have been prepared to estimate a 99.9% chance that arithmetic is consistent. However, I'm not quite sure how much of this change is a Bayesian update and how much is the fact that I got a shock and thought about the issue a lot more carefully.
4[anonymous]12y
Cauchy? Don't you mean Lamé? Anyway, if you lose the bet then 1 = 0, so you don't need to transfer any money.
4benelliott12y
From what I gathered, both went down a similar route at the same time, although it is possible that I am mistaken in this. I named Cauchy as he seems to be the better-known of the two, and therefore the one with the higher absolutely-positively-not-just-some-crank factor. If I lose the bet (not that anyone has yet agreed to accept it), then payment will be based on restricted number theory without the axiom of induction.

And here I thought there wasn't anything besides c I'd bet on at 99-to-1 odds.

Two! Two things in the universe I'd bet on at 99-to-1 odds. Though I'm not actually going to do it for more than say $2k if anyone wants it, since I don't bet more than I have, period.

Would you bet at 99-to-1 odds that I will not win the lottery tomorrow?

Assume for a second that FTL communication is possible and that PA is inconsistent. How could this influence a proof of AI friendliness invented before those discoveries were made? And how can one make an AI provably friendly given other possible fundamental errors in our understanding of physics and mathematics?

4Jack13y
We'd have to know what a proof of friendliness looks like to answer this.
4JoshuaZ13y
So this is an interesting set of questions. I don't think FTL issues should matter in this context. Inconsistency in PA is more of an issue. I'm not terribly worried about an actual inconsistency in PA. So much goes wrong if that's the case that AI issues might actually be small (no really. It could be that bad.). That said, however, this sort of worry makes a lot more sense for proofs in ZF or ZFC. I'd be a lot more worried about a contradiction showing up in ZF than in PA. So one might want to make sure that any such proofs use as small a segment of ZF as possible.
5[anonymous]13y
That is an understandable reaction. But since encountering Nelson's ideas a while ago, I've found myself thinking less and less that it would be that bad at all. There is no very strong evidence that completed infinities need to be grappled with at a physical or real-world level. The main "finitary" thing you lose without PA is superexponentiation. But that shows up surprisingly rarely, in the universe and in math. (Or maybe not so surprisingly!) Interestingly, superexponentiation shows up very often on less wrong. Maybe that's a good answer to XiXiDu's question. If Nelson is right, then people should stop using 3^^^3 in their thought experiments.
1ArisKatsaris13y
...I don't understand how this can be. 3^^3 is just 3^(3^3) = 3^27 = 7625597484987, and 3^^^3 is just 3^(3^(3^(... 7625597484987 times ...))). Superexponentiation is just exponentiation done many times, and exponentiation is made of multiplication, and multiplication is made of addition. How can superexponentiation be invalid without normal addition being invalid too?
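The build-up described here is easy to make concrete (a toy sketch using Knuth's up-arrow convention; hyper and its arguments are just illustrative names):

    # hyper(a, 1, b) = a**b; each additional arrow iterates the level below,
    # which is exactly the "superexponentiation is made of exponentiation" point.
    def hyper(a, arrows, b):
        if arrows == 1:
            return a ** b
        result = 1
        for _ in range(b):
            result = hyper(a, arrows - 1, result)
        return result

    print(hyper(3, 2, 3))  # 3^^3 = 7625597484987
    # hyper(3, 3, 3) would be 3^^^3: a tower of 7625597484987 threes,
    # hopeless to evaluate.

The disagreement in the replies below is not about any one such finite computation, but about whether the function so defined is provably total.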
4[anonymous]13y
I can say more if you want, but maybe see here and here for an explanation.
1JoshuaZ13y
Any finite calculation of superexponentiation will be valid. But as I understand it, you can't, in Nelson's formulation, prove that superexponentiation is well-defined in general.
1JoshuaZ13y
My understanding is that Nelson doesn't necessarily believe in exponentiation either, although his work here doesn't have the direct potential to wreck that. The real issue isn't PA so much as the fact that a very large number of theorems we use regularly live in axiomatic systems that model PA in a nearly trivial fashion. For example, ZFC minus replacement, choice and foundation. Even much weaker systems, where one replaces the power set axiom with some weaker axiom, let you construct PA: for example, replacing the power set axiom with the claim that for any two sets A and B their Cartesian product exists. Even this is much stronger than you need. Similarly, you can massively restrict the allowed types of input sets and predicates in the axiom schema of specification and still have more than you need; you can, for example, restrict the allowed predicates to at most three quantifiers and still be ok (this may be restrictable to just two quantifiers, but I haven't worked out the details).

And what Nelson's proof (if valid) breaks is weaker than PA, which means that if anything the situation is worse. It may be, for example, that any axiomatic system which lets us talk about both the real numbers and the integers allows Nelson's problem. Say, for example, one has an axiomatic system which includes Nelson's extreme finitary version of the natural numbers, the very minimal amount of set theory necessary to talk about sets, second-order reals, and an axiom embedding your "natural numbers" into the reals. Can you do this without getting all sorts of extra stuff thrown in? I don't know, and this looks like an interesting question from a foundational perspective. It may depend a lot on which little bits of set theory you allow. But I'm going to guess that the answer is no. I'll need to think about this a bit more to make the claim more precise.
7[anonymous]13y
I know this isn't the main point of your comment, but I got carried away looking this stuff up. In his 1986 book, he defines a predicate exp(x,k,f) which basically means "f is a function defined for i < k and f(i) = x^i". And he defines another predicate epsilon(k), which is "for all x there exists f such that exp(x,k,f)". I think you can give a proof of epsilon(k) in his regime in O(log log k) steps. In standard arithmetic that takes O(1) steps using induction, but O(log log k) is pretty cheap too if you're not going to iterate exponentiation. So that's a sense in which Nelson does believe in exponentiation. On the other hand, I think it's accurate to say that Nelson doesn't believe that exponentiation is total, i.e. that for all n, epsilon(n).

Here's an extract from Nelson's "Predicative Arithmetic" (Princeton U Press, 1986), from the chapter headed "Is exponentiation total?"
2benelliott12y
This argument applies just as well to addition and multiplication.
3[anonymous]12y
See Nelson's paper, where he talks about the qualitative difference between types of recursion.
1[anonymous]12y
I think I can explain that. Nelson doesn't believe in any kind of Platonic number, but he does believe in formal number systems and that they can sometimes accurately describe the world. We have essentially two number systems: the tally-mark system (e.g. SSSSS0 and so on) and the positional system (e.g. 2011 and so on). Both are successful at modeling aspects of the world.

Most people regard these two number systems as equivalent in some sense. Specifically, there is a procedure for rewriting a positional number as a tally number, and most people believe that this procedure terminates "at least in principle" (for instance, one can formally prove this in PA using the induction axiom). But Nelson denies it. He believes that the positional number system and the tally number system are different, and that the positional system is strictly stronger. I think your remark applies to the tally system, but not to the positional system. But Nelson does not believe that the positional system is inconsistent, so he is happy to adjoin axioms to the tally system that allow for working with addition and multiplication.
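A toy illustration of the asymmetry (my sketch, not Nelson's formalism):

    # Tally numerals are S...S0; positional numerals are ordinary digit strings.
    def to_tally(n):
        # Rewriting positional -> tally takes n steps: exponentially many
        # in the LENGTH of the positional numeral, which is the sticking point.
        return 'S' * n + '0'

    print(to_tally(5))            # SSSSS0
    print(len(to_tally(2011)))    # 2012 symbols from a 4-digit numeral

For a numeral like 2^n this rewriting is exponentially long in n, which is roughly why one might refuse to grant that it "terminates in principle."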
2benelliott12y
My knowledge of complexity theory is very limited, but I thought exponentiation could be done in polynomial time. Is that incorrect, or am I misunderstanding Nelson's argument? (Please don't tell me he also doubts complexity theory; we must be allowed some fun :P)

EDIT: Yes, I was wrong; the claim of polynomial time in the length of b may well be too strong, but it does seem to take fewer than 2^b steps. I think I can use the positional system to compute 2^b in O(b^2) steps, which seems to lead us straight from accepting multiplication to accepting exponentiation.
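For what it's worth, here is the cost model I take that O(b^2) claim to mean (my sketch; counting digits stands in for counting single-digit operations):

    # Compute 2^b by b successive doublings in positional notation.
    # Doubling a d-digit number costs about d digit operations, and d = O(b),
    # so the total is O(b^2), far fewer than the 2^b steps a tally count needs.
    def pow2_with_cost(b):
        n, ops = 1, 0
        for _ in range(b):
            n *= 2
            ops += len(str(n))  # one pass over the current digits
        return n, ops

    print(pow2_with_cost(64))  # (18446744073709551616, a count growing like b^2)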
2[anonymous]12y
I think so, in positional notation; that's what I was alluding to with my O(log log k) comment. But there is no polynomial-time algorithm to reduce 2^n to a tally number (obviously). I once read something by Nelson about the polynomial hierarchy in complexity theory, but I can't find it now.
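(My reconstruction of where the O(log log k) count plausibly comes from, not a quote from the book: given epsilon(k), any exponent i < k^2 can be written in base k as i = q*k + r with q, r < k, so x^i = (x^k)^q * x^r; that proves epsilon(k^2) in O(1) further steps, and repeatedly squaring the bound reaches k in about log_2 log_2 k steps.)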
[anonymous]13y40

This aspect is very interesting:

Qea. If this were normal science, the proof that P is inconsistent could be written up rather quickly. But since this work calls for a paradigm shift in mathematics, it is essential that all details be developed fully. At present, I have written just over 100 pages beginning this. The current version is posted as a work in progress at http://www.math.princeton.edu/~nelson/books.html, and the book will be updated from time to time. The proofs are automatically checked by a program I devised called qea (for quod est absurdum).

... (read more)
3[anonymous]13y
It seems easier to me to simply trust Coq than to read through 23 megabytes of proofs by eye. But I'm not entirely certain what the purpose of qea is.
2vi21maobk9vp13y
It looks like Qea does verify the proof and generates human-readable text. In that case, my trust in it or Coq would be based on their relative size, adjusted for the amount of testing these tools have received. Coq uses a non-trivially unique proof system (the calculus of coinductive constructions, if I remember correctly), which counts against it. Did anyone look at what proof system Qea uses?
0[anonymous]13y
Yeah, I've heard about this "notorious difficulty" in verifying proof checkers before, and I don't understand what it could mean. Humans are way more unverifiable. I bet the real purpose of qea is that it allowed its author to avoid learning to use something else. But I find it interesting that he recognizes the importance of computer verification for something like this, and maybe it indicates that this has been in the works for a while. It doesn't have to be half-baked to be wrong, though.
1benelliott13y
I think the idea is that, for a point this controversial, he is well aware that mathematicians may actually object to his trustworthiness (they've objected to less questionable things in the past) and want to verify the proof for themselves. I think he may well be right in this. However, I don't see why he can't give a full explanation for humans as well (his current paper isn't one), since this would probably be finished sooner and would probably save a lot of his own time if there is a mistake.
3vi21maobk9vp13y
Actually, I would be more interested in the size of the checking core of Qea. Quickly inspecting 23 megabytes of proofs sounds like a bad idea. A converter to the TSTP first-order proof format (for example) would let me use a few different bug-for-bug-independent, well-tested checkers for this format that are already out there; I would trust those more than both human inspection and any single-implementation prover.
3benelliott13y
How do you get to 733 axioms? Maybe I'm being stupid, but doesn't PA run on just 5?
5Sniffnoy13y
Strictly speaking, PA uses infinitely many axioms -- the induction axiom is actually an axiom schema, one axiom for each predicate you can plug into it. If you actually had it as one axiom quantifying over predicates, that would be second-order.
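Concretely, the schema contributes one axiom

    (phi(0) and for all n (phi(n) -> phi(n+1))) -> for all n phi(n)

for each arithmetic formula phi; quantifying over all predicates phi at once would instead be a single second-order axiom.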
2[anonymous]13y
I think that Nelson denies that there is a completed infinity of predicates that you can plug into the schema.
6Sniffnoy13y
Well, you'd certainly only need finitely many to prove inconsistency.
3[anonymous]13y
I think 733 is counting axioms, definitions, and theorems all.
1benelliott13y
That would explain it.
3ArisKatsaris13y
It said "733 axioms, definitions, and theorems" I'm guessing 733 is the sum of the axioms, definitions and theorems, not just the axioms alone.

Here's a summary and discussion of the affair, with historical comparison to the Gödel results and their reception (as well as comments from several luminaries, and David Chalmers) on a philosophy of mathematics blog whose authors seem to take the position that the reasons for consensus in the mathematical community are mysterious. (It is admitted that "arguably, it cannot be fully explained as a merely socially imposed kind of consensus, due to homogeneous ‘indoctrination’ by means of mathematical education.") This is a subject that needs to be discussed more on LW, in my opinion.