
The Crackpot Offer

38 Post author: Eliezer_Yudkowsky 08 September 2007 02:32PM

When I was very young—I think thirteen or maybe fourteen—I thought I had found a disproof of Cantor's Diagonal Argument, a famous theorem which demonstrates that the real numbers outnumber the rational numbers.  Ah, the dreams of fame and glory that danced in my head!

My idea was that since each whole number can be decomposed into a bag of powers of 2, it was possible to map the whole numbers onto the set of subsets of whole numbers simply by writing out the binary expansion.  13, for example, 1101, would map onto {0, 2, 3}.  It took a whole week before it occurred to me that perhaps I should apply Cantor's Diagonal Argument to my clever construction, and of course it found a counterexample—the binary number ...1111, which does not correspond to any finite whole number.
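The attempted mapping, and the diagonal counterexample that sinks it, can be sketched in a few lines of Python (my illustration, not anything from the original post):

```python
def to_subset(n):
    """Map a whole number to the set of exponents in its binary expansion."""
    return {i for i in range(n.bit_length()) if (n >> i) & 1}

assert to_subset(13) == {0, 2, 3}  # 13 = 1101 in binary

# Cantor's diagonal set D = {n : n not in to_subset(n)} differs from
# to_subset(n) at the element n, for every n, so no whole number can map
# onto it.  Here D turns out to contain every n checked -- its "binary
# expansion" is the infinite string ...1111, not a finite whole number.
D = [n for n in range(16) if n not in to_subset(n)]
```

(The mapping does show the whole numbers inject into the *finite* subsets; the diagonal set is infinite, which is exactly where the "disproof" fails.)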

So I found this counterexample, and saw that my attempted disproof was false, along with my dreams of fame and glory.

I was initially a bit disappointed.

The thought went through my mind:  "I'll get that theorem eventually!  Someday I'll disprove Cantor's Diagonal Argument, even though my first try failed!"  I resented the theorem for being obstinately true, for depriving me of my fame and fortune, and I began to look for other disproofs.

And then I realized something.  I realized that I had made a mistake, and that, now that I'd spotted my mistake, there was absolutely no reason to suspect the strength of Cantor's Diagonal Argument any more than other major theorems of mathematics.

I saw then very clearly that I was being offered the opportunity to become a math crank, and to spend the rest of my life writing angry letters in green ink to math professors.  (I'd read a book once about math cranks.)

I did not wish this to be my future, so I gave a small laugh, and let it go.  I waved Cantor's Diagonal Argument on with all good wishes, and I did not question it again.

And I don't remember, now, if I thought this at the time, or if I thought it afterward... but what a terribly unfair test to visit upon a child of thirteen.  That I had to be that rational, already, at that age, or fail.

The smarter you are, the younger you may be, the first time you have what looks to you like a really revolutionary idea.  I was lucky in that I saw the mistake myself; that it did not take another mathematician to point it out to me, and perhaps give me an outside source to blame.  I was lucky in that the disproof was simple enough for me to understand.  Maybe I would have recovered eventually, otherwise.  I've recovered from much worse, as an adult.  But if I had gone wrong that early, would I ever have developed that skill?

I wonder how many people writing angry letters in green ink were thirteen when they made that first fatal misstep.  I wonder how many were promising minds before then. 

I made a mistake.  That was all.  I was not really right, deep down; I did not win a moral victory; I was not displaying ambition or skepticism or any other wondrous virtue; it was not a reasonable error; I was not half right or even the tiniest fraction right.  I thought a thought I would never have thought if I had been wiser, and that was all there ever was to it.

If I had been unable to admit this to myself, if I had reinterpreted my mistake as virtuous, if I had insisted on being at least a little right for the sake of pride, then I would not have let go.  I would have gone on looking for a flaw in the Diagonal Argument.  And, sooner or later, I might have found one.

Until you admit you were wrong, you cannot get on with your life; your self-image will still be bound to the old mistake.

Whenever you are tempted to hold on to a thought you would never have thought if you had been wiser, you are being offered the opportunity to become a crackpot—even if you never write any angry letters in green ink.  If no one bothers to argue with you, or if you never tell anyone your idea, you may still be a crackpot.  It's the clinging that defines it.

It's not true.  It's not true deep down.  It's not half-true or even a little true.  It's nothing but a thought you should never have thought.  Not every cloud has a silver lining.  Human beings make mistakes, and not all of them are disguised successes.  Human beings make mistakes; it happens, that's all.  Say "oops", and get on with your life.

 

Part of the Letting Go subsequence of How To Actually Change Your Mind

Next post: "Just Lose Hope Already"

Previous post: "The Importance of Saying "Oops""

Comments (69)

Comment author: Tom_McCabe 08 September 2007 03:13:52PM 31 points [-]

"So I found this counterexample, and saw that my attempted disproof was false, along with my dreams of fame and glory."

I know how that feels. When I was 14 or so, I took a course on cryptography, and the textbook proclaimed that modular inverses were the basis of public-key algorithms like RSA. I felt that modular inverses were crackable, and I plodded along on the problem for a few weeks, until I finally discovered a polynomial-time algorithm for computing modular inverses. It turned out that I had reinvented Euclid's algorithm, and the textbook authors were idiots.
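A minimal sketch of that rediscovery (my illustration, not the commenter's code): the extended Euclidean algorithm produces modular inverses in polynomial time.

```python
def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = egcd(b, a % b)
    return (g, y, x - (a // b) * y)

def modinv(a, m):
    """Inverse of a mod m, when gcd(a, m) == 1."""
    g, x, _ = egcd(a, m)
    if g != 1:
        raise ValueError("no inverse: gcd(a, m) != 1")
    return x % m

assert (7 * modinv(7, 40)) % 40 == 1
```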

Comment author: Baughn 04 February 2012 05:11:46PM 3 points [-]

Well, that's a pretty impressive "error" though. :-)

Comment author: Phil 08 September 2007 03:57:45PM 4 points [-]

Not to draw attention away from your main argument, but how does 1101 map onto {0, 2, 3}? It's probably obvious, but I don't see it.

Comment author: CCC 03 October 2012 09:35:30AM 4 points [-]

It's the positions of the ones, starting from position zero on the far right. Similarly, 19 (10011) would map to {0, 1, 4}.

Comment author: Sebastian_Hagen2 08 September 2007 04:17:43PM 2 points [-]

Phil: Build the set from the used exponents of the powers of two. For instance, 1101[2] = 2**0 + 2**2 + 2**3

Comment author: Stuart_Armstrong 08 September 2007 05:48:29PM 3 points [-]

So I found this counterexample, and saw that my attempted disproof was false, along with my dreams of fame and glory.

Feels familiar - when I was younger, I proved the Poincaré conjecture, and Fermat's last theorem (twice). I generally managed to slay my proofs by myself, though I felt no regret at being wrong, just frustration and anger at myself.

Even now, as a mathematical researcher, it's very hard to give up a nice result that can't be proved. But I manage. And I do feel that there is a silver lining: greater, more confident accuracy.

Comment author: Robin_Hanson2 08 September 2007 05:50:52PM 9 points [-]

After the fact you could see you made a mistake. But the key question is: what were the clearest signals at the time, the sort of signals you had a chance to notice and recognize? What is the warning to others? Presumably it is not to give up after your first failure.

Comment author: Stuart_Armstrong 08 September 2007 05:54:54PM 8 points [-]

But the key question is: what were the clearest signals at the time, the sort of signals you had a chance to notice and recognize?

In my case, it was the fact that brilliant mathematicians had tried to prove these results for generations. No matter how brilliant I think myself, it would be unlikely for me to have found a simple proof where everyone else had failed.

Comment author: Stuart_Armstrong 08 September 2007 05:59:26PM 2 points [-]

Minor quibble: since binary 0.1111... is 1, you need a number like 0.1010101... to get an actual counterexample.

Comment author: alex_zag_al 19 September 2012 02:29:37PM *  0 points [-]

He's looking for a correspondence between the natural numbers and their subsets because the subsets have a correspondence with the interval of reals [0,1], right? So .1111... = 1 is a counterexample, since it corresponds to the set of all whole numbers. Being equal to 1 doesn't make it representable by a finite subset.

Comment author: thomblake 19 September 2012 03:12:36PM *  5 points [-]

You're both wrong, as pointed out later down in the comments. Eliezer wasn't referring to 0.1111...; he was referring to the infinite string ...1111.0. That doesn't represent any finite whole number, but it does represent an infinite set of whole numbers.

And yes, being equal to 1 does make it representable by a finite subset. Notably, {0}.

Comment author: Daniel_Humphries 08 September 2007 06:22:16PM 10 points [-]

It seems like one of the key factors in your story, Eliezer, is that you had read that book on math cranks. You were able to make the leap from your project of disproving Cantor and see its implications for the rest of your life thanks in part to having the example of the math crank in your mind.

Seeking evidence outside the immediate domain of inquiry can be tricky because it might lead one to include evidence that has no bearing on the actual problem, but because human endeavors don't happen in a vacuum, it's a great way of checking yourself for more general errors (like tilting at windmills).

Comment author: Joshua_Fox 08 September 2007 07:10:39PM 8 points [-]

I was not displaying ... any ... virtue

Most math teachers would be delighted if a student was able to understand Cantor's proof, think critically enough to search for a counter-proof, think creatively enough to describe a counter-proof (and based on different mathematical constructs at that), even though the proof was wrong at some critical steps.

This would be quite an achievement even for those who do not go on to the crucial last step of thinking self-critically enough to find the mistake in that "proof."

Comment author: Sebastian_Hagen2 08 September 2007 07:22:01PM 4 points [-]

Minor quibble: since binary 0.1111... is 1, you need a number like 0.1010101... to get an actual counterexample.

Afaict, the original post doesn't contain any mention of binary fractions. An infinite binary sequence consisting entirely of ones doesn't represent any finite integer.

Comment author: Tom_Breton 08 September 2007 07:30:05PM 8 points [-]

It seems to be a common childhood experience on this list to have tried to disprove famous mathematical theorems.

Me, I tried to disprove the four-color map conjecture when I was 10 or 11. At that point it was a conjecture, not a theorem. I came up with a nice moderate-size map that, after an apparently free initial labelling and a sequence of apparently forced moves, required a fifth color.

Fortunately the first thing that occurred to me was to double-check my result, and of course I found a 4-color coloring.

Comment author: pnrjulius 30 June 2012 03:23:29AM -1 points [-]

I did exactly the same thing.

I also discovered shortly thereafter that I could force an n-coloring if I allowed discontinuous regions, which might seem trivial... except that real nations on real maps are sometimes discontinuous (Alaska, anyone?).

Comment author: Andrew_Clough 08 September 2007 10:29:03PM 9 points [-]

I expect that many people who grew up to be scientists and mathematicians attempted to create famous proofs when they were young, but I also expect that for many engineers such as myself our youthful folly went more along the direction of perpetual motion machines. I'd actually like to see some research on what the correlations really are.

Comment author: Flynn 09 September 2007 12:15:42AM 2 points [-]

LOL. Color me for both, Andrew. Perpetual motion using magnetic levitation in a vacuum at 10. Attempting to come up with a simple proof of Fermat's Theorem at 20 (if there was an easy way to determine n-roots of non-primes, I'd have been SET! :-) )

Comment author: pnrjulius 30 June 2012 03:25:02AM -1 points [-]

Actually, perpetual motion using vacuum energy might really be feasible, since the vacuum energy keeps expanding itself... at present, it looks sort of like a loophole in the laws of nature.

On the other hand, quantum gravity may close this loophole.

Comment author: DaFranker 06 August 2012 08:24:10PM *  4 points [-]

Expansion of the original point: Finding various "loopholes" in the "laws of nature" that would allow FTL/perpetual-motion/infinite-scalable-free-energy/[insert-absurdly-surreal-technology-here].

I did that from age 16 (initially as a bored-by-this-math-class-let's-think-about-something-else tactic, gradually becoming more serious) onwards to around 19, when I finally realized that the "loopholes" aren't actually in the laws of nature, just in how shitty our (or in many of those cases, mine specifically) understanding of them is.

If there exists any loophole in the laws of nature such that something impossible becomes possible through this loophole, then the map was upside-down, and it was a feature of the laws of nature all along; the laws of nature had always permitted it, we just didn't know how. The Universe doesn't rape itself.

Comment author: [deleted] 17 September 2012 09:04:23PM 1 point [-]

Building PMMs based on loopholes in the laws of physics is probably a good way to design experiments. Physics says X; reality says Y.

Comment author: James_Bach 09 September 2007 01:21:35AM 4 points [-]

Something seems out of kilter about this, Eliezer.

When I was 13, I thought I had a proof in principle that there must be a minimum possible distance--because to move is to move a finite distance, but no sum of infinitesimal distances can compose a finite distance. I shared my idea with a professional physicist, who dismissed my idea using an appeal to authority. I don't care how fabulous the authority was, nor how ignorant I may have been; it was a terrible thing for him to do. It killed my enthusiasm for questioning physics, or math, at the time.

Reasoning, even mathematical reasoning, is not just about right and wrong. It's also about how we model the world and apply our models to it. See Imre Lakatos's wonderful Proofs and Refutations for a look at how proofs are not just proofs; they are assertions about what's worth talking about and what we mean by our words.

And reasoning is also about honing our skills. We must develop the guts to recognize when we are wrong, but also the guts not to worry so much about being wrong that we give up before we learn very much.

I once discovered a way to trisect an angle with a compass and straight edge. This has been proven to be impossible, apparently, but I did it. Later I discovered that I used an operation that wasn't "allowed" (an approximation maneuver), even though I performed the maneuver with only a compass and straight edge. To me, the proof that it can't be done is obviously incorrect, by any practical standard. Show me an angle and I can trisect it to an arbitrarily high degree of accuracy with my mechanical procedure. I challenge the "rules" set out by whoever thinks he's the know-all on what can be done with a compass and straight edge.

I hope other 13 year-olds don't read your essay and decide that the rational attitude is never to try to reinvent or challenge the Ancient Ones.

Comment author: Tom_McCabe 09 September 2007 02:00:10AM 1 point [-]

"I challenge the "rules" set out by whomever thinks he's the know-all on what can be done with a compass and straight edge."

I would be interested to see what you can get out of a compass and straightedge if you change the allowable operations. You could wind up with something much more complex than the things the ancient Greeks studied (think of how much more complex a Riemannian manifold is than a Euclidean n-space, once you remove a few of Euclid's axioms).

Comment author: paper-machine 24 August 2011 11:47:41AM *  17 points [-]

I know this is an old comment, but the answer is actually quite nice.

What the compass and straight-edge basically give you is the capacity for solving quadratic equations. There's a field of numbers between the rational and real numbers called the Constructible numbers that completely characterizes what can be done there.

Alternative techniques (e.g., folding) can allow one to solve cubic equations, and so the field of numbers that can be constructed in this way is an extension of the Constructible numbers.

So the full answer to "what you can get if you change the allowable operations" is that construction techniques correspond to field extensions of the rational numbers, and this characterizes their expressive power.
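A concrete instance of that characterization (my example, not from the thread): trisecting 60 degrees would require constructing cos(20°), which by the triple-angle identity cos(3t) = 4cos^3(t) - 3cos(t) is a root of the cubic 8x^3 - 6x - 1. That cubic has no rational roots, and compass-and-straightedge constructions only reach iterated quadratic extensions, so the trisection is impossible.

```python
import math

# c = cos(20 degrees) satisfies 8c^3 - 6c - 1 = 0, since
# cos(60 deg) = 0.5 = 4c^3 - 3c by the triple-angle identity.
# A root of an irreducible cubic cannot lie in the quadratic-closed
# field of constructible numbers.
c = math.cos(math.radians(20))
assert abs(8 * c**3 - 6 * c - 1) < 1e-12
```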

Comment author: Liron 18 September 2011 08:25:57PM 3 points [-]

You are more than a paper-machine, you are a paper-based math expert.

Comment author: Douglas_Knight2 09 September 2007 04:15:34AM 2 points [-]

The ancient Greeks themselves played around with the rules. Archimedes used a "marked straightedge" to trisect an angle.

The first hit on google for trisect an angle is about ways to do it, not discussions of impossibility.

Comment author: Robin_Hanson2 09 September 2007 11:20:11AM 10 points [-]

It seems to me that unless Eliezer was unusual in some other important way not described, he was not at close risk of becoming a math crank.

Comment author: pnrjulius 30 June 2012 03:27:35AM 3 points [-]

But we know that he was unusual: He has a very high IQ. This by itself raises the probability of being a math crank (it also raises the probability of being a mathematician of course).

It's similar to how our LW!Harry Potter has increased chances of being both hero and Dark Lord.

Comment author: Eliezer_Yudkowsky 09 September 2007 02:01:18PM 1 point [-]

I'm also getting that impression, Robin. I'd say, "But there may be a selection effect in the people who comment at Overcoming Bias", but perhaps that would be, well, clinging.

This of course begs the question of where math cranks do come from.

Comment author: Douglas_Knight2 09 September 2007 02:41:27PM 6 points [-]

While many people have mentioned similar disappointments, no one has echoed "I'll get that theorem eventually...even though my first try failed!" That was what seemed like a really bad sign when I read the essay before the comments. But I think we're really bad at communicating feelings, so I don't know how the feelings relate, how strong they were, and especially, how the commenters see the parallels with their reactions.

Comment author: Quill_McGee 03 October 2012 08:11:38AM 1 point [-]

Does it count if the state of trying lasted for a long (but now ended) time? Because if so, I kept on trying to create a bijection between the reals and the wholes, until I was about 13 and found an actual number that I could actually write down that none of my obvious ideas could reach, and found an equivalent for all the non-obvious ones. (0.21111111..., by the way)

Comment author: V_V 03 October 2012 08:45:41AM *  2 points [-]

While many people have mentioned similar disappointments, no one has echoed "I'll get that theorem eventually...even though my first try failed!" That was what seemed like a really bad sign when I read the essay before the comments.

I think it's worse than that. Many people mentioned that they have tried to solve open conjectures, which is something that would require exceptional intelligence, especially without many years of experience. But if you are a smart teenager, thinking that you are exceptionally intelligent falls in the range of normal juvenile hubris.

Yudkowsky didn't try to solve an open conjecture. He tried to disprove a theorem. A theorem that was proved one hundred years ago, and has been known by pretty much everybody who had a math education since then. Thus, Yudkowsky didn't just think he was exceptionally intelligent, he thought that everyone else was basically an idiot.

That's actually a bad symptom of crackpot thought patterns, IMHO.

Comment author: mrsbayes 09 September 2007 09:40:11PM 1 point [-]

This argument that one should admit when they're wrong doesn't generalize beyond the exact reasoning of mathematical proofs and the like. In probabilistic reasoning one can be, indeed usually is, wrong but close. The whole Bayesian worldview is predicated on the assumption that being a little bit wrong, or less wrong than the next guy, means you are probably on a more correct track towards the truth. But it doesn't and can't prove that, given just a few more important bits of information, the guy who's currently "more wrong" is right after all. So just how far from 100% probability must one be before one should admit that one is wrong? At what point does searching for more data relevant to a low-probability hypothesis become crackpottery? Should there not be more than just a single probability figure by which one makes this decision?

Comment author: Eliezer_Yudkowsky 10 September 2007 05:10:01AM 3 points [-]

Would any regular commenters/readers object if I deleted comments like those from "a woo just like you"? I've always been nervous around censorship, especially where it carries the appearance of conflict of interest, but lack of censorship also carries its penalties. If I don't get any requests not to do so, I'll delete the comment tomorrow.

Comment author: James_Blair 10 September 2007 05:52:30AM 4 points [-]

As I'm not much of a contributor, you can take my suggestion with a grain of salt but: Why not file away all deleted non-spam comments to a place where they can be read, but are out of the way? That way, moderators don't have to worry so much about censoring people and can instead focus on keeping discussions civil/troll-free.

Comment author: Eliezer_Yudkowsky 10 September 2007 06:10:17AM 2 points [-]

I would much prefer that, but I don't think this blog has the technology.

Comment author: The_Vicar 10 September 2007 07:01:56AM 3 points [-]

Do you remember the title of the book? It sounds interesting, speaking as a lapsed mathematician.

Comment author: Eliezer_Yudkowsky 10 September 2007 10:16:53PM 3 points [-]
Comment author: Barkley__Rosser 11 September 2007 05:38:01PM 1 point [-]

Not sure if this is cranky or not, but when I was youthful I noticed that the Lorentz transformation of space-time due to relativistic effects, the square root of one minus v squared over c squared, implies an imaginary solution for any v greater than c, that is, for traveling faster than the speed of light. Now, most sci-fi stories suggest that one would go backwards in time if one exceeded the speed of light, but I deduced that one would go into a second time dimension.

Of course the problem is that as long as Einstein is right, it is simply impossible to exceed the speed of light, thereby making the entire speculation irrelevant.
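The observation restated (standard special relativity, just making the comment's formula explicit): the Lorentz factor is

```latex
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad
v > c \;\Rightarrow\; 1 - \frac{v^2}{c^2} < 0 \;\Rightarrow\; \gamma \notin \mathbb{R},
```

so a superluminal velocity makes the radicand negative and γ imaginary — the "imaginary solution" in question.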

Comment author: pnrjulius 30 June 2012 03:28:58AM 0 points [-]

Well, some rather serious physicists have considered the idea: tachyons

Comment author: Gil 12 September 2007 05:24:31PM 6 points [-]

I don't like the formulation: "A thought you should never have thought."

I'd prefer, "An idea you should have quickly rejected."

I suspect that many genuine innovations might first appear to be mistakes or unwarranted challenges to the prevailing wisdom. They should be thought. And they should be considered and criticized. But, we should be ready to reject them if they don't survive the criticism.

Comment author: billswift 13 September 2007 06:56:07AM 0 points [-]

Don't know what your blogging software allows, but richarddawkins.net now has a separate thread for off-topic posts; you click on a label at the end of the article to get to the off-topic thread.

Comment author: markrkrebs 26 February 2010 09:27:21AM *  7 points [-]

I love this site. Found it when looking at a piece of crackpot science on the internet and, wondering, typed "crackpot" into google. I am trying to argue with someone who's my nemesis in most every way, and I'm trying to do it honestly. I feel his vested interest in the preferred answer vastly biases his judgment & wonder what biases do I have, and how did they get there. You seem to address a key one I liken to tree roots, growing in deep and steadfast wherever you first happen to fall, whether it's good ground or not.

Not unlike that analogy, I landed here first, on your post, and found it very good ground indeed.

Comment author: RobinZ 26 February 2010 12:43:19PM 16 points [-]

Welcome to LessWrong!

If you want another couple threads to start exploring, one very good starting place is What Do We Mean By Rationality? and its links; then there is the massive collection of posts accumulated in the Sequences which you can pick over for interesting nuggets. A lot of posts (and comments!) will have links back to related material, both at the top and throughout the text.

Comment author: ciphergoth 26 February 2010 01:09:44PM 13 points [-]

Just to make it explicit: I really appreciate your "welcome" comments, they're good for the site. Thanks.

Comment author: RobinZ 26 February 2010 01:15:47PM 4 points [-]

You're welcome! I saw some other people doing it, and thought that I should do so as well.

Comment author: TheOtherDave 26 October 2010 05:07:13PM 6 points [-]

Let me echo ciphergoth. The effect is broader than you might think; it was because of one of these sorts of comments that I (years later) found the introduction thread when I did.

Admittedly, most readers probably don't start from the beginning and work their way forward. But some of us do!

Comment author: Kevin 28 February 2010 03:23:27AM *  1 point [-]

This is one of my favorite crackpot writings. It does seem plausible that held-breath underwater swimming is really good exercise. http://www.winwenger.com/ebooks/guaran.htm

Comment author: taryneast 21 March 2011 09:48:24PM 2 points [-]

Any sort of swimming is good exercise. Hypoxia, however, is bad for you... IMO it's better to do the swimming with the oxygen ;)

Comment author: Normal_Anomaly 30 March 2011 02:34:10PM *  0 points [-]

Since everyone is sharing their stories, here's mine. When I was around 10, a family friend introduced me to the four-color map problem. I spent months trying to draw a map that required five colors, and one time I thought I had it. I dreamed of fame and glory for a few hours, then I showed the map to a relative who colored it with four colors. Shortly after, I accepted that I wasn't going to get it and stopped.

Comment author: benelliott 24 August 2011 11:40:06AM 0 points [-]

Shortly after, I demonstrated to my own satisfaction that it was impossible and stopped.

If you really did prove the 4-colour theorem at age 10 then dreams of fame and glory would have been quite justified.

Comment author: Kingreaper 24 August 2011 01:51:48PM 0 points [-]

Demonstrated satisfactorily =/= 100% proven

Most things outside mathematics can never be proven, but many can easily be demonstrated satisfactorily.

Comment author: benelliott 24 August 2011 05:13:45PM 1 point [-]

The 4-colour theorem is not outside mathematics. I think I sort of understand what you mean by 'demonstrated satisfactorily' but in my experience such demonstrations aren't worth much, not only can they be wrong in principle, they are often wrong in practice. Mathematics is nothing if not counter-intuitive.

Comment author: Normal_Anomaly 26 August 2011 09:02:42PM 0 points [-]

I don't mean I proved it. I just meant that I worked at it long enough that I became pretty confident I wouldn't be able to do it. Edited my earlier post to make that clearer.

Comment author: benelliott 26 August 2011 09:53:54PM 0 points [-]

I understand that, I was being intentionally facetious.

Comment author: handoflixue 11 June 2011 11:29:18AM 3 points [-]

what a terribly unfair test to visit upon a child of thirteen. That I had to be that rational, already, at that age, or fail.

I always find it odd that you seem to write as though there is no hope of redemption when one makes a mistake of this magnitude. Certainly, lifetimes can be lost to such mistakes. But then, sometimes, it only takes a week to realise our folly, neh?

Comment author: ec429 16 September 2011 12:26:39PM 2 points [-]

I fear that I might be currently trapped in this error: I've always resented Gödel's Incompleteness Theorems. When I was about 17 I thought I'd disproved 1IT (turned out I'd just reconstructed the proof of 2IT and missed the detail that Con(T)≠Prov_T(Con(T))). It took me about a year after that to realise that, no, I wasn't going to disprove the ITs no matter how much I wanted to, and I accepted that trying to disprove them anyway would be a crackpot thing to do. Since then I've been trying to construct a philosophical framework of mathematics in which the ITs become irrelevant. Have I, in fact, taken the Crackpot Offer?

Comment author: Vladimir_Nesov 16 September 2011 01:40:12PM *  4 points [-]

From your description it looks like you might have. You should retract failed conjectures, not rectify them. Another (less efficient) way to recover is to get expertise in the topic strong enough to sever incorrect intuitions (it doesn't always work in itself, human "ability" for rationalization is strong too). I think if you know math (specifically logic, algebra and set theory) less than on graduate level, you should either drop what you're doing, or get to that level.

Comment author: ec429 16 September 2011 02:03:15PM 0 points [-]

Well, I'm studying for an undergraduate degree in mathematics at a good university; the "trying to construct..." is just one of several things I do in my copious free time. Also, I'm spending a much smaller proportion of my time on this project than I was spending on trying to disprove the ITs. So it looks to me as though I'm actually behaving rationally, but maybe that's just how the algorithm looks from the inside.

I think that by "make the ITs become irrelevant" I mean that I'm trying to find a philosophy in which the things that make me want the ITs to be false are no longer represented, because if I have any assumption that implies "And therefore the ITs are false" then that assumption is wrong. But again, is that just me rationalising?

Comment author: pnrjulius 30 June 2012 03:36:32AM *  0 points [-]

I don't think you're just rationalizing. I think this is exactly what the philosophy of mathematics needs in fact.

If we really understand the foundations of mathematics, Godel's theorems should seem to us, if not irrelevant, then perfectly reasonable---perhaps even trivially obvious (or at least trivially obvious in hindsight, which is of course not the same thing), the way that a lot of very well-understood things seem to us.

In my mind I've gotten fairly close to this point, so maybe this will help: By being inside the system, you're always going to get "paradoxes" of self-reference that aren't really catastrophes.

For example, I cannot coherently and honestly assert this statement: "It is raining in Bangladesh but Patrick Julius does not believe that." The statement could in fact be true. It has often been true many times in the past. But I can't assert it, because I am part of it, and part of what it says is that I don't believe it, and hence can't assert it.

Likewise, Godel's theorems are a way of making number theory talk about itself and say things like "Number theory can't prove this statement"; well, of course it can't, because you made the statement about number theory proving things.

Comment author: ec429 14 August 2012 06:39:14PM 0 points [-]

There is a further subtlety here. As I discussed in "Syntacticism", in Gödel's theorems number theory is in fact talking about "number theory", and we apply a metatheory to prove that "number theory is "number theory"", and think we've proved that number theory is "number theory". The answer I came to was to conclude that number theory isn't talking about anything (ie. ascription of semantics to mathematics does not reflect any underlying reality), it's just a set of symbols and rules for manipulating same, and that those symbols and rules together embody a Platonic object. Others may reach different conclusions.

Comment author: raptortech97 20 April 2012 02:43:03AM 0 points [-]

I don't remember ever coming up with a false disproof in math, though I did manage to "solve" perpetual motion machines. I did successfully prove a trivial result in solving quadratic equations in modular arithmetic.

Comment author: evand 04 June 2012 02:47:55PM 1 point [-]

Eliezer, did you realize at the time that what you had done was construct the basic outline of the proof that 2^aleph0 = aleph1? There was an interesting gem hiding in your disproof, had you looked. Reversed stupidity is not intelligence, and all that :)

Comment author: Bundle_Gerbe 21 September 2012 01:20:27PM *  4 points [-]

No, 2^aleph0 = aleph1 is the continuum hypothesis, which is independent of the standard axioms of math, and can't be proven. I think maybe you mean he was close to showing 2^aleph0 is the cardinality of the reals, but I think he knew this already and was trying to use it as the basis of the proof.

Making mistakes like Eliezer's is a big part of learning math though, if we are looking for a silver lining. When you prove something you know is wrong, usually it's because of some misunderstanding or incomplete understanding, and not because of some trivial error. I think the diagonal argument seems like some stupid syntactical trick the first time you hear it, but the concept is far-reaching. Surely Eliezer came away with a bit better understanding of its implications after he straightened himself out.