Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: VKS 03 December 2012 06:50:16AM 17 points [-]

Truth comes out of error more easily than out of confusion.

-Francis Bacon

Comment author: DanArmak 06 September 2012 10:53:51PM *  1 point [-]

Consider the equation x + 1 = x.

(Edited again: this example is wrong, and thanks to Kindly for pointing out why. CronoDAS gives a much better answer.)

Curiously enough, the Peano axioms don't seem to say that S(n) ≠ n. Lo, a finite model of Peano:

X = {0, 1}, where: 0+0 = 0; 0+1 = 1+0 = 1+1 = 1; and the usual equality relation.

In this model, x+1 = x has a solution, namely x = 1. Not a very interesting model, but it serves to illustrate my point below.
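This two-element structure is small enough to check by brute force. A quick sketch (mine, not part of the original comment) that just enumerates the addition table:

```python
# Brute-force check of the two-element model described above:
# X = {0, 1}, with 0+0 = 0 and 0+1 = 1+0 = 1+1 = 1.
X = [0, 1]

def add(a, b):
    """Addition in the model: 1 absorbs everything."""
    return 0 if (a == 0 and b == 0) else 1

# The successor in the model is S(x) = x + 1, and S(1) = 1,
# so x + 1 = x has the solution x = 1.
solutions = [x for x in X if add(x, 1) == x]
print(solutions)  # [1]
```

(As acknowledged above, the example fails anyway: here S(0) = S(1) = 1, so the successor is not injective, violating one of the Peano axioms.)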

sometimes a contradiction does point to a way in which you can revise your assumptions to gain access to "intriguing new ideas", but sometimes it just indicates that your assumptions are wrong.

Contradiction in conclusions always indicates a contradiction in assumptions. And you can always use different assumptions to get different, and perhaps non-contradictory, conclusions. The usefulness and interest of this varies, of course. But proof by contradiction remains valid even if it gives you an idea about other interesting assumptions you could explore.

And that's why I feel it's confusing and counterproductive to use ironic language in one example, and serious proof by contradiction in another, completely analogous example, to indicate that in one case you just said "meh, a contradiction, I was wrong" while in the other you invented a cool new theory with new assumptions. The essence of math is formal language, and formal language doesn't mix well with irony, the best of which is the kind that not all readers notice.

Comment author: VKS 07 September 2012 05:05:09AM 3 points [-]

But that's the entire point of the quote! That mathematicians cannot afford the use of irony!

Comment author: alex_zag_al 05 September 2012 03:45:39AM 5 points [-]

At the Princeton graduate school, the physics department and the math department shared a common lounge, and every day at four o'clock we would have tea. It was a way of relaxing in the afternoon, in addition to imitating an English college. People would sit around playing Go, or discussing theorems. In those days topology was the big thing.

I still remember a guy sitting on the couch, thinking very hard, and another guy standing in front of him saying, "And therefore such-and-such is true."

"Why is that?" the guy on the couch asks.

"It's trivial! It's trivial!" the standing guy says, and he rapidly reels off a series of logical steps: "First you assume thus-and-so, then we have Kerchoff's this-and-that, then there's Waffenstoffer's Theorem, and we substitute this and construct that. Now you put the vector which goes around here and then thus-and-so . . ." The guy on the couch is struggling to understand all this stuff, which goes on at high speed for about fifteen minutes!

Finally the standing guy comes out the other end, and the guy on the couch says, "Yeah, yeah. It's trivial."

We physicists were laughing, trying to figure them out. We decided that "trivial" means "proved." So we joked with the mathematicians: "We have a new theorem -- that mathematicians can only prove trivial theorems, because every theorem that's proved is trivial."

The mathematicians didn't like that theorem, and I teased them about it. I said there are never any surprises -- that the mathematicians only prove things that are obvious.

From "Surely You're Joking, Mr. Feynman!": Adventures of a Curious Character

Comment author: VKS 05 September 2012 07:12:06AM *  5 points [-]

The view, I think, is that anything you can prove immediately off the top of your head is trivial. No matter how much you have to know. So, sometimes you get conditional trivialities, like "this is trivial if you know this and that, but I don't know how to get this and that from somesuch...".

Comment author: VKS 04 September 2012 11:51:02PM *  23 points [-]

After I spoke at the 2005 "Mathematics and Narrative" conference in Mykonos, a suggestion was made that proofs by contradiction are the mathematician's version of irony. I'm not sure I agree with that: when we give a proof by contradiction, we make it very clear that we are discussing a counterfactual, so our words are intended to be taken at face value. But perhaps this is not necessary. Consider the following passage.

There are those who would believe that every polynomial equation with integer coefficients has a rational solution, a view that leads to some intriguing new ideas. For example, take the equation x² - 2 = 0. Let p/q be a rational solution. Then (p/q)² - 2 = 0, from which it follows that p² = 2q². The highest power of 2 that divides p² is obviously an even power, since if 2^k is the highest power of 2 that divides p, then 2^(2k) is the highest power of 2 that divides p². Similarly, the highest power of 2 that divides 2q² is an odd power, since it is greater by 1 than the highest power that divides q². Since p² and 2q² are equal, there must exist a positive integer that is both even and odd. Integers with this remarkable property are quite unlike the integers we are familiar with: as such, they are surely worthy of further study.

I find that it conveys the irrationality of √2 rather forcefully. But could mathematicians afford to use this literary device? How would a reader be able to tell the difference in intent between what I have just written and the following superficially similar passage?

There are those who would believe that every polynomial equation has a solution, a view that leads to some intriguing new ideas. For example, take the equation x² + 1 = 0. Let i be a solution of this equation. Then i² + 1 = 0, from which it follows that i² = -1. We know that i cannot be positive, since then i² would be positive. Similarly, i cannot be negative, since i² would again be positive (because the product of two negative numbers is always positive). And i cannot be 0, since 0² = 0. It follows that we have found a number that is not positive, not negative, and not zero. Numbers with this remarkable property are quite unlike the numbers we are familiar with: as such, they are surely worthy of further study.

-- Timothy Gowers, Vividness in Mathematics and Narrative, in Circles Disturbed: The Interplay of Mathematics and Narrative

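The parity argument in the first passage can also be checked mechanically. A small sketch (mine, not from the essay; `two_adic_valuation` is a made-up helper name):

```python
def two_adic_valuation(n):
    """Return the highest k such that 2**k divides n (for n > 0)."""
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

# The exponent of 2 in p^2 is always even; in 2*q^2 it is always odd.
# Hence p^2 = 2*q^2 has no solution in positive integers -- that is,
# sqrt(2) is irrational.
for p in range(1, 100):
    for q in range(1, 100):
        assert two_adic_valuation(p * p) % 2 == 0
        assert two_adic_valuation(2 * q * q) % 2 == 1
        assert p * p != 2 * q * q
```
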
Comment author: Eliezer_Yudkowsky 17 August 2012 07:09:06AM 6 points [-]

"Given the nature of the multiverse, everything that can possibly happen will happen. This includes works of fiction: anything that can be imagined and written about, will be imagined and written about. If every story is being written, then someone, somewhere in the multiverse is writing your story. To them, you are a fictional character. What that means is that the barrier which separates the dimensions from each other is in fact the Fourth Wall."

-- In Flight Gaiden: Playing with Tropes

(Conversely, many fictions are instantiated somewhere, in some infinitesimal measure. However, I deliberately included logical impossibilities into HPMOR, such as tiling a corridor in pentagons and having the objects in Dumbledore's room change number without any being added or subtracted, to avoid the story being real anywhere.)

Comment author: VKS 21 August 2012 09:06:29PM *  5 points [-]

impossibilities such as ... tiling a corridor in pentagons

Huh. And here I thought that space was just negatively curved in there, with the corridor shaped in such a way that it looks normal (not that hard to imagine), and just used this to tile the floor. Such disappointment...

This was part of a thing, too, in my head, where Harry (or, I guess, the reader) slowly realizes that Hogwarts, rather than having no geometry, has a highly local geometry. I was even starting to look for that as a thematic thing, perhaps an echo of some moral lesson, somehow.

And this isn't even the sort of thing you can write fanfics about. :¬(

Comment author: Incorrect 02 August 2012 11:13:29PM 27 points [-]

It is absurd to divide people into good and bad. People are either charming or tedious.

-- Oscar Wilde

Comment author: VKS 06 August 2012 02:57:48AM *  3 points [-]

I don't know that you can really classify people as X or ¬X. I mean, have you not seen individuals be X in certain situations and ¬X in other situations?

&c.

Comment author: RichardKennaway 03 August 2012 12:26:11PM 4 points [-]

I argue that my brain right now contains a lossless copy of itself and itself two words ago!

I'd argue that your brain doesn't even contain a lossless copy of itself. It is a lossless copy of itself, but your knowledge of yourself is limited. So I think that Nick Szabo's point about the limits of being able to model other people applies just as strongly to modelling oneself. I don't, and cannot, know all about myself -- past, current, or future, and that must have substantial implications about something or other that this lunch hour is too small to contain.

How much knowledge of itself can an artificial system have? There is probably some interesting mathematics to be done -- for example, it is possible to write a program that prints out an exact copy of itself (without having access to the file that contains it), the proof of Gödel's theorem involves constructing a proposition that talks about itself, and TDT depends on agents being able to reason about their own and other agents' source codes. Are there mathematical limits to this?
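The self-printing program mentioned here is a quine. A minimal Python sketch:

```python
# A quine: running this two-line program prints its own source code
# exactly, without reading its own file.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The trick is the same diagonal construction that powers Gödel's self-referential proposition: the program contains a description of itself plus a fixed rule for expanding that description.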

Comment author: VKS 03 August 2012 10:27:05PM 0 points [-]

I never meant to say that I could give you an exact description of my own brain and itself ε ago, just that you could deduce one from looking at mine.

Comment author: maia 03 August 2012 07:41:58PM 4 points [-]

a lossless copy of itself and itself two words ago

But our memories discard huge amounts of information all the time. Surely there's been at least a little degradation in the space of two words, or we'd never forget anything.

Comment author: VKS 03 August 2012 10:15:17PM *  0 points [-]

Certainly. I am suggesting that over sufficiently short timescales, though, you can deduce the previous structure from the current one. Maybe I should have said "epsilon" instead of "two words".

Surely there's been at least a little degradation in the space of two words, or we'd never forget anything.

Why would you expect the degradation to be completely uniform? It seems more reasonable to suspect that, given a sufficiently small timescale, the brain will sometimes be forgetting things and sometimes not, in a way that probably isn't synchronized with its learning of new things.

So, depending on your choice of two words, sometimes the brain would take marginally more bits to describe and sometimes marginally fewer.

Actually, so long as the brain can be considered as operating independently from the outside world (which, given an appropriately chosen small interval of time, makes some amount of sense), a complete description at time t will imply a complete description at time t + δ. The information required to describe the first brain therefore describes the second one too.

So I've made another error: I should have said that my brain contains a lossless copy of itself and itself two words later. (where "two words" = "epsilon")
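The underlying argument is just determinism: if the update rule is fixed and known, a full description of the state at time t implicitly describes the state at t + δ, with no extra bits needed. A toy sketch (mine, with a made-up update rule):

```python
# Toy deterministic system: the state at time t fully determines the
# state at time t+1, so a description of state_t is also, implicitly,
# a description of state_t_plus_1.
def step(state):
    """An arbitrary, fixed, deterministic update rule."""
    return (5 * state + 3) % 16

state_t = 7
state_t_plus_1 = step(state_t)

# Anyone who knows `step` can reconstruct the later state from the
# earlier description alone:
assert step(state_t) == state_t_plus_1
```
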

Comment author: [deleted] 03 August 2012 07:29:00AM 1 point [-]

Still, I don't think you could compress the content of 1000 brains into one. (And I'm not sure about two brains, either. Maybe the brains of two six-year-olds into that of a 25-year-old.)

In response to comment by [deleted] on Rationality Quotes August 2012
Comment author: VKS 03 August 2012 09:46:47AM 0 points [-]

I argue that my brain right now contains a lossless copy of itself and itself two words ago!

Getting 1000 brains in here would take some creativity, but I'm sure I can figure something out...

But this is all rather facetious. Breaking the quote's point would require me to be able to compute the (legitimate) results of the computations of an arbitrary number of arbitrarily different brains, at the same speed as them.

Which I can't.

For now.

Comment author: Eugine_Nier 02 August 2012 11:42:23PM 5 points [-]

Do you mean lossy or lossless compression? If you mean lossy compression then that is precisely Szabo's point.

On the other hand, if you mean lossless, then if you had some way to losslessly compress a brain, this would only work if you were the only one with this compression scheme, since otherwise other people would apply it to their own brains and use the freed space to store more information.

Comment author: VKS 02 August 2012 11:51:41PM 8 points [-]

You'll probably have more success losslessly compressing two brains than losslessly compressing one.
