Comment author: CCC 05 September 2012 06:55:11AM -5 points [-]

In the first case, start with p such that the highest power of 2 that divides p is an integer power of 2 (2^k for some integer k); then the highest power of 2 that divides p² is 2^2k; then the highest power of 2 that divides 2q² is also 2^2k; then the highest power of 2 that divides q² is 2^(2k-1); therefore q must be a multiple of 2^(k-0.5), a noninteger power of 2.

This implies that there is a number 2^(0.5). It makes no claims as to whether or not this number is rational, or an integer; it merely claims that such a number must exist. (Consider: if I had started instead with the equation x²-4=0, I would have ended up showing that a number of the form 4^(0.5) must exist - and that number is rational, indeed an integer.)

Now, I think I can prove that an integer q which is a multiple of 2^(k-0.5) but which is not a multiple of 2^k, for integer k, does not exist; but I can only complete that proof by knowing in advance that 2^0.5 is irrational, so I can't use it to prove the irrationality of 2^0.5. I can easily prove that a rational number of the form 4^(k-0.5) for integer k does exist; indeed, an infinite number of such numbers exist (examples include 2, 8, 32).
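The side claim here - that integers of the form 4^(k-0.5) exist for every positive integer k - can be checked with a short sketch (integer arithmetic only, to avoid floating-point doubt; an illustration, not a proof):

```python
# Check: squaring 2**(2k-1) gives 4**(2k-1), i.e. 2**(2k-1) = 4**(k - 0.5),
# so integers of the form 4**(k - 0.5) exist for every positive integer k.
for k in range(1, 10):
    n = 2 ** (2 * k - 1)          # candidate value of 4**(k - 0.5)
    assert n * n == 4 ** (2 * k - 1)

print([2 ** (2 * k - 1) for k in range(1, 4)])  # [2, 8, 32]
```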

No matter how forcefully that first passage conveys the irrationality of √2, it does not prove it.

Comment author: VKS 05 September 2012 07:28:16AM *  1 point [-]

The paragraph, of course, was talking about integer powers of 2 that divide p. As in, the largest number 2^k such that 2^k divides p and k is an integer.

The largest real power of 2 that divides p is, of course, p itself, as 2^log_2(p) = p.

Comment author: alex_zag_al 05 September 2012 03:45:39AM 5 points [-]

At the Princeton graduate school, the physics department and the math department shared a common lounge, and every day at four o'clock we would have tea. It was a way of relaxing in the afternoon, in addition to imitating an English college. People would sit around playing Go, or discussing theorems. In those days topology was the big thing.

I still remember a guy sitting on the couch, thinking very hard, and another guy standing in front of him saying, "And therefore such-and-such is true."

"Why is that?" the guy on the couch asks.

"It's trivial! It's trivial!" the standing guy says, and he rapidly reels off a series of logical steps: "First you assume thus-and-so, then we have Kerchoff's this-and-that, then there's Waffenstoffer's Theorem, and we substitute this and construct that. Now you put the vector which goes around here and then thus-and-so . . ." The guy on the couch is struggling to understand all this stuff, which goes on at high speed for about fifteen minutes!

Finally the standing guy comes out the other end, and the guy on the couch says, "Yeah, yeah. It's trivial."

We physicists were laughing, trying to figure them out. We decided that "trivial" means "proved." So we joked with the mathematicians: "We have a new theorem -- that mathematicians can only prove trivial theorems, because every theorem that's proved is trivial."

The mathematicians didn't like that theorem, and I teased them about it. I said there are never any surprises -- that the mathematicians only prove things that are obvious.

From "Surely You're Joking, Mr. Feynman!": Adventures of a Curious Character

Comment author: VKS 05 September 2012 07:12:06AM *  5 points [-]

The view, I think, is that anything you can prove immediately off the top of your head is trivial. No matter how much you have to know. So, sometimes you get conditional trivialities, like "this is trivial if you know this and that, but I don't know how to get this and that from somesuch...".

Comment author: VKS 04 September 2012 11:51:02PM *  23 points [-]

After I spoke at the 2005 "Mathematics and Narrative" conference in Mykonos, a suggestion was made that proofs by contradiction are the mathematician's version of irony. I'm not sure I agree with that: when we give a proof by contradiction, we make it very clear that we are discussing a counterfactual, so our words are intended to be taken at face value. But perhaps this is not necessary. Consider the following passage.

There are those who would believe that every polynomial equation with integer coefficients has a rational solution, a view that leads to some intriguing new ideas. For example, take the equation x² - 2 = 0. Let p/q be a rational solution. Then (p/q)² - 2 = 0, from which it follows that p² = 2q². The highest power of 2 that divides p² is obviously an even power, since if 2^k is the highest power of 2 that divides p, then 2^2k is the highest power of 2 that divides p². Similarly, the highest power of 2 that divides 2q² is an odd power, since it is greater by 1 than the highest power that divides q². Since p² and 2q² are equal, there must exist a positive integer that is both even and odd. Integers with this remarkable property are quite unlike the integers we are familiar with: as such, they are surely worthy of further study.

I find that it conveys the irrationality of √2 rather forcefully. But could mathematicians afford to use this literary device? How would a reader be able to tell the difference in intent between what I have just written and the following superficially similar passage?

There are those who would believe that every polynomial equation has a solution, a view that leads to some intriguing new ideas. For example, take the equation x² + 1 = 0. Let i be a solution of this equation. Then i² + 1 = 0, from which it follows that i² = -1. We know that i cannot be positive, since then i² would be positive. Similarly, i cannot be negative, since i² would again be positive (because the product of two negative numbers is always positive). And i cannot be 0, since 0² = 0. It follows that we have found a number that is not positive, not negative, and not zero. Numbers with this remarkable property are quite unlike the numbers we are familiar with: as such, they are surely worthy of further study.

  • Timothy Gowers, Vividness in Mathematics and Narrative, in Circles Disturbed: The Interplay of Mathematics and Narrative
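The parity argument in Gowers's first passage can be illustrated with a small sketch (a finite check over sample integers, not a proof):

```python
# 2-adic valuation: the exponent of the highest power of 2 dividing n.
def v2(n):
    k = 0
    while n % 2 == 0:
        n //= 2
        k += 1
    return k

# v2(p**2) = 2*v2(p) is always even, while v2(2*q**2) = 2*v2(q) + 1 is
# always odd -- so p**2 = 2*q**2 has no solution in positive integers.
for n in range(1, 200):
    assert v2(n * n) % 2 == 0
    assert v2(2 * n * n) % 2 == 1
```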
Comment author: Will_Newsome 02 September 2012 01:29:20AM -4 points [-]

Only on LessWrong would a statement with that much insight be downvoted because it could be taken to signal something vaguely positive about religion.

Comment author: VKS 04 September 2012 11:09:52AM 6 points [-]

The quote, phrased in a less tortuous way, says that mathematics contains true statements that cannot be proven, and is unique in being able to demonstrate that it does. So far, so good, although the uniqueness part can be debated.

But the quote also states that mathematics therefore contains an element of faith, that is, that there exist statements that have to be assumed to be true. This is not the case.

Mathematics only compels you to believe that certain things follow from certain axioms. That is all. While these axioms sometimes imply that there exist statements whose truth will never be determined, they do not imply that we should then assume that such-and-such a statement is true or false.

That is why it should be downvoted. Because not knowing something doesn't mean having to pretend that you do.

Comment author: roland 04 September 2012 12:40:37AM *  -3 points [-]

Thinking is an act, feeling is a fact. Don't bother comprehending, living is far beyond any comprehension...

-- Clarice Lispector.

Original in Brazilian Portuguese:

Pensar é um ato, sentir é um fato. Não se preocupe em entender, viver ultrapassa qualquer entendimento...

Before you downvote: I think this has a lot to do with rationality, because we tend to get caught up in thinking and in the models of the world we create in our minds; indeed, science is about this. But those models have limitations and are often wrong, as the history of science shows time and again.

EDIT: added the originals. Fixed typo.

Comment author: VKS 04 September 2012 01:44:09AM 2 points [-]

... we tend to get caught up in thinking and in the models of the world we create in our minds; indeed, science is about this. But those models have limitations and are often wrong, as the history of science shows time and again.

Now that you have noticed this, what are you going to do with it?

Comment author: Eliezer_Yudkowsky 17 August 2012 07:09:06AM 6 points [-]

"Given the nature of the multiverse, everything that can possibly happen will happen. This includes works of fiction: anything that can be imagined and written about, will be imagined and written about. If every story is being written, then someone, somewhere in the multiverse is writing your story. To them, you are a fictional character. What that means is that the barrier which separates the dimensions from each other is in fact the Fourth Wall."

-- In Flight Gaiden: Playing with Tropes

(Conversely, many fictions are instantiated somewhere, in some infinitesimal measure. However, I deliberately included logical impossibilities in HPMOR, such as tiling a corridor in pentagons and having the objects in Dumbledore's room change number without any being added or subtracted, to avoid the story being real anywhere.)

Comment author: VKS 21 August 2012 09:06:29PM *  5 points [-]

impossibilities such as ... tiling a corridor in pentagons

Huh. And here I thought that space was just negatively curved in there, with the corridor shaped in such a way that it looks normal (not that hard to imagine), and that this was what allowed pentagons to tile the floor. Such disappointment...

This was part of a thing, too, in my head, where Harry (or, I guess, the reader) slowly realizes that Hogwarts, rather than having no geometry, has a highly local geometry. I was even starting to look for that as a thematic thing, perhaps an echo of some moral lesson, somehow.

And this isn't even the sort of thing you can write fanfics about. :¬(

Comment author: Incorrect 02 August 2012 11:13:29PM 27 points [-]

It is absurd to divide people into good and bad. People are either charming or tedious.

-- Oscar Wilde

Comment author: VKS 06 August 2012 02:57:48AM *  3 points [-]

I don't know that you can really classify people as X or ¬X. I mean, have you not seen individuals be X in certain situations and ¬X in other situations?

&c.

Comment author: RichardKennaway 03 August 2012 12:26:11PM 4 points [-]

I argue that my brain right now contains a lossless copy of itself and itself two words ago!

I'd argue that your brain doesn't even contain a lossless copy of itself. It is a lossless copy of itself, but your knowledge of yourself is limited. So I think that Nick Szabo's point about the limits of being able to model other people applies just as strongly to modelling oneself. I don't, and cannot, know all about myself -- past, current, or future, and that must have substantial implications about something or other that this lunch hour is too small to contain.

How much knowledge of itself can an artificial system have? There is probably some interesting mathematics to be done -- for example, it is possible to write a program that prints out an exact copy of itself (without having access to the file that contains it), the proof of Gödel's theorem involves constructing a proposition that talks about itself, and TDT depends on agents being able to reason about their own and other agents' source codes. Are there mathematical limits to this?
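The self-printing program mentioned above can be sketched in a few lines. This is one standard quine construction among many (the trick is a string that serves as both data and template):

```python
# A program that prints an exact copy of its own source, without
# reading its file: %r inserts the repr of s into s itself, and
# %% escapes to a literal %.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints the two source lines verbatim, so the output can be run again to reproduce itself indefinitely.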

Comment author: VKS 03 August 2012 10:27:05PM 0 points [-]

I never meant to say that I could give you an exact description of my own brain and itself ε ago, just that you could deduce one from looking at mine.

Comment author: maia 03 August 2012 07:41:58PM 4 points [-]

a lossless copy of itself and itself two words ago

But our memories discard huge amounts of information all the time. Surely there's been at least a little degradation in the space of two words, or we'd never forget anything.

Comment author: VKS 03 August 2012 10:15:17PM *  0 points [-]

Certainly. I am suggesting that over sufficiently short timescales, though, you can deduce the previous structure from the current one. Maybe I should have said "epsilon" instead of "two words".

Surely there's been at least a little degradation in the space of two words, or we'd never forget anything.

Why would you expect the degradation to be completely uniform? It seems more reasonable to suspect that, given a sufficiently small timescale, the brain will sometimes be forgetting things and sometimes not, in a way that probably isn't synchronized with its learning of new things.

So, depending on your choice of two words, sometimes the brain would take marginally more bits to describe and sometimes marginally fewer.

Actually, so long as the brain can be considered as operating independently from the outside world (which, given an appropriately chosen small interval of time, makes some amount of sense), a complete description at time t will imply a complete description at time t + δ. The information required to describe the first brain therefore describes the second one too.

So I've made another error: I should have said that my brain contains a lossless copy of itself and itself two words later (where "two words" = "epsilon").

Comment author: [deleted] 03 August 2012 07:29:00AM 1 point [-]

Still, I don't think you could compress the content of 1000 brains into one. (And I'm not sure about two brains, either. Maybe the brains of two six-year-olds into that of a 25-year-old.)

In response to comment by [deleted] on Rationality Quotes August 2012
Comment author: VKS 03 August 2012 09:46:47AM 0 points [-]

I argue that my brain right now contains a lossless copy of itself and itself two words ago!

Getting 1000 brains in here would take some creativity, but I'm sure I can figure something out...

But this is all rather facetious. Breaking the quote's point would require me to be able to compute the (legitimate) results of the computations of an arbitrary number of arbitrarily different brains, at the same speed as them.

Which I can't.

For now.
