humpolec comments on Open Thread: May 2010, Part 2 - Less Wrong
Let's suppose the Church-Turing thesis is true.
Are all mathematical problems solvable?
Are they all solvable by humans?
If there is a proof* for every true theorem, then we need only enumerate all possible texts and look for one that proves - or disproves - say, Goldbach's conjecture. This procedure will always halt.
(* Proof not in the sense of "formal proof in a specific system", but "a text understandable by a human as a proof".)
But this can't possibly be right - if the human mind that looks at the proofs is Turing-computable, then we've just solved the Halting Problem - after all, we can pose the halting of any Turing machine as a mathematical problem.
So what does that mean?
You can also extend the question to any human-made AIs/posthuman minds, but this doesn't help much - if the one looking at proofs can reliably self-improve, then the Halting Problem would still be solved.
EDIT: A longer explanation of the problem, by a friend.
Picture an enormous polynomial f(x, y, ...) with integer coefficients: something like 3x^2 - 6y + 5 but bigger. Now, if the Diophantine equation f(x, y, ...) = 0 has a solution then this can easily be proved - you just have to plug in the numbers and calculate the result. (Even if you're not told the numbers in advance, you can iterate over all possible arguments and still prove the result in a finite time.)
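The "iterate over all possible arguments" half really is mechanical. Here is a minimal Python sketch of that search - expanding boxes of integer tuples, so every tuple is eventually reached (the function names and the `bound` cutoff are mine, for illustration; the true procedure would run forever on an unsolvable equation):

```python
from itertools import product

def find_solution(f, nvars, bound=50):
    """Search for an integer solution of f(*args) == 0 by expanding the
    search box: try all tuples whose largest |coordinate| equals b,
    for b = 0, 1, 2, ... This halts iff a solution lies within `bound`
    (a finite stand-in for 'search forever')."""
    for b in range(bound + 1):
        for args in product(range(-b, b + 1), repeat=nvars):
            # Only check tuples on the boundary of the current box,
            # so each tuple is tested exactly once.
            if max((abs(a) for a in args), default=0) == b and f(*args) == 0:
                return args
    return None  # nothing found in the box; in general we could never be sure
```

For example, `find_solution(lambda x, y: x*x + y*y - 25, 2)` finds a solution of x² + y² = 25, while for x² + 1 = 0 the search just exhausts the box, which is exactly the asymmetry the parent comment describes.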
But now suppose that this particular f doesn't have any solutions. (Think about whether you want to deny that the previous sentence is meaningful - personally I think it is).
Can we necessarily prove it doesn't have any solutions? Well, there's no algorithm that can correctly decide whether f has a solution for all Diophantine equations f. (See "Hilbert's Tenth Problem".) So certainly there exists an f, without any solutions, such that "f has no solutions" is not a theorem of (say) ZFC set theory. (Because for any formal axiomatic system, one can write down an algorithm that will enumerate all of its theorems.)
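The parenthetical claim - any formal system's theorems can be enumerated by an algorithm - is easy to illustrate on a toy system. Below is a breadth-first enumerator for Hofstadter's MIU system (from GEB; a toy, not ZFC, but the principle is the same - apply each inference rule to every theorem found so far; the `limit` and length cap are my additions to keep the sketch finite):

```python
from collections import deque

def miu_successors(s):
    """Apply the four MIU inference rules (Hofstadter, GEB) to string s."""
    out = set()
    if s.endswith("I"):            # Rule 1: xI -> xIU
        out.add(s + "U")
    if s.startswith("M"):          # Rule 2: Mx -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):    # Rule 3: replace III with U
        if s[i:i+3] == "III":
            out.add(s[:i] + "U" + s[i+3:])
    for i in range(len(s) - 1):    # Rule 4: delete UU
        if s[i:i+2] == "UU":
            out.add(s[:i] + s[i+2:])
    return out

def enumerate_theorems(limit=2000):
    """Breadth-first enumeration of MIU theorems from the axiom 'MI'."""
    theorems, queue = {"MI"}, deque(["MI"])
    while queue and len(theorems) < limit:
        for t in miu_successors(queue.popleft()):
            if t not in theorems and len(t) <= 12:  # length cap keeps this finite
                theorems.add(t)
                queue.append(t)
    return theorems
```

The enumerator will list "MIU" almost immediately, but "MU" never appears - and, famously, no amount of waiting tells you whether it's still coming or simply isn't a theorem. That is the semi-decidability the parent comment is pointing at.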
Perhaps, like Roger Penrose, you think that human mathematicians have some magical non-algorithmic 'truth-seeing' capability. Unfortunately, human thought being non-algorithmic would require that physics itself be uncomputable i.e. an accurate computer simulation of a brain solving a mathematical problem would be impossible even in principle. Otherwise, you must conclude that some theorems of the form "this Diophantine equation has no solutions" are not humanly provable.
I think that Eliezer's post, Complexity and Intelligence, is really germane to your query.
Here's a thought experiment, just for fun:
Let's say, for simplicity's sake, that your mind (and environment) is currently being run on some Turing machine T, which had initial state S. What if you considered the sentence G, which is a Gödel-encoded statement that "if you run T on S, it will never contain an instance of humpolec rationally concluding that G is a theorem"? (Of course, specifying that predicate would be a beastly problem, but in theory it's a finite mathematical specification.)
You would therefore be actually unable to rationally conclude that G is a theorem, and of course it would thereby be a true, finitely specifiable mathematical statement.
It's up to you, of course, which bullets you choose to bite in response to this.
You seem to be somewhat confused about the basic notions of computability and Gödel's incompleteness results and their mutual connection. Besides the replies you've received in this thread, I'd recommend that you read through this lecture by Scott Aaronson, which is, out of anything I've seen so far, the clearest and most accessible brief exposition of these issues that is still fully accurate and free of nonsense:
http://www.scottaaronson.com/democritus/lec3.html
Nope. Not if physics is computable.
Nope. Not if human minds are computable.
It means exactly that your Turing machine enumerating all possible texts may never halt. What does it mean in terms of the validity of the theorem? Nothing. The truth value of that theorem may be forever inaccessible to us without appeal to a more powerful axiomatic system or without access to a hypercomputer.
I also believe there are true things about the material universe which people are intrinsically unable to comprehend-- aspects so complex that they can't be broken down into few or small enough chunks for people to fit it into their minds.
This isn't the same thing as chaos theory-- I'm suggesting that there are aspects of the universe which are as explicable as Newtonian mechanics-- except that we, even with our best tools and with improved brains, won't be able to understand them.
This is obviously unprovable (and I don't think it can be proved that any particular thing is unmanageably complex*), but considering how much bigger the universe is than human brains, I think it's the way to bet.
*Ever since it was proven that arbitrary digits of pi can be computed (afaik, only in binary) without computing the preceding digits, I don't think I can trust my intuition about what tasks are possible.
Not just in binary.
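For reference, the hexadecimal (hence binary) case is the Bailey-Borwein-Plouffe formula, pi = sum over k of 16^-k * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6)). A standard digit-extraction sketch in Python (reliable only while floating-point error stays small, i.e. for modest positions):

```python
def _series(j, n):
    """Fractional part of 16^n * sum_k 1/(16^k * (8k+j)),
    using modular exponentiation so no big numbers are needed."""
    s = 0.0
    for k in range(n + 1):
        s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
    k = n + 1
    while True:                       # tail: terms with k > n are already < 1
        term = 16.0 ** (n - k) / (8 * k + j)
        if term < 1e-17:
            break
        s = (s + term) % 1.0
        k += 1
    return s

def pi_hex_digits(n, count=8):
    """Hex digits of pi starting at position n+1 after the point,
    computed WITHOUT computing the earlier digits (BBP extraction)."""
    x = (4 * _series(1, n) - 2 * _series(4, n)
         - _series(5, n) - _series(6, n)) % 1.0
    digits = ""
    for _ in range(count):
        x *= 16
        digits += "0123456789abcdef"[int(x)]
        x -= int(x)
    return digits
```

Since pi = 3.243f6a88... in hex, `pi_hex_digits(0)` reads off those digits directly, starting wherever you like.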
Is that really a 'physical' aspect, or a mathematical one? Newtonian mechanics can be (I think) derived from lower level principles.
So do you mean something that is a consequence of possible 'theory of everything', or a part of it?
I'm not dead certain whether "physical" and "mathematical" can be completely disentangled. I'm assuming that gravity following an inverse square law is just a fact which couldn't be deduced from first principles.
I'm not sure what "theory of everything" covers. I thought it represented the hope that a fundamental general theory would be simple enough that at least a few people could understand it.
It may actually be derivable anthropically: exponents other than 2 or 1 prohibit stable orbits, and an exponent of 1, as Zack says, implies 2-dimensional space, which might be too simple for observers.
You can deduce it from the fact that that space is three-dimensional (consider an illustrative diagram), but why space should be three-dimensional, I can't say.
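The "illustrative diagram" compresses into one line of flux bookkeeping, taking "field lines from a point source are conserved" as the assumption:

```latex
% Field strength falls off as the inverse of the sphere's surface area:
F(r)\,A(r) = \text{const}, \qquad A(r) = 4\pi r^{2} \ \text{in three dimensions}
\;\Longrightarrow\; F(r) \propto \frac{1}{r^{2}}.
% In n spatial dimensions, A(r) \propto r^{n-1}, so F(r) \propto r^{-(n-1)};
% a 1/r law corresponds to n = 2, matching the remark above about 2D space.
```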
That's a plausible argument. A priori, one could have a three-dimensional world with some other inverse law, and it would be mathematically consistent. It would just be weird (and would rule out a lot of simple causation mechanisms for the force).
Well, we do inhabit a three-dimensional world in which the inverse-square law holds only approximately, and when a more accurate theory was arrived upon, it turned out to be weird and anything but simple.
Interestingly, when the perihelion precession of Mercury turned out be an unsolvable problem for Newton's theory, there were serious proposals to reconsider whether the exponent in Newton's law might perhaps be not exactly two, but some other close number:
Of course, in the sort of space that general relativity deals with, our Euclidean intuitive concept of "distance" completely breaks down, and r itself is no longer an automatically clear concept. There are actually several different general-relativistic definitions of "spatial distance" that all make some practical sense and correspond to our intuitive concept in the classical limit, but yield completely different numbers in situations where Euclidean/Newtonian approximations no longer hold.
Also, I don't know if there's any a priori reason for gravity.
A theory of everything as I see it (and apparently Wikipedia agrees) would allow us (in principle - given full information and enough resources) to predict every outcome. So every other aspect of the physical universe would be (again, in principle) derivable from it.
I think I'm saying that there will be parts of a theory of everything which just won't compress small enough to fit into human minds, not just that the consequences of a TOE will be too hard to compute.
Do you think a theory of everything is possible?
Parts that won't compress? Almost certainly: the expansions of small parts of a system can have much higher Kolmogorov complexity than the entire theory of everything.
The Tegmark IV multiverse is so big that a human brain can't comprehend nearly any of it, but the theory as a whole can be written with four words: "All mathematical structures exist". In terms of Kolmogorov complexity, it doesn't get much simpler than those four words.
For anyone reading this who hasn't read any of Tegmark's writing, you should. http://space.mit.edu/home/tegmark/crazy.html Tegmark is one of the best popular science writers out there, so the popular versions he has posted aren't dumbed down; they are just missing most of the math.
Tegmark predicts that in 50 years you will be able to buy a t-shirt with the theory of everything printed on it.
To be fair, every one of those words is hiding a substantial amount of complexity. Not as much hidden complexity as "A wizard did it" (even shorter!), but still.
(I do still find the Level IV Multiverse plausible, and it is probably the most parsimonious explanation of why the universe happens to exist; I only mean to say that to convey a real understanding of it still takes a bit more than four words.)
Actually, I'm quite unclear about what the statement "All mathematical structures exist" could mean, so I have a hard time evaluating its Kolmogorov complexity. I mean, what does it mean to say that a mathematical structure exists, over and above the assertion that the mathematical structure was, in some sense, available for its existence to be considered in the first place?
ETA: When I try to think about how I would fully flesh out the hypothesis that "All mathematical structures exist", all I can imagine is that you would have the source code for a program that recursively generates all mathematical structures, together with the source code of a second program that applies the tag "exists" to all the outputs of the first program.
Two immediate problems:
(1) To say that we can recursively generate all mathematical structures is to say that the collection of all mathematical structures is denumerable. Maintaining this position runs into complications, to say the least.
(2) More to the point that I was making above, nothing significant really follows from applying the tag "exists" to things. You would have functionally the same overall program if you applied the tag "is blue" to all the outputs of the first program instead. You aren't really saying anything just by applying arbitrary tags to things. But what else are you going to do?
What are the Tegmark multiverses relevant to? Why should I try to understand them?
Really? In which parallel universe? Every one? This one?
This one.
Don't we live in a multiverse? Doesn't our Universe split in two after every quantum event?
How, then, can Tegmark & Co. predict anything for the next 50 years? Almost anything will certainly happen - somewhere in the Multiverse. And so will almost everything opposite, only on the other side of the Multiverse.
According to Tegmark, at least.
Now he predicts a T-shirt in 50 years' time! Isn't that a little weird?
I think a relatively simple theory of everything is possible. This is, however, not based on anything solid - I'm a Math/CS student and my knowledge of physics does not (yet!) exceed high-school level.
Alas, both of those are correct.
Read about Gödel's Incompleteness Theorem, preferably from Gödel, Escher, Bach by Douglas Hofstadter. As for the specific example of Goldbach's conjecture, I'd bet on it being provable (or if it is false, the procedure would prove that by finding a counterexample), but yes, there are true facts of number theory that cannot be proven.
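The "finding a counterexample" half really is mechanical. A brute-force sketch (the helper names and the finite `limit` are mine; the genuine procedure would scan forever):

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_witness(n):
    """Return a prime pair (p, n - p) summing to even n > 2, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

def first_counterexample(limit=10_000):
    """The semi-decision procedure: scan even numbers, halt on a failure."""
    for n in range(4, limit + 1, 2):
        if goldbach_witness(n) is None:
            return n
    return None
```

If Goldbach's conjecture is false, this scan eventually halts with the counterexample; if it's true, the scan never halts, and no amount of computation of this kind ever tells you so.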
Next, if I remember correctly, theorem-proving programs have already produced correct proofs that are easily machine-verifiable but intractably long and complicated and apparently meaningless to humans.
I read GEB. Doesn't Gödel's theorem talk about proofs in specific formal systems?
I consider this a question of scale. Besides, the theorem-proving program is written by humans and humans understand (and agree with) its correctness, so in some sense humans understand the correct proofs.
It applies to any formal system capable of proving theorems of number theory.
But then what do you mean by "possible to follow by a human"?
Right. So if human reasoning follows some specified formal system, they can't prove it. But does it really follow one?
We can't, for example, point to some Turing machine and say "It halts because of (...), but I can't prove it" - because in doing so we're already providing some sort of reasoning.
Maybe "it's possible for a human, given enough time and resources, to verify validity of such proof".
Yes and no. It is likely that the brain, as a physical system, can be modeled by a formal system, but "the human brain is isomorphic to a formal system" does not imply "a human's knowledge of some fact is isomorphic to a formal proof". What human brains do (and, most likely, what an advanced AI would do) is approximate empirical reasoning, i.e. Bayesian reasoning, even in its acquisition of knowledge about mathematical truths. If you have P(X) = 1 then you have X = true, but you can't get to P(X) = 1 through empirical reasoning, including by looking at a proof on a sheet of paper and thinking that it looks right. Even if you check it really really carefully. (All reasoning must have some empirical component.) Most likely, there is no structure in your brain that is isomorphic to a proof that 1 + 1 = 2, but you still know and use that fact.
So we (and AIs) can use intelligent reasoning about formal systems (not reasoning that looks like formal deduction from the inside) to come to very high or very low probability estimates for certain formally undecidable statements, as this does not need to be isomorphic to any impossible proofs in any actual formal system. This just doesn't count as "solving the halting problem" (any more than Gödel's ability to identify certain unprovable statements as true in the first place refutes his own theorem), because a solution to the halting problem must be at the level of formal proof, not of empirical reasoning; the latter is necessarily imprecise and probabilistic. Unless you think that a human "given enough time and resources" could literally always get an answer and always be right, a human cannot be a true halting oracle, even if they can correctly assign a very high or very low probability to some formally undecidable statements.
Well written— maybe this deserves a full post, even granted that the posts you linked are very near in concept-space.
Perhaps. But would it be controversial or novel enough to warrant one? I'd think that most people here 1) already don't believe that the human mind is more powerful than a universal Turing machine or a formal system, and 2) could correctly refute this type of argument, if they thought about it. Am I wrong about either of those (probably #2 if anything)? Or, perhaps, have sufficiently few people thought about it that bringing it up as a thought exercise (presenting the argument and encouraging people to evaluate it for themselves before looking at anyone else's take) would be worthwhile, even if it doesn't generally result in people changing their minds about anything?
It would be to some extent redundant with the posts you linked, but the specific point about the difference between human reasoning and formal reasoning is a new one to this blog. I, too, would be interested in reading it.
You're probably right about both, but I would still enjoy reading such a post.
I think it could turn out really well if written with the relatively new lurkers in mind, and it does include a new idea that takes a few paragraphs to spell out well. That says "top-level" to me.
Comments:
(1) Empirical vs Non-empirical is, I think, a bit of a red herring because insofar as empirical data (e.g. the output of a computer program) bears on mathematical questions, what we glean from it could all, in principle, have been deduced 'a priori' (i.e. entirely in the thinker's mind, without any sensory engagement with the world.)
(2) You ought to read about Chaitin's constant 'Omega', the 'halting probability', which is a number between 0 and 1.
I think we should be able to prove something along these lines: Assume that there is a constant K such that your "mental state" does not contain more than K bits of information (this seems horribly vague, but if we assume that the mind's information is contained in the body's information then we just need to assume that your body never requires more than K bits to 'write down').
Then it is impossible for you to 'compress' the binary expansion of Omega by more than K + L bits, for some constant L (the same L for all possible intelligent beings.)
This puts some very severe limits on how closely your 'subjective probabilities' for the bits of Omega can approach the real thing. For instance, either there must be only finitely many bits b where your subjective probability that b = 0 differs from 1/2, or else, if you guess something other than 1/2 infinitely many times, you must 'guess wrongly' exactly 1/2 of the time (with the pattern of correct and incorrect guesses being itself totally random).
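For concreteness, Chaitin's constant for a prefix-free universal machine U is defined as (standard definition, not something from the parent comment):

```latex
% Omega is the probability that a uniformly random self-delimiting
% program p halts when run on the universal machine U:
\Omega_U \;=\; \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}.
```

Knowing the first n bits of Omega is enough to decide halting for every program of length at most n, which is why its bits are algorithmically incompressible in the sense used above.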
Basically, it sounds like you're saying: "If we're prepared to let go of the demand to have strict, formal proofs, we can still acquire empirical evidence, even very convincing evidence, about the truth or falsity of mathematical statements." This may be true in some cases, but there are others (like the bits of Omega) where we find mathematical facts (expressible as propositions of number theory) that are completely inaccessible by any means. (And in some way that I'm not quite sure yet how to express, I suspect that the 'gap' between the limits of 'formal proof' and 'empirical reasoning' is insignificant compared to the vast 'terra incognita' that lies beyond both.)
I'll have to think some more about it, but this looks like a correct answer. Thank you.
I myself will have to recheck this in the morning, as it's 4:30 AM here and I am suspicious of philosophical reasoning I do while tired, but I'll probably still agree with it tomorrow since I mostly copied that (with a bit of elaboration) from something I had already written elsewhere. :)
One thing I haven't elaborated on here (and it's probably more hand-waving/philosophy than mathematics):
If the Church-Turing thesis is true, there is no way for a human to solve every mathematical problem. However, does it have to follow that not every theorem has a proof?
What if every true theorem has a proof, not necessarily understandable to humans, yet somehow sound? That is, there exists a (Turing-computable) mind that can understand/verify this proof.
(Of course there is no one 'universal mind' that would understand all proofs, or this would obviously fail. And for the same reason there can be no procedure of finding such a mind/verifying one is right.)
Does the idea of not-universally-comprehensible proofs make sense? Or does it collapse in some way?