Paul Christiano has devised a new fundamental approach to the "Löb Problem" wherein Löb's Theorem seems to pose an obstacle to AIs building successor AIs, or adopting successor versions of their own code, that trust the same amount of mathematics as the original. (I am currently writing up a more thorough description of the question this preliminary technical report is working on answering. For now the main online description is in a quick Summit talk I gave. See also Benja Fallenstein's description of the problem in the course of presenting a different angle of attack. Roughly the problem is that mathematical systems can only prove the soundness of, aka 'trust', weaker mathematical systems. If you try to write out an exact description of how AIs would build their successors or successor versions of their code in the most obvious way, it looks like the mathematical strength of the proof system would tend to be stepped down each time, which is undesirable.)
Paul Christiano's approach is inspired by the idea that whereof one cannot prove or disprove, thereof one must assign probabilities: and that although no mathematical system can contain its own truth predicate, a mathematical system might be able to contain a reflectively consistent probability predicate. In particular, it looks like we can have:
∀a, b: (a < P(φ) < b) ⇒ P(a < P('φ') < b) = 1
∀a, b: P(a ≤ P('φ') ≤ b) > 0 ⇒ a ≤ P(φ) ≤ b
Suppose I present you with the human and probabilistic version of a Gödel sentence, the Whitely sentence "You assign this statement a probability less than 30%." If you disbelieve this statement, it is true. If you believe it, it is false. If you assign 30% probability to it, it is false. If you assign 29% probability to it, it is true.
Paul's approach resolves this problem by restricting your belief about your own probability assignment to within epsilon of 30%, for every epsilon > 0. So Paul's approach replies, "Well, I assign almost exactly 30% probability to that statement - maybe a little more, maybe a little less - in fact I think there's about a 30% chance that I'm a tiny bit under 0.3 probability and a 70% chance that I'm a tiny bit over 0.3 probability." A standard fixed-point theorem then implies that a consistent assignment like this exists. If asked whether the probability is over 0.2999 or under 0.30001, you will reply with a definite yes.
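A toy numeric sketch of the Whitely sentence may make this concrete. The predicate name `calibrated` below is invented here for illustration; it simply asks whether a sharp self-assignment p could match the truth value that p itself induces:

```python
# The Whitely sentence S says: "You assign S a probability less than 30%."
# Under a sharp, self-aware assignment p, S is true iff p < 0.3, so a
# perfectly calibrated agent would need p to equal the truth value p induces.
def calibrated(p):
    truth = 1.0 if p < 0.3 else 0.0  # S is true exactly when p < 0.3
    return p == truth

# No sharp fixed point exists anywhere on a fine grid of assignments:
grid = [i / 1000 for i in range(1001)]
assert not any(calibrated(p) for p in grid)

# Christiano-style resolution: sit at 0.3 with epsilon-uncertainty about
# which side of 0.3 you fall on, while answering definite questions
# like "is P(S) > 0.2999?" or "is P(S) < 0.30001?" with a firm yes.
p = 0.3
assert p > 0.2999 and p < 0.30001
```

The point of the sketch is only that the classical step function has no fixed point, while the epsilon-smeared assignment at 0.3 is consistent with both definite outer questions.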
Followup to: Godel's Completeness and Incompleteness Theorems
"So the question you asked me last time was, 'Why does anyone bother with first-order logic at all, if second-order logic is so much more powerful?'"
Right. If first-order logic can't talk about finiteness, or distinguish the size of the integers from the size of the reals, why even bother?
"The first thing to realize is that first-order theories can still have a lot of power. First-order arithmetic does narrow down the possible models by a lot, even if it doesn't narrow them down to a single model. You can prove things like the existence of an infinite number of primes, because every model of the first-order axioms has an infinite number of primes. First-order arithmetic is never going to prove anything that's wrong about the standard numbers. Anything that's true in all models of first-order arithmetic will also be true in the particular model we call the standard numbers."
Even so, if first-order logic is strictly weaker, why bother? Unless second-order logic is just as incomplete relative to third-order logic, which is weaker than fourth-order logic, which is weaker than omega-order logic -
"No, surprisingly enough - there's tricks for making second-order logic encode any proposition in third-order logic and so on. If there's a collection of third-order axioms that characterizes a model, there's a collection of second-order axioms that characterizes the same model. Once you make the jump to second-order logic, you're done - so far as anyone knows (so far as I know) there's nothing more powerful than second-order logic in terms of which models it can characterize."
Then if there's one spoon which can eat anything, why not just use the spoon?
"Well... this gets into complex issues. There are mathematicians who don't believe there is a spoon when it comes to second-order logic."
Like there are mathematicians who don't believe in infinity?
"Kind of. Look, suppose you couldn't use second-order logic - you belonged to a species that doesn't have second-order logic, or anything like it. Your species doesn't have any native mental intuition you could use to construct the notion of 'all properties'. And then suppose that, after somebody used first-order set theory to prove that first-order arithmetic had many possible models, you stood around shouting that you believed in only one model, what you called the standard model, but you couldn't explain what made this model different from any other model -"
Well... a lot of times, even in math, we make statements that genuinely mean something, but take a while to figure out how to define. I think somebody who talked about 'the numbers' would mean something even before second-order logic was invented.
"But here the hypothesis is that you belong to a species that can't invent second-order logic, or think in second-order logic, or anything like it."
Then I suppose you want me to draw the conclusion that this hypothetical alien is just standing there shouting about standardness, but its words don't mean anything because they have no way to pin down one model as opposed to another one. And I expect this species is also magically forbidden from talking about all possible subsets of a set?
"Yeah. They can't talk about the largest powerset, just like they can't talk about the smallest model of Peano arithmetic."
Then you could arguably deny that shouting about the 'standard' numbers would mean anything, to the members of this particular species. You might as well shout about the 'fleem' numbers, I guess.
"Right. Even if all the members of this species did have a built-in sense that there was a special model of first-order arithmetic that was fleemer than any other model, if that fleem-ness wasn't bound to anything else, it would be meaningless. Just a floating word. Or if all you could do was define fleemness as floobness and floobness as fleemness, you'd have a loop of floating words; and that might give you the impression that each particular word had a meaning, but the loop as a whole wouldn't be connected to reality. That's why it doesn't help to say that the standard model of numbers is the smallest among all possible models of Peano arithmetic, if you can't define 'smallest possible' any more than you can define 'connected chain' or 'finite number of predecessors'."
But second-order logic does seem to have consequences first-order logic doesn't. Like, what about all that Godelian stuff? Doesn't second-order arithmetic semantically imply... its own Godel statement? Because the unique model of second-order arithmetic doesn't contain any number encoding a proof of a contradiction from second-order arithmetic? Wait, now I'm confused.
"No, that's correct. It's not paradoxical, because there's no effective way of finding all the semantic implications of a collection of second-order axioms. There's no analogue of Godel's Completeness Theorem for second-order logic - no syntactic system for deriving all the semantic consequences. Second-order logic is sound, in the sense that anything syntactically provable from a set of premises, is true in any model obeying those premises. But second-order logic isn't complete; there are semantic consequences you can't derive. If you take second-order logic at face value, there's no effectively computable way of deriving all the consequences of what you say you 'believe'... which is a major reason some mathematicians are suspicious of second-order logic. What does it mean to believe something whose consequences you can't derive?"
But second-order logic clearly has some experiential consequences first-order logic doesn't. Suppose I build a Turing machine that looks for proofs of a contradiction from first-order Peano arithmetic. If PA's consistency isn't provable in PA, then by the Completeness Theorem there must exist nonstandard models of PA where this machine halts after finding a proof of a contradiction. So first-order logic doesn't tell me that this machine runs forever - maybe it has nonstandard halting times, i.e., it runs at every standard time N, but halts at a nonstandard time like -2* somewhere along a disconnected chain. Only second-order logic tells me there's no proof of PA's inconsistency and therefore this machine runs forever. Only second-order logic tells me I should expect to see this machine keep running, and not expect - note falsifiability - that the machine ever halts.
"Sure, you just used a second-order theory to derive a consequence you didn't get from a first-order theory. But that's not the same as saying that you can only get that consequence using second-order logic. Maybe another first-order theory would give you the same prediction."
"Well, for one thing, first-order set theory can prove that first-order arithmetic has a model. Zermelo-Fraenkel set theory can prove the existence of a set such that all the first-order Peano axioms are true about that set. It follows within ZF that sound reasoning on first-order arithmetic will never prove a contradiction. And since ZF can prove that the syntax of classical logic is sound -"
What does it mean for set theory to prove that logic is sound!?
"ZF can quote formulas as structured, and encode models as sets, and then represent a finite ZF-formula which says whether or not a set of quoted formulas is true about a model. ZF can then prove that no step of classical logic goes from premises that are true inside a set-model, to premises that are false inside a set-model. In other words, ZF can represent the idea 'formula X is semantically true in model Y' and 'these syntactic rules never produce semantically false statements from semantically true statements'."
Wait, then why can't ZF prove itself consistent? If ZF believes in all the axioms of ZF, and it believes that logic never produces false statements from true statements -
"Ah, but ZF can't prove that there exists any set which is a model of ZF, so it can't prove the ZF-axioms are consistent. ZF shouldn't be able to prove that some set is a model of ZF, because that's not true in all models. Many models of ZF don't contain any individual set well-populated enough for that one set to be a model of ZF all by itself."
I'm kind of surprised in a Godelian sense that ZF can contain sets as large as the universe of ZF. Doesn't any given set have to be smaller than the whole universe?
"Inside any particular model of ZF, every set within that model is smaller than that model. But not all models of ZF are the same size; in fact, models of ZF of every size exist, by the Lowenheim-Skolem theorem. So you can have some models of ZF - some universes in which all the elements collectively obey the ZF-relations - containing individual sets which are larger than other entire models of ZF. A set that large is called a Grothendieck universe and assuming it exists is equivalent to assuming the existence of 'strongly inaccessible cardinals', sizes so large that you provably can't prove inside set theory that anything that large exists."
I agree you've shown that one experiential consequence of second-order arithmetic - namely that a machine looking for proofs of inconsistency from PA, won't be seen to halt - can be derived from first-order set theory. Can you get all the consequences of second-order arithmetic in some particular first-order theory?
"You can't get all the semantic consequences of second-order logic, taken at face value, in any theory or any computable reasoning. What about the halting problem? Taken at face value, it's a semantic consequence of second-order logic that any given Turing machine either halts or doesn't halt -"
Personally I find it rather intuitive to imagine that a Turing machine either halts or doesn't halt. I mean, if I'm walking up to a machine and watching it run, telling me that its halting or not-halting 'isn't entailed by my axioms' strikes me as not describing any actual experience I can have with the machine. Either I'll see it halt eventually, or I'll see it keep running forever.
"My point is that the statements we can derive from the syntax of current second-order logic are limited by that syntax. And by the halting problem, we shouldn't ever expect a computable syntax that gives us all the semantic consequences. There's no possible theory you can actually use to get a correct advance prediction about whether an arbitrary Turing machine halts."
Okay. I agree that no computable reasoning, on second-order logic or anything else, should be able to solve the halting problem. Unless time travel is possible, but even then, you shouldn't be able to solve the expanded halting problem for machines that use time travel.
"Right, so the syntax of second-order logic can't prove everything. And in fact, it turns out that, in terms of what you can prove syntactically using the standard syntax, second-order logic is identical to a many-sorted first-order logic."
"Suppose you have a first-order logic - one that doesn't claim to be able to quantify over all possible predicates - which does allow the universe to contain two different sorts of things. Say, the logic uses lower-case letters for all type-1 objects and upper-case letters for all type-2 objects. Like, '∀x: x = x' is a statement over all type-1 objects, and '∀Y: Y = Y' is a statement over all type-2 objects. But aside from that, you use the same syntax and proof rules as before."
"Now add an element-relation x∈Y, saying that x is an element of Y, and add some first-order axioms for making the type-2 objects behave like collections of type-1 objects, including axioms for making sure that most describable type-2 collections exist - i.e., the collection X containing just x is guaranteed to exist, and so on. What you can prove syntactically in this theory will be identical to what you can prove using the standard syntax of second-order logic - even though the theory doesn't claim that all possible collections of type-1s are type-2s, and the theory will have models where many 'possible' collections are missing from the type-2s."
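The situation described here can be illustrated with a toy two-sorted domain (the construction below is mine, not from the original text): the type-2 sort contains only a few describable collections, so the demanded collections all exist even though most subsets of the type-1 domain are missing.

```python
from itertools import combinations

# Toy two-sorted domain: type-1 objects are the numbers 0..3; type-2
# 'collections' contain only the empty set, the singletons, and the
# whole domain -- far short of the full powerset.
type1 = [0, 1, 2, 3]
type2 = [frozenset()] + [frozenset([x]) for x in type1] + [frozenset(type1)]

# The axiom 'the collection containing just x exists' holds for every x...
assert all(frozenset([x]) in type2 for x in type1)

# ...yet most 'possible' collections are absent from the type-2 sort.
powerset = [frozenset(s) for r in range(len(type1) + 1)
            for s in combinations(type1, r)]
missing = [s for s in powerset if s not in type2]
assert len(type2) == 6 and len(powerset) == 16 and len(missing) == 10
```

A model like this can satisfy first-order comprehension axioms for the collections it does name, while quantifiers over "all collections" silently range over only six of the sixteen subsets.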
Wait, now you're saying that second-order logic is no more powerful than first-order logic?
"I'm saying that the supposed power of second-order logic derives from interpreting it a particular way, and taking on faith that when you quantify over all properties, you're actually talking about all properties."
But then second-order arithmetic is no more powerful than first-order arithmetic in terms of what it can actually prove?
"Second-order arithmetic is way more powerful than first-order arithmetic. But that's because first-order set theory is more powerful than arithmetic, and adding the syntax of second-order logic corresponds to adding axioms with set-theoretic properties. In terms of which consequences can be syntactically proven, second-order arithmetic is more powerful than first-order arithmetic, but weaker than first-order set theory. First-order set theory can prove the existence of a model of second-order arithmetic - ZF can prove there's a collection of numbers and sets of numbers which models a many-sorted logic with syntax corresponding to second-order logic - and so ZF can prove second-order arithmetic consistent."
But first-order logic, including first-order set theory, can't even talk about the standard numbers!
"Right, but first-order set theory can syntactically prove more statements about 'numbers' than second-order arithmetic can prove. And when you combine that with the semantic implications of second-order arithmetic not being computable, and with any second-order logic being syntactically identical to a many-sorted first-order logic, and first-order logic having neat properties like the Completeness Theorem... well, you can see why some mathematicians would want to give up entirely on this whole second-order business."
But if you deny second-order logic you can't even say the word 'finite'. You would have to believe the word 'finite' was a meaningless noise.
"You'd define finiteness relative to whatever first-order model you were working in. Like, a set might be 'finite' only on account of the model not containing any one-to-one mapping from that set onto a smaller subset of itself -"
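The definition just invoked - infinite means admitting a one-to-one map onto a proper subset of itself (Dedekind's definition) - can be checked by brute force for a small set. The function name `dedekind_infinite` is invented here for illustration:

```python
from itertools import product

# A set is Dedekind-infinite iff some one-to-one self-map lands on a
# proper subset. For a genuinely finite set, the pigeonhole principle
# forces every injective self-map to be onto, so no witness exists --
# which is the sense in which a model can call a set 'finite' merely
# because the witnessing maps are absent from the model.
def dedekind_infinite(elems):
    elems = list(elems)
    for image in product(elems, repeat=len(elems)):  # every self-map
        injective = len(set(image)) == len(elems)
        proper = set(image) != set(elems)
        if injective and proper:
            return True
    return False

assert not dedekind_infinite(range(4))  # finite sets have no witness
```

For an actually infinite set like the naturals, n ↦ n + 1 is such a witness; the dispute in the dialogue is over models where that mapping is "mysteriously missing."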
But that set wouldn't actually be finite. There wouldn't actually be, like, only a billion objects in there. It's just that all the mappings which could prove the set was infinite would be mysteriously missing.
"According to some other model, maybe. But since there is no one true model -"
How is this not crazy talk along the lines of 'there is no one true reality'? Are you saying there's no really smallest set of numbers closed under succession, without all the extra infinite chains? Doesn't talking about how these theories have multiple possible models, imply that those possible models are logical thingies and one of them actually does contain the largest powerset and the smallest integers?
"The mathematicians who deny second-order logic would see that reasoning as invoking an implicit background universe of set theory. Everything you're saying makes sense relative to some particular model of set theory, which would contain possible models of Peano arithmetic as sets, and could look over those sets and pick out the smallest in that model. Similarly, that set theory could look over a proposed model for a many-sorted logic, and say whether there were any subsets within the larger universe which were missing from the many-sorted model. Basically, your brain is insisting that it lives inside some particular model of set theory. And then, from that standpoint, you could look over some other set theory and see how it was missing subsets that your theory had."
Argh! No, damn it, I live in the set theory that really does have all the subsets, with no mysteriously missing subsets or mysterious extra numbers, or denumerable collections of all possible reals that could like totally map onto the integers if the mapping that did it hadn't gone missing in the Australian outback -
"But everybody says that."
Screw set theory. I live in the physical universe where when you run a Turing machine, and keep watching forever in the physical universe, you never experience a time where that Turing machine outputs a proof of the inconsistency of Peano Arithmetic. Furthermore, I live in a universe where space is actually composed of a real field and space is actually infinitely divisible and contains all the points between A and B, rather than space only containing a denumerable number of points whose existence can be proven from the first-order axiomatization of the real numbers. So to talk about physics - forget about mathematics - I've got to use second-order logic.
"Ah. You know, that particular response is not one I have seen in the previous literature."
Yes, well, I'm not a pure mathematician. When I ask whether I want an Artificial Intelligence to think in second-order logic or first-order logic, I wonder how that affects what the AI does in the actual physical universe. Here in the actual physical universe where times are followed by successor times, I strongly suspect that we should only expect to experience standard times, and not experience any nonstandard times. I think time is connected, and global connectedness is a property I can only talk about using second-order logic. I think that every particular time is finite, and yet time has no upper bound - that there are all finite times, but only finite times - and that's a property I can only characterize using second-order logic.
"But if you can't ever tell the difference between standard and nonstandard times? I mean, local properties of time can be described using first-order logic, and you can't directly see global properties like 'connectedness' -"
But I can tell the difference. There are only nonstandard times where a proof-checking machine, running forever, outputs a proof of inconsistency from the Peano axioms. So I don't expect to experience seeing a machine do that, since I expect to experience only standard times.
"Set theory can also prove PA consistent. If you use set theory to define time, you similarly won't expect to see a time where PA is proven inconsistent - those nonstandard integers don't exist in any model of ZF."
Why should I anticipate that my physical universe is restricted to having only the nonstandard times allowed by a more powerful set theory, instead of nonstandard times allowed by first-order arithmetic? If I then talk about a nonstandard time where a proof-enumerating machine proves ZF inconsistent, will you say that only nonstandard times allowed by some still more powerful theory can exist? I think it's clear that the way you're deciding which experimental outcomes you'll have to excuse, is by secretly assuming that only standard times exist regardless of which theory is required to narrow it down.
"Ah... hm. Doesn't physics say this universe is going to run out of negentropy before you can do an infinite amount of computation? Maybe there's only a bounded amount of time, like it stops before googolplex or something. That can be characterized by first-order theories."
We don't know that for certain, and I wouldn't want to build an AI that just assumed lifespans had to be finite, in case it was wrong. Besides, should I use a different logic than if I'd found myself in Conway's Game of Life, or something else really infinite? Wouldn't the same sort of creatures evolve in that universe, having the same sort of math debates?
"Perhaps no universe like that can exist; perhaps only finite universes can exist, because only finite universes can be uniquely characterized by first-order logic."
You just used the word 'finite'! Furthermore, taken at face value, our own universe doesn't look like it has a finite collection of entities related by first-order logical rules. Space and time both look like infinite collections of points - continuous collections, which is a second-order concept - and then to characterize the size of that infinity we'd need second-order logic. I mean, by the Lowenheim-Skolem theorem, there aren't just denumerable models of first-order axiomatizations of the reals, there are also models of unimaginably large cardinality which obey the same premises, and that's a possibility straight out of H. P. Lovecraft. Especially if there are any things hiding in the invisible cracks of space.
"How could you tell if there were inaccessible cardinal quantities of points hidden inside a straight line? And anything that locally looks continuous each time you try to split it at a describable point, can be axiomatized by a first-order schema for continuity."
That brings up another point: Who'd really believe that the reason Peano arithmetic works on physical times, is because that whole infinite axiom schema of induction, containing for every Φ a separate rule saying...
(Φ(0) ∧ (∀x: Φ(x) → Φ(Sx))) → (∀n: Φ(n))
...was used to specify our universe? How improbable would it be for an infinitely long list of rules to be true, if there wasn't a unified reason for all of them? It seems much more likely that the real reason first-order PA works to describe time, is that all properties which are true at a starting time and true of the successor of any time where they're true, are true of all later times; and this generalization over properties makes induction hold for first-order formulas as a special case. If my native thought process is first-order logic, I wouldn't see the connection between each individual formula in the axiom schema - it would take separate evidence to convince me of each one - they would feel like independent mathematical facts. But after doing scientific induction over the fact that many properties true at zero, with succession preserving truth, seem to be true everywhere - after generalizing the simple, compact second-order theory of numbers and times - then you could invent an infinite first-order theory to approximate it.
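The generalization over properties described here can be written as a single second-order axiom, of which each first-order instance in the schema above is a special case:

∀P: (P(0) ∧ (∀x: P(x) → P(Sx))) → (∀n: P(n))

One finite statement quantifying over all properties P, versus an infinite list with one axiom per formula Φ.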
"Maybe that just says you need to adjust whatever theory of scientific induction you're using, so that it can more easily induct infinite axiom schemas."
But why the heck would you need to induct infinite axiom schemas in the first place, if Reality natively ran on first-order logic? Isn't it far more likely that the way we ended up with these infinite lists of axioms was that Reality was manufactured - forgive the anthropomorphism - that Reality was manufactured using an underlying schema in which time is a connected series of events, and space is a continuous field, and these are properties which happen to require second-order logic for humans to describe? I mean, if you picked out first-order theories at random, what's the chance we'd end up inside an infinitely long axiom schema that just happened to look like the projection of a compact second-order theory? Aren't we ignoring a sort of hint?
"A hint to what?"
Well, I'm not that sure myself, at this depth of philosophy. But I would currently say that finding ourselves in a physical universe where times have successor times, and space looks continuous, seems like a possible hint that the Tegmark Level IV multiverse - or the way Reality was manufactured, or whatever - might look more like causal universes characterizable by compact second-order theories than causal universes characterizable by first-order theories.
"But since any second-order theory can just as easily be interpreted as a many-sorted first-order theory with quantifiers that can range over either elements or sets of elements, how could using second-order syntax actually improve an Artificial Intelligence's ability to handle a reality like that?"
Good question. One obvious answer is that the AI would be able to induct what you would call an infinite axiom schema, as a single axiom - a simple, finite hypothesis.
"There's all sorts of obvious hacks to scientific induction of first-order axioms which would let you assign nonzero probability to computable infinite sequences of axioms -"
Sure. So beyond that... I would currently guess that the basic assumption behind 'behaving as if' second-order logic is true, says that the AI should act as if only the 'actually smallest' numbers will ever appear in physics, relative to some 'true' mathematical universe that it thinks it lives in, but knows it can't fully characterize. Which is roughly what I'd say human mathematicians do when they take second-order logic at face value; they assume that there's some true mathematical universe in the background, and that second-order logic lets them talk about it.
"And what behaviorally, experimentally distinguishes the hypothesis, 'I live in the true ultimate math with fully populated powersets' from the hypothesis, 'There's some random model of first-order set-theory axioms I happen to be living in'?"
Well... one behavioral consequence is suspecting that your time obeys an infinitely long list of first-order axioms with induction schemas for every formula. And then moreover believing that you'll never experience a time when a proof-checking machine outputs a proof that Zermelo-Fraenkel set theory is inconsistent - even though there's provably some models with times like that, which fit the axiom schema you just inducted. That sounds like secretly believing that there's a background 'true' set of numbers that you think characterizes physical time, and that it's the smallest such set. Another suspicious behavior is that as soon as you suspect Zermelo-Fraenkel set theory is consistent, you suddenly expect not to experience any physical time which ZF proves isn't a standard number. You don't think you're in the nonstandard time of some weaker theory like Peano arithmetic. You think you're in the minimal time expressible by any and all theories, so as soon as ZF can prove some number doesn't exist in the minimal set, you think that 'real time' lacks such a number. All of these sound like behaviors you'd carry out if you thought there was a single 'true' mathematical universe that provided the best model for describing all physical phenomena, like time and space, which you encounter - and believing that this 'true' backdrop used the largest powersets and smallest numbers.
"How exactly do you formalize all that reasoning, there? I mean, how would you actually make an AI reason that way?"
Er... I'm still working on that part.
"That makes your theory a bit hard to criticize, don't you think? Personally, I wouldn't be surprised if any such formalized reasoning looked just like believing that you live inside a first-order set theory."
I suppose I wouldn't be too surprised either - it's hard to argue with the results on many-sorted logics. But if comprehending the physical universe is best done by assuming that real phenomena are characterized by a 'true' mathematics containing the powersets and the natural numbers - and thus expecting that no mathematical model we can formulate will ever contain any larger powersets or smaller numbers than those of the 'true' backdrop to physics - then I'd call that a moral victory for second-order logic. In first-order logic we aren't even supposed to be able to talk about such things.
"Really? To me that sounds like believing you live inside a model of first-order set theory, and believing that all models of any theories you can invent must also be sets in the larger model. You can prove the Completeness Theorem inside ZF plus the Axiom of Choice, so ZFC already proves that all consistent theories have models which are sets, although of course it can't prove that ZFC itself is such a theory. So - anthropomorphically speaking - no model of ZFC expects to encounter a theory that has fewer numbers or larger powersets than itself. No model of ZFC expects to encounter any quoted-model, any set that a quoted theory entails, which contains larger powersets than the ones in its own Powerset Axiom. A first-order set theory can even make the leap from the finite statement, 'P is true of all my subsets of X', to believing, 'P is true of all my subsets of X that can be described by this denumerable collection of formulas' - it can encompass the jump from a single axiom over 'all my subsets', to a quoted axiom schema over formulas. I'd sum all that up by saying, 'second-order logic is how first-order set theory feels from the inside'."
Maybe. Even in the event that neither human nor superintelligent cognition will ever be able to 'properly talk about' unbounded finite times, global connectedness, particular infinite cardinalities, or true spatial continuity, it doesn't follow that Reality is similarly limited in which physics it can privilege.
Part of the sequence Highly Advanced Epistemology 101 for Beginners
Previous post: "Godel's Completeness and Incompleteness Theorems"
Followup to: Standard and Nonstandard Numbers
So... last time you claimed that using first-order axioms to rule out the existence of nonstandard numbers - other chains of numbers besides the 'standard' numbers starting at 0 - was forever and truly impossible, even unto a superintelligence, no matter how clever the first-order logic used, even if you came up with an entirely different way of axiomatizing the numbers.
How could you, in your finiteness, possibly know that?
"Have you heard of Gödel's Incompleteness Theorem?"
Of course! Gödel's Theorem says that for every consistent mathematical system, there are statements which are true within that system, which can't be proven within the system itself. Gödel came up with a way to encode theorems and proofs as numbers, and wrote a purely numerical formula to detect whether a proof obeyed proper logical syntax. The basic trick was to use prime factorization to encode lists; for example, the ordered list <3, 7, 1, 4> could be uniquely encoded as:
2^3 * 3^7 * 5^1 * 7^4
And since prime factorizations are unique, and prime powers don't mix, you could inspect this single number, 210,039,480, and get the unique ordered list <3, 7, 1, 4> back out. From there, going to an encoding for logical formulas was easy; for example, you could use the 2 prefix for NOT and the 3 prefix for AND and get, for any formulas Φ and Ψ encoded by the numbers #Φ and #Ψ:
¬Φ = 2^2 * 3^#Φ
Φ ∧ Ψ = 2^3 * 3^#Φ * 5^#Ψ
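A short sketch of this prime-power encoding in modern terms (the function names `encode` and `decode` are ours, for illustration, not Gödel's notation):

```python
def primes():
    """Yield the primes 2, 3, 5, 7, ... by trial division."""
    n, found = 2, []
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def encode(lst):
    """Encode an ordered list of positive integers as one number:
    the i-th prime raised to the i-th list element."""
    result = 1
    for p, e in zip(primes(), lst):
        result *= p ** e
    return result

def decode(n):
    """Recover the list by dividing out each prime in turn.
    (Assumes all exponents are positive, as in the example.)"""
    lst = []
    for p in primes():
        if n == 1:
            break
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        lst.append(e)
    return lst

print(encode([3, 7, 1, 4]))   # 210039480, matching the number in the text
print(decode(210039480))      # [3, 7, 1, 4]
```

Because prime factorizations are unique, `decode` inverts `encode` exactly, which is what lets a single number stand in for a whole formula.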
It was then possible, by dint of crazy amounts of work, for Gödel to come up with a gigantic formula of Peano Arithmetic, which we might call Proves(p, c), meaning, 'P encodes a valid logical proof of C using the first-order Peano axioms', from which directly followed the formula ∃p: Proves(p, c), meaning, 'There exists a number P such that P encodes a proof of C', or just 'C is provable in Peano Arithmetic.'
Gödel then put in some further clever work to invent statements which referred to themselves, by having them contain sub-recipes that would reproduce the entire statement when manipulated by another formula.
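This "sub-recipe" trick is the same one behind a quine, a program that prints its own source code. The sketch below is only an analogy to the construction, not Gödel's actual formula: the string `s` is the sub-recipe, and formatting it with itself reproduces the entire program.

```python
# A quine: the template s, when formatted with its own repr, yields
# the full two-line program, just as Godel's statement contains a
# recipe that reproduces the whole statement when manipulated.
s = 's = {!r}\nprint(s.format(s))'
print(s.format(s))
```

Running it prints exactly its own source, demonstrating that self-reference needs no magic, only a template applied to a quotation of itself.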
And then Gödel's Statement encodes the statement, 'There does not exist any number P such that P encodes a proof of (this statement) in Peano arithmetic' or in simpler terms 'I am not provable in Peano arithmetic'. If we assume first-order arithmetic is consistent and sound, then no proof of this statement within first-order arithmetic exists, which means the statement is true but can't be proven within the system. That's Gödel's Theorem.
"No. I've heard rumors that Gödel's Incompleteness Theorem is horribly misunderstood in your Everett branch. Have you heard of Gödel's Completeness Theorem?"
Is that a thing?
"Yes! Gödel's Completeness Theorem says that, for any collection of first-order statements, every semantic implication of those statements is syntactically provable within first-order logic. If something is a genuine implication of a collection of first-order statements - if it actually does follow, in the models pinned down by those statements - then you can prove it, within first-order logic, using only the syntactical rules of proof, from those axioms."
Followup to: Logical Pinpointing
"Oh! Hello. Back again?"
Yes, I've got another question. Earlier you said that you had to use second-order logic to define the numbers. But I'm pretty sure I've heard about something called 'first-order Peano arithmetic' which is also supposed to define the natural numbers. Going by the name, I doubt it has any 'second-order' axioms. Honestly, I'm not sure I understand this second-order business at all.
"Well, let's start by examining the following model:"
"This model has three properties that we would expect to be true of the standard numbers - 'Every number has a successor', 'If two numbers have the same successor they are the same number', and '0 is the only number which is not the successor of any number'. All three of these statements are true in this model, so in that sense it's quite numberlike -"
And yet this model clearly is not the numbers we are looking for, because it's got all these mysterious extra numbers like C and -2*. That C thing even loops around, which I certainly wouldn't expect any number to do. And then there's that infinite-in-both-directions chain which isn't connected to anything else.
"Right, so, the difference between first-order logic and second-order logic is this: In first-order logic, we can get rid of the ABC - make a statement which rules out any model that has a loop of numbers like that. But we can't get rid of the infinite chain underneath it. In second-order logic we can get rid of the extra chain."
The fact that one apple added to one apple invariably gives two apples helps in the teaching of arithmetic, but has no bearing on the truth of the proposition that 1 + 1 = 2.
-- James R. Newman, The World of Mathematics
Previous meditation 1: If we can only meaningfully talk about parts of the universe that can be pinned down by chains of cause and effect, where do we find the fact that 2 + 2 = 4? Or did I just make a meaningless noise, there? Or if you claim that "2 + 2 = 4" isn't meaningful or true, then what alternate property does the sentence "2 + 2 = 4" have which makes it so much more useful than the sentence "2 + 2 = 3"?
Previous meditation 2: It has been claimed that logic and mathematics is the study of which conclusions follow from which premises. But when we say that 2 + 2 = 4, are we really just assuming that? It seems like 2 + 2 = 4 was true well before anyone was around to assume it, that two apples equalled two apples before there was anyone to count them, and that we couldn't make it 5 just by assuming differently.
Speaking conventional English, we'd say the sentence 2 + 2 = 4 is "true", and anyone who put down "false" instead on a math-test would be marked wrong by the schoolteacher (and not without justice).
But what can make such a belief true, what is the belief about, what is the truth-condition of the belief which can make it true or alternatively false? The sentence '2 + 2 = 4' is true if and only if... what?
In the previous post I asserted that the study of logic is the study of which conclusions follow from which premises; and that although this sort of inevitable implication is sometimes called "true", it could more specifically be called "valid", since checking for inevitability seems quite different from comparing a belief to our own universe. And you could claim, accordingly, that "2 + 2 = 4" is 'valid' because it is an inevitable implication of the axioms of Peano Arithmetic.
And yet thinking about 2 + 2 = 4 doesn't really feel that way. Figuring out facts about the natural numbers doesn't feel like the operation of making up assumptions and then deducing conclusions from them. It feels like the numbers are just out there, and the only point of making up the axioms of Peano Arithmetic was to allow mathematicians to talk about them. The Peano axioms might have been convenient for deducing a set of theorems like 2 + 2 = 4, but really all of those theorems were true about numbers to begin with. Just like "The sky is blue" is true about the sky, regardless of whether it follows from any particular assumptions.
So comparison-to-a-standard does seem to be at work, just as with physical truth... and yet this notion of 2 + 2 = 4 seems different from "stuff that makes stuff happen". Numbers don't occupy space or time, they don't arrive in any order of cause and effect, there are no events in numberland.
Meditation: What are we talking about when we talk about numbers? We can't navigate to them by following causal connections - so how do we get there from here?
Suppose a general-population survey shows that people who exercise less, weigh more. You don't have any known direction of time in the data - you don't know which came first, the increased weight or the diminished exercise. And you didn't randomly assign half the population to exercise less; you just surveyed an existing population.
The statisticians who discovered causality were trying to find a way to distinguish, within survey data, the direction of cause and effect - whether, as common sense would have it, more obese people exercise less because they find physical activity less rewarding; or whether, as in the virtue theory of metabolism, lack of exercise actually causes weight gain due to divine punishment for the sin of sloth.
The usual way to resolve this sort of question is by randomized intervention. If you randomly assign half your experimental subjects to exercise more, and afterward the increased-exercise group doesn't lose any weight compared to the control group, you could rule out causality from exercise to weight, and conclude that the correlation between weight and exercise is probably due to physical activity being less fun when you're overweight. The question is whether you can get causal data without interventions.
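As a toy illustration of why the intervention settles what the survey cannot, the following sketch (all parameters invented) builds a world in which weight causes reduced exercise and exercise has no effect on weight. The survey sees a strong negative correlation either way; only the randomized assignment reveals that exercise does nothing to weight here.

```python
import random

random.seed(0)

# Invented toy world: heavier people exercise less; exercise has no
# effect on weight at all.
def person():
    weight = random.gauss(80, 15)
    exercise = 10 - 0.05 * weight + random.gauss(0, 1)
    return weight, exercise

population = [person() for _ in range(10000)]

def correlation(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

# The survey: a clear negative correlation, with no hint of direction.
print(correlation(population))

# The intervention: randomly assign exercise. Weight doesn't respond
# in this world, so both groups' weights are drawn from the same
# distribution, and the group difference comes out near zero,
# ruling out exercise -> weight causation.
treated = [random.gauss(80, 15) for _ in range(10000)]
control = [random.gauss(80, 15) for _ in range(10000)]
print(sum(treated) / len(treated) - sum(control) / len(control))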
For a long time, the conventional wisdom in philosophy was that this was impossible unless you knew the direction of time and knew which event had happened first. Among some philosophers of science, there was a belief that the "direction of causality" was a meaningless question, and that in the universe itself there were only correlations - that "cause and effect" was something unobservable and undefinable, that only unsophisticated non-statisticians believed in due to their lack of formal training:
"The law of causality, I believe, like much that passes muster among philosophers, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed to do no harm." -- Bertrand Russell (he later changed his mind)
"Beyond such discarded fundamentals as 'matter' and 'force' lies still another fetish among the inscrutable arcana of modern science, namely, the category of cause and effect." -- Karl Pearson
The famous statistician Fisher, who was also a smoker, testified before Congress that the correlation between smoking and lung cancer couldn't prove that the former caused the latter. We have remnants of this type of reasoning in old-school "Correlation does not imply causation", without the now-standard appendix, "But it sure is a hint".
This skepticism was overturned by a surprisingly simple mathematical observation.
A friend asked me to get her Bill O'Reilly's new book Killing Lincoln for Christmas. I read its reviews on Amazon, and found several that said it wasn't as good as another book about the assassination, Blood on the Moon. This seemed like a believable conclusion to me. Killing Lincoln has no footnotes to document any of its claims, and is not in the Ford's Theatre national park service bookstore because the NPS decided it was too historically inaccurate to sell. Nearly 200 books have been written about the Lincoln assassination, including some by professional Lincoln scholars. So the odds seemed good that at least one of these was better than a book written by a TV talk show host.
But I was wrong. To many people, this was not a believable conclusion.
(This is not about the irrationality of Fox network fans. They are just a useful case study.)
In this article I invite LessWrong users to learn the very basic math of something that is useful both to our community's goal of producing better thinkers and to many of the unrelated discussions we often have here. I also present resources for further study to those interested. I made it based on the karma feedback given to this post in the monthly open thread.
Recently there has been a series of contributions in Main that serve more as introductory and logistical material than as novel contributions. Because of this, and because I hope it will grab more attention from newer members, I posted this in Main rather than in the Discussion section.
What is "game theory"?
Game theory is a mathematical method for analyzing calculated circumstances, such as in games, where a person’s success is based upon the choices of others. More formally, it is "the study of mathematical models of conflict and cooperation between intelligent rational decision-makers." An alternative term suggested "as a more descriptive name for the discipline" is interactive decision theory.
Game theory attempts to mathematically capture behaviour in strategic situations, in which an individual's success in making choices depends on the choices of others.
From both definitions it should be clear how this relates to the art of refining human rationality. Besides the general admonition that rationalists should win, for us humans, social animals that we are, there are few things in our lives that do not depend at least partially on the choices of others. Game theory is extensively used in and connected to fields as disparate as economics, psychology, political science, logic, sports and evolutionary biology.
As many have argued before, it is an important part of the map of the real world:
Again and again, I’ve undergone the humbling experience of first lamenting how badly something sucks, then only much later having the crucial insight that its not sucking wouldn’t have been a Nash equilibrium.
You may not know it yet, but it is impossible to read this site for an extended period of time without running into concepts intimately tied to this field of study: Nash equilibrium, Pareto optimality, the Prisoner's Dilemma, zero-sum and non-zero-sum games, the decision theory talk that breaks out every now and then, ...
You can take the concepts one at a time, reading a few lines from a dictionary-like definition and trying to assimilate them without doing any of the connected mathematics. I wouldn't want to discourage you from that; it's better than guessing! But this approach has its limitations: one risks misunderstanding something, or, more subtly, failing to appreciate nuance and running into practical difficulties when trying to apply the knowledge in the real world. At the very least, guessing the teacher's password is a problem. Those of you who looked up these phrases and concepts online probably realized that they fit into a wider framework, one I hope you can now begin to explore with simple math, even if only with a few tentative steps.
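As one such tentative step, here is a minimal sketch of the kind of simple math involved: checking a 2x2 game for pure-strategy Nash equilibria, using the classic Prisoner's Dilemma payoffs (the numbers are the conventional textbook ones, chosen purely for illustration).

```python
# Payoffs are (row player, column player); values are negated years of
# sentence, so higher is better.
C, D = "cooperate", "defect"
payoffs = {
    (C, C): (-1, -1),
    (C, D): (-3,  0),
    (D, C): ( 0, -3),
    (D, D): (-2, -2),
}
moves = [C, D]

def best_responses(player):
    """For each opponent move, the move(s) maximizing this player's payoff."""
    best = {}
    for opp in moves:
        def pay(m):
            pair = (m, opp) if player == 0 else (opp, m)
            return payoffs[pair][player]
        top = max(pay(m) for m in moves)
        best[opp] = {m for m in moves if pay(m) == top}
    return best

# A pure Nash equilibrium: each player's move is a best response to the other's.
br0, br1 = best_responses(0), best_responses(1)
nash = [(a, b) for a in moves for b in moves if a in br0[b] and b in br1[a]]
print(nash)  # [('defect', 'defect')]: mutual defection, even though (C, C) pays both players more
```

Defecting is each player's best response no matter what the other does (a dominant strategy), which is exactly why the unique equilibrium is worse for both than mutual cooperation.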
So what are the videos I should watch?
This fall (2011), two Stanford professors, Sebastian Thrun and Peter Norvig, have been offering an ongoing online class called "Introduction to Artificial Intelligence". It has been discussed extensively on LW in several threads here, here and here. Many LWers have shown interest, quite a few signed up, and several of us are now preparing for its final exam. Among the material covered is an introduction to game theory. I've attended live lectures on the subject and watched some recorded ones, and in comparison this is one of the better short introductions I've seen so far. I especially like how each of the videos is a self-contained unit just a few minutes in length. Instead of having to commit to a 40- or 60-minute lecture, you only need to commit 2-5 minutes at a time.
The relevant units covering this are 13. Games and 14. Game Theory, both presented by Peter Norvig. They are not recordings of a professor presenting to a class in front of a blackboard; rather, they aim for the feeling of a private tutor sitting down with you and explaining a few things with the help of a pen and a few pieces of paper (reminiscent of the style seen on Khan Academy). Currently you can still go directly to the site and view these videos logged in as a visitor (recommended). But to avoid a trivial inconvenience, and in case the YouTube videos outlast the current state of the website, I'm going to link directly to the YouTube videos and write down any relevant comments and missing information as well. Unit 13 in particular assumes some previous knowledge you probably don't have; it deals primarily with the complexity of games and how computationally demanding it is to find solutions. It can be useful for picking up some terminology, but is otherwise skippable.
Don't worry: if you look up, or feel you already know, what an agent or player is and what utility is, the missing exotic stuff (à la POMDPs) that isn't explained as you go along doesn't matter much for our purposes.
13. Games (optional)
- Technologies Question ? (Solution) [One choice per row]
- Games Question ? (Solution) [Multiple choice per row]
- Single Player Game
- Two Player Game
- Two Player Function
- Time Complexity Question ? (Solution)
- Space Complexity Question ? (Solution)
- Chess Question ? (Solution)
- Complexity Reduction Question ? (Solution)
- Review Question ? (Solution)
- Reduce B
- Reduce B Question ? (Solution)
- Reduce M
- Computing State Values
- Complexity Reduction Benefits
- Pacman Question ? (Solution)
- Chance Question ? (Solution)
- Terminal State Question ? (Solution)
- Game Tree Question 1 ? (Solution)
- Game Tree Question 2 ? (Solution)
14. Game Theory
- Dominant Strategy Question ? (Solution) [This is where you learn about the famous Prisoner's Dilemma!]
- Pareto Optimal Question ? (Solution) [rot13 after solving: Gur dhvm vapbeerpgyl vqragvsvrf bayl gur obggbz evtug bhgpbzr nf Cnergb bcgvzny, ohg obgu gur hccre evtug naq obggbz yrsg ner nyfb Cnergb bcgvzny. Va gur hccre evtug ab bgure bhgpbzr vf zber cersreerq ol O. Yvxrjvfr sbe gur ybjre yrsg ab bgure bhgpbzr vf zber cersreerq ol N.]
- Equilibrium Question ? (Solution)
- Game Console Question 1 ? (Solution)
- Game Console Question 2 ? (Solution)
- 2 Finger Morra
- Tree Question ? (Solution)
- Mixed Strategy
- Solving the Game
- Mixed Strategy Issues
- 2x2 Game Question 1 ? (Solution) [Please enter probabilities and not percentages.]
- 2x2 Game Question 2 ? (Solution)
- Geometric Interpretation
- Game Theory Strategies
- Fed vs Politicians Question ? (Solution)
- Mechanism Design
- Auction Question ? (Solution)
At any point, feel free to ask questions here in the comment section; I'm sure someone will gladly help you. Also, the AI class reddit may be a good resource. Once you are done with the short series of lectures, test your knowledge with these assignments.
- Max Min Question ? (Solution)
- Game Tree Question ? (Solution) [Unit 13 material. You should check children of pruned nodes as being pruned as well.]
- Strategy Question ? (Solution)
Note: I present this material in the form of a link to the video, followed by a "?" question mark if there is an answerable question that has a solution video posted. The link to the solution is posted as "(Solution)". Any additional comments, whether corrections to the videos or information otherwise missing in this format, are added in square brackets "[...]". I encourage people who are solving this via the links rather than the site not to watch the solutions straight away, but to first work out what they think the answer should be. Don't worry if you get it wrong; some of the questions are unlikely to be answered correctly with the knowledge you have at that point. Their role is to help you remember and engage with the material, not to gauge your performance. The exception to this is the videos that come after Unit 14.
"I don't get it." or "It's not working." or "I didn't bother to watch more than a few."
First off, for those who for whatever reason didn't like the lectures given here, or found them dull or over your head: don't despair! If you feel you don't understand something, ask questions; I can guarantee that either I or someone else will answer. To those of you who feel you understand the material but just don't like the videos or the lecturer: don't worry, there are several other ways to approach the field. To point you on your way, here is a wide variety of quality alternatives, some of which may have approaches you prefer:
- Academic Earth site has several related classes, including an introductory one. They include additional non-video material.
- 2012 Game Theory online Stanford class (one of the many interesting classes inspired by "Introduction to AI")
I will keep this list updated and add any quality recommendations proposed by fellow LWers.
Unfortunately for those wanting just the introduction and the most basic approach, many of these are longer and more in-depth (fortunately so, for those wanting a bit more). So if you just watch, comprehend and learn to use the information presented in the first lecture or two of one of these recommendations, you will have done as much as or more than someone who completed Units 13 and 14. If you don't like video format in general and learn better from written material or live interaction... well, this is mostly the wrong article for you. But I do present some additional non-video material in the next section that you may find useful.
I watched the lectures and I think I understood them, where do I go from here?
Cool! Well, check out some of the alternative videos and classes listed above; most of them are quite extensive. Try to complete one! If you'd like to try taking one, ask around in the comment section; maybe enough people would be interested to start a study group. Also, MIT OpenCourseWare has some material you may be interested in even if you don't feel like doing the full classes.
A good AI textbook might be something you would like to explore. LessWrong has a great article with recommendations for a variety of textbooks for several interesting subjects (all recommendations must be made by people who've read at least two other titles on the subject)... but none for game theory. :/
In the thread, Bgesop requested a recommendation.
Unfortunately, the plea went unanswered. I'd love to simply recommend the textbook I first learned the subject from, but most readers are probably English speakers, so that's a no-go, and I'm not familiar with many of the English ones. I did skim Game Theory, 2nd edition, by Guillermo Owen, and it seemed OK. Hopefully pointing this out will prompt someone to come up with a good recommendation; when they do, I'll update this post accordingly, and lukeprog's great list can get another good textbook.
Note: this is intended to be a friendly math post, so apologies to anyone for whom this is all old hat. I'm deliberately staying elementary for the benefit of people who are new to the ideas. There are no proofs: this is long enough as it is.
Here's a rather deep problem in philosophy: how do we come up with categories? What's the difference between a horror movie and a science fiction movie? Or the difference between a bird and a mammal? Are there such things as "natural kinds," or are all such ideas arbitrary?
We can frame this in a slightly more mathematical way as follows. Objects in real life (animals, moving pictures, etc.) are enormously complicated and have many features and properties. You can think of this as a very high dimensional space, one dimension for each property, and each object having a value corresponding to each property. A grayscale picture, for example, has a color value for each pixel. A text document has a count for every word (the word "flamingo" might have been used 7 times, for instance.) A multiple-choice questionnaire has an answer for each question. Each object is a point in a high-dimensional featurespace. To identify which objects are similar to each other, we want to identify how close points are in featurespace. For example, two pictures that only differ at one pixel should turn out to be similar.
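To make "points in featurespace" concrete, here is a toy sketch (the documents and words are invented) that turns two tiny texts into word-count vectors and measures the distance between them:

```python
from collections import Counter
from math import sqrt

# Two toy "documents"; each becomes a point in featurespace with one
# dimension per word, value = how many times the word occurs.
doc_a = "the flamingo saw the flamingo"
doc_b = "the flamingo saw the zoo"

vec_a, vec_b = Counter(doc_a.split()), Counter(doc_b.split())
dims = set(vec_a) | set(vec_b)   # the union of all words = the dimensions

# Euclidean distance in featurespace; a Counter returns 0 for a word
# the document doesn't contain. Documents differing in only a few
# word-counts come out close together.
dist = sqrt(sum((vec_a[w] - vec_b[w]) ** 2 for w in dims))
print(dist)  # sqrt(2): the two documents differ in only two word-counts
```

The same recipe, one dimension per property, works for pixels, questionnaire answers, or any of the other feature lists above.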
We could then start to form categories if the objects form empirical clusters in featurespace. If some animals have wings and hollow bones and feathers, and some animals have none of those things but give milk and bear live young, it makes sense to distinguish birds from mammals. If empirical clusters actually exist, then there's nothing arbitrary about the choice of categories -- the categories are appropriate to the data!
There are a number of mathematical techniques for assigning categories; all of them are basically attacking the same problem, and in principle should all agree with each other and identify the "right" categories. But in practice they have different strengths and weaknesses, in computational efficiency, robustness to noise, and ability to classify accurately. This field is incredibly useful -- this is how computers do image and speech recognition, this is how natural language processing works, this is how they sequence your DNA. It also, I hope, will yield insights into how people think and perceive.
These techniques attempt to directly find clusters in observations. A common example is the K-means algorithm. The goal here is, given a set of observations x1, ..., xn, to partition them into k sets S1, ..., Sk so as to minimize the within-cluster sum of squared differences: that is, to choose the partition minimizing Σᵢ Σ_{x ∈ Sᵢ} ‖x − μᵢ‖², where μᵢ is the mean of the points in Sᵢ.
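That minimization is usually attacked with Lloyd's algorithm: alternately assign each point to its nearest center, then move each center to the mean of its cluster. A minimal sketch in plain Python on 1-D data (the data and parameter values are invented; real implementations use optimized libraries and multiple random restarts):

```python
import random

def k_means(points, k, iterations=100, seed=0):
    """Lloyd's algorithm: alternate between assigning points to the
    nearest center and moving each center to its cluster's mean."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)   # initialize from the data itself
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for x in points:
            nearest = min(range(k), key=lambda i: (x - centers[i]) ** 2)
            clusters[nearest].append(x)
        # Keep an old center unchanged if its cluster came up empty.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Two obvious clumps, around 0 and around 10; the algorithm should
# place one center near each.
data = [0.1, -0.2, 0.3, 0.05, 9.8, 10.2, 10.0, 9.9]
print(k_means(data, 2))
```

Each of the two steps can only decrease the within-cluster sum of squares, which is why the alternation converges, though possibly to a local rather than global minimum.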
TL;DR Summary: Mathematical truths can be cashed out as combined claims about 1) the common conception of the rules of how numbers work, and 2) whether the rules imply a particular truth. This cashing-out keeps them purely about the physical world and eliminates the need to appeal to an immaterial realm, as some mathematicians feel a need to.
Background: "I am quite confident that the statement 2 + 3 = 5 is true; I am far less confident of what it means for a mathematical statement to be true." -- Eliezer Yudkowsky
This is the problem I will address here: how should a rationalist regard the status of mathematical truths? In doing so, I will present a unifying approach that, I contend, elegantly solves the following related problems:
- Eliminating the need for a non-physical, non-observable "Platonic" math realm.
- The issue of whether "math was true/existed even when people weren't around".
- Cashing out the meaning of isolated claims like "2+2=4".
- The issue of whether mathematical truths and math itself should count as being discovered or invented.
- Whether mathematical reasoning alone can tell you things about the universe.
- Showing what it would take to convince a rationalist that "2+2=3".
- How the words in math statements can be wrong.