This phrase confuses me:
and that some single large ordinal is well-ordered.
Every definition I've seen of ordinal either includes well-ordered or has that as a theorem. I'm having trouble imagining a situation where it's necessary to use the well-orderedness of a larger ordinal to prove it for a smaller one.
*edit- Did you mean well-founded instead of well-ordered?
This argument strikes me as boiling down to: "I can't think of any bad attractors besides the anti-inductive prior, therefore I'm going to assume I don't need to worry about them".
I'm a bit skeptical of this minimalism (if "induction works" needs to get explicitly stated, I'm afraid all sorts of other things---like "deduction works"---also do).
But while we're at it, I don't think you need to take any mathematical statements on faith. To the extent that a mathematical statement does any useful predictive work, it too can be supported by the evidence. Maybe you could say that we should include it on a technicality (we don't yet know how to do induction on mathematical objects), but if you don't think that you can do induction over mathematical facts, you've got more problems than not believing in large ordinals!
being exposed to ordered sensory data will rapidly promote the hypothesis that induction works
Promote it how? By way of inductive reasoning, to which Bayesian inference belongs. It seems like there's a contradiction between the initially small prior of "induction works" (which is different from inductive reasoning, but still related) and promoting that low-probability hypothesis by way of inductive reasoning.
If you see no tension there, wouldn't you still need to state the basis for "inductive reasoning works", at least such that its use can be justified (initially)?
To get to Bayes, don't you also need to believe not just that probability theory is internally consistent (your well-ordered ordinal gives you that much) but also that it is the correct system for deducing credences from other credences? That is, you need to believe Cox's assumptions, or equivalently (I think) Jaynes' desiderata (consistent, non-ideological, quantitative). Without these, you can do all the probability theory you want but you'll never be able to point at the number at the end of a calculation and say "that is now my credence for the sun rising tomorrow".
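Concretely, what those assumptions buy you (my summary of the standard result, not part of the comment): any consistent real-valued plausibility assignment must obey the product and sum rules, and Bayes' theorem is then a one-line consequence:

$$
P(A \land B \mid C) = P(A \mid B \land C)\,P(B \mid C), \qquad P(A \mid C) + P(\lnot A \mid C) = 1
$$

$$
\Rightarrow\quad P(A \mid B \land C) = \frac{P(B \mid A \land C)\,P(A \mid C)}{P(B \mid C)}
$$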
I think you also need faith in "wanting something". I mean, it's not absolutely essential, but if you don't want anything, it's unlikely that you'll make any use of your shiny induction and ordinal.
I don't think this got mentioned, but I assume that it's really difficult (as in, nobody has done it yet) to go from "induction works" to "a large ordinal is well-ordered". That would reduce the number of things you have faith in from two to one.
to a general audience it is not transparent
Not transparent? This general audience has no idea what all this even means.
Because being exposed to ordered sensory data will rapidly promote the hypothesis that induction works
Not if the alternative hypothesis assigns about the same probability to the data up to the present. For example, an alternative hypothesis to the standard "the sun rises every day" is "the sun rises every day, until March 22, 2015", and the alternative hypothesis assigns the same probability to the data observed until the present as the standard one does.
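A small sketch of this point (toy numbers, mine): two hypotheses that assign identical probability to everything observed so far are left in exactly their prior ratio after updating, so the data cannot separate them.

```python
# Two hypotheses with identical likelihood on past data stay in their
# prior ratio after a Bayesian update -- the evidence can't tell them apart.

def posteriors(priors, likelihoods):
    """Posterior over hypotheses: normalize prior * likelihood."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

# H1 = "the sun rises every day"
# H2 = "the sun rises every day, until March 22, 2015"
# Both predict every sunrise observed before that date with probability 1.
print(posteriors([0.999, 0.001], [1.0, 1.0]))  # -> [0.999, 0.001], unchanged
```

So the real work is done by the prior: the "until" hypothesis takes longer to specify, so a simplicity prior gives it less weight to begin with.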
You also have to trust your memory and your ability to compute Solomonoff induction, both of which are demonstrably imperfect.
(No, this is not the "tu quoque!" moral equivalent of starting out by assigning probability 1 that Christ died for your sins.)
Can someone please explain this?
I understand many religious people claim to just 'have faith' in Christ, with absolute certainty. I think the standard argument would run: "well, you say I shouldn't have faith in Christ, but you have faith in 'science' / 'non-negligible probability on induction and some single well-ordered large ordinal', so you can't argue against faith".
What is Eliezer saying here?
Addendum: By which I mean, can someone give a clear explanation of why they are not the same?
Does induction state a fact about the territory or the map? Is it more akin to "The information processing influencing my sensory inputs actually happens in a processor for which P(0) & [P(0) & P(1) & ... & P(n) -> P(n+1)] holds for all propositions P and natural n"? Or is it "my own information processor is one for which P(0) & [P(0) & P(1) & ... & P(n) -> P(n+1)] holds for all propositions P and natural n"?
It seems like the second option is true by definition (by the authoring of the AI, we simply make it so b...
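For reference, here is the schema the comment gestures at, written out cleanly (my transcription of the inline formula):

$$
P(0)\;\land\;\forall n\,\big[\,P(0)\land P(1)\land\cdots\land P(n)\;\rightarrow\;P(n+1)\,\big]\;\rightarrow\;\forall n\,P(n)
$$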
In the real world, induction seems to work for some problems but not for others.
The turkey who gets fed by humans can update, every day he's fed, towards the thesis that humans are benevolent. When he gets slaughtered at Thanksgiving, he's out of luck.
I've long claimed to not have faith in anything. I certainly don't have "faith" in inductive inference. I don't see why anyone would have "faith" in something they are uncertain about. The need for lack of certainty about induction has long been understood.
You only need faith in two things: ...that some single large ordinal is well-ordered.
I'm confused. What do you mean by faith in... well, properties of abstract formal systems? That some single large ordinal must exist in at least one of your models for it to usefully model reality (or other models)?
Work is ongoing on eliminating the requirement for faith in these two remaining propositions. For example, we might be able to describe our increasing confidence in ZFC in terms of logical uncertainty and an inductive prior which is updated as ZFC passes various tests that it would have a substantial subjective probability of failing, even given all other tests it has passed so far, if ZFC were inconsistent.
Would using the length of the shortest demonstration of a contradiction work? Under the Curry-Howard correspondence, a lengthy proof corresponds to a lengthy program, which under Solomonoff induction receives less and less weight.
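A toy rendering of this suggestion (numbers and framing mine): under a Solomonoff-style prior, a program of length L bits carries weight proportional to 2**-L, so prior mass on "an inconsistency with a witness this short exists" shrinks geometrically as ever-longer proof searches come up empty.

```python
# Solomonoff-style length penalty: a program of L bits gets weight ~ 2**-L.
# If the shortest derivation of a contradiction corresponds (via
# Curry-Howard) to a program of length L, longer L means less prior mass.

def prior_weight(length_bits: int) -> float:
    """Unnormalized Solomonoff-style weight for a program of the given length."""
    return 2.0 ** -length_bits

for L in (10, 100, 1000):
    print(f"shortest contradiction needs {L} bits -> weight {prior_weight(L):.3e}")
```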
You only need faith in two things: That "induction works" has a non-super-exponentially-tiny prior probability, and that some single large ordinal is well-ordered. Anything else worth believing in is a deductive consequence of one or both.
(Because being exposed to ordered sensory data will rapidly promote the hypothesis that induction works, even if you started by assigning it very tiny prior probability, so long as that prior probability is not super-exponentially tiny. Then induction on sensory data gives you all empirical facts worth believing in. Believing that a mathematical system has a model usually corresponds to believing that a certain computable ordinal is well-ordered (the proof-theoretic ordinal of that system), and large ordinals imply the well-orderedness of all smaller ordinals. So if you assign non-tiny prior probability to the idea that induction might work, and you believe in the well-orderedness of a single sufficiently large computable ordinal, all of empirical science, and all of the math you will actually believe in, will follow without any further need for faith.)
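To make "rapidly promote" concrete (a worked version with illustrative numbers, not from the post): Bayesian updating multiplies the prior odds by the accumulated likelihood ratio,

$$
\frac{P(\text{induction} \mid D_n)}{P(\text{alternative} \mid D_n)}
= \frac{P(\text{induction})}{P(\text{alternative})}
\cdot \prod_{i=1}^{n} \frac{P(d_i \mid \text{induction})}{P(d_i \mid \text{alternative})}
$$

If each ordered observation is even twice as likely under induction, $n$ observations scale the odds by $2^n$, so a prior of $2^{-100}$ is overtaken after about a hundred observations. Only a super-exponentially tiny prior can outrun an exponentially growing likelihood ratio, which is exactly the condition in the first sentence.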
(The reason why you need faith for the first case is that although the fact that induction works can be readily observed, there is also some anti-inductive prior which says, 'Well, but since induction has worked all those previous times, it'll probably fail next time!' and 'Anti-induction is bound to work next time, since it's never worked before!' Since anti-induction objectively gets a far lower Bayes-score on any ordered sequence and is then demoted by the logical operation of Bayesian updating, to favor induction over anti-induction it is not necessary to start out believing that induction works better than anti-induction, it is only necessary *not* to start out by being *perfectly* confident that induction won't work.)
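A minimal simulation of this parenthetical (toy likelihoods, mine): on an ordered sequence the anti-inductive hypothesis bleeds probability on every observation, so even a microscopic prior on induction recovers quickly.

```python
# Induction vs. anti-induction on an ordered sequence: the anti-inductive
# hypothesis loses a factor of 9 in odds on every observation it gets wrong.

P_CONTINUE_IND = 0.9    # inductive: the pattern probably continues
P_CONTINUE_ANTI = 0.1   # anti-inductive: "it's never failed, so it will now"

prior = 1e-12           # tiny, but not super-exponentially tiny
odds = prior / (1 - prior)
for _ in range(40):     # forty ordered observations in a row
    odds *= P_CONTINUE_IND / P_CONTINUE_ANTI  # likelihood ratio of 9 per step
print(f"posterior on induction: {odds / (1 + odds):.12f}")  # ~1.0 after 40 steps
```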
(The reason why you need faith for the second case is that although more powerful proof systems - those with larger proof-theoretic ordinals - can prove the consistency of weaker proof systems, or equivalently prove the well-ordering of smaller ordinals, there's no known perfect system for telling which mathematical systems are consistent just as (equivalently!) there's no way of solving the halting problem. So when you reach the strongest math system you can be convinced of and further assumptions seem dangerously fragile, there's some large ordinal that represents all the math you believe in. If this doesn't seem to you like faith, try looking up a Buchholz hydra and then believing that it can always be killed.)
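The Buchholz hydra itself resists a few-line rendering, but its simpler cousin makes the same point: Goodstein sequences, whose termination is equivalent over Peano arithmetic to the well-ordering of the ordinal ε₀. A hedged sketch (mine, not from the post):

```python
# Goodstein sequences: write n in hereditary base b, replace every b with
# b+1, subtract 1, repeat with b+1. Goodstein's theorem says this always
# reaches 0, but PA cannot prove it -- believing it is believing that
# epsilon_0 is well-ordered, the flavor of "ordinal faith" described above.

def bump_base(n: int, b: int) -> int:
    """Rewrite n in hereditary base b, then replace every b with b+1."""
    if n == 0:
        return 0
    result, power = 0, 0
    while n > 0:
        digit = n % b
        # The exponent is itself written in base b, so bump it recursively.
        result += digit * (b + 1) ** bump_base(power, b)
        n //= b
        power += 1
    return result

def goodstein(n: int, steps: int = 10):
    """Yield the first few terms of the Goodstein sequence starting at n."""
    b = 2
    for _ in range(steps):
        yield n
        if n == 0:
            return
        n = bump_base(n, b) - 1  # bump the base, then subtract 1
        b += 1

print(list(goodstein(3, steps=8)))  # [3, 3, 3, 2, 1, 0] -- terminates quickly
print(list(goodstein(4, steps=6)))  # [4, 26, 41, 60, 83, 109] -- keeps growing
```

goodstein(4) really does reach 0, but only after an astronomically large number of steps; believing that every starting value terminates is precisely the commitment that outruns Peano arithmetic.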
(Work is ongoing on eliminating the requirement for faith in these two remaining propositions. For example, we might be able to describe our increasing confidence in ZFC in terms of logical uncertainty and an inductive prior which is updated as ZFC passes various tests that it would have a substantial subjective probability of failing, even given all other tests it has passed so far, if ZFC were inconsistent.)
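A hedged sketch of that idea (the likelihoods are made-up illustrative numbers, not a proposed calibration): treat "ZFC is consistent" as a hypothesis and each passed test as evidence.

```python
# Toy model: each test always passes if ZFC is consistent, but would pass
# with probability only 0.7 if ZFC were inconsistent (contradictions hide).

def update(prior: float, p_pass_if_consistent: float, p_pass_if_inconsistent: float) -> float:
    """One Bayesian update on H = 'ZFC is consistent' after a passed test."""
    num = prior * p_pass_if_consistent
    den = num + (1 - prior) * p_pass_if_inconsistent
    return num / den

p = 0.5  # agnostic starting credence, an arbitrary choice for illustration
for test in range(10):
    p = update(p, 1.0, 0.7)
    print(f"after test {test + 1}: P(consistent) = {p:.4f}")
```

The 0.7 is the load-bearing assumption: the scheme only works if a test had substantial probability of failing were ZFC inconsistent, which is exactly the condition the paragraph flags.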
(No, this is *not* the "tu quoque!" moral equivalent of starting out by assigning probability 1 that Christ died for your sins.)