Sewing-Machine comments on How to Convince Me That 2 + 2 = 3 - Less Wrong

52 Post author: Eliezer_Yudkowsky 27 September 2007 11:00PM


Comment author: Eliezer_Yudkowsky 14 September 2011 06:26:53AM 10 points

I don't think people really understood what I was talking about in that thread. I would have to write a sequence about:

  • the difference between first-order and second-order logic
  • why the Lowenheim-Skolem theorems show that you can talk about integers or reals in higher-order logic but not first-order logic
  • why third-order logic isn't qualitatively different from second-order logic in the same way that second-order logic is qualitatively above first-order logic
  • the generalization of Solomonoff induction to anthropic reasoning about agents resembling yourself who appear embedded in models of second-order theories, with more compact axiom sets being more probable a priori
  • how that addresses some points Wei Dai has made about hypercomputation not being conceivable to agents using Solomonoff induction on computable Cartesian environments, as well as formalizing some of the questions we argue about in anthropic theory
  • why seeing apparently infinite time and apparently continuous space suggests, to an agent using second-order anthropic induction, that we might be living within a model of axioms that imply infinity and continuity
  • why believing that things like a first uncountable ordinal can contain reality-fluid in the same way as the wavefunction, or even be uniquely specified by second-order axioms that pin down a single model up to isomorphism the way that second-order axioms can pin down integerness and realness, is something we have rather less evidence for, on the surface of things, than we have evidence favoring the physical existability of models of infinity and continuity, or the mathematical sensibility of talking about the integers or real numbers.
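The first bullet's distinction can be made concrete (an editorial sketch, not part of the original comment). First-order PA can only state induction as an axiom schema, one instance per definable formula, while second-order PA states it once, quantifying over all properties of the domain:

```latex
% First-order induction: a schema, one axiom per formula \varphi
\bigl(\varphi(0) \land \forall n\,(\varphi(n) \to \varphi(n+1))\bigr) \to \forall n\,\varphi(n)

% Second-order induction: a single axiom quantifying over all properties P
\forall P\,\bigl[\bigl(P(0) \land \forall n\,(P(n) \to P(n+1))\bigr) \to \forall n\,P(n)\bigr]
```

The second-order axiom pins down the natural numbers up to isomorphism (Dedekind's categoricity theorem); the first-order schema, covering only countably many formulas, does not.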
Comment author: [deleted] 14 September 2011 06:33:15AM 3 points

Löwenheim-Skolem, maybe?

But that does not imply that you can't talk about integers or reals in first-order logic. And indeed you can talk about integers and real numbers using first-order logic; people do so all the time.

Comment author: Eliezer_Yudkowsky 14 September 2011 06:39:19AM 1 point

Only in the same sense that you can talk about kittens by saying "Those furry things!" There'll always be some ambiguity over whether you're talking about kittens or lions, even though kittens are in fact furry and have all the properties that you can deduce to hold true of furry things.
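The kittens-versus-lions ambiguity has a standard precise form (an editorial aside, not from the thread): the compactness theorem yields models of first-order arithmetic containing elements that are not standard integers. Add a new constant $c$ and assert that it exceeds every numeral:

```latex
T \;=\; \mathrm{Th}(\mathbb{N}) \;\cup\; \{\, c > \underline{n} \;:\; n \in \mathbb{N} \,\}
```

Every finite subset of $T$ is satisfiable in $\mathbb{N}$ (interpret $c$ as a large enough number), so by compactness all of $T$ has a model: a structure satisfying every first-order truth about the integers while containing an element larger than every standard one.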

Comment author: [deleted] 14 September 2011 07:02:33AM 3 points

Not in the same sense at all. All of the numbers that you have ever physically encountered were nameable, definable, computable. Moreover, they came to you with algorithms for verifying that one of them was equal to another.
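The point about verification algorithms can be illustrated with a small sketch (Python, added editorially): for finitely representable numbers such as rationals, equality is exactly decidable, whereas floating-point stand-ins for the reals are not even reliable on that score.

```python
from fractions import Fraction

# Rationals are finitely representable, and equality between them is
# decidable: cross-multiplication gives an exact yes/no answer.
a = Fraction(1, 3) + Fraction(1, 6)  # exact arithmetic, no rounding
b = Fraction(1, 2)
print(a == b)  # True

# Floating-point "reals" lose this: mathematically equal quantities
# can compare unequal after rounding.
print(0.1 + 0.2 == 0.3)  # False
```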

Comment author: Vladimir_Nesov 14 September 2011 10:00:39AM 2 points

Yes, and that's OK. I suspect you can't do qualitatively better than that (viz. the ambient set-theoretic universe for second-order logic), but it's still possible (necessary?) to work under this apparent lack of absolute control over what it is you are dealing with. Even though first-order PA doesn't know what "integers" are, the statements it proves valid are still true of the integers; it's useful that way (just as AIs or humans are useful for making the world better). It is a device that perceives some of the properties of the object we study, but not all of them, and not enough to rebuild it completely. (Other devices can form similarly imperfect pictures of the object of study and of its relationship with the device perceiving it, or of themselves perceiving this process, or of the object of study being affected by the behavior of some of these devices.)
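The sense in which first-order PA is useful this way is just soundness (a standard fact, stated here editorially):

```latex
\mathrm{PA} \vdash \varphi \quad\Longrightarrow\quad \mathbb{N} \models \varphi
```

Every theorem of PA holds in the standard integers, even though PA cannot distinguish the standard model from its nonstandard companions.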

Likewise, we may fail to account for all the worlds our decisions affect, but we mostly care about (or perhaps rather have non-negligible consequentialist control over) the "real world" (or worlds), whatever that is, and it's true that our conclusions capture some truth about this "real world", even if it's genuinely impossible for us to ever know completely what it is. (We of course "know" plenty more than has ever been explicitly understood, and it's a big question how to communicate to an FAI what we do know.)