# The Pascal's Wager Fallacy Fallacy

26 [deleted] 18 March 2009 12:30AM

Today at lunch I was discussing interesting facets of second-order logic, such as the (known) fact that first-order logic cannot, in general, distinguish finite models from infinite models.  The conversation branched out, as such things do, to why you would want a cognitive agent to think about finite numbers that were unboundedly large, as opposed to boundedly large.

So I observed that:

1. Although the laws of physics as we know them don't allow any agent to survive for infinite subjective time (do an unboundedly long sequence of computations), it's possible that our model of physics is mistaken.  (I go into some detail on this possibility below the cutoff.)
2. If it is possible for an agent - or, say, the human species - to have an infinite future, and you cut yourself off from that infinite future and end up stuck in a future that is merely very large, this one mistake outweighs all the finite mistakes you made over the course of your existence.

And the one said, "Isn't that a form of Pascal's Wager?"

I'm going to call this the Pascal's Wager Fallacy Fallacy.

You see it all the time in discussion of cryonics.  The one says, "If cryonics works, then the payoff could be, say, at least a thousand additional years of life."  And the other one says, "Isn't that a form of Pascal's Wager?"

The original problem with Pascal's Wager is not that the purported payoff is large; that is not where the flaw in the reasoning lies.  The problem with Pascal's original Wager is that the probability is exponentially tiny (in the complexity of the Christian God) and that equally tiny probabilities offer opposite payoffs for the same action (the Muslim God will damn you for believing in the Christian God).

However, what we have here is the term "Pascal's Wager" being applied solely because the payoff being considered is large - the reasoning being perceptually recognized as an instance of "the Pascal's Wager fallacy" as soon as someone mentions a big payoff - without any attention being given to whether the probabilities are in fact small or whether counterbalancing anti-payoffs exist.

And then, once the reasoning is perceptually recognized as an instance of "the Pascal's Wager fallacy", the other characteristics of the fallacy are automatically inferred: the listener assumes that the probability is tiny and that the scenario has no specific support apart from the payoff.

But infinite physics and cryonics are both possibilities that, leaving their payoffs entirely aside, get significant chunks of probability mass purely on merit.

Yet instead we have reasoning that runs like this:

1. Cryonics has a large payoff;
2. Therefore, the argument carries even if the probability is tiny;
3. Therefore, the probability is tiny;
4. Therefore, why bother thinking about it?

(Posted here instead of Less Wrong, at least for now, because of the Hanson/Cowen debate on cryonics.)

Further details:

Pascal's Wager is actually a serious problem for those of us who want to use Kolmogorov complexity as an Occam prior, because the size of even the finite computations blows up much faster than their probability diminishes (see here).
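To make the blow-up concrete, here is a toy numeric sketch (my own illustration, not the actual Kolmogorov setting): suppose hypotheses of description length n get prior mass on the order of 2^-n but promise payoffs that grow faster than 2^n (here, 3^n). Then the expected-value sum diverges.

```python
# Toy illustration (assumed numbers, not the real Kolmogorov setting):
# prior mass ~ 2**-n per description length n, payoff growing like 3**n.
def partial_expected_value(n_terms):
    """Partial sum of sum_n 2**-n * 3**n = sum_n 1.5**n, which diverges."""
    return sum((2.0 ** -n) * (3.0 ** n) for n in range(1, n_terms + 1))

# Each added term is 1.5x the previous one, so the partial sums grow
# without bound: the "expected value" of the wager is dominated by its tail.
assert partial_expected_value(20) > 10 * partial_expected_value(10)
```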

See Bostrom on infinite ethics for how much worse things get if you allow non-halting Turing machines.

In our current model of physics, time is infinite, and so the collection of real things is infinite.  Each time state has a successor state, and there's no particular assertion that time returns to the starting point.  Considering time's continuity just makes it worse - now we have an uncountable set of real things!

But current physics also says that any finite amount of matter can only do a finite amount of computation, and the universe is expanding too fast for us to collect an infinite amount of matter.  We cannot, on the face of things, expect to think an unboundedly long sequence of thoughts.

The laws of physics cannot be easily modified to permit immortality: lightspeed limits and an expanding universe and holographic limits on quantum entanglement and so on all make it inconvenient to say the least.

On the other hand, many computationally simple laws of physics, like the laws of Conway's Life, permit indefinitely running Turing machines to be encoded.  So we can't say that it requires a complex miracle for us to confront the prospect of unboundedly long-lived, unboundedly large civilizations.  Just there being a lot more to discover about physics - say, one more discovery of the size of quantum mechanics or Special Relativity - might be enough to knock (our model of) physics out of the region that corresponds to "You can only run boundedly large Turing machines".
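As a concrete illustration of that last point, here is a minimal sketch (the `step` function and coordinates are my own, assuming the standard B3/S23 Life rules) showing how Life's simple local rules already support unbounded behavior: the classic glider translates itself diagonally forever, one cell down and right every four generations.

```python
# A minimal Conway's Life step on a set of live (row, col) cells,
# assuming an infinite plane and the standard B3/S23 rules.
from collections import Counter

def step(live):
    counts = Counter((r + dr, c + dc)
                     for (r, c) in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # Born with exactly 3 neighbors; survive with 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic glider: after 4 steps it reappears translated by (+1, +1),
# and keeps doing so forever - simple local rules, unbounded behavior.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
assert state == {(r + 1, c + 1) for (r, c) in glider}
```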

So while we have no particular reason to expect physics to allow unbounded computation, it's not a small, special, unjustifiably singled-out possibility like the Christian God; it's a large region of what various possible physical laws will allow.

And cryonics, of course, is the default extrapolation from known neuroscience: if memories are stored the way we now think, and cryonics organizations are not disturbed by any particular catastrophe, and technology goes on advancing toward the physical limits, then it is possible to revive a cryonics patient (and yes you are the same person).  There are negative possibilities (woken up in dystopia and not allowed to die) but they are exotic, not having equal probability weight to counterbalance the positive possibilities.

Comment author: 18 March 2009 12:58:05AM 4 points [-]

You reference a popular idea, something like "The integers are countable, but the real number line is uncountable." I apologize for nitpicking, but I want to argue against philosophers (that's you, Eliezer) blindly repeating this claim, as if it were obvious or uncontroversial.

Yes, it is strictly correct according to current definitions. However, there was a time when people were striving to find the "correct" definition of the real number line. What people ended up with was not the only possibility, and Dedekind cuts (or various other things) are a pretty ugly, arbitrary construction.

The set containing EVERY number that you might, even in principle, name or pick out with a definition is countable (because the set of names, or definitions, is a subset of the set of strings, which is countable).

The Lowenheim-Skolem theorem says (loosely interpreted) that even if you CLAIM to be talking about uncountably infinite things, there's a perfectly self-consistent interpretation of your talk that refers only to finite things (e.g. your definitions and proofs themselves).

You don't get magical powers of infinity just from claiming to have them. Standard mathematical talk is REALLY WEIRD from a computer science perspective.
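The countability claim in the parenthetical above can be made concrete. Here is a minimal sketch (function names and alphabet are my own illustration) of an explicit bijection between finite strings over a fixed alphabet and the natural numbers, using bijective base-k numeration in shortlex order - so any set of names or definitions is countable.

```python
# Sketch: every finite string over a finite alphabet gets a unique
# natural number (shortlex order), so the set of all possible
# names/definitions is countable.
ALPHABET = "ab"  # any finite alphabet works the same way

def string_to_index(s):
    # Bijective base-k numeration: "" -> 0, "a" -> 1, "b" -> 2, "aa" -> 3, ...
    n = 0
    for ch in s:
        n = n * len(ALPHABET) + ALPHABET.index(ch) + 1
    return n

def index_to_string(n):
    chars = []
    while n > 0:
        n, r = divmod(n - 1, len(ALPHABET))
        chars.append(ALPHABET[r])
    return "".join(reversed(chars))

assert [index_to_string(i) for i in range(5)] == ["", "a", "b", "aa", "ab"]
assert all(string_to_index(index_to_string(n)) == n for n in range(100))
```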

Comment author: 27 April 2011 12:53:17PM 4 points [-]

That's not the Lowenheim-Skolem Theorem. You've confused finite with countable (i.e. finite or countably infinite). Here's a simple example of a theory that can't be satisfied with a finite model.

1. exists x forall y (x != s(y))
2. forall x,y (s(x) = s(y) -> x = y)

Any model that satisfies this must have at least 1 element by axiom 1. Call it 0. s(0) != 0 by axiom 1 (0 is not the successor of anything), so the model must have at least 2 elements. s(s(0)) != s(0) by axiom 2, and s(s(0)) != 0 by axiom 1. So the model has at least 3 elements.

Suppose we have n distinct elements in our model, all obtained by applying s to 0 between 0 and n-1 times. Then we need one more, since s applied n times to 0 differs from s applied k times to 0 for every k < n (peel off matching applications of s using axiom 2 until one side is 0, then apply axiom 1).

So any model that satisfies these axioms must be infinite. (Incidentally, you can get theories that specify the natural numbers with more precision: see http://en.wikipedia.org/wiki/Robinson_Arithmetic).
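A finite sanity check of this argument (my own sketch, not part of the original comment): brute-force enumeration confirms that no unary function s on a small finite domain satisfies both axioms, matching the conclusion that every model must be infinite.

```python
# Enumerate every unary function s on a finite domain {0..n-1} and verify
# that no choice of s satisfies both axioms - so any model is infinite.
from itertools import product

def satisfies_axioms(n, s):
    domain = range(n)
    # Axiom 1: some element is not the successor of anything.
    ax1 = any(all(s[y] != x for y in domain) for x in domain)
    # Axiom 2: s is injective.
    ax2 = all(s[x] != s[y] for x in domain for y in domain if x != y)
    return ax1 and ax2

# (On a finite set, an injective s is a bijection, so everything is a
# successor - contradicting axiom 1.)
for n in range(1, 6):
    assert not any(satisfies_axioms(n, s) for s in product(range(n), repeat=n))
```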

Comment author: 27 April 2011 03:06:56PM 8 points [-]

Mathematicians routinely use "infinite" to mean "infinite in magnitude". For example, the concept "The natural numbers" is infinite in magnitude, but I have picked it out using only 19 ascii characters. From a computer science perspective, it is a finite concept - finite in information content, the number of bits necessary to point it out.

Each of the objects in the set of the Peano integers is finite. The set of Peano integers, considered as a whole, is infinite in magnitude, but finite in information content.

Mathematicians' routine speech sometimes sounds as if a generic real number is a small thing, something that you could pick up and move around. In fact, a generic real number (since it's an element of an uncountable set) is infinite in information content - they're huge, and impossible to encounter, much less pick up.

Lowenheim-Skolem allows you to transform proofs that, on a straightforward reading, claim to be manipulating generic elements of uncountable sets (picking up and moving around real numbers, for example), into proofs that claim to be manipulating elements of countable sets - that is, objects that are finite in information content.

In that transformation, you will probably introduce "objects" which are something like "the double-struck N", and those objects certainly still satisfy internal predicates like "InfiniteInMagnitude(the double-struck N)".

However, you're never forced to believe that mathematicians are routinely doing impossible things - you can always take a formalist stance, pointing out that mathematicians are actually manipulating symbols, which are small, finite-in-information-content things.

Comment author: [deleted] 27 April 2011 03:20:15PM 0 points [-]

Löwenheim-Skolem only applies to first-order theories. While there are models of the theory of real closed fields that are countable, referring to those models as "the real numbers" is somewhat misleading, because there isn't only one of them (up to model-theoretic isomorphism).

Also, if you're going to measure information content, you really need to fix a formal language first, or else "the number of bits needed to express X" is ill-defined.

Basically, learn model theory before trying to wield it.

Comment author: 01 February 2016 04:18:06AM 0 points [-]

Also, if you're going to measure information content, you really need to fix a formal language first, or else "the number of bits needed to express X" is ill-defined.

Basically, learn model theory before trying to wield it.

I don't know model theory, but isn't the crucial detail here whether the number of bits needed to express X is finite or infinite? If so, then it seems we can handwave the specific formal language we're using to describe X, in the same way that we can handwave which encoding of Turing Machines we use when talking about Kolmogorov complexity - even though actually getting a concrete integer K(S) representing the Kolmogorov complexity of a string S requires us to fix an encoding of Turing Machines. In practice, we never actually care what the number K(S) is.

Comment author: [deleted] 02 February 2016 02:43:08AM *  0 points [-]

Let's say I have a specific model of the real numbers in mind, and let's pretend "number of bits needed to describe X" means "log2 the length of the shortest theory that proves the existence of X."

Fix a language L1 whose constants are the rational numbers and which otherwise is the language of linear orders. Then it takes countably many propositions to prove the existence of any given irrational number (i.e., exists x[1] such that x[1] < u[1], ..., exists y[1] such that y[1] > v[1], ..., x[1] = y[1], ... x[1] = x[2], ..., where the sequences u[n] and v[n] are strict upper and lower bounding sequences on the real number in question).

Now fix a language L2 whose constants are the real numbers. It now requires one proposition to prove the existence of any given irrational number (i.e., exists x such that x = c).

The difference between this ill-defined measure of information and Kolmogorov complexity is that Turing Machines are inherently countable, and the languages and models of model theory need not be.

(Disclaimer: paper-machine2011 knew far more about mathematical logic than paper-machine2016 does.)

Comment author: 04 February 2016 06:29:27PM 0 points [-]

let's pretend "number of bits needed to describe X" means "log2 the length of the shortest theory that proves the existence of X."

Whether a theory proves the existence of X may be an undecidable question.

Comment author: 04 February 2016 11:24:14PM 0 points [-]

How many bits it takes to describe X is an undecidable question when defined in other ways, too.

Comment author: 05 February 2016 08:29:24AM *  0 points [-]

The definition "length of the shortest program which minimizes (program length + runtime)" isn't undecidable, although you could argue that that's not what we normally mean by number of bits.

Comment author: 05 February 2016 01:51:09PM 1 point [-]

Adding program length and runtime feels to me like a type error.

Comment author: 27 April 2011 06:24:30PM *  4 points [-]

However, you're never forced to believe that mathematicians are routinely doing impossible things - you can always take a formalist stance, pointing out that mathematicians are actually manipulating symbols, which are small, finite-in-information-content things.

So, given this, what exactly is your complaint? You started off criticizing Eliezer (and whomever else) for saying "The integers are countable, but the real number line is uncountable" - I suppose on the grounds that everything in the physical universe is countable, or something. (You weren't exactly clear.) But now you point out (correctly) that there is a perfectly good interpretation of this statement which in no way depends on there being an uncountable number of physical things anywhere, or otherwise violates your (not-exactly-well-defined) philosophy. So haven't you just defeated yourself?

Comment author: 27 April 2011 07:20:22PM 2 points [-]

I have a knee-jerk response, railing against uncountable sets in general and the real numbers in particular; it's not pretty and I know how to control it better now.

Comment author: [deleted] 27 April 2011 07:29:46PM *  3 points [-]

I'm fairly confident that for your purposes you could live with the computable numbers (that is: those numbers whose decimal expansion can be computed by <fix a Turing-equivalent computational foundation here>), and as long as you didn't need anything stronger than integration amenable to quadrature, you'd be just fine.

There are people who take this route, but I can't think of any off the top of my head. Knuth once stated that he'd like to write a calculus book roughly following this path, but, well, he's got other things on his mind.

EDIT: I should point out also that the computable numbers are countable (by the usual Godel encoding of whatever machine is rattling off the digits for you), and that for all practical intents and purposes they're probably equivalent to whatever calculus-related mischief is in play at the moment.

Comment author: 27 April 2011 07:38:00PM 2 points [-]

There are some weirdnesses down that route - for example, it turns out that you can't distinguish zero from nonzero, so the step function is actually uncomputable.

My contrarian claim is that everyone could live with the nameable numbers - that is, the numbers that can be pointed out using a finite number of books to describe them. People who really strongly care about the uncountability of the reals have a hard time coming up with a concrete example of what they'd miss.
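A sketch of what that zero-testing weirdness looks like in practice (the representation and function names here are my own; one common convention represents a computable real as a function that, given n, returns a rational within 2^-n of it): a sign test can certify that a number is nonzero once its approximation interval excludes 0, but it can never certify that a number is exactly zero, because that would require inspecting infinitely many approximations.

```python
# A computable real is modeled (an assumed convention, not the only one)
# as a function n -> rational approximation within 2**-n of the number.
from fractions import Fraction

def sign_within(x, max_n):
    """Return +1/-1 if x is provably nonzero at precision max_n, else None."""
    for n in range(max_n + 1):
        approx = x(n)
        if abs(approx) > Fraction(1, 2 ** n):  # interval excludes 0
            return 1 if approx > 0 else -1
    return None  # consistent with 0 so far; no finite budget can settle it

zero = lambda n: Fraction(0)
tiny = lambda n: Fraction(1, 2 ** 100)   # nonzero, but extremely close to 0

assert sign_within(zero, 50) is None
assert sign_within(tiny, 50) is None     # indistinguishable from 0 so far
assert sign_within(tiny, 200) == 1       # enough precision finds it
```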

Comment author: [deleted] 27 April 2011 07:43:40PM 3 points [-]

My contrarian claim is that everyone could live with the nameable numbers

I don't understand. Those also seem to fall prey to

it turns out that you can't distinguish zero from nonzero, so the step function is actually uncomputable.

Also,

People who really strongly care about the uncountability of the reals have a hard time coming up with a concrete example of what they'd miss.

Lebesgue measure theory, Gal(C/R) = Z/2Z, and some pathological examples in the history of differential geometry without which the current definition of a manifold would have been much more difficult to ascertain.

Off the top of my head. There are certainly other things I would miss.

Comment author: 27 April 2011 08:15:32PM 0 points [-]

Those are theories, which are not generally lost if you switch the underlying definitions aptly - and they are sometimes improved (if the new definitions are better, or if the switch demonstrates an abstraction that was not previously known).

People can't pick out specific examples of numbers that are lost by switching to using nameable numbers, they can only gesture at broad classes of numbers, like "0.10147829..., choosing subsequent digits according to no specific rule". If you can describe a specific example (using Lebesgue measure theory if you like), then that description is a name for that number.

Comment author: [deleted] 27 April 2011 08:52:06PM 1 point [-]

Those are theories, which are not generally lost...

I really wish I had the time to explicitly write out the reasons why I believe these examples are compelling reasons to use the usual model of the real numbers. I tried, but I've already spent too long and I doubt they would convince you anyway.

People can't pick out specific examples of numbers that are lost by switching to using nameable numbers,

So? Omega could obliterate 99% of the particles in the known universe, and I wouldn't be able to name a particular one. If it turns out in the future that these nameable numbers have nice theoretic properties, sure. The effort to rebuild the usual theory doesn't seem to be worth the benefit of getting rid of uncountability. (Or more precisely, one source of uncountability.)

I think I've spent enough time procrastinating on this topic. I don't see it going anywhere productive.

Comment author: 28 April 2011 11:22:28AM *  7 points [-]

I'm not sure what a "nameable number" is. Whatever countable naming scheme you invent, I can "name" a number that's outside it by the usual diagonal trick: it differs from your first nameable number in the first digit, and so on. (Note this doesn't require choice, the procedure may be deterministic.) Switching from reals to nameable numbers seems to require adding more complexity than I'm comfortable with. Also, I enjoy having a notion of Martin-LĂ¶f random sequences and random reals, which doesn't play nice with nameability.
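The diagonal trick is mechanical enough to sketch in code (the toy enumeration here is my own; a real naming scheme would enumerate all finite definitions): given any list of digit streams, build a stream that differs from the n-th one in its n-th digit.

```python
# The diagonal argument in code: given an enumeration of "nameable"
# numbers (here, an illustrative list of digit-generating functions),
# build a digit stream differing from the n-th number in its n-th digit.
def diagonal_digit(n, enumeration):
    d = enumeration[n](n)          # n-th decimal digit of the n-th number
    return 5 if d != 5 else 6      # any digit guaranteed to differ works

# A toy "enumeration": constant digit streams 0.000..., 0.111..., 0.222...
enum = [(lambda n, k=k: k) for k in range(10)]

diag = [diagonal_digit(n, enum) for n in range(10)]
# diag differs from enum[n] at position n for every n, so the diagonal
# number is outside the enumeration - deterministically, with no choice.
assert all(diag[n] != enum[n](n) for n in range(10))
```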

Comment author: 28 April 2011 09:50:43PM 1 point [-]

Gal(C/R) = Z/2Z

I'm confused; this is true for any real closed field. What are you getting at with this?

Comment author: [deleted] 28 April 2011 10:05:25PM *  0 points [-]

A mistake. I was thinking of C as the so-called "generic complex numbers." You're right that if you replace C with the algebraic closure of whatever countable model's been dreamed up, then C = R[i] and that's it.

Admittedly I'm only conjecturing that Gal(C/K) will be different for some K countable, but I think there's good evidence in favor of it. After all, if K is the algebraic closure of Q, then Gal(C/K) is gigantic. It doesn't seem likely that one could "fix" the other "degrees of freedom" with only countably many irrationals.

Comment author: 28 April 2011 09:51:52PM *  1 point [-]

Of course, whether a number is definable or not depends on the surrounding theory. Stick to first-order theory of the reals and only algebraic numbers will be definable! Definable in ZF? Or what?

EDIT Apr 30: Oops! Obviously definability depends only on the ambient language, not the actual ambient theory... no difference here between ZF and ZFC...

Comment author: 13 May 2011 03:11:52PM *  0 points [-]

Sorry for what might be a silly question, but what do you mean by "generic real number"? In the sense of "one number picked at random from the set", a "generic natural number" would also be huge and impossible to encounter - almost all natural numbers would need more bits than there are Planck volumes in the universe to represent - and it doesn't seem that you're trying to say that.

Comment author: 16 May 2011 10:58:26AM 3 points [-]

If you start selecting things at random, then you need a probability distribution. Many routinely used probability distributions over the natural numbers give you a nontrivial chance of being able to fit the number on your computer.

There are, of course, corresponding probability distributions over the reals (take a probability distribution over the natural numbers and give zero probability to anything else). However, the routinely used probability distributions on the reals give zero probability to the output being a natural number, a rational number, describable with a finite algebraic equation, or in fact, being able to fit the number on your computer.

One of the problems with real numbers is that someone trying to do Bayesian analysis of a sensor that reads 2.000..., or 3.14159..., using one of these real-number distributions as their prior, cannot conclude that the quantity measured probably is 2 or pi, no matter how many digits of precision the sensor goes out to.
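A minimal numeric sketch of this point (the prior shape and numbers are my own illustration): give the exact value 2 an atom of prior probability alongside a continuous component, and the posterior mass on exactly 2 approaches 1 as sensor precision improves; a purely continuous prior keeps it at exactly 0 forever.

```python
def posterior_mass_at_2(atom, eps, width=10.0):
    """P(X == 2 | sensor says X lies within eps of 2).

    Prior (assumed for illustration): probability `atom` on exactly 2,
    the rest spread uniformly over [0, width].
    """
    likelihood_atom = atom * 1.0                      # the atom always fits
    likelihood_cont = (1 - atom) * (2 * eps / width)  # continuous mass in window
    return likelihood_atom / (likelihood_atom + likelihood_cont)

assert posterior_mass_at_2(atom=0.0, eps=1e-6) == 0.0   # continuous prior: never
assert posterior_mass_at_2(atom=0.5, eps=1e-1) > 0.9
assert posterior_mass_at_2(atom=0.5, eps=1e-6) > 0.99999  # precision -> certainty
```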

Comment author: 28 May 2011 10:59:03PM 1 point [-]

I get that the sensor thing was only an example, but still: it doesn't seem like a real objection. I mean, you're not going to have (or need) a sensor with infinitely many decimals of precision. (Or perhaps I'm not understanding you?)

In terms of "selecting things at random", for any practical use I can think of you'll be selecting things like intervals, not real numbers. I don't quite see how that's relevant to the formalism you use to reason about how and what you're calculating.

I think there's some big understanding gap here. Could you explain (or just point to some standard text) how one reasons about trivial things like areas of circles and volumes of spheres without using reals?

Comment author: 31 May 2011 11:20:55AM 4 points [-]

Perhaps you've confused "pi has a decimal expansion that goes on forever without apparent pattern" with "a generic real number has a decimal expansion that goes on forever without pattern"? Pi does have a finite representation, "Pi". We use it all the time, and it's just as precise as "2" or "0.5".

Specifically, you could start with the rationals, and complete it by including all solutions to differential equations. Pi would be included, and many other numbers, but you'd still only have a countable set - because every number would have one or more shortest definitions - finite information content.

If you had a probability distribution over such a set, it would naturally favor hypotheses with short definitions. If it started out including pi as a possibility, and you gathered sufficient sensor data consistent with pi (a finite amount), the probability distribution would give pi as the best hypothesis. This is reasonable behavior. You have to do non-obvious mucking around with your prior to get that sort of reasonableness with standard real-number probabilities.

As others have pointed out, any specific countable system of numbers (such as the "solutions to differential equations" that I mentioned) is susceptible to diagonalization - but I see no reason to "complete" the process, as if being susceptible to diagonalization were a terrible flaw above all others. All the entities that you're actually manipulating (finite precision data and finite, stateable hypotheses like "exactly 2" and "exactly pi") are finite-information-content, and "completing" the reals against diagonalization makes essentially all the reals infinite-information-content - a cure in my mind far worse than the disease.
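Here is a minimal sketch of the update described above (the hypothesis set, names, and sensor model are my own illustration): hypotheses are nameable constants weighted by a 2^-length complexity prior, and each additional digit of sensor data rules out the hypotheses that disagree with it, leaving "pi" as the stateable best hypothesis.

```python
import math

# Hypotheses are nameable constants; the prior favors short definitions.
hypotheses = {"2": 2.0, "3": 3.0, "pi": math.pi, "22/7": 22 / 7, "3.14": 3.14}
prior = {name: 2.0 ** -len(name) for name in hypotheses}

def posterior_after(digits_seen, true_value=math.pi):
    eps = 10.0 ** -digits_seen                 # sensor precision so far
    weights = {name: p for name, p in prior.items()
               if abs(hypotheses[name] - true_value) < eps}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# With coarse data several short names remain live; with six digits of
# precision, only "pi" is still consistent - a stateable best hypothesis.
assert set(posterior_after(0)) == {"3", "pi", "22/7", "3.14"}
assert posterior_after(6) == {"pi": 1.0}
```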

Comment author: 04 June 2011 09:19:34AM *  1 point [-]

(Note: I'm not arguing in this particular post, just asking clarifying questions, as you seem to have the issues much clearer in your mind than I do.)

1) It seems one can start with the naturals, extend them to the integers, then to the rationals, then to whatever set results from including solutions to differential equations (does that have a standard name?). I imagine there are countably infinitely many constructions like that, am I right? They seem to "divide" the numbers "finer" (I'd welcome a hint to a more formal description of this), though they aren't necessarily totally ordered in terms of how "fine" they are, and the limit of this process after an infinity of extensions seems to be the reals. (Am I missing something important up to here? In particular, we can reach the reals much faster; is there some important property in particular the countable extensions have in general, other than their result set being countable and their individual structure?)

2) Do you have other objections to real numbers that do not involve probabilities, probability distributions, and similar information theory concepts?

3) I don't quite grok your π example. It seems to me that a finite amount of sensor data will always only be able to tell you it's consistent with all values in the interval π±ε; if you're using a sufficiently "dense" set, even just the rationals, you'll have an infinity of values in that interval, while using the reals you'll have an uncountable one. In the countable case you'll have to have probabilities for the countable infinity of consistent values, which could result in π being the most probable one, and in the uncountable one you'll need a probability distribution function, which could as well have π as the most probable. (In particular, I can't see a reason why you couldn't find a probability distribution function that has exactly the same value as your probability function when applied to the values in your π-containing countably-infinite set and is "well-behaved" in some sense on the reals between them; but I'm likely to miss something here.)

I sort-of get that picking π in a countable set can be a finite-information operation and an infinite-information one in an uncountable set (though I'm not quite clear if or why that works on sets at least as "finely divided" as the rationals). But that seems to be a trick of picking the right countable set to contain the value you're looking for:

If you started estimating π (let's say, the ratio of circumference to diameter in a Euclidean universe) with, say, just the rationals, you may or may not get a "most likely" hypothesis, but it wouldn't be π; you'll only estimate that one if you happened to start with a set that contained it. And if you use a set that contains π, there will always be some kind of other number that fits in a "finer"-but-countable set you aren't using that you might need to estimate (assuming there are a lot of such sets, as I speculate in point 1 above).

Of course, using the reals doesn't save you from that: you still need an infinite amount of information to find an arbitrary real. But using probability distributions - even if you construct them by picking a probability function on a countable set and then extending it to the reals somehow - forces you to think about the parts outside that countable set (i.e., other even "finer" countable sets). In a way, this feels like reminding you of things you didn't think of.

OK, what am I missing?

Comment author: 04 June 2011 04:57:18PM *  5 points [-]

1) Yes, there are countably many constructions of various kinds of numbers. The construction can presumably be written down, and strings are finite-information-content entities. Yes, they're normally understood to form a set-theoretic lattice - the integers are a subset of the Gaussian integers, and the integers are a subset of the rationals, and both the Gaussian integers and the rationals are subsets of the complex plane.

However, the reals are not in any well-defined sense "the" limit of that lattice - you could create a contrived argument that they are, but you could also create an argument that the natural limit is something else, either stopping sooner, or continuing further to include infinities and infinitesimals or (salient to mathematicians) the complex plane.

Defenders of the reals as a natural concept will use the phrase "the complete ordered field", but when you examine the definition of "complete" they are referencing, it uses a significant amount of set theory (and an ugly Dedekind cuts construction) to include everything that it wants to include, and exclude many other things that might seem to be included.

2) Yes. I think the reals are a history-laden concept; they were built in fear of set-theoretic and calculus paradoxes, and without awareness of the computational approach - information theory and Godel's incompleteness. They are useful primarily in the way that C++ is useful to C++ programmers - as a thicket or swamp of complexity that defends the residents from attack. Any mathematician doing useful work in a statistical, calculus-related, or topological field who casually uses the reals will need someone else, a numerical analyst or computer scientist, to laboriously go through their work and take out the reals, replacing them with a computable (and countable) alternative notion - different notions for different results. Often, this effort is neglected, and people use IEEE floats where the mathematician said "real", and get ridiculous results - or worse, dangerously reasonable results.

3) You're right that the finite amount of sensor data will only say it is consistent with this interval. As you point out, if there is an uncountable set of values within that interval, then it's entirely possible for there to be no single value that is a maximum of the probability distribution function. (That's an excellent example of some of the ridiculous circumlocutions that come from using uncountable sets, when you actually want the system to come up with one or a few best hypotheses, each of which is stateable.)

Pi is always a finite-information entity. Everything nameable is. It doesn't become infinite in information content just because you consider it as an element of the reals.

Yes - if you use a probability distribution over the rationals as your prior, and the actual value is irrational, then you can get bad results. I think this is a serious problem, and we should think hard about what bayesian updating with misspecified models looks like (I know Cosma Shalizi has done some work on this), so that we have some idea what failure looks like. We should also think carefully about what we would consider to be a reasonable hypothesis, one that we might eventually come to rest on.

However, it's a false fork to argue "We wouldn't use the rationals therefore we should use the reals". As I've been trying to say, the reals are a particular, large, complicated, and deeply historical construction, and we should not expect to encounter them "out in the world".

Andrej Bauer has implemented actual real number arithmetic (not IEEE nonsense, or "computable reals", which are interesting, but not reals). Peano integers, in his (OCaml-based) language, RZ, would probably be five or ten lines. (Commutative groups are 13 lines). In contrast, building from commutative groups up to defining reals as sequences of nested intervals takes five pages; see the appendix: http://math.andrej.com/wp-content/uploads/2007/04/rzreals.pdf

Regarding "reminding you of things you didn't think of", I think Cosma Shalizi and Andrew Gelman have convincingly argued that Bayesian philosophy/methodology is flawed - we don't just pick a prior, collect data, do an update, and believe the results. If we were magical uncomputable beasties (Solomonoff induction), possibly that is what we would do. In the real world, there are other steps, including examining the data, including the residual errors, to see if it suggests hypotheses that weren't included in the original prior. http://www.stat.columbia.edu/~gelman/research/unpublished/philosophy.pdf

Comment author: 11 June 2011 09:58:05PM *  1 point [-]

Hi John! Thank you very much for taking the time to answer at such length. The links you included were also very interesting, thanks.

I think I got a bit of insight into the original issue (way up in the comments, when I interjected in your chat with Patrick).

With respect to the points closer in this thread, it's become more like teaching than an actual discussion. I'm much too little educated in the subject, so I could contribute mostly with questions (many inevitably naïve) rather than insights. I'll stop here then; though I am interested, I'm not interested enough right now to educate myself, so I won't impose on your time any longer.

(That is, not unless you want to. I can continue if for some reason you'd take pleasure in educating me further.)

Thank you again for sharing your thoughts :-)

Comment author: 29 April 2011 10:44:28AM 5 points [-]

The Löwenheim-Skolem theorem says (loosely interpreted) that even if you CLAIM to be talking about uncountably infinite things, there's a perfectly self-consistent interpretation of your talk that refers only to finite things (e.g. your definitions and proofs themselves).

Only in first-order logic. In second-order logic, you can actually talk about the natural numbers as distinguished from any other collection, and the uncountable reals.

Amusingly, if you insist that we are only allowed to talk in first-order logic, it is impossible for you to talk about the property "finite", since there is no first-order formula which expresses this property. (Follows from the Compactness Theorem for first-order logic - any set of first-order formulae which are true of unboundedly large finite collections also have models of arbitrarily large infinite cardinality.) Without second-order logic there is no way to talk about this property of "finiteness", or for that matter "countability", which you seem to think is so important.

Comment author: 29 April 2011 11:11:36AM 3 points [-]

Yes, that's my understanding as well.

Proof theory for second-order logic seems to be problematic, and I have a formalist stance towards mathematics in general, which leads me to suspect that the standard definitions of second-order logic are somehow smuggling in uncountable infinities, rather than justifying them.

But I admit second-order logic is not something I've studied in depth.

Comment author: 29 April 2011 11:47:58AM *  8 points [-]

Yeah, second-order logic is basically set theory in disguise. I'm not sure why Eliezer likes it. Example from the Wikipedia page:

There is a finite second-order theory whose only model is the real numbers if the continuum hypothesis holds and which has no model if the continuum hypothesis does not hold. This theory consists of a finite theory characterizing the real numbers as a complete Archimedean ordered field plus an axiom saying that the domain is of the first uncountable cardinality. This example illustrates that the question of whether a sentence in second-order logic is consistent is extremely subtle.

Comment author: 29 April 2011 12:04:44PM *  4 points [-]

You can capture the property "finite" with a first-order sentence over the "standard integers", I think. This leaves open the mystery of what exactly the "standard integers" are, which looks slightly less mysterious than the mystery of "sets" required for second-order logic.

Comment author: [deleted] 29 April 2011 03:40:53PM 0 points [-]

Amusingly, if you insist that we are only allowed to talk in first-order logic, it is impossible for you to talk about the property "finite", since there is no first-order formula which expresses this property.

An equivalent (and in my opinion less misleading) way of putting this is to say that there's no first-order formula which expresses the property of being infinite.

Comment author: 18 March 2009 01:02:42AM 0 points [-]

There are negative possibilities (woken up in dystopia and not allowed to die) but they are exotic, not having equal probability weight to counterbalance the positive possibilities.

Expected utility is the product of two things, probability and utility. Saying the probability is smaller is not a complete argument.

Comment author: 18 March 2009 01:43:43AM 7 points [-]

"There are negative possibilities (woken up in dystopia and not allowed to die) but they are exotic, not having equal probability weight to counterbalance the positive possibilities."

That doesn't seem at all obvious to me. First, our current society doesn't allow people to die, although today law enforcement is spotty enough that they can't really prevent it. I assume far future societies will have excellent law enforcement, including mind reading and total surveillance (unless libertarians seriously get their act together in the next hundred years). I don't see any reason why the taboo on suicide *must* disappear. And any society advanced enough to revive me has by definition conquered death, so I can't just wait it out and die of old age. I place about 50% odds on not being able to die again after I get out.

I'm also less confident the future wouldn't be a dystopia. Even in the best case scenario the future's going to be scary through sheer cultural drift (see: legalized rape in Three Worlds Collide). I don't have to tell *you* that it's easier to get a Singularity that goes horribly wrong than one that goes just right, and even if we restrict the possibilities to those where I get revived instead of turned into paperclips, they could still be pretty grim (what about some well-intentioned person hard-coding in "Promote and protect human life" to an otherwise poorly designed AI, and ending up with something that resurrects the cryopreserved...and then locks them in little boxes for all eternity so they don't consume unnecessary resources.) And then there's just the standard fears of some dictator or fundamentalist theocracy, only this time armed with mind control and total surveillance so there's no chance of overthrowing them.

The deal-breaker is that I really, really don't want to live forever. I might enjoy living a thousand years, but not forever. You could change my mind if you had a utopian post-singularity society that completely mastered Fun Theory. But when I compare the horrible possibility of being forced to live forever either in a dystopia or in a world no better or worse than our own, to the good possibility of getting to live between thousand years and forever in a Fun Theory utopia that can keep me occupied...well, the former seems both more probable and more extreme.

Comment author: 22 January 2011 04:35:29AM 1 point [-]

The threat of dystopia stresses the importance of finding or making a trustworthy, durable institution that will relocate/destroy your body if the political system starts becoming grim.

Of course there is no such thing. Boards can become infiltrated. Missions can drift. Hostile (or even well-intentioned) outside agents can act suddenly before your guardian institution can respond.

But there may be measures you can take to reduce this risk to acceptable levels (i.e., levels comparable to the current risk of, as Yudkowsky mentioned, a secret singularity-in-a-basement):

1. You could make contracts with (multiple) members of the younger generation of cryonicists, on condition that they contract with their younger generation, etc. to guard your body throughout the ages.

2. You can hide a very small bomb in your body that continues to count down slowly even while frozen (I don't know if we have the technology yet, but it doesn't sound too sophisticated), so as to limit the amount of divergence from now that you are willing to expose yourself to [an explosion small enough to destroy your brain, but not the brain next to you].

3. You can have your body hidden and known only to cryonicist leaders.

4. You can have your body's destruction forged.

I don't think any combination of THESE suggestions will suffice. But it is worth a great deal of effort to invent more (and not necessarily share them all online), and to make them possible, if you are considering freezing yourself.

Comment author: 18 May 2013 08:24:05PM *  0 points [-]

There is a minuscule probability that during the next 10 seconds, nanomachines produced by a fresh GAI sweep in through your window and capture you for infinite life and thus, by your argument, infinite hell. Building on your argumentation, the case can be made that you should strive to minimize the probability of that outcome. Therefore, suicide.

Edit: My point has already been made by Eliezer. Let's see how this retracting thingy works.

Comment author: 18 March 2009 01:52:46AM 9 points [-]

"that equally large tiny probabilities offer opposite payoffs for the same action (the Muslim God will damn you for believing in the Christian God)." Utilitarian would rightly attack this, since the probabilities almost certainly won't wind up exactly balancing. A better argument is that wasting time thinking about Christianity will distract you from more probable weird-physics and Simulation Hypothesis Wagers.

A more important criticism is that humans just physiologically don't have any emotions that scale linearly. To the extent that we approximate utility functions, we approximate ones with bounded utility: even utilitarians have only a bounded concern with acting (or aspiring to act, or believing that they aspire to act) as though their concern with good consequences were close to linear in the consequences, i.e., a bounded interest in 'shutting up and multiplying.'
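To see how boundedness defuses Pascal-style offers, here is a toy sketch (the saturating utility function and all the numbers are my own illustration, not anything from the comment above):

```python
import math

def bounded_utility(payoff, scale=1000.0):
    # Utility saturates at 1.0 no matter how large the payoff gets.
    return 1.0 - math.exp(-payoff / scale)

# A Pascal-style offer: tiny probability p of an enormous payoff.
p = 1e-9
huge_payoff = 1e15

# With linear utility the expected value is still huge:
linear_ev = p * huge_payoff  # 1e6

# With bounded utility the expected utility is capped by p itself,
# so the tiny probability dominates no matter how big the payoff:
bounded_ev = p * bounded_utility(huge_payoff)

assert linear_ev > 1e5
assert bounded_ev < 1e-8
```

Under the bounded function, multiplying the payoff by another factor of a trillion changes essentially nothing, which matches the observation that no human emotion actually scales that far.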

Comment author: 24 April 2010 04:35:46AM 6 points [-]

utilitarians have a bounded concern with acting or aspiring to act or believing that they aspire to act as though they have concern with good consequences that is close to linear with the consequences

I know this is not what you were suggesting, but this made me think of goal systems of the form "take the action that I think idealized agent X is most likely to take," e.g. WWAIXID.

A huge problem with these goal systems is that the idealized agent will probably have very low-entropy probability distributions, while your own beliefs have very high entropy. So you'll end up acting as if you believed with near-certainty the single most likely scenario you can think of.

Another problem, of course, is that you'll take actions that only make sense for an agent much more competent than you are. For example, AIXI would be happy to bet \$1 million that it can beat Cho Chikun at Go.
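A toy illustration of the first failure mode above, acting as if the single most likely scenario were certain (the states, payoffs, and probabilities here are all made up):

```python
# My (high-entropy) beliefs over two possible world states:
p = {"A": 0.6, "B": 0.4}

# Payoff of each action in each state:
payoff = {
    "safe":  {"A": 5,  "B": 5},
    "risky": {"A": 10, "B": -100},
}

# Imitating an idealized agent with a near-certain (low-entropy)
# distribution collapses to acting on the single modal state:
mode = max(p, key=p.get)                                  # "A"
mode_action = max(payoff, key=lambda a: payoff[a][mode])  # "risky"

# Acting on my actual beliefs maximizes expectation over all states:
def ev(action):
    return sum(p[s] * payoff[action][s] for s in p)

best_action = max(payoff, key=ev)  # "safe": EV 5 vs EV -34 for "risky"

assert mode_action == "risky"
assert best_action == "safe"
```

The mode-chaser takes the bet that only made sense if state A were certain, which is the "acting as if you believed the single most likely scenario" problem.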

Comment author: 15 February 2015 10:24:06PM 0 points [-]

In the relevant circumstances, I too might be happy to bet \$1M that AIXI can beat Cho Chikun at go.

Comment author: 18 March 2009 02:18:15AM 1 point [-]

Johnicholas:

I agree with your sentiment, however:

There is a perfectly good description of the real numbers that is not ugly. Namely, the real numbers are a complete Archimedean ordered field.

To actually construct them, I think using (Cauchy) convergent sequences of rational numbers would be much less ugly than using Dedekind cuts.

Also, the Löwenheim-Skolem theorem only applies to first-order logic, not second-order logic. Why are you constraining me to use only first-order logic? You have to explain that first.

Comment author: 18 March 2009 02:50:43AM 1 point [-]

"first-order logic cannot, in general, distinguish finite models from infinite models."

Specifically, if a first-order theory has arbitrarily large finite models, then it has an infinite one.

Comment author: 18 March 2009 02:53:25AM 2 points [-]

There is no first-order sentence which is true in all and only finite models and not in any infinite models.

Sketch of conventional proof: The compactness theorem says that if a collection of first-order sentences is inconsistent, then a finite subset of those first-order sentences is inconsistent.

To a sentence or theory true of all finite sets, adjoin the infinite series of statements "This model has at least one element", "This model has at least two elements" (that is, there exist a and b with a != b), "This model has at least three elements" (the finite sentence: exists a, b, c, and a != b, b != c, a != c), and so on.

No finite subset of these statements is inconsistent with the original theory, therefore by compactness the set as a whole is consistent with the original theory. Therefore the original theory possesses an infinite model. QED.
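The adjoined sentences in the sketch above can be written out explicitly; the n-th one is just

```latex
\varphi_n \;:\; \exists x_1 \,\ldots\, \exists x_n \bigwedge_{1 \le i < j \le n} x_i \neq x_j
```

Each $\varphi_n$ is an ordinary first-order sentence, true in a model iff it has at least $n$ elements, so any model of the original theory together with all the $\varphi_n$ is forced to be infinite.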

Comment author: 18 March 2009 04:13:36AM 3 points [-]

Yvain wrote: "The deal-breaker is that I really, really don't want to live forever. I might enjoy living a thousand years, but not forever. "

I'm curious to know how you know that in advance? Isn't it like a kid making a binding decision on its future self?

As Aubrey says, (I'm paraphrasing): "If I'm healthy today and enjoying my life, I'll want to wake up tomorrow. And so on." You live a very long time one day at a time.

Comment author: 18 March 2009 06:03:55AM 7 points [-]

The “isn’t that like Pascal’s wager?” response is plausibly an instance of dark side epistemology, and one that affects many aspiring rationalists.

Many of us came up against the Pascal’s wager argument at some point before we gained much rationality skill, disliked the conclusion, and hunted around for some means of disagreeing with its reasoning. The overcomingbias thread discussing Pascal’s wager strikes me as including a fair number of fallacious comments aimed at finding some rationale, any rationale, for dismissing Pascal’s wager.

If these arguments tended merely to be about factual matters (“Pascal’s wager can’t be true, because, um, the moon moves in such-and-such a manner”), attempts to dismiss Pascal’s wager without solid argument would perhaps not be all that problematic. But in the specific case of Pascal’s wager, ad hoc rationalizations for its dismissal tend to center around methodological claims: people dislike the conclusion, and so make claims against expected value calculations, expected value calculations’ applicability to high payoffs, or other inference or decision theoretic methodologies. This is exactly dark side epistemology; someone dislikes a conclusion, does not have a principled derivation of why the conclusion should be false, and so seizes on some methodology or other to bolster their dismissal -- and then imports that methodology into the rest of their life, with harmful consequences (e.g., avoiding cryonics).

I’m not endorsing Pascal’s wager. Carl’s critique (above, and also in the original thread) strikes me as valid. It’s just that we need to be really really careful about making up rationalizations and importing those rationalized methodologies elsewhere; the rationalizations can hurt us even in cases where the rationalized conclusion turns out to be true. And Pascal’s wager is such an easy situation in which to make methodological rationalizations -- it sounds absurd, it involves religion, which is definitely uncool, and its claimed conclusion threatens things many of us care about, such as how we live our lives and form beliefs. We might need “be specially on guard!” routines for such situations.

Comment author: 18 March 2009 06:28:45AM 2 points [-]

The fallacious arguments against Pascal's Wager are usually followed by motivated stopping.

Comment author: 18 March 2009 07:28:30AM 0 points [-]

Utilitarian would rightly attack this, since the probabilities almost certainly won't wind up exactly balancing.

Utilitarian's reply seems to assume that probability assignments are always precise. We may plausibly suppose, however, that belief states are sometimes vague. Granted this supposition, we cannot infer which probability is higher merely from the fact that the probabilities will not wind up exactly balancing.

Comment author: 18 March 2009 07:48:29AM 2 points [-]

Pablo,

Vagueness might leave you unable to subjectively distinguish probabilities, but you would still expect that an idealized reasoner using Solomonoff induction with unbounded computing power and your sensory info would not view the probabilities as exactly balancing, which would give infinite information value to further study of the question.

The idea that further study wouldn't unbalance estimates in humans is both empirically false in the cases of a number of smart people who have undertaken it, and looks like another rationalization.

Comment author: 18 March 2009 08:59:20AM 4 points [-]

Eliezer, it seems to me that you may be being unfair to those who respond "Isn't that a form of Pascal's wager?". In an exchange of the form

Cryonics Advocate: "The payoff could be a thousand extra years of life or more!"

Cryonics Skeptic: "Isn't that a form of Pascal's wager?"

I observe that CA has made handwavy claims about the size of the payoff, hasn't said anything about how the utility of a long life depends on its length (there could well be diminishing returns), and hasn't offered anything at all like a probability calculation, and has entirely neglected the downsides (I think Yvain makes a decent case that they aren't obviously dominated by the upside). So, here as in the original Pascal's wager, we have someone arguing "put a substantial chunk of your resources into X, which has uncertain future payoff Y" on the basis that Y is obviously very large, and apparently ignoring the three key subtleties, namely how to get from Y to the utility-if-it-works, what other low-probability but high-utility-delta possibilities there are, and just what the probability-that-it-works is. And, here as with the original wager, if the argument does work then its consequences are counterintuitive to many people (presumably including CS).

That wouldn't justify saying "That is just Pascal's wager, and I'm not going to listen to you any more." But what CS actually says is "Isn't that a form of Pascal's wager?". It doesn't seem to me an unreasonable question, and it gives CA an opportunity to explain why s/he thinks the utility really is very large, the probability not very small, etc.

I think the same goes for your infinite-physics argument.

I don't see any grounds for assuming (or even thinking it likely) that someone who says "Isn't that just a form of Pascal's wager?" has made the bizarrely broken argument you suggest that they have. If they've made a mistake, it's in misunderstanding (or failing to listen to, or not guessing correctly) just what the person they're talking to is arguing.

Therefore: I think you've committed a Pascal's Wager Fallacy Fallacy Fallacy.

Comment author: 18 March 2009 09:17:19AM 1 point [-]

g,

This is based on the diavlog with Tyler Cowen, who did explicitly say that decision theory and other standard methodologies doesn't apply well to Pascalian cases.

Comment author: 18 March 2009 02:28:34PM 0 points [-]

@Yvain: Don't ask whether the future contains you; ask what the future can do, worse or better, if it's in possession of the information about you. It can reconstruct you-alive from that information and let the future you enjoy life there, or it can reconstruct you-alive and torture you for eternity. But in which of these cases does the future actually get better or worse depending on whether you give it the information about your structure? Is the torture-people future going to get better because you withhold specifically the information about your brain? That torture-people future must be specifically evil if it cares so much about creating torture experiences especially for the real people who lived in the past, as opposed to, say, paperclipping the universe with torture chambers full of randomly generated people. Evil is far harder than good or mu: you have to get the future almost right for it to care about people at all, yet somehow introduce a sustainable evil twist to it.

Comment author: 18 March 2009 02:56:11PM -1 points [-]

These posts are useful for calibrating the commitment and self-incentive biases. Based on the probabilities espoused (80%, bad outcomes are 'exotic'), I say the impact is 1000x. The world looks pretty utopian from the A/C-cooled academic labs in the US in anno Domini 2009.

Comment author: 18 March 2009 02:56:31PM 0 points [-]

My question is very specific, can you elaborate on what you mean by "holographic limits on quantum entanglement"? I did a search but all I got was woo-woo websites.

Thank you.

Comment author: 18 March 2009 04:48:41PM 2 points [-]

Vladimir, hell is only one bit away from heaven (minus sign in the utility function). I would hope though that any prospective heaven-instigators can find ways to somehow be intrinsically safe wrt this problem.

Comment author: 18 March 2009 05:31:27PM 0 points [-]

Steven, even the minus-utility hell won't get worse because it has information useful for the positive-utility eutopia. Only and specifically the positive-utility eutopia could have a use for such information. You win from providing this information in case of a good outcome, and you don't lose in case of a bad outcome.

Comment author: 18 March 2009 06:02:41PM 0 points [-]

Comment author: 18 March 2009 07:01:02PM 1 point [-]

Carl, it clearly isn't based *only* on that since Eliezer says "You see it all the time in discussion of cryonics".

Comment author: 18 March 2009 09:01:10PM 0 points [-]

Eliezer, thanks - I've found material on the holographic principle and did some reading myself. It's an intriguing idea, but so far one with no experimental basis. Aside from an unconfirmed source of noise in a gravitational wave experiment, it's not known whether the holographic principle/cosmological information bound actually plays a role. Why did you include that in your post - were you just including another possible example of how the universe seems to conspire against our ambitions?

Comment author: 18 March 2009 09:24:11PM 0 points [-]

Pascal's Wager != Pascal's Wager Fallacy. If the original Pascal's wager didn't depend on a highly improbable proposition (the existence of a particular version of god), it would be logically sound (or at least more sound than it is). So I don't see a problem comparing cryonics advocacy logic with Pascal's wager.

On the other hand, I find some of the probability estimates cryonics advocates make to be unsound, so for me this way of cryonics advocacy does look like a Pascal's Wager Fallacy. In particular, I don't see why cryonics advocates put high probability values on being revived in the future (number 3 in Robin Hanson's post) and on liking the future enough to want to live there (look at Yvain's comment to this post). Also, putting unconditional high utility value on a long life span seems a doubtful proposition. I am not sure that a life of torture is better than non-existence.

Comment author: 18 March 2009 09:48:20PM 1 point [-]

What if we phrase a Pascal's Wager-like problem like this:

If every winner of a certain lottery receives \$300 million, a ticket costs \$1, the chances of winning are 1 in 250 million, and you can only buy one ticket, would you buy that ticket?

There's a positive expected value in dollars, but 1 in 250 million is basically not gonna happen (to you, at least).
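For the record, the arithmetic behind that positive expected value (using exactly the numbers in the comment above):

```python
jackpot = 300_000_000    # $300 million payoff
ticket_cost = 1          # $1 per ticket
p_win = 1 / 250_000_000  # stated odds of winning

# Expected value per ticket: $1.20 back per $1 spent, i.e. +$0.20.
expected_value = p_win * jackpot - ticket_cost
assert abs(expected_value - 0.20) < 1e-6
```

So the wager is +EV in dollars; the question the comment raises is whether a 1-in-250-million event should move you at all.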

Comment author: 18 March 2009 10:21:22PM -2 points [-]

@ doug S

I defeat your version of the PW by asserting there is no rational lottery operator who goes forth with the business plan to straight-up lose \$50 million. Thus the probability of your scenario, as with the Christian god, is zero.

Comment author: 18 March 2009 11:04:36PM 2 points [-]

vroman, see the post on Less Wrong about least-convenient possible worlds. And the analogue in Doug's scenario of the existence of (Pascal's) God isn't the reality of the lottery he proposes -- he's just asking you to accept that for the sake of argument -- but your winning the lottery.

Comment author: 18 March 2009 11:10:20PM 4 points [-]

I think a heuristic something like this is often involved: "If someone claims a high benefit (at any probability) for some costly implausible course of action, there's a good chance they're (a) consciously trying to exploit me, (b) infected by a parasitic meme, or (c) getting off on the delusion that they have a valuable Cause. In any of those cases, they'll probably have plenty of persuasive invalid arguments; if I try to analyze these, I may be convinced in spite of myself, so I'd better find whatever justification I can to stop thinking."

vroman: See The Least Convenient Possible World.

Carl: Islam and Christianity may not balance, but what about Christianity and anti-Christianity?

Comment author: 18 March 2009 11:10:31PM 0 points [-]

vroman: Two words - rollover jackpots.

Comment author: 19 March 2009 01:57:01AM 1 point [-]

I read and understood the least convenient possible world post. Given that, let me rephrase your scenario slightly:

If every winner of a certain lottery receives \$X * 300 million, a ticket costs \$X, the chances of winning are 1 in 250 million, you can only buy one ticket, and \$X represents an amount of money you would be uncomfortable to lose, would you buy that ticket?

Answer: no. If the ticket price crosses a certain threshold, then I become risk averse. If it were \$1 or some other relatively inconsequential amount of money, then I would be rationally compelled to buy the nearly-sure-loss ticket.
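This threshold behavior falls out naturally from a concave utility of wealth. A toy sketch with log utility (the \$50,000 bankroll is my own assumption, not anything vroman specified):

```python
import math

# vroman's rephrased lottery: stake $X, jackpot $X * 300 million,
# odds 1 in 250 million.
P_WIN = 1 / 250_000_000
MULTIPLIER = 300_000_000

def eu_buy(wealth, stake):
    # Expected log-utility of wealth after buying one ticket.
    win = math.log(wealth - stake + stake * MULTIPLIER)
    lose = math.log(wealth - stake)
    return P_WIN * win + (1 - P_WIN) * lose

wealth = 50_000.0
small = eu_buy(wealth, 1.0) - math.log(wealth)       # tiny utility dent
large = eu_buy(wealth, 25_000.0) - math.log(wealth)  # huge utility dent

# Interestingly, log utility rejects even the $1 ticket here (the
# jackpot's utility is only log-large), but the per-dollar damage
# grows sharply with the stake -- the risk-aversion threshold.
assert small < 0 and large < 0
assert large / 25_000 < small / 1
```

The expected dollar return is +20% at every stake size; what changes with the stake is how much a near-certain loss of that size hurts.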

Comment author: 07 May 2013 03:04:38PM 0 points [-]

If you'd be rationally compelled to buy one low-cost ticket, then after you've bought the ticket you should be rationally compelled to buy a ticket. And then rationally compelled to buy a ticket.

Sure, at each step you're approaching the possibility with one fewer dollar, but by your phrasing, the number of dollars you have does not influence your decision to buy a ticket (unless you're broke enough that \$1 is no longer a relatively inconsequential amount of money). This method seems to require an injunction against iteration.

Comment author: 19 March 2009 02:04:53AM 2 points [-]

Nick,

"Islam and Christianity may not balance, but what about Christianity and anti-Christianity?" Why would you think that Christianity and anti-Christianity plausibly balance exactly? Spend some time thinking about the distribution of evolved minds and what they might simulate, and you'll get divergence.

Comment author: 19 March 2009 02:19:44AM 1 point [-]

Why would you think that Christianity and anti-Christianity plausibly balance exactly?

Because I've been thinking about algorithmic complexity, not the actions of agents. Good point.

Comment author: 19 March 2009 02:23:38AM 0 points [-]

Specifically, thinking of the algorithmic complexity of the religion - if I were to use priors here, I should be thinking about utility(belief)*prior probability of algorithms computing functions from beliefs to reward or punishment.

Comment author: 19 March 2009 03:01:09AM 6 points [-]

Ask yourself if you would want to revive someone frozen 100 years ago. Most Americans of the time were unabashedly racist, had little concept of electricity and none of computing, had vaguely heard of automobiles, etc. They'd be awakened into a world that they don't understand, a world that judges them by mysterious criteria. It would be worse than being foreign, because the new culture's values were formed at least partially in reaction to the perceived problems of the past.

Comment author: 19 March 2009 07:03:07AM 23 points [-]

Ask yourself if you would want to revive someone frozen 100 years ago.

Yes. They don't deserve to die. Kthx next.

Comment author: 19 March 2009 12:00:10PM 5 points [-]

Ask yourself if you would want to revive someone frozen 100 years ago.

Yes. They don't deserve to die. Kthx next.

I wish that this were on Less Wrong, so that I could vote this up.

Comment author: 23 September 2011 04:46:27PM 2 points [-]

It is now.

Comment author: 30 March 2013 11:39:00AM 1 point [-]

Very well. Upvoted now!

Comment author: 19 March 2009 02:33:36PM 1 point [-]

Does nobody want to address the "how do we know U(utopia) - U(oblivion) is of the same order of magnitude as U(oblivion) - U(dystopia)" argument? (I hesitate to bring this up in the context of cryonics, because it applies to a lot of other things and because people might be more than averagely emotionally motivated to argue for the conclusion that supports their cryonics opinion, but you guys are better than that, right? right?)

Carl, I believe the point is that until I know of a specific argument why one is more likely than the other, I have no choice but to set the probability of christianity equal to the probability of anti-christianity, even though I don't doubt such arguments exist. (Both irrationality-punishers and immorality-punishers seem far less unlikely than nonchristianity-punishers, so it's moot as far as I can tell.)

Vladimir, your argument doesn't apply to moralities with an egoist component of some sort, which is surely what we were discussing even though I'd agree they can't be justified philosophically.

I stand by all the arguments I gave against Pascal's wager in the comments to Utilitarian's post, I think.

Comment author: 19 March 2009 02:36:09PM 1 point [-]

"Most Americans of the time were unabashedly racist, had little concept of electricity and none of computing, had vaguely heard of automobiles, etc."

So if you woke up in a strange world with technologies you don't understand (at first) and mainstream values you disagree with (at first), you would rather commit suicide than try to learn about this new world and see if you can have a pleasant life in it?

Comment author: 19 March 2009 03:31:39PM 0 points [-]

Steven,

Information value.

Comment author: 19 March 2009 03:44:16PM 0 points [-]

irrationality-punishers and immorality-punishers seem far less unlikely than nonchristianity-punishers

If you mean "in rough proportion to the algorithmic complexity of Christianity", nonmajoritarianism-punishers, and presumably plenty of other simple entities, would effectively be nonchristianity-punishers. Probably still true, though.

Comment author: 19 March 2009 03:57:12PM 0 points [-]

Steven, to account for the especially egoist morality, all you need to do is especially value future-you. I don't see how it changes my points.

Comment author: 19 March 2009 07:02:20PM 1 point [-]

Nick, Christians are not a majority (and if they were, an alternative course would be to try to shift majority opinions to something easier to believe, preferably before you died but it has to get done...)

I'm not claiming that U(utopia) - U(oblivion) ~ U(oblivion) - U(dystopia + revival + no suicide), but the question is whether the factor describing the relative interval is greater than the factor of diminished probability for U(dystopia + revival + no suicide), which seems large. Also, steven points out for the benefit of altruists that if it's not you who's tortured in the future dystopia, the same resources will probably be used to create and torture someone else.

Though I hesitate to point this out, the same logic against cryonic suspension also implies that egoists, but not altruists, should immediately commit suicide in case someone is finishing their AI project in a basement, right now. A good number of arguments against cryonics also imply suicide in the present.

Comment author: 19 March 2009 08:30:45PM 4 points [-]

"I'm curious to know how you know that in advance? Isn't it like a kid making a binding decision on its future self? As Aubrey says, (I'm paraphrasing): "If I'm healthy today and enjoying my life, I'll want to wake up tomorrow. And so on." You live a very long time one day at a time."

Good point. I usually trust myself to make predictions of this sort. For example, I predict that I would not want to eat pizza every day in a row for a year, even though I currently like pizza, and this sort of prediction has worked in the past. But I should probably think harder before I become certain that I can make this prediction with something more complicated like my life. I know that many of the very elderly people I know claim they're tired of life and just want to die already, and I predict that I have no special immunity to this phenomenon that will let me hold out forever. But I don't know how much of that is caused by literally being bored with what life has to offer already, and how much of it is caused by decrepitude and inability to do interesting things.

"Evil is far harder than good or mu, you have to get the future almost right for it to care about people at all, but somehow introduce a sustainable evil twist to it."

In all of human society-space, not just the ones that have existed but every possible combination of social structures that could exist, I interpret only a vanishingly small number (the ones that contain large amounts of freedom, for example) as non-evil. Looking over all of human history, the number of societies I would have enjoyed living in is pretty small. I'm not just talking about Dante's Hell here. Even modern day Burma/Saudi Arabia, or Orwell's Oceania, would be awful enough to make me regret not dying when I had the chance.

I don't think it's so hard to get a Singularity that leaves people alive but is still awful. If the problem is a programmer who tried to give it a sense of morality but ended up using a fake utility function or just plain screwing up, he might well end up with a With Folded Hands scenario or Parfit's Mere Addition Paradox (I remember Eliezer saying once - imagine if we get an AI that understands everything perfectly except freedom). And that's just the complicated failure - the simple one is that the government of Communist China develops the Singularity AI and programs it to do whatever they say.

"Also, steven points out for the benefit of altruists that if it's not you who's tortured in the future dystopia, the same resources will probably be used to create and torture someone else."

I think that's false. In most cases I imagine, torturing people is not the terminal value of the dystopia, just something they do to people who happen to be around. In a pre-singularity dystopia, it will be a means of control, and they won't have the resources to 'create' people anyway (except the old-fashioned way). In a post-singularity dystopia, resources won't much matter, and the AI's more likely to be stuck under injunctions to protect existing people than trying to create new ones (unless the problem is the Mere Addition Paradox). Though I admit it would be a very specific subset of rogue AIs that view frozen heads as "existing people".

"Though I hesitate to point this out, the same logic against cryonic suspension also implies that egoists, but not altruists, should immediately commit suicide in case someone is finishing their AI project in a basement, right now. A good number of arguments against cryonics also imply suicide in the present."

I'm glad you hesitated to point it out. Luckily, I'm not as rationalist as I like to pretend :) More seriously, I currently have a lot of things preventing me from suicide. I have a family, a debt to society to pay off, and the ability to funnel enough money to various good causes to shape the future myself instead of passively experiencing it. And less rationally but still powerfully, I have a pretty strong self-preservation urge that would probably kick in if I tried anything. Someday when the Singularity seems very near, I really am going to have to think about this more closely. If I think a dictator's about to succeed on an AI project, or if I've heard the specifics of a project's code and the moral system seems likely to collapse, I do think I'd be sitting there with a gun to my head and my finger on the trigger.

Comment author: 19 March 2009 08:46:11PM 2 points [-]

One more thing: Eliezer, I'm surprised to be on the opposite side as you here, because it's your writings that convinced me a catastrophic singularity, even one from the small subset of catastrophic singularities that keep people alive, is so much more likely than a good singularity. If you tell me I'm misinterpreting you, and you assign high probability to the singularity going well, I'll update my opinion (also, would the high probability be solely due to the SIAI, or do you think there's a decent chance of things going well even if your own project fails?)

Comment author: 19 March 2009 09:18:23PM 0 points [-]

Nick, I'm now sitting here being inappropriately amused at the idea of Hal Finney as Dark Lord of the Matrix.

Eliezer, thanks for responding to that. I'm never sure how much to bring up this sort of morbid stuff. I agree as to what the question is.

Also, steven points out for the benefit of altruists that if it's not you who's tortured in the future dystopia, the same resources will probably be used to create and torture someone else.

It was Vladimir who pointed that out, I just said it doesn't apply to egoists. I actually don't agree that it applies to altruists either; presumably most anything that cared that much about torturing newly created people would also use cryonauts for raw materials. Also, maybe there are "people who are still alive" considerations.

Comment author: 19 March 2009 09:24:48PM 0 points [-]

I don't have to tell *you* that it's easier to get a Singularity that goes horribly wrong than one that goes just right

Don't the acceleration-of-history arguments suggest that there will be another singularity, a century or so after the next one? And another one shortly after that, etc?

What are the chances that they will all go exactly right for us?

Comment author: 19 March 2009 09:36:50PM 2 points [-]

If the problem is a programmer who tried to give it a sense of morality but ended up using a fake utility function or just plain screwing up, he might well end up with a With Folded Hands scenario or Parfit's Mere Addition Paradox (I remember Eliezer saying once - imagine if we get an AI that understands everything perfectly except freedom). And that's just the complicated failure - the simple one is that the government of Communist China develops the Singularity AI and programs it to do whatever they say.

For whatever relief it's worth, someone who thought that was a good idea would have a good chance of building a paperclipper instead. "There is a limit to how competent you can be, and still be that stupid."

Comment author: 19 March 2009 09:42:21PM 5 points [-]

Yvain, while it's hard to get a feel on what exactly happens when one of the meddling dabblers tries to give their AI a goal system, I would mostly expect those AIs to end up as paperclip maximizers, or at most, tiling the universe with tiny molecular smiley-faces. Nothing sentient.

Most AIs gone wrong are just going to disassemble you, not hurt you. I think I've emphasized this a number of times, which is why it's surprising that I've seen both you and Robin Hanson, respectable rationalists both, go on attributing the opposite opinion to me.

Comment author: 19 March 2009 09:55:53PM 5 points [-]

Eliezer, "more AIs are in the hurting class than in the disassembling class" is a distinct claim from "more AIs are in the hurting class than in the successful class", which is the one I interpreted Yvain as attributing to you.

Comment author: 19 March 2009 10:27:26PM 1 point [-]

Isn't there already a good deal of experience regarding the attitudes/actions of the most intelligent entity known (in current times, humans) towards cryonically suspended potential sentient beings (frozen embryos)?

Comment author: 20 March 2009 04:36:42AM 0 points [-]

Yvain, people seem to have a hedonic set point. If you currently prefer life to non-life, I highly doubt you would not if you lived in Saudi Arabia or Burma.

Comment author: 01 June 2009 10:37:27PM *  0 points [-]

"If it is possible for an agent - or, say, the human species - to have an infinite future, and you cut yourself off from that infinite future and end up stuck in a future that is merely very large, this one mistake outweighs all the finite mistakes you made over the course of your existence." Doesn't this arbitrarily favor future events? But future-self isn't current-self; it's literally a different person. Distinguishing between desirable outcomes is tautological: your values precede evaluation.

Comment author: 18 May 2010 01:31:15AM *  2 points [-]

It's odd that the article author shows as [deleted] (Eliezer is the author).

Comment author: 18 May 2010 01:37:17AM 1 point [-]

I assume it appears that way because the article's been deleted - it doesn't appear under its tags, for example.

Comment author: 05 May 2011 09:40:17PM 0 points [-]

If it is possible for an agent - or, say, the human species - to have an infinite future, and you cut yourself off from that infinite future and end up stuck in a future that is merely very large, this one mistake outweighs all the finite mistakes you made over the course of your existence.

The problem with Pascal's Wager is that it allows absurdly large utilities into the equation. If I'm looking at a nice fresh apple, and it's 11:45am just before lunch, and breakfast was at 7am, then suppose the utility increment from eating that apple is X. I'd subjectively estimate that my utility for the best possible future (Heaven for Pascal's wager, the infinite wonderful future in the scenario quoted above) is a utility increment less than one trillion times X, probably less than a billion, perhaps more than a million, definitely more than a thousand. If we make the increment much more, say 3^^^3 times X, then we get into Pascal's Wager problems.
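The bounded-utility point above can be illustrated numerically. This is a toy sketch with numbers of my own choosing (the commenter gives only rough bounds): if the best possible future is capped at, say, a billion times the apple's utility X, then multiplying by an exponentially tiny probability leaves a contribution smaller than mundane certain gains, and the wager loses its grip.

```python
# Toy illustration of why a bounded utility scale defuses Pascal's Wager.
# All numbers are placeholders, not the commenter's.
X = 1.0                        # utility increment of eating the apple
U_BEST = 1e9 * X               # best possible future, capped at a billion X

p_heaven = 1e-12               # exponentially tiny probability, per the post
ev_wager = p_heaven * U_BEST   # expected contribution of the wager
ev_apple = 1.0 * X             # the apple is (nearly) certain

# With the payoff bounded, the tiny probability dominates and the
# mundane certain gain wins; only an unbounded payoff like 3^^^3 * X
# could overwhelm an arbitrarily small probability.
print(ev_wager < ev_apple)
```

Swap `U_BEST` for something on the order of 3^^^3 and the inequality reverses no matter how small `p_heaven` is made, which is exactly the failure mode the commenter wants to rule out.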

Comment author: 06 May 2011 03:27:57AM 4 points [-]

No one has pointed this out, but Muslims consider Christians to be people of the Book and allow them to go to heaven, assuming they are good Christians.

Further, Hindus and Buddhists believe in reincarnation, and believe that if one is a good Christian one will be reincarnated, possibly as a Hindu or Buddhist, the next time around, so it is safe to ignore them in calculating Pascal's Wager. Also, Hindus claim that Christians, Muslims, and Jews all worship the Hindu Brahman.

Since Vatican II, Catholics have also believed that it is possible for anyone who is not Catholic to go to heaven, and in particular they accept most other Christian baptisms as valid.

If one were to look at Pascal's Wager strictly in terms of what each religion believes about other religions going to Hell or Heaven, one would be left with only a relatively few possible choices such that believing in one of them will send you to hell from the others' perspective. If one further considers as evidence the number of believers (of other religions) who allow one the possibility of reaching heaven, then the group with the most evidence would be some ultra-conservative evangelical group that thinks all Catholics and mainstream Protestants are going to Hell but is part of a greater communion recognized by the Catholic Church - something like the Westboro Baptists, if that group were actually Baptist.

Pascal's Wager seems like a poor way to choose a religion if one is more concerned with what is actually true. It is also a poor way to choose a religion if one actually believes that God hears and answers prayers. I am highly biased on this point though and claim additional information.

Comment author: 09 October 2011 04:42:13AM 0 points [-]

At a more practical level, Pascal's Wager's main failure is that it recommends believing strategically rather than rationally. Also, the notion that God would put up with a belief of that sort.

This particular failure mode applies to very few other arguments.

Comment author: 09 October 2011 04:47:28AM 1 point [-]

At a more practical level, Pascal's Wager's main failure is that it recommends believing strategically rather than rationally. Also, the notion that God would put up with a belief of that sort.

That isn't a failure mode. Strategic belief is a perfectly valid desideratum-maximization strategy. The only time strategic belief is an actual failure mode is when you intrinsically value correct belief - in which case you don't believe strategically, and so do not fail.

Comment author: 10 January 2017 11:49:12PM 1 point [-]

How did this post get attributed to [deleted] instead of to Eliezer? I'm 99% sure this post was by him, and the comments seem to bear it out.

Comment author: 11 January 2017 03:23:28AM 1 point [-]

I see Eliezer_Yudkowsky as the account it was posted from. Unsure what you are seeing.

Comment author: 11 January 2017 04:41:39AM 0 points [-]

Additional data point: I see [deleted].

Comment author: 11 January 2017 08:35:59AM *  0 points [-]

Me, as well.

(Edit: looking at Internet Archive's cached snapshots, all of them that I checked look that way to me too.)

(Edit2: it has looked that way to others as well for quite some time. I wouldn't worry about it.)

Comment author: 11 January 2017 05:31:58PM 1 point [-]

Certainly not worth worrying about. It seems just to be a consequence of the article being deleted. But I wonder why it was deleted.