Today at lunch I was discussing interesting facets of second-order logic, such as the (known) fact that first-order logic cannot, in general, distinguish finite models from infinite models. The conversation branched out, as such things do, to why you would want a cognitive agent to think about finite numbers that were unboundedly large, as opposed to boundedly large.
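For the curious, the usual route to that fact is the compactness theorem; here is a minimal sketch of the standard argument (the notation below is mine, for illustration):

```latex
% "There exist at least n distinct elements":
\[
  \lambda_n \;:=\; \exists x_1 \cdots \exists x_n \bigwedge_{1 \le i < j \le n} x_i \neq x_j .
\]
% If a first-order theory T has arbitrarily large finite models, then every finite
% subset of T \cup \{\lambda_n : n \in \mathbb{N}\} is satisfied by some sufficiently
% large finite model of T, so by compactness the whole set has a model, which must
% be infinite. Hence no first-order theory is true in exactly the finite models.
```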
So I observed that:
- Although the laws of physics as we know them don't allow any agent to survive for infinite subjective time (that is, to perform an unboundedly long sequence of computations), it's possible that our model of physics is mistaken. (I go into some detail on this possibility below, under "Further details".)
- If it is possible for an agent - or, say, the human species - to have an infinite future, and you cut yourself off from that infinite future and end up stuck in a future that is merely very large, this one mistake outweighs all the finite mistakes you made over the course of your existence.
And the one said, "Isn't that a form of Pascal's Wager?"
I'm going to call this the Pascal's Wager Fallacy Fallacy.
You see it all the time in discussion of cryonics. The one says, "If cryonics works, then the payoff could be, say, at least a thousand additional years of life." And the other one says, "Isn't that a form of Pascal's Wager?"
The original problem with Pascal's Wager is not that the purported payoff is large; that is not where the flaw in the reasoning comes from. The problem with Pascal's original Wager is that the probability is exponentially tiny (in the complexity of the Christian God), and that comparably tiny probabilities offer opposite payoffs for the same action (the Muslim God will damn you for believing in the Christian God).
However, what we have here is the term "Pascal's Wager" being applied solely because the payoff being considered is large - the reasoning being perceptually recognized as an instance of "the Pascal's Wager fallacy" as soon as someone mentions a big payoff - without any attention being given to whether the probabilities are in fact small or whether counterbalancing anti-payoffs exist.
And then, once the reasoning is perceptually recognized as an instance of "the Pascal's Wager fallacy", the other characteristics of the fallacy are automatically inferred: the probability is assumed to be tiny, and the scenario is assumed to have no specific support apart from the payoff.
But infinite physics and cryonics are both possibilities that, leaving their payoffs entirely aside, get significant chunks of probability mass purely on merit.
Yet instead we have reasoning that runs like this:
- Cryonics has a large payoff;
- Therefore, the argument carries even if the probability is tiny;
- Therefore, the probability is tiny;
- Therefore, why bother thinking about it?
(Posted here instead of Less Wrong, at least for now, because of the Hanson/Cowen debate on cryonics.)
Further details:
Pascal's Wager is actually a serious problem for those of us who want to use Kolmogorov complexity as an Occam prior, because the size of even the finite computations blows up much faster than their probability diminishes (see here).
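A rough way to see the problem, in notation I'm introducing here just for illustration (with BB the Busy Beaver function):

```latex
% Sketch: under a Kolmogorov/Solomonoff-style Occam prior, a hypothesis described
% by a program p gets prior mass on the order of 2^{-\ell(p)}, where \ell(p) is
% the program's length, so the naive expectation looks like
\[
  \mathbb{E}[U] \;\approx\; \sum_{p} 2^{-\ell(p)} \, U(p) .
\]
% But a program of length \ell(p) can describe payoffs as large as
% \mathrm{BB}(\ell(p)) \gg 2^{\ell(p)}: the payoffs that short programs can name
% grow far faster than their prior probability shrinks, so the terms need not go
% to zero and the sum can fail to converge.
```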
See Bostrom on infinite ethics for how much worse things get if you allow non-halting Turing machines.
In our current model of physics, time is infinite, and so the collection of real things is infinite. Each time state has a successor state, and there's no particular assertion that time returns to the starting point. Considering time's continuity just makes it worse - now we have an uncountable set of real things!
But current physics also says that any finite amount of matter can only do a finite amount of computation, and the universe is expanding too fast for us to collect an infinite amount of matter. We cannot, on the face of things, expect to think an unboundedly long sequence of thoughts.
The laws of physics cannot be easily modified to permit immortality: lightspeed limits and an expanding universe and holographic limits on quantum entanglement and so on all make it inconvenient to say the least.
On the other hand, many computationally simple laws of physics, like the laws of Conway's Life, permit indefinitely running Turing machines to be encoded. So we can't say that it requires a complex miracle for us to confront the prospect of unboundedly long-lived, unboundedly large civilizations. Just there being a lot more to discover about physics - say, one more discovery of the size of quantum mechanics or Special Relativity - might be enough to knock (our model of) physics out of the region that corresponds to "You can only run boundedly large Turing machines".
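To make the "computationally simple laws" point concrete, here is a minimal sketch of Life's entire transition rule (my own illustrative code, not anything from the post). The "physics" fits in a few lines, yet this universe is known to support glider-based signals, logic gates, and hence indefinitely running Turing machines.

```python
from collections import Counter

def step(live_cells):
    """One tick of Conway's Life. `live_cells` is a set of (x, y) coordinates."""
    # Count live neighbors for every cell adjacent to a live cell.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbors; survival on 2 or 3.
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A glider: the pattern that carries signals across the grid, which is what makes
# Turing-machine constructions possible on an unbounded board.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # same shape as the original, shifted one cell diagonally
```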
So while we have no particular reason to expect physics to allow unbounded computation, it's not a small, special, unjustifiably singled-out possibility like the Christian God; it's a large region of what various possible physical laws will allow.
And cryonics, of course, is the default extrapolation from known neuroscience: if memories are stored the way we now think, and cryonics organizations are not disturbed by any particular catastrophe, and technology goes on advancing toward the physical limits, then it is possible to revive a cryonics patient (and yes you are the same person). There are negative possibilities (woken up in dystopia and not allowed to die) but they are exotic, not having equal probability weight to counterbalance the positive possibilities.
"I'm curious to know how you know that in advance? Isn't it like a kid making a binding decision on its future self? As Aubrey says, (I'm paraphrasing): "If I'm healthy today and enjoying my life, I'll want to wake up tomorrow. And so on." You live a very long time one day at a time."
Good point. I usually trust myself to make predictions of this sort. For example, I predict that I would not want to eat pizza every day for a year, even though I currently like pizza, and this sort of prediction has worked in the past. But I should probably think harder before I become certain that I can make the same kind of prediction about something as complicated as my life. Many of the very elderly people I know claim they're tired of life and just want to die already, and I don't predict that I have any special immunity to this phenomenon that will let me hold out forever. But I don't know how much of that is caused by literally being bored with what life has to offer, and how much is caused by decrepitude and the inability to do interesting things.
"Evil is far harder than good or mu, you have to get the future almost right for it to care about people at all, but somehow introduce a sustainable evil twist to it."
In all of human society-space - not just the societies that have existed, but every possible combination of social structures that could exist - I count only a vanishingly small number (the ones that contain large amounts of freedom, for example) as non-evil. Looking over all of human history, the number of societies I would have enjoyed living in is pretty minimal. I'm not just talking about Dante's Hell here. Even modern-day Burma/Saudi Arabia, or Orwell's Oceania, would be awful enough to make me regret not dying when I had the chance.
I don't think it's so hard to get a Singularity that leaves people alive but is still awful. If the problem is a programmer who tried to give it a sense of morality but ended up using a fake utility function or just plain screwing up, he might well end up with a With Folded Hands scenario or Parfit's Mere Addition Paradox (I remember Eliezer saying once - imagine if we get an AI that understands everything perfectly except freedom). And that's just the complicated failure - the simple one is that the government of Communist China develops the Singularity AI and programs it to do whatever they say.
"Also, steven points out for the benefit of altruists that if it's not you who's tortured in the future dystopia, the same resources will probably be used to create and torture someone else."
I think that's false. In most cases I imagine, torturing people is not the terminal value of the dystopia, just something they do to people who happen to be around. In a pre-singularity dystopia, it will be a means of control, and they won't have the resources to 'create' people anyway (except the old-fashioned way). In a post-singularity dystopia, resources won't much matter, and the AI is more likely to be stuck under injunctions to protect existing people than to be trying to create new ones (unless the problem is the Mere Addition Paradox). Though I admit it would be a very specific subset of rogue AIs that view frozen heads as "existing people".
"Though I hesitate to point this out, the same logic against cryonic suspension also implies that egoists, but not altruists, should immediately commit suicide in case someone is finishing their AI project in a basement, right now. A good number of arguments against cryonics also imply suicide in the present."
I'm glad you hesitated to point it out. Luckily, I'm not as rationalist as I like to pretend :) More seriously, I currently have a lot of things preventing me from suicide. I have a family, a debt to society to pay off, and the ability to funnel enough money to various good causes to shape the future myself instead of passively experiencing it. And, less rationally but still powerfully, I have a strong self-preservation urge that would probably kick in if I tried anything. Someday, when the Singularity seems very near, I really am going to have to think about this more closely. If I think a dictator's about to succeed on an AI project, or if I've heard about the specifics of a project's code and the moral system seems likely to collapse, I do think I'd be sitting there with a gun to my head and my finger on the trigger.