One of our most controversial posts ever was "Torture vs. Dust Specks".  Though I can't seem to find the reference, one of the more interesting uses of this dilemma was by a professor whose student said "I'm a utilitarian consequentialist", and the professor said "No you're not" and told them about SPECKS vs. TORTURE, and then the student - to the professor's surprise - chose TORTURE.  (Yay student!)

In the spirit of always making these things worse, let me offer a dilemma that might have been more likely to unconvince the student - at least, as a consequentialist, I find the inevitable conclusion much harder to swallow.

I'll start by briefly introducing Parfit's Repugnant Conclusion, sort of a little brother to the main dilemma.  Parfit starts with a world full of a million happy people - people with plenty of resources apiece.  Next, Parfit says, let's introduce one more person who leads a life barely worth living - but since their life is worth living, adding this person must be a good thing.  Now we redistribute the world's resources, making it fairer, which is also a good thing.  Then we introduce another person, and another, until finally we've gone to a billion people whose lives are barely at subsistence level.  And since (Parfit says) it's obviously better to have a million happy people than a billion people at subsistence level, we've gone in a circle and revealed inconsistent preferences.

My own analysis of the Repugnant Conclusion is that its apparent force comes from equivocating between senses of barely worth living.  In order to voluntarily create a new person, what we need is a life that is worth celebrating or worth birthing, one that contains more good than ill and more happiness than sorrow - otherwise we should reject the step where we choose to birth that person.  Once someone is alive, on the other hand, we're obliged to take care of them in a way that we wouldn't be obliged to create them in the first place - and they may choose not to commit suicide, even if their life contains more sorrow than happiness.  If we would be saddened to hear the news that such a person existed, we shouldn't kill them, but we should not voluntarily create such a person in an otherwise happy world.  So each time we voluntarily add another person to Parfit's world, we have a little celebration and say with honest joy "Whoopee!", not, "Damn, now it's too late to uncreate them."

And then the rest of the Repugnant Conclusion - that it's better to have a billion lives slightly worth celebrating, than a million lives very worth celebrating - is just "repugnant" because of standard scope insensitivity.  The brain fails to multiply a billion small birth celebrations to end up with a larger total celebration of life than a million big celebrations.  Alternatively, average utilitarians - I suspect I am one - may just reject the very first step, in which the average quality of life goes down.

But now we introduce the Repugnant Conclusion's big sister, the Lifespan Dilemma, which - at least in my own opinion - seems much worse.

To start with, suppose you have a 20% chance of dying in an hour, and an 80% chance of living for 10^10,000,000,000 years -

Now I know what you're thinking, of course.  You're thinking, "Well, 10^(10^10) years may sound like a long time, unimaginably vaster than the roughly 10^10 years the universe has lasted so far, but it isn't much, really.  I mean, most finite numbers are very much larger than that.  The realms of math are infinite, the realms of novelty and knowledge are infinite, and Fun Theory argues that we'll never run out of fun.  If I live for 10^10,000,000,000 years and then die, then when I draw my last metaphorical breath - not that I'd still have anything like a human body after that amount of time, of course - I'll go out raging against the night, for a life so short compared to all the experiences I wish I could have had.  You can't compare that to real immortality.  As Greg Egan put it, immortality isn't living for a very long time and then dying.  Immortality is just not dying, ever."

Well, I can't offer you real immortality - not in this dilemma, anyway.  However, on behalf of my patron, Omega, who I believe is sometimes also known as Nyarlathotep, I'd like to make you a little offer.

If you pay me just one penny, I'll replace your 80% chance of living for 10^(10^10) years, with a 79.99992% chance of living 10^(10^(10^10)) years.  That's 99.9999% of 80%, so I'm just shaving a tiny fraction 10^(-6) off your probability of survival, and in exchange, if you do survive, you'll survive - not ten times as long, my friend, but ten to the power of as long.  And it goes without saying that you won't run out of memory (RAM) or other physical resources during that time.  If you feel that the notion of "years" is ambiguous, let's just measure your lifespan in computing operations instead of years.  Really there's not much of a difference when you're dealing with numbers like 10^(10^10,000,000,000).

My friend - can I call you friend? - let me take a few moments to dwell on what a wonderful bargain I'm offering you.  Exponentiation is a rare thing in gambles.  Usually, you put $1,000 at risk for a chance at making $1,500, or some multiplicative factor like that.  But when you exponentiate, you pay linearly and buy whole factors of 10 - buy them in wholesale quantities, my friend!  We're talking here about 10^10,000,000,000 factors of 10!  If you could use $1,000 to buy a 99.9999% chance of making $10,000 - gaining a single factor of ten - why, that would be the greatest investment bargain in history, too good to be true, but the deal that Omega is offering you is far beyond that!  If you started with $1, it takes a mere eight factors of ten to increase your wealth to $100,000,000.  Three more factors of ten and you'd be the wealthiest person on Earth.  Five more factors of ten beyond that and you'd own the Earth outright.  How old is the universe?  Ten factors-of-ten years.  Just ten!  How many quarks in the whole visible universe?  Around eighty factors of ten, as far as anyone knows.  And we're offering you here - why, not even ten billion factors of ten.  Ten billion factors of ten is just what you started with!  No, this is ten to the ten billionth power factors of ten.

Now, you may say that your utility isn't linear in lifespan, just like it isn't linear in money.  But even if your utility is logarithmic in lifespan - a pessimistic assumption, surely; doesn't money decrease in value faster than life? - why, just the logarithm goes from 10,000,000,000 to 10^10,000,000,000.
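(A minimal numeric check of the claim just above - my own sketch, not part of Omega's pitch - assuming utility equals log10 of lifespan in years, i.e. the "logarithmic in lifespan" case.  The expected utilities are compared through their own log10, since 10^(10^10) is far too large for a float.)

```python
import math

# Refuse:  EU = 0.8       * 10^10        (lifespan 10^(10^10) years, log-utility 10^10)
# Accept:  EU = 0.7999992 * 10^(10^10)   (lifespan 10^(10^(10^10)) years, log-utility 10^(10^10))
log10_eu_refuse = math.log10(0.8) + 10
log10_eu_accept = math.log10(0.7999992) + 1e10

print(log10_eu_accept - log10_eu_refuse)   # ~10^10: even log utility prefers the deal
                                           # by about ten billion orders of magnitude
```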

From a fun-theoretic standpoint, exponentiating seems like something that really should let you have Significantly More Fun.  If you can afford to simulate a mind a quadrillion bits large, then you merely need 2^(1,000,000,000,000,000) times as much computing power - a quadrillion factors of 2 - to simulate all possible minds with a quadrillion binary degrees of freedom so defined.  Exponentiation lets you completely explore the whole space of which you were previously a single point - and that's just if you use it for brute force.  So going from a lifespan of 10^(10^10) to 10^(10^(10^10)) seems like it ought to be a significant improvement, from a fun-theoretic standpoint.

And Omega is offering you this special deal, not for a dollar, not for a dime, but one penny!  That's right!  Act now!  Pay a penny and go from a 20% probability of dying in an hour and an 80% probability of living 10^10,000,000,000 years, to a 20.00008% probability of dying in an hour and a 79.99992% probability of living 10^(10^10,000,000,000) years!  That's far more factors of ten in your lifespan than the number of quarks in the visible universe raised to the millionth power!

Is that a penny, friend?  - thank you, thank you.  But wait!  There's another special offer, and you won't even have to pay a penny for this one - this one is free!  That's right, I'm offering to exponentiate your lifespan again, to 10^(10^(10^10,000,000,000)) years!  Now, I'll have to multiply your probability of survival by 99.9999% again, but really, what's that compared to the nigh-incomprehensible increase in your expected lifespan?

Is that an avaricious light I see in your eyes?  Then go for it!  Take the deal!  It's free!

(Some time later.)

My friend, I really don't understand your grumbles.  At every step of the way, you seemed eager to take the deal.  It's hardly my fault that you've ended up with... let's see... a probability of 1/10^1000 of living 10^^(2,302,360,800) years, and otherwise dying in an hour.  Oh, the ^^?  That's just a compact way of expressing tetration, or repeated exponentiation - it's really supposed to be Knuth up-arrows, ↑↑, but I prefer to just write ^^.  So 10^^(2,302,360,800) means 10^(10^(10^...^10)) where the exponential tower of tens is 2,302,360,800 layers high.
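(As a check on Omega's bookkeeping - my arithmetic, working in log space - here is how roughly 2.3 billion deals at 99.9999% apiece take an 80% survival probability down to about 1/10^1000, with the lifespan's tower of tens growing one layer per deal.)

```python
import math

start_p  = 0.8          # starting survival probability
per_deal = 0.999999     # survival multiplier per accepted deal
target   = -1000        # log10 of the final survival probability, i.e. 1/10^1000

# Solve 0.8 * 0.999999^N = 10^-1000 for N, in logs to avoid underflow.
deals = (target - math.log10(start_p)) / math.log10(per_deal)
print(f"{deals:,.0f}")  # roughly 2,302,360,800, matching the tower height quoted above
```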

But, tell you what - these deals are intended to be permanent, you know, but if you pay me another penny, I'll trade you your current gamble for an 80% probability of living 10^10,000,000,000 years.

Why, thanks!  I'm glad you've given me your two cents on the subject.

Hey, don't make that face!  You've learned something about your own preferences, and that's the most valuable sort of information there is!

Anyway, I've just received telepathic word from Omega that I'm to offer you another bargain - hey!  Don't run away until you've at least heard me out!

Okay, I know you're feeling sore.  How's this to make up for it?  Right now you've got an 80% probability of living 10^10,000,000,000 years.  But right now - for free - I'll replace that with an 80% probability (that's right, 80%) of living 10^^10 years, that's 10^10^10^10^10^10^10^10^10,000,000,000 years.

See?  I thought that'd wipe the frown from your face.

So right now you've got an 80% probability of living 10^^10 years.  But if you give me a penny, I'll tetrate that sucker!  That's right - your lifespan will go to 10^^(10^^10) years!  That's an exponential tower (10^^10) tens high!  You could write that as 10^^^3, by the way, if you're interested.  Oh, and I'm afraid I'll have to multiply your survival probability by 99.99999999%.

What?  What do you mean, no?  The benefit here is vastly larger than the mere 10^^(2,302,360,800) years you bought previously, and you merely have to send your probability to 79.999999992% instead of 1/10^1000 to purchase it!  Well, that and the penny, of course.  If you turn down this offer, what does it say about that whole road you went down before?  Think of how silly you'd look in retrospect!  Come now, pettiness aside, this is the real world, wouldn't you rather have a 79.999999992% probability of living 10^^(10^^10) years than an 80% probability of living 10^^10 years?  Those arrows suppress a lot of detail, as the saying goes!  If you can't have Significantly More Fun with tetration, how can you possibly hope to have fun at all?

Hm?  Why yes, that's right, I am going to offer to tetrate the lifespan and fraction the probability yet again... I was thinking of taking you down to a survival probability of 1/(10^^^20), or something like that... oh, don't make that face at me, if you want to refuse the whole garden path you've got to refuse some particular step along the way.

Wait!  Come back!  I have even faster-growing functions to show you!  And I'll take even smaller slices off the probability each time!  Come back!

...ahem.

While I feel that the Repugnant Conclusion has an obvious answer, and that SPECKS vs. TORTURE has an obvious answer, the Lifespan Dilemma actually confuses me - the more I demand answers of my mind, the stranger my intuitive responses get.  How are yours?

Based on an argument by Wei Dai.  Dai proposed a reductio of unbounded utility functions by (correctly) pointing out that an unbounded utility on lifespan implies willingness to trade an 80% probability of living some large number of years for a 1/(3^^^3) probability of living some sufficiently longer lifespan.  I looked at this and realized that there existed an obvious garden path, which meant that denying the conclusion would create a preference reversal.  Note also the relation to the St. Petersburg Paradox, although the Lifespan Dilemma requires only a finite number of steps to get us in trouble.
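(A rough sketch of the structural point, under assumptions of my own rather than Wei Dai's construction: I fix the jumps at one tower layer per deal and define utility directly on the height of the exponential tower.  In Wei Dai's argument Omega can instead scale each jump to fit whatever unbounded utility function you have, so only boundedness itself guarantees that the garden path ends.  Even so, the contrast is suggestive: a bounded utility steps off the path almost at once, while an absurdly concave but unbounded one keeps walking for about a million deals.)

```python
DECAY = 0.999999   # survival probability multiplier per deal

def deals_accepted(utility, max_deals=10_000_000):
    """Count deals an expected-utility maximizer accepts; the current
    survival probability cancels out of each comparison."""
    n = 3                                   # start: 10^(10^10) years, a tower of three tens
    for k in range(max_deals):
        if DECAY * utility(n + 1) <= utility(n):
            return k                        # refuses the next exponentiation here
        n += 1
    return max_deals

bounded   = lambda n: 1 - 0.5 ** n          # approaches an upper bound
unbounded = lambda n: n                     # grows without bound, but absurdly slowly
                                            # in actual years (an iterated logarithm)

print(deals_accepted(bounded))              # 16: stops after a handful of deals
print(deals_accepted(unbounded))            # 999996: about a million deals before stopping
```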

The Lifespan Dilemma
220 comments (some truncated)
gwern

“Only the tailor, sir, with your little bill,” said a meek voice outside the door.
“Ah, well, I can soon settle his business,” the Professor said to the children, “if you’ll just wait a minute. How much is it, this year, my man?” The tailor had come in while he was speaking.
“Well, it’s been a-doubling so many years, you see,” the tailor replied, a little gruffly, “and I think I’d like the money now. It’s two thousand pound, it is!”
“Oh, that’s nothing!” the Professor carelessly remarked, feeling in his pocket, as if he always carried at least that amount about with him. “But wouldn’t you like to wait just another year and make it four thousand? Just think how rich you’d be! Why, you might be a king, if you liked!”
“I don’t know as I’d care about being a king,” the man said thoughtfully. “But it dew sound a powerful sight o’ money! Well, I think I’ll wait-“
“Of course you will!” said the Professor. “There’s good sense in you, I see. Good-day to you, my man!”
“Will you ever have to pay him that four thousand pounds?” Sylvie asked as the door closed on the departing creditor.
“Never, my child!” the Professor replied emphatically. “He’ll go on doubling it till he dies. You see, it’s always worth while waiting another year to get twice as much money!”

--Sylvie and Bruno, Lewis Carroll

R0k0

I think that the answer to this conundrum is to be found in Joshua Greene's dissertation. On page 202 he says:

"The mistake philosophers tend to make is in accepting rationalism proper, the view that our moral intuitions (assumed to be roughly correct) must be ultimately justified by some sort of rational theory that we’ve yet to discover ... a piece of moral theory with justificatory force and not a piece of psychological description concerning patterns in people’s emotional responses."

When Eliezer presents himself with this dilemma, the neural/hormonal processes in his mind that govern reward and decisionmaking fire "Yes!" on each of a series of decisions that end up, in aggregate, losing him $0.02 for no gain.

Perhaps this is surprising because he implicitly models his "moral intuition" as sampling true statements from some formal theory of Eliezer morality, which he must then reconstruct axiomatically.

But the neural/hormonal decisionmaking/reward processes in the mind are just little bits of biology that squirt hormones around and give us happy or sad feelings according to their own perfectly lawful operation. It is just that if you interpret those... (read more)

If you are not Roko, you should change your username to avoid confusion.

While I feel that the Repugnant Conclusion has an obvious answer, and that SPECKS vs. TORTURE has an obvious answer,

I wonder if the reason you think your answers are obvious is that you learned about scope insensitivity, saw the obvious stupidity of that, and then jumped to the opposite conclusion, that life must be valued without any discounting whatsoever.

But perhaps there is a happy middle ground between the crazy kind of moral discounting that humans naively do, and no discounting. And even if that's not the case, if the right answer really does lie on the extreme of the space of possibilities instead of the much larger interior, I don't see how that conclusion could be truly obvious.

In general, your sense of obviousness might be turned up a bit too high. As evidence of that, there were times when I apparently convinced you of the "obvious" correctness or importance of some idea before I'd convinced myself.

Vladimir_Nesov
Notice that, as was mentioned, torture vs. specks also works for average utilitarianism: in that case, the negative effect of torture is effectively divided by the huge number of people, making it negligible in comparison with a speck, in contrast with total utilitarianism that multiplies the effect of a speck by the number of people, making it huge in comparison with that of torture. So, it's not so much about the extent of discounting, as about asymmetric discounting, which would make the problem depend on who the tortured person is.
Jonathan_Graehl
I misread Vladimir's comment as "torture vs. specks only works for average" ... when in fact he said "also works". So what I said was in fact already obvious to him. My apologies. ---------------------------------------- Avg utility for torture is (Nk-T)/N. Avg utility for dust specks is (Nk-nD)/N, where n is the number (3^^3^^3) who'd get dust specks, N>n is the total number of people, and per person: k is the mean utility, and T and D are the disutilities of torture and of a single dust speck, respectively. For the total utility, just remove the "/N" part. There's no difference in which you should prefer under avg. vs total.
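(A toy check of the point above, with made-up numbers nothing like 3^^3^^3: when nobody is created or destroyed, dividing both totals by the same N cannot change which option comes out ahead, so average and total utilitarianism rank TORTURE vs. SPECKS identically.)

```python
N = 10**13     # total number of people
n = 10**12     # number who would each get one dust speck (n < N)
k = 100.0      # mean baseline utility per person
T = 10**7      # disutility of the torture, borne by one person
D = 0.001      # disutility of a single dust speck

total_torture, total_specks = N * k - T, N * k - n * D
avg_torture, avg_specks = total_torture / N, total_specks / N

# Same ranking under both theories:
print((total_torture > total_specks) == (avg_torture > avg_specks))  # True
```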
Johnicholas
A small change - the differences between average and total utility occur in decisions on whether to create a person or not. Average utilitarians create people if their utility would be higher than average, while total utilitarians create people if their utility would be positive.
Alicorn
And if they would not, in existing, decrease anyone else's utility by enough to offset their own.
Jonathan_Graehl
That's true. I was only addressing dust specks vs. torture, where people are neither created nor destroyed. Just saying that would have been sufficient; it's a generally sufficient condition for the preferred outcome to be the same under avg. vs total.

My own analysis of the Repugnant Conclusion is that its apparent force comes from equivocating between senses of barely worth living. In order to voluntarily create a new person, what we need is a life that is worth celebrating or worth birthing, one that contains more good than ill and more happiness than sorrow - otherwise we should reject the step where we choose to birth that person. Once someone is alive, on the other hand, we're obliged to take care of them in a way that we wouldn't be obliged to create them in the first place - and they may choose not to commit suicide, even if their life contains more sorrow than happiness. If we would be saddened to hear the news that such a person existed, we shouldn't kill them, but we should not voluntarily create such a person in an otherwise happy world. So each time we voluntarily add another person to Parfit's world, we have a little celebration and say with honest joy "Whoopee!", not, "Damn, now it's too late to uncreate them."

And then the rest of the Repugnant Conclusion - that it's better to have a million lives very worth celebrating, than a billion lives slightly worth celebrating - is just "repugn

... (read more)
MixedNuts
Where can I find the theorems?
utilitymonster
Gustaf Arrhenius is the main person to look at on this topic. His website is here. Check out ch. 10-11 of his dissertation Future Generations: A Challenge for Moral Theory (though he has a forthcoming book that will make that obsolete). You may find more papers on his website. Look at the papers that contain the words "impossibility theorem" in the title.
steven0461
Do average utilitarians have a standard answer to the question of what is the average welfare of zero people? The theory seems consistent with any such answer. If you're maximizing the average welfare of the people alive at some future point in time, and there's a nonzero chance of causing or preventing extinction, then the answer matters, too.
utilitymonster
Usually, average utilitarians are interested in maximizing the average well-being of all the people that ever exist, they are not fundamentally interested in the average well-being of the people alive at particular points of time. Since some people have already existed, this is only a technical problem for average utilitarianism (and a problem that could not even possibly affect anyone's decision). Incidentally, not distinguishing between averages over all the people that ever exist and all the people that exist at some time leads some people to wrongly conclude that average utilitarianism favors killing off people who are happy, but less happy than average.
CarlShulman
A related commonly missed distinction is between maximizing welfare divided by lives, versus maximizing welfare divided by life-years. The second is more prone to endorsing euthanasia hypotheticals.
Ghatanathoah
I think that the Sadistic Conclusion is correct. I argue here that it is far more in line with typical human moral intuitions than the repugnant one. If you take the underlying principle of the Sadistic Conclusion, but change the concrete example to something smaller scale and less melodramatic than "Create lives not worth living to stop the addition of lives barely worth living," you will find that it is very intuitively appealing. For instance, if you ask people if they should practice responsible family planning or spend money combating overpopulation they agree. But (if we assume that the time and money spent on these efforts could have been devoted to something more fun) this is the same principle. The only difference is that instead creating a new life not worth living we are instead subtracting an equivalent amount of utility from existing people.

OMEGA: If you pay me just one penny, I'll replace your 80% chance of living for 10^(10^10) years, with a 79.99992% chance of living 10^(10^(10^10)) years

HUMAN: That sounds like an awful lot of time. Would you mind writing it out as a decimal number?

OMEGA: Here it is... Of course don't expect to read this number in less than 10^9,999,999,990 years.

HUMAN: Never mind... So it's such a mind-boggling amount of time. What if I got bored or otherwise distressed, and lost my lust for life? Am I allowed to kill myself?

OMEGA: Not really. If I allowed that, then even assuming the probability of killing yourself were only 0.000000001 per 10^10 years, it would be almost certain that you would kill yourself by the end of 10^(10^(10^10)) years.

HUMAN: This sounds depressing. So my decision has the potential of confining me to grillions of years of suffering, if I lost my lust for life.

OMEGA: OK, I see your point. I'll also offer you some additional drugs to make you happy whenever you feel any distress, and I promise to modify your brain so that you will never even wish to kill yourself during these few eons.

HUMAN: Sounds great, but I also enjoy your company very much. Can I hope for you to entertain me from... (read more)

Eliezer Yudkowsky
Drugs? You don't need drugs. You just need FUN! Hey, there's a reason why I wrote that, you know.

"Drug" was just a catchy phrase for omega's guarantee to cure you out from any psychological issues the could cause you any prolonged distress.

You could insist that it is entirely impossible that you'd need it.

Wouldn't it be a bit overconfident to make any statements about what is possible for some insanely complex and alien future self of yours, over a period of time measured by a number (in years) that takes billions to the power of billions of your current lifetimes just to read?

Nick_Tarleton
Assuming independence, which is unreasonable.
Christian_Szegedy
Even a very slowly growing estimate like p(suicide in t years) = log(log ... (log(log t))) would give the human enough incentive to refuse the offer at some point (after accepting some) without an extra guarantee of not dying earlier due to suicide. Therefore, at that point Omega will have to make this offer if he wants to convince the human.
Nick_Tarleton
The limit as t->infinity of p(suicide in t years) is probably considerably less than 1; I think that averts your concern.
Christian_Szegedy
This is highly subjective... and not the point anyway. The point is that there are too many unclear points, and one can come up with a lot of questions that were not specified in the OP. For example: it is not even clear whether you die with 100% certainty once your agreed-upon lifetime expires, or whether there is still a chance that some other offer comes by, etc. Your estimated probability of suicide, Omega's guarantee on that, guarantees on the quality of life, Bayesian evidence on Omega, etc. These are all factors that could influence the decision... And once one realizes that these were all there, hidden, doubts arise about whether a human mind should at all attempt to make such high-stakes decisions based on so little evidence for so much ahead in time.
Dagon
Actually, by the terms of the first bet, you are PREVENTED from taking the second. You're not allowed to suicide or take any action with nonzero chance of death.

Omega seems to run into a very fundamental credibility problem:

Let us assume the premise of the OP, that lifetime can be equated with discrete computational operations. Furthermore, also assume that the universe (space-time/multiverse/whatever) can be modeled as the result of a computation of n operations (let us say for simplicity n=10^100; we could also assume 10^^100 or any finite number, we would just need a few more iterations of offers then).

... after some accepted offers ... :

OMEGA: ... I'll replace your p% chance of living for 10^n years, with a 0.9999999p% chance of living 10^(10^n) years...

AGENT: Sounds nice, but I already know what I would do first with 10^n years.

OMEGA: ???

AGENT: I will simulate my previous universe up to the current point. Including this conversation.

OMEGA: What for?

AGENT: Maybe I am a nostalgic type. But even if I were not, given so much computational resources, the probability that I would not do it, even accidentally, would be quite negligible.

OMEGA: Yes, but you could do even more simulations if you took my next offer.

AGENT: Good point, but how can I tell that this conversation is not already taking place in that simulation? Whatever you would t... (read more)

Doesn't many-worlds solve this neatly? Thinking of it as 99.9999999% of the mes sacrificing ourselves so that the other 0.00000001% can live a ridiculously long time makes sense to me. The problem comes when you favor this-you over all the other instances of yourself.

Or maybe there's a reason I stay away from this kind of thing.

gwern

There's an easier solution to the posed problem if you assume MWI. (Has anyone else suggested this solution? It seems too obvious to me.)

Suppose you are offered & accept a deal where 99 out of 100 yous die, and the survivor gets 1000x his lifetime's worth of computational resources. All the survivor has to do is agree to simulate the 99 losers (and obviously run himself) for a cost of 100 units, yielding a net profit of 900 units.

(Substitute units as necessary for each ever more extreme deal Omega offers.)

No version of yourself loses - each lives - and one gains enormously. So isn't accepting Omega's offers, as long as each one is a net profit as described, a Pareto-improving situation? Knowing this is true at each step, why would one then act like Eliezer and pay a penny to welsh on the entire thing?

Aurini
I was thinking of this the other day... Suppose that a scientist approached you and wanted to pay you $1000 to play the role of Schrödinger's cat in an open-mike-night stage performance he's putting together. Take as given that the trigger for the vial of poison will result in a many-worlds timeline split;(1) the poison is painless and instantaneous;(2) and there is nobody left in the world who would be hurt by your death (no close friends or family). You can continue performing, for $1000 a night, for as long as you want. Personally I can't think of a reason not to do this. (1) I'm 83% confident that I said something stupid about Many Worlds there. (2) No drowning or pain for your other self like in The Prestige.
Sebastian_Hagen
That's actually an extremely strong precondition. People in modern society play positive-sum games all the time; in most interactions where people exchange one good or service for another (such as in selling their time or buying a material object for money), that leaves both participants better off. A productive member of society killing themselves - even if they have no friends and are unlikely to make any - leaves the average surviving member of that society worse off. Many unproductive members of society (politicians come to mind) could probably become productive if they really wanted to; throwing your life away in some branches is still a waste. None of this applies if you're a perfect egoist, of course.
Larks
The opportunity cost of dying is the utility you could be netting by remaining alive. Unless you only value the rest of your life at less than £1000, you should go for life (presuming the decay is at 50:50, adjust as required). The result applies to MWs too, I think - taking the bet results in opportunity cost for all the future yous who die/never exist, reducing your average utility across all future worlds. It is possible that this sort of gamble on quantum immortality will maximise utility, but it is unlikely for such a small quantity of money.
Aurini
I'd argue that it's reasonable to place a $0 utility on my existence in other Everett branches; while theoretically I know they exist, theoretically there is something beyond the light-barrier at the edge of the visible universe. Its existence is irrelevant, however, since I will never be able to interact with it. Perhaps a different way of phrasing this - say I had a duplicating machine. I step into Booth B, and then an exact duplicate is created in booths A and C, while the booth B body is vapourized. For reasons of technobabble, the booth can only recreate people, not gold bullion, or tasty filet mignons. I then program the machine to 'dissolve' the booth C version into three vats of the base chemicals which the human body is made up of, through an instantaneous and harmless process. I then sell these chemicals for $50 on ebay. (Anybody with enough geek-points will know that the Star Trek teleporters work on this principle). Keep in mind that the universe wouldn't have differentiated into two distinct universes, one where I'm alive and one where I'm dead, if I hadn't performed the experiment (technically it would still have differentiated, but the two results would be anthropically identical). Does my existence in another Everett branch have moral significance? Suffering is one thing, but existence? I'm not sure that it does.
Z_M_Davis
I think this depends on the answers to problems in anthropics and consciousness (the subjects that no one understands). The aptness of your thought experiment depends on Everett branching being like creating a duplicate of yourself, rather than dividing your measure or "degree-of-consciousness" in half. Now, since I only have the semipopular (i.e., still fake) version of QM, there's a substantial probability that everything I believe is nonsense, but I was given to understand that Everett branching divides up your measure, rather than duplicating you: decoherence is a thermodynamic process occurring in the universal wavefunction; it's not really about new parallel universes being created. Somewhat disturbingly, if I'm understanding it correctly, this seems to suggest that people in the past have more measure than we do, simply by virtue of being in the past ... But again, I could just be talking nonsense.
pengvado
One Everett branch in the past has more measure than one Everett branch now. But the total measures over all Everett branches containing humans differ only by the probability of an existential disaster in the intervening time. The measure is merely spread across more diversity now, which doesn't seem all that disturbing to me.
Aurini
Hopefully this conversation doesn't separate into decoherence - though we may well have already jumped the shark. :) First of all, I want to clarify something: do you agree that duplicating myself with a magical cloning booth for the $50 of mineral extracts is sensible, while disagreeing with the same tactic using Everett branches? Secondly, could you explain how measure in the mathematical sense relates to moral value in unknowable realities (I confess, I remember only half of my calculus). Thirdly, following up on the second, I was under the "semipopular (i.e., still fake) version of QM" idea that differing Everett branches were as unreal as something outside of my light cone. (This is a great link regarding relativity - sorry I don't know how to html: http://www.theculture.org/rich/sharpblue/ ) For the record, I'm not entirely certain that differing Everett branches of myself have 0 value; I wouldn't want them to suffer but if one of the two of us stopped existing, the only concern I could justify to myself would be concern over my long-suffering mother. I can't prove that they have zero value, but I can't think of why they wouldn't.
Z_M_Davis
Well, I know that different things are going to happen to different future versions of me across the many worlds. I don't want to say that I only care about some versions of me, because I anticipate being all of them. I would seem to need some sort of weighing scheme. You've said you don't want your analogues to suffer, but you don't mind them ceasing to exist, but I don't think you can do that consistently. The real world is continuous and messy: there's no single bright line between life and death, between person and not-a-person. If you're okay with half of your selves across the many worlds suddenly dying, are you okay with them gradually dropping into a coma? &c.
Aurini
"Well, I know that different things are going to happen to different future versions of me across the many worlds." From what I understand, the many-worlds occur due to subatomic processes; while we're certain to find billions of examples along the evolutionary chain that went A or B due to random-decaying-netronium-thing (most if not all of which will alter the present day), contemporary history will likely remain unchanged; for there to be multiple future-histories where the Nazis won (not Godwin's law!), there'd have to be trillions of possible realities, each of which is differentiated by a reaction here on earth; and even if these trillions do exist, then it still won't matter for the small subset in which I exist. The googleplex of selves which exist down all of these lines will be nearly identical; the largest difference will will be that one set had a microwave 'ping' a split-second earlier than the other. I don't know that two googleplexes of these are inherently better than a single googleplex. As for coma - is it immediate, spontaneous coma, with no probability of ressurection? If so, then it's basically equivalent to painless death.
Z_M_Davis
It just seems kind of oddly discontinuous to care about what happens to your analogues except death. I mention comas only in an attempt to construct a least convenient possible world with which to challenge your quantum immortalist position. I mean---are you okay with your scientist-stage-magician wiping out 99.999% of your analogues, as long as one copy of you exists somewhere? But decoherence is continuous: what does it even mean, to speak of exactly one copy of you? Cf. Nick Bostrom's "Quantity of Experience" (PDF).
Larks
Evidence to support your idea - whenever I make a choice, in another branch, 'I' made the other decision, so if I cared equally about all future versions of myself, then I'd have no reason to choose one option over another. If correct, this shows I don't care equally about currently parallel worlds, but not that I don't care equally about future sub-branches from this one.
pengvado
Whenever I make a choice, there are branches that made another choice. But not all branches are equal. The closer my decision algorithm is to deterministic (on a macroscopic scale), the more asymmetric the distribution of measure among decision outcomes. (And the cases where my decision isn't close to deterministic are precisely the ones where I could just as easily have chosen the other way -- where I don't have any reason to pick one choice.) Thus the thought experiment doesn't show that I don't care about all my branches, current and future, simply proportional to their measure.
[anonymous]
Suppose you just took the poison instead ? Isn't that just the same experiment occurring slightly earlier, since those branches would end but others wouldn't ?
JGWeissman
Not all probabilities are quantum probabilities.
AdeleneDawner
True, I was assuming a quantum probability.
Ghatanathoah
Whatever Omega is doing that might kill you might not be tied to the mechanism that divides universes. It might be that the choice is between huge chance of all of the yous in every universe where you're offered this choice dying, vs. tiny chance they'll all survive. Also, I'm pretty sure that Eliezer's argument is intended to test our intuitions in an environment without extraneous factors like MWI. Bringing MWI into the problem is sort of like asking if there's some sort of way to warn everyone off the tracks so no one dies in the Trolley Problem.

Based on my understanding of physics, I have no way to discriminate between a 1/10 chance of 10 simulations and a certainty of one simulation (what do I care whether the simulations are in the same Everett branch or not?). I don't think I would want to anyway; they seem identical to me morally.

Moreover, living 10x as long seems strictly better than having 10x as many simulations. Minimally, I can just forget everything periodically and I am left with 10 simulations running in different times rather than different places.

The conclusion of the garden path seems perfectly reasonable to me.

I would refuse the next step in the garden somewhere between a 75% and an 80% chance of not dying in an hour. Going from a 1/5 chance to a 1/4 chance of soon dying is huge in my mind. I'd likely stop at around 79%.

Can someone point me to a discussion as to why bounded utility functions are bad?

Dagon
Why 78.5000000000000% and not 78.499999999999% (assuming this is what you meant by "around 79%")? For ORDERS OF MAGNITUDE more life, that sure seems an arbitrary limit.
R0k0

seems an arbitrary limit.

Your axiology is arbitrary. Everyone has arbitrary preferences, and arbitrary principles that generate preferences. You are arbitrary - you can either live with that or self-modify into something much less arbitrary like a fitness maximizer, and lose your humanity.

Jordan
If you were to ask me, at two different random points in time, what odds I would take to live 10^10^10^10 years or die in an hour, and what odds I would take to live 10^10^10^10^10^10 years or die in an hour, you would likely get the same answer. I can identify that one number is bigger than the other, but the difference means about as little to me as the difference between a billion dollars and a billion and one dollars. At some point, it simply doesn't matter how much you increase the payoff, I won't take the new bet no matter how little you increase the odds against me. Where that point lies is arbitrary in the same sense as any other point where the utility of two different events times their respective probabilities balance out.
RolfAndreassen
I think this is equivalent to my comment below about patching the utility function, but more pithily expressed. The difficulty lies in trying to reconcile human intuition, which deals well with numbers up to 7 or so, with actual math. If we could intuitively feel the difference between 10^10^10 and 10^10^10^10, in the same way we feel the difference between 5 and 6, we might well accept Omega's offers all the way down, and might even be justified in doing so. But in fact we don't, so we'll only go down the garden path until the point where the difference between the current probability and the original 80% becomes intuitively noticeable; and then either stop, or demand the money back. The paradox is that the problem has two sets of numbers: One too astronomically large to care about, one that starts out un-feelable but eventually hits the "Hey, I care about that" boundary. I think the reconciliation, short of modifying oneself to feel the astronomically large numbers, is to just accept the flaws in the brain and stop the garden path at an arbitrary point. If Omega complains that I'm not being rational, well, what do I care? I've already extracted a heaping big pile of utilons that are quite real according to my actual utility function.
Jordan
I disagree that it's a flaw. Discounting the future, even asymptotically, is a preference statement, not a logical shortcoming. Consider this situation: Omega offers you two bets, and you must choose one. Bet #1 says you have a 50% chance of dying immediately, and a 50% chance of living 10 average lifespans. Bet #2 says you have a 100% chance of living a single average lifespan. Having lived a reasonable part of an average lifespan, I can grok these numbers quite well. Still, I would choose Bet #2. Given the opportunity, I wouldn't modify myself to prefer Bet #1. Moreover, I hope any AI with the power and necessity to choose one of these bets for me, would choose Bet #2.
RolfAndreassen
Yes, fair enough; I should have said "accept the way the brain currently works" rather than using loaded language - apparently I'm not quite following my own prescription. :)
knb
I must say, this is my intuition as well.

At least part of the problem might be that you believe now, with fairly high confidence, as an infinite set atheist or for other reasons, that there's a finite amount of fun available but you don't have any idea what the distribution is. If that's the case, then a behavior pattern that always tries to get more life as a path to more fun eventually ends up always giving away life while not getting any more potential fun.

Another possibility is that you care somewhat about the fraction of all the fun you experience, not just about the total amount. If utilities are relative this might be inevitable, though this has serious problems too.

DanielLC
There's always the chance that you're wrong, right? This thing should still work just from the assumption that you're wrong.

I wonder if this might be repairable by patching the utility function? Suppose you say "my utility function in years of lifespan is logarithmic in this region, then log(log(n)) in this region, then (log(log(log(n)))..." and so on. Perhaps this isn't very bright, in some sense; but it might reflect the way human minds actually deal with big numbers and let you avoid the paradox. (Edit) More generally, you might say "My utility function is inverse whatever function you use to make big numbers." If Omega starts chatting about the Busy Bea... (read more)

Nick_Tarleton
This (being psychologically realistic, not being my actual utility function) seems very plausible. This form of the question considers the other people's speckings to be held fixed. (What if each is willing to suffer 25 years of torture to spare the other guy 50?)
RolfAndreassen
I didn't say that their preference should be the only criterion, just that it's something to think about. As a practical matter, I do think that not many humans are going to volunteer for 25 years of torture whatever the payoff, except perhaps parents stepping in for their children. I don't think holding other speckings constant is a bug. If you ask the 3^^^^3 people "should I choose TORTURE or SPECKS", you are basically just delegating the decision to the standard human discounting mechanisms, and likely going to get back SPECKS. That's a quite separate question from "Are you, personally, willing to suffer SPECKS to avoid TORTURE". But perhaps it can be modified a bit, like so: "Are you, personally, willing to suffer SPECKS, given that there will be no TORTURE if, and only if, at least 90% of the population answers yes?"
Joanna Morningstar
If the payout is computational resources with unlimited storage, then patching utility doesn't work well. If utility is sublinear in experienced time, then forking yourself increases utility. This makes it difficult to avoid taking Omega up on the offer every time. For clarity, suppose Omega makes the offer to a group of 1.25M forked copies of you. If you turn it down, then on the average 10^6 of you live for 10^(10^10) years. If you all accept and fork a copy, then on the average 2(10^6 - 1) of you live for 10^(10^(10^10))/2 years each. Clearly this is better; there are more of you living for longer. The only thing that changes on the shift to 1 initial copy of you is that the (std. dev. of utilons)/(mean utilons) increases by a factor of 10^6. Unless you place a special cost on risk, this doesn't matter. If you do place such a cost on risk, then you fail to take profitable bets. ETA: The only reason to not take the offer immediately is if you think some other Omega-esque agent is going to arrive with an even better offer, and you'd better be very sure of that before you risk losing so much.
RolfAndreassen
I am not certain that utility is linear in the number of copies of me. (Is that a fair rephrasing of your objection?) It seems to me that I should only anticipate one experience of however long a duration; however many copies there are, each one is experiencing exactly one subjective time-stream. Whatever satisfaction I take in knowing that there are other mes out there surely cannot be as large as the satisfaction in my own subjective experience. So it looks to me as though my anticipated utility should grow very sublinearly in the number of copies of me, perhaps even reaching flatness at some point, even though the total utility in the universe is probably linear in copies. What do I care about the total utility in the universe? Well, as an altruist I do care somewhat. But not to the point where it can realistically compete with what I, personally, can expect to experience!
Joanna Morningstar
Fair-rephrasing. On the other hand, your patching of the utility function requires it to be bounded above as subjective time tends to infinity, or I can find a function that grows quickly enough to get you to accept 1/3^^^^3 chances. If altruistic utility from the existence of others also is bounded above by some number of subjective-you equivalents, then you are asserting that total utility is bounded above. On a related point you do need to care equally about the utility of other copies of you; otherwise you'll maximise utility if you gain 1 utilon at an overall cost of 1+epsilon to all other copies of you. You'd defect in PD played against yourself...
RolfAndreassen
Ok, I'll bite the bullet and bound my utility function, mainly perhaps because I don't recall why that's such a problem. In a finite universe, there are finitely many ways to rearrange the quarks; short of turning yourself into Orgasmium, then, there's only so many things you can discover, rearrange, build, or have fun with. And note that this even includes such cerebral pleasures as discovering new theorems in ever more abstruse mathematics, because such theorems are represented in your brain as some particular arrangement of quarks, and so there is an upper bound to how many theorems can be expressed by the matter in the Universe. I don't understand your PD-defection dilemma; why shouldn't I defect in a PD played against a copy of myself? (Apart, that is, from all the reasons not to defect in an arbitrary PD - reputation, altruism, signalling, and so on.) What changes if you replace "a random human" with "a copy of me?" Perhaps the answer can be found in our apparently different intuitions about just how equal copies are; you say "A PD against yourself", I say "A PD against a copy of yourself". These are not quite the same thing. Perhaps you might say, "Ah, but if you reason like that, probably your copy will reason likewise, having the same brain; thus you will both defect, decreasing the total utility." Fair enough, but it cuts both ways: If I can predict my copy by looking at my own actions, then I can decide to cooperate and be confident that he will do likewise! In effect I get to set both players' actions, but I have to choose between CC or DD, and I'd be pretty stupid to choose to defect. Summary: Either the copy is sufficiently like me that whatever motivates me to cooperate will also motivate him; splendid, we both cooperate. Or else our experiences have caused us to diverge sufficiently that I cannot predict his actions by introspection; then there's no difference between him and some random human.
Wei Dai
Suppose you know that you will be copied in the future and the copies will have to play PD against each other. Does the current you prefer that they cooperate against each other? I find it hard to believe the answer could be "no". So assuming that it's "yes" and you could do self-modification, wouldn't you modify yourself so that your future copies will cooperate against each other, no matter how far they've diverged?
RolfAndreassen
Yes, but why should my current preferences be binding on my future selves? They presumably know more than I do. I would hate to be bound by the preferences of my 9-year-old self with regards to, say, cooties. Or, to put it differently: I have a preference in the matter, but I'm not convinced it is strong enough to require binding self-modification. I also have this problem with your scenario: Your "no matter how far" presupposes that I can put a limit on divergence: To wit, the copies cannot diverge far enough to work around my modification. This assumption may be unwarranted. It seems to amount to saying that I am able to decide "I will never defect against myself" and make it stick; but in this formulation it doesn't look anywhere near so convincing as talking of 'self-modification'. I don't think speaking of self-modification is useful here; you should rather talk of making decisions, which is a process where we have actual experience.
Wei Dai
That's irrelevant, because their change in preference is not caused by additional knowledge, but due to a quirk in how humans make decisions. We never had mind copying in our EEA, so we make decisions mostly by valuing our own anticipated future experiences. In other words, we're more like the first picture in http://lesswrong.com/lw/116/the_domain_of_your_utility_function/ (Presumably because that works well enough when mind copying isn't possible, and is computationally cheaper, or just an easier solution for evolution to find.) I don't understand this. As long as we're talking about mind copying, why shouldn't I talk about self-modification? ETA: Perhaps you mean that I should consider the inconvenient possible world where mind copying is possible, but self-modification isn't? In that case, yes, you may not be able to make "I will never defect against myself" stick. But even in that world, you will be in competition against other minds, some of whom may be able to make it stick, and they will have a competitive advantage against you, since their copies will be able to better cooperate with each other. I don't know if that argument moves you at all.
RolfAndreassen
No, what I mean is that you should taboo "self-modification" and see what happens to your argument. If I decide, today, that I will go on a diet, is that not a weaker form of self-modifying? It is an attempt to bind future selves to a decision made today. Granted, it is a weak binding, as we all know; but to say "self-modification" is just to dismiss that difficulty, assuming that in the future we can overcome akrasia. Well, the future will contain many wonderful things, but I'm not convinced a cure for weak-willedness is among them! So "self-modification" becomes, when tabooed, "decide, with additional future inventions making me able to overcome the natural temptation to re-decide"; and I think this is a much more useful formulation. The reason is that we all have some experience with deciding to do something, and can perhaps form some impression of how much help we're going to need from the future inventions; while we have zero experience with this magical "self-modification", and can form no intuition of how powerful it is.
Wei Dai
We do have some experience with self-modification, in the form of self-modifying programs. That is, programs that either write directly into their own executable memory, or (since modern CPUs tend to prohibit this) write out an executable file and then call exec on it. But anyway, I think I get your point.
Joanna Morningstar
That bullet has hidden issues for reflective consistency. You're asserting that any future you would not have wished to take Omega up on the offer again. This seems unlikely: If you're self-modifying or continually improving, then it's likely that new things will become accessible and "fun" to do, if only in terms of new deep problems to solve. It seems very likely that your conception of the bounds of utility shift up as you become more capable. The bounds that you think are on utility probably will alter given 10^^10 years to think. You shouldn't defect because you will regret it; in retrospect you'd choose to self-modify to be an agent that cooperates with copies of you. Actually, you'd choose to self-modify to cooperate with anything that implements such a no-later-regrets decision algorithm.
RolfAndreassen
I am not certain, but I think you are confusing the pre- and post-copying selves. The pre-copy self wants to maximise utility over all the copies, because it doesn't know which one it will wake up as. Post-copying selves have additional knowledge; they know which one they are, and want to maximise their own personal utility. There doesn't seem to be any inconsistency in having preferences that change over time when additional information is added. Consider designing a feudal society which you'll then live in: If you don't know whether you're an aristocrat or a peasant, you'll give the peasants as many privileges as the economy can support, on the grounds that you're a lot more likely to wake up as a peasant. But if you then find yourself an aristocrat, you'll do your level best to raise the taxes and impose the droit d'seigneur! This is not inconsistency, it is just ordinary ignorance about the future. It's worth pointing out that I'll never experience the total utility over all my copies. However many copies are made, my anticipation ought to be waking up as one copy and experiencing one utility. Maximising the total is my best bet only so long as I don't know which one I am. I don't understand how I am making this assertion; could you please clarify?
cousin_it
Did you mean, maximizing the average? Because your decisions could also affect how many copies get created.
RolfAndreassen
I was considering the number of copies as fixed, which makes the two maximisations equivalent; if it is not fixed, then sure, substitute 'average' for 'total'.
Joanna Morningstar
Sorry; it's apparent that what I wrote confused two issues. The assertion is necessary if you are reflectively consistent and you don't take Omega up on offer n. If a future copy of you is likely to regret a decision not to take Omega up again, then the decision was the very definition of reflectively inconsistent. Now we try to derive a utility function from a DT. The problem for bounded utility is that bounds on conceivable and achievable utility will only increase with time. Hence a future you will likely regret any decision you make on the basis that utility is bounded above, because your future bound on achievable utility almost certainly exceeds your current bound on conceivable utility. Hence asserting that utility is bounded above is probably reflectively inconsistent. (The "almost certainly" is, to my mind, justified by EY's posts on Fun Space) Your example suggests that you don't consider reflective consistency to be a good idea; the peasants would promptly regret the decision not to self-modify to move from a CDT (as the aristocrat is using) to a TDT/UDT/other DT which prevents defection.
RolfAndreassen
A finite amount of mass contains a finite amount of information; this is physics, not to be overcome by Fun Theory. I may be mistaken about the amount of mass the Universe contains, in which case my upper bound on utility would be wrong; but unless you are asserting that there is infinite mass, or else that there are an infinite number of ways to arrange a finite number of quarks in a bounded space, there must exist some upper bound. My understanding of Fun Theory is that it is intended to be deployed against people who consider 1000-year lifespans and say "But wouldn't you get bored?", rather than an assertion that there is actually infinite Fun to be had. But when dealing with Omega, your thought experiment had better take the physical limits into account! As for the self-modification, I gave my thoughts on this in my exchange with Wei_Dai; briefly, try doing Rationalist Taboo on "self-modify" and see what happens to your argument. So your scenario is that I stopped at some arbitrary point in the garden path; my future self has now reached the end of his vastly extended lifespan; and he wishes he'd taken Omega up on just one more offer? Ok, that's a regret, right enough. But I invite you to consider the other scenario where I did accept Omega's next offer, the randomness did not go my way, and I have an hour left to live, and regret not stopping one offer earlier. These scenarios have to be given some sort of weighting in my decision; the one that treats the numbers as plain arithmetic isn't necessarily any better than the one that accept immediacy bias. They are both points in decision-algorithm space. The inconsistency that turns you into a money pump lies in trying to apply both.
0Joanna Morningstar
The fact that Omega is offering unbounded lifespans implies that the universe isn't going to crunch or rip in any finite time. Excluding those scenarios leaves you with a universe where the Hubble radius tends to infinity, which thus makes negentropy (information) unbounded above. Self-modification is just an optimisation process over the design space for agents, run by some agent, with the constraint that only one agent can be instantiated at any time. And regardless of what n is, only a 10^-6 portion of the (n-1)-survivors regret taking decision n. If you're in the block that's killed off by decision 1, then decisions 2, 3, 4, ... are all irrelevant to you. Clearly, attempting to apply both, and applying neither consistently, leads to money pumping.
0RolfAndreassen
Omega's offers are not unbounded, they are merely very large. Further, even an infinite time would not imply an infinite amount of information, because information is a property of mass; adding more time just means that some configurations of quarks are going to be repeated. You are free to argue that it's still Fun to 'discover' the same theorem for the second time, with no memory of the first time you did so, of course; but it looks to me as though that way Orgasmium lies. On the plus side, Orgasmium has no regrets. Yes, that's what I said; my prescription is to choose an arbitrary cutoff point, say where your survival probability drops to 75% - the difference between 80% and 75% seems 'feelable'. You can treat this as all one decision, and consider that 1 in 20 future yous are going to strongly regret starting down the path at all; these are numbers that our brains can work with. Failing an arbitrary cutoff, what is your alternative? Do you in fact accept the microscopic chance of a fantastically huge lifetime?
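For concreteness, a minimal sketch of what that 75% cutoff amounts to, assuming each of Omega's offers shaves the same one-in-a-million slice off the survival probability as the post's first offer does:

```python
# Assumes every offer multiplies survival probability by (1 - 1e-6),
# matching the post's 80% -> 79.99992% opening move.
p, offers = 0.80, 0
while p * (1 - 1e-6) >= 0.75:
    p *= 1 - 1e-6
    offers += 1
print(offers, p)  # ~64,500 offers accepted; the extra ~5% chance of death
                  # is the "1 in 20 future yous" who regret starting down the path
```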
0Joanna Morningstar
Omega's offers are unbounded; for any finite bound there is a finite n such that 10^^n exceeds it. If the Hubble distance (edge of the observable universe) recedes, then even with a fixed quantity of mass-energy the quantity of storable data increases. You have more potential configurations. Yes, in the hypothetical situation given; I can't consistently assert anything else. In any "real" analogue there are many issues I'd take with the premises, and I would likely just take Omega up a few times with the intent of gaining Omega-style abilities.
0RolfAndreassen
I believe you are confused about what 'bounded' means. Possibly you are thinking of the Busy Beaver function, which is not bounded by any computable function; this does not mean it is not bounded, merely that we cannot compute the bound on a Turing machine. Further, 'unbounded' does not mean 'infinite'; it means 'can be made arbitrarily large'. Omega, however, has not taken this procedure to infinity; he has made a finite number of offers, hence the final lifespan is finite. Don't take the limit at infinity where it is not required! Finally, you are mistaken about the effects of increasing the available space: Even in a globally flat spacetime, it requires energy to move particles apart; consequently there is a maximum volume available for information storage which depends on the total energy, not on the 'size' of the spacetime. Consider the case of two gravitationally-attracted particles with fixed energy. There is only one piece of information in this universe: You may express it as the distance between the particles, the kinetic energy of one particle, or the potential energy of the system; but the size of the universe does not matter.
0Joanna Morningstar
No, I mean quite simply that there is no finite bound that holds for all n; if the universe were to collapse/rip in a finite time t, then Omega could only offer you the deal some fixed number of times. We seem to disagree about how many times Omega would offer this deal - I read the OP as Omega being willing to offer it as many times as desired. AFAIK (I'm only a mathematician), your example only holds if the total energy of the system is negative. In a more complicated universe, having a subset of the universe with positive total energy is not unreasonable, at which point it could be distributed arbitrarily over any flat spacetime. Consider a photon moving away from a black hole; if the universe gets larger the set of possible distances increases.
0RolfAndreassen
I think we are both confused on what "increasing the size of the Universe" means. Consider first a flat spacetime; there is no spatial limit - space coordinates may take any value. If you know the distance of the photon from the black hole (and the other masses influencing it), you know its energy, and vice-versa. Consequently the distance is not an independent variable. Knowing the initial energy of the system tells you how many states are available; all you can do is redistribute the energy between kinetic and potential. In this universe "increasing the size" is meaningless; you can already travel to infinity. Now consider a closed spacetime (and your "only a mathematician" seems unnecessarily modest to me; this is an area of physics where I wish to tread carefully and consult with a mathematician whenever possible). Here the distance between photon and black hole is limited, because the universe "wraps around"; travel far enough and you come back to your starting point. It follows that some of the high-distance, low-energy states available in the flat case are not available here, and you can indeed increase the information by decreasing the curvature. Now, a closed spacetime will collapse, the time to collapse depending on the curvature, so every time Omega makes you an offer, he's giving you information about the shape of the Universe: It becomes flatter. This increases the number of states available at a given energy. But it cannot increase above the bound imposed by a completely flat spacetime! (I'm not sure what happens in an open Universe, but since it'll rip apart in finite time I do not think we need to care.) So, yes, whenever Omega gives you a new offer he increases your estimate of the total information in the Universe (at fixed energy), but he cannot increase it without bound - your estimate should go asymptotically towards the flat-Universe limit. With that said, I suppose Omega could offer, instead or additionally, to increase the information available... (read more)
1Joanna Morningstar
I think we're talking on slightly different terms. I was thinking of the Hubble radius, which in the limit equates to Open/Flat/Closed iff there is no cosmological constant (dark energy). This does not seem to be the case. With a cosmological constant, the Hubble radius is relevant because of results on black hole entropy, which would limit the entropy content of a patch of the universe which had a finitely bounded Hubble radius. I was referring to the regression of the boundary as the "expansion of the universe". The two work roughly similarly in cases where there is a cosmological constant. I have no formal training in cosmology. In a flat spacetime as you suggest, the number of potential states seems infinite; you have an infinite maximum distance and can have any multiple of the Planck distance as a separation. In a flat universe, your causal boundary recedes at a constant c, and thus peak entropy in the patch containing your past light cone goes as t^2. It is not clear that there is a finite bound on the whole of a flat spacetime. I agree entirely on your closed/open comments. Omega could alternatively assert that the majority of the universe is open with a negative cosmological constant, which would be both stable and have the energy in your cosmological horizon unbounded by any constant. As to attacking the premises: I entirely agree.
0RolfAndreassen
No; the energy is quantized and finite, which disallows some distance-basis states. But in any case, it does seem that the physical constraint on maximum fun does not apply to Omega, so I must concede that this doesn't repair the paradox.
0Johnicholas
You said "information is a property of mass". Is this obvious? Consider two pebbles floating in space - do they indicate a distance? Could they indicate more information if they were floating further apart? Is it possible that discoveries in physics could cause you to revise the claim "information is a property of mass"?
0RolfAndreassen
Two particles floating in space, with a given energy, have a given amount of entropy and therefore information. The entropy is the logarithm of the number of states available to them at that energy; if they move further apart, that is a conversion of kinetic to potential energy (I'm assuming they interact gravitationally, but other forces do not change the argument) which is already accounted for in the entropy. Therefore, no, the distance is not an additional piece of information, it has been counted in the number of possible states. You can only change the entropy by adding energy - this is equivalent to adding mass; I've been simplifying by saying 'mass' throughout. As for discoveries in physics: I do not wish to say that this is impossible. But it would require new understandings in statistical mechanics and thermodynamics, which are by this point really well understood. You're talking about something rather more unlikely than overthrowing general relativity, here; we know GR doesn't work at all scales. In any case, I can only update on information I already have; if you bring in New Physics, you can justify anything.

I think I've got a fix for your lifespan-gamble dilemma. Omega (who is absolutely trustworthy) is offering indefinite, unbounded life extension, which means the universe will continue being capable of supporting sufficiently-human life indefinitely. So, the value of additional lifespan is not the lifespan itself, but the chance that during that time I will have the opportunity to create at least one sufficiently-similar copy of myself, which then exceeds the gambled-for lifespan. It's more of a calculus problem than a statistics problem, and involves a lot... (read more)

5Nornagest
Upvoted for cleverness, but I don't think that actually works. The expected loss grows at each step, but it's always proportional to the output of the last tetration step, which isn't enough to keep up with the next one; -1 * 10^10 + 10^(10^10) is a hell of a lot smaller than -1 * 10^(10^10) + 10^(10^(10^10)), and it only gets worse from there. The growth rate is even large enough to swamp your losses if your utility is logarithmic in expected life years; that just hacks off one level of exponentiation at the initial step. I don't see a level of caution here that allows you to turn down this particular devil's offer without leading you to some frankly insane conclusions elsewhere. In the absence of any better ideas, and assuming that utility as a function of life years isn't bounded above anywhere and that my Mephistopheles does credibly have the diabolical powers that'd allow him to run such a scam, I think I'm going to have to bite the bullet and say that it's not a scam at all. Just highly counterintuitive.
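A quick sketch of that arithmetic, working in log space since the raw numbers overflow ordinary floats; the probabilities and lifespans are the ones from the post's first offer, and the comparison is only illustrative:

```python
import math

p_before, p_after = 0.80, 0.7999992        # survival probability before/after the offer
log10_L_before = 1e10                       # log10 of 10^(10^10) years
# log10 of 10^(10^(10^10)) is 10^(10^10) itself -- too big for a float --
# so compare one more level down, at the log10 of the log10.
loglog_before = math.log10(log10_L_before)  # = 10.0
loglog_after = 1e10                         # log10(10^(10^10)) = 10^10

cost = math.log10(p_before) - math.log10(p_after)  # ~4.3e-7 orders of magnitude lost
print(cost, loglog_after - loglog_before)
# The probability haircut costs under a millionth of an order of magnitude;
# the lifespan gains ~10^10 orders of magnitude in the exponent of the exponent,
# so any utility unbounded in life years (even a logarithmic one) takes the deal.
```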

Alternatively, average utilitarians - I suspect I am one - may just reject the very first step, in which the average quality of life goes down.

It's worth noting that this approach has problems of its own. The Stanford Encyclopedia of Philosophy on the Repugnant Conclusion:

One proposal that easily comes to mind when faced with the Repugnant Conclusion is to reject total utilitarianism in favor of a principle prescribing that the average well-being per life in a population is maximized. Average utilitarianism and total utilitarianism are extensionally e

... (read more)
1AngryParsley
I agree with this conclusion in principle, but I need to point out some caveats. Although the argument says a world with one person would have better average quality of life, it is implied that the world would be worse due to loneliness. A world with one person would have to make up for this in some other way. More importantly, going from our current world with lots of people to a world with one person would require killing lots of people, which is unacceptable. I hadn't considered this before. The only decent rebuttal I can think of is to claim that negative utility lives (ones not worth living) are fundamentally different from positive utility lives (ones worth living). My first impulse is to maximize average positive utility but minimize total negative utility. Unfortunately, making this distinction raises questions about math between the two types of lives. I would say that minimizing negative utility lives trumps maximizing average positive utility, but I'm pretty sure that would make it hard to choose TORTURE instead of DUST SPECKS.

bwahaha. Though my initial thought is "take the deal. This seems actually easier than choosing TORTURE. If you can actually offer up those possibilities at those probabilities, well... yeah."

Unless there's some fun theoretic stuff that suggests that when one starts getting to the really big numbers, fun space seriously shrinks to the point that, even if it's not bounded, it grows way way way way way slower than logarithmically... And even then, just offering a better deal would be enough to overcome that.

Again, I'm not certain, but my initial thought is... (read more)

0wedrifid
I had much the same observations.

These thought experiments all seem to require vastly more resources than the physical universe contains. Does that mean they don't matter?

As with Torture vs. Specks, the point of this is to expose your decision procedure in a context where you don't have to compare remotely commensurable utilities. Learning about the behavior of your preferences at such an extreme can help illuminate the right thing to do in more plausible contexts. (Thinking through Torture vs. Dust Specks helped mold my thinking on public policy, where it's very tempting to weigh the salience of a large benefit to a few people against a small cost to everyone.)

EDIT: It's the same heuristic that mathematicians often use when we're pondering a conjecture— we try it in extreme or limiting cases to see if it breaks.

4Eliezer Yudkowsky
What if we're wrong about the size of the universe?
2Simulacra
But we aren't wrong about the observable universe; does it really matter to us what happens outside our interaction range?
0DanielH
I haven't studied this in nearly enough detail to be sure of what I'm saying, but it is my understanding that we quite possibly ARE wrong about the observable universe's size, simply given the newness of the science saying there is an "observable universe". Newton was wrong about gravity, but mostly in edge cases (pun intended); could Hubble et al. be wrong about the observable universe's size? Could we find a way to send messages faster than light (there are several theories and only one need work)? Or could we possibly cram more people into the universe than seems possible now due to simulations, building smaller but equivalent brains, or otherwise? If the answer to ANY of these questions could be yes, then we could indeed be wrong about the size of the observable universe (if observable is defined in terms of light even after we develop FTL communication, travel, or observation, then that's stupid (like the current definition of clinical death) and you can replace "observable universe" with some similar phrase). Besides, it may in fact be worth considering what happens outside the observable universe. We can make some predictions already, such as similar laws of physics and the continuing existence of anything which we could previously observe but has since passed over the cosmological event horizon. If people eventually become one of the things that passes over this event horizon, I'll still care about them even though my caring can not affect them in any way. Note again that I don't know much about this, and I may be babbling nonsense for most of these points. But I do know that Hubble may be wrong, that humans keep doing things that they'd previously thought scientifically impossible, and that without an observable universe boundary there are still things which are causally unrelated to you in either direction but that you still may care about.
3RolfAndreassen
It seems to me that you can rephrase them in terms of the resources the universe actually does contain, without changing the problem. Take SPECKS: Suppose that instead of the 3^^^^3 potential SPECKing victims, we instead make as many humans as possible given the size of the universe, and take that as the victim population. Should we expect this to change the decision?
0smoofra
Yes, I think it will change the decision. You need a very large number of minuscule steps to go from specks to torture, and at each stage you need to decimate the number of people affected to justify inflicting the extra suffering on the few. It's probably fair to assume the universe can't support more than, say, 2^250 people, which doesn't seem nearly enough.
2RolfAndreassen
You can increase the severity of the specking accordingly, though. Call it PINPRICKS, maybe?

If you pay me just one penny, I'll replace your 80% chance of living for 10^(10^10) years, with a 79.99992% chance of living 10^(10^(10^10)) years.

I've read too many articles here, I saw where you were going before I finished this sentence...

I still don't buy the 3^^^3 dust specks dilemma; I think it's because a dust speck in the eye doesn't actually register on the "bad" scale for me. Why not switch it out for 3^^^3 people getting hangnails?

6gaffa
The point is to imagine the event that is the least bad, but still bad. If dust specks doesn't do it for you, imagine something else. What event you choose is not supposed to be the crucial part of the dilemma.
-6Simulacra
0wedrifid
You mean... you didn't get it from just reading the title?

I have a different but related dilemma.

Omega presents you with the following two choices:

1) You will live for at least 100 years from now, in your 20-year-old body, perfect physical condition etc., and you may live on later as long as you manage.

2) You will definitely die in this universe within 10 years, but you get a box with 10^^^10 bytes of memory/instruction capacity. The computer can be programmed in any programming language you'd like (also with libraries to deal with huge numbers, etc.). Although the computer has a limit on the number of operatio... (read more)

2faul_sname
B, and it seems like a mind-bogglingly obvious choice (though I would want to see a demonstration of the computer first, and put in some safeguards to prevent burning through too much of my computation at any given time (i.e. only allow it 10^^(10^^10-1) operations for any instruction I give it, to keep an infinite loop from making it worthless)). My choice wouldn't differ, even if I didn't have function f, because that's basically a "map the genome, simulate cells directly from physics (and thus solve the protein folding problem), solve any problem where the limit is computation, and generally eliminate all suffering and solve every human problem" machine. If I wanted to, I could also run every Turing machine with 10^10 or fewer states for 10^^(10^^10-1) cycles, though I'm not sure whether I'd want to. We expect people to lay down their lives immediately to save even 10 others. Why wouldn't we do so to save literally every other human on the planet and give them basically unlimited life?

The flaws in both of these dilemmas seem rather obvious to me, but maybe I'm overlooking something.

The Repugnant Conclusion

First of all, I balk at the idea that adding something barely tolerable to a collection of much more wonderful examples is a net gain. If you had a bowl of cherries (and life has been said to be a bowl of cherries, so this seems appropriate) that were absolutely the most wonderful, fresh cherries you had ever tasted, and someone offered to add a recently-thawed frozen non-organic cherry which had been sitting in the back of the fridge... (read more)

I didn't vote your comment down, but I can guess why someone else did. Contradicting the premises is a common failure mode for humans attacking difficult problems. In some cases it is necessary (for example, if the premises are somehow self-contradictory), but even so people fail into that conclusion more often than they should.

Consider someone answering the Fox-Goose-Grain puzzle with "I would swim across" or "I would look for a second boat".

http://en.wikipedia.org/wiki/Fox,_goose_and_bag_of_beans_puzzle

0woozle
Where did I contradict the premises?
4Johnicholas
Points 1 through 5. In general, you can understand any thought experiment someone proposes to be "trued". The doubting listener adds whatever additional hypotheses were not mentioned about Omega's powers, trustworthiness, et cetera, until (according to their best insight into the original poster's thought process) the puzzle is as hard as the original poster apparently thought it was.
0woozle
I just re-read it more carefully, and I don't see where it says that I can assume that Omega is telling the truth... ...but even if it did, my questions still stand, starting with how do I know that Omega is telling the truth? I cannot at present conceive* of any circumstances under which I would believe someone making the claims that Omega makes. As I understand it, the point of the exercise is to show how our intuitive moral judgment leads us into inconsistencies or contradictions when dealing with complex mathematical situations (which is certainly true) -- so my point about context being important is still relevant. Give me sufficient moral context, and I'll give you a moral determination that is consistent -- but without that context, intuition is essentially dividing by zero to fill in the gaps. * without using my imagination to fill in some very large blanks, anyway, which means I could end up with a substantially different scenario from that intended

It's a convention about Omega that Omega's reliability is altogether beyond reproach. This is, of course, completely implausible, but it serves as a useful device to make sure that the only issues at hand are the offers Omega makes, not whether they can be expected to pan out.

2woozle
Okay... this does render moot any conclusions one might draw from this exercise about the fallibility of human moral intuition. Or was that not the point? If the question is supposed to be considered in pure mathematical terms, then I don't understand why I should care one way or the other; it's like asking me if I like the number 3 better than the number 7.
8Alicorn
The point is that Omega's statements (about Omega itself, about the universe, etc.) are all to be taken at face value as premises in the thought experiments that feature Omega. From these premises, you attempt to derive conclusions. Entertaining variations on the thought experiment where any of the premises are in doubt is cheating (unless you can prove that they contradict one another, thereby invalidating the entire experiment). Omega is a tool to find your true rejection, if you in fact reject something.
3woozle
So what I'm supposed to do is make whatever assumptions are necessary to render the questions free of any side-effects, and then consider the question... So, let me take a stab at answering the question, given my revised understanding. If you pay me just one penny, I'll replace your 80% chance of living for 10^(10^10) years, with a 79.99992% chance of living 10^(10^(10^10)) years. ...with further shaving-off of survival odds in exchange for life-extension by truly Vast orders of magnitude. First off, I can't bring myself to care about the difference; both are incomprehensibly long amounts of time. Also, my natural tendency is to avoid "deal sweeteners", presumably because in the real world this would be the "switch" part of the "bait-and-switch" -- but Omega is 100% trustworthy, so I don't need to worry -- which means I need to specifically override my natural "decision hysteresis" and consider this as an initial choice to be made. Is it cheating to let the "real world" intrude in the form of the following thought?: If, by the time 10^^3 years have elapsed, I or my civilization have not developed some more controllable means of might-as-well-be-immortality, then I'm probably not going to care too much how long I live past the end of my civilization, much less the end of the universe. ...or am I simply supposed to think of "years of life" as a commodity, like money? (The ensuing monetary analogies would seem to imply this...) Too much of anything, though -- money or time -- becomes meaningless when multiplied further: Time: Do I assume my friends get to come with me, and that together we will find some way to survive the inevitable maximization of entropy? Money: After I've bought the earth, and the rights to the rest of the solar system and any other planets we're able to find with the infinite improbability drive developed by the laboratories I paid for, what do we do with the other $0.99999 x 10^^whatever? (And how do I spend the first part of that money... (read more)
6UnholySmoke
Please stop allowing your practical considerations get in the way of the pure, beautiful counterfactual! Seriously though, either you allow yourself to suspend practicalities and consider pure decision theory, or you don't. This is a pure maths problem, you can't equate it to 'John has 4 apples.' John has 3^^^3 apples here, causing your mind to break. Forget the apples and years, consider utility!
1woozle
As I said somewhere earlier (points vaguely upward), my impression was that this was not actually intended as a pure mathematical problem but rather an example of how our innate decisionmaking abilities (morality? intuition?) don't do well with big numbers. If this is not the case, then why phrase the question as a word problem with a moral decision to be made? Why not simply ask it in pure mathematical terms?
1nazgulnarsil
This was my initial reaction as well: ask if I can go the other way until we're at, say, 1000 years. But if you truly take the problem at face value (we're negotiating with Omega; the whole point of Omega is that he neatly lops off alternatives for the purposes of the thought experiment) and are negotiating for your total lifespan +- 0, then yes, I think you'd be forced to come up with a rule.
-2woozle
I think my "true rejection", then, if I'm understanding the term correctly, is the idea that we live in a universe where such absolute certainties could exist -- or at least where for-all-practical-purposes certainties can exist without any further context.
1matt
This problem seems to have an obvious "shut up and multiply" answer (take the deal), but our normal intuitions scream out against it. We can easily imagine some negligible chance of living through the next hour, but we just can't imagine trusting some dude enough to take that chance, or (properly) a period longer than some large epoch time. Since our inability to properly grok these elements of the problem is the fulcrum on which our difficulty balances, it seems more reasonable than usual to question Omega & her claims. (This problem seems as easy to me as specks vs torture: in both cases you need to shut up and multiply, and in both cases you need to quiet your screaming intuitions - they were trained against different patterns.)
0Christian_Szegedy
I think this is one of the biggest problems with these examples. It is theoretically impossible that (assuming your current life history has finite Kolmogorov complexity) you could hoard enough evidence to trust someone completely. To me it seems like a fundamental (and mathematically quantifiable!) problem with these hypothetical situations: if a rational agent (one that uses Occam's razor to model reality) encounters a really complicated god-like being that does all kinds of impossible-looking things, then the agent would rather conclude that his brain is not working properly (or maybe that he is a Boltzmann brain), which would still be a simpler explanation than assuming the reality of Omega.
-4Richard_Kennaway
Failing to question them is another. In the political world, the power to define the problem trumps the power to solve it. Within the terms of this problem, one is supposed to take Omega's claims as axiomatically true. p=1, not 1-epsilon for even an unimaginably small epsilon. This is unlike Newcomb's problem, where an ordinary, imaginable sort of confidence is all that is required. Thinking outside that box, however, there's a genuine issue around the question of what it would take to rationally accept Omega's propositions involving such ginormous numbers. I notice that Christian Szegedy has been voted up for saying that in more technical language. These are answers worth giving, especially by someone who can also solve the problem on its own terms.
0woozle
Side note: ya know, it would be really nice if there were some way for a negative vote to be accompanied by some explanation of what the voter didn't like. My comment here got one negative vote, and I have no idea at all why -- so I am unable to take any corrective action either with regard to this comment or any future comments I may make. (I suppose the voter could have replied to the comment to explain what the problem was, but then they would have surrendered their anonymity.)
1AndrewKemendo
That assumes the people downvoting are doing so with some well-thought-out intention.

I mean, most finite numbers are very much larger than that.

Does that actually mean anything? Is there any number you can say this about where it's both true and worth saying?

It's true of any number, which is why it's funny.

2CronoDAS
It's only true if you're counting positive integers. If you allow rational numbers, for any X greater than zero, there are as many rational numbers between zero and X as there are rational numbers greater than X.
3Alicorn
That's the point where my limited mathematical skills sputter in disbelief. It seems to me that however many rational numbers there are between, say, zero and one, there are exactly as many between one and two, and having completely accounted for the space between zero and one thus, you can move on to numbers two and up (of which there are a great many).
5Cyan
The trick is that there are an infinite number of rational numbers between zero and one. When dealing with infinite sets, one way to count their members is to put them into one-to-one correspondence with some standard set, like the set of natural numbers or the set of real numbers. These two sets (i.e., the naturals and the reals) have different sizes: it turns out that the set of natural numbers cannot be put into one-to-one correspondence with the real numbers. No matter how one tries to do it, there will be a real number that has been left out. In this sense, there are "more" real numbers than natural numbers, even though both sets are infinite. Thus, a useful classification for infinite sets is as "countable" (can be put into one-to-one correspondence with the naturals) or "uncountable" (too big to be put into one-to-one correspondence with the naturals). The rational numbers are countable, so any infinite subset of rationals is also countable. When CronoDAS says that there are as many rationals between zero and X as there are greater than X, he means that both such sets are countable.
2CronoDAS
That doesn't quite work when comparing infinite sets. It might seem surprising, but indeed, there are exactly as many rational numbers between zero and one as there are between zero and two. The short version of the explanation: Two infinite sets are the same size if you can construct a one-to-one correspondence between them. In other words, if you can come up with a list of pairs (x,y) of members of sets X and Y such that every member of set X corresponds to exactly one member of set Y, and vice versa, then sets X and Y are the same size. For example, the set of positive integers and the set of positive even integers are the same size, because you can list them like this: (1,2), (2,4), (3,6), (4,8), and so on. Each positive integer appears exactly once on the left side of the list, and each positive even integer appears exactly once on the right side of the list. You can use the same function I used here, f(x)=2x, to map the rational numbers between zero and one to the rational numbers between zero and two. (As it turns out, you can map the positive integers to the rational numbers, but you can't map them to the real numbers...)
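A tiny illustration of the pairings described above (just a sketch; nothing here is load-bearing):

```python
from fractions import Fraction

# Pair each positive integer n with the even number 2*n: one-to-one and onto,
# so the two sets have the same cardinality.
print([(n, 2 * n) for n in range(1, 6)])   # [(1, 2), (2, 4), (3, 6), (4, 8), (5, 10)]

# The same f(x) = 2x pairs the rationals between 0 and 1 with those between 0 and 2.
sample = [Fraction(1, 2), Fraction(1, 3), Fraction(2, 3), Fraction(3, 4)]
print([(q, 2 * q) for q in sample])
```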
3Alicorn
You are not the first person to try to explain this to me, but it doesn't seem "surprising", it seems like everybody is cooperating at pulling my leg. Since I'm aware that such a conspiracy would be impractical and that I am genuinely terrible at math, I don't think that's actually happening, but the fact remains that I just do not get this (and, at this point, no longer seriously entertain the hope of learning to do so). It is only slightly less obvious to me that there are more numbers between 0 and 2 than 0 and 1, than it is that one and one are two. To put it a little differently, while I can understand the proofs that show how you may line up all the rationals in a sensible order and thereby assign an integer to each, it's not obvious to me that that is the way you should count them, given that I can easily think of other ways to count them where the integers will be used up first. Nothing seems to recommend the one strategy over the other except the consensus of people who don't seem to share my intuitions anyway.
saturn300

Imagine A is the set of all positive integers and B is the set of all positive even integers. You would say B is smaller than A. Now multiply every number in A by two. Did you just make A become smaller without removing any elements from it?

9Alicorn
...Okay, that's weird! Clearly that shouldn't work. Thanks for the counterexample.
0DanielH
It gets even worse than that if you want to keep your intuitions (which are actually partially formalized as the concept natural density). Imagine that T is the set of all Unicode text strings. Most of these strings, like "🂾⨟ꠗ∧̊⩶🝍", are gibberish, while some are valid sentences in various languages (such as "The five boxing wizards jump quickly.", "print 'Hello, world!'", "ἔσχατος ἐχθρὸς καταργεῖται ὁ θάνατος·", or "וקראתם בשם אלהיכם ואני אקרא בשם יהוה והיה האלהים אשר יענה באש הוא האלהים ויען כל העם ויאמרו טוב הדבר"). The interesting strings for this problem are things like "42", "22/7", "e", "10↑↑(10↑↑10)", or even "The square root of 17". These are the strings that unambiguously describe some number (under certain conventions). As we haven't put a length limit on the elements of T, we can easily show that every natural number, every rational number, and an infinite number of irrational numbers are each described by elements of T. As some elements of T don't unambiguously describe some number, our intuitions tell us that there are more text files than there are rational numbers. However, a computer (with arbitrarily high disk space) would represent these strings encoded as sequences of bytes. If we use a BOM in our encoding, or if we use the Modified UTF-8 used in Java's DataInput interface, then every sequence of bytes encoding a string in T corresponds to a different natural number. However, given any common encoding, not every byte sequence corresponds to a string, and therefore not every natural number corresponds to a string. As encoding strings like this is the most natural way to map strings to natural numbers, there must intuitively be more natural numbers than strings. We have thus shown that there are more strings than rational numbers, and more natural numbers than strings. Thus, any consistent definition of "bigger" that works like this can't be transitive, which would rule out many potential applications of such a concept. EDIT: Fixed an error ari
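A small sketch of the encoding being described (my own illustration of the idea, not code from the comment): prefix a BOM so the byte sequence never starts with a zero byte, then read the UTF-8 bytes as a single base-256 integer.

```python
def string_to_nat(s: str) -> int:
    # The BOM guarantees a nonzero leading byte, so distinct strings map to
    # distinct naturals -- but most naturals decode back to no valid string.
    return int.from_bytes(("\ufeff" + s).encode("utf-8"), "big")

for text in ("42", "22/7", "The square root of 17"):
    print(text, "->", string_to_nat(text))
```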
7orthonormal
I think that part of the difficulty (and part of the reason that certain people call themselves infinite set atheists) stems from the fact that we have two very basic intuitions about the quantity of finite sets, and it is impossible to define quantity for infinite sets in a way that maintains both intuitions. Namely, you can have a notion of quantity for which (A) sets that can be set in some 1-to-1 correspondence will have the same quantity, OR a notion of quantity for which (B) a set that strictly contains another set will have a strictly larger quantity. As it turns out, given the importance of functions and correspondences in basic mathematical questions, the formulation (cardinality) that preserves (A) is very natural for doing math that extends and coheres with other finite intuitions, while only a few logicians seem to toy around with (B). So it may help to realize that for mainstream mathematics and its applications, there is no way to rescue (B); you'll just need to get used to the idea that an infinite set and a proper subset can have the same cardinality, and the notion that what matters is the equivalence relation of there existing some 1-to-1 correspondence between sets.
1Cyan
(B) is roughly measure theory, innit?
0Johnicholas
Yes, for some value of "roughly".
0Cyan
(A value of "roughly" that encompasses sets of measure zero is what I had in mind.)
0Alicorn
My problem doesn't arise only when comparing sets such that one strictly contains another. I can "prove" to myself that there are more rational numbers between any two integers than there are natural numbers, because I can account for every last natural number with a rational between the two integers and have some rationals left over. I can also read other people "proving" that the rationals (between two integers or altogether, it hardly matters) are "countably infinite" and therefore not more numerous than the integers, because they can be lined up. I get that the second way of arranging them exists. It's just not at all clear why it's a better way of arranging things, or why the answer it generates about the relative sizes of the sets in question is a better answer.
7Johnicholas
If you come up with a different self-consistent definition of how to compare sizes of sets (e.g. "alicorn-bigger"), that would be fine. Both definitions can live happily together in the happy world of mathematics. Note that "self-consistent definition" is harder than it sounds. There are cases where mainstream mathematical tradition was faced with competing definitions. Currently, the gamma function is the usual extension of the factorial function to the reals, but at one time, there were alternative definitions competing to be standardized. http://www.luschny.de/math/factorial/hadamard/HadamardsGammaFunction.html Another example: The calculus was motivated by thought experiments involving infinitesimals, but some "paradoxes" were discovered, and infinitistic reasoning was thought to be the culprit. By replacing all of the arguments with epsilon-delta analogs, the main stream of mathematics was able to derive the same results while avoiding infinitistic reasoning. Eventually, Abraham Robinson developed non-standard analysis, showing an alternative, and arguably more intuitive, way to avoid the paradoxes. http://en.wikipedia.org/wiki/Non-standard_analysis
1Cyan
Thanks for that super-interesting link about factorial-interpolating functions!
2orthonormal
The trouble is that with a little cleverness, you can account for all of the rationals by using some of the natural numbers (once each) and still have infinitely many natural numbers left over. (Left as an exercise to the reader.) That's why your intuitive notion isn't going to be self-consistent.
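One way the exercise can be spelled out (a sketch, not necessarily the construction orthonormal had in mind): enumerate the positive rationals without repeats and hand the k-th one the k-th even number, so every odd natural is left over.

```python
from fractions import Fraction
from math import gcd

def positive_rationals():
    """Yield every positive rational exactly once: diagonals p + q = s, lowest terms."""
    s = 2
    while True:
        for p in range(1, s):
            q = s - p
            if gcd(p, q) == 1:
                yield Fraction(p, q)
        s += 1

gen = positive_rationals()
print([(2 * k, next(gen)) for k in range(1, 8)])
# Every rational eventually appears on the right; every partner on the left is even;
# the odd naturals 1, 3, 5, ... are never used.
```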
6CronoDAS
Well, that was the short explanation. The long one makes a little more sense. (By the way, the technical term for the number of members in a set is the cardinality of a set.) Let's try this from a different angle. If you have two sets, X and Y, and you can map X to a subset of Y and still have some members of Y left over, then X can't be a bigger set than Y is. In other words, a set can't "fit inside" a set that's smaller than itself. For example, {1,2,3} can fit inside {a,b,c,d}, because you can map 1 to a, 2 to b, 3 to c, and still have "d" left over. This means that {1,2,3} can't be bigger than {a,b,c,d}. It shouldn't matter how you do the mapping, because we only care about whether or not the whole thing fits. Am I making sense here? Now, because you can map the positive even integers to a subset of the positive integers (for example, by mapping each positive even integer to itself) and still have positive integers left over (all the odd ones), the set of positive even integers fits inside the set of positive integers, and so it can't be bigger than the set of positive integers. On the other hand, the positive integers can also fit inside the positive even integers. Just map every positive integer n to the positive even integer 2*(n+1). You get the list (1,4), (2,6), (3,8), and so on. You've used up every positive integer, but you still have a positive even integer - 2 - left over. So, because the positive integers fit inside the positive even integers, they're not bigger, either. If the positive even integers aren't bigger than the positive integers, and the positive integers aren't bigger than the positive even integers, then the only way that could happen is if they are both exactly the same size. (Which, indeed, they are.)
2Alicorn
So in fact, we count them both ways, get both answers, and conclude that since each answer says that it is not the case that the one set is bigger than the other, they must be the same size? Congratulations! I think I have, if not a perfect understanding of this, at least more of one than I had yesterday! Thanks :)
2CronoDAS
You're welcome. I like to think that I'm good at explaining this kind of thing. ;) To give credit where credit is due, it was the long comment thread with DanArmak that helped me see what the source of your confusion was. And, indeed, all the ways of counting them matter. Mathematicians really, really hate it when you can do the same thing two different ways and get two different answers. I learned about all this from a very interesting book I once read, which has a section on Georg Cantor, who was the one who thought up these ways of comparing the sizes of different infinite sets in the first place.
1cousin_it
Sounds like you want measure instead of cardinality. Unfortunately, any subset of the rationals has measure 0, and I'm not pulling your leg either.
0Alicorn
I don't even understand the article on measure...
1cousin_it
The main takeaway should be that counting, or one-to-one mapping, isn't a complete approach to comparing the "sizes" of infinite sets of numbers. For example, there are obviously as many prime numbers as there are naturals, because the number N may correspond to the Nth prime and vice versa; also see this Wikipedia article. For the same reason there are as many points between 0 and 1 as there are between 0 and 2, so to compare those two intervals we need something more than counting/cardinality. This "something more" is the concept of measure, which takes into account not only how many numbers a set contains, but also where and how they're laid out on the line. Unfortunately I don't know any non-mathematical shortcut to a rigorous understanding of measure; maybe others can help.
5Alicorn
You are guaranteed to lose me if you say things like this, especially if you put in "obviously". It's obvious to me (if false, in some freaky math way) that there are more natural numbers than prime numbers. The opposite of this statement is therefore not obvious to me.
5cousin_it
The common-sense concept of "as many" or "as much" does not have a unique counterpart in mathematics: there are several formalizations for different purposes. In one widely used formalization (cardinality) there are as many primes as there are naturals, and this is indeed obvious for that formalization. If we take some other way of assigning sizes to number sets, like natural density, our two sets won't be equal any longer. And tomorrow you could invent some new formula that would give a third, completely different answer :-) It's ultimately pointless to argue which idea is "more intuitive"; the real test is what works in applications and what yields new interesting theorems.
2DanArmak
Cardinality compares two sets using one-to-one mappings. If such a mapping exists, the two sets are equal in cardinality. In this sense, there are as many primes as there are natural numbers. Proof: arrange the primes as an infinite series of increasing numbers. Map each prime in the series to its index in the series, which is a natural number. This definition is mathematically simple. On the other hand, the intuitive concept of "size" where the size of the real line segment [0,1] is smaller than that of [0,2] and there are fewer primes than naturals, is much more complex to define mathematically. It is handled by measure theory, but one of the intuitive problems with measure theory is that some subsets simply can't be measured. If I understand correctly, there really are no actual infinities in the universe, at least not inside a finite volume (and therefore not in interaction due to speed of light limits). And as far as I can make out (someone please correct me if I'm wrong), there aren't infinitely many Everett branches arising from a quantum fork in the sense that we can't physically measure the difference between sufficiently similar outcomes, and there are finitely many measurement results we can see. So the mathematical handling of infinities shouldn't ever directly map to actual events in a non-intuitive sense.
0Alicorn
Yes, I get that you can do that! I get that you can do that - I just don't know why you should do that, instead of doing it the way that seems like the sensible way to do it in my head. What recommends this arrangement over any other arrangement?
2komponisto
Nothing at all, except that it shows there exists such a correspondence -- which is the only question of interest when talking about the "size" (cardinality) of sets. (EDIT: Perhaps I should add that this question of existence is interesting because the situation can be quite different for different pairs of sets: while there exists a 1-1 correspondence between primes and natural numbers, there does not exist any such correspondence between primes and real numbers. In that case, all mappings will leave out some real numbers -- or duplicate some primes, if you're going the other way.) All the other ways of "counting" that you're thinking of are just as "valid" as mathematical ideas, for whatever other purposes they may be used for. Here's an example, actually: the fact that you can think of a way of making primes correspond in a one-to-one fashion to a proper subset of the natural numbers (not including all natural numbers) succeeds in showing that the set of primes is no larger than the set of natural numbers.
2SarahNibs
It is obvious only if you've had the oddities of infinite sets hammered into you. Here's why our intuitions are wrong (the common ones I hear): "Clearly there are more natural numbers than prime numbers. Prime numbers are a strict subset of natural numbers!" --> the strict subset thing works when everything is finite. But why? Because you can count out all the smaller set, then you have more left over in the larger set, so it's bigger. For infinite sets, though, you can't "count out all the smaller set" or equivalent. "Okay, but if I choose an integer uniformly at random, there's a 50% chance it's a natural number and a < 50% chance it's a prime number. 50 > <50, so there are more natural numbers." --> You can't choose an integer uniformly at random. "Really?" --> Yes, really. There are an infinite number of them, so with what probability is 42 selected? Not 0, 'cause then it won't be selected. Not >0, 'cause then the probabilities don't add to 1. "Fine, if I start counting all the natural numbers and prime numbers (1: 1,0. 2: 2,1. 3: 3,2. 4: 4,2.) I'll find that the number of naturals is always greater than the number of primes." --> You've privileged an order, why? Instead let's start at 2, then 3, then 5, then 7, etc. Now they're equal. "Something's still fishy." --> Yes, all of these are fine properties to think about. They happen to be equivalent for finite sets and not for infinite sets. We choose cousin_it's correspondence thing to be "size" for infinite sets, because it turns out to make the most sense. But the other properties could be interesting too.
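The "privileged order" point can be made concrete in a few lines (a sketch of natural density, which is a different notion from cardinality):

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Counting in the usual order, the running fraction of primes keeps shrinking
# (natural density 0); counting prime-by-prime, the two sets pair off exactly.
# Which "size" you get depends on which bookkeeping you privilege.
for N in (10, 100, 1000, 10000):
    print(N, sum(is_prime(k) for k in range(2, N + 1)) / N)
```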
1Alicorn
Well, no, but there are finite sets I can't actually count either. I can, however, specify a way to translate an integer (or whatever) into something else, and as long as that algorithm can in principle be applied to any integer (or whatever), I consider myself to have in so doing accounted for all of them. For instance, when comparing the set of primes to the set of naturals, I say to myself, "okay, all of the primes will account for themselves. All of the naturals will account for themselves. There are naturals that don't account for primes, but no primes that don't account for naturals. Why, looks like there must be more naturals than primes!"
2DanArmak
This reasoning is intuitive (because it arises by extension from finite sets) but unfortunately leads to inconsistent results. Consider two different mappings ('accountings') of the naturals. In the first, every integer stands for itself. In the second, every integer x maps to 2*x, so we get the even numbers. By your logic, you would be forced to conclude that the set of naturals is "bigger" than itself.
0Alicorn
But in that case something does recommend the first accounting over the second. The second one gives an answer that does not make sense, and the first one gives an answer that does make sense. In the case of comparing rationals to integers, or any of the analogous comparisons, it's the accounting that People Good At Math™ supply that makes no sense (to me).
2DanArmak
If you know which answer makes sense a priori, then you don't need an accounting at all. When you don't know the answer, then you need a formalization. The formalization you suggest gives inconsistent answers: it would be possible to prove for any two infinite sets (that have the same cardinality, e.g. any two infinite collections of integers) all three of A>B, B>A, and A=B in size. Edit: suppose you're trying to answer the question: what is there "more" of, rational numbers in [0,1], or irrational numbers in [0,1]? I know I don't have any intuitive sense of the right answer, beyond that there are clearly infinitely many of both. How would you approach it, so that when your approach is generalized to comparing any two sets, it is consistent with your intuition for those pairs of sets where you can sense the right answer? Mathematics gives several answers, and they're not consistent with one another or with most people's natural intuitions (as evolved for finite sets). We just use whichever one is most useful in a given context.
0Alicorn
I haven't suggested anything that really looks to me like "a formalization". My basic notion is that when accounting for things, things that can account for themselves should. How do you make it so that this notion yields those inconsistent equations/inequalities?
0DanArmak
Mathematical formalization is necessary to make sure we both mean the same thing. Can you state your notion in terms of sets and functions and so on? Because I can see several different possible formalizations of what you just wrote and I really don't know which one you mean. Edit: possibly one thing that added to the intuitiveness of the primes vs. naturals problem is that the naturals are a special set (both mathematically and intuitively). How would you compare P={primes} and A=P+{even numbers}? A still strictly contains P, so intuitively it's "bigger", but now you can't say that every number in A "accounts" for itself if you want to build a mapping from A to the naturals (i.e. if you want to arrange A as a series with indexes). Question 2: how do you compare the even numbers and the odd numbers? Intuitively there is the same amount of each. But neither is a subset of the other.
0Alicorn
Probably not very well. Please keep in mind that the last time I took a math class, it was intro statistics, this was about four years ago, and I got a C. The last math I did well in was geometry even longer ago. This thread already has too much math jargon for me to know for sure what we're talking about, and me trying to contribute to that jungle would be unlikely to help anyone understand anything.
1DanArmak
Then let's take the approach of asking your intuition to answer different questions. Start with the one about A and P in the edit to my comment above. The idea is to make you feel the contradiction in your intuitive decisions, which helps discard the intuition as not useful in the domain of infinite sets. Then you'll have an easier time learning about the various mathematical approaches to the problem because you'll feel that there is a problem.
0Alicorn
A seems to contain the number 2 twice. Is that on purpose? In that case the sensible thing to do seems to me to pair every number with one adjacent to it. For instance, one can go with two, and three can go with four, etc.
0DanArmak
In the mathematical meaning of a 'set', it can't contain a member twice, it either contains it or not. So there's no special meaning to specifying 2 twice. Here you're mapping the sets to one another directly instead of mapping each of them to the natural numbers. So when does your intuition tell you not to do this? For instance, how would you compare all multiples of 2 with all multiples of 3?
0Alicorn
Half of the multiples of 3 are also multiples of 2. Those can map to themselves. The multiples of 3 that are not also multiples of 2 can map to even numbers between the adjacent two multiples of 3. For instance, 6 maps to itself and 12 maps to itself. 9 can map to a number between 6 and 12; let's pick 8. That leaves 10 unaccounted for with no multiples of 3 going back to deal with it later; therefore, there are more multiples of 2 than of 3.
7Cyan
Essentially, you've walked up the natural numbers in order and noted that you encounter more multiples of 2 than multiples of 3. But there's no reason to privilege that particular way of encountering elements of the two sets. For instance, instead of mapping multiples of 3 to a close multiple of 2, we could map each multiple of 3 to two-thirds of itself. Then every multiple of 2 is accounted for, and there are exactly as many multiples of 2 as of 3. Or we could map even multiples of 3 to one third their value, and then the the odd multiples of 3 are unaccounted for, and we have more multiples of 3 than of 2.
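Spelled out as a sketch, the two competing accountings look like this:

```python
mult3 = [3 * k for k in range(1, 9)]   # 3, 6, 9, ..., 24

# Accounting 1: pair each multiple of 3 with two-thirds of itself;
# every multiple of 2 gets hit exactly once, so the sets come out equal.
print([(x, 2 * x // 3) for x in mult3])

# Accounting 2: only the even multiples of 3 get partners (one third of themselves);
# 3, 9, 15, ... are left over, so this bookkeeping says there are "more" multiples of 3.
print([(x, x // 3) for x in mult3 if x % 2 == 0])
```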
1DanArmak
Your intuition seems to correspond to the following: if, in any large enough but finite segment of the number line, there are more members of set A than of set B, then |A|>|B|. The main problem with this is that it contradicts another intuition (which I hope you share, please say so explicitly if you don't): if you take a set A, and map it one-to-one to a different set B (a complete and reversible mapping), then the two sets are equal in size. After all, in some sense we're just renaming the set members. Anything we can say about set B, we can also say about set A by replacing references to members of B with members of A using our mapping. But I can build such a mapping between multiples of 2 and of 3: for every integer x, map 2x to 3x. This implies the two sets are equal, contradicting your intuition.
0[anonymous]
The definition of equal cardinality is that there is a one-to-one correspondence between the objects. It doesn't matter what that correspondence is.
0[anonymous]
Sure, you can line them up such that the integers run out first. You can even line them up so the rationals run out first. There are an infinite number of ways to line them up. In order to satisfy the definition of them being the same size we only require that ONE of the ways of lining them up leads to them corresponding exactly. It's merely a definition, though. The kicker is that the definition is useful and consistent.
-1[anonymous]
If you think [0,1] has fewer elements than [0,10], then how come each number x in [0,10] can find a unique partner x/10 in [0,1]? It might seem unusual that the set [0,10] can be partnered with a proper subset of itself. But in fact, this property is sufficient to define the concept of an "infinite set" in standard axiomatic set theory.
2wedrifid
And either way, it still means very little.

And then the rest of the Repugnant Conclusion - that it's better to have a billion lives slightly worth celebrating, than a million lives very worth celebrating - is just "repugnant" because of standard scope insensitivity.

I think I've come up with a way to test the theory that the Repugnant Conclusion is repugnant because of scope insensitivity.

First we take the moral principle the RC is derived from, the Impersonal Total Principle (ITP), which states that all that matters is the total amount of utility* in the world, factors like how it is... (read more)

I read the fun theory sequence, although I don't remember it well. Perhaps someone can re-explain it to me or point me to the right paragraph.

How did you prove that we'll never run out of fun? You did prove that we'll never run out of challenges of the mathematical variety. But challenges are not the same as fun. Challenges are fun when they lead to higher quality of life. I must be wrong, but as far as I understood, you suggested that it would be fun to invent algorithms that sort number arrays faster or prove theorems that have no applications other than... (read more)

The problem goes away if you allow a finite present value for immortality. In other words, there should be a probability level P(T) s.t. I am indifferent between living T periods with probability 1, and living infinitely with probability P(T). If immortality is infinitely valued, then you run into all sorts of ugly reductio ad absurdum arguments along the lines of the one outlined in your post.

In economics, we often represent expected utility as a discounted stream of future flow utilities. i.e.

V = Sum (B^t)(U_t)

In order for V to converge, we need B to b... (read more)
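For what it's worth, the usual textbook condition is 0 < B < 1 together with bounded flow utilities U_t; here's a toy check (my own numbers, just illustrating the geometric sum above):

```python
# V = sum over t of B^t * U_t, as in the comment above.
def discounted_value(flow_utility, B, horizon):
    return sum((B ** t) * flow_utility(t) for t in range(horizon))

U = lambda t: 1.0  # a constant (bounded) flow utility
for horizon in (10, 100, 1000, 10000):
    print(horizon, discounted_value(U, B=0.95, horizon=horizon))
# The partial sums converge toward 1 / (1 - 0.95) = 20 as the horizon grows, so even
# an infinite lifespan gets a finite present value -- the point made at the start of
# this comment. With B >= 1, or with unbounded U_t, the sum can diverge instead.
```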

[-][anonymous]20

OMEGA: Wait! Come back! I have even faster-growing functions to show you! And I'll take even smaller slices off the probability each time! Come back!

HUMAN: Ahem... If you answer some questions first...

OMEGA: Let's try

HUMAN: Is it really true that my experience in this universe can be described by a von Neumann machine with some fixed program of n bytes + m bytes of RAM, for some finite values of n and m?

OMEGA: Uh-huh (If omega answers no, then he becomes inconsistent with his previous statement that lifetime and computational resources are equivale... (read more)

I have moral uncertainty, and am not sure how to act under moral uncertainty. But I put a high credence on taking the expected value of year-equivalents (the concept of actual years probably breaks down when you're a Jupiter Brain). I also put some credence on the view that finite lives are valueless.

Eliezer said:

Once someone is alive, on the other hand, we're obliged to take care of them in a way that we wouldn't be obliged to create them in the first place

That seems like quite a big sacrifice to make in order to resolve Parfit's Repugnant Conclusion; you have abandoned consequentialism in a really big way.

You can get off Parfit's conclusion by just rejecting aggregative consequentialism.

5Vladimir_Nesov
Think of the goal being stated in terms of world-histories rather than world-states. It makes more sense this way. Then, you can say that your preference for world-histories where a person is created (leading to the state of the world X) is different than for world-histories where a person is killed (starting from a different state, but leading to the same state X).
0SforSingularity
Sure, you can be a histories-preferer, and also a consequentialist. In fact you have preferences over histories anyway, really.
0Vladimir_Nesov
Hmm... Then, in what sense can you mean the top-level comment while keeping this in mind?
0SforSingularity
I meant it in a hypothetical way. I don't actually like state-consequentialism - trivially, human experiences are only meaningful as a section of the history of the universe.

This situation gnaws at my intuitions somewhat less than the dust specks.

You're offering me the ability to emulate every possible universe of the complexity of the one we observe, and then some. You're offering to make me a God. I'm listening. What are these faster-growing functions you've got and what are your terms?

0Wei Dai
Did you miss that before Omega gave you the offer, you already had "the ability to emulate every possible universe of the complexity of the one we observe, and then some"? By accepting Omega's offers, you're almost certainly giving up those 10^10,000,000,000 years of life, in exchange for some tiny probability of a longer but still finite lifespan.
0wedrifid
No, but I evidently forgot it by the time I tacked on the 'offering to make me a God' drivel. Nevertheless, as with (I believe it was) Psy-Kosh, my intuition tells me to take the deals and fast, before Omega changes his mind.

I couldn't resist posting a rebuttal to Torture vs. Dust Specks. Short version: the two types of suffering are not scalars that have the same units, so comparing them is not merely a math problem.

-2John_Maxwell
My own problem with the torture versus dust specks problem is that I'm not sure dust specks are bad at all. I don't remember ever being irritated by a dust speck. Actually as I've started thinking about this I've been slightly irritated by several, but they've never before registered in my consciousness (and therefore aren't bad).
4pre
Oh no! Does just mentioning the problem cause people to notice dust specks that would have otherwise gone unnoticed? If we ask 3^^^3 people the question, are we in fact causing more trouble than torturing a man for 50 years? If you ask one person and expect them to ask the advice of two others, who do the same in turn.... Seems best to stay quiet! Pre..........

Uhm - obvious answer: "Thank you very much for the hint that living forever is indeed a possibility permitted by the fundamental laws of the universe. I think I can figure it out before the lifespan my current odds give me are up, and if I can't, I bloody well deserve to die. Now kindly leave, I really do not intend to spend what could be the last hour of my life haggling."

Mostly, this paradox sets off mental alarm bells that someone is trying to sell us a bill of goods. A lot of the paradoxes that provoke paradoxical responses have this quality.

I would definitely take the first of these deals, and would probably swallow the bullet and continue down the whole garden path. I would be interested to know if Eliezer's thinking has changed on this matter since September 2009.

However, if I were building an AI which may be offered this bet for the whole human species, I would want it to use the Kelly criterion and decline, under the premise that if humans survive the next hour, there may well be bets later that could increase lifespan further. However, if the human species goes extinct at any point, the... (read more)
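To spell out the Kelly point a bit (my own framing; the comment is cut off above): the Kelly criterion maximizes the expected logarithm of the resource being wagered, and an all-or-nothing bet with the whole species at stake puts log(0) = -infinity on the losing branch, so it loses to any partial stake no matter how large the payoff.

```python
# Rough sketch: expected log of the remaining "resource" after staking some fraction
# of it on a bet that pays payoff_factor per unit staked with probability p_win.
# All numbers here are placeholders, not anything from the post.
import math

def expected_log(p_win, payoff_factor, fraction_staked):
    lose_term = (1 - p_win) * (math.log(1 - fraction_staked)
                               if fraction_staked < 1 else float("-inf"))
    win_term = p_win * math.log(1 - fraction_staked + fraction_staked * payoff_factor)
    return win_term + lose_term

print(expected_log(p_win=0.8, payoff_factor=10.0, fraction_staked=0.99))  # finite
print(expected_log(p_win=0.8, payoff_factor=10.0, fraction_staked=1.0))   # -inf: never stake everything
```

A log-resource maximizer therefore declines Omega's offer whenever accepting means risking the entire species, which is the behavior the comment is asking the AI to have.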

Summary of this retracted post:

Omega isn't offering an extended lifespan; it's offering an 80% chance of guaranteed death plus a 20% chance of guaranteed death. Before this offer was made, actual immortality was on the table, with maybe a one-in-a-million chance.

[This comment is no longer endorsed by its author]
0MugaSofer
How about if you didn't have that one-in-a-million chance? After all, life is good for more than immortality research.
2player_03
One-in-a-million is just an estimate. Immortality is a tough proposition, but the singularity might make it happen. The important part is that it isn't completely implausible. I'm not sure what you mean, otherwise. Are you suggesting that Omega takes away any chance of achieving immortality even before making the offer? In that case, Omega's a jerk, but I'll shut up and multiply. Or are you saying that 10^10,000,000,000 years could be used for other high-utility projects, like making simulated universes full of generally happy people? Immortality would allow even more time for that.
-2MugaSofer
#1, although I was thinking in terms of someone from a civilization with no singularity in sight. Thanks for clarifying!
1player_03
Ok, yeah, in that case my response is to take as many deals as Omega offers. AdeleneDawner and gwern provide a way to make the idea more palatable - assume MWI. That is, assume there will be one "alive" branch and a bunch of "dead" branches. That way, your utility payoff is guaranteed. (Ignoring the grief of those around you in all the "dead" branches.)

Without that interpretation, the idea becomes scarier, but the math still comes down firmly on the side of accepting all the offers. It certainly feels like a bad idea to accept that probability of death, no matter what the math says, but as far as I can tell that's scope insensitivity talking.

With that in mind, my only remaining objection is the "we can do better than that" argument presented above. My feeling is, why not use a few of those 10^10,000,000,000 years to figure out a way to live even longer? Omega won't allow it? Ok, so I don't want to get involved with Omega in the first place; it's not worth losing my (admittedly slim) chances at actual immortality. Too late for that? Fine, then I'll sit down, shut up, and multiply.

I have a hunch that 'altruism' is the mysterious key to this puzzle.

I don't have an elegant fix for this, but I came up with a kludgy decision procedure that would not have the issue.

Problem: you don't want to give up a decent chance of something good, for something even better that's really unlikely to happen, no matter how much better that thing is.

Solution: when evaluating the utility of a probabilistic combination of outcomes, instead of taking the average of all of them, remove the top 5% (this is a somewhat arbitrary choice) and find the average utility of the remaining outcomes.

For example, assume utility is proport... (read more)
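A rough sketch of that trimming procedure in code (the 5% figure and the example gambles are placeholders of my own; I'm also assuming "average" means the probability-weighted average of what's left, renormalized):

```python
def trimmed_expected_utility(outcomes, trim=0.05):
    """outcomes: list of (probability, utility) pairs summing to probability 1.
    Discard the best outcomes until `trim` of the probability mass is gone,
    then take the probability-weighted average utility of the rest."""
    best_first = sorted(outcomes, key=lambda pu: pu[1], reverse=True)
    mass_to_drop = trim
    kept = []
    for p, u in best_first:
        if mass_to_drop > 0:
            dropped = min(p, mass_to_drop)
            mass_to_drop -= dropped
            p -= dropped
        if p > 0:
            kept.append((p, u))
    total = sum(p for p, _ in kept)
    return sum(p * u for p, u in kept) / total

# An Omega-style gamble: tiny chance of a huge payoff, large chance of nothing.
print(trimmed_expected_utility([(0.001, 10**9), (0.999, 0)]))  # 0.0 -- the huge payoff is trimmed away
print(trimmed_expected_utility([(0.8, 100), (0.2, 0)]))        # ~78.9 -- ordinary gambles barely change
```

As intended, an outcome that only happens with probability well under the cutoff contributes nothing, so no prize, however large, can drag the evaluation around on its own.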

I could choose an arbitrary cut-off like 75%, which still buys me 10^^645385211 years of life (in practice, I would be more likely to go with 60%, but that's really a personal preference). Of course, I lose out on the tetration and faster-growth options from refusing that deal, but Omega never mentioned those and I have no particular reason to expect them.

Of course, this starts to get into Pascal's mugging territory, because it really comes down to how much you trust Omega, or the person representing Omega. I can't think of any real-life observations I co... (read more)

[-][anonymous]00

Each offer is one I want to accept, but I eventually have to turn one down in order to gain from it. Let's say I don't trust my mind to truly be so arbitrary, though, so I use actual arbitrariness. Each offer multiplies my likelihood of survival by 99.9999%, so let's say each time I give myself a 99.999% chance of accepting the next offer (the use of one fewer 9 is deliberate). I'm likely to accept a lot of offers without significant decrease in my likelihood of survival. But wait, I just did better by letting probability choose than if I myself had chosen. Pe... (read more)
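Running the numbers on that scheme (my own arithmetic, using the geometric distribution of "offers accepted before the first random refusal"):

```python
q_survive_per_offer = 0.999999   # each accepted offer multiplies survival probability by this
p_accept_next = 0.99999          # chance of accepting each successive offer

# Number of accepted offers N is geometric: P(N = n) = p_accept^n * (1 - p_accept).
# Expected overall survival factor is E[q^N] = (1 - p) / (1 - p*q).
expected_offers = p_accept_next / (1 - p_accept_next)
expected_factor = (1 - p_accept_next) / (1 - p_accept_next * q_survive_per_offer)
print(expected_offers)   # ~100,000 offers accepted on average
print(expected_factor)   # ~0.91: on average, survival probability keeps about 91% of its value
```

So the randomizing agent typically walks away with an enormous number of accepted offers while giving up only around a tenth of its survival probability in expectation.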

What am I doing wrong? I think that one obviously should be happy with 1<(insert ridiculous amount of zeros)> years for a 1 - 1:10^1000 chance of dying within an hour - in a simplistic way of thinking. I could take into account things like "What's going to happen to the rest of all sentient beings", "what's up with humanity after that", and even more importantly, if this offer were to be available to every sentient being, I should assign huge negative utility to the chance of all life being terminated due to a ridiculously low chance of a... (read more)

[-]gjm00

My own analysis of the Repugnant Conclusion [...]

... is, I am gratified to see, the same as mine.

When TORTURE v DUST SPECKS was discussed before, some people made suggestions along the following lines: perhaps when you do something to N people the resulting utility change only increases as fast as (something like) the smallest program it takes to output a number as big as N. (No one put it quite like that, which is perhaps just as well since I'm not sure it can be made to make sense. But, e.g., Tom McCabe proposed that if you inflict a dust speck on 3^^... (read more)

2Wei Dai
As long as your U(live n years) is unbounded, my reductio holds. With the discounting scheme you're proposing, Omega will need to offer you uncomputable amounts of lifespan to induce you to accept his offers, but you'll still accept them and end up with a 1/3^^^3 chance of a finite lifespan.
0gjm
How is he going to describe to me what these uncomputable amounts of lifespan are, and how will he convince me that they're big enough to justify reducing the probability of getting them?
1Wei Dai
By using non-constructive notation, like BusyBeaver(10^n). Surely you can be convinced that the smallest program it takes to output a number as big as BusyBeaver(10^n) is of size 10^n, and therefore accept a 10-fold reduction in probability to increase n by 1? Also, if you can't be convinced, then your utility function is effectively bounded.
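Spelling that out a little (my own formalization of the scheme under discussion, not Wei Dai's exact words):

```latex
% The proposed discounting: utility grows like the length of the shortest program
% that outputs a number at least as large as the lifespan.
\[
  U(\text{live } N \text{ years}) \;\approx\; \min\{\, |p| : p \text{ halts and prints some } M \ge N \,\}.
\]
% Since $\mathrm{BB}(k)$ is by definition the largest output of any halting program of
% length at most $k$, any program printing a number bigger than $\mathrm{BB}(k)$ must be
% longer than $k$, so
\[
  U\bigl(\text{live } \mathrm{BB}(10^{n}) \text{ years}\bigr) \;\approx\; 10^{n},
\]
% which is still unbounded in $n$: each increment of $n$ multiplies this discounted
% utility by about 10, which is the exchange rate the offer above relies on.
```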
0Simulacra
Somewhere I missed something: is there something wrong with bounded utilities? Every usable solution to these manipulations of infinity gets dismissed because it is bounded; if these solutions work, what is the problem?
1pengvado
If your utility function is in fact bounded, then there's nothing wrong with that. But the utility function isn't up for grabs. If I care about something without bound, then I can't solve the dilemma by switching to a bounded utility function; that would simply make me optimize for some metric other than the one I wanted.
0Wei Dai
What does "the utility function isn't up for grabs" mean? I think Eliezer originated that phrase, but he apparently also believes that we can be and should be persuaded by (some) moral arguments. Aren't these two positions contradictory? (It seems like a valid or at least coherent, and potentially persuasive, argument that unbounded utility functions lead to absurd decisions.)
1Johnicholas
A notion can be constant and yet we can learn about it. For example: "The set of all prime numbers" is clearly unchanged by our reasoning, and yet we learn about it (whether it is finite, for example). Kripke used (for a different purpose) the morning star and the evening star. The concepts are discovered to be the same concept (from scientific evidence). The argument that unbounded utility functions lead to absurdity is also persuasive.
1Wei Dai
That seems to be a reasonable interpretation, but if we do interpret "the utility function isn't up for grabs" that way, as a factual claim that each person has a utility function that can be discovered but not changed by moral arguments and reasoning, then I think it's far from clear that the claim is true. There could be other interpretations that may or may not be more plausible, and I'm curious what Eliezer's own intended meaning is, as well as what pengvado meant by it.
1Johnicholas
There is a sense in which anything that makes choices does have a utility function - the utility function revealed by their choices. In this sense, for example, akrasia doesn't exist. People prefer to procrastinate, as revealed by their choice to procrastinate. People frequently slip back and forth between this sense of "utility function" (a rather strange description of their behavior, whatever that is) and the utilitarian philosophers' notions of "utility", which have something to do with happiness/pleasure/fun. To the extent that people pursue happiness, pleasure, and fun, the two senses overlap. However, in my experience, people frequently make themselves miserable or make choices according to lawful rules (of morality, say) - without internal experiences of pleasure in following those rules.
1pengvado
And it's worse than just akrasia. If you have incoherent preferences and someone money-pumps you, then the revealed utility function is "likes running around in circles", i.e. it isn't even about the choices you thought you were deciding between.
2Johnicholas
Yup. Speaking as if "everyone" has a utility function is common around here, but it makes my teeth hurt.
0pengvado
I agree that if you can derive from my preferences a conclusion which is judged absurd by my current preferences, that's grounds to change my preferences. Though unless it's a preference reversal, such a derivation usually rests on both the preferences and the decision algorithm. In this case, as long as you're evaluating expected utility, a 1/bignum probability of +biggernum utilons is just a good deal.

Afaict the nontrivial question is how to apply the thought experiment to the real world, where I don't have perfect knowledge or well calibrated probabilities, and want my mistakes to not be catastrophic. And the answer to that might be a decision algorithm that doesn't look exactly like expected utility maximization, but whose analogue of the utility function is still unbounded. Not that I have any more precise suggestions.

What if you aren't balancing tiny probabilities, and Omega just gives you an 80% chance of 10^^3 years and asks you if you want to pay a penny to switch to an 80% chance of 10^^4? Assuming both of those are so far into the diminishing-returns end of your bounded utility function that you see a negligible (< 20% of a penny) difference between them, that seems to me like an absurd conclusion in the other direction. Just giving up an unbounded reward is a mistake too.

Since we're talking about expected utility, I'd rather you answered this old question of mine...

The problem is that using a bounded utility function to prevent this sort of thing will lead to setting arbitrary bounds on how small you want the probability of continuing to live to go, or arbitrary bounds on how long you want to live, just like the arbitrary bounds that people tried to set up in the Dust Specks and Torture case.

On the other hand, an unbounded utility function, as I have said many times previously, leads to accepting a 1/(3^^^3) probability of some good result, as long as it is good enough, and so it results in accepting the Mugging and the Wager and so on.
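Here is a toy illustration of the contrast (entirely my own parameters: a hypothetical bounded utility 1 - 1/(1 + log10(lifespan)), an unbounded log10 utility, and offers that multiply the lifespan's exponent by 1000 in exchange for a 0.9999 factor on the survival probability):

```python
# Lifespans are tracked as L = log10(years), since the actual numbers overflow floats.
P0, L0 = 0.8, 1e10                    # start: 80% chance of 10^(10^10) years
GROWTH, P_FACTOR = 1000.0, 0.9999     # each offer: exponent x1000, probability x0.9999

def u_unbounded(L):                   # e.g. u(T) = log10(T); any unbounded u behaves similarly
    return L

def u_bounded(L):                     # e.g. u(T) = 1 - 1/(1 + log10(T)), bounded above by 1
    return 1.0 - 1.0 / (1.0 + L)

def offers_accepted(u, max_offers=50):
    p, L, n = P0, L0, 0
    while n < max_offers:
        # accept only if the offer raises expected utility
        if P_FACTOR * p * u(GROWTH * L) > p * u(L):
            p, L, n = P_FACTOR * p, GROWTH * L, n + 1
        else:
            break
    return n, p

print(offers_accepted(u_unbounded))   # (50, ~0.796): accepts every offer shown; probability keeps eroding
print(offers_accepted(u_bounded))     # (0, 0.8): declines the very first offer
```

With the bounded function, the status quo is already within a hair of the utility bound, so even a 0.01% probability cut outweighs any possible lifespan gain; with the unbounded one, every offer looks like a good deal, which is exactly the garden path (and the Mugging and Wager behavior) described above.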

The original thread is here. A Google search for "wei_dai lesswrong lifetime" found it.

ETA: The solution I proposed is down in the thread here.

If I wanted to be depressing, I'd say that, right now, my utility is roughly constant with respect to future lifespan...

Does the paradox go away if we set U(death) = -∞ utilons (making any increase in the chance of dying in the next hour impossible to overcome)? Does that introduce worse problems?

5Psy-Kosh
But U(death in bignum years) would also be -infinity utilons then, right? This problem was explicitly constructed as "living a long time and then dying vs living a short time and then dying."
3Larks
However, this doesn't describe people's actual utility functions - the fact that people cross the road shows they're willing to take a small risk of death for other rewards.
1RolfAndreassen
I think this needs a bit of refinement, but it might work. Humans have a pretty strong immediacy bias; a greater than 0.1% chance of dying in the next hour really gets our attention. Infinity is way too strong; people do stand their ground on battlefields and such. But certainly you can assign a vast negative utility to that outcome as a practical description of how humans actually think, rather than as an ideal utility function describing how we ought to think.

Perhaps the problem here is that you're assuming that utility(probability, outcome) is the same as probability*utility(outcome). If you don't assume this, and calculate as if the utility of extra life decreased with the chance of getting it, the problem goes away, since no amount of life will drive the probability down below a certain point. This matches intuition better, for me at least.

EDIT: What's with the downvotes?

In circumstances where the law of large numbers doesn't apply, the utility of a probability of an outcome cannot be calculated from jus... (read more)

2gwern
I for one don't follow your math; those 2 figures do look the same to me. Could you give some examples of how they give different answers?