# Exterminating life is rational

18 06 August 2009 04:17PM

I don't mean that deciding to exterminate life is rational.  But if, as a society of rational agents, we each maximize our expected utility, this may inevitably lead to our exterminating life, or at least intelligent life.

Ed Regis reports on p 216 of “Great Mambo Chicken and the TransHuman Condition,” (Penguin Books, London, 1992):

Edward Teller had thought about it, the chance that the atomic explosion would light up the surrounding air and that this conflagration would then propagate itself around the world. Some of the bomb makers had even calculated the numerical odds of this actually happening, coming up with the figure of three chances in a million they’d incinerate the Earth. Nevertheless, they went ahead and exploded the bomb.

Was this a bad decision?  Well, consider the expected value to the people involved.  Without the bomb, there was a much, much greater than 3/1,000,000 chance that either a) they would be killed in the war, or b) they would be ruled by Nazis or the Japanese.  The loss to them if they ignited the atmosphere would be another 30 or so years of life.  The loss to them if they lost the war and/or were killed by their enemies would also be another 30 or so years of life.  The loss in being conquered would also be large.  Easy decision, really.

Suppose that, once a century, some party in a conflict chooses to use some technique to help win the conflict that has a p = 3/1,000,000 chance of eliminating life as we know it.  Then our expected survival time is 100 times the sum from n=1 to infinity of np(1-p)^(n-1).  That sum is just 1/p, so our expected survival time is 100/p ≈ 33,333,333 years.
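A minimal sketch of the calculation above (mine, not the post's), treating each century as an independent trial with extinction probability p, so the number of centuries survived is geometrically distributed:

```python
# Per-century extinction probability, as in the post.
p = 3 / 1_000_000

# Closed form: sum_{n>=1} n * p * (1-p)^(n-1) = 1/p centuries.
expected_years = 100 / p  # ~33.3 million years

# Numerical check, accumulating the series term by term:
total, tail = 0.0, 1.0    # tail tracks (1-p)^(n-1)
for n in range(1, 5_000_000):
    total += n * p * tail
    tail *= 1 - p
# 100 * total approaches expected_years as the truncation point grows
```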

This supposition seems reasonable to me.  There is a balance between offensive and defensive capability that shifts as technology develops.  If technology keeps changing, it is inevitable that, much of the time, a technology will provide the ability to destroy all life before the counter-technology to defend against it has been developed.  In the near future, biological weapons will be more able to wipe out life than we are able to defend against them.  We may then develop the ability to defend against biological attacks; we may then be safe until the next dangerous technology.

If you believe in accelerating change, then the number of important events in a given time interval increases exponentially, or, equivalently, the time intervals that should be considered equivalent opportunities for important events shorten exponentially.  The 34M years remaining to life is then in subjective time, and must be mapped into realtime.  If we suppose the subjective/real time ratio doubles every 100 years, this gives life an expected survival time of 2000 more realtime years.  If we instead use Ray Kurzweil's doubling time of about 2 years, this gives life about 40 remaining realtime years.  (I don't recommend Ray's figure.  I'm just giving it for those who do.)
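The subjective-to-realtime mapping can be sketched numerically (a toy model of my own, assuming the subjective/real ratio doubles smoothly every `doubling` realtime years and that we spend down a 34-million-subjective-year budget):

```python
def realtime_until(subjective_budget, doubling):
    """Realtime years until the subjective-year budget is exhausted."""
    elapsed_subjective, t = 0.0, 0
    while elapsed_subjective < subjective_budget:
        elapsed_subjective += 2 ** (t / doubling)  # subjective years in realtime year t
        t += 1
    return t

print(realtime_until(34e6, 100))  # ~1,800 realtime years (the post rounds to 2000)
print(realtime_until(34e6, 2))    # ~48 realtime years, with Kurzweil's doubling time
```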

Please understand that I am not yet another "prophet" bemoaning the foolishness of humanity.  Just the opposite:  I'm saying this is not something we will outgrow.  If anything, becoming more rational only makes our doom more certain.  For the agents who must actually make these decisions, it would be irrational not to take these risks.  The fact that this level of risk-tolerance will inevitably lead to the snuffing out of all life does not make the expected utility of these risks negative for the agents involved.

I can think of only a few ways that rationality might avoid exterminating all life in the cosmologically (even geologically) near future:

• We can outrun the danger:  We can spread life to other planets, and to other solar systems, and to other galaxies, faster than we can spread destruction.

• Technology will not continue to develop, but will stabilize in a state in which all defensive technologies provide absolute, 100%, fail-safe protection against all offensive technologies.

• People will stop having conflicts.

• Rational agents incorporate the benefits to others into their utility functions.

• Rational agents with long lifespans will protect the future for themselves.

• Utility functions will change so that it is no longer rational for decision-makers to take tiny chances of destroying life for any amount of utility gains.

• Independent agents will cease to exist, or to be free (the Singleton scenario).

Let's look at these one by one:

## We can outrun the danger.

We will colonize other planets; but we may also figure out how to make the Sun go nova on demand.  We will colonize other star systems; but we may also figure out how to liberate much of the energy in the black hole at the center of our galaxy in a giant explosion that will move outward at near the speed of light.

One problem with this idea is that apocalypses are correlated; one may trigger another.  A disease may spread to another planet.  The choice to use a planet-busting bomb on one planet may lead to its retaliatory use on another planet.  It's not clear whether spreading out and increasing in population actually makes life more safe.  If you think in the other direction, a smaller human population (say ten million) stuck here on Earth would be safer from human-instigated disasters.

But neither of those is my final objection.  More important is that our compression of subjective time can be exponential, while our ability to escape from ever-broader swaths of destruction is limited by lightspeed.

## Technology will stabilize in a safe state.

Maybe technology will stabilize, and we'll run out of things to discover.  If that were to happen, I would expect that conflicts would increase, because people would get bored.  As I mentioned in another thread, one good explanation for the incessant and counterproductive wars in the middle ages - a reason some of the actors themselves gave in their writings - is that the nobility were bored.  They did not have the concept of progress; they were just looking for something to give them purpose while waiting for Jesus to return.

But that's not my final rejection.  The big problem is that by "safe", I mean really, really safe.  We're talking about bringing existential threats to chances less than 1 in a million per century.  I don't know of any defensive technology that can guarantee a less than 1 in a million failure rate.

## People will stop having conflicts.

That's a nice thought.  A lot of people - maybe the majority of people - believe that we are inevitably progressing along a path to less violence and greater peace.

They thought that just before World War I.  But that's not my final rejection.  Evolutionary arguments are a more powerful reason to believe that people will continue to have conflicts.  Those that avoid conflict will be out-competed by those that do not.

But that's not my final rejection either.  The bigger problem is that this isn't something that arises only in conflicts.  All we need are desires.  We're willing to tolerate risk to increase our utility.  For instance, we're willing to take some unknown, but clearly greater than one in a million chance, of the collapse of much of civilization due to climate warming.  In return for this risk, we can enjoy a better lifestyle now.

Also, we haven't burned all physics textbooks along with all physicists.  Yet I'm confident there is at least a one in a million chance that, in the next 100 years, some physicist will figure out a way to reduce the earth to powder, if not to crack spacetime itself and undo the entire universe.  (In fact, I'd guess the chance is nearer to 1 in 10.)¹  We take this existential risk in return for a continued flow of benefits such as better graphics in Halo 3 and smaller iPods.  And it's reasonable for us to do this, because an improvement in utility of 1% over an agent's lifespan is, to that agent, exactly balanced by a 1% chance of destroying the Universe.
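A quick check (mine, not the post's) of that last claim, with baseline lifetime utility normalized to 1:

```python
U = 1.0                                   # baseline lifetime utility
status_quo = U
# 1% chance of annihilation (utility 0), otherwise a 1% utility gain:
gamble = 0.99 * (1.01 * U) + 0.01 * 0.0   # = 0.9999, a hair under U
```

So the two options differ by one part in ten thousand: to an expected-utility maximizer they are, for practical purposes, balanced.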

The Wikipedia entry on Large Hadron Collider risk says, "In the book Our Final Century: Will the Human Race Survive the Twenty-first Century?, English cosmologist and astrophysicist Martin Rees calculated an upper limit of 1 in 50 million for the probability that the Large Hadron Collider will produce a global catastrophe or black hole."  The more authoritative "Review of the Safety of LHC Collisions" by the LHC Safety Assessment Group concluded that there was at most a 1 in 10³¹ chance of destroying the Earth.

The LHC conclusions are criminally low.  Their evidence was this: "Nature has already conducted the LHC experimental programme about one billion times via the collisions of cosmic rays with the Sun - and the Sun still exists."  There followed a couple of sentences of handwaving to the effect that if any other stars had turned to black holes due to collisions with cosmic rays, we would know it - apparently due to our flawless ability to detect black holes and ascertain what caused them - and therefore we can multiply this figure by the number of stars in the universe.

I believe there is much more than a one-in-a-billion chance that one of the steps used in arriving at these figures is incorrect.  Based on my experience with peer-reviewed papers, there's at least a one-in-ten chance that there's a basic arithmetic error in their paper that no one has noticed yet.  My own estimate of the catastrophe probability is more like one-in-a-million, once you correct for the anthropic principle and for the chance that there is a mistake in the argument.  (That's based on a belief that the prior for anything likely enough that smart people even thought of the possibility should be larger than one in a billion, unless they were specifically trying to think of examples of low-probability possibilities, such as all of the air molecules in the room moving to one side.)

The Trinity test was done for the sake of winning World War II.  But the LHC was turned on for... well, no practical advantage that I've heard of yet.  It seems that we are willing to tolerate one-in-a-million chances of destroying the Earth for very little benefit.  And this is rational, since the LHC will probably improve our lives by more than one part in a million.

## Rational agents incorporate the benefits to others into their utility functions.

"But," you say, "I wouldn't risk a 1% chance of destroying the universe for a 1% increase in my utility!"

Well... yes, you would, if you're a rational expectation maximizer.  It's possible that you would take a much higher risk, if your utility is at risk of going negative; it's not possible that you would refuse a 0.99% risk, unless you are not maximizing expected value, or you assign the null state after universe-destruction negative utility.  (This seems difficult, but is worth exploring.)  If you still think that you wouldn't, it's probably because you're thinking a 1% increase in your utility means something like a 1% increase in the pleasure you experience.  It doesn't.  It's a 1% increase in your utility.  If you factor the rest of your universe into your utility function, then it's already in there.

The US national debt should be enough to convince you that people act in their self-interest.  Even the most moral people - in fact, especially the "most moral" people - do not incorporate the benefits to others, especially future others, into their utility functions.  If we did that, we would engage in massive eugenics programs.  But eugenics is considered the greatest immorality.

But maybe they're just not as rational as you.  Maybe you really are a rational saint who considers your own pleasure no more important than the pleasure of everyone else on Earth.  Maybe you have never, ever bought anything for yourself that did not bring you as much benefit as the same amount of money would if spent to repair cleft palates or distribute vaccines or mosquito nets or water pumps in Africa.  Maybe it's really true that, if you met the girl of your dreams and she loved you, and you won the lottery, put out an album that went platinum, and got published in Science, all in the same week, it would make an imperceptible change in your utility versus if everyone you knew died, Bernie Madoff spent all your money, and you were unfairly convicted of murder and diagnosed with cancer.

It doesn't matter.  Because you would be adding up everyone else's utility, and everyone else is getting that 1% extra utility from the better graphics cards and the smaller iPods.

But that will stop you from risking atmospheric ignition to defeat the Nazis, right?  Because you'll incorporate them into your utility function?  Well, that is a subset of the claim "People will stop having conflicts."  See above.

And even if you somehow worked around all these arguments, evolution, again, thwarts you.²  Even if you don't agree that rational agents are selfish, your unselfish agents will be out-competed by selfish agents.  The claim that rational agents are not selfish implies that rational agents are unfit.

## Rational agents with long lifespans will protect the future for themselves.

The most familiar idea here is that, if people expect to live for millions of years, they will be "wiser" and take fewer risks with that time.  But the flip side is that they also have more time to lose.  If they're deciding whether to risk igniting the atmosphere in order to lower the risk of being killed by Nazis, lifespan cancels out of the equation.

Also, if they live a million times longer than us, they're going to get a million times the benefit of those nicer iPods.  They may be less willing to take an existential risk for something that will benefit them only temporarily.  But benefits have a way of increasing, not decreasing, over time.  The discovery of the law of gravity and of the invisible hand benefit us in the 21st century more than they did the people of the 17th century.

But that's not my final rejection.  More important is time-discounting.  Agents will time-discount, probably exponentially, due to uncertainty.  If you considered benefits to the future without exponential time-discounting, the benefits to others and to future generations would outweigh any benefits to yourself so much that in many cases you wouldn't even waste time trying to figure out what you wanted.  And, since future generations will be able to get more utility out of the same resources, we'd all be obliged to kill ourselves, unless we reasonably think that we are contributing to the development of that capability.

Time discounting is always (so far) exponential, because non-asymptotic functions don't make sense.  I suppose you could use a trigonometric function instead for time discounting, but I don't think it would help.
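A sketch (mine, not the post's) of one way to read that: with a constant per-period discount factor, the discounted total of a constant benefit stream stays finite, while a slower, polynomial discount like 1/t produces partial sums that grow without bound:

```python
def discounted_total(weight, periods):
    """Sum a unit benefit per period under the given discount weighting."""
    return sum(weight(t) for t in range(1, periods + 1))

exp_total = discounted_total(lambda t: 0.99 ** t, 100_000)  # converges: 0.99/0.01 = 99
slow_total = discounted_total(lambda t: 1 / t, 100_000)     # ~12.1, and still climbing
```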

Could a continued exponential population explosion outweigh exponential time-discounting?  Well, you can't have a continued exponential population explosion, because of the speed of light and the Planck constant.  (I leave the details as an exercise to the reader.)

Also, even if you had no time-discounting, I think that a rational agent must do identity-discounting.  You can't stay you forever.  If you change, the future you will be less like you, and weigh less strongly in your utility function.  Objections to this generally assume that it makes sense to trace your identity by following your physical body.  Physical bodies will not have a 1-1 correspondence with personalities for more than another century or two, so just forget that idea.  And if you don't change, well, what's the point of living?

Evolutionary arguments may help us with self-discounting.  Evolutionary forces encourage agents to emphasize continuity or ancestry over resemblance in an agent's selfness function.  The major variable is reproduction rate over lifespan.  This applies to genes or memes.  But they can't help us with time-discounting.

I think there may be a way to make this one work.  I just haven't thought of it yet.

## A benevolent singleton will save us all.

This case takes more analysis than I am willing to do right now.  My short answer is that I place a very low expected utility on singleton scenarios.  I would almost rather have the universe eat, drink, and be merry for 34 million years, and then die.

I'm not ready to place my faith in a singleton.  I want to work out what is wrong with the rest of this argument, and how we can survive without a singleton.

(Please don't conclude from my arguments that you should go out and create a singleton.  Creating a singleton is hard to undo.  It should be deferred nearly as long as possible.  Maybe we don't have 34 million years, but this essay doesn't give you any reason not to wait a few thousand years at least.)

## In conclusion

I think that the figures I've given here are conservative.  I expect existential risk to be much greater than 3/1,000,000 per century.  I expect there will continue to be externalities that cause suboptimal behavior, so that the actual risk will be greater even than the already-sufficient risk that rational agents would choose.  I expect population and technology to continue to increase, and existential risk to be proportional to population times technology.  Existential risk will very possibly increase exponentially, on top of the subjective-time exponential.

Our greatest chance for survival is that there's some other possibility I haven't thought of yet.  Perhaps some of you will.

¹ If you argue that the laws of physics may turn out to make this impossible, you don't understand what "probability" means.

² Evolutionary dynamics, the speed of light, and the Planck constant are the three great enablers and preventers of possible futures, which enable us to make predictions farther into the future and with greater confidence than seems intuitively reasonable.

Comment author: 06 August 2009 07:22:14PM 5 points [-]

Although your conclusions are very depressing, it seems I must accept them. The other commenters' reluctance to agree puzzles me.

Comment author: 09 August 2009 07:22:15PM *  4 points [-]

I find the analysis largely convincing as well, and further feel a 3/1M chance per century of existential disaster is extremely conservative. But I also don't find the idea of a singleton depressing. Bostrom suggests the idea of a singleton being a world democratic government or a benevolent superintelligent machine, which Eliezer's CEV seems able to realize, at least with my initial understanding. It even seems possible that singletons such as that might dissolve themselves if that's what was desired (<-serious handwaving), but I admit that a singleton has such potential for staying power that it's probably best to assume it's "forever".

With my views on the varied risks we face, the unique potential of a singleton to solve many of them, and with a personal estimate of .5 probability of surviving this century at best, a singleton seems worth looking into.  It's a huge danger itself, but I think we ought to investigate the best ways to make a "safe" singleton at the same time as looking for ways to avoid risk without one, not waiting until we are sure we absolutely need one.

I realize this was not the focus of the post and so apologize if it's too off-topic. I wanted to draw more attention to it as a potential solution, though I don't mean to withdraw attention from the post's central issues.

Comment author: 06 August 2009 09:09:06PM *  11 points [-]

Here's a possible problem with my analysis:

Suppose Omega or one of its ilk says to you, "Here's a game we can play. I have an infinitely large deck of cards here. Half of them have a star on them, and one-tenth of them have a skull on them. Every time you draw a card with a star, I'll double your utility for the rest of your life. If you draw a card with a skull, I'll kill you."

How many cards do you draw?

I'm pretty sure that someone who believes in many worlds will keep drawing cards until they die.  But even if you don't believe in many worlds, I think you do the same thing, unless you are not maximizing expected utility.  (Unless chance is quantized so that there is a minimum possible probability.  I don't think that would help much anyway.)
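A sketch of the game's arithmetic (mine, and it assumes a blank card, the unspecified remaining 40% of the deck, leaves utility unchanged): each draw multiplies expected utility by 0.5·2 + 0.1·0 + 0.4·1 = 1.4, so an expected-utility maximizer never stops drawing, even though the chance of surviving n draws is 0.9ⁿ, which goes to zero.

```python
def after_n_draws(n, u=1.0):
    """Survival probability and unconditional expected utility after n draws."""
    p_alive = 0.9 ** n         # no skull in any of the n draws
    expected_u = u * 1.4 ** n  # each draw multiplies E[utility] by 1.4
    return p_alive, expected_u

p, eu = after_n_draws(100)
# p is under 0.003%, yet eu is astronomically large: keep drawing.
```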

So this whole post may boil down to "maximizing expected utility" not actually being the right thing to do. Also see my earlier, equally unpopular post on why expectation maximization implies average utilitarianism. If you agree that average utilitarianism seems wrong, that's another piece of evidence that maximizing expected utility is wrong.

Comment author: 06 August 2009 10:46:31PM *  7 points [-]

Reformulation to weed out uninteresting objections: Omega knows your expected utility if you go on without its intervention, U1, and your utility if it kills you, U0 < U1.  It presents a choice between walking away (keeping expected utility U1) and playing a lottery that gives you, with equal (50%) probability, either U0 or U1 + 3*(U1-U0).  The expected utility of the lottery is then 0.5*(4*U1-2*U0) = U1 + (U1-U0) > U1.

My answer: even in a deterministic world, I take the lottery as many times as Omega has to offer, knowing that the probability of death tends to certainty as I go on. This example is only invalid for money because of diminishing returns. If you really do possess the ability to double utility, low probability of positive outcome gets squashed by high utility of that outcome.

Comment author: 06 August 2009 11:09:55PM *  2 points [-]

Does my entire post boil down to this seeming paradox?

(Yes, I assume Omega can actually double utility.)

The use of U1 and U0 is needlessly confusing. And it changes the game, because now, U0 is a utility associated with a single draw, and the analysis of doing repeated draws will give different answers. There's also too much change in going from "you die" to "you get utility U0". There's some semantic trickiness there.

Comment author: 07 August 2009 12:37:56AM 11 points [-]

Pretty much. And I should mention at this point that experiments show that, contrary to instructions, subjects nearly always interpret utility as having diminishing marginal utility.

Comment author: 07 August 2009 03:52:55AM 1 point [-]

Well, that leaves me even less optimistic than before. As long as it's just me saying, "We have options A, B, and C, but I don't think any of them work," there are a thousand possible ways I could turn out to be wrong. But if it reduces to a math problem, and we can't figure out a way around that math problem, hope is harder.

Comment author: 16 May 2011 08:28:58PM 0 points [-]

There's an excellent paper by Peter de Blanc indicating that under reasonable assumptions, if your utility function is unbounded, then you can't compute finite expected utilities. So if Omega can double your utility an unlimited number of times, you have other problems that cripple you in the absence of involvement from Omega. Doubling your utility should be a mathematical impossibility at some point.

That demolishes "Shut up and Multiply", IMO.

SIAI apparently paid Peter to produce that. It should get more attention here.

Comment author: 16 May 2011 08:55:29PM *  2 points [-]

So if Omega can double your utility an unlimited number of times

This was not assumed, I even explicitly said things like "I take the lottery as many times as Omega has to offer" and "If you really do possess the ability to double utility". To the extent doubling of utility is actually provided (and no more), we should take the lottery.

Comment author: 16 May 2011 09:06:13PM *  3 points [-]

Also, if your utility function's scope is not limited to perception-sequences, Peter's result doesn't directly apply. If your utility function is linear in actual, rather than perceived, paperclips, Omega might be able to offer you the deal infinitely many times.

Comment author: 16 May 2011 09:14:40PM 1 point [-]

Also, if your utility function's scope is not limited to perception-sequences, Peter's result doesn't directly apply.

How can you act upon a utility function if you cannot evaluate it? The utility function needs inputs describing your situation. The only available inputs are your perceptions.

Comment author: 16 May 2011 09:35:45PM *  4 points [-]

The utility function needs inputs describing your situation. The only available inputs are your perceptions.

Not so. There's also logical knowledge and logical decision-making where nothing ever changes and no new observations ever arrive, but the game still can be infinitely long, and contain all the essential parts, such as learning of new facts and determination of new decisions.

(This is of course not relevant to Peter's model, but if you want to look at the underlying questions, then these strange constructions apply.)

Comment author: 17 May 2011 10:47:38AM 2 points [-]

"Every time you draw a card with a star, I'll double your utility for the rest of your life. If you draw a card with a skull, I'll kill you."

I have a problem with "double your utility for the rest of your life". Are we talking about utilons per second? Or do you mean "double the utility of your life", or just "double your utility"? How does dying a couple of minutes later affect your utility? Do you get the entire (now doubled) utility for those few minutes? Do you get pro rata utility for those few minutes divided by your expected lifespan?

Related to this is the question of the utility penalty of dying. If your utility function includes benefits for other people, then your best bet is to draw cards until you die, because the benefits to the rest of the universe will massively outweigh the inevitability of your death.

If, on the other hand, death sets your utility to zero (presumably because your utility function is strictly only a function of your own experiences), then... yeah. If Omega really can double your utility every time you win, then I guess you keep drawing until you die. It's an absurd (but mathematically plausible) situation, so the absurd (but mathematically plausible) answer is correct. I guess.

Comment author: 07 August 2009 12:20:59AM *  2 points [-]

Can utility go arbitrarily high? There are diminishing returns on almost every kind of good thing. I have difficulty imagining life with utility orders of magnitude higher than what we have now. Infinitely long youth might be worth a lot, but even that is only so many doublings due to discounting.

I'm curious why it's getting downvoted without reply. Related thread here. How high do you think "utility" can go?

Comment author: 07 August 2009 02:53:14PM 7 points [-]

I would guess you're being downvoted by someone who is frustrated not by you so much as by all the other people before you who keep bringing up diminishing returns even though the concept of "utility" was invented to get around that objection.

"Utility" is what you have after you've factored in diminishing returns.

We do have difficulty imagining orders of magnitude higher utility. That doesn't mean it's nonsensical. I think I have orders of magnitude higher utility than a microbe, and that the microbe can't understand that. One reason we develop mathematical models is that they let us work with things that we don't intuitively understand.

If you say "Utility can't go that high", you're also rejecting utility maximization. Just in a different way.

Comment author: 07 August 2009 04:54:13PM 0 points [-]

Nothing about utility maximization model says utility function is unbounded - the only mathematical assumptions for a well behaved utility function are U'(x) >= 0, U''(x) <= 0.

If the function is, let's say, U(x) = 1 - 1/(1+x), with U'(x) = (x+1)^-2, then it's a properly behaving utility function, yet it never even reaches 1.
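A quick illustration of that function (my sketch): it is increasing and bounded above by 1, so once U(x) exceeds 0.5, "double your utility" is an offer that cannot be honored.

```python
def U(x):
    """Bounded, increasing utility function: U(x) = 1 - 1/(1+x) < 1."""
    return 1 - 1 / (1 + x)

# Already close to the bound at x = 9:
assert abs(U(9) - 0.9) < 1e-12
# Bounded above by 1 no matter how large x gets:
assert U(10**9) < 1
# So no x satisfies U(x) == 2 * U(9) == 1.8: doubling is impossible here.
```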

And utility maximization is just a model that breaks easily - it can be useful for humans to some limited extent, but we know humans break it all the time. Trying to imagine utilities orders of magnitude higher than current gets it way past its breaking point.

Comment author: 07 August 2009 05:35:44PM *  6 points [-]

Nothing about utility maximization model says utility function is unbounded

Yep.

the only mathematical assumptions for a well behaved utility function are U'(x) >= 0, U''(x) <= 0

Utility functions aren't necessarily over domains that allow their derivatives to be scalar, or even meaningful (my notional u.f., over 4D world-histories or something similar, sure isn't). Even if one is, or if you're holding fixed all but one (real-valued) of the parameters, this is far too strong a constraint for non-pathological behavior. E.g., most people's (notional) utility is presumably strictly decreasing in the number of times they're hit with a baseball bat, and non-monotonic in the amount of salt on their food.

Comment author: 11 August 2009 10:05:13PM *  1 point [-]

Sorry for coming late to this party. ;)

Much of this discussion seems to me to rest on a similar confusion to that evidenced in "Expectation maximization implies average utilitarianism".

As I just pointed out again, the vNM axioms merely imply that "rational" decisions can be represented as maximising the expectation of some function mapping world histories into the reals. This function is conventionally called a utility function. In this sense of "utility function", your preferences over gambles determine your utility (up to an affine transform), so when Omega says "I'll double your utility" this is just a very roundabout (and rather odd) way of saying something like "I will do something sufficiently good that it will induce you to accept my offer".* Given standard assumptions about Omega, this pretty obviously means that you accept the offer.

The confusion seems to arise because there are other mappings from world histories into the reals that are also conventionally called utility functions, but which have nothing in particular to do with the vNM utility function. When we read "I'll double your utility" I think we intuitively parse the phrase as referring to one of these other utility functions, which is when problems start to ensue.

Maximising expected vNM utility is the right thing to do. But "maximise expected vNM utility" is not especially useful advice, because we have no access to our vNM utility function unless we already know our preferences (or can reasonably extrapolate them from preferences we do have access to). Maximising expected utilons is not necessarily the right thing to do. You can maximize any (potentially bounded!) positive monotonic transform of utilons and you'll still be "rational".

* There are sets of "rational" preferences for which such a statement could never be true (your preferences could be represented by a bounded utility function where doubling would go above the bound). If you had such preferences and Omega possessed the usual Omega-properties, then she would never claim to be able to double your utility: ergo the hypothetical implicitly rules out such preferences.

NB: I'm aware that I'm fudging a couple of things here, but they don't affect the point, and unfudging them seemed likely to be more confusing than helpful.

Comment author: 11 August 2009 10:16:47PM *  0 points [-]

so when Omega says "I'll double your utility" this is just a very roundabout (and rather odd) way of saying something like "I will do something sufficiently good that it will induce you to accept my offer"

It's not that easy. As humans are not formally rational, the problem is about whether to bite this particular bullet, showing a form that following the decision procedure could take and asking if it's a good idea to adopt a decision procedure that forces such decisions. If you already accept the decision procedure, of course the problem becomes trivial.

Comment author: 11 August 2009 11:16:53PM *  0 points [-]

Which decision procedure are you talking about? Maximising expected vNM utility and maximizing (e.g.) expected utilons are quite different procedures - which was basically my point.

The former doesn't force such decisions at all. That's precisely why I said that it's not useful advice: all it says is that you should take the gamble if you prefer to take the gamble.* (Moreover, if you did not prefer to take the gamble, the hypothetical doubling of vNM utility could never happen, so the set up already assumes you prefer the gamble. This seems to make the hypothetical not especially useful either.)

On the other hand "maximize expected utilons" does provide concrete advice. It's just that (AFAIK) there's no reason to listen to that advice unless you're risk-neutral over utilons. If you were sufficiently risk averse over utilons then a 50% chance of doubling them might not induce you to take the gamble, and nothing in the vNM axioms would say that you're behaving irrationally. The really interesting question then becomes whether there are other good reasons to have particular risk preferences with respect to utilons, but it's a question I've never heard a particularly good answer to.
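The risk-aversion point is easy to check with a toy sketch (numbers are my own assumptions, using the 5/6-double, 1/6-death odds that appear elsewhere in this thread): two agents, both vNM-rational, one maximizing expected utilons directly and one maximizing the expectation of a bounded concave transform of utilons, disagree about the gamble.

```python
# Toy sketch (assumed numbers, not from the original post): two
# vNM-rational agents face a gamble that doubles their utilons with
# probability 5/6 and kills them (0 utilons) with probability 1/6.

def identity(u):
    # vNM utility = utilons: risk-neutral over utilons
    return u

def bounded_concave(u):
    # vNM utility = u/(u+1): increasing, concave, bounded above by 1
    return u / (u + 1.0)

def prefers_gamble(vnm, utilons, p_win=5.0 / 6.0, u_death=0.0):
    """True if expected vNM utility of the gamble beats the status quo."""
    gamble = p_win * vnm(2 * utilons) + (1 - p_win) * vnm(u_death)
    return gamble > vnm(utilons)

print(prefers_gamble(identity, 10.0))         # True: risk-neutral takes it
print(prefers_gamble(bounded_concave, 10.0))  # False: risk-averse declines
```

Both agents satisfy the vNM axioms; they differ only in what they do with utilons, which is the sense in which "maximize expected utilons" is substantive advice while "maximize expected vNM utility" is not.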

* At least provided doing so would not result in an inconsistency in your preferences. [ETA: Actually, if your preferences are inconsistent, then they won't have a vNM utility representation, and Omega's claim that she will double your vNM utility can't actually mean anything. The set-up therefore seems to imply that your preferences are necessarily consistent. There sure seem to be a lot of surreptitious assumptions built in here!]

Comment author: 12 August 2009 02:10:42PM 0 points [-]

Which decision procedure are you talking about? Maximising expected vNM utility and maximizing (e.g.) expected utilons are quite different procedures - which was basically my point.

[...] you should take the gamble if you prefer to take the gamble

The "prefer" here isn't immediate. People have (internal) arguments about what should be done in what situations precisely because they don't know what they really prefer. There is an easy answer, to go with the whim, but that's not the preference people care about, and so we deliberate.

When all confusion is defeated and the preference is laid out explicitly, as a decision procedure that just crunches numbers and produces a decision that is, by construction, exactly the most preferable action, there is nothing to argue about. Argument is not a part of this form of decision procedure.

In real life, argument is an important part of any decision procedure, and it is the means by which we could select a decision procedure that doesn't involve argument. You look at the possible solutions produced by many tools, and judge which of them to implement. This makes the decision procedure different from the first kind.

One of the tools you consider may be a "utility maximization" thingy. You can't say that it's by definition the right decision procedure, as first you have to accept it as such through argument. And this applies not only to the particular choice of prior and utility, but also to the algorithm itself, to the possibility of representing your true preference in this form.

The "utilons" of the post linked above look different from the vN-M expected utility because their discussion involved argument, informal steps. This doesn't preclude the topic the argument is about, the "utilons", from being exactly the same (expected) utility values, approximated to suit more informal discussion. The difference is that the informal part of decision-making is considered as part of decision procedure in that post, unlike what happens with the formal tool itself (that is discussed there informally).

By considering the double-my-utility thought experiment, the following question can be considered: assuming that the best possible utility+prior are chosen within the expected utility maximization framework, do the decisions generated by the resulting procedure look satisfactory? That is, is this form of decision procedure adequate, as an ultimate solution, for all situations? The answer can be "no", which would mean that expected utility maximization isn't the way to go, or that you'd need to apply it differently to the problem.

Comment author: 12 August 2009 04:02:27PM *  0 points [-]

I'm struggling to figure out whether we're actually disagreeing about anything here, and if so, what it is. I agree with most of what you've said, but can't quite see how it connects to the point I'm trying to make. It seems like we're somehow managing to talk past each other, but unfortunately I can't tell whether I'm missing your point, you're missing mine, or something else entirely. Let's try again... let me know if/when you think I'm going off the rails here.

If I understand you correctly, you want to evaluate a particular decision procedure "maximize expected utility" (MEU) by seeing whether the results it gives in this situation seem correct. (Is that right?)

My point was that the result given by MEU, and the evidence that this can provide, both depend crucially on what you mean by utility.

One possibility is that by utility, you mean vNM utility. In this case, MEU clearly says you should accept the offer. As a result, it's tempting to say that if you think accepting the offer would be a bad idea, then this provides evidence against MEU (or equivalently, since the vNM axioms imply MEU, that you think it's ok to violate the vNM axioms). The problem is that if you violate the vNM axioms, your choices will have no vNM utility representation, and Omega couldn't possibly promise to double your vNM utility, because there's no such thing. So for the hypothetical to make sense at all, we have to assume that your preferences conform to the vNM axioms. Moreover, because the vNM axioms necessarily imply MEU, the hypothetical also assumes MEU, and it therefore can't provide evidence either for or against it.*

If the hypothetical is going to be useful, then utility needs to mean something other than vNM utility. It could mean hedons, it could mean valutilons,** it could mean something else. I do think that responses to the hypothetical in these cases can provide useful evidence about the value of decision procedures such as "maximize expected hedons" (MEH) or "maximize expected valutilons" (MEV). My point on this score was simply that there is no particular reason to think that either MEH or MEV were likely to be an optimal decision procedure to begin with. They're certainly not implied by the vNM axioms, which require only that you should maximise the expectation of some (positive) monotonic transform of hedons or valutilons or whatever.*** [ETA: As a specific example, if you decide to maximize the expectation of a bounded concave function of hedons/valutilons, then even if hedons/valutilons are unbounded, you'll at some point stop taking bets to double your hedons/valutilons, but still be an expected vNM utility maximizer.]

Does that make sense?

* This also means that if you think MEU gives the "wrong" answer in this case, you've gotten confused somewhere - most likely about what it means to double vNM utility.

** I define these here as the output of a function that maps a specific, certain, world history (no gambles!) into the reals according to how well that particular world history measures up against my values. (Apologies for the proliferation of terminology - I'm trying to guard against the possibility that we're using "utilons" to mean different things without inadvertently ending up in a messy definitional argument. ;))

*** A corollary of this is that rejecting MEH or MEV does not constitute evidence against the vNM axioms.

Comment author: 12 August 2009 04:38:33PM 0 points [-]

You are placing on a test the following well-defined tool: expected utility maximizer with a prior and "utility" function, that evaluates the events on the world. By "utility" function here I mean just some function, so you can drop the word "utility". Even if people can't represent their preference as expected some-function maximization, such tool could still be constructed. The question is whether such a tool can be made that always agrees with human preference.

An easy question is what happens when you use "hedons" or something else equally inadequate in the role of utility function: the tool starts to make decisions with which we disagree. Case closed. But maybe there are other settings under which the tool is in perfect agreement with human judgment (after reflection).

The utility-doubling thought experiment compares what is better according to the judgment of the tool (to take the card) with what is better according to the judgment of a person (maybe not take the card). As the tool's decision in this thought experiment is made invariant under the tool's settings ("utility" and prior), showing that the tool's decision is wrong according to a person's preference (after "careful" reflection) proves that there is no way to set up "utility" and prior so that the "utility" maximization tool represents that person's preference.

Comment author: 12 August 2009 06:08:16PM 1 point [-]

As the tool's decision in this thought experiment is made invariant on the tool's settings ("utility" and prior), showing that the tool's decision is wrong according to a person's preference (after "careful" reflection), proves that there is no way to set up "utility"

My argument is that, if Omega is offering to double vNM utility, the set-up of the thought experiment rules out the possibility that the decision could be wrong according to a person's considered preference (because the claim to be doubling vNM utility embodies an assumption about what a person's considered preference is). AFAICT, the thought experiment then amounts to asking: "If I should maximize expected utility, should I maximize expected utility?" Regardless of whether I should actually maximize expected utility or not, the correct answer to this question is still "yes". But the thought experiment is completely uninformative.

Do you understand my argument for this conclusion? (Fourth para of my previous comment.) If you do, can you point out where you think it goes astray? If you don't, could you tell me what part you don't understand so I can try to clarify my thinking?

On the other hand, if Omega is offering to double something other than vNM utility (hedons/valutilons/whatever) then I don't think we have any disagreement. (Do we? Do you disagree with anything I said in para 5 of my previous comment?)

My point is just that the thought experiment is underspecified unless we're clear about what the doubling applies to, and that people sometimes seem to shift back and forth between different meanings.

Comment author: 12 August 2009 06:33:13PM *  1 point [-]

What you just said seems correct.

What was originally at issue is whether we should act in ways that will eventually destroy ourselves.

I think the big-picture conclusion from what you just wrote is that, if we see that we're acting in ways that will probably exterminate life in short order, that doesn't necessarily mean it's the wrong thing to do.

However, in our circumstances, time discounting and "identity discounting" encourage us to start enjoying and dooming ourselves now; whereas it would probably be better to spread life to a few other galaxies first, and then enjoy ourselves.

(I admit that my use of the word "better" is problematic.)

Comment author: 13 August 2009 09:15:03AM 1 point [-]

if we see that we're acting in ways that will probably exterminate life in short order, that doesn't necessarily mean it's the wrong thing to do.

Well, I don't disagree with this, but I would still agree with it if you substituted "right" for "wrong", so it doesn't seem like much of a conclusion. ;)

Comment author: 13 August 2009 07:38:49AM *  0 points [-]

You argue that the thought experiment is trivial and doesn't solve any problems. In my comments above I described a specific setup that shows how to use (interpret) the thought experiment to potentially obtain non-trivial results.

Comment author: 13 August 2009 08:52:01AM *  0 points [-]

I argue that the thought experiment is ambiguous, and that for a certain definition of utility (vNM utility), it is trivial and doesn't solve any problems. For this definition of utility I argue that your example doesn't work. You do not appear to have engaged with this argument, despite repeated requests to point out either where it goes wrong, or where it is unclear. If it goes wrong, I want to know why, but this conversation isn't really helping.

For other definitions of utility, I do not, and have never claimed that the thought experiment is trivial. In fact, I think it is very interesting.

Comment author: 06 August 2009 11:36:13PM *  1 point [-]

Assuming the utility increase holds my remaining lifespan constant, I'd draw a card every few years (if allowed). I don't claim to maximize "expected integral of happiness over time" by doing so (substitute utility for happiness if you like; but perhaps utility should be forward-looking and include expected happiness over time as just one of my values?). Of course, by supposing my utility can be doubled, I'll never be fully satisfied.

Comment author: 07 August 2009 07:08:30AM 0 points [-]

The "justified expectation of pleasant surprises", as someone or other said.

Comment author: [deleted] 06 August 2009 10:48:13PM 1 point [-]

It seems like you are assuming that the only effect of dying is that it brings your utility to 0. I agree that after you are dead your utility is 0, but before you are dead you have to die, and I think that is a strongly negative utility event. When I picture my utility playing this game, I think that if I start with X, then I draw a star and have 2X. Then I draw a skull, I look at the skull, my utility drops to -10000X as I shit my pants and beg Omega to let me live, and then he kills me and my utility is 0.

I don't know how much sense that makes mathematically. But it certainly feels to me like fear of death makes dying a more negative event than just a drop to utility 0.

Comment author: 06 August 2009 11:11:15PM 3 points [-]

The skull cards are electrified, and will kill you instantly and painlessly as soon as you touch them.

(Be careful to touch only the cards you take.)

Comment author: 06 August 2009 09:25:50PM 1 point [-]

I'd wondered why nobody brought up MWI and anthropic probabilities yet.

As for this, it reminds me of a Dutch book argument Eliezer discussed some time ago. His argument was that in cases where some kind of infinity is on the table, aiming to satisfice rather than optimize can be the better strategy.

In my case (assuming I'm quite confident in Many-Worlds), I might decide to take a card or two, go off and enjoy myself for a week, come back and take another card or two, et cetera.

Comment author: 06 August 2009 10:42:46PM 4 points [-]

Many-worlds has nothing to do with the validity of suicidal decisions. If you have an answer that maximizes expected utility but gives almost-certain probability of total failure, you still take it in a deterministic world. There is no magic by which a deterministic world declares that the decision-theoretic calculation is invalid in this particular case, while many-worlds lets it be.

Comment author: 06 August 2009 10:48:13PM *  0 points [-]

I think you're right. Would you agree that this is a problem with following the policy of maximizing expected utility? Or would you keep drawing cards?

Comment author: 06 August 2009 11:13:28PM 5 points [-]

This is a variant on the St. Petersburg paradox, innit? My preferred resolution is to assert that any realizable utility function is bounded.

Comment author: 07 August 2009 12:26:42AM *  1 point [-]

Rejection of mathematical expectation

Various authors, including Jean le Rond d'Alembert and John Maynard Keynes, have rejected maximization of expectation (even of utility) as a proper rule of conduct. Keynes, in particular, insisted that the relative risk of an alternative could be sufficiently high to reject it even were its expectation enormous.

Comment author: 07 August 2009 12:30:06AM 1 point [-]

The page notes the reformulation in terms of utility, which it terms the "super St. Petersburg paradox". (It doesn't have its own section, or I'd have linked directly to that.) I agree that there doesn't seem to be a workable solution -- my last refuge was just destroyed by Vladimir Nesov.

Comment author: 07 August 2009 01:38:39AM 4 points [-]

I agree that there doesn't seem to be a workable solution -- my last refuge was just destroyed by Vladimir Nesov.

I'm afraid I don't understand the difficulty here. Let's assume that Omega can access any point in configuration space and make that the reality. Then either (A) at some point it runs out of things with which to entice you to draw another card, in which case your utility function is bounded, or (B) it never runs out of such things, in which case your utility function is unbounded.

Why is this so paradoxical again?

Comment author: 07 August 2009 09:52:57PM 1 point [-]

After further thought, I see that case (B) can be quite paradoxical. Consider Eliezer's utility function, which is supposedly unbounded as a function of how many years he lives. In other words, Omega can increase Eliezer's utility without bound just by giving him increasingly longer lives. Expected utility maximization then dictates that he keeps drawing cards one after another, even though he knows that by doing so, with probability 1 he won't live to enjoy his rewards.
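The divergence is easy to illustrate with assumed parameters (mine, not from the post): if each draw multiplies utility by m with per-draw survival probability q, and qm > 1, expected utility grows without bound while the probability of being alive to collect it goes to zero.

```python
# Assumed parameters: each survived draw doubles utility (m = 2), each
# draw is survived with probability q = 5/6, and death gives utility 0.
q, m, u0 = 5.0 / 6.0, 2.0, 1.0

for n in (1, 10, 100):
    expected_u = u0 * (q * m) ** n  # E[utility] after n draws
    p_alive = q ** n                # chance of surviving all n draws
    print(n, expected_u, p_alive)
# Since q*m = 5/3 > 1, expected utility explodes while p_alive -> 0.
```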

Comment author: 07 August 2009 10:16:11PM *  4 points [-]

When you go to infinity, you'd need to define additional mathematical structure that answers your question. You can't just conclude that the correct course of action is to keep drawing cards for eternity, doing nothing else. Even if at each moment the right action is to draw one more card, when you consider the overall strategy, the strategy of drawing cards for all time may be a wrong strategy.

For example, consider the following preference on infinite strings. A string has utility 0, unless it has the form 11111.....11112222...., that is, a finite number of 1s followed by an infinite number of 2s, in which case its utility is the number of 1s. Clearly, a string of this form with one more 1 has higher utility than one without, and so a string with one more 1 should be preferred. But a string consisting only of 1s doesn't have the non-zero-utility form, because it lacks the tail of infinitely many 2s. It's a fallacy to follow an incremental argument to infinity. Instead, one must follow a one-step argument that considers the infinite objects as a whole.
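A finite encoding makes the fallacy concrete (the encoding is mine): represent "k 1s followed by 2s forever" by the integer k, and "1s forever" by None.

```python
def string_utility(k):
    """Utility of the infinite string encoded by k.

    k = 3 encodes 1112222..., which has utility 3; k = None encodes
    the all-1s string, which has utility 0 under the stated preference.
    """
    return 0 if k is None else k

# Every incremental step looks like an improvement...
assert all(string_utility(k + 1) > string_utility(k) for k in range(1000))
# ...yet the "limit" of taking every step loses to stopping anywhere:
assert string_utility(None) < string_utility(1)
```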

Comment author: 07 August 2009 09:55:54PM 2 points [-]

Does Omega's utility doubling cover the contents of the as-yet-untouched deck? It seems to me that it'd be pretty spiffy re: my utility function for the deck to have a reduced chance of killing me.

Comment author: 07 August 2009 04:00:32AM 1 point [-]

If it's not paradoxical, how many cards would you draw?

Comment author: 07 August 2009 09:09:59AM 1 point [-]

I guess no more than 10 cards. That's based on not being able to imagine a scenario such that I'd prefer .999 probability of death + .001 probability of scenario to the status quo. But it's just a guess because Omega might have a better imagination than I do, or understand my utility function better than I do.

Comment author: 07 August 2009 02:14:38AM *  0 points [-]

Yeesh. I'm changing my mind again tonight. My only excuse is that I'm sick, so I'm not thinking as straight as I might.

I was originally thinking that Vladimir Nesov's reformulation showed that I would always accept Omega's wager. But now I see that at some point U1+3*(U1-U0) must exceed any upper bound (assuming I survive that long).

Given U1 (utility of refusing initial wager), U0 (utility of death), U_max, and U_n (utility of refusing wager n assuming you survive that long), it might be possible that there is a sequence of wagers that (i) offer positive expected utility at each step; (ii) asymptotically approach the upper bound if you survive; and (iii) have a probability of survival approaching zero. I confess I'm in no state to cope with the math necessary to give such a sequence or disprove its existence.

Comment author: 07 August 2009 03:23:55AM *  1 point [-]

There is no such sequence. Proof:

In order for wager n to have nonnegative expected utility, P(death)*U0 + (1-P(death))*U_(n+1) >= U_n. Equivalently, P(death this time | survived until n) <= (U_(n+1)-U_n) / (U_(n+1)-U0).

Assume the worst case, equality. Then the cumulative probability of survival decreases by exactly the same factor as your utility (conditioned on survival) increases. This is simple multiplication, so it's true of a sequence of borderline wagers too.

With a bounded utility function, the worst sequence of wagers you'll accept in total leaves you a survival probability of at least (U1-U0)/(U_max-U0), i.e. P(death) <= (U_max-U1)/(U_max-U0). Which is exactly what you'd expect.
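The telescoping can be checked numerically. A sketch with an assumed utility sequence (mine): U_n = 2 - 2^-n approaching the bound U_max = 2, with U0 = 0 and a borderline wager at every step.

```python
# Assumed setup: U0 = 0 (death), U_n -> U_max = 2, every wager borderline,
# i.e. P(death | survived so far) = (U_{n+1} - U_n) / (U_{n+1} - U0).
U0 = 0.0
U = [2.0 - 2.0 ** -n for n in range(50)]  # 1, 1.5, 1.75, ... -> 2

p_survive = 1.0
for U_n, U_next in zip(U, U[1:]):
    p_survive *= 1.0 - (U_next - U_n) / (U_next - U0)

# Each factor equals (U_n - U0)/(U_{n+1} - U0), so the product telescopes
# to (U_1 - U0)/(U_N - U0), bounded below by (U_1 - U0)/(U_max - U0) = 0.5.
print(p_survive)
```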

Comment author: 07 August 2009 04:01:27AM 0 points [-]

How would it help if this sequence existed?

Comment author: 07 August 2009 08:37:05PM *  3 points [-]

Why is rejection of mathematical expectation an unworkable solution?

This isn't the only scenario where straight expectation is problematic. Pascal's Mugging, timeless decision theory, and maximization of expected growth rate come to mind. That makes four.

In my opinion, LWers should not give expected utility maximization the same axiomatic status that they award consequentialism. Is this worth a top level post?

Comment author: 11 August 2009 02:48:52AM *  1 point [-]

This is exactly my take on it also.

There is a model which is standard in economics which say "people maximize expected utility; risk averseness arises because utility functions are concave". This has always struck me as extremely fishy, for two reasons: (a) it gives rise to paradoxes like this, and (b) it doesn't at all match what making a choice feels like for me: if someone offers me a risky bet, I feel inclined to reject it because it is risky, not because I have done some extensive integration of my utility function over all possible outcomes. So it seems a much safer assumption to just assume that people's preferences are a function from probability distributions of outcomes, rather than making the more restrictive assumption that that function has to arise as an integral over utilities of individual outcomes.

So why is the "expected utility" model so popular? A couple of months ago I came across a blog-post which provides one clue: it pointed out that standard zero-sum game theory works when players maximize expected utility, but does not work if they have preferences about probability distributions of outcomes (since then introducing mixed strategies won't work).

So an economist who wants to apply game theory will be inclined to assume that actors are maximizing expected utility; but we LWers shouldn't necessarily.

Comment author: 11 August 2009 06:01:53PM 0 points [-]

There is a model which is standard in economics which say "people maximize expected utility; risk averseness arises because utility functions are convex".

Do you mean concave?

A couple of months ago I came across a blog-post which provides one clue: it pointed out that standard zero-sum game theory works when players maximize expected utility, but does not work if they have preferences about probability distributions of outcomes (since then introducing mixed strategies won't work).

Technically speaking, isn't maximizing expected utility a special case of having preferences about probability distributions of outcomes? So maybe you should instead say "does not work elegantly if they have arbitrary preferences about probability distributions."

This is what I tend to do when I'm having conversations in real life; let's see how it works online :-)

Comment author: 07 August 2009 11:09:28PM 0 points [-]

Why is rejection of mathematical expectation an unworkable solution?

Well, rejection's not a solution per se until you pick something justifiable to replace it with.

I'd be interested in a top-level post on the subject.

Comment author: 06 August 2009 11:22:02PM *  0 points [-]

If this condition makes a difference to you, your answer must also be to take as many cards as Omega has to offer.

Comment author: 06 August 2009 11:29:58PM 0 points [-]

I don't follow.

(My assertion implies that Omega cannot double my utility indefinitely, so it's inconsistent with the problem as given.)

Comment author: 06 August 2009 11:35:43PM *  2 points [-]

You'll just have to construct a less convenient possible world where Omega has merely a trillion cards and not an infinite number of them, and answer the question about taking a trillion cards, which, if you accept the lottery all the way, leaves you with 2 to the trillionth power odds of dying. Find my reformulation of the topic problem here.

Comment author: 07 August 2009 12:27:40AM 0 points [-]

Agreed.

Comment author: 07 August 2009 12:24:19AM 0 points [-]

Gotcha. Nice reformulation.

Comment author: 06 August 2009 10:34:32PM *  1 point [-]

His argument was that in cases where some kind of infinity is on the table, aiming to satisfice rather than optimize can be the better strategy.

Can we apply that to decisions about very-long-term-but-not-infinitely-long times and very-small-but-not-infinitely-small risks?

Hmm... it appears not. So I don't think that helps us.

Where did you get the term "satisfice"? I just read that Dutch book post, and while Eliezer points out the flaw in demanding that the Bayesian take the infinite bet, I didn't see the word 'satisficing' in there anywhere.

Comment author: 07 August 2009 03:31:13AM 1 point [-]

Huh, I must have "remembered" that term into the post. What I mean is more succinctly put in this comment.

Can we apply that to decisions about very-long-term-but-not-infinitely-long times and very-small-but-not-infinitely-small risks?

Hmm... it appears not. So I don't think that helps us.

This question still confuses me, though; if it's a reasonable strategy to stop at N in the infinite case, but not a reasonable strategy to stop at N if there are only N^^^N iterations... something about it disturbs me, and I'm not sure that Eliezer's answer is actually a good patch for the St. Petersburg Paradox.

Comment author: 07 August 2009 12:11:21AM 1 point [-]

It's an old AI term meaning roughly "find a solution that isn't (likely) optimal, but good enough for some purpose, without too much effort". It implies that either your computer is too slow for it to be economical to find the true optimum under your models, or that you're too dumb to come up with the right models, thus the popularity of the idea in AI research.

You can be impressed if someone starts with a criterion for what "good enough" means, and then comes up with a method they can prove meets that criterion. Otherwise it's spin.

Comment author: 07 August 2009 04:50:32AM 0 points [-]

I'm more used to it as a psychology (or behavioral econ) term for a specific, psychologically realistic, form of bounded rationality. In particular, I'm used to it being negative! (that is, a heuristic which often produces a bias)

Comment author: [deleted] 29 May 2012 03:49:39PM *  0 points [-]

But even if you don't believe in many worlds, I think you do the same thing, unless you are not maximizing expected utility. (Unless chance is quantized so that there is a minimum possible probability. I don't think that would help much anyway.)

Or unless your utility function is bounded above, and the utility you assign to the status quo is more than the average of the utility of dying straight away and the upper bound of your utility function, in which case Omega couldn't possibly double your utility. (Indeed, I can't think of any X right now such that I'd prefer {50% X, 10% I die right now, 40% business as usual} to {100% business as usual}.)

Comment author: [deleted] 07 August 2009 02:26:17AM 0 points [-]

If I draw cards until I die, my expected utility is positive infinity. Though I will almost surely die and end up with utility 0, it is logically possible that I will never die, and end up with a utility of positive infinity. In this case, 10 + 0·(positive infinity) = positive infinity.

The next paragraph requires that you assume our initial utility is 1.

If you want, warp the problem into an isomorphic problem where the probabilities are different and all utilities are finite. (Isn't it cool how you can do that?) In the original problem, there's always a 5/6 chance of utility doubling and a 1/6 chance of it going to 1/2. (Being dead isn't THAT bad, I guess.) Let's say that where your utility function was U(w), it is now f(U(w)), where f(x) = 1 - 1/(2 + log_2 x). In this case, the utilities 1/2, 1, 2, 4, 8, 16, . . . become 0, 1/2, 2/3, 3/4, 4/5, 5/6, . . . . So, your initial utility is 1/2, and Omega will either lower your utility to 0 or raise it by applying the function U' = 1/(2 - U). Your expected utility after drawing once was previously (5/6)(2U) + (1/6)(1/2) = (5/3)U + 1/12; it's now... okay, my math-stamina has run out. But if you calculate expected utility, and then calculate the probability that results in that expected utility, I'm betting that you'll end up with a 1/2 probability of *ever* dying.

(The above paragraph in a nutshell: any universe can be interpreted as one where the probabilities are different and the utility function has been changed to match... often, probably.)
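The transform can be sanity-checked numerically (sketch mine; note that the doubling step in the warped scale works out to U' = 1/(2 - U)):

```python
import math

# f warps the original utilities into [0, 1): f(x) = 1 - 1/(2 + log2(x))
def f(x):
    return 1.0 - 1.0 / (2.0 + math.log2(x))

# The doubling sequence 1/2, 1, 2, 4, 8, 16 maps to 0, 1/2, 2/3, 3/4, 4/5, 5/6
print([round(f(x), 4) for x in (0.5, 1, 2, 4, 8, 16)])

# In the warped scale, doubling x acts as U' = 1/(2 - U):
for x in (1, 2, 4, 8):
    assert abs(f(2 * x) - 1.0 / (2.0 - f(x))) < 1e-12
```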

Comment author: 07 August 2009 07:53:49AM 0 points [-]

I don't believe in quantifiable utility (and thus not in doubled utility), so I take no cards. But yeah, that looks like a way to make "utilitarian" equivalent to "suicidal".

Comment author: 06 August 2009 10:56:07PM 0 points [-]

This is completely off topic (and maybe I'm just not getting the joke) but does Many Worlds necessarily imply many human worlds? Star Trek tropes aside, I was under the impression that Many Worlds only mattered to gluons and Schrödinger's Cat - that us macro creatures are pretty much screwed.

...

You were joking, weren't you? I like jokes.

Comment author: 06 August 2009 11:14:01PM 0 points [-]

"Many worlds" here is shorthand for "every time some event happens that has more than one possible outcome, for every possible outcome, there is (or comes into being) a world in which that was the outcome."

As far as the truth or falsity of Many Worlds mattering to us - I don't think it can matter, if you maximize expected utility (over the many worlds).

Comment author: 08 August 2009 11:19:00AM 2 points [-]

That is not what Many Worlds says. It is only about quantum outcomes, not "possible" outcomes.

Comment author: 06 August 2009 09:19:31PM *  5 points [-]

And even if you somehow worked around all these arguments, evolution, again, thwarts you. Even if you don't agree that rational agents are selfish, your unselfish agents will be out-competed by selfish agents. The claim that rational agents are not selfish implies that rational agents are unfit.

This is not how evolution works. Evolution cares about how many of your offspring survive. Selfishness need not be conducive to this. Also, evolution can't really thwart you. You're done evolving; you can check it off your to-do list.

It's entirely plausible that being unselfish is adaptive; from a personal (non-gene, i.e. the perspective we actually have) perspective, having children is extremely unselfish.

Selfishness and unselfishness are arational. Rationality is about maximizing the output of your utility function (in this context). Selfishness is about what that utility function actually is.

Comment author: 06 August 2009 10:19:49PM *  0 points [-]

Evolution cares about how many of your offspring survive. Selfishness need not be conducive to this.

Selection, acting on the individual, selects for those individuals who act in ways that cause their own offspring to survive more. That is what I mean by selfishness. Selfish genes. Selfish memes.

Once people no longer die, selection will not have so much to do with death and reproduction, but with the accumulation of resources. Think about that, and it will become more clear that that will select directly for selfishness in the conventional sense.

Comment author: 06 August 2009 10:50:49PM 1 point [-]

Honestly, isn't this nitpicking? It's true that Lord Azathoth stopped selecting for genes in our species ten thousand years ago, but when that game stopped working for him he switched to making our memes compete against each other (in any sane world we'd be having this conversation in Chinese, and my mother's 'Scottish' surname wouldn't be Nordic).

You're absolutely right, and he did simplify this portion, but it doesn't undermine the weight of his argument any more than my saying "I'm not sexist, I'm a fully evolved male!" is rendered irrelevant by the fact that current social mores have little to nothing to do with evolutionary biology.

It's one thing to correct Phil's statement, or offer a suggested rewording that would improve the strength of the point he was trying to make, but it feels as if you're pinpointing this one poor choice of wording, and using it to imply that the entire premise is flawed.

Comment author: 06 August 2009 11:29:43PM 1 point [-]

as if you're pinpointing this one poor choice of wording, and using it to imply that the entire premise is flawed.

Argumentum ad evolutionum is both common enough and horribly wrong enough that I would not call it "nitpicking." The claim that unselfish agents will be outcompeted by selfish agents is complex, context-dependent, and requires support. The idea that there will somehow be an equilibrium in which unselfish agents get crowded out seems absurd, and this is what "evolution" seems intended to evoke, because evolution is (in significant part) about competitively crowding out the sub-optimal.

He also makes a much bigger mistake, and I should have addressed that in greater detail. Utility curves are arational, and the term "selfish" gets confused way more than it should. It seems clear from context that he means it hedonistically, i.e. my own hedonistic experience is my only concern if I'm selfish; I don't care about what other people want or think. If my actual utility curve involves other people's utility, or it involves maximizing the number of paper clips in existence, there is absolutely no reason to believe I could better accomplish my goals if I were "selfish" by this definition.

Utility curves are strictly arational. A rational paperclip maximizer is an entirely possible being. Any statement of the kind "Rational agents are/are not selfish" is a type error; selfishness is entirely orthogonal to rationality.

Comment author: 07 August 2009 01:03:54AM *  1 point [-]

It seems clear from context that he means it hedonistically, i.e. my own hedonistic experience is my only concern if I'm selfish; I don't care about what other people want or think.

Instead of trying to interpret the context, you should believe that I mean what I say literally. I repeat:

If you still think that you wouldn't, it's probably because you're thinking a 1% increase in your utility means something like a 1% increase in the pleasure you experience. It doesn't. It's a 1% increase in your utility. If you factor the rest of your universe into your utility function, then it's already in there.

In fact, I have already explained my usage of the word "selfish" to you in this same context, repeatedly, in a different post.

Psychohistorian wrote:

Utility curves are strictly arational. A rational paperclip maximizer is an entirely possible being. Any statement of the kind "Rational agents are/are not selfish" is a type error; selfishness is entirely orthogonal to rationality.

I quote myself again:

If you act in the interest of others because it's in your self-interest, you're selfish. Rational "agents" are "selfish", by definition, because they try to maximize their utility functions. An "unselfish" agent would be one trying to also maximize someone else's utility function. That agent would either not be "rational", because it was not maximizing its utility function; or it would not be an "agent", because agenthood is found at the level of the utility function.

Comment author: 07 August 2009 03:19:29AM 1 point [-]

Of course, you have already shown that you choose to pretend I am using the word "selfish" in the colloquial sense which I have repeatedly explicitly said is not the sense I am using it in, in this post and in others, so this isn't going to help.

If it isn't working, why don't you try something different?

Comment author: 07 August 2009 03:35:18AM 0 points [-]

(I deleted that paragraph.)

Do you have an idea for something else to try?

Comment author: 07 August 2009 08:57:47AM 1 point [-]

I don't think it's really a necessary distinction; the idea of an unselfish utility maximizer doesn't quite make sense, because utility is defined so nebulously that pretty much everyone has to seek maximizing their utility.

Comment author: 07 August 2009 02:45:21PM 0 points [-]

the idea of an unselfish utility maximizer doesn't quite make sense

You're right that it doesn't make sense, which is why some people assume I mean something else when I say "selfish". But a lot of commenters do seem to believe in unselfish utility maximizers, which is why I keep using the word.

Comment author: 07 August 2009 03:10:46PM 0 points [-]

Avoiding morally charged words. If possible shy far far away from ANY pattern that people can automatically match against with system 2 so that system 1 stays engaged.
My article here http://www.forbes.com/2009/06/22/singularity-robots-computers-opinions-contributors-artificial-intelligence-09_land.html is an attempt to do this.

Comment author: 07 August 2009 05:37:14PM *  1 point [-]

If possible shy far far away from ANY pattern that people can automatically match against with system 2 so that system 1 stays engaged.

Do you mean "system 1 ... system 2"?

Comment author: 07 August 2009 02:43:44AM 1 point [-]

Rational agents incorporate the benefits to others into their utility functions.

as a section header may have thrown me off there.

That aside, I do understand what you're saying, and I did notice the original contrast between the 1%/1%. Though I'd note it doesn't follow that a rational agent would be willing to take a 1% chance of destroying the universe in exchange for a 1% increase in his utility function; the universe being destroyed would probably output a negative, i.e. greater than 100% loss, so that's not an even bet.

The whole arational point is my mistake; the whole paragraph:

But maybe they're just not as rational as you...

reads very much like it is using selfish in the strict rather than holistic utility sense, and that was what I was focusing on in this response. I was focusing specifically on that section and did not reread the whole post, so I got the wrong idea. My point on evolution remains, and the negative-utility argument still makes the +1% for 1% chance of destruction argument fail. But this doesn't matter much, since one can hardly suppose all agents in charge of making such decisions will be perfectly rational.

Comment author: 07 August 2009 03:42:04AM *  3 points [-]

and the negative-utility argument still makes the +1% for 1% chance of destruction argument fail

That's why what I wrote in that section was:

it's not possible that you would not accept a .999% risk, unless you are not maximizing expected value, or you assign the null state after universe-destruction negative utility.

You wrote:

But this doesn't matter much, since one can hardly suppose all agents in charge of making such decisions will be perfectly rational.

I am supposing that. That's why it's in the title of the post. I don't mean that I am certain that is how things will turn out to be. I mean that this post says that rational behavior leads to these consequences. If that means that the only way to avoid the destruction of life is to cultivate a particular bias, then that's the implication.
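The expected-value comparison behind the quoted claim can be sketched in a few lines. (This is my own illustration, assuming the post-destruction null state has utility exactly zero, which is just what the quote stipulates.)

```python
# Sketch of the expected-value comparison (my illustration; assumes the
# null state after universe-destruction has utility exactly zero).
def accept_gamble(U, r, gain=1.01):
    """Accept a gamble that multiplies utility U by `gain` with
    probability (1 - r), and leaves zero utility with probability r."""
    return (1 - r) * gain * U > U

# Breakeven risk for a 1% gain: r* = 1 - 1/1.01, about 0.99%.
breakeven = 1 - 1 / 1.01
print(round(breakeven, 6))        # 0.009901
print(accept_gamble(1.0, 0.005))  # True: a 0.5% risk is worth the 1% gain
print(accept_gamble(1.0, 0.02))   # False: a 2% risk is not
```

Assigning the post-destruction state any negative utility pushes the breakeven risk below this figure, which is the escape hatch the quote names.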

Comment author: 09 August 2009 07:32:13PM *  2 points [-]

If you don't believe black holes can ever be used as weapons, here's an article about a star 8000 light years away that some astronomers worry may harm life on Earth (to what extent it doesn't say).

Comment author: 06 August 2009 09:55:00PM *  2 points [-]

But deciding to use utility functions that will [...] seems to be rational.

You can't decide your utility function. It's a given. You can only make decisions based on preference that, probably, can be represented in part as a utility function. Deciding to use a particular utility function (that doesn't happen to be exactly the one representing your own preference) constitutes throwing away your humanity and replacing it with whatever the new utility function says.

Comment author: 06 August 2009 11:00:20PM *  0 points [-]

Comment author: 06 August 2009 09:17:59PM 3 points [-]

What are the existential risks for a multi-galaxy super-civilization? Or even a multi-stellar civilization expanding outward at some fraction of light speed? I don't see how life can be exterminated once it has spread that far. "liberate much of the energy in the black hole at the center of our galaxy in a giant explosion" does not make sense, since a black hole is not considered a store of energy that can be liberated.

If you are speculating about new physics that haven't been discovered yet, then "subjective-time exponential" and risk per century seems irrelevant (we can just assume that all of physics will be discovered sooner or later), and a more pertinent question might be how much of physics are as yet undiscovered, and what is the likelihood that some new physics will allow a galaxy/universe killer to be built.

I argue that the amount of physics left to be discovered is finite, and therefore the likelihood that a galaxy/universe killer can be built in the future does not approach arbitrarily close to 1 as time goes to infinity.

Comment author: 29 May 2014 04:59:29AM *  1 point [-]

Speaking of new physics, there was the discovery that stars are other suns rather than tiny holes in the celestial sphere... and in the future there's the possibility of discovering practically attainable interstellar travel. Discoveries in physics can have different effects.

And if we're to talk of limitless new and amazing physics, there may be superbombs, and there may be infinite subjective time within a finite volume of spacetime, or something of that sort.

Comment author: 06 August 2009 10:25:54PM *  -1 points [-]

I don't see how life can be exterminated once it has spread that far.

You may be right. It takes a long time to become a multi-galaxy super-civilization. IIRC our galaxy is 100,000 light-years across, and the nearest galaxy is about 2 million light-years away. We might make it in time. It depends a lot on how far time-compression goes, and on how correlated apocalypses are.
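For concreteness, the post's once-a-century gamble has a closed-form life expectancy. (A sketch I'm adding: the geometric-distribution sum works out to 1/p centuries.)

```python
# Expected survival time under the post's model (my illustration):
# once a century, some party takes a gamble with probability p of
# exterminating life. The expected number of centuries survived,
# the sum over n of n * p * (1-p)**(n-1), is exactly 1/p.
p = 3e-6                      # the Trinity-era 3-in-a-million estimate
expected_centuries = 1 / p
expected_years = 100 * expected_centuries
print(f"{expected_years:,.0f}")   # roughly the post's ~33-million-year figure
```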

"liberate much of the energy in the black hole at the center of our galaxy in a giant explosion" does not make sense, since a black hole is not considered a store of energy that can be liberated.

I argue that the amount of physics left to be discovered is finite, and therefore the likelihood that a galaxy/universe killer can be built in the future does not approach arbitrarily close to 1 as time goes to infinity.

That's my hope as well.

Comment author: 06 August 2009 11:42:19PM 2 points [-]

None of the results indicate a possibility that the "energy in the black hole at the center of our galaxy" can be liberated in a giant explosion.

The first result is a 1974 paper by Stephen Hawking predicting that black holes emit black-body radiation at a temperature inversely proportional to their mass. For large black holes this temperature is close to absolute zero, making them more useful as entropy dumps than energy sources.

On the other hand, if you could simultaneously convert a lot of ordinary matter into numerous tiny black holes, they would all instantly evaporate and have the effect of a single great explosion, so that's one risk to be worried about.
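The temperature claims above can be checked against Hawking's formula, T = ħc³/(8πGMk_B). (A sketch I'm adding; the constants are CODATA values and the masses are illustrative.)

```python
# Hawking temperature of a black hole, inversely proportional to mass.
from math import pi

hbar = 1.054571817e-34   # J*s
c = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 kg^-1 s^-2
k_B = 1.380649e-23       # J/K
M_sun = 1.98892e30       # kg

def hawking_temperature(mass_kg):
    """Black-body temperature of a black hole of the given mass (K)."""
    return hbar * c**3 / (8 * pi * G * mass_kg * k_B)

# A ~4 million solar-mass hole like the one at the galactic center is far
# colder than the cosmic microwave background: an entropy dump, not a bomb.
print(hawking_temperature(4e6 * M_sun))   # ~1.5e-14 K
# A 1 kg hole, by contrast, is absurdly hot and evaporates almost at once.
print(hawking_temperature(1.0))           # ~1.2e23 K
```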

Comment author: 07 August 2009 12:09:06AM *  0 points [-]

None of the results indicate a possibility that the "energy in the black hole at the center of our galaxy" can be liberated in a giant explosion.

You're right about that. But they do indicate that the energy in smaller black holes can be liberated in giant explosions. And they indicate that black holes could be used as energy sources. So when you said, "a black hole is not considered a store of energy that can be liberated," that was wrong; or at least it was wrong if you meant "a black hole is not considered a store of energy." And that was what I said was wrong.

Comment author: 06 August 2009 11:45:07PM 0 points [-]

Comment author: 06 August 2009 11:47:32PM 0 points [-]

I asked for a list of possible risks, and nobody has given any other answer...

Comment author: 06 August 2009 11:48:52PM 1 point [-]

Still, the question of whether one particular risk is real has almost no bearing on the total existential risk.

Comment author: 07 August 2009 12:55:50AM 1 point [-]

That's only true if there are lots of different existential risks besides this particular one. The fact that no one has answered my question with a list of such risks seems to argue against that. I also argued earlier that the amount of physics left to be discovered is finite, so the number of such risks is finite.

More generally, I guess it boils down to cognitive strategies. I like to start from specific examples, build intuitions, find similarities, then proceed to generalize. I program like this too. If I have to write two procedures that I know will end up sharing a lot of code, I will write one complete procedure first, then factor out the common code as I write the second one, instead of writing the common function first. I suppose this seems like a waste of time to someone used to working directly on the general/abstract issue.

Comment author: 07 August 2009 01:25:14AM 0 points [-]

Well, you know my specific example of a risk. Even if you know all about physics, that is the rules of the game, you can still lose to an opponent that can figure out a winning strategy.

Examples are good when you can confidently say something about them, and their very existence was in question. But there are so many ways to sidestep mere physical threat that it doesn't seem a good choice. An explosion is just something that happens to the local region, in a lawful physical way. You could cook some dynamic redundancy to preserve computation in case of an explosion.

Comment author: 06 August 2009 05:27:28PM 3 points [-]

if you met the girl of your dreams and she loved you

cough

Comment author: 06 August 2009 06:17:05PM 2 points [-]

Comment author: 07 August 2009 03:12:53PM 2 points [-]

What's that Precious? If you found The Precious and loved it. Much tastier.

Comment author: 06 August 2009 07:52:18PM *  2 points [-]

Incidentally, I once met Brian Moriarty at a party John Romero threw where I embarrassed myself in front of or offended all of my childhood heroes, one after another, ending with Steve Wozniak.

I was talking to him about trends in text adventures, and said, "One great thing is that IF authors have gotten away from the idea that every game has to be about saving the world."

He said something like, "Well, I happen to think that saving the world is not such a bad thing," and went off in a bit of a huff. And then I remembered that he was the author of Trinity, which was about the Trinity test and saving the world from nuclear holocaust. (And was a really good game, BTW.)

Comment author: [deleted] 29 May 2012 03:54:19PM 1 point [-]

The loss to them if they ignited the atmosphere would be another 30 or so years of life. The loss to them if they lost the war and/or were killed by their enemies would also be another 30 or so years of life.

This assumes they don't care about their children and grandchildren after their death.

Comment author: 29 May 2012 04:09:18PM 1 point [-]

...and the children and grandchildren of their enemies, and the millions of currently-living people not involved in the war, and etc.

Comment author: 07 August 2009 02:58:06AM *  1 point [-]

This whole point may reflect collective confusion surrounding the term "utility."

I do not presently have a coefficient in my utility function attached to John Doe, who is a mechanic in Des Moines (I'm assuming). I know nothing about him, and whatever happens to him does not affect my experience of happiness in the slightest. I wish him well, but it would make little sense to say he is reflected in my utility function. I would agree that, ceteris paribus, the better off he is, the better, but (particularly since I won't know it), this doesn't really weigh in my experience of life.

On the other hand, if you asked me if I would rather him die if meant I got a thousand dollars, I'd have to turn down the offer. I care about his utility in the abstract, even if it doesn't actually affect my happiness.

There's a relevant distinction between abstract collective utility and personally experienced utility. The human mind is not powerful enough to comprehend true, complete, abstract utility, and if it were, you'd probably become terminally depressed. One can believe in the importance of maximizing abstract utility while not actually experiencing it. When Omega offers to double our utility, we think that means something we experience, and we don't experience the abstract utility of the entire planet. I believe that this distinction gets confused, leading to this post feeling contrary to intuition.

On which note, we really don't know what total utility looks like - it's too complex. So the "world gets destroyed or total utility gets doubled" 50-50 bet is not evaluable by the individual, because we don't know how to evaluate the disutility of the world being destroyed, other than that we'd rather not risk it.

This is all made that much more painful by the fact that reason alone cannot say which is preferable, the scratching of my finger or the destruction of the world. I think Hume may have beaten us to this.

Comment author: 07 August 2009 03:38:58AM 0 points [-]

When Omega offers to double our utility, we think that means something we experience, and we don't experience the abstract utility of the entire planet. I believe that this distinction gets confused, leading to this post feeling contrary to intuition.

Actually, if you think of the paradox below in these terms, as being one where you're offered vague, unmeasurable rewards, it ceases to be a paradox. It's only a paradox because we've abstracted those confusing issues away.

It is a puzzle that is meant to get at the question of whether our mathematical models of rationality are correct. If you're not talking about mathematical models, you're having a different conversation.

Comment author: 06 August 2009 09:15:36PM *  1 point [-]

You can put monetary value on humanity, just as you can on a person's life.

Comment author: 06 August 2009 09:24:47PM 1 point [-]

If humanity goes away, who will collect the savings?

Comment author: 06 August 2009 10:17:46PM 3 points [-]

I see what he's saying, but there's something wrong with it. If you put a monetary value on a life, it means that you could increase utility by trading that life for more than that much money, because you could do things with that money that would increase other people's utility enough to make up for the life. But once you've traded the last life away, you can't use the money.

Comment author: 06 August 2009 10:59:53PM 1 point [-]

Robert A. Heinlein: An extinct species has no moral behaviour. ~An address to a West Point graduating class

Comment author: [deleted] 06 August 2009 05:46:38PM 1 point [-]

On "incorporating the benefits to others into their utility functions", you hint at a sharp dichotomy between Scrooges and Saints - people who act entirely in their own self-interest, and people who act in everyone's interest (because that is the nature of their own self-interest). But most humans are not at these poles - most of us act in the interest of at least several people. Mirroring (understanding) is partially a learned trait, but actually caring about other people who you mirror is emotionally "basic". By this I mean it's entirely in our self-interest to act in the interest of some others. That was to partially address your "unselfish agents will be out-competed by selfish agents" claim. False dichotomy.

You haven't given a convincing argument that people will stop having Life-And-Death conflicts.

On the LHC, it sounds like you're arguing for a more precautionary approach to science. What would be acceptable conditions in your book for turning on the LHC? (Also, that particular experiment received so much publicity, the numbers were undoubtedly checked hundreds of times).

(I've got to get to work, and I'm sure other posters will address the relevant points and problems before I get a chance.)

Comment author: 06 August 2009 06:14:18PM *  -1 points [-]

By this I mean it's entirely in our self-interest to act in the interest of some others. That was to partially address your "unselfish agents will be out-competed by selfish agents" claim. False dichotomy.

It's not a false dichotomy. If you act in the interest of others because it's in your self-interest, you're selfish. Rational "agents" are "selfish", by definition, because they try to maximize their utility functions. An "unselfish" agent would be one trying to also maximize someone else's utility function. That agent would either not be "rational", because it was not maximizing its utility function; or it would not be an "agent", because agenthood is found at the level of the utility function. I tried to make this point in another thread, and lost like 20 karma points doing so. But it's still right. I request anyone down-voting this comment to provide some alternative interpretation under which a rational agent is not selfish.

EDIT: A great example of what I mean by "agenthood is found at the level of the utility function" is that you shouldn't consider an ant an agent.

The whole point of the essay is to try to find some way for it to be in everyone's self-interest to act in ways that will prevent us from taking small risks of exterminating life. And I failed to find any such way. So you see, the entire essay is predicated on the point that you're making.

You haven't given a convincing argument that people will stop having Life-And-Death conflicts.

Do you mean, I haven't given a convincing argument that people will not stop having Life-And-Death conflicts?

On the LHC, it sounds like you're arguing for a more precautionary approach to science.

Not actually. The next hundred thousand years are a special case.

Comment author: [deleted] 06 August 2009 09:15:16PM 1 point [-]

I think we're agreeing on the first point - any rational agent is selfish. But then there's no such thing as an unselfish agent, right? Also, no need to use the term selfish, if it's implicit in rational agent. If unselfish agents don't exist, it's easy to out-compete them!

"trying to also maximize someone else's utility function... would not be an 'agent', because agenthood is found at the level of the utility function." What do you mean by this? I read this as saying that a utility function which is directly dependent on another's utility is not a utility function. In other words, anyone who cares about another, and takes direct pleasure from another's wellbeing, is not an agent. If that's what you mean, then most humans aren't agents. Otherwise, I'm not understanding.

On Life-And-Death conflicts, yes, that's what I meant. You haven't given any such argument!

On the LHC, why are the next hundred thousand years a special case? And again, under what conditions should the LHC run?

Comment author: 06 August 2009 10:05:00PM *  1 point [-]

Also, no need to use the term selfish, if it's implicit in rational agent.

Right - now I remember, we've gone over this before. I think it is implicit in rational agent; but a lot of people forget this, as evidenced by the many responses that say something like, "But it's often in an agent's self-interest to act in the interest of others!"

If you think about why they're saying this in protest to my saying that a rational agent is selfish, it can't be because they are legitimately trying to point out that a selfish agent will sometimes act in ways that benefit others. That would be an uninteresting and uninformative point. No, I think the only thing they can mean is that they believe that decision theory is something like the Invisible Hand, and will magically result in an equilibrium where everybody is nice to each other, and so the agents really aren't selfish at all.

So I use the word "selfish" to emphasize that, yes, these agents really pursue their own utility.

Comment author: [deleted] 06 August 2009 10:51:50PM 0 points [-]

(Well, "we" haven't - I'm pretty new on these forums, and missed that disagreement!)

You still haven't addressed any of my complaints with your argument. I never mentioned anything about time-discounting - it looked like you saw your second-to-last proposition to be the only one with merit, so I was totally addressing two that you dismissed.

In my first point, now that we are clear on definitions, I meant that you 1) implied a dichotomy between agents whose utility functions are entirely independent of other people's, and those whose utility functions are very heavily dependent (Scrooges and Saints). You then made the statement "unselfish agents will be out-competed by selfish agents." Since we agree that there's no such thing as an unselfish agent, you probably meant "people who care a lot about everyone will be out-competed by people who care about nobody but themselves" (selfish rational agents with highly dependent vs. highly independent utility functions). This is a false dichotomy because most people don't fall into either extreme, but have a utility function that depends on some others, but not everyone and not to an equal degree.

And my two questions still stand, on conflict and the LHC.

(Interesting post, by the way!)

Comment author: 07 August 2009 04:53:10AM 0 points [-]

you probably meant "people who care a lot about everyone will be out-competed by people who care about nobody but themselves."

No, I didn't mean that. This is, I think, the 5th time I've denied saying this on Less Wrong. I've got to find a way of saying this more clearly. I was arguing against people who think that rational agents are not "selfish" in the sense that I've described elsewhere in these comments. If it helps, I'm using the word "selfish" in a way so that an agent could consciously desire strongly to help other people, but still be "selfish".

On life-and-death conflicts, I did give such an argument, but very briefly:

Evolutionary arguments are a more powerful reason to believe that people will continue to have conflicts. Those that avoid conflict will be out-competed by those that do not.

I realize this isn't enough for someone who isn't already familiar with the full argument, but it's after midnight and I'm going to bed.

On the LHC, why are the next hundred thousand years a special case? And again, under what conditions should the LHC run?

The next 100,000 years are a special case because we may learn most of what we will learn over the next billion years in the next 100,000 years. During this period, the risks of something like running the LHC are probably outweighed by how much the knowledge acquired as a result will help us estimate future risks, and figure out a solution to the problem.

Comment author: [deleted] 07 August 2009 05:58:30AM 0 points [-]

My confusion isn't coming from the term selfish, but from the term unselfish agent. You clearly suggested that such a thing exists in the quoted statement, and I have no idea what this creature is.

On life-and-death conflicts, sorry if I'm inquiring on something widely known by everyone else, but I wouldn't mind a link or elaboration if you find the time. I agree that people will have conflicts both as a result of human nature and of finite resources, but I don't see why conflicts must always be deadly.

During this period, the risks of something like running the LHC are probably outweighed by how much the knowledge acquired as a result will help us estimate future risks, and figure out a solution to the problem.

You just said the opposite of what you said in your original post here, that the LHC was turned on for no practical advantage.

Comment author: 07 August 2009 08:34:27PM 0 points [-]

My confusion isn't coming from the term selfish, but from the term unselfish agent. You clearly suggested that such a thing exists in the quoted statement, and I have no idea what this creature is.

I wrote, "Even if you don't agree that rational agents are selfish, your unselfish agents will be out-competed by selfish agents." The "unselfish agent" is a hypothetical that I don't believe in, but that the imaginary person I'm arguing with believes in; and I'm saying, "Even if there were such an agent, it wouldn't be competitive."

My argument was not very clear. I wouldn't worry too much over that point.

You just said the opposite of what you said in your original post here, that the LHC was turned on for no practical advantage.

No; I said, "no practical advantage that I've heard of yet." First, the word "practical" means "put into practice", so that learning more theory doesn't count as practical. Second, "that I've heard of yet" was a qualifier because I suppose that some practical advantage might result from the LHC, but we might not know yet what that will be.

Comment author: 07 August 2009 08:44:23PM *  3 points [-]

If "selfish" (as you use it) is a word that applies to every agent without significant exception, why would you ever need to use the word? Why not just say "agent"? It seems redundant, like saying "warm-blooded mammal" or something.

Comment author: 07 August 2009 08:48:05PM *  1 point [-]

Yes, it's redundant. I explained why I used it nonetheless in the great-great-great-grandparent of the comment you just made. Summary: You might say "warm-blooded mammal" if you were talking with people who believed in cold-blooded mammals.

Comment author: 06 August 2009 10:10:39PM *  0 points [-]

I read this as saying that a utility function which is directly dependent on another's utility is not a utility function.

No; I meant that each agent has a utility function, and tries to maximize that utility function.

If we can find an evolutionarily-stable cognitive makeup for an agent that allows it to have a utility function that weighs the consequences in the distant future equally with the consequences to the present, then we may be saved. In other words, we need to eliminate time-discounting.

One thing I didn't explain clearly, is that it may be that uncertainty alone provides a large enough time-discounting to make universe-death inevitable. Because you're more and more uncertain what the impact of a decision will be the farther you look into the future, you weigh that impact less and less the farther into the future you go.

But maybe this is not inevitably the right thing to do, if you can find a way to predict future impacts that is uncertain, but also unbiased!

EDIT: No. Unbiased doesn't cut it.
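The uncertainty-as-discounting point can be put in numbers. (My illustration, using a hypothetical 1%-per-century chance that a forecast becomes worthless.)

```python
# If each century there is probability q that your forecast of a
# decision's far-future impact becomes worthless, the expected weight
# placed on century-t consequences decays exponentially, as (1 - q)**t.
def effective_weight(q, t):
    return (1 - q) ** t

for t in (1, 10, 100, 1000):
    print(t, effective_weight(0.01, t))
# Even a mild q = 1% per century leaves millennium-scale consequences
# with well under 0.01% of their face-value weight.
```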

Comment author: 06 August 2009 07:24:37PM -2 points [-]

Re: "If technology keeps changing, it is inevitable that, much of the time, a technology will provide the ability to destroy all life before the counter-technology to defend against it has been developed."

Unsupported hypothesis. As life spreads out in the universe, it gets harder and harder to destroy all of it - while the technology of destruction will stabilise.

Comment author: 06 August 2009 07:41:07PM 2 points [-]

Unsupported hypothesis. As life spreads out in the universe, it gets harder and harder to destroy all of it - while the technology of destruction will stabilise.

Did you read the essay before posting? I have a section on life spreading out in the universe, and a section on whether the technology of destruction can stabilize.

Comment author: 27 May 2014 10:46:04PM *  0 points [-]

I was directed here from FIMFiction.

Because of https://en.wikipedia.org/wiki/Survivorship_bias we really can't know the odds of doing something that ends up wiping out all life on the planet; nothing we have tried thus far has come close, or even really had the capability of doing so. Even global thermonuclear war, terrible as it would be, wouldn't end all life on Earth, and indeed probably wouldn't even end human civilization (though it would be decidedly unpleasant, and hundreds of millions of people would die).

Some people thought that the nuclear bomb would ignite the atmosphere... but a lot of people didn't, and that three-in-a-million chance... I don't even know how they arrived at it, but it sounds like a typical wild guess to me. How would you even arrive at that figure? Indeed, there is good reason to believe that the atmosphere has experienced such events before, in the form of impact events; this is why we knew, for instance, that the LHC was safe - we had already experienced considerably more energetic events. Some people claimed it might destroy the universe, but the odds were actually 0 - it simply lacked the ability to do so, because if it were going to cause a vacuum collapse, the universe would already have been destroyed by such an event elsewhere. Meanwhile, the physics of small black holes means they're not a threat - they would decay almost instantly, and would lack the gravity necessary to cause any real problems.

And thus far, if we actually look at the record, everything we have tried has had p=0 of destroying civilization in reality (that is, the universe we -actually- live in), meaning that p = 3 x 10^-6 was hopelessly pessimistic. Just because someone can assign arbitrary odds to something doesn't mean they're right. In fact, it usually means they're bullshitting.
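For what it's worth, if you do take the original post's numbers at face value (one such gamble per century, each with independent probability p of disaster - both assumptions from the post, not measurements), the implied expected survival time is just the mean of a geometric distribution:

```python
# Mean of a geometric distribution: the expected number of independent
# trials until the first "success" (here, catastrophe) is 1 / p.
p = 3e-6               # the post's per-event chance of disaster
years_per_trial = 100  # the post's assumed cadence: one gamble per century

expected_years = (1 / p) * years_per_trial
survive_1myr = (1 - p) ** 10_000  # chance of surviving 10,000 such gambles

print(f"expected survival: {expected_years:,.0f} years")
print(f"P(surviving a million years): {survive_1myr:.3f}")
```

That gives roughly 33 million years - but, as above, the whole calculation is only as good as the made-up p that goes into it.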

Remember NASA making up its odds of an individual bolt failing as one in 10^8? That's the sort of made-up number we're looking at here.

And that's the sort of made up number I always see in these situations; people simply come up with stuff, then pretend to justify it with math when in reality it is just a guess. Statistics used as a lamppost; for support, not illumination.

And this is the biggest problem with all existential threats - the greatest existential threat to humanity is, in all probability, being smacked by a large meteorite, which is something we KNOW, for certain, happens every once in a while. And if we detected that early enough, we could actually prevent such an event from happening.

Everything else is pretty much entirely made up guesswork, based on faulty assumptions, or very possibly both.

Of the "humans kill us all" scenarios, the most likely is some horrible, highly transmissible, genetically engineered disease deliberately spread by madmen intent on global destruction. Here, there are tons of barriers. The first, and perhaps largest, is that crazy people have trouble doing this sort of thing; it requires a level of organization that tends to be beyond them. Second, it requires knowledge we lack - and which, once we obtain it, may make containing the outbreak of such a disease relatively trivial. You speak of offense being easier than defense, but a lot of technological systems are easier to break than they are to make, and understanding how to make something like this may well require us to understand how to break it in the process (and indeed, may well be derived from our figuring out how to break it). Third, we already have measures which require no technology at all - quarantines - which could stop such a thing from wiping out too many people. Even if you released it in a bunch of places simultaneously, you'd still probably fail to wipe out humanity, just because there are too many people, too spread out, for it to succeed. And fourth, you'd probably need to test it, and that would put you at enormous risk of discovery. I have my doubts about this scenario, but it is by far the likeliest sort of technological disaster.

Of course, if we had sentient non-human intelligences, they'd likely be immune to such nonsense. And given our improvements in automation, controlling plague-swept areas is probably only going to get easier over time; why use soldiers who can potentially get infected when we can patrol with drones?

Comment author: 27 May 2014 10:46:25PM *  1 point [-]

Everything else is way further down the totem pole.

People talk about the grey goo scenario, but I actually think it is quite silly, because there is already grey goo all over the planet in the form of life. There are absolutely enormous numbers of bacteria and viruses and fungi and everything else all around us, and given the enormous evolutionary advantage of being a grey goo, we would expect the entire planet to have been covered in the stuff already - probably repeatedly. The fact that we see so much diversity - the fact that nothing CAN do this, despite enormous evolutionary incentive TO do it - suggests that grey goo scenarios are either impossible or incredibly unlikely. And that's ignoring the thermodynamic issues which would almost certainly prevent such a scenario as well: reshaping arbitrary material into self-replicating material would surely take more energy than is present in the material to begin with.

Physics experiments gone wrong have similar problems - we've seen supernovas. The energy released by a supernova is vastly beyond what any planetary civilization is likely capable of producing, and seeing as supernovas don't destroy everything, it is vastly unlikely that anything WE do will. There are enormously energetic events in the universe, and the universe itself is reasonably stable - it seems unlikely that our feeble, merely planetary energy levels are going to do any better in the "destroy everything" department. And even before that, there was the Big Bang, and the universe came to exist out of that whole mess. We have the Sun, and meteoritic impact events, both very powerful indeed, and yet we don't see exotic, earth-shattering physics coming into play there in unexpected ways. Extremely high energy densities are not likely to propagate - they're likely to dissipate. And we see this in the universe, and in the laws of thermodynamics.

It is very easy to IMAGINE a superweapon that annihilates everything. But actually building one? Having one have realistic physics? That's another matter entirely. Indeed, we have very strong evidence against it: surely, intelligent life has arisen elsewhere in the universe, and we would see galaxies being annihilated by high-end weaponry. We don't see this happening. Thus we can assume with a pretty high level of confidence that such weapons do not exist or cannot be created without an implausible amount of work.

The difficult physics of interstellar travel is not to be denied, either - the best we can do with present physics is nuclear pulse propulsion, which reaches perhaps 10% of c and has enormous logistical issues. Anything FTL requires exotic physics which we have no idea how to create, and which may well describe situations that are not physically attainable - the numbers may work, but there may be no way to get there, much as the math permits speeds faster than c yet you can't ever even REACH c, so the "safe space" on the far side of the equations is meaningless. Without FTL, interstellar travel is far too slow for such disasters to really propagate across the galaxy - any sort of plague would die out on the planet it was created on. Even WITH FTL, it is rather unlikely that you could easily spread something like that; only if cheap FTL travel existed would spreading the plague be all that viable... but with cheap FTL travel, everyone else can flee it that much more easily.

My conclusion from all of this is that these sorts of estimates are less "estimates" and more "wild guesses which we pretend have some meaning, throwing around a lot of fancy math to convince ourselves and others that we have some idea what we're talking about". Estimates like one in three million, or one in ten, are wild overestimates - and indeed, aren't based on logic any more sound than that of the guy on The Daily Show who said that it would either happen or it wouldn't: a 50% chance.

We have extremely strong evidence against galactic and universal annihilation, and there are extremely good reasons to believe that even planetary-level annihilation scenarios are unlikely, due to the sheer amount of energy involved. You're looking at biocides, or large rocks diverted from their orbits to hit planets - neither of which is a trivial thing to do.

It is basically a case of http://tvtropes.org/pmwiki/pmwiki.php/Main/ScifiWritersHaveNoSenseOfScale, except applied in a much more pessimistic manner.

The only really GOOD argument we have for lifetime-limited civilizations is the Fermi Paradox (https://en.wikipedia.org/wiki/Fermi_paradox) - that is to say, where are all the bloody aliens? Unfortunately, the Fermi Paradox is a somewhat weak argument, primarily because we have absolutely no idea which side of the Great Filter we are on. That said, if practical FTL travel exists, I would expect any civilization which invented it simply never to die, because of how easy it would be to spread out, making destroying them all vastly more difficult. The galaxy would probably end up colonized and recolonized regardless of how much people fought against it.

Without FTL travel, galactic colonization is possible, but it may be impractical from an economic standpoint; there is little benefit to the home planet in having additional planets colonized. Information is the only thing you could expect to trade over interstellar distances, and even that is questionable, given that locals will likely try to develop technology locally and beat you to market - so unless habitable systems are very close together, duplication of effort seems extremely likely. Entertainment would thus be the largest benefit: games, novels, movies and suchlike. This MIGHT mean that colonization is unlikely, which would be another explanation... but even then, that assumes they wouldn't want to explore for the sake of doing so.

Of course, it is also possible we're already on the other side of the Great Filter, and the reason we don't see any other intelligent civilizations colonizing our galaxy is because there aren't any, or the ones which have existed destroyed themselves earlier in their history or were incapable of progressing to the level we reached due to lack of intelligence, lack of resources, eternal, unending warfare which prevented progress, or something else.

This is why pushing for a multiplanetary civilization is, I think, a good thing; if we reached the point of having 4-5 extrasolar colonies, I think it would be pretty solid evidence that we are beyond the Great Filter. Given the dearth of evidence for interstellar disasters created by intelligent civilizations, I think our main risk of destroying ourselves lies in the period before we expand.

But I digress.

It isn't impossible that we will destroy ourselves (after all, the Fermi Paradox does offer some weak evidence for it), but I will say that I find any sort of claims of numbers for the likelihood of doing so incredibly suspect, as they are very likely to be made up. And given that we have no evidence of civilizations being capable of generating galaxy-wide disasters, it seems likely that whatever disasters exist are planetary scale at best. And our lack of any sort of plausible scenarios even for that hurts even that argument. The only real evidence we have against our civilization existing indefinitely is the Fermi Paradox, but it has its own flaws. We may destroy ourselves. But until we find other civilizations, you are fooling yourself if you think you aren't just making up numbers. Anything which destroys us outside of an impact event is likely something we cannot predict.

Comment author: 28 May 2014 03:59:55AM 3 points [-]

"People talk about the grey goo scenario, but I actually think that is quite silly because there is already grey goo all over the planet in the form of life" ... "nothing CAN do this, because nothing HAS done it."

The grey goo scenario isn't really very silly. We seem to have had a green goo scenario around 1.5 to 2 billion years ago that killed off many or most critters around, due to the release of deadly deadly oxygen; if the bacterial ecosystem were completely stable against goo scenarios, this wouldn't have happened. We have had mini-goo scenarios when, for example, microbiota well adapted to one species made the jump to another and oops, started reproducing rapidly and killing off their new host species, e.g. Yersinia pestis. Just because we haven't seen a more omnivorous goo sweep over the ecosphere recently - other than Homo sapiens, which is actually a pretty good example of a grey goo: think of the species as a crude mesoscale universal assembler, spreading pretty fast, killing off other species at a good clip, and chewing up resources quite rapidly - doesn't mean it couldn't happen at the microscale too. Ask the anaerobes, if you can find them; they are still hiding pretty well after the chlorophyll incident.

Since the downside is pretty far down, I don't think complacency is called for. A reasonable caution before deploying something that could perhaps eat everyone and everything in sight seems prudent.

Remember that the planet spent almost 4 billion years more or less covered in various kind of goo before the Precambrian Explosion. We know /very little/ of the true history of life in all that time; there could have been many, many, many apocalyptic type scenarios where a new goo was deployed that spread over the planet and ate almost everything, then either died wallowing in its own crapulence or formed the base layer for a new sort of evolution.

Multicellular life could have started to evolve /thousands of times/ only to be wiped out by goo. If multicellulars only rarely got as far as bones or shells, and were more vulnerable to being wiped out by a goo-plosion than single celled critters that could rebuild their population from a few surviving pockets or spores, how would we even know? Maybe it took billions of years for the Great War Of Goo to end in a Great Compromise that allowed mesoscopic life to begin to evolve, maybe there were great distributed networks of bacterial and viral biochemical computing engines that developed intelligence far beyond our own and eventually developed altruism and peace, deciding to let multicellular life develop.

Or we eukaryotes are the stupid runaway "wet" technology grey goo of prior prokaryote/viral intelligent networks, and we /destroyed/ their networks and intelligence with our runaway reproduction. Maybe the reason we don't see disasters like forests and cities dissolving in swarms of Andromeda-Strain like universal gobblers is that safeguards against that were either engineered in, or outlawed, long ago. Or, more conventionally, evolved.

What we /do/ think we know about the history of life is that the Earth evolved single-celled life (or inherited it via panspermia etc.) within about half a billion years of the Earth's coalescence; then some combination of goo more or less ruled the roost on the Earth's surface (as far as biology goes) for over three billion years, especially if you count colonies like stromatolites as gooey. In the middle of this long period was at least one thing that looked like a goo apocalypse, one that remade the Earth profoundly enough that the traces are very obvious (e.g. huge beds of iron ore). But there could have been many more mass extinctions than we know of.

Then, less than a billion years ago, something changed profoundly and multicellulars started to flourish. That era is less than a sixth of the span of life on Earth. So: five sixths goo-dominated world, one sixth non-goo-dominated world - that is the short history here. This does not fill me with confidence that our world is very stable against a new kind of goo based on non-wet, non-biochemical assemblers.

I do think we are pretty likely not to deploy grey goo, though. Not because humans are not idiots - I am an idiot, and it's the kind of mistake I would make, and I'm demonstrably above average by many measures of intelligence. It's just that I think Eliezer and others will deploy a pre-nanotech Friendly AI before we get to the grey goo tipping point, and that it will be smart enough, altruistic enough, and capable enough to prevent humanity from bletching the planet as badly as the green microbes did back in the day :)

Comment author: 28 May 2014 11:27:36PM 5 points [-]

You are starting from the premise that gray goo scenarios are likely, and trying to rationalize your belief.

Yes, we can be clever and think of humans as green goo - the ultimate in green goo, really. But that isn't what we're talking about, and you know it: intelligent life can spread out everywhere, but that isn't what we're worried about. We're worried about unintelligent things wiping out intelligent things.

The great oxygenation event is not actually an example of a green goo scenario, though it is an interesting thing to consider - I'm not sure there even is a generalized term for that kind of scenario, as it was essentially slow atmospheric poisoning. It would be better described as a generalized biocide scenario: the cyanobacteria which caused it produced something incidentally toxic to other things, but the toxicity had nothing to do with their own action, and probably didn't even benefit most of them directly (that is to say, the oxygen they produced probably didn't help them personally) - and what took over afterwards were things rather different from what came before, many of them not descended from those cyanobacteria.

It was a major atmospheric change, and is (theoretically) a danger, though I'm not sure how much of an actual danger it is in the real world. We saw the atmosphere shift to an oxygen-dominated one, but I'm not sure how you'd do it again, as I'm not sure there's anything else toxic that can be freed en masse - better oxidizers than oxygen are hard to come by, and by their very nature are rather difficult to liberate from an energy-balance standpoint. It seems likely that our atmosphere is oxygen-based and not, say, chlorine- or fluorine-based for a reason arising from the physics of liberating those chemicals from chemical compounds.

As for repeated green goo scenarios prior to 600 Mya - I think that's pretty unlikely, honestly. Looking at microbial diversity and microbial genomes, we see that the domains of life are ridiculously ancient, and that diversity goes back an enormously long way. It seems very unlikely that repeated green goo scenarios would spare the amount of diversity we actually see in the real world. Eukaryotic life arose 1.6-2.1 Bya, and as far as multicellular life goes, we have evidence of cyanobacteria showing signs of multicellularity 3 Bya.

That's a long, long time, and it seems unlikely that repeated green goo scenarios are what kept life simple. It seems more likely that what kept life simple is that complexity is hard - indeed, I suspect the big advancement was actually major progress in the modularity of life. The more modular life becomes, the easier it is to evolve quickly and adapt to new circumstances, but getting modularity from non-modularity is pretty tough to sort out. Once things did sort it out, though, we saw a massive explosion in diversity. Evolving to be better at evolving is a good strategy for continuing to exist, and I suspect that complex multicellular life only came to exist once things got to the point where this could happen.

If we saw repeated green goo scenarios, we'd expect the various branches of life to be pretty shallow - even if some diversity survived, we'd expect each diverse group to show a major bottleneck at whenever the last green goo occurred. But that's not what we actually see. Fungi and animals diverged about 1.5 Bya, for instance, and other eukaryotic divergences occurred even earlier. Animals have been diverging for 1.2 billion years.

It seems unlikely, then, that there have been any green goo scenarios in a very, very long time, if indeed they ever did occur. Indeed, it seems likely that life evolved to prevent said scenarios, and did so successfully, as none have occurred in a very, very, very long time.

Pestilence is not even close to green goo. Yes, introducing a new disease into a new species can be very nasty, but it almost never actually is, as most of the time, it just doesn't work at all. Even amongst the same species, Smallpox and other old-world diseases wiped out the Native Americans, but Native American diseases were not nearly so devastating to the old-worlders.

Most things which try to jump the species barrier have a great deal of difficulty doing so, and even when they succeed, their virulence drops over time, because being ridiculously fatal is actually bad for their own continued propagation. And humans have become increasingly good at stopping this sort of thing. I did note engineered plagues as the most likely technological threat, but comparing them to gray goo scenarios is very silly - pathogens are enormously easier to control. The trouble with stuff like gray goo is that it just keeps spreading; a pathogen requires a host. There are all sorts of barriers in place against pathogens, and everything has evolved to deal with them - organisms sometimes have to deal with even novel pathogens, and things more likely to survive that exposure are more likely to pass on their genes in the long term.

With regards to "intelligent viral networks" - this is just silly. Life on earth is NOT the result of intelligence. You can tell this from our genomes. There are no signs of engineering ANYWHERE in us; no signs of intelligent design.

Comment author: 29 May 2014 04:23:31AM *  4 points [-]

The gray goo is predicated on the sort of thinking common in bad scifi.

Basically, in scifi the nanotech self-replicators which eat everything in their path are created in one step - as opposed to a realistic depiction of technological progress, in which the first nanotech replicators have to sit in a batch of special nutrients and be microwaved, or otherwise provided energy, while being kept perfectly sterile (to keep bacteria from eating your nanotech). Then they'd be gradually improved in a great many steps and find many uses, ranging from cancer cures to dishwashers, with corresponding development in goo-control methods. You don't want your dishwasher goo eating your bread.

The levels of metabolic efficiency and sheer universality required for the gray goo to be able to eat everything in its path (and that's stuff which hasn't gotten eaten naturally) require a multitude of breakthroughs on top of an incredibly advanced nanotechnology and nano-manufacturing capacity within artificial environments.

How does such an advanced civilization fight the gray goo? I can't know what would be the best method, but a goo equivalent of bacteriophage is going to be a lot, lot less complicated than the goo itself (as the goo has to be able to metabolize a variety of foods efficiently).

Comment author: 01 June 2014 01:19:46PM 0 points [-]

Comment author: 28 May 2014 12:33:17PM 0 points [-]

Indeed, we have very strong evidence against it: surely, intelligent life has arisen elsewhere in the universe, and we would see galaxies being annihilated by high-end weaponry.

That's a bad argument. We don't know for sure that intelligent life has arisen elsewhere. The fact that we don't see events like that can simply mean that we are the first.

Comment author: 28 May 2014 10:42:05PM 0 points [-]

That's a pretty weak argument due to the mediocrity principle and the sheer scale of the universe; while we certainly don't know the values for all parts of the Drake Equation, we have a pretty good idea, at this point, that Earth-like planets are probably pretty common, and given that abiogenesis occurred very rapidly on Earth, that is weak evidence that abiogenesis isn't hard in an absolute sense.

Most likely, the Great Filter lies somewhere in the latter half of the equation - complex, multicellular life, intelligent life, civilization, or the rapid destruction thereof. But even assuming that intelligent life only occurs in one galaxy out of every thousand, which is incredibly unlikely, that would still give us many opportunities to observe galactic destruction.

It is theoretically possible that we're the only life in the Universe, but that is incredibly unlikely; most Universes in which life exists will have life exist in more than one place.
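The scale argument here is easy to make concrete (both numbers below are illustrative placeholders, not estimates): even granting the deliberately pessimistic one-in-a-thousand figure, the often-quoted rough count of around two trillion galaxies in the observable universe would still leave billions of chances to observe galactic-scale destruction.

```python
# Illustrative only: both inputs are placeholders, not measurements.
observable_galaxies = 2e12  # often-quoted rough count for the observable universe
civ_fraction = 1 / 1000     # the deliberately pessimistic one-in-a-thousand case

print(f"{observable_galaxies * civ_fraction:,.0f} galaxies with intelligent life")
```

Even under assumptions this pessimistic, "we see no galaxy-scale annihilation anywhere" remains a meaningful observation.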

Comment author: 29 May 2014 01:13:34AM 0 points [-]

given that abiogenesis occurred very rapidly on Earth, that is weak evidence that abiogenesis isn't hard in an absolute sense.

We don't even know that it occurred on earth at all. It might have occurred elsewhere in our galaxy and traveled to earth via asteroids.

most Universes in which life exists will have life exist in more than one place.

Why? I don't see any reason why that should be the case. Take, for example, the posts that internet forum users write: most of the time, most users who write posts at all write only one post.

Comment author: [deleted] 01 June 2014 08:01:02AM 1 point [-]

We don't even know that it occurred on earth at all. It might have occurred elsewhere in our galaxy and traveled to earth via asteroids.

That would make it more likely that there's life on other planets, not less likely.

Comment author: 01 June 2014 08:55:18AM *  1 point [-]

Most planets and stars in the universe are not in our galaxy. If our galaxy has a bit of unicellular life because some very rare event happened, and is the only galaxy with life, that fits a universe in which we are the only intelligent species.

Comment author: [deleted] 01 June 2014 10:03:00AM 0 points [-]

It looks like you accidentally submitted your comment before finishing it (or there's a misformatted link or something).

Comment author: 01 June 2014 01:00:03PM 0 points [-]

I corrected it.

Comment author: 27 May 2014 11:23:31PM *  0 points [-]

Your central point seems to be "a rational agent should take a risk that might result in universal destruction in exchange for increased utility".

The problem here is I'm not sure that this is even a meaningful argument to begin with. Obviously universal destruction is extremely bad, but the problem is that utility probably includes all life NOT being extinguished. Or, in other words, this isn't necessarily a meaningful calculation if we assume that the alternative makes it more likely that universal annihilation will occur.

Say the Nazis gain an excessive amount of power. What happens then? Well, there's the risk that they make some sort of plague to cleanse humanity, screw it up, and wipe everyone out. That scenario seems MORE likely in a Nazi-run world than one which isn't. And - let's face it - chances are the Nazis will try and develop nuclear weapons, too, so at best you only bought a few years. And if the wrong people develop them first, you're in a lot of trouble. So the fact of the matter is that the risk is going to be taken regardless, which further diminishes the loss of utility you could expect from universal annihilation - sooner or later, someone is going to do it, and if it isn't you, then it will be someone else who gains whatever benefits there are from it.

The higher utility situation likely decreases the future odds of universal annihilation, meaning that, in other words, it is entirely rational to take that risk simply because the odds of destroying the world NOW are less than the odds of the world being destroyed further on down the line by someone else if you don't make this decision, especially if you can be reasonably certain someone else is going to try it out anyway. And given the odds are incredibly low, it is a lot less meaningful of a choice to begin with.

Comment author: 27 May 2014 10:47:57PM -1 points [-]

Incidentally, regarding some other things in here:

They thought that just before World War I. But that's not my final rejection. Evolutionary arguments are a more powerful reason to believe that people will continue to have conflicts. Those that avoid conflict will be out-competed by those that do not.

There's actually a pretty good counter-argument to this: capital is vastly easier to destroy than it is to create, and thus an area which avoids conflict has an enormous advantage over one that doesn't, because it keeps more of its capital. As capital becomes increasingly important, conflict - at least violent, capital-destroying conflict - becomes massively less beneficial to its perpetrator, doubly so when the perpetrator likely also benefits, through trade, from the capital contained in other nations.

And that's ignoring the fact that we've already sort of engineered a global scenario where "The West" (the US, Canada, Japan, South Korea, Taiwan, Australia, New Zealand, and Western Europe, creeping now as far east as Poland) never attack each other, and slowly make everyone else in the world more like them. It is group selection of a sort, and it seems to be working pretty well. These countries defend their own capital and each other's, benefit from each other's capital, and engage solely in non-violent conflict with each other. If you threaten them, they crush you and make you more like them; even if you don't, they work to corrupt you into being more like them. Indeed, even places like China are slowly being corrupted into being more like the West.

The more that sort of thing happens, the less likely violent conflict becomes, because it is simply less beneficial; indeed, there is even some evidence to suggest we are being selected for docility - in "the West", crime and homicide rates have declined for 20+ years now.

As a final, random aside:

My favorite thing about the Trinity test was the scientist who was taking side bets on the annihilation of the entire state of New Mexico, right in front of the governor of said state, who I'm sure was absolutely horrified.

Comment author: 28 May 2014 02:28:58PM 1 point [-]

the fact that capital is vastly easier to destroy than it is to create

Capital is also easier to capture than it is to create. Your argument looks like saying that it's better to avoid wars than to lose them. Well, yeah. But what about winning wars?

we've already sort of engineered a global scenario where "The West" ... never attack each other

In which meaning are you using the word "never"? :-D

Comment author: 28 May 2014 11:42:54PM 0 points [-]

The problem is that asymmetric warfare, which is the best way to win a war, is the worst way to acquire capital. Cruise missiles and drones are excellent for winning without any risk at all, but they're not good for actually keeping the capital you are trying to take intact.

Spying, subversion, and purchasing are far cheaper, safer, and more effective means of capturing capital than violence.

As far as "never" goes - the last time any two "Western" countries were at war was World War II, which was more or less when the "West" came to be in the first place. It isn't the longest of time spans, but over time armed conflict in Europe has greatly diminished and been pushed further and further east.

Comment author: 29 May 2014 04:07:03PM 1 point [-]

The problem is that asymmetric warfare, which is the best way to win a war, is the worst way to acquire capital.

The best way to win a war is to have an overwhelming advantage. That sort of situation is much better described by the word "lopsided". Asymmetric warfare is something different.

Example: Iraqi invasion of Kuwait.

Spying, subversion, and purchasing are far cheaper, safer, and more effective means of capturing capital than violence.

Spying can capture technology, but technology is not the same thing as capital. Neither subversion nor purchasing are "means of capturing capital" at all. Subversion destroys capital and purchases are exchanges of assets.

As far as "never" goes - the last time any two "Western" countries were at war was World War II, which was more or less when the "West" came to be in the first place.

That's an unusual idea of the West. It looks to me like it was custom-made to fit your thesis.

Can you provide a definition? One sufficiently precise to be able to allocate countries like Poland, Israel, Chile, British Virgin Islands, Estonia, etc. to either "West" or "not-West".

Comment author: 28 May 2014 04:52:33PM 0 points [-]

Depends on the capital. Doesn't work too well for infrastructure and human capital, and the west has plenty of those anyway. What the west is insecure about is energy, and it seems that a combination of diplomacy, threat and proxy warfare is a more efficient way to keep it flowing than all-out capture.

Comment author: 28 May 2014 06:09:55PM 1 point [-]

Doesn't work too well for infrastructure and human capital

Depends on the human capital. Look at the history of the US space program :-/

is a more efficient way to keep it flowing

At the moment. I'm wary of evolutionary arguments based on a few decades worth of data.

Comment author: 28 May 2014 06:17:39PM 0 points [-]

The example of von Braun and co crossed my mind. But that was something of a side effect. Fighting a war specifically to capture a smallish number of smart people is fraught with risks.

Comment author: 28 May 2014 11:43:59PM 1 point [-]

Opportunistic seizure of capital is to be expected in a war fought for any purpose.

Comment author: 27 May 2014 11:52:11PM 0 points [-]

Incidentally, you can blockquote paragraphs by putting > in front of them, and you can find other help by clicking the "Show Help" button to the bottom right of the text box. (I have no clue why it's all the way over there; it makes it way less visible.)

There's actually a pretty good counter-argument to this, namely the fact that capital is vastly easier to destroy than it is to create, and that, thusly, an area which avoids conflict has an enormous advantage over one that doesn't because it maintains more of its capital.

But, the more conflict avoidant the agents in an area, the more there is to gain from being an agent that seeks conflict.

Comment author: 28 May 2014 11:47:33PM *  2 points [-]

The more conflict avoidant the agents in an area, the more there is to gain from being an agent that seeks conflict.

This is only true if the conflict avoidance is innate and is not instead a form of reciprocal altruism.

Reciprocal altruism is an ESS where pure altruism is not because you cannot take advantage of it in this way; if you become belligerent, then everyone else turns on you and you lose. Thus, it is never to your advantage to become belligerent.

Comment author: 29 May 2014 04:33:11AM *  1 point [-]

Agreed. The word 'avoid' and the group selection-y argument made me think it was a good idea to raise that objection and make sure we were discussing reciprocal pacifists, not pure pacifists.

Comment author: 27 May 2014 11:47:23PM *  0 points [-]

I don't even know how they got at it, but it sounds like a typical wild guess to me. How would you even arrive at that figure?

Here is a contemporary paper discussing the risk, which doesn't seem to come up with the 3e-6 number, and here are some of Hamming's reflections. An excerpt from the second link:

Shortly before the first field test (you realize that no small scale experiment can be done--either you have a critical mass or you do not), a man asked me to check some arithmetic he had done, and I agreed, thinking to fob it off on some subordinate. When I asked what it was, he said, "It is the probability that the test bomb will ignite the whole atmosphere." I decided I would check it myself!

Compton claims (in an interview with Pearl Buck I cannot easily find online) that 3e-6 was actually the decision criterion (if it was higher than that, they were going to shut down the project as more dangerous than the Nazis), and the estimate came in at lower, and so they went ahead with the project.

In modern reactors, they try to come up with a failure probability by putting distributions on unknown variables during potential events, simulating those events, and then figuring out what portion of the joint input distribution will lead to a catastrophic failure. One could do the same with unknown parameters like the cross-section of nitrogen at various temperatures; "this is what we think it could be, and we only need to be worried if it's over here."

Comment author: 27 May 2014 10:46:49PM *  0 points [-]

Apparently I don't know how to use this system properly.

Comment author: 06 August 2009 11:20:00PM 0 points [-]

In saying "our compression of subjective time can be exponential", do you actually mean that the compression rate may keep growing exponentially as a function of real time?

Comment author: 06 August 2009 11:56:00PM *  0 points [-]

Compression attained can be an exponential function of time. That's not the same as saying that compression rate can grow exponentially. I mean, if it's a "rate", it already expresses how compression grows, so "the compression rate grows exponentially" means "the first derivative of compression grows exponentially".

Anyway, compression can't keep increasing indefinitely, due to the Planck constant. Mike Vassar once did some back-of-envelope calculations showing that we have surprisingly few orders of magnitude to go before we hit it in terms of computational power - less than 20 orders of magnitude, IIRC. But two orders of magnitude is enough to kill us, in this scenario. Basically, it will take us something like 5 million years to reach another galaxy, at which point you might consider life safe. If we get just 2 orders of magnitude out of subjective time compression, that's like 500 million years, and our survival to that point seems dubious.
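The arithmetic here is easy to check with a quick sketch (Python). The 5-million-year travel figure and the order-of-magnitude compression numbers are the comment's own assumptions, not computed values; "compression" is assumed to simply multiply subjective time per unit of real time:

```python
# Back-of-envelope check of the subjective-time argument above.
# The 5-million-year intergalactic travel time is taken from the
# comment, not derived here.

real_travel_years = 5_000_000  # rough travel time to another galaxy

for orders_of_magnitude in (0, 2, 20):
    compression = 10 ** orders_of_magnitude
    subjective_years = real_travel_years * compression
    print(f"{orders_of_magnitude:>2} orders of magnitude -> "
          f"{subjective_years:.1e} subjective years of exposure")
```

Two orders of magnitude turn 5 million real years into 500 million subjective years, which is the figure given above.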

(I went back to clarify this because I realized people don't usually think of 20 orders of magnitude as "surprisingly few".)

Comment author: 07 August 2009 12:00:26AM *  0 points [-]

By "compression rate" I meant "compression ratio". Sorry for the confusion. But you know that if something grows exponentially, all of its nth derivatives do also, right?

I did know that the actual universe probably has some physical limits in how you can shrink a computation in space and/or time, thus my question. Actually, I thought you might have done the not-math "exponential" as a way of saying "A LOT!!!"

Comment author: 07 August 2009 12:05:56AM *  0 points [-]

Okay, it is the same thing as saying that compression rate can grow exponentially.

I meant exponential. I don't know if I believe it's exponential, but almost all other futurists say that things are speeding up (time is compressing) exponentially.

Comment author: 06 August 2009 11:28:00PM 0 points [-]

"if your utility is at risk of going negative; it's not possible that you would not accept a .999% risk". I assume you mean that if my utility is around 0, and things are trending toward worse, I should be happy to accept a 99.9% chance of destroying the universe (assuming I'm the .1% possibility gives me some improvement).

"Is life barely worth living? Buy a lottery ticket, and if you don't win, kill yourself - you win either way!" - probably not the best marketing campaign for the state-run lottery.

Comment author: 06 August 2009 11:49:37PM *  1 point [-]

"if your utility is at risk of going negative; it's not possible that you would not accept a .999% risk"

Look at where the semicolon is. You've combined the end of one clause with the beginning of a different clause.

"Is life barely worth living? Buy a lottery ticket, and if you don't win, kill yourself - you win either way!" - probably not the best marketing campaign for the state-run lottery.

Comment author: 06 August 2009 06:29:53PM *  0 points [-]

We're talking about bringing existential threats to chances less than 1 in a million per century. I don't know of any defensive technology that can guarantee a less than 1 in a million failure rate.

Under your theory of 3/1M/Century, you'd only need to do better than a 1/3 failure rate to lower chances to 1/1M/C. A 1/3 failure rate seems rather plausible. If the defense had a 1/1M failure rate, you'd have a 3/1,000,000,000,000 chance of eradication per century.
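To make the arithmetic in this exchange concrete, here is a minimal sketch (Python), assuming one independent extinction-risk event per century with the original post's base probability; the 100/p closed form is the geometric-series sum from the post:

```python
# Residual per-century extinction risk when a defense with some
# independent failure rate is applied to the 3-in-a-million event.
BASE_RISK = 3e-6  # per-century probability from the original post

def residual_risk(defense_failure_rate):
    """Per-century extinction probability after the defense is applied."""
    return BASE_RISK * defense_failure_rate

def expected_survival_years(p):
    """Mean of the geometric distribution: 100 * sum n*p*(1-p)^(n-1) = 100/p."""
    return 100 / p

print(residual_risk(1 / 3))   # ~1e-06, i.e. 1/1M per century
print(residual_risk(1e-6))    # ~3e-12, i.e. 3/1,000,000,000,000 per century
print(expected_survival_years(BASE_RISK))  # ~33 million years unmitigated
```

A 1/3 defense failure rate yields the 1/1M/C figure above, and a 1/1M failure rate yields the 3-in-a-trillion figure.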

Comment author: 06 August 2009 06:42:39PM *  1 point [-]

Assume that there is at least one attack per century, and a successful attack will end life. Therefore, you need a failure rate less than 1 in a million to survive a million centuries.

Comment author: 06 August 2009 08:51:00PM 0 points [-]

Every organism you see is the result of an unbroken chain of non-extinction that stretches back some 4 billion years. The rate of complete failure for living systems is not known - but it appears to have been extremely low so far.

Comment author: 06 August 2009 09:25:43PM *  3 points [-]
• Time compression did not start recently. (Well, it did, once you account for time compression.)

• Bacteria have limited technological capabilities.

Comment author: 06 August 2009 09:26:08PM 0 points [-]

This assumes that a successful attack will end life with P~=1 and a successful attack will occur once per century, which seems, to put it mildly, excessive.

As I understood your original assumption, each century sees one event with P=3/1M of destruction, independent of any intervention. If an intervention has a 1/3 failure rate, and you intervene every time, this would reduce your chance of annihilation/century to 1/1M, which is your goal.

It's quite possible we're thinking of different things when we say failure rate; I mean the failure rate of the defensive measure; I think you mean the failure rate as in the pure odds the world blows up.

Comment author: 06 August 2009 09:58:17PM *  0 points [-]

I wasn't using the Trinity case when I wrote that part. This part assumes that we will develop some technology X capable of destroying life, and that we'll also develop technology to defend against it. Then say each century sees 1 attack using technology X that will destroy life if it succeeds. (This may be, for instance, from a crazy or religious or just very very angry person.) You need to defend successfully every time. It's actually much worse, because there will probably be more than one Technology X.

If you think about existing equilibria between attackers and defenders, such as spammers vs. spam filters, it seems unlikely that, once technology has stopped developing, every dangerous technology X will have such a highly-effective defense Y against it. The priors, I would think, would be that (averaged over possible worlds) you would have something more like a 50% chance of stopping any given attack.

Comment author: 29 May 2011 01:02:53PM -1 points [-]

Assuming rational agents with a reasonable level of altruism (by which I mean, incorporating the needs of other people and future generations into their own utility functions, to a similar degree to what we consider "decent people" to do today)...

If such a person figures that getting rid of the Nazis or the Daleks or whoever the threat of the day is, is worth a tiny risk of bringing about the end of the world, and their reasoning is completely rational and valid and altruistic (I won't say "unselfish" for reasons discussed elsewhere in this thread) and far-sighted (not discounting future generations too much)...

... then they're right, aren't they?

If the guys behind the Trinity test weighed the negative utility of the Axis taking over the world, presumably with the end result of boots stamping on human faces forever, and determined that the 3/1,000,000 chance of ending all human life was worth preventing this future from coming to pass, then couldn't Queen Victoria perform the same calculations, and conclude "Good heavens. Nazis, you say? Spreading their horrible fascism in my empire? Never! I do hope those plucky Americans manage to build their bomb in time. Tiny chance of destroying the world? Better they take that risk than let fascism rule the world, I say!"

If the utility calculations performed regarding the Trinity test were rational, altruistic and reasonably far-sighted, then they would have been equally valid if performed at any other time in history. If we apply a future discounting factor of e^-kt, then that factor would apply equally to all elements in the utility calculation. If the net utility of the test were positive in 1945, then it should have been positive at all points in history before then. If President Truman (rationally, altruistically, far-sightedly) approved of the test, then so should Queen Victoria, Julius Caesar and Hammurabi have, given sufficient information. Either the utility calculations for the test were right, or they weren't.
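The discounting claim above can be illustrated with a small sketch (Python). The utility numbers and the discount rate k are made up purely for illustration; the point is that a uniform exponential discount factor cannot flip the sign of the net utility:

```python
import math

# With exponential discounting e^(-k*t), moving the evaluation point
# earlier multiplies every term of the expected-utility sum by the
# same positive factor, so the SIGN of the net utility cannot change.
k = 0.01                      # assumed discount rate (illustrative)
u_win, u_doom = 100.0, -1e6   # made-up utilities for illustration
p_doom = 3e-6                 # chance of igniting the atmosphere

def net_utility(years_before_1945):
    discount = math.exp(-k * years_before_1945)
    return discount * ((1 - p_doom) * u_win + p_doom * u_doom)

# Judged in 1945 (Truman) and from 1880 (Queen Victoria):
print(net_utility(0) > 0, net_utility(65) > 0)  # same sign in both cases
```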

If they were right, then the problem stops being "Oh no, future generations are going to destroy the world even if they're sensible and altruistic!", and starts being "Oh no, a horrible regime might take over the world! Let's hope someone creates a superweapon to stop them, and damn the risk!"

If they were wrong, then the assumption that the ones performing the calculation were rational, altruistic and far-sighted is wrong. Taking these one by one:

1) The world might be destroyed by someone making an irrational decision. No surprises there. All we can do is strive to raise the general level of rationality in the world, at least among people with the power to destroy the world.

2) The world might be destroyed by someone with only his own interests at heart. So basically we might get stuck with Dr Evil. We can't do a lot about that either.

3) The world might be destroyed by someone acting rationally and altruistically for his own generation, but who discounts future generations too much (i.e. his value of k in the discounting factor is much larger than ours). This seems to be the crux of the problem. What is the "proper" value of k? It should probably depend on how much longer humans are going to be around, for reasons unrelated to the question at hand. If the world really is going to end in 2012, then every dollar spent on preventing global warming should have been spent on alleviating short-term suffering all over the world, and the proper value for k is very large. If we really are going to be here for millions of years, then we should be exceptionally careful with every resource (both material and negentropy-based) we consume, and k should be very small. Without this knowledge, of course, it's very difficult to determine what k should be.

That may be the way to avoid a well-meaning scientist wiping out all human life - find out how much longer we have as a species, and then campaign that everyone should live their lives accordingly. Then, the only existential risks that would be implemented are the ones that are actually, seriously, truly, incontrovertibly, provably worth it.

Comment author: 29 April 2014 02:09:58PM 0 points [-]

You've sidestepped my argument, which is that just the existential risks that are worth it are enough to guarantee destroying the universe in a cosmologically short time.

Comment author: 06 August 2009 10:00:44PM 0 points [-]

It seems like this post exhibits a great deal of omission bias. Refusing to make rational trade-offs with existential risk doesn't make the risk go away.

Comment author: 06 August 2009 11:18:29PM *  0 points [-]

You seem to be saying that we're doomed if we do, and doomed if we don't.

So, take on all these risks, because life is going to be extinguished anyway?

I don't think that's what you mean. I think, if you mean anything coherent, you mean that... no, I can't figure out what you might mean.

If you choose not to take a risk, it makes that risk go away. If you mean that you're going to get hit by some other risk that you didn't think of anyway, you are showing little faith in intelligence. As we learn more, we become aware of more and more risks. There can't be an infinite number of existential risks waiting for us, or we wouldn't be here. Therefore, we can expect to eventually anticipate most existential risks, and deal with them.

Comment author: 08 August 2009 07:03:52AM 1 point [-]

I'm assuming that avoiding risks involves a tradeoff versus other forms of human welfare, and I don't see why a strategy that makes humanity worse off but longer lived is necessarily preferable to one that makes humanity better off but shorter lived.

And yes, we're "doomed" in the sense that, as far as I understand, an infinitely long existence isn't an available option.

Comment author: 06 August 2009 07:26:51PM *  0 points [-]

Re: "If you believe in accelerating change, then the number of important events in a given time interval increases exponentially, or, equivalently, the time intervals that should be considered equivalent opportunities for important events shorten exponentially."

Uh - that doesn't go on forever. Any more than a grain pile allows rats to multiply forever. Your statement takes the idea of exponential change into an utterly ridiculous realm.

Comment author: 06 August 2009 07:39:03PM *  0 points [-]

You're right. It still has a large impact, though. Even if we get only 3 more doublings, it reduces the time available by a factor of 8.

The nearest other galaxy is 2 million light years away. If we get 6 doublings, that's 128 million subjective light years. That's a worrisome amount.

The nearest other star is 4.24 light years away. If we get 20 doublings, that's over 4 million subjective light years away. Also a worrisome amount.
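The doubling arithmetic above checks out (a quick Python sketch; "subjective light years" here just means the real distance multiplied by the compression factor, per the comment's usage):

```python
# Each doubling of subjective-time compression doubles the subjective
# duration of a fixed light-speed journey.
def subjective_light_years(distance_ly, doublings):
    return distance_ly * 2 ** doublings

print(subjective_light_years(2_000_000, 6))  # galaxy: 128,000,000
print(subjective_light_years(4.24, 20))      # star: ~4.4 million
```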

Comment author: 06 August 2009 08:33:55PM 0 points [-]

The observation bears on this statement:

"More important is that our compression of subjective time can be exponential, while our ability to escape from ever-broader swaths of destruction is limited by lightspeed."

Eventually, compression in subjective time stops, while expansion continues.

Comment author: 06 August 2009 09:13:42PM 0 points [-]

Yes, that's right. If you can survive long enough to get to that point.

Comment author: 06 August 2009 06:17:40PM 0 points [-]

Your argument assumes that the time-horizon of rational utility maximisers never reaches further than their next decision. If I only get one shot to increase my expected utility by 1%, and I'm rational, yes, I'll take any odds better than 99:1 in favour on an all or nothing bet. That is a highly contrived scenario: it is almost always possible to stake less than your entire utility on an outcome, in which case you generally should in order to reduce risk-of-ruin and thus increase long-term expected utility.
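The staking point above can be made concrete with a toy simulation (Python). The bet's payoff and odds are invented for illustration, not taken from the thread: a favorable bet can still ruin you if you repeatedly stake everything on it.

```python
import random

# Toy model (assumed numbers): a bet that pays 2% on the stake with
# probability 99.5% and loses the stake with probability 0.5%.  Going
# all-in maximizes single-round expected value, yet almost surely
# ruins you over many rounds; staking a fraction does not.
random.seed(0)

def final_wealth(stake_fraction, rounds=2000):
    wealth = 1.0
    for _ in range(rounds):
        stake = wealth * stake_fraction
        if random.random() < 0.995:
            wealth += stake * 0.02  # win: 2% gain on the stake
        else:
            wealth -= stake         # lose the entire stake
        if wealth <= 0:
            return 0.0
    return wealth

print(final_wealth(1.0))  # all-in: almost surely ruined at some round
print(final_wealth(0.1))  # fractional stake: survives, typically grows
```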

Further, the risks of not using nuclear weapons in the Second World War were nothing like those you gave. Japan was in no danger of occupying the United States at the time the decision to initiate the Trinity test was made; as for Germany, it had already surrendered! The anticipated effects of not using the Bomb were rather that of losing perhaps several hundred thousand soldiers in an invasion of Japan, and of course the large economic cost of prolonging the war. As for the calculated risk of atmospheric ignition, the calculated risk at the time of testing was even lower than the 300000:1 you stated (although Fermi offered evens on the morning of the Trinity test).

Comment author: 06 August 2009 06:44:25PM *  2 points [-]

That is a highly contrived scenario: it is almost always possible to stake less than your entire utility on an outcome, in which case you generally should in order to reduce risk-of-ruin and thus increase long-term expected utility.

It is derived from real-life experiences, which I listed. Yes, it is almost always possible to stake less than your entire utility. Almost. Hence the "once per century", instead of "billions of times per day".

Further, the risks of not using nuclear weapons in the Second World War were nothing like those you gave. Japan was in no danger of occupying the United States at the time the decision to initiate the Trinity test was made; as for Germany, it had already surrendered! The anticipated effects of not using the Bomb were rather that of losing perhaps several hundred thousand soldiers in an invasion of Japan, and of course the large economic cost of prolonging the war.

I think that's right. Do you realize you are only making my case stronger, by showing that the decision was made by somewhat-rational people for even fewer benefits?

If you accept that it would be rational to perform the Trinity test in a hypothetical world in which the Germans and Japanese were winning the war, and in which it had a 3 in 1 million chance of destroying life on Earth, then I have made my point. Arguing about what the risks and benefits truly were historically is irrelevant. It doesn't really matter what actual humans did in an actual situation, because we already agree that humans are irrational. What matters is whether we can find conditions under which perfectly rational beings, faced with the situation as I posed it, would choose not to conduct the Trinity test.

As for the calculated risk of atmospheric ignition, the calculated risk at the time of testing was even lower than the 300000:1 you stated

So what were they?

Comment author: 06 August 2009 07:36:50PM *  0 points [-]

I meant that low existential risk wagers are almost always available, regardless of the presence or absence of high existential risk wagers, and I claimed that those low risk wagers are preferable even when the cost of not taking the high risk wager is very high, provided that is that you have a long time horizon. The only time you should take a high existential risk wager is when your long-term future utility will be permanently and substantially decreased by your not doing so. That doesn't apply to your example of the first nuclear test, as the alternative would not lead to the nightmare invasion scenario, but rather to a bloody but recoverable mess. So you haven't proven the rationality of testing nukes (although only in that scenario, as you point out).

If you accept that it would be rational to perform the Trinity test in a hypothetical world in which the Germans and Japanese were winning the war, and in which it had a 3 in 1 million chance of destroying life on Earth, then I have made my point.

It actually still depends on the time horizon and the utility of a surrender or truce. I suppose a 30 year cut-off could be short enough to make it rational. Of course if being overrun by Nazis is assumed to lead to eternal hellish conditions...

What matters is whether we can find conditions under which perfectly rational beings, faced with the situation as I posed it, would choose not to conduct the Trinity test.

I really don't think we can. If the utility of not-testing is zero at all times after T, the utility of destroying the Earth is zero at all times after T, and the utility of winning the war is greater than or equal to 0 at all times after T, then whatever your time horizon you will test if you are an expected utility maximizer. Where I disagree is that there exists a base rate of anywhere like once per century for taking a 3 in a million chance of destroying the world in any scenarios where those sorts of figures don't hold. I also disagree that rational agents will confront each other with such choices anything like so often as once per century.

As for the calculated risk of atmospheric ignition, the calculated risk at the time of testing was even lower than the 300000:1 you stated

So what were they?

They were zero! Proved to be physically impossible, and then no-one seriously questioned the validity of the calculations. As for the best-possible estimate, it must have been higher, of course.

Comment author: 06 August 2009 08:00:42PM 0 points [-]

Where I disagree is that there exists a base rate of anywhere like once per century for taking a 3 in a million chance of destroying the world in any scenarios where those sorts of figures don't hold. I also disagree that rational agents will confront each other with such choices anything like so often as once per century.

If you were to wind the clock back to 1940, and restart World War 2, is there less than a 10% chance of arriving in the nightmare scenario? If not, doesn't this imply the base rate is at least once per thousand years?

Comment author: 06 August 2009 07:44:10PM 0 points [-]

I think that you're historically correct, but it's enough to posit a hypothetical World War 2 in which the alternative was the nightmare invasion scenario, and show that it would then be rational to conduct the Trinity test.

Comment author: 06 August 2009 07:54:57PM *  0 points [-]

I've edited my comment in response to the hypothetical.

Comment author: 06 August 2009 07:58:40PM 0 points [-]

It actually still depends on the time horizon. I suppose a 30 year cut-off could be short enough to make it rational. Of course if being overrun by Nazis is assumed to lead to eternal hellish conditions...

The time-horizon is very important. One of my points is that I don't see how a rational agent could have a time-horizon on the scale of the life of the universe.

Comment author: 06 August 2009 06:29:41PM *  2 points [-]

although Fermi offered evens on the morning of the Trinity test

I bet I know which side of the bet Fermi was willing to take.

Comment author: 06 August 2009 07:10:54PM 1 point [-]

It would be rational to offer any odds for that bet.

Comment author: 06 August 2009 07:03:29PM 0 points [-]

Seconded regarding the stakes in WW2. The scientists weren't on the front lines either, so it's highly doubtful they would have been killed.