Nick Beckstead: On the Overwhelming Importance of Shaping the Far Future

ABSTRACT: In slogan form, the thesis of this dissertation is that shaping the far future is overwhelmingly important. More precisely, I argue that:

Main Thesis: From a global perspective, what matters most (in expectation) is that we do what is best (in expectation) for the general trajectory along which our descendants develop over the coming millions of years or longer.

The first chapter introduces some key concepts, clarifies the main thesis, and outlines what follows in later chapters. Some of the key concepts include: existential risk, the world's development trajectory, proximate benefits and ripple effects, speeding up development, trajectory changes, and the distinction between broad and targeted attempts to shape the far future. The second chapter is a defense of some methodological assumptions for developing normative theories which makes my thesis more plausible. In the third chapter, I introduce and begin to defend some key empirical and normative assumptions which, if true, strongly support my main thesis. In the fourth and fifth chapters, I argue against two of the strongest objections to my arguments. These objections come from population ethics, and are based on Person-Affecting Views and views according to which additional lives have diminishing marginal value. I argue that these views face extreme difficulties and cannot plausibly be used to rebut my arguments. In the sixth and seventh chapters, I discuss a decision-theoretic paradox which is relevant to my arguments. The simplest plausible theoretical assumptions which support my main thesis imply a view I call fanaticism, according to which any non-zero probability of an infinitely good outcome, no matter how small, is better than any probability of a finitely good outcome. I argue that denying fanaticism is inconsistent with other normative principles that seem very obvious, so that we are faced with a paradox. I have no solution to the paradox; I instead argue that we should continue to use our inconsistent principles, but we should use them tastefully. We should do this because, currently, we know of no consistent set of principles which does better.

[If there's already been a discussion post about this, my apologies, I couldn't find it.]

 


I am not sure whether the thesis argues for or against maxipok. Maxipok seems to me like an eminently sensible approach but, like any other, is probably flawed when taken to extremes. My guess is that "we should continue to use our inconsistent principles, but we should use them tastefully" means the ability to detect and avoid these extremes, regardless of the specific optimization goal.

I think it argues mostly for maxipok, but argues that other future-steering goals can also be overwhelmingly important.

This will be a good place to collect the discussion on Beckstead's thesis; thanks!

For the completionist, my earlier comment on Beckstead's thesis is here.

Chapter one of Nick's thesis appeared, in summary form, here and on the Effective Altruism blog.

(Too bad this post wasn't made by Nick Beckstead, because then he would be able to receive notice when someone posts here. I guess I'll send a PM alerting him to this comment.)

I'd like to suggest that Nick do a sequence of posts on the main novel arguments in his thesis, both to draw more attention to them, and to focus discussion. Right now it's hard to get much of a public discussion going because if I read one section of his thesis and post a comment on that, most other people will either have read that section a long time ago and have forgotten much of it, or will read it in the future and therefore can't respond.

That aside, I do have an object-level comment. Nick states (in section 6.3.1) that Period Independence is incompatible with a bounded utility function, but I think that's wrong. Consider a total utilitarian who exponentially discounts each person-stage according to their distance from some chosen space-time event. Then the utility function is both bounded (assuming the undiscounted utility for each person-stage is bounded) and satisfies Period Independence. Another idea for a bounded utility function satisfying Period Independence, which I previously suggested on LW and was originally motivated by multiverse-related considerations, is to discount or bound the utility assigned to each person-stage by their algorithmic probability.
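
To spell out the first of these constructions concretely (in my own notation, which is just one way to make it precise): pick a reference period t_0, fix a discount rate \lambda > 0, and let u_t(w) be the total undiscounted well-being of the person-stages living during period t in world w, with |u_t(w)| \le M for all t and w. Then define

\[
U(w) \;=\; \sum_{t \in \mathbb{Z}} e^{-\lambda\,|t - t_0|}\, u_t(w) .
\]

Since \sum_{t \in \mathbb{Z}} e^{-\lambda |t - t_0|} = (1 + e^{-\lambda})/(1 - e^{-\lambda}), we get |U(w)| \le M\,(1 + e^{-\lambda})/(1 - e^{-\lambda}), so U is bounded; and each period's contribution depends only on what happens during that period (plus its fixed date), so Period Independence holds. Boundedness comes at the cost of weighting periods unequally by their distance from t_0.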

That aside, I do have an object-level comment. Nick states (in section 6.3.1) that Period Independence is incompatible with a bounded utility function, but I think that's wrong. Consider a total utilitarian who exponentially discounts each person-stage according to their distance from some chosen space-time event. Then the utility function is both bounded (assuming the undiscounted utility for each person-stage is bounded) and satisfies Period Independence.

I agree with this. I think I was implicitly assuming some additional premises, particularly Temporal Impartiality. I believe that Period Independence + Temporal Impartiality is inconsistent with bounded utility. (Even saying this implicitly assumes other stuff, like transitive rankings, etc., though I agree that Temporal Impartiality is much more substantive.)

Another idea for a bounded utility function satisfying Period Independence, which I previously suggested on LW and was originally motivated by multiverse-related considerations, is to discount or bound the utility assigned to each person-stage by their algorithmic probability.

I am having a hard time parsing this. Could you explain where the following argument breaks down?

Let A(n,X) be a world in which there are n periods of quality X.

  1. The value of what happens during a period is a function of what happens during that period, and not a function of what happens in other periods.

  2. If the above premise is true, then there exists a positive period quality X such that, for any n, A(n,X) is a possible world.

  3. Assuming Period Independence and Temporal Impartiality, as n approaches infinity, the value of A(n,X) approaches infinity.

  4. Therefore, Period Independence and Temporal Impartiality imply an unbounded utility function.

The first premise here is something I articulate in Section 3.2, but may not be totally clear given the informal statement of Period Independence that I run with.
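
To make steps 3 and 4 explicit (one way of filling them in, assuming the per-period contributions aggregate additively): Period Independence lets us write the value of A(n,X) as a sum of per-period contributions, and Temporal Impartiality forces each period of quality X to contribute the same amount v(X) > 0, so

\[
V(A(n,X)) \;=\; \sum_{i=1}^{n} v(X) \;=\; n\, v(X) \;\to\; \infty \quad \text{as } n \to \infty ,
\]

and no finite bound on V can hold across all n.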

Let me note that one thing about your proposal confuses me, and it could potentially be related to why I don't see which step of the above argument you deny. I primarily think of probability as a property of possible worlds, rather than individuals. Perhaps you are thinking of probability as a property of centered possible worlds? Is your proposal that the goodness of a world A is of the form:

g(A) = (well-being of person 1) × (prior centered-world probability of person 1 in world A) + (well-being of person 2) × (prior centered-world probability of person 2 in A) + ...

? If it is, this is a proposal I have not thought about and would be interested in hearing more about its merits and why it is bounded.

Could you explain where the following argument breaks down?

My proposal violates Temporal Impartiality.

I primarily think of probability as a property of possible worlds, rather than individuals. Perhaps you are thinking of probability as a property of centered possible worlds?

Yes, sort of. When I said "algorithmic probability" I was referring to the technical concept divorced from standard connotations of "probability", but my idea is also somewhat related to the idea of probability as a property of centered possible worlds.
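
As for why it would be bounded, here is a rough sketch (glossing over how person-stages get matched to program outputs): the universal semimeasure assigns total weight at most 1 across distinct outputs, so if each person-stage's well-being is bounded by some M and m(i) is the weight assigned to person-stage i, then

\[
|g(A)| \;\le\; M \sum_i m(i) \;\le\; M .
\]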

I guess there's a bit of an inferential gap between us that makes it hard for me to quickly explain the idea to you. From my perspective, it would be much easier if you were already familiar with Algorithmic Information Theory and my UDT ideas, but I'm not sure if you want to read up on all that. Do you see Paul Christiano often? If so, he can probably explain it to you in person fairly quickly. Or, since you're at FHI, Stuart Armstrong might also know enough about my ideas to explain them to you.

OK, I"ll ask Paul or Stewart next time I see them.

Does your proposal also violate #1 because the simplicity of an observer-situated-in-a-world is a holistic property of the observer-situated-in-a-world rather than a local one?

Does your proposal also violate #1 because the simplicity of an observer-situated-in-a-world is a holistic property of the observer-situated-in-a-world rather than a local one?

Yes (assuming by #1 you mean Period Independence), but it's not clear to what extent. For example, there are at least two kinds of programs that can output a human brain: (A) simulate a world and output the object at some space-time location; (B) simulate a world, scan for an object matching some criteria, then output such an object. If a time period gets repeated exactly, people's algorithmic probability from A gets doubled, but their algorithmic probability from B doesn't. I'm not sure at this point whether A dominates B or vice versa.

Also, it's not clear to me that strict Period Independence is a good thing. It seems reasonable to not value a time period as much if you knew it was an exact repetition of a previous time period. I wrote a post that's related to this.

Also, it's not clear to me that strict Period Independence is a good thing. It seems reasonable to not value a time period as much if you knew it was an exact repetition of a previous time period. I wrote a post that's related to this.

I agree that Period Independence may break in the kind of case you describe, though I'm not sure. I don't think that the kind of case you are describing here is a strong consideration against using Period Independence in cases that don't involve exact repetition. I think your main example in the post is excellent.

I don't think that the kind of case you are describing here is a strong consideration against using Period Independence in cases that don't involve exact repetition.

What if we assume Period Independence except for exact repetitions, where the value of extra repetitions eventually goes to zero? Perhaps this could be a way to be "timid" while making the downsides of "timidity" seem not so bad or even reasonable? For example, in section 6.3.2, such a person would only choose deal 1 over deal 2 if the years of happy lives offered in deal 1 are such that he would already have repeated all possible happy time periods so many times that he values more repetitions very little.
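
As a minimal sketch of how this could yield a bounded utility function, under the substantive assumption that there are only N distinguishable period types, each worth at most M: if the k-th exact repetition of a type contributes only r^{k-1} times its original value, for some 0 < r < 1, then the total value of any world is at most

\[
\sum_{\text{types}} \sum_{k=1}^{\infty} r^{k-1} M \;=\; \frac{N\,M}{1-r} ,
\]

which is finite no matter how many periods the world contains. Whether the space of possible periods really has this kind of structure is a further question.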

BTW what do you think about my suggestion to do a sequence of blog posts based on your thesis? Or maybe you can at least do one post as a trial run? Also as an unrelated comment, the font in your thesis seems to be such that it's pretty uncomfortable to read in Adobe Acrobat, unless I zoom in to make the text much larger than I usually have to. Not sure if it's something you can easily fix. If not, I can try to help if you email me the source of the PDF.

What if we assume Period Independence except for exact repetitions, where the value of extra repetitions eventually goes to zero? Perhaps this could be a way to be "timid" while making the downsides of "timidity" seem not so bad or even reasonable? For example, in section 6.3.2, such a person would only choose deal 1 over deal 2 if the years of happy lives offered in deal 1 are such that he would already have repeated all possible happy time periods so many times that he values more repetitions very little.

I think it would be interesting if you could show that the space of possible periods-of-lives is structured in such a way that, when combined with a reasonable rule for discounting repetitions, yields a bounded utility function. I don't have fully developed views on the repetition issue and can imagine that the view has some weird consequences, but if you could do this I would count it as a significant mark in favor of the perspective.

BTW what do you think about my suggestion to do a sequence of blog posts based on your thesis?

I think this would have some value but isn't at the top of my list right now.

Also as an unrelated comment, the font in your thesis seems to be such that it's pretty uncomfortable to read in Adobe Acrobat, unless I zoom in to make the text much larger than I usually have to. Not sure if it's something you can easily fix. If not, I can try to help if you email me the source of the PDF.

I think I'll stick with the current format for now, for citation consistency. But I have added a larger-font version here.

"Period Independence: By and large, how well history goes as a whole is a function of how well things go during each period of history; when things go better during a period, that makes the history as a whole go better; when things go worse during a period, that makes history as a whole go worse; and the extent to which it makes history as a whole go better or worse is independent of what happens in other such periods."

How far can this go? Suppose I slice history into one-day periods: each day the universe contains one unique advanced civilization with the same overall total moral value, each civilization is completely alien and ineffable to the others, and each lives for only one day and then is gone forever. Does this universe hold the same moral value as one where a single such civilization flourishes for eternity?

We need a name for the "effective altruists" or "extreme altruists" who specifically care about the cosmic future that allegedly potentially depends on events on Earth. Or even just for the field of studies which concerns itself with how to act in such a situation. "Astronomical altruism" and "astronomical ethics" suggest themselves... And I would be more impressed with such astronomical altruists, and their earthbound cousins the effective altruists, if they showed more awareness of the network of catastrophe and disappointment that is so much a part of human life to date.

The astronomical altruists are a minority within a minority, and I suppose I see two camps here. One group thinks in terms of FAI and the contingency of value systems, so the possible futures are conceived as: extinction, a civilization with human-friendly values replicated across the galaxies, a civilization with warped values replicated across the galaxies, paperclips... and so there is the idea that a cosmically bad outcome is possible, not just because of the "astronomical waste" of a future universe that could have been filled with happy people but instead ends up uninhabited, but because the blueprint of the cosmic civilization was flawed at its inception - producing something that is either just alien to human sensibilities, even "renormalized" ones, because it forgot some essential imperatives or introduced others; or (the worst nightmare) producing something that looks actually evil and hostile to human values, and replicating that across millions of light-years.

I was going to say that the other camp just hopes for an idyllic human life copied unto infinity, and concerns itself neither with contingency of value, nor with the possibility that a trillionfold duplication of Earth humanity will lead to a trillionfold magnification of the tragedies already known from our history. Some extreme advocates of space colonization might fit this description, but of course there are other visions out there - a crowded galaxy of upload-descended AIs, selected for their enthusiasm for replication, happily living in subsistence conditions (i.e. with very tight resource budgets); or poetic rhapsodies about an incomprehensibly diverse world of robot forms and AIs of astronomical size, remaking the cosmos into one big Internet...

So perhaps it's more accurate to say that there are two subtypes of astronomical altruism which are a little unreflective about the great future that could happen, the great future for the sake of which we must fight those various threats of extinction grouped under "existential risk". There is a humanist vision, which supposes that the great future consists of human happiness replicated across the stars. It imagines an idyll that has never existed on Earth, but which has certainly been imagined many times over by human utopians seeking a way beyond the grim dour world of history; the novelty is that this idyll is then imagined as instantiated repeatedly across cosmic spaces. And there is a transhumanist vision, basically science-fictional, of inconceivable splendors, endless strange worlds and strange modes of being, the product of an imagination stirred by the recent centuries of intellectual and technological progress.

Now here is something curious. If we keep looking for other views that have been expressed, we will occasionally run across people who are aware that cosmically extended civilization means the possibility or even likelihood of cosmically extended tragedy and catastrophe. And some of these people will say that, nonetheless, it is still worth affirming the drive to spread across the universe: this prospect is so grand that it would redeem even astronomically sized tragedy. I cannot think of any prominent public "tragic cosmists" in the present, who have the fanaticism of the astronomical altruists but whose sensibility is tragic affirmation of life, but I'm sure such views are held privately by a few people.

In any case, you would think that utilitarians concerned about "astronomical waste" would also be concerned about the possibility of "astronomical tragedy". And perhaps they could perform their utilitarian calculation, which normally turns up the result that the good outweighs the bad and therefore we should go for it. But this whole aspect seems very underplayed in discussions e.g. of existential risk. There might be a good future, or a bad future, or no future; but who ever talks of a good future riddled with bad, or a bad future with islands of good?

There seems to be a mindset according to which actions here and now (I mean 21st century Earth) set the tone for everything that follows. We need to make the effort to produce a good future, but once it is achieved and set in motion, then we can relax, and we or our descendants will just reap the rewards. Perhaps the FAI camp has some justification for thinking like this, since they envision the rise of a hyperintelligence of overwhelming power, with the capacity to make its preferences law within its expanding sphere of influence...

But otherwise, this idea that this is the Now that matters most reflects a sort of optimism of the will, an optimism about one's place in the scheme of things and one's capacity to make a difference in a big way. Some advocates of space colonization say that it's about not having all our eggs in one basket; so there might be some justification there for thinking this is a special moment - this is indeed the time when it has first become possible for humans to live beyond Earth. If you're worried about whole-Earth vulnerabilities, then this is our first chance to simply place people beyond their reach. Take that, earthbound existential risks!

From this perspective, what I don't see discussed is (1) the fact that hazard persists even beyond Earth, and (2) the fact that saving the human race from destruction also means perpetuating its suffering, evil, and folly. Of course it's very difficult to get your mind around the full spectrum of possibilities, when they include wars between ideologies that don't even exist yet, or the catastrophic decline and fall of vast projects not yet imagined. But I think that much of the ethical anxiety about the imperative to keep the possibility of a big future alive has not assimilated the lessons of the earthbound present; that it's based either in a desire to protect the happiness of oneself and one's friends from a threatening world, or in an affirmation of will and power which hasn't accepted the lesson of life and history: that things do fall apart or get torn apart, that life also includes frustration, desolation, and boredom.

I do not know whether any form of cosmic hope is warranted, but I especially doubt that cosmic hope pursued in a state of blindness or denial will nonetheless be fulfilled.

I had a conversation about argument mapping software with Katja Grace and Paul Christiano at the weekend, and this comment reinforces my conclusion that really good argument mapping software would be a very high value thing to have. I want to map out the tree of arguments underlying Beckstead's thesis, so that I can ask you to identify a particular node you disagree with, and set out a counterargument that he hasn't already presented. It would be a lot easier to tell whether there is value in what you say that way.

However, in the absence of that, a paragraph saying "On page X he asserts Y, but Z" would help a lot.

Does this thesis say something beyond, "If life is good, and if we have a chance to create lots of life, then we should go for it?"

Personally, it's not actually about saving the future. It's about justifying the past.
(Not necessarily endorsing this as an ethical thesis. Reporting it as an attempted stack trace on my actual emotional reasons for supporting the good-future project.)

There might be a good future, or a bad future, or no future; but who ever talks of a good future riddled with bad, or a bad future with islands of good?

Given an FAI singleton or uFAI singleton, islands are improbable. A Malthusian future full of ems, however, seems like a possible fit to your model. So expectations about how intelligence and power will coalesce or diversify are crucial.

But otherwise, this idea that this is the Now that matters most reflects a sort of optimism of the will

I think it just reflects a straight-line prediction. Every previous "Now" (as said in the past) was crucial; why wouldn't this one be? I'm assuming that history is pretty chaotic. Small disturbances in the past would lead to vast ones in the present.
