In the not-too-distant past, people thought that our universe might be capable of supporting an unlimited amount of computation. Today our best guess at cosmology is that our universe stops being able to support any kind of life or deliberate computation after a finite amount of time, during which only a finite amount of computation can be done (on the order of something like 10^120 operations).

Consider two hypothetical people, Tom, a total utilitarian with a near zero discount rate, and Eve, an egoist with a relatively high discount rate, a few years ago when they thought there was .5 probability the universe could support doing at least 3^^^3 ops and .5 probability the universe could only support 10^120 ops. (These numbers are obviously made up for convenience and illustration.) It would have been mutually beneficial for these two people to make a deal: if it turns out that the universe can only support 10^120 ops, then Tom will give everything he owns to Eve, which happens to be $1 million, but if it turns out the universe can support 3^^^3 ops, then Eve will give $100,000 to Tom. (This may seem like a lopsided deal, but Tom is happy to take it since the potential utility of a universe that can do 3^^^3 ops is so great for him that he really wants any additional resources he can get in order to help increase the probability of a positive Singularity in that universe.)
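To make the arithmetic behind "mutually beneficial" explicit, here is a minimal sketch. The 0.5 probabilities are the ones above, but the utility functions and Tom's big-branch multiplier are invented purely for illustration:

```python
# A toy expected-utility check on the Tom/Eve deal. The 0.5 probabilities come
# from the example above; the utility functions and Tom's value-per-dollar
# multiplier are made-up assumptions, chosen only to show why both sides accept.

p_big = 0.5    # universe supports ~3^^^3 ops
p_small = 0.5  # universe supports only ~10^120 ops

# Eve (egoist, high discount rate): values money in the small, short-lived
# branch roughly at face value, and heavily discounts money in the big branch.
def eve_utility(dollars_small, dollars_big):
    return p_small * dollars_small + p_big * 0.1 * dollars_big

# Tom (total utilitarian, near-zero discount): a dollar in the big branch buys
# a tiny increase in the chance of a positive Singularity over 3^^^3 ops, so
# it is worth vastly more to him than a dollar in the small branch.
BIG_BRANCH_MULTIPLIER = 1e6  # purely illustrative
def tom_utility(dollars_small, dollars_big):
    return p_small * dollars_small + p_big * BIG_BRANCH_MULTIPLIER * dollars_big

# No deal: Tom keeps his $1 million in both branches; Eve gets nothing extra.
tom_no_deal = tom_utility(1_000_000, 1_000_000)
eve_no_deal = eve_utility(0, 0)

# Deal: in the small branch Tom hands Eve his $1 million; in the big branch
# Eve hands Tom $100,000.
tom_deal = tom_utility(0, 1_100_000)
eve_deal = eve_utility(1_000_000, -100_000)

print(tom_deal > tom_no_deal)  # True: the big-branch dollars dominate for Tom
print(eve_deal > eve_no_deal)  # True: Eve gains $1 million where she cares most
```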

You and I are not total utilitarians or egoists, but instead are people with moral uncertainty. Nick Bostrom and Toby Ord proposed the Parliamentary Model for dealing with moral uncertainty, which works as follows:

Suppose that you have a set of mutually exclusive moral theories, and that you assign each of these some probability.  Now imagine that each of these theories gets to send some number of delegates to The Parliament.  The number of delegates each theory gets to send is proportional to the probability of the theory.  Then the delegates bargain with one another for support on various issues; and the Parliament reaches a decision by the delegates voting.  What you should do is act according to the decisions of this imaginary Parliament.
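As a rough sketch of just the seat-allocation and voting step (leaving out the bargaining, which is what the rest of this post is about), something like the following; the theory names, credences, and votes here are hypothetical:

```python
# A minimal sketch of apportioning delegates in proportion to credence and
# deciding a single yes/no issue by majority vote. Bargaining between
# delegates is omitted; names, credences, and votes are hypothetical.

def parliament_decides_yes(credences, votes, total_seats=100):
    """credences: {theory: probability}; votes: {theory: True/False}."""
    seats = {t: round(p * total_seats) for t, p in credences.items()}  # naive rounding
    yes = sum(s for t, s in seats.items() if votes[t])
    no = sum(s for t, s in seats.items() if not votes[t])
    return yes > no

credences = {"total utilitarianism": 0.40, "egoism": 0.35, "deontology": 0.25}
votes = {"total utilitarianism": True, "egoism": False, "deontology": True}
print(parliament_decides_yes(credences, votes))  # True: 65 seats for, 35 against
```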

It occurred to me recently that in such a Parliament, the delegates would make deals similar to the one between Tom and Eve above, where they would trade their votes/support in one kind of universe for votes/support in another kind of universe. If I had a Moral Parliament active back when I thought there was a good chance the universe could support unlimited computation, all the delegates that really care about astronomical waste would have traded away their votes in the kind of universe where we actually seem to live for votes in universes with a lot more potential astronomical waste. So today my Moral Parliament would be effectively controlled by delegates that care little about astronomical waste.

I actually still seem to care about astronomical waste (even if I pretend that I was certain that the universe could only do at most 10^120 operations). (Either my Moral Parliament wasn't active back then, or my delegates weren't smart enough to make the appropriate deals.) Should I nevertheless follow UDT-like reasoning and conclude that I should act as if they had made such deals, and therefore I should stop caring about the relatively small amount of astronomical waste that could occur in our universe? If the answer to this question is "no", what about the future going forward, given that there is still uncertainty about cosmology and the nature of physical computation? Should the delegates to my Moral Parliament be making these kinds of deals from now on?

17 comments

If Moral Parliament can make deals, it could as well decide on a single goal to be followed thereafter, at which point moral uncertainty is resolved (at least formally). For this to be a good idea, the resulting goal has to be sensitive to facts discovered in the future. This should also hold for other deals, so it seems to me that unconditional redistribution of resources is not the kind of deal that a Moral Parliament should make. Some unconditional redistributions of resources are better than others, but even better are conditional deals that say where the resources will go depending on what is discovered in the future. And while resources could be wasted, so that at a future point you won't be able to direct as much in a new direction, seats in the Moral Parliament can't be.

If Moral Parliament can make deals, it could as well decide on a single goal to be followed thereafter, at which point moral uncertainty is resolved (at least formally). For this to be a good idea, the resulting goal has to be sensitive to facts discovered in the future.

The "Eve" delegates want the "Tom" delegates to have less power no matter what, so they will support a deal that gives the "Tom" delegates less expected power in the near term. The "Tom" delegates give greater value to open-ended futures, so they will trade away power in the near term in exchange for more power if the future turns out to be open ended.

So this seems to be a case where both parties support a deal that takes away sensitivity if the future turns out to be short. Both parties support a deal that gives the "Eve" delegates more power in that case.

gjm

Another possible conclusion is that the "moral parliament" model either doesn't match how you actually think, or doesn't match how you "should" think.

Realizing the implication here has definitely made me more skeptical of the moral parliament idea, but if it's an argument against the moral parliament, then it's also a potential argument against other ideas for handling moral uncertainty. The problem is that trading is closely related to Pareto optimality. If you don't allow trading between your moral theories, then you likely end up in situations where each of your moral theories says that option A is better or at least no worse than option B, but you choose option B anyway. But if you do allow trading, then you end up with the kind of conclusion described in my post.
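Here is a minimal sketch of that Pareto point, with two equally weighted theories and two independent issues; the utilities and the "settle each issue by a proportional lottery, no trades" rule are illustrative assumptions:

```python
# Two theories, equal credence, two independent issues. Each theory cares
# strongly about one issue and only mildly about the other. All numbers and
# the proportional-lottery voting rule are illustrative assumptions.

utilities = {
    "theory_1": {"issue_1": {"X": 1, "Y": 0}, "issue_2": {"Z": 0, "W": 10}},
    "theory_2": {"issue_1": {"X": 0, "Y": 10}, "issue_2": {"Z": 1, "W": 0}},
}

def expected_utility(theory, policy):
    """policy maps each issue to a probability distribution over its options."""
    return sum(p * utilities[theory][issue][option]
               for issue, dist in policy.items()
               for option, p in dist.items())

# No trading: with a 50/50 split of delegates, each issue is settled by what
# amounts to a 50/50 lottery over its options.
no_trade = {"issue_1": {"X": 0.5, "Y": 0.5}, "issue_2": {"Z": 0.5, "W": 0.5}}

# Trading: theory_1 cedes issue_1 in exchange for issue_2, so Y and W win outright.
trade = {"issue_1": {"Y": 1.0}, "issue_2": {"W": 1.0}}

for theory in utilities:
    print(theory, expected_utility(theory, no_trade), expected_utility(theory, trade))
# theory_1: 5.5 without the trade, 10.0 with it
# theory_2: 5.5 without the trade, 10.0 with it
# Both theories strictly prefer the traded outcome, so banning trades leaves
# a Pareto improvement on the table.
```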

Another way out of this may be to say that there is no such thing as how one "should" handle moral uncertainty, that the question simply doesn't have an answer, that it would be like asking "how should I make decisions if I can't understand basic decision theory?". It's actually hard to think of a way to define "should" such that the question does have an answer. For example, suppose we define "should" as what an ideal version of you would tell you to do; then presumably they would have already resolved their moral uncertainty, and would just tell you what the correct morality is (or what your actual values are, whichever makes more sense) and to follow that.

it would be like asking "how should I make decisions if I can't understand basic decision theory?"

But that seems to have an answer, specifically along the lines of "follow those heuristics recommended by those who are on your side and do understand decision theory."

Regarding your question, I don't see theoretical reasons why one shouldn't be making deals like that (assuming one can and would stick to them, etc.). I'm not sure which decision theory to apply to them, though.

The Moral Parliament idea generally has a problem regarding time. If it is thought of as making decisions for the next action (or other bounded time period), with a new distribution of votes etc. when the next choice comes up, then there are intertemporal swaps (and thus Pareto improvements according to each theory) that it won't be able to achieve. This is pretty bad, as it at least appears to be getting Pareto-dominated by another method. However, if it is making one decision for all time over all policies for resolving future decisions, then (1) it is even harder to apply in real life than it looked, and (2) it doesn't seem to be able to deal with cases where you learn more about ethics (i.e. update your credence function over moral theories) -- at least not without quite a bit of extra explanation about how that works. I suppose the best answer may well be that the policies over which the representatives are arguing include branches dealing with all ways the credences could change, weighted by their probabilities. This is even more messy.

My guess is that of these two broad options (decide one bounded decision vs decide everything all at once) the latter is better. But either way it is a bit less intuitive than it first appears.

What's the argument against doing the UDT thing here?

I'm not aware of a specific argument against doing the UDT thing here. It's just a combination of the UDT-like conclusion being counterintuitive, UDT being possibly wrong in general, and the fact that we don't really know what the UDT math says if we apply it to humans or human-like agents (and actually we don't even know what the UDT math is, since logical uncertainty isn't solved yet and we need that to plug into UDT).

Wei, insofar as you are making the deal with yourself, consider that in the world in which it turns out that the universe could support doing at least 3^^^3 ops, you may not be physically capable of changing yourself to work more toward longtermist goals than you would otherwise. (I.e., human nature is such that making huge sacrifices to your standard of living and quality of life negatively affects your ability to work productively on longtermist goals for years.) If this is the case, then the deal won't work, since one part of you can't uphold the bargain. So in the world in which it turns out that the universe can support only 10^120 ops, you should not devote less effort to longtermism than you would otherwise, despite being physically capable of devoting less effort.

In a related kind of deal, both parts of you may be capable of upholding the deal, in which case I think such deals may be valid. But it seems to me that you don't need UDT-like reasoning and the deal argument to believe that your future self, with better knowledge of the size of the cosmic endowment, ought to change his behavior in the same way as the deal argument implies. Example: If you're a philanthropist with a plan to spend $X of your wealth on short-termist philanthropy and $X on longtermist philanthropy when you're initially uncertain about the size of the cosmic endowment, because you think this is optimal given your current beliefs and uncertainty, then when you later find out that the universe can support 3^^^3 ops, I think this should cause you to shift how you spend your $2X to give more toward longtermist philanthropy, simply because the longtermist philanthropic opportunities now seem more valuable. Similarly, if you find out that the universe can only support 10^120 ops, then you ought to update to giving more toward short-termist philanthropy.
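To illustrate that point about ordinary updating, here is a toy allocation model; the functional forms and numbers are entirely my own assumptions, not anything from the post:

```python
import math

# Toy model: short-term giving has diminishing returns that don't depend on
# cosmology; long-term giving has diminishing returns scaled by how large the
# reachable future turns out to be. All functional forms are assumptions.

def total_value(long_share, future_scale, budget=2.0):
    short = budget * (1 - long_share)
    long_ = budget * long_share
    return math.log1p(short) + future_scale * math.log1p(long_)

def best_long_share(future_scale):
    grid = [i / 1000 for i in range(1001)]
    return max(grid, key=lambda s: total_value(s, future_scale))

print(best_long_share(1.0))   # 0.5: split the $2X evenly under the modest prior
print(best_long_share(10.0))  # 1.0: learn the endowment is huge, shift (here, entirely) to long-term
print(best_long_share(0.3))   # 0.0: learn it is small, shift toward short-term
```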

So is there really a case for UDT-like reasoning plus hypothetical deals our past selves could have made with themselves suggesting that we ought to behave differently than more common reasoning suggests we ought to behave when we learn new things about the world? I don't see it.

Claim: There are some deals you should make but can't.

Although this is fighting the hypothetical, I think that the universe is almost certainly infinite, because observers such as myself will be much more common in infinite than in finite universes. Plus, as I'm sure you realize, the non-zero probability that the universe can support an infinite number of computations means that the expected number of computations performed in our universe is infinite.

As Bostrom has written, if the universe is infinite then it might be that nothing we do matters so perhaps your argument is correct but with the wrong sign.

[anonymous]

Forget the erroneous probabilistic argument: it doesn't matter if the universe is infinite. What we see of it will always be finite, due to inflation.

I think you mean lightspeed travel?

That doesn't rule out infinite computation, though, since in an infinite universe we have a perpetually increasing amount of resources (as we explore further and further at lightspeed).

[anonymous]

No, I was referring to inflationary space-time. The fact that the universe is still expanding (and accelerating in its expansion) means that 92% of the observable universe can never be reached by us, even if we had the capability to leave Earth now at light speed. The amount of resources accessible to future humanity is shrinking every day as more and more galaxies move outside of our future light cone.

the non-zero probability that the universe can support an infinite number of computations means that the expected number of computations performed in our universe is infinite.

Where do you get the non-zero probability from? If it's from the general idea that nothing has zero probability, this proves too much. On the same principle, every action has non-zero probability of infinite positive utility and of infinite negative utility. This makes expected utility calculations impossible, because Inf - Inf = NaN.
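For what it's worth, ordinary floating point follows the same conventions and makes the problem vivid:

```python
inf = float("inf")
print(0.0001 * inf)  # inf: any non-zero probability of an infinite payoff gives an infinite expectation
print(inf - inf)     # nan: the difference of two infinite expectations is undefined
```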

I consider this a strong argument against the principle, often cited on LW, that "0 and 1 are not probabilities". It makes sense as a slogan for a certain idea, but not as mathematics.

I'm not certain of this, but my guess is that most physicists would assign much greater than, say, .0001 probability to the universe being infinite.