I have written a paper about “multiverse-wide cooperation via correlated decision-making” and would like to find a few more people who’d be interested in giving a last round of comments before publication. The basic idea of the paper is described in a talk you can find here. The paper elaborates on many of the ideas and contains a lot of additional material. While the talk assumes a lot of prior knowledge, the paper is meant to be a bit more accessible. So, don’t be disheartened if you find the talk hard to follow: one goal of getting feedback is to find out which parts of the paper could be made easier to understand.

If you’re interested, please comment or send me a PM. If you do, I will send you a link to a Google Doc with the paper once I'm done with editing, i.e. in about one week. (I’m afraid you’ll need a Google Account to read and comment.) I plan to start typesetting the paper in LaTeX in about a month, so you’ll have three weeks to comment. Since the paper is long, it’s totally fine if you don’t read the whole thing, or if you just browse around a bit.

Interesting, I like the idea. FWIW, I wouldn't spend too much time predicting the values of superrational ETs and the characteristics of their civilizations. It seems too difficult to do accurately at any significant level of detail, and it requires inference from a sample size of one (humans and evolution on Earth). I recommend punting to the FAI for this part.

Or alternatively, one could indeed spend time on it, but be careful to remain aware of one's uncertainty.

And if pilot wave theory is correct and these other universes exist only in your head...?

Then it doesn't work, unless you believe in some other theory that postulates the existence of a sufficiently large universe or multiverse; Everett is only one option.

So, I'm trying to wrap my head around this concept. Let me sketch an example:

Far-future humans have a project where they create millions of organisms that they think could plausibly exist in other universes. They prioritize organisms that might have evolved given whatever astronomical conditions are thought to exist in other universes, and organisms that could plausibly base their behavior on moral philosophy and game theory. They also create intelligent machines, software agents, or anything else that could be common in the multiverse. They make custom habitats for each of these species and instantiate a small population in each one. The humans do this via synthetic biology, actual evolution from scratch (if affordable), or simulation. Each habitat is optimized to be an excellent environment to live in from the perspective of the species or agent inside it. This whole project costs a small fraction of the available resources of the human economy. The game-theoretic motive is the hope that, for each hypothetical species the humans treat well, there exists an inaccessible universe in which that species is actually living, is able to surmise that the humans have done this, and will by luck create a small utopia of humans when it runs its counterpart project.

Is this an example of the type of cooperation discussed here?

Yep, this is roughly the type of cooperation I have in mind. Some minor points:

Overall, I am not sure whether gains from trade would arise in this specific scenario. Perhaps it’s no better for the civilizations than if each civilization only built habitats for itself?
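To make this worry concrete, here is a rough sketch with made-up numbers (purely illustrative, nothing from the paper): gains from trade only show up if a habitat built for us by another civilization is somehow worth more than what the same resources would have produced at home.

```python
# Rough sketch with made-up numbers (purely illustrative, not from the paper).
# Assume building a habitat costs the same no matter whom it is for, but a
# habitat built by aliens who can only guess at our values is worth less to
# us than one our own civilization builds.

COST = 1.0           # resources to build one habitat (assumed)
VALUE_OWN = 10.0     # value of a habitat our own civilization builds for us (assumed)
VALUE_FOREIGN = 6.0  # value of a habitat another civilization builds for us (assumed)

# Option A: each civilization spends 2 * COST only on habitats for itself.
value_self_only = 2 * VALUE_OWN - 2 * COST               # 18.0

# Option B: each civilization spends COST on itself and COST on another
# species, and (via correlated decisions) receives one foreign-built habitat.
value_with_trade = VALUE_OWN + VALUE_FOREIGN - 2 * COST  # 14.0

# With these numbers there are no gains from trade; Option B only wins if the
# foreign-built habitat is worth more than a home-built one (e.g. because the
# other civilization has resources or abilities we lack).
assert value_with_trade < value_self_only
```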

> The game-theoretic motive is the hope that, for each hypothetical species the humans treat well, there exists an inaccessible universe in which that species is actually living, is able to surmise that the humans have done this, and will by luck create a small utopia of humans when it runs its counterpart project.

As I argue in section “No reciprocity needed: whom to treat beneficially”, the benefit doesn’t necessarily come from the species that we benefit. Even if agent X is certain that agent Y cannot benefit X, agent X may still help Y to make it more likely that X receives help from other agents who are in a structurally similar situation w.r.t. Y and think about it in a way similar to X.
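A toy illustration with made-up payoffs (not something from the paper): suppose X can only help Y, Y can only help Z, and Z can only help X, so direct reciprocity is impossible, but all three use the same decision procedure.

```python
# Toy illustration with made-up payoffs (not from the paper): X can only help
# Y, Y can only help Z, and Z can only help X, so no agent can repay the one
# who helped it.  Because all three run the same decision procedure, their
# choices are correlated: either they all help or they all refrain.

HELP_COST = 1     # cost of helping the agent you are able to benefit (assumed)
HELP_BENEFIT = 3  # benefit of being helped by the agent able to benefit you (assumed)

def payoff_per_agent(shared_decision):
    """A single shared decision determines everyone's action; each agent pays
    the cost of helping one agent and receives the benefit of being helped by
    another."""
    if shared_decision == "help":
        return HELP_BENEFIT - HELP_COST  # 3 - 1 = 2
    return 0                             # nobody helps, nobody is helped

# Helping is better for every agent, even though no agent is ever helped by
# the agent it helped:
assert payoff_per_agent("help") > payoff_per_agent("refrain")
```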

Also, the other civilizations don’t need to be able to check whether we helped them, just as in the prisoner’s dilemma against a copy we don’t need to be able to check whether the copy actually cooperated. It’s enough to know, prior to making one’s own decision, that the copy reasons similarly about these types of problems.
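Here is a minimal sketch of that copy case, with standard illustrative payoff numbers: when the copy’s move is perfectly correlated with mine, I can compare cooperating and defecting without ever observing what the copy actually does.

```python
# Minimal sketch of the prisoner's dilemma against a copy, using standard
# illustrative payoff numbers (not from the paper).  Payoffs to the row player:
PAYOFF = {
    ("C", "C"): 3,  # both cooperate
    ("C", "D"): 0,  # I cooperate, the copy defects
    ("D", "C"): 5,  # I defect, the copy cooperates
    ("D", "D"): 1,  # both defect
}

# If the copy's move were independent of mine, defecting would dominate:
for copys_move in ("C", "D"):
    assert PAYOFF[("D", copys_move)] > PAYOFF[("C", copys_move)]

# But the copy runs the same decision procedure, so its move is perfectly
# correlated with mine.  I never check what it does; I only need to know,
# before deciding, that it reasons the same way about this problem.
def my_payoff(my_move):
    copys_move = my_move  # perfect correlation; no verification needed
    return PAYOFF[(my_move, copys_move)]

assert my_payoff("C") > my_payoff("D")  # 3 > 1: cooperating comes out ahead
```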
