
[Link] A survey of polls on Newcomb’s problem

Caspar42 20 September 2017 04:50PM
Comment author: Manfred 20 August 2017 02:25:53PM 1 point [-]

Why do we care about acausal trading with aliens to promote their acting with "moral reflection, moral pluralism," etc.?

Comment author: Caspar42 31 August 2017 04:15:18PM 0 points [-]

Thanks for the comment!

W.r.t. moral reflection: Probably many agents put little intrinsic value on whether society engages in a lot of moral reflection. However, I would guess that, taken as a whole, the set of agents with a decision mechanism similar to mine cares about this significantly and positively. (Empirically, disvaluing moral reflection seems to be rare.) Hence, if the basic argument of the paper goes through, I should give some weight to it.

W.r.t. moral pluralism: Probably even fewer agents care about this intrinsically. I certainly don’t care about it intrinsically. The idea is that moral pluralism may avoid conflict or create gains from “trade”. For example, let’s say the aggregated values of agents with my decision algorithm contain two values A and B. (As I argue in the paper, I should maximize these aggregated values to maximize my own values throughout the multiverse.) Now, I might be in some particular environment with agents who themselves care about A and/or B. Let’s say I can choose between two distributions of caring about A and B: either each of the agents cares about both A and B, or some care only about A and the others only about B. The former will tend to be better if I (or rather the set of agents with my decision algorithm) care about A and B, because it avoids conflicts, makes it easier to exploit comparative advantages, etc.
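
To make the comparative-advantage point concrete, here is a toy numeric sketch (all production numbers are made up and not taken from the paper): two locations each have one unit of resources, location 1 converts a unit into value A three times as efficiently as into value B, and location 2 the other way around.

```python
# Toy illustration with invented numbers: which value a local agent produces
# depends on which values it cares about.
production = {
    "loc1": {"A": 3, "B": 1},  # one resource unit -> 3 of A, or 1 of B
    "loc2": {"A": 1, "B": 3},  # one resource unit -> 1 of A, or 3 of B
}

def produce(location, weights):
    """The local agent spends its resource on whichever value scores highest
    by its own weights."""
    best = max(production[location], key=lambda v: weights[v] * production[location][v])
    return {v: (production[location][v] if v == best else 0) for v in "AB"}

def total(outcomes):
    return {v: sum(o[v] for o in outcomes) for v in "AB"}

# Distribution 1: every agent cares about A and B equally (pluralism).
pluralist = total([produce(loc, {"A": 1, "B": 1}) for loc in production])

# Distribution 2: caring is split between the values; which agent ends up with
# which value is arbitrary, so look at both assignments.
split_lucky = total([produce("loc1", {"A": 1, "B": 0}), produce("loc2", {"A": 0, "B": 1})])
split_unlucky = total([produce("loc1", {"A": 0, "B": 1}), produce("loc2", {"A": 1, "B": 0})])

print(pluralist)      # {'A': 3, 'B': 3}
print(split_lucky)    # {'A': 3, 'B': 3}
print(split_unlucky)  # {'A': 1, 'B': 1}
```

From the perspective of the aggregated values, the pluralist distribution always yields 3 of A and 3 of B, while the split distribution does so only if each value happens to land on the agent with the matching comparative advantage; averaged over assignments, pluralism comes out ahead. This is only meant to illustrate the intuition, not to model the paper's actual setup.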

Note that I think neither promoting moral reflection nor promoting moral pluralism is a strong candidate for a top intervention. Multiverse-wide superrationality just increases their value relative to what, say, a utilitarian would think about these interventions. I think it’s a lot more important to ensure that AI uses the right decision theory. (Of course, this is important anyway, but I think multiverse-wide superrationality drastically increases its value.)

Comment author: Caspar42 26 August 2017 08:12:30AM 2 points [-]

I recently published a different proposal for implementing acausal trade as humans: https://foundational-research.org/multiverse-wide-cooperation-via-correlated-decision-making/ Basically, if you care about other parts of the universe/multiverse and these parts contain agents that are decision-theoretically similar to you, you can cooperate with them via superrationality. For example, let's say I give most moral weight to utilitarian considerations and care less about, e.g., justice. Probably other parts of the universe contain agents that reason about decision theory in the same way that I do. Because of orthogonality ( https://wiki.lesswrong.com/wiki/Orthogonality_thesis ), many of these will have other goals, though most of them will probably have goals that arise from evolution. Then if I expect (based on the empirical study of humans or thinking about evolution) that many other agents care a lot about justice, this gives me a reason to give more weight to justice as this makes it more likely (via superrationality / EDT / TDT / ... ) that other agents also give more weight to my values.
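
As a minimal sketch of that last step, with made-up numbers (the payoffs and conditional probabilities below are pure assumptions, not taken from the paper): suppose I can either optimize only for my own values or compromise by also promoting justice, and that a distant, justice-valuing agent with a decision procedure similar to mine faces the mirror-image choice.

```python
# Toy model with invented numbers. "Value" below always means value by my lights.
MY_PURE = 10.0        # what I produce if I optimize only for my own values
MY_COMPROMISE = 8.0   # what I produce if I also promote justice as a compromise

THEIR_PURE = 0.0        # what the distant agent produces for my values if it
                        # optimizes only for its own values (justice)
THEIR_COMPROMISE = 5.0  # what it produces for my values if it compromises too

def expected_value(i_compromise: bool, p_they_compromise: float) -> float:
    """My expected value given my choice and my credence that the distant,
    similarly-reasoning agent also compromises."""
    mine = MY_COMPROMISE if i_compromise else MY_PURE
    theirs = p_they_compromise * THEIR_COMPROMISE + (1 - p_they_compromise) * THEIR_PURE
    return mine + theirs

# The superrational/EDT step: conditional on my compromising, I should expect
# similar reasoners to compromise more often than conditional on my not doing so.
print(expected_value(True, 0.8))   # 12.0
print(expected_value(False, 0.2))  # 11.0
```

Under these assumptions compromising wins precisely because the shift in conditional credence times the distant gain (0.6 * 5 = 3) exceeds the local sacrifice (2). A causal decision theorist would ignore the shift, since my choice does not cause the distant one; the argument only goes through for EDT/TDT-style theories.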

Comment author: scarcegreengrass 22 June 2017 09:02:30PM 0 points [-]

So, I'm trying to wrap my head around this concept. Let me sketch an example:

Far-future humans have a project where they create millions of organisms that they think could plausibly exist in other universes. They prioritize organisms that might have evolved given whatever astronomical conditions are thought to exist in other universes, and organisms that could plausibly base their behavior on moral philosophy and game theory. They also create intelligent machines, software agents, or anything else that could be common in the multiverse. They make custom habitats for each of these species and instantiate a small population in each one. The humans do this via synthetic biology, actual evolution from scratch (if affordable), or simulation. Each habitat is optimized to be an excellent environment to live in from the perspective of the species or agent inside it. This whole project costs a small fraction of the available resources of the human economy. The game theoretic motive is that, by doing something good for a hypothetical species, there might exist an inaccessible universe in which that species is both living and able to surmise that the humans have done this, and that they will by luck create a small utopia of humans when they do their counterpart project.

Is this an example of the type of cooperation discussed here?

Comment author: Caspar42 23 June 2017 04:42:12PM 1 point [-]

Yep, this is roughly the type of cooperation I have in mind. Some minor points:

Overall, I am not sure whether gains from trade would arise in this specific scenario. Perhaps it’s no better for the civilizations than if each civilization only built habitats for itself?

The game theoretic motive is that, by doing something good for a hypothetical species, there might exist an inaccessible universe in which that species is both living and able to surmise that the humans have done this, and that they will by luck create a small utopia of humans when they do their counterpart project.

As I argue in section “No reciprocity needed: whom to treat beneficially”, the benefit doesn’t necessarily come from the species that we benefit. Even if agent X is certain that agent Y cannot benefit X, agent X may still help Y to make it more likely that X receives help from other agents who are in a structurally similar situation w.r.t. Y and think about it in a way similar to X.

Also, the other civilizations don’t need to be able to check whether we helped them, just as, in the prisoner’s dilemma against a copy, we don’t have to be able to check whether the copy actually cooperated. It’s enough to know, prior to making one’s own decision, that the copy reasons similarly about these types of problems.
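
Here is a minimal sketch of the copy case with standard (assumed) prisoner's-dilemma payoffs; note that nothing in it requires observing the copy's move, only a credence that its choice is correlated with mine.

```python
# Payoffs for "me" (assumed standard values):
# both cooperate: 3; I defect, copy cooperates: 5; both defect: 1; I cooperate, copy defects: 0.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def expected_payoff(my_move: str, p_copy_matches: float) -> float:
    """Expected payoff if the copy, running the same decision procedure, plays
    the same move as me with probability p_copy_matches (never observed)."""
    same = PAYOFF[(my_move, my_move)]
    other = PAYOFF[(my_move, "D" if my_move == "C" else "C")]
    return p_copy_matches * same + (1 - p_copy_matches) * other

for p in (0.5, 0.75, 0.95):
    print(p, expected_payoff("C", p), expected_payoff("D", p))
# With no correlation (p = 0.5) defecting is better in expectation; once the
# correlation is strong enough (here p > 5/7), cooperating wins, even though
# I never get to check what the copy actually did.
```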


Comment author: Caspar42 29 May 2017 01:30:03PM 1 point [-]

Closely related is the law of total expectation: https://en.wikipedia.org/wiki/Law_of_total_expectation

It states that E[E[X|Y]]=E[X].
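
For the discrete case, the standard one-line derivation:

```latex
\begin{aligned}
\mathbb{E}\big[\mathbb{E}[X \mid Y]\big]
  &= \sum_{y} \mathbb{E}[X \mid Y = y]\, P(Y = y)
   = \sum_{y} \sum_{x} x\, P(X = x \mid Y = y)\, P(Y = y) \\
  &= \sum_{x} x \sum_{y} P(X = x, Y = y)
   = \sum_{x} x\, P(X = x)
   = \mathbb{E}[X].
\end{aligned}
```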

Comment author: username2 29 May 2017 12:50:09PM 2 points [-]

And if pilot wave theory is correct and these other universes exist only in your head...?

Comment author: Caspar42 29 May 2017 01:14:30PM 2 points [-]

Then it doesn't work, unless you believe in some other theory that postulates the existence of a sufficiently large universe or multiverse; Everett is only one option.

Invitation to comment on a draft on multiverse-wide cooperation via alternatives to causal decision theory (FDT/UDT/EDT/...)

5 Caspar42 29 May 2017 08:34AM

I have written a paper about “multiverse-wide cooperation via correlated decision-making” and would like to find a few more people who’d be interested in giving a last round of comments before publication. The basic idea of the paper is described in a talk you can find here. The paper elaborates on many of the ideas and contains a lot of additional material. While the talk assumes a lot of prior knowledge, the paper is meant to be a bit more accessible. So, don’t be disheartened if you find the talk hard to follow; one goal of getting feedback is to find out which parts of the paper could be made easier to understand.

If you’re interested, please comment or send me a PM. If you do, I will send you a link to a Google Doc with the paper once I'm done with editing, i.e. in about one week. (I’m afraid you’ll need a Google Account to read and comment.) I plan to start typesetting the paper in LaTeX in about a month, so you’ll have three weeks to comment. Since the paper is long, it’s totally fine if you don’t read the whole thing or just browse around a bit.

Comment author: Caspar42 18 May 2017 08:35:53PM 1 point [-]

Another piece of evidence is this minor error in section 9.2 of Peterson's An Introduction to Decision Theory:

According to causal decision theory, the probability that you have the gene given that you read Section 9.2 is equal to the probability that you have the gene given that you stop at Section 9.1. (That is, the probability is independent of your decision to read this section.) Hence, it would be a mistake to think that your chances of leading a normal life would have been any higher had you stopped reading at Section 9.1.
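
For reference, one standard way to write the two expected utilities at issue (using Pearl's do-notation for the causal probability; this is a textbook-style formulation, not a quote from Peterson):

```latex
V_{\mathrm{EDT}}(a) = \sum_{o} P(o \mid a)\, U(o),
\qquad
V_{\mathrm{CDT}}(a) = \sum_{o} P\big(o \mid \mathrm{do}(a)\big)\, U(o).
```

Causal decision theory only says that the causal probabilities of having the gene are the same whichever section you stop at; whether the ordinary conditional probabilities are also equal is a separate, empirical question.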

Comment author: entirelyuseless 16 May 2017 02:08:42PM 2 points [-]

I agree that this is part of what confuses the discussion. This is why I have pointed out in previous discussions that in order to be really considering Newcomb / Smoking Lesion, you have to be honestly more convinced the million is in the box after choosing to take one, than you would have been if you had chosen both. Likewise, you have to be honestly more convinced that you have the lesion, after you choose to smoke, than you would have been if you did not. In practice people would tend not to change their minds about that, and therefore they should smoke and take both boxes.

Some relevant discussion here.
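
A minimal sketch of that criterion for the Newcomb case, with assumed stakes ($1,000,000 possibly in the opaque box, $1,000 in the transparent one): what matters is whether your credence that the million is there genuinely shifts depending on which act you condition on.

```python
MILLION, THOUSAND = 1_000_000, 1_000

def ev_one_box(p_million_if_one_box: float) -> float:
    return p_million_if_one_box * MILLION

def ev_two_box(p_million_if_two_box: float) -> float:
    return p_million_if_two_box * MILLION + THOUSAND

# If you would honestly hold the same credence either way, two-boxing wins
# for any shared credence p:
p = 0.5
print(ev_one_box(p), ev_two_box(p))      # 500000.0  501000.0

# If conditioning on one-boxing genuinely raises your credence (e.g. 0.9 vs 0.1),
# one-boxing wins:
print(ev_one_box(0.9), ev_two_box(0.1))  # 900000.0  101000.0
```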

Comment author: Caspar42 18 May 2017 01:32:32PM 0 points [-]

Great, thank you!
