Suppose I buy shares in a company that builds an AI, which then works for the good of the company, which rewards its shareholders. This is ordinary causality: I contributed towards its building and was rewarded later.
What makes it possible to be rewarded as a shareholder is a legal system which enforces your ownership rights: a kind of pre-commitment which is feasible even among humans who cannot show proofs about their "source code." The legal system is a mutual enforcement system which sets up a chain of causality towards your being paid back.
Suppose I contribute in some way other than building it, in the belief that an AI which will later come into being will reward me for having done so. That still doesn't seem acausal to me.
It's interesting to consider what happens when the second agent cannot precommit to repaying you, for example because the agent does not yet exist.
Suppose I believe an AI is likely to be built that will conquer the world and transfer all wealth to its builders.
The question is: Why would it do that? In the future, when this new agent comes into existence, why would it consume resources to repay its builders (assuming that it receives no benefit at that future time)? The "favor" that the builders did is past and gone; repaying them gives the agent no benefit. Since we are talking in this comment subthread about an FAI that is truly friendly to all humanity, it might distribute its efforts equally across all humanity rather than "wasting" resources on differential payback.
The answer to this question has to do with acausal trade. I wrote a LW Wiki article on the topic. It's pretty mind-bending and it took me a while to grasp, but here is a summary. If Agent P (in this case the AI) can model or simulate Agent Q (in this case humans in P's past) well enough to prove statements (probably probabilistic statements) about it, and Q can likewise model P, then P's optimal move is to do what Q wants, and Q's optimal move is to do what P wants. This holds in the limiting case of perfect knowledge and infinite computational power; in real life it clearly depends on a lot of assumptions about P's and Q's ability to model each other, and on the relative utility they can grant each other.
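To make the mutual-modeling step concrete, here is a minimal toy sketch in Python. It is my own illustration, not anything from the wiki article: the names `trade_bot` and `defect_bot`, the depth limit, and the optimistic base case are all assumptions I'm making to keep it runnable. Each agent decides to "trade" iff its depth-limited simulation of the other agent predicts that the other trades:

```python
# Toy model of mutual modeling: each agent decides whether to "trade"
# by running a depth-limited simulation of the other agent.

def trade_bot(other, depth=3):
    """Trade iff a bounded simulation predicts the other agent trades."""
    if depth == 0:
        return True  # optimistic base case: cuts off the infinite regress
    return other(trade_bot, depth - 1)  # simulate the other deciding about us

def defect_bot(other, depth=3):
    """Never trades, no matter what the other agent does."""
    return False

print(trade_bot(trade_bot))   # True: mutual modeling settles on cooperation
print(trade_bot(defect_bot))  # False: no point trading with a sure defector
```

The optimistic base case stands in for the limiting case of perfect mutual knowledge: two such agents settle into mutual trade, while an agent that never trades gets nothing out of `trade_bot`.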
What I don't quite understand is why the following, simpler argument isn't sufficient. It seems to lead to the same results, and it doesn't require acausal trade.
I'm not building just any AI. I want to build an AI that will, by design, reward its builders. As with any other tool I build, I wouldn't build it if I didn't expect it to do certain things and not others.
Similarly, if you cooperate with Roko's Basilisk, you try to build it because it's the kind of AI that punishes those who didn't try to build it. You know it punishes non-builders, because ...
Today's xkcd
I guess there'll be a fair bit of traffic coming from people looking it up?