Mitchell_Porter comments on A Rationalist's Tale - Less Wrong

82 Post author: lukeprog 28 September 2011 01:17AM


Comment author: lessdazed 10 September 2011 09:19:09AM 2 points [-]

Is there non-dualist theism? If not, that's the bottleneck making dismissal of theism justified, though ignorance does not excuse inaccurate descriptions of theism.

Comment author: Mitchell_Porter 10 September 2011 09:51:19AM 12 points [-]

My problem with Will's outlook is that if we are indeed being "watched over by a superintelligence", it doesn't appear to care about us in any very helpful way. Our relationship to it is therefore more about survival than it is about morality. According to the scenario, there is some thing out there which is all-powerful, whose actions depend partly on our actions, and which doesn't care about {long list of evolutionary and historical holocausts}, in any way that we would recognize as caring. Clearly, if we had any idea of the relationship between our actions and its actions, it would be in your interest, first of all, to act so that it would not allow various awful things to happen to you and anyone you care about, and second, to act so that you might gain some advantage from its powers.

It appears that the only distinctive reason Will has for entertaining such a scenario is the usual malarkey about timeless game-theoretic equilibria... A while back, I was contemplating a post, to be called "Towards a critique of acausal reason", which was going to discuss three fallacies of timeless decision theory: acausal democracy, acausal trade, and acausal blackmail.

The last two arise from a fallacy of selective attention: to believe them possible, you must attend only to those possible worlds containing intelligences which care about you in a highly specific way. But for any possible world where there is an intelligence simulating your response which will do X if you do Y, there is another possible world where there is an intelligence which will do X if you don't do Y. And the actual multiplicity of worlds in which intelligences make decisions on the basis of decisions made by agents in other possible worlds that they are simulating is vanishingly small, in the set of all possible worlds. Why the hell would you base your decision, regarding what to do in your own reality, on the opinions or actions of a possible entity in another world? You may as well just flip a coin.

The whole idea that intelligences in causally disjoint worlds are in a position to trade, bargain, or arrive at game-theoretic equilibria is deeply flawed; it's only a highly eccentric agent which "cares" strongly about events which are influenced by only an extremely small fraction of its subjective duplicates (its other selves in the space of possible worlds). So some of these "eccentric agents" may genuinely "do deals", but there is no reason to think that they are anything more than a vanishingly small minority among the total population of the multiverse. (Obviously it would be desirable for people trying to work rigorously in TDT to make this argument in a rigorous form, but I don't see anything that's going to change the basic conclusion.)
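The symmetry argument can be put as a toy expected-value calculation - a sketch only, with invented weights, not anything from the TDT literature. If for every world whose simulator rewards doing Y there is an equally weighted mirror world whose simulator rewards not doing Y, the acausal incentives cancel:

```python
# Toy model: expected payoff of doing Y vs. not-Y across possible worlds.
# Assumption (invented for illustration): every world whose simulator
# rewards Y has an equally weighted mirror world that rewards not-Y.

worlds = [
    # (probability weight, payoff if you do Y, payoff if you don't)
    (0.001, +10, -10),   # a simulator that rewards Y and punishes not-Y
    (0.001, -10, +10),   # the mirror simulator: rewards not-Y, punishes Y
    (0.998,   0,   0),   # the vast majority of worlds: nobody simulating you
]

ev_do_y = sum(p * do for p, do, dont in worlds)
ev_dont = sum(p * dont for p, do, dont in worlds)

print(ev_do_y, ev_dont)  # both 0.0: the acausal incentives cancel out
```

Under these made-up weights, Y and not-Y come out identical in expectation - the "flip a coin" conclusion. Selective attention amounts to dropping the second row.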

So that leaves us in the more familiar situation, of possibly being in a simulation, or possibly facing the rise of a superintelligence in the near future, or possibly being somewhere in the guts of a cosmic superintelligence which either just tolerates our existence because we haven't crossed thresholds-of-caring yet, or which has a purpose for us which extends to tolerating the holocausts I mentioned earlier. All of this suggests that our survival and well-being are on the line, but it doesn't suggest that we are embedded in an order that is moral in any conventional sense.

Comment author: lessdazed 10 September 2011 10:57:22AM *  3 points [-]

acausal democracy

What does that even mean? Does that mean something like: hypothetical lunar farmers in a hypothetical lunar utopia should send down some ore to Earth, even though actual people hundreds of years earlier in a representative body voted 456-450 not to fund a lunar expedition (even with a rider to the bill requiring future farmers to send down ore), because the farmers' votes from the future plus the 450 outnumber the 456? So the farmers "promised" to send ore?

acausal blackmail

It seems more like a real self-inflicted wound than a fallacy or fake blackmail to me; perhaps we don't disagree. It's something that is real if one has certain patterns of mind that one could self-modify away from, I think.

Comment author: Mitchell_Porter 10 September 2011 11:16:16AM 2 points [-]

By "acausal democracy", I mean the attempt to justify the practice of democracy - specifically, the act of voting - with timeless decision theory. No one before you has attempted to depict a genuinely acausal democracy :-) This doesn't involve the "fallacy of selective attention"; it's another sort of error, or combination of errors, in which TDT reasoning is supposed to apply to agents with only a bare similarity to yourself. See the discussion here for a related example.

I also think we agree regarding acausal blackmail: for a human being, it can only be a mistake. Only one of those "eccentric agents" with a very peculiar utility function or decision architecture could rationally be susceptible to acausal blackmail - its decision procedure would have to insist that "selective attention" (to just those possible worlds where the specific blackmail threat is being made) is important, rather than attending to other worlds where contrary threats are being made, or to worlds where the action under consideration will be rewarded rather than punished, or to worlds where the agent is simply a free agent not being threatened or enticed by a captor who cares about acausal dealmaking (and those worlds should be in the vast majority).

Comment author: Will_Newsome 10 September 2011 11:19:31AM 1 point [-]

Right, humans can't even do straightforward causal reasoning, let alone weird superrational reasoning.

Comment author: Wei_Dai 24 April 2012 10:39:35PM 2 points [-]

I brought up a similar objection to acausal trade, and found Nesov's reply somewhat convincing. What do you think?

Comment author: Mitchell_Porter 24 April 2012 11:37:44PM 2 points [-]

We are now advanced enough to tackle this issue formally, by trying to construct an equilibrium in a combinatorially exhaustive population of acausal trading programs. Is there an acausal version of the "no-trade theorem"?
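As a minimal first step toward that formal exercise, one could enumerate a small but exhaustive population of toy "trading programs" and check which pairs even admit a consistent outcome under mutual simulation. Everything below - the policies, the labels, the notion of a deal - is invented for illustration; real TDT agents are far more complicated:

```python
from itertools import product

# Toy sketch: each agent's policy maps its prediction of the partner's
# action to its own action. A pair's consistent outcomes are the fixed
# points of mutual simulation: a = f(b) and b = g(a).

C, D = "cooperate", "defect"

policies = {
    "always_C": lambda other: C,
    "always_D": lambda other: D,
    "mirror":   lambda other: other,                # do what the partner does
    "invert":   lambda other: D if other == C else C,
}

def fixed_points(f, g):
    """All outcomes (a, b) consistent with a = f(b) and b = g(a)."""
    return [(a, b) for a, b in product((C, D), repeat=2)
            if f(b) == a and g(a) == b]

trades = no_equilibrium = 0
for (_, f), (_, g) in product(policies.items(), repeat=2):
    fps = fixed_points(f, g)
    if not fps:
        no_equilibrium += 1   # e.g. mirror vs. invert: no consistent outcome
    elif (C, C) in fps:
        trades += 1           # a mutual "deal" is at least self-consistent

print(trades, no_equilibrium, len(policies) ** 2)  # 4 2 16
```

Even in this tiny population, only 4 of 16 pairings admit a mutual deal as a fixed point, and 2 pairings have no consistent outcome at all - a crude gesture toward the question of how rare genuine acausal traders are, and toward what an acausal no-trade theorem would have to quantify.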