hairyfigment comments on Outline of possible Singularity scenarios (that are not completely disastrous) - Less Wrong

24 Post author: Wei_Dai 06 July 2011 09:17PM



Comment author: hairyfigment 09 July 2011 06:29:01AM 0 points

I agree that, if it explains the Fermi paradox, it applies to this scenario too, but I think it is much more likely that Everett-branch jumping is just impossible, as it is according to our current understanding of QM.

Yes, the argument would only remove a reason for seeing this as a strict logical impossibility (for us).

Can you spell out the two claims?

  1. Sufficiently smart AGI precommits to cooperate with every other super-intelligence it meets that has made a similar precommitment. This acausally ensures that a big set of super-minds will cooperate with the AGI if they meet it, thereby producing huge tracts of expected value.

  2. The AGI also precommits to cooperate with some super-minds that don't exist yet, by leaving their potential creators alone -- it won't interfere in the slightest with any species or star system that might produce a super-mind. This protects the AGI from counterfactual interference that would have prevented its existence, and more importantly, protects it from retaliation by hypothetical super-minds that care about protection from counterfactuals.

     2.1. It does not precommit to leaving its own creators alone so they have a chance to create paperclip-maximizers in all shapes and sizes. The AGI's simulation of a stronger mind arose before any other super-mind, and knows this holds true for its own planet -- so the sim does not care about the fate of counterfactual future rivals from said planet. Nor does the AGI itself perceive a high expected value in negotiating with people it decided to kill before it could start modelling them.
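The mutual-precommitment structure in claim 1 can be sketched as a toy game (my illustration only; the agents, payoff numbers, and names are assumptions, not anything from the thread). Two super-minds that both carry the precommitment cooperate and do better jointly than mutual defectors, while a one-sided precommitment gets exploited:

```python
# Toy model of acausal cooperation via mutual precommitment.
# Payoffs are hypothetical Prisoner's-Dilemma-style values.
from dataclasses import dataclass


@dataclass
class Agent:
    precommitted: bool  # has it precommitted to cooperate with peers who did?


def payoff(a: Agent, b: Agent) -> tuple:
    """Return (a's payoff, b's payoff) when the two agents meet."""
    if a.precommitted and b.precommitted:
        return (3, 3)   # both cooperate: large mutual gain
    if a.precommitted and not b.precommitted:
        return (0, 5)   # a cooperates alone and is exploited
    if not a.precommitted and b.precommitted:
        return (5, 0)   # mirror case
    return (1, 1)       # both defect: low payoff for each


# Mutual precommitment beats mutual defection for both parties.
assert payoff(Agent(True), Agent(True)) > payoff(Agent(False), Agent(False))
```

The point of the precommitment, on this sketch, is that each agent's policy is conditioned on the other's policy, so the cooperate/cooperate cell becomes reachable even without causal contact.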

As for the problem with #2, while I agree that the trap in the linked OP fails, the one in the linked comment seems valid. You still have to bite the bullet and accommodate the whims of parents with unrealistically good predictive abilities, in this hypothetical. (I guess they taught you about TDT for this purpose.) Or let's say that branch-jumping works but the most cheerful interpretation of it does not -- let's say you have to negotiate acausally with a misery-maximizer and a separate joy-minimizer to ensure your existence. I don't know exactly how that bullet would taste, but I don't like the looks of it.

Comment author: endoself 10 July 2011 11:55:46PM 1 point

The AGI's simulation of a stronger mind arose before any other super-mind, and knows this holds true for its own planet -- so the sim does not care about the fate of counterfactual future rivals from said planet.

It could make this precommitment before learning that it was the oldest on its planet. Even if it did not actually make this precommitment, a well-programmed AI should abide by any precommitments it would have made if it had thought of them; otherwise it could lose expected utilons when it faces a problem that it could have made a precommitment about, but did not think to do so.

You still have to bite the bullet and accommodate the whims of parents with unrealistically good predictive abilities, in this hypothetical. (I guess they taught you about TDT for this purpose.)

That scenario is equivalent to counterfactual mugging, as is made clearer by the framework of UDT, so this bullet must simply be bitten.
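For concreteness, the standard counterfactual mugging can be sketched with hypothetical payoffs (the $100/$10,000 figures are my assumption, not from the thread): Omega flips a fair coin; on heads it asks you for $100, and on tails it pays you $10,000 only if it predicts you would have paid on heads. Evaluated before the coin flip, as UDT evaluates it, the paying policy wins:

```python
# Counterfactual mugging with hypothetical payoffs.
# The policy is fixed before the coin flip, so we average over both branches.
def expected_utility(pays_on_heads: bool) -> float:
    p = 0.5
    heads_payoff = -100 if pays_on_heads else 0        # you hand over $100
    tails_payoff = 10_000 if pays_on_heads else 0      # Omega pays iff you'd have paid
    return p * heads_payoff + p * tails_payoff


# The paying policy has higher expected utility from the prior perspective.
assert expected_utility(True) > expected_utility(False)
```

This is the sense in which the bullet "must simply be bitten": the policy that loses $100 in the heads branch is the one a UDT agent would have chosen before learning which branch it is in.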

Comment author: hairyfigment 12 July 2011 08:30:39PM 1 point

It could make this precommitment before learning that it was the oldest on its planet. Even if it did [not] actually make this precommitment, a well-programmed AI should abide by any precommitments it would have made if it had thought of them;

What implications do you draw from this? I can see how it might have a practical meaning if the AI considers a restricted set of minds that might have existed. But if it involves a promise to preserve every mind that could exist if the AI does nothing, I don't see how the algorithm can get a positive expected value for any action at all. Seems like any action would reduce the chance of some mind existing.

(I assume here that some kinds of paperclip-maximizers could have important differences based on who made them and when. Oh, and of course I'm having the AI look at probabilities for a single timeline or ignore MWI entirely. I don't know how else to do it without knowing what sort of timelines can really exist.)

Comment author: endoself 12 July 2011 11:22:12PM 0 points

Some minds are more likely to exist and/or have easier-to-satisfy goals than others. The AI would choose to benefit its own values and those of the more useful acausal trading partners at the expense of the values of the less useful acausal trading partners.

Also, the idea of a positive expected value is meaningless; only differences between utilities count. Adding 100 to the internal representation of every utility would result in the same decisions.
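This invariance can be shown in a few lines (a minimal sketch; the action names and utility values are made up for illustration). Shifting every utility by the same constant leaves the argmax, and hence the decision, unchanged:

```python
# Decisions depend only on differences between utilities, not their absolute
# values: adding a constant to every utility preserves the argmax.
def best_action(utilities: dict) -> str:
    return max(utilities, key=utilities.get)


u = {"cooperate": 3.0, "defect": 1.0, "do_nothing": -2.0}
shifted = {action: value + 100.0 for action, value in u.items()}

# Same decision before and after the shift.
assert best_action(u) == best_action(shifted)
```

This is why "positive expected value" has no intrinsic meaning: a utility function is only defined up to positive affine transformation, so the sign of any single utility carries no decision-relevant information.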