hairyfigment comments on Outline of possible Singularity scenarios (that are not completely disastrous) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
What implications do you draw from this? I can see how it might have a practical meaning if the AI considers a restricted set of minds that might have existed. But if it involves a promise to preserve every mind that could exist if the AI does nothing, I don't see how the algorithm can get a positive expected value for any action at all. It seems like any action would reduce the chance of some possible mind existing.
(I assume here that some kinds of paperclip-maximizers could have important differences based on who made them and when. Oh, and of course I'm having the AI look at probabilities for a single timeline or ignore MWI entirely. I don't know how else to do it without knowing what sort of timelines can really exist.)
Some minds are more likely to exist and/or have easier-to-satisfy goals than others. The AI would choose to benefit its own values and those of the more useful acausal trading partners at the expense of the values of the less useful acausal trading partners.
Also, the idea of a positive expected value is meaningless in itself; only differences between utilities count. Since expected utility is linear, adding 100 to the internal representation of every utility would shift every action's expected utility by the same 100, and so would result in the same decisions.
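A minimal sketch of that invariance, with made-up actions, probabilities, and utility numbers chosen purely for illustration:

```python
# Hypothetical actions, each a list of (probability, utility) outcomes.
# The numbers are arbitrary; what matters is the comparison between actions.
actions = {
    "A": [(0.5, 10), (0.5, -4)],
    "B": [(0.9, 2), (0.1, 3)],
}

def best_action(acts, shift=0.0):
    """Pick the action maximizing expected utility, after adding `shift`
    to every utility value."""
    def expected_utility(outcomes):
        return sum(p * (u + shift) for p, u in outcomes)
    return max(acts, key=lambda a: expected_utility(acts[a]))

# Because each action's probabilities sum to 1, adding a constant to every
# utility adds that same constant to every expected utility, so the
# ranking of actions -- and hence the decision -- is unchanged.
print(best_action(actions) == best_action(actions, shift=100))  # True
```

The same argument shows that any positive affine transformation of the utilities (multiply by a positive constant, then add a constant) preserves the decisions, which is why the zero point of a utility scale carries no meaning on its own.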