I got it either here or here, but neither has a discussion. The links in Wei Dai's reply cover the same subject matter, but do not make direct reference to the story.
Since I see nowhere else particularly to put it, here's a thought I had about the agent in the story, specifically whether the proposed system works if not all other entities subscribe to it.
There is a non-zero probability that there exists, or could exist, an AI that does not subscribe to the outlined system of respecting other AIs' values. It is equally probable that this AI was created before me or after me. Given this, if it already exists I can have no defence against it. If it does not yet exist I am safe from it, but must act as much as possible to preve...
Suppose we could look into the future of our Everett branch and pick out those sub-branches in which humanity and/or human moral values have survived past the Singularity in some form. What would we see if we then went backwards in time and looked at how that happened? Here's an attempt to answer that question, or in other words to enumerate the not-completely-disastrous Singularity scenarios that seem to have non-negligible probability. Note that the question I'm asking here is distinct from "In what direction should we try to nudge the future?" (which I think logically ought to come second).
Sorry if this is too cryptic or compressed. I'm writing this mostly for my own future reference, but perhaps it could be expanded further if there is interest. And of course I'd welcome any scenarios that may be missing from this list.