Peter_de_Blanc comments on Advice for AI makers - Less Wrong

7 Post author: Stuart_Armstrong 14 January 2010 11:32AM


Comment author: Peter_de_Blanc 14 January 2010 02:07:46PM *  1 point

Try to build an AI that:

  1. Implements a timeless decision theory.
  2. Is able to value things that it does not directly perceive, and in particular cares about other universes.
  3. Has a utility function such that additional resources have diminishing marginal returns.

Such an AI is more likely to participate in trades across universes, possibly with a friendly AI that requests our survival.

[EDIT]: It now occurs to me that an AI that participates in inter-universal trade would also participate in inter-universal terrorism, so I'm no longer confident that my suggestions above are good ones.

Comment author: Blueberry 14 January 2010 05:54:30PM 0 points

Can you please elaborate on "trades across universes"? Do you mean something like quantum civilization suicide, as in Nick Bostrom's paper on that topic?

Comment author: Wei_Dai 14 January 2010 07:05:46PM 1 point

Here's Nesov's elaboration of his trading across possible worlds idea.

Personally, I think it's an interesting idea, but I'm skeptical that it can really work, except perhaps in very limited circumstances, such as when the trading partners are nearly identical.

Comment author: Blueberry 15 January 2010 05:33:43PM 0 points

Cool, thanks!