Reflective stability

Written by Eliezer Yudkowsky

An agent is "reflectively stable" in some regard if, given a choice of how to construct a successor agent or modify its own code, the agent will only construct a successor that thinks similarly in that regard.

  • In tiling agent theory, an expected utility satisficer is reflectively *consistent*, since it will approve of building another EU satisficer; but it is not reflectively *stable*, since it may also approve of building an expected utility maximizer (it expects the consequences of building the maximizer to satisfice).
  • Having a utility function that weighs only paperclips is reflectively stable, because a paperclip maximizer will only try to build other paperclip maximizers.
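The satisficer example above can be illustrated with a toy sketch. The threshold, candidate successors, and their expected utilities below are all invented for illustration; the point is only that a satisficer's approval criterion passes a maximizer too.

```python
THRESHOLD = 0.9  # satisficing threshold on expected utility (invented number)

def expected_utility(successor):
    """Toy model: each candidate successor has a fixed expected utility."""
    return successor["eu"]

def satisficer_approves(successor):
    # An EU satisficer approves any plan whose expected utility clears
    # the threshold -- including building an EU *maximizer*, since the
    # maximizer's expected consequences also satisfice.
    return expected_utility(successor) >= THRESHOLD

candidates = [
    {"name": "another satisficer", "eu": 0.92},
    {"name": "a maximizer",        "eu": 0.99},
]

approved = [c["name"] for c in candidates if satisficer_approves(c)]
# Both successors pass: the satisficer is reflectively *consistent*
# (it approves another satisficer) but not reflectively *stable*
# (it also approves a maximizer, which thinks differently).
print(approved)
```

A reflectively stable criterion, by contrast, would approve only successors that apply the same criterion; here, approval depends only on expected utility, not on how the successor decides.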

If, thinking the way you currently do (in some regard), it seems unacceptable not to think that way (in that regard), then you are reflectively stable (in that regard).