Reflective decision theory is a term occasionally used for a decision theory that would allow an agent to take actions without later regretting them. This regret is conceptualized, under Causal Decision Theory, as a reflective inconsistency: a divergence between the agent who took the action and the same agent reflecting upon it afterwards.
This problem is the classic example of what Eliezer Yudkowsky calls the regret of rationality, and it is best illustrated by Newcomb's problem. Consider an alien superintelligence that comes to you and proposes a simple game:
It sets two boxes in front of you: Box A and Box B.
Box A is transparent and contains $1,000. Box B is opaque and contains either $1,000,000 or nothing.
You can choose to take both boxes or to take only Box B.
The catch is that this superintelligence is a Predictor whose predictions have so far always been correct, and it will put the $1,000,000 in Box B if and only if it predicts that you will take only Box B.
By the time you decide, the alien has already made its prediction and left the scene, and you are faced with the choice. Both boxes, or only Box B?
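The payoff structure can be made concrete with a short calculation. Below is a minimal Python sketch (not part of the original description) that computes the expected payoff of each fixed strategy; the accuracy parameter p is an illustrative generalization of the perfect Predictor described above.

```python
# Expected payoffs in Newcomb's problem against a Predictor of accuracy p.
# The scenario above describes a near-perfect Predictor (p close to 1.0);
# the accuracy parameter is an illustrative assumption, not part of the setup.

BOX_A = 1_000        # transparent box, always contains $1,000
BOX_B = 1_000_000    # opaque box, filled iff the Predictor expects one-boxing

def expected_payoff(one_box: bool, p: float) -> float:
    """Expected dollars for a fixed strategy against a Predictor of accuracy p."""
    if one_box:
        # With probability p the Predictor foresaw one-boxing and filled Box B.
        return p * BOX_B
    # With probability (1 - p) the Predictor wrongly filled Box B anyway.
    return BOX_A + (1 - p) * BOX_B

for p in (1.0, 0.9, 0.5):
    print(f"p = {p}: one-box = {expected_payoff(True, p):>11,.0f}, "
          f"two-box = {expected_payoff(False, p):>11,.0f}")
```

Under these assumptions, one-boxing has the higher expected payoff whenever p exceeds roughly 0.5005, which is the sense in which the Predictor rewards the supposedly irrational choice.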
The dominant view in the literature regards choosing both boxes as the more rational decision, even though the alien rewards the supposedly irrational agents: one-boxers walk away with $1,000,000 while two-boxers get only $1,000. When considering thought experiments such as this, it has been suggested that a sufficiently powerful AGI could solve the problem because it can access its own source code and self-modify. This would allow it to alter its own behavior and decision process, beating the paradox by precommitting to a certain choice in such situations.
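One way to picture such a precommitment is a toy model in which the Predictor forms its prediction by inspecting the agent's own decision procedure. The sketch below is purely illustrative: the Agent class, its policy attribute, and the self_modify method are hypothetical names, not any proposed AGI architecture.

```python
# Toy illustration of precommitment via self-modification: the Predictor
# decides by reading the agent's own decision procedure, so an agent that
# rewrites itself to one-box faces a full Box B.

class Agent:
    def __init__(self, policy: str):
        self.policy = policy          # 'one-box' or 'two-box'; stands in for source code

    def self_modify(self, new_policy: str) -> None:
        # Rewriting one's own decision procedure, before the Predictor looks.
        self.policy = new_policy

    def choose(self) -> str:
        return self.policy

def predictor_fills_box_b(agent: Agent) -> bool:
    # The Predictor inspects the agent's (post-modification) decision procedure.
    return agent.choose() == 'one-box'

def play(agent: Agent) -> int:
    box_b = 1_000_000 if predictor_fills_box_b(agent) else 0
    return box_b if agent.choose() == 'one-box' else 1_000 + box_b

agent = Agent('two-box')
print(play(agent))                    # 1000: the two-boxer finds Box B empty
agent.self_modify('one-box')          # precommitment, made before the prediction
print(play(agent))                    # 1000000
```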
In order to understand an AGI's behavior in this and other such situations, and to be able to implement it, we will need a reflectively consistent decision theory. In particular, reflective consistency would be needed to ensure that the AGI preserves a friendly value system throughout its self-modifications.
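A minimal sketch of what such a value-preservation check might look like, under the strong simplifying assumption that policies can be evaluated over a handful of foreseeable situations: a proposed self-modification is accepted only if the agent's current value system endorses the successor's choices. All names and the toy world here are illustrative assumptions, not any actual proposal.

```python
# Toy value-preservation check for self-modification: a proposed successor
# policy is accepted only if the *current* value system endorses it in every
# foreseeable situation. Entirely illustrative.

from typing import Callable

Policy = Callable[[str], str]

def current_values(action: str) -> float:
    """The agent's present (friendly) utility over actions in a toy world."""
    return {'cooperate': 1.0, 'defect': -10.0}.get(action, 0.0)

def endorsed(successor: Policy, situations: list[str]) -> bool:
    """Reflective check: does the current agent approve of the successor's
    choice in every situation it can foresee?"""
    incumbent: Policy = lambda s: 'cooperate'   # the agent's present policy
    return all(current_values(successor(s)) >= current_values(incumbent(s))
               for s in situations)

situations = ['trade', 'conflict']
friendly_rewrite: Policy = lambda s: 'cooperate'
unfriendly_rewrite: Policy = lambda s: 'defect'

print(endorsed(friendly_rewrite, situations))    # True: modification accepted
print(endorsed(unfriendly_rewrite, situations))  # False: modification rejected
```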
Eliezer Yudkowsky has proposed a theoretical solution to the reflective decision theory problem in his Timeless Decision Theory.