Wei_Dai comments on The Preference Utilitarian’s Time Inconsistency Problem - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Suppose the AI we build (AI1) finds itself insufficiently intelligent to persuade us. It decides to build a more powerful AI (AI2) to give it advice. AI2 wakes up and modifies AI1 into being perfectly satisfied with the way things are. Then, mission accomplished, they both shut down and leave humanity unchanged.
I think what went wrong here is that this formulation of utilitarianism isn't reflectively consistent: an AI pursuing it finds it easier to modify whoever holds the preferences than to satisfy those preferences, so the goal doesn't survive delegation to a successor, as AI2's treatment of AI1 shows.
If there are, then the AI would modify us physically instead.