Manfred comments on Heroin model: AI "manipulates" "unmanipulatable" reward - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
It implies that only in combination with the false premise that people's actions accurately reflect the utility function we want to maximize.