chron comments on Heroin model: AI "manipulates" "unmanipulatable" reward - Less Wrong
Imagine a drug with no effect except that it cures its own (very bad) withdrawal symptoms. There's no benefit to taking it once, but once you've been exposed, it's beneficial to keep taking more because not taking it makes you feel very bad.
And in that case, U(++, -) doesn't imply that forcing people onto the drug increases utility.
It implies that only in combination with the false premise that people's actions accurately reflect the utility function we want to maximize.
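The point can be made concrete with a toy utility model (a sketch of my own, not from the original discussion; the specific numbers are illustrative):

```python
# Toy model of a drug whose only effect is curing its own withdrawal.
# The utility values are made up for illustration.

def utility(exposed: bool, takes_drug: bool) -> int:
    """Per-period utility for a hypothetical withdrawal-only drug."""
    if not exposed:
        return 0    # never exposed: baseline; the drug is irrelevant
    if takes_drug:
        return 0    # exposed and taking: withdrawal is suppressed
    return -10      # exposed and abstaining: severe withdrawal

# Once exposed, taking the drug is the utility-maximizing action...
assert utility(exposed=True, takes_drug=True) > utility(exposed=True, takes_drug=False)

# ...yet exposure never beats staying unexposed. Observing that exposed
# people keep taking the drug tells you nothing about whether forcing
# exposure raises utility.
assert utility(exposed=True, takes_drug=True) <= utility(exposed=False, takes_drug=False)
```

Revealed preference after exposure (taking beats abstaining) is consistent with exposure itself being neutral or harmful, which is exactly why the inference to "forcing people on the drug increases utility" fails.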