VAuroch comments on What should a friendly AI do, in this situation? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
By modeling them both now and after the consequences. If, once aware of the consequences, they would regret the decision by a greater margin (adjusted for the probability of the bad outcome) than the margin by which they currently prefer not to act, then they are deciding wrongly only because they are insufficiently moved by abstract evidence, and it is in their actual rational interest to act now, even if they don't realize it.
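The comparison described above can be put in a small sketch. This is only an illustration of the stated rule, not anything from the original post; the function name and the example numbers are assumptions chosen for clarity.

```python
# Sketch of the decision rule in the comment above: act now if the
# probability-weighted margin of anticipated regret exceeds the margin
# by which the person currently prefers inaction.
# All names and numbers are illustrative assumptions.

def should_act_now(regret_margin: float,
                   p_bad_outcome: float,
                   inaction_margin: float) -> bool:
    """True if probability-adjusted anticipated regret outweighs
    the current preference for inaction."""
    expected_regret = regret_margin * p_bad_outcome
    return expected_regret > inaction_margin

# Example: someone mildly prefers inaction now (margin 0.2), but would
# strongly regret it (margin 1.0) if the bad outcome (p = 0.5) occurs.
print(should_act_now(regret_margin=1.0, p_bad_outcome=0.5,
                     inaction_margin=0.2))  # → True
```

On this rule, the person's current reluctance is outweighed by their own (probability-discounted) future regret, which is the sense in which intervening serves their "actual rational interest."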
You're overloading "friendly" pretty hard. I don't think that's a characteristic of most friendly-AI designs, and I don't see any reason beyond idealism to think it is.