buybuydandavis comments on Eliezer's YU lecture on FAI and MOR [link] - Less Wrong

Post author: Dr_Manhattan 07 March 2013 04:09PM 2 points


Comment author: buybuydandavis 07 March 2013 07:05:35PM 1 point

Argues for the folk theorem that, in general, rational agents will preserve their utility functions during self-optimization.

The Gandhi example works because he was posited with one goal. With multiple competing goals, I'd expect some goals to lose, and, having lost, to be more likely to lose the next time.
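A minimal sketch of the folk-theorem argument referenced above, not from the lecture or this thread: an expected-utility maximizer judges candidate self-modifications with its current utility function, so replacing that function generally scores worse than keeping it. All names and values here are illustrative assumptions.

```python
def current_utility(outcome):
    """The agent's present utility function: it values paperclips."""
    return outcome["paperclips"]

def rival_utility(outcome):
    """A candidate replacement utility function: it values staples."""
    return outcome["staples"]

# Outcomes the agent could bring about by acting under each utility function.
OUTCOMES = [
    {"paperclips": 10, "staples": 0},   # what a paperclip-maximizer achieves
    {"paperclips": 0, "staples": 10},   # what a staple-maximizer achieves
]

def outcome_if_governed_by(utility):
    """After self-modification, the agent acts to maximize `utility`."""
    return max(OUTCOMES, key=utility)

# Each self-modification option is evaluated by the CURRENT utility function.
options = {
    "keep current utility function": outcome_if_governed_by(current_utility),
    "adopt rival utility function": outcome_if_governed_by(rival_utility),
}

best = max(options, key=lambda name: current_utility(options[name]))
print(best)  # -> "keep current utility function"
```

With multiple weighted goals instead of a single one, the same evaluation step can shift weight away from goals that keep losing, which is the worry raised in the comment above.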

Comment author: Eliezer_Yudkowsky 07 March 2013 11:21:42PM 1 point

"Utility functions." Omohundro argues that agents which don't have utility functions will have to acquire them. I'm not totally sure I believe this is a universal law but I suspect that something like it is true in a lot of cases, for reasons like those above.

Comment author: shminux 07 March 2013 08:52:25PM -2 points

The Gandhi example works because he was posited with one goal.

And unchanged circumstances. What would Gandhi do when faced with a trolley problem?

Comment author: RichardHughes 07 March 2013 10:10:00PM -1 points

Same thing as 'multiple competing goals', where those goals are 'do not be part of a causal chain that leads to the death of others' and 'reduce the death of others'.