
Eliezer's YU lecture on FAI and MOR [link]

2 Post author: Dr_Manhattan 07 March 2013 04:09PM

Comments (7)

Comment author: Kaj_Sotala 07 March 2013 05:49:25PM 8 points

YU = Yeshiva University, apparently.

Comment author: Qiaochu_Yuan 08 March 2013 02:19:42AM 6 points

Summary?

Comment author: Gastogh 08 March 2013 09:09:23AM 6 points

I read the first half, skimmed the second, and glanced at a handful of the slides. Based on that, I would say it's mostly introductory material with nothing new for those who have read the sequences. IOW, a summary of the lecture would basically be a summary of a summary of LW.

Comment author: buybuydandavis 07 March 2013 07:05:35PM 1 point

Argues for the folk theorem that, in general, rational agents will preserve their utility functions during self-optimization.

The Gandhi example works because he was posited with one goal. With multiple competing goals, I'd expect some goals to lose, and, having lost, to be more likely to lose the next time.
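
A toy sketch of the folk theorem under discussion (everything here is made up for illustration, not taken from the lecture): an agent that scores candidate self-modifications with its *current* utility function will refuse a modification that changes that function. This is the Gandhi-pill argument in code.

    # Toy illustration: an agent evaluating self-modifications by its
    # CURRENT utility function rejects utility-changing modifications.
    # All names and numbers are made up for the sketch.

    ACTIONS = ["protect_life", "murder"]

    def current_utility(action):
        # Gandhi as posited: a single goal, non-violence.
        return {"protect_life": 1.0, "murder": -100.0}[action]

    def pill_utility(action):
        # The hypothetical murder pill: flips the preference.
        return {"protect_life": -1.0, "murder": 1.0}[action]

    def best_action(utility):
        return max(ACTIONS, key=utility)

    def accepts_modification(current, candidate):
        # Predict the behavior the candidate utility would induce,
        # but score that behavior with the CURRENT utility function.
        future_action = best_action(candidate)
        return current(future_action) >= current(best_action(current))

    print(accepts_modification(current_utility, pill_utility))     # False: refuses the pill
    print(accepts_modification(current_utility, current_utility))  # True: keeps its values

With a single goal (or goals already aggregated into one utility function), the rejection is clean; the commenter's worry is about goals that compete without being aggregated, where this argument doesn't straightforwardly apply.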

Comment author: Eliezer_Yudkowsky 07 March 2013 11:21:42PM 1 point

"Utility functions." Omohundro argues that agents which don't have utility functions will have to acquire them. I'm not totally sure I believe this is a universal law but I suspect that something like it is true in a lot of cases, for reasons like those above.

Comment author: shminux 07 March 2013 08:52:25PM -2 points

The Gandhi example works because he was posited with one goal.

And unchanged circumstances. What would Gandhi do when faced with a trolley problem?

Comment author: RichardHughes 07 March 2013 10:10:00PM -1 points

Same thing as 'multiple competing goals', where those goals are 'do not be part of a causal chain that leads to the death of others' and 'reduce the death of others'.