From Arbital's Mild Optimization page:

Mild optimization relates directly to one of the three core reasons why aligning at-least-partially superhuman AGI is hard - making very powerful optimization pressures flow through the system puts a lot of stress on its potential weaknesses and flaws.
I'm interested in this taxonomy of core reasons. Unfortunately, this page doesn't specify the other two. What are they?

Also, this page is part of the AI alignment domain -- was it written by Eliezer? (Surprisingly, "10 changes by 3 authors" links to the edit page rather than showing author information or edit history.)