eli_sennesh comments on Safety engineering, target selection, and alignment theory - Less Wrong

Post author: So8res | 31 December 2015 03:43PM


Comment author: So8res | 31 December 2015 04:28:33PM | 5 points

By your analogy, one of the main criticisms of doing MIRI-style AGI safety research now is that it's like 10th-century Chinese philosophers doing Saturn V safety research based on what they knew about fire arrows.

This is a fairly common criticism, yeah. The point of the post is that MIRI-style AI alignment research is less like that and more like Chinese mathematicians researching calculus and gravity: still difficult, but much easier than attempting to do safety engineering on the Saturn V far in advance :-)

Comment author: [deleted] | 01 January 2016 06:41:04PM | 2 points

Don't kid yourself in the effort to seem humble: it's an entirely feasible research effort.