V_V comments on Safety engineering, target selection, and alignment theory - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (11)
No need to imagine it. Rockets have been around since at least the 10th century.
Pretty much the same work that was needed in order to transport humans to the Moon at all.
Note how humans didn't manage to fly rockets to the Moon, or even to use them as really effective weapons, until they figured out calculus, celestial mechanics, and a ton of other stuff.
By your analogy, one of the main criticisms of doing MIRI-style AGI safety research now is that it's like 10th-century Chinese philosophers doing Saturn V safety research based on what they knew about fire arrows.
This is a fairly common criticism, yeah. The point of the post is that MIRI-style AI alignment research is less like that and more like Chinese mathematicians researching calculus and gravity — still difficult, but much easier than attempting to do safety engineering on the Saturn V far in advance :-)
Don't kid yourself in an effort to seem humble: it's an entirely feasible research effort.