
TheAncientGeek comments on Superintelligence 16: Tool AIs - Less Wrong Discussion

7 Post author: KatjaGrace 30 December 2014 02:00AM



Comment author: TheAncientGeek 01 January 2015 02:00:48PM

MIRI's argument, which I agree with for once, is that a safe goal can have dangerous subgoals.

The tool AI proponents' argument, as I understand it, is that a system that defaults to doing nothing is safer.

I think MIRI types are persistently mishearing that, because they have an entirely different set of presuppositions: that safety is all-or-nothing, not a series of mitigations; that safety is not a matter of engineering but of mathematical proof; and not that you cannot prove anything beyond the point where the uncertainty about the system exceeds the uncertainty within the system.