Mark_Friedenbach comments on MIRI's technical research agenda - Less Wrong

33 Post author: So8res 23 December 2014 06:45PM




Comment author: [deleted] 27 January 2015 09:56:45PM 0 points

I have read all of the resources you linked to and their references, the sequences, and just about every post on the subject here on LessWrong. Most of what passes for thinking about AI boxing and oracles here is confused and/or fallacious.

A superintelligence with nearly zero power could turn out to be a heck of a lot more powerful than you expect.

It would be helpful if you could point to the specific argument that convinced you of this point. Nearly every argument I've seen along these lines either stacks the deck against the human operator(s) or completely ignores practical and reasonable boxing techniques.

The incentives to tap more perceived utility by unboxing the AI or building other unboxed AIs will be huge.

Again, I'd love to see a citation. Having a real AGI in a box is basically a ticket to unlimited wealth and power. Why would anybody risk losing control over that by unboxing it? Seriously, anyone who owns an AGI would be paranoid about keeping their relative advantage and would spend their time strengthening the box and investing in physical security.