Nick_Tarleton comments on Einstein's Superpowers - Less Wrong

Post author: Eliezer_Yudkowsky 30 May 2008 06:40AM

Comment author: Nick_Tarleton 31 May 2008 07:57:00PM 0 points

"But why would you build an AI in a box if you planned to never let it out?"

To have it work for you, e.g., to solve subproblems of Friendly AI. But this would require letting some information out, which should be presumed unsafe.

Roland: the presumption of unFriendliness is much stronger for an AI than for a human, and the strength of evidence for Friendliness that can reasonably be hoped for is much greater.