
Thomas comments on Q&A with experts on risks from AI #1 - Less Wrong Discussion

29 Post author: XiXiDu 08 January 2012 11:46AM


Comment author: asr 09 January 2012 03:57:54AM 4 points

A general intelligence that only cares about answering the question given to it does just that, as effectively as it can with the resources available to it. Unless it is completely isolated from all external sources of information, it will proceed directly to creating more of itself as soon as it has been given a difficult question. The very best you could hope for, if the question answerer is completely isolated, is an AI Box. If Pat is the gatekeeper, then R. I. P. humanity.

This need not be the case. Whenever we talk about software "wanting" something, we are of course speaking metaphorically. It might be straightforward to build a super-duper Watson or Wolfram Alpha that responds to natural-language queries "intelligently", without the slightest propensity to self-modify or radically alter the world. You might even imagine such a system having a background thread that pre-computes answers to interesting questions and shares them with humans once per day, without any ability to self-modify or any significant probability of radically altering human society.

Comment author: Thomas 09 January 2012 10:08:36AM *  -1 points

Whoever possesses the answering machine is either friendly or unfriendly. The whole system - Oracle + Owner (User) - is then either a rogue or a quite friendly SAI.

The whole problem shifts a little, but doesn't change very much for the rest of humanity.