WrongBot comments on The Importance of Self-Doubt - Less Wrong

Post author: multifoliaterose 19 August 2010 10:47PM


Comments (726)


Comment author: WrongBot 20 August 2010 09:02:47PM 3 points [-]

AGI researchers who are not concerned with Friendliness are trying to destroy human civilization. They may not believe that they are doing so, but this does not change the fact of the matter. If FAI is important, only people who are working on FAI can be expected to produce positive outcomes with any significant probability.

Comment author: Morendil 20 August 2010 09:22:48PM 7 points [-]

AGI researchers who are not concerned with Friendliness are trying to destroy human civilization. They may not believe that they are doing so, but this does not change the fact of the matter.

"Trying to" normally implies intent.

I'll grant that someone working on AGI (or even narrower AI) who has become aware of the Friendliness problem, but doesn't believe it is an actual threat, could be viewed as irresponsible - unless they have reasoned grounds to doubt that their creation would be dangerous.

Even so, "trying to destroy the world" strikes me as hyperbole. People don't typically say that the Manhattan Project scientists were "trying to destroy the world," even though some of them thought there was an outside chance it would do just that.

On the other hand, the Teller report on atmosphere ignition should be kept in mind by anyone tempted to think "nah, those AI scientists wouldn't go ahead with their plans if they thought there was even the slimmest chance of killing everyone".

Comment author: timtyler 21 August 2010 08:32:12AM *  0 points [-]

I think machine intelligence is a problem which is capable of being subdivided.

Some people can work on one part of the problem, while others work on other bits. Not all parts of the problem have much to do with values - e.g., see this quote:

In many respects, prediction is a central core problem for those interested in synthesising intelligence. If we could predict the future, it would help us to solve many of our problems. Also, the problem has nothing to do with values. It is an abstract math problem that can be relatively simply stated. The problem is closely related to the one of building a good quality universal compression algorithm.
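The link the quote draws between prediction and compression can be made concrete with a toy sketch: a compressor assigns shorter codes to more probable sequences, so one can "predict" the next symbol by asking which continuation compresses best. This is only an illustration of the idea, using Python's standard zlib compressor; the function name and the alphabet parameter are my own choices, not anything from the quoted text.

```python
import zlib

def predict_next(seq: str, alphabet: str = "01") -> str:
    """Predict the next symbol by picking the continuation the
    compressor encodes most cheaply: a shorter compressed length
    corresponds, roughly, to a higher estimated probability."""
    def cost(s: str) -> int:
        # Compressed size in bytes serves as a crude code-length estimate.
        return len(zlib.compress(s.encode()))
    return min(alphabet, key=lambda c: cost(seq + c))

# A long run of "a" should make "a" the cheapest continuation,
# since "b" would force the compressor to encode a new literal.
print(predict_next("a" * 200, alphabet="ab"))
```

A real universal predictor would weigh all continuations by their code lengths rather than taking a hard minimum, but the sketch captures the abstract, value-free character of the problem the quote describes.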