Viliam_Bur comments on How many people here agree with Holden? [Actually, who agrees with Holden?] - Less Wrong

4 Post author: private_messaging 14 May 2012 11:44AM

Comments (105)

Comment author: Viliam_Bur 15 May 2012 07:56:28AM  2 points

It seems to me that Holden's opinion is something like: "If you can't make the AI reliably friendly, just make it passive, so that it listens to humans instead of transforming the universe according to its own utility function. Making a passive AI is safe, but making an almost-friendly active AI is dangerous. SI is good at explaining why almost-friendly active AI is dangerous, so why don't they take the next logical step?"

But from SI's point of view, this is not a solution. First, it is difficult, maybe even impossible, to make something that is passive and also generally intelligent and capable of recursive self-improvement. It might destroy the universe as a side effect of trying to do what it perceives as our command. Second, the more technology progresses, the easier it becomes to build an active AI. Even if we build a few passive AIs, that does not prevent some other individual or group from building an active AI and using it to destroy the world. Having a blueprint for a passive AI would probably make building an active AI easier.

(Note: I am not sure I am representing Holden's or SI's views correctly, but this is how it makes most sense to me.)