MatthewB comments on What I would like the SIAI to publish - Less Wrong

27 Post author: XiXiDu 01 November 2010 02:07PM



Comment author: Bgoertzel 02 November 2010 01:30:38AM 12 points [-]

I agree that a write-up of SIAI's argument for the Scary Idea, in the manner you describe, would be quite interesting to see.

However, I strongly suspect that when the argument is laid out formally, what we'll find is that

-- given our current knowledge of the pdf's of the premises in the argument, the pdf on the conclusion is very broad, i.e. we can hardly conclude anything with much confidence ...

So, I think that the formalization will lead to the conclusion that

-- "we can NOT confidently say, now, that: Building advanced AGI without a provably Friendly design will almost certainly lead to bad consequences for humanity"

-- "we can also NOT confidently say, now, that: Building advanced AGI without a provably Friendly design will almost certainly NOT lead to bad consequences for humanity"

I.e., I strongly suspect the formalization

-- will NOT support the Scary Idea

-- will also not support complacency about AGI safety and AGI existential risk

I think the conclusion of the formalization exercise, if it's conducted, will basically be to reaffirm common sense, rather than to bolster extreme views like the Scary Idea....
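Goertzel's point about broad pdfs can be illustrated with a small Monte Carlo sketch. This is a hypothetical toy model, not SIAI's actual argument: it assumes the Scary Idea rests on a conjunction of four independent premises, with each premise's probability known only to lie in a wide interval. Propagating that uncertainty shows how broad the resulting pdf on the conclusion is.

```python
import random

random.seed(0)

N = 100_000
conclusions = []
for _ in range(N):
    # Each premise probability is known only to lie somewhere in a
    # wide interval (here, uniform on [0.1, 0.9] -- an assumption).
    premises = [random.uniform(0.1, 0.9) for _ in range(4)]
    # Conclusion probability under a simple conjunction of
    # independent premises.
    p_conclusion = 1.0
    for p in premises:
        p_conclusion *= p
    conclusions.append(p_conclusion)

conclusions.sort()
lo = conclusions[int(0.05 * N)]  # 5th percentile
hi = conclusions[int(0.95 * N)]  # 95th percentile
print(f"90% interval for P(conclusion): [{lo:.4f}, {hi:.4f}]")
```

The resulting 90% interval spans more than an order of magnitude, consistent with the claim that, under this kind of premise uncertainty, the conclusion's probability is too broad to support either confident alarm or confident complacency.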

-- Ben Goertzel

Comment author: MatthewB 02 November 2010 05:34:43AM 2 points [-]

I agree.

I doubt you would remember this, but we talked about this at the Meet and Greet at the Singularity Summit a few months ago (in addition to CBGB's, punk rock, and skaters).

James Hughes also mentioned you at a conference in NY where we discussed this very issue.

One thing you mentioned at the Summit (well, in conversation) was that the Scary Idea tends to cause paranoia among people who might otherwise be contributing more to the development of AI, and as a result tends to slow funding that could be going to AI (you also seemed pretty hostile to brain emulation, for that matter).