nhamann comments on What I would like the SIAI to publish - Less Wrong

Post author: XiXiDu 01 November 2010 02:07PM 27 points

Comment author: pjeby 02 November 2010 12:29:09AM 6 points

It seems like you're essentially saying "This argument is correct. Anyone who thinks it is wrong is irrational."

No, what I'm saying is that I haven't yet seen anyone provide a counterargument to the argument itself, as opposed to "using arguments as soldiers".

The problem is that it's not enough to argue that a million things could stop a foom from going supercritical. To downgrade AGI as an existential threat, you have to argue that no human being will ever succeed in building a human-level or even near-human-level AGI. (Just as, to downgrade bioweapons as an existential threat, you have to argue that no individual or lab will ever accidentally or deliberately release something especially contagious or virulent.)

Furthermore, even if you suppose that Foom is likely, it's not clear where the threshold for Foom is. Could a sub-human-level AI foom? What about human-level intelligence? Or would it take super-human intelligence? Do we have good evidence for where the Foom-threshold would be?

It's fairly irrelevant to the argument: there are many possible ways to get there. The killer argument, however, is that if humans can build a human-level intelligence, then it is effectively super-human as soon as you can make it run faster than a human. And you can limit the self-improvement to just finding ways to make it run faster: you still end up with something that can and will kick humanity's butt unless it has a reason not to.

Even ems -- human emulations -- have this same problem, and they might actually be worse in some ways, as humans are known for doing worse things to each other than mere killing.

It's possible that there are also sub-human foom points, but that isn't necessary for the overall argument to hold: unFriendly AGI is no less an existential risk than bioweapons are.

Comment author: nhamann 02 November 2010 02:51:47AM 2 points

It's fairly irrelevant to the argument: there are many possible ways to get there.

I don't see how you can say that. It's exceedingly relevant to the question at hand, which is: "Should Ben Goertzel avoid making OpenCog due to concerns of friendliness?" If the Foom-threshold is exceedingly high (several to dozens of times the "level" of human intelligence), then it is overwhelmingly unlikely that OpenCog has a chance to Foom. It'd be something akin to the Wright brothers building a Boeing 777 instead of the Wright Flyer. Total nonsense.

Comment author: pjeby 02 November 2010 06:11:56PM 2 points

It's exceedingly relevant to the question at hand, which is: "Should Ben Goertzel avoid making OpenCog due to concerns of friendliness?"

Ah. Well, that wasn't the question I was discussing. ;-)

(And I would think that the answer to that question would depend heavily on what OpenCog consists of.)