XiXiDu comments on Tallinn-Evans $125,000 Singularity Challenge - Less Wrong

27 points · Post author: Kaj_Sotala 26 December 2010 11:21AM




Comment author: XiXiDu 29 December 2010 08:54:16PM 0 points [-]

I didn't mean to suggest that aliens are a more likely risk than AI. I was trying to show that unknown unknowns cannot be invoked to the extent you suggest. You can't just say that ruling out many possibilities for how an AI could be dangerous doesn't make it less dangerous, because it might come up with something we haven't thought of. That line of reasoning would allow you to undermine any evidence to the contrary.

I'll be back tomorrow.

Comment author: Kaj_Sotala 30 December 2010 09:07:30PM 1 point [-]

You can't just say that ruling out many possibilities for how an AI could be dangerous doesn't make it less dangerous, because it might come up with something we haven't thought of. That line of reasoning would allow you to undermine any evidence to the contrary.

Not quite.

Suppose that someone brought up a number of ways in which an AI could be dangerous, and somebody else refuted them all by pointing out that superior intelligence would offer no particular advantage in any of them. (In other words, humans could do those things too, and an AI doing them wouldn't be any more dangerous.) Now, if I couldn't come up with any examples where superior intelligence would help, that would be evidence against the claim that a superior intelligence helps overall.

But all of the examples we have been discussing (nanotech warfare, biological warfare, cyberwarfare) are technological arms races, and in a technological arms race, superior intelligence brings quite a decisive edge. In the discussion about cyberwarfare, you asked what makes the threat from an AI hacker different from the threat of human hackers. The answer is that hacking is a task that primarily requires qualities such as intelligence and patience, both of which an AI could possess in far greater measure than humans do. Certainly human hackers could do a lot of harm as well, but a single AI could be as dangerous as all of the 90th-percentile human hackers put together.