Konkvistador comments on Q&A with experts on risks from AI #1 - Less Wrong Discussion

29 Post author: XiXiDu 08 January 2012 11:46AM

Comment author: [deleted] 08 January 2012 01:35:01PM *  9 points [-]

No. There is no reason to suppose that any manufactured system will have any emotional stance towards us of any kind, friendly or unfriendly. In fact, even if the idea of "human-level" made sense, we could have a more-than-human-level super-intelligent machine, and still have it bear no emotional stance towards other entities whatsoever. Nor need it have any lust for power or political ambitions, unless we set out to construct such a thing (which AFAIK, nobody is doing.) Think of an unworldly boffin who just wants to be left alone to think, and does not care a whit for changing the world for better or for worse, and has no intentions or desires, but simply answers questions that are put to it and thinks about things that it is asked to think about. It has no ambition and in any case no means to achieve any far-reaching changes even if it "wanted" to do so. It seems to me that this is what a super-intelligent question-answering system would be like. I see no inherent, even slight, danger arising from the presence of such a device.

I can't help but cringe reading that. But it really depends on what he meant by "inherent".

Comment author: torekp 08 January 2012 05:11:35PM 8 points [-]

It's amazing, and a little scary, that he thinks "just want[ing] to be left alone to think" couldn't lead to any harm. The more resources an agent gathers, the more thinking it can do...

Comment author: satt 08 January 2012 10:50:58PM 10 points [-]

...and the more forcefully it can ensure it is left alone.

Comment author: fortyeridania 09 January 2012 12:39:36AM *  4 points [-]

Yeah, this was an odd thing for him to write. The danger doesn't require any "emotional" qualities at all, just the wrong goal and sufficient power to achieve it.

Comment author: gwern 08 January 2012 04:15:02PM 3 points [-]

Oracle AIs aren't dangerous? Well, it's possible.