
Viliam_Bur comments on Intelligence risk and distance to endgame - Less Wrong Discussion

-3 Post author: Kyre 13 April 2012 09:16AM



Comment author: Viliam_Bur 13 April 2012 10:22:08AM *  9 points

1) A lone human paperclip cultist resolves to convert the universe (but doesn't use AI).

Almost zero.

2) One quarter of the world has converted to paperclip cultism and war ensues. No-one has AI.

I will use 80% here as a metaphor for "I don't know".

3) A lone paperclip cultist sets the goal of a seed AI and uploads it to a botnet.

Almost 100%, assuming that the seed AI is correct.

4) As in 2), but the cultists have a superintelligent AI to advise them.

Almost 100%.

I think a better chess analogy would be this: there is a chess-like game played on a 10000x10000 board, with rules so complex that no human can remember them all (a game mechanism warns you if you try to break them), and you must make each move within a short time limit. When you play this game against other humans, both sides share the same disadvantage, so this is not a problem.

Now you are going to play it against Deep Blue, and you feel pretty confident, because you start with 80000 pieces and Deep Blue starts with only 5. Five turns later, you have 79999 pieces and Deep Blue has 13, because it used some piece-duplicating moves, one of which you did not even know existed. However, you are still confident that your initial advantage will prevail in the end.
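A back-of-the-envelope sketch of why that confidence is misplaced (my own extrapolation, not part of the comment): if the observed growth from 5 to 13 pieces over five turns is treated as a constant multiplier per five-turn block, the head start evaporates quickly. The model below is deliberately crude; it even assumes, generously, that the human's piece count stays frozen.

```python
# Hypothetical extrapolation of the comment's numbers: Deep Blue's
# pieces grow from 5 to 13 over five turns, i.e. a x2.6 multiplier
# per five-turn block. The human is frozen at 79999 pieces (an
# optimistic simplification -- in the story they are slowly losing).
human = 79_999          # human's pieces after turn 5
ai = 13.0               # Deep Blue's pieces after turn 5
multiplier = 13 / 5     # assumed growth per five-turn block

turns = 5
while ai < human:
    ai *= multiplier
    turns += 5

print(f"Deep Blue overtakes the human's piece count around turn {turns}")
# -> around turn 55
```

Under these assumptions the 16000-to-1 material advantage lasts only about fifty more turns, which is the point of the analogy: exponential growth you don't fully understand beats a large static head start.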