There are at least three objections to the risk of an unfriendly AI. One is that uFAI will be stupid: it is not possible to build a machine much smarter than humanity. Another is that AI would be powerful but uFAI is unlikely: the chance of someone building something that turns out malign, whether deliberately or accidentally, is small. A third, which I haven't seen articulated, is that the AI could be malign and potentially powerful, but effectively impotent due to its situation.
To use a chess analogy: I'm virtually certain that Deep Blue would beat me at a game of chess. I'm also pretty sure that a better chess program with vastly more computing power would beat Deep Blue. But I'm also (almost) certain that I would beat them both at a rook-and-king versus king endgame.
I think a better chess analogy would be this: there is a chess-like game played on a 10000x10000 board, with rules so complex that no human can remember them all (a game mechanism warns you if you try to break them), and you must make each move within a short time limit. When you play this game against other humans, both sides share the same disadvantage, so this is not a problem.
Now you are going to play it against Deep Blue, and you feel pretty confident, because you start with 80000 pieces and Deep Blue with only 5. Five turns later, you have 79999 pieces and Deep Blue has 13, because it used some piece-duplicating moves, one of which you did not even know existed. Still, you remain confident that your initial advantage will prevail in the end.
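To make the analogy quantitative, here is a minimal toy model (not from the original discussion: the per-turn growth rate is inferred from the stated 5-to-13 pieces in five turns, and the attrition rate from the single piece you lost over the same span) of how long an 80000-piece head start survives against compounding growth:

```python
# Toy model of the 10000x10000 chess analogy: a huge static advantage
# versus a small force that grows multiplicatively each turn.
# Only the 80000/5 starting counts and the "5 -> 13 in five turns,
# you lose 1 piece" figures come from the analogy; the constant
# per-turn rates extrapolated from them are assumptions.

human = 80000.0                 # your starting pieces
ai = 5.0                        # Deep Blue's starting pieces
growth = (13 / 5) ** (1 / 5)    # ~1.21x per turn, from 5 -> 13 in five turns
attrition = 1 / 5               # you lost 1 piece in five turns, ~0.2 per turn

turn = 0
while ai < human:
    turn += 1
    ai *= growth
    human -= attrition

print(f"AI overtakes on turn {turn}: {ai:.0f} vs {human:.0f} pieces")
```

Under these toy assumptions the crossover comes around turn 51: a 16000x head start buys only a few dozen moves against a multiplicative process.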
If we try to separate out the axes of intelligence and starting position, where does your intuition tell you the danger area is? To illustrate, what is the probability that humanity is screwed in each of the following?
1) A lone human paperclip cultist resolves to convert the universe (but doesn't use AI).
Almost zero.
2) One quarter of the world has converted to paperclip cultism and war ensues. No one has AI.
I will use 80% here as a metaphor for "I don't know".
3) A lone paperclip cultist sets the goal of a seed AI and uploads it to a botnet.
Almost 100%, assuming that the seed AI works as intended.
4) As in 2), but the cultists have a superintelligent AI to advise them.
Almost 100%.